Many blogs write about over-simplifications of the radiative effects in climate. Many of these blog articles review simple explanations of how it is possible for atmospheric radiative effects to increase the surface temperature – e.g. the “blackbody shell” model.

As a result many people are confused and imagine that climate science hasn’t got past “first base” with how radiation interacts with atmospheric gases.

In any field the “over-simplified analysis” is designed to help the beginner to gain conceptual understanding of the field. Not to present the complete field of scientific endeavor.

This article will try to “bridge the gap” between the over-simplified models and the very detailed theory.

Note – it isn’t possible to cover the whole subject in one blog article and a decent treatment of radiative transfer consumes many chapters of a textbook.

There will be some maths. But I will also try to provide a non-mathematical explanation of “the maths” – or “the process”.

If you find maths daunting or incomprehensible that is understandable, but there is a lot that can be learned by trying to grasp some of the basic concepts.

### Monochromatic Radiation

This means we need to treat each wavelength separately. Why? Because absorption and emission are wavelength-dependent processes.

For example, here is part of the absorption spectrum of NO2:

*Figure 1*

So when we consider radiation “zooming” through the atmosphere we have to take it “one wavelength” at a time.

There isn’t really any such thing as monochromatic radiation – or any way to take “one wavelength at a time” – but that doesn’t stop us analyzing the problem.

### A Digression on “Calculus”

How does the world of science and engineering deal with continuous change?

If a force, or a ray of radiation, or a movement of the atmosphere is a continuously changing value, how do we define it? How do we deal with it?

Calculus is the answer. This branch of mathematics allows us to deal with small changes and continuous changes and provide theorems, answers and equations.

For example, if we know something about the variable distance, s, with respect to time, t, then an equation defines the relationship between velocity, v, and these other variables:

v = ds/dt

where the “d”s mean “the rate of change of”, so the formula means – in English:

**Velocity** = the rate of change of **Distance** with respect to **Time**

Generally when you see something like “da/db” it means “*the rate of change of variable a with respect to variable b*“.

It is also common to see Δx and δx – meaning “a small change in x”. This is different from “the continuous change of x”, but the specific rationale behind when we use “dx” and “Δx” isn’t so important for this article.

The other important area of calculus is “summing” results when again there is continuous change. If someone travels at 10 km/hr for 1 hour and then at 20km /hr for 1 hr they will have traveled 10km + 20km = 30km. That’s an easy calculation. But if velocity has continuously changed with time – how do we calculate the total distance traveled?

The answer is the integral:

s = ∫ v.dt, where the integral is between the limits of time = t1 and time = t2

This means, in (harder to understand) English:

**Distance** = the integral of **Velocity** with respect to **Time**, between the limits of time = t1 and time = t2.

The **integral **is like the summation of each of the tiny distances covered in each very small time period (between t1 and t2). The integral is also often referred to as “the area under the curve”.
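For readers who prefer code to symbols, here is a minimal sketch (Python, with an invented velocity profile) of the integral as a summation of thin strips:

```python
def distance_travelled(v, t1, t2, steps=100_000):
    """Approximate the integral of velocity v(t) from t1 to t2 by
    summing v·dt over many small time steps (trapezoid rule)."""
    dt = (t2 - t1) / steps
    total = 0.0
    for i in range(steps):
        t = t1 + i * dt
        # area of one thin strip under the velocity curve
        total += 0.5 * (v(t) + v(t + dt)) * dt
    return total

# Velocity increasing steadily from 10 km/hr to 20 km/hr over 2 hours:
v = lambda t: 10 + 5 * t           # km/hr
s = distance_travelled(v, 0, 2)    # exact answer: 20 + 10 = 30 km
print(s)
```

As the time steps shrink, the sum approaches the exact integral – which is all the integral sign means.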

..end of digression

### Absorption of Radiation

Let’s define a monochromatic beam of radiation, I_{λ}, travelling through the atmosphere:

*Figure 2*

We have some information we can use:

The rate of absorption of the beam of radiation as it travels through the atmosphere is proportional to the amount of absorbers at that wavelength and the ability of that absorber to absorb radiation of that wavelength

This is known as the Beer-Lambert law, and is written like this (note 1):

dI_{λ} = -nσI_{λ} .ds [1]

which means the same thing in mathematical terms, with n=number of absorbing molecules per unit volume (corrected), and σ=capture cross-section (or effectiveness at absorbing at that wavelength), and the subscript λ indicates that this equation is only true for the radiation at this wavelength

The value σ is a material property and so constant for one gas at one wavelength at one temperature and one pressure, but varies with the temperature and pressure of the gas (see comment). The value n will depend on location in the atmosphere. If we solve this equation between two arbitrary points, s1 and s2, we get:

I_{λ}(s_{2}) = I_{λ}(s_{1}). exp [ -∫σn(s).ds ] [2]

where the integral is between the limits s_{1} and s_{2}

*What it means in English:*

*The intensity of radiation at wavelength λ is reduced as it travels through the atmosphere according to the total amount of the absorber along the path. “exp” is e, or 2.718, to the power of the value in the square brackets.*

If the concentration of the gas doesn’t change along the path the equation becomes a simpler version:

I_{λ}(s_{2}) = I_{λ}(s_{1}). exp [ -σn.(s_{2} – s_{1}) ] [3]
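To make equation [2] concrete, here is a sketch with invented numbers (the cross-section, surface density and scale height below are illustrative only, not real gas properties), comparing a gas whose density falls off with height against the uniform-gas result of equation [3]:

```python
import math

def transmitted_fraction(sigma, n_of_s, s1, s2, steps=10_000):
    """I(s2)/I(s1) from equation [2]: exp(-∫ σ·n(s) ds),
    with the path integral evaluated numerically (midpoint rule)."""
    ds = (s2 - s1) / steps
    tau = sum(sigma * n_of_s(s1 + (i + 0.5) * ds) * ds for i in range(steps))
    return math.exp(-tau)

sigma = 1e-29   # capture cross-section in m² (an invented value)
n0 = 2.5e25     # molecules per m³ at the surface (an invented value)
H = 8000.0      # density scale height in metres (roughly Earth-like)

# Density falling exponentially with height, n(z) = n0·exp(-z/H):
t_varying = transmitted_fraction(sigma, lambda z: n0 * math.exp(-z / H), 0.0, 10_000.0)

# If instead the gas were uniform at the surface density, equation [3] applies:
t_uniform = math.exp(-sigma * n0 * 10_000.0)
print(t_varying, t_uniform)   # the thinning gas absorbs less over the same path
```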

### Optical Thickness & Transmittance

These are some important properties to understand.

**Optical thickness**, usually written as τ, is the property inside the exponential in equation [2].

τ = ∫σn(s).ds [4]

where the integral is between the limits s_{1} and s_{2}

**Transmittance**, usually written with a weird T symbol not available in WordPress, but with “t” here, is the amount of radiation “getting through” along the path we are interested in.

t(s_{1},s_{2}) = exp [-τ(s_{1},s_{2})] [5]

also written as:

t(s_{1},s_{2}) = e^{-τ(s1,s2)}

The optical thickness is “dimensionless” as is the transmittance.

So we can rewrite equation [2] as:

I_{λ}(s_{2}) = I_{λ}(s_{1}).t(s_{1},s_{2}) [6]

The transmittance can be a minimum of zero – although it can never actually get to zero – and a maximum of 1. So it is simply the proportion of radiation at that wavelength which emerges through the section of atmosphere in question:

*Figure 3*

With optical thickness, τ = 1, transmittance, t = 0.37 – which means that 63% of the radiation is absorbed along the path and 37% is transmitted.

With optical thickness, τ = 10, t=4.5×10^{-5} – that is, 45 ppm will be transmitted through the path.
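These numbers are easy to check – transmittance is just exp(-τ):

```python
import math

# Transmittance from equation [5]: t = exp(-τ)
for tau in (0.1, 1.0, 10.0):
    t = math.exp(-tau)
    print(f"τ = {tau:>4}: transmitted {t:.2%}, absorbed {1 - t:.2%}")
# τ = 1 gives t ≈ 0.37, i.e. 63% absorbed;
# τ = 10 gives t ≈ 4.5×10⁻⁵, i.e. about 45 ppm transmitted.
```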

A note on definitions – optical thickness is usually defined to be zero at the top of atmosphere (where z is a maximum) and a maximum at the surface (where z = 0):

*Figure 4*

So τ increases as z decreases, and decreases as z increases.

### Absorptance

In the absence of scattering (note 2), absorptance, a = 1 -t.

That is, whatever doesn’t get transmitted gets absorbed.

### Plane Parallel Assumption

If you refer back to Figure 2, you see that the radiation is not travelling vertically upwards, but at an angle θ to the vertical.

Using simple trigonometry, ds = dz / cos θ [7]

It’s always an advantage if we can simplify a problem and relating everything to only the vertical height through the atmosphere helps to solve the equations.

Atmospheric properties vary much more in the vertical direction than in the horizontal direction. For example, go up 10 km and the pressure drops by a factor of 5 – from 1000 mbar to 200 mbar. But travel 10 km horizontally and the pressure will have changed by less than 1 mbar. Temperature typically changes 100 times faster in the vertical direction than in the horizontal direction.

And since air density is determined by pressure and absolute temperature, we can reasonably assume that the density at a given height z is the same whether we look straight up or at an angle of 45°.

Of course, by the time we are considering an angle close to 90° – i.e., horizontal – the assumption is likely to be invalid. However, the transmissivity of the atmosphere at angles very close to the horizon is extremely low anyway, as we will see.

Therefore, making the assumption of a plane parallel atmosphere is a good approximation.

Let’s review the earlier equations using a substitution that reduces “equation clutter”:

μ = cos θ [8]

And rewrite equation [5]:

t(z_{1},z_{2}) = exp [-τ(z_{1},z_{2})/μ] [9]

Notice that the equations are now rewritten in terms of the optical thickness between two vertical heights and the angle of the radiation.

It might help to see it in graphical form – and note here that the optical thickness, τ, is for the **vertical** direction (otherwise the graph would make no sense):

*Figure 5*

This simply demonstrates that as the angle increases the radiation has to travel through more atmosphere.

So suppose the optical thickness vertically through the atmosphere, τ = 1, then for:

- a vertically travelling ray the transmittance = 0.37
- for a ray at 45° the transmittance = 0.24
- for a ray at 70° the transmittance = 0.05
- for a ray at 80° the transmittance = 0.003
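A few lines of code reproduce these values from equations [8] and [9]:

```python
import math

tau_vertical = 1.0   # optical thickness straight up through the atmosphere
results = {}
for theta_deg in (0, 45, 70, 80):
    mu = math.cos(math.radians(theta_deg))             # μ = cos θ, equation [8]
    results[theta_deg] = math.exp(-tau_vertical / mu)  # equation [9]
    print(f"θ = {theta_deg:>2}°: transmittance = {results[theta_deg]:.3f}")
# matches the list above: ≈ 0.37, 0.24, 0.05, 0.003
```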

### Emission

Using Kirchhoff’s law, absorptivity of a material = emissivity of a material for the same wavelength and direction. For diffuse surfaces – and for gases – direction does not affect these material properties, so they are only a function of wavelength. (*And for all intents and purposes, absorptance is the same term as absorptivity, and transmittance is the same term as transmissivity – see comment*).

Emission of radiation at any given wavelength for a blackbody (a perfect emitter) is given by Planck’s law, which is usually annotated as B_{λ}(T), where T = temperature.

The absorptivity of a gas, a_{λ} = 1 – t_{λ} = emissivity of a gas, ε_{λ}. (Corrected)

For a very small change in monochromatic radiation due to emission:

dI_{λ} = ε_{λ}B_{λ}(T) .ds [10a]

Therefore:

dI_{λ} = nσB_{λ}(T) .ds [10b]

So if we now combine emission and absorption, equations [1] & [10b]:

dI_{λ}/ds = nσ.(B_{λ}(T) – I_{λ}) [11]

If we combine this with our definition of optical thickness, from equation [4]:

dI_{λ}/dτ = I_{λ} – B_{λ}(T) [12]

which is also known as *Schwarzschild’s Equation* – and is the fundamental description of changes in radiation as it passes through an absorbing (and non-scattering) atmosphere.

It says, in not so easy to understand English:

The change in monochromatic radiation with respect to optical thickness is equal to the difference between the intensity of the radiation and the Planck (blackbody) function at the atmospheric temperature

Sorry it’s not clearer in English.

In more vernacular and less precise terms:

As radiation travels through the atmosphere, the intensity increases if the Planck blackbody emission is greater than the incoming radiation and reduces if the Planck blackbody emission is less than the incoming radiation

### Solving Schwarzschild’s Equation

Notice that this important equation contains the Planck term, which is for blackbody radiation (i.e., radiation from a perfect emitter), yet the atmosphere is not a perfect emitter. We definitely haven’t assumed that the atmosphere is a blackbody and yet the Planck term appears in the equation. It’s just how the equation “pans out”.

I mention this only because so many people have come to believe that there is some “big blackbody assumption” in climate science and they might be concerned by this term. Nothing to worry about, this has not been assumed.

Let’s find a solution to the equation if we are measuring the TOA (top of atmosphere) radiation. Refer to Figure 4:

- at TOA, z = z_{m} and τ = 0
- at the surface, z = 0 and τ = τ_{m}

Now some **maths manipulation** – skip to the end, people who don’t like maths..

First we note a handy party piece:

d/dτ [Ie^{-τ}] = e^{-τ}.dI/dτ – Ie^{-τ} [13]

Now we multiply both sides of equation [12] by e^{-τ}:

e^{-τ}.dI_{λ}/dτ = e^{-τ}.I_{λ} – e^{-τ}.B_{λ}(T) [14]

Re-arranging:

e^{-τ}.dI_{λ}/dτ – e^{-τ}.I_{λ} = – e^{-τ}.B_{λ}(T) [14a]

And substituting “handy party piece” [13] into [14a]:

d/dτ [I_{λ}e^{-τ}] = – e^{-τ}.B_{λ}(T) [15]

Now we integrate [15] between τ=0 and τ=τ_{m}:

I_{λ}(τ_{m})e^{-τm} – I_{λ}(0) = – ∫ B_{λ}(T)e^{-τ} dτ [16]

where the integral is between the limits of 0 and τ_{m}

And re-arranging we get:

I_{λ}(0) = I_{λ}(τ_{m})e^{-τm} + ∫ B_{λ}(T)e^{-τ} dτ [16]

..*end of maths manipulation*

Which – for those who haven’t followed the intense maths:

I_{λ}(0) = I_{λ}(τ_{m})e^{-τm} + ∫ B_{λ}(T)e^{-τ} dτ [16]

The intensity at the top of atmosphere equals..

The surface radiation attenuated by the transmittance of the atmosphere, plus..

The sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere

Don’t worry about the maths, but it is definitely worth spending some time thinking about the words in colors – to get a conceptual understanding of how atmospheric radiation “works”.
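For anyone who wants to check the maths numerically, here is a sketch (Python, with arbitrary made-up values for the surface radiance and the Planck term) that steps Schwarzschild’s equation [12] upward through an isothermal layer and compares the answer with equation [16] – which, for constant B, reduces to I(0) = I(τ_{m})e^{-τm} + B(1 – e^{-τm}):

```python
import math

def toa_intensity(I_surface, B, tau_m, steps=100_000):
    """Step Schwarzschild's equation dI/dτ = I - B (equation [12])
    from the surface (τ = τ_m) up to the top of atmosphere (τ = 0),
    for an isothermal layer where B is constant."""
    I = I_surface
    dtau = tau_m / steps
    for _ in range(steps):
        # moving upward, τ decreases, so I changes by -(dI/dτ)·dτ
        I -= (I - B) * dtau
    return I

B = 80.0        # Planck emission of the layer (arbitrary units)
I_sfc = 120.0   # radiance entering from the surface (arbitrary units)
tau_m = 2.0     # total optical thickness of the layer

numeric = toa_intensity(I_sfc, B, tau_m)
# Equation [16] with constant B: I(0) = I_sfc·e^(-τm) + B·(1 - e^(-τm))
analytic = I_sfc * math.exp(-tau_m) + B * (1 - math.exp(-tau_m))
print(numeric, analytic)
```

The emerging intensity lies between the surface value and the layer’s Planck emission – attenuated surface radiation plus attenuated atmospheric emission, exactly as the colored words above describe.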

### It’s Not Over Yet – Conversion from Intensity to Flux and the Diffusivity Approximation

Remember the note about the *Plane Parallel Assumption*?

Getting equations into WordPress is painful and time-consuming, so here is a quick explanation followed by the result – especially as maths fatigue will have set in among most readers, if any made it this far.

Equation [16] is for spectral intensity. That is, one direction rather than the complete hemispherical power (flux).

To calculate spectral emissive power (flux per unit wavelength) we need to integrate the equation over one hemisphere of solid angle. We re-write equation [16] in the form of equation [9] so that the optical thickness references vertical height z and μ, the cosine of the angle from the vertical. Then we integrate from μ = 0 (θ = 90°) to μ = 1 (θ = 0°).

Transmittance, t(z,0) = 2 ∫ e^{-τ(z,0)/μ} μ.dμ

where the integral is from 0 to 1

This equation doesn’t have an “analytical” solution, meaning we can’t rewrite it in a nice equation form without the integral. But with some clever maths that I haven’t tried to follow – but have checked – we can produce an approximation which is known as the **diffusivity approximation**:

2 ∫ e^{-τ(z,0)/μ} μ.dμ ≈ e^{-τ/μ’}

Where μ’ is the “effective angle” which produces a close approximation to the actual answer without needing to integrate across all angles (for each wavelength). The best value of μ’ = 0.6.

Here is the calculated integral (left side of the equation) vs the approximation with μ’ = 0.6, as a function of optical thickness, τ, demonstrating the usefulness of the approximation:

*Figure 6*
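The comparison behind Figure 6 can be reproduced by brute force – a sketch, with arbitrary τ values:

```python
import math

def flux_transmittance(tau, steps=200_000):
    """2·∫₀¹ exp(-τ/μ)·μ dμ, evaluated numerically by the midpoint rule."""
    dmu = 1.0 / steps
    total = 0.0
    for i in range(steps):
        mu = (i + 0.5) * dmu
        total += math.exp(-tau / mu) * mu * dmu
    return 2.0 * total

for tau in (0.2, 0.5, 1.0, 2.0):
    exact = flux_transmittance(tau)
    approx = math.exp(-tau / 0.6)   # the diffusivity approximation, μ' = 0.6
    print(f"τ = {tau}: integral = {exact:.4f}, e^(-τ/0.6) = {approx:.4f}")
```

The two columns track each other closely across a wide range of optical thickness, which is why the approximation is so useful in practice.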

There are some other refinements needed – for example, the reflection of atmospheric radiation from a surface with emissivity &lt; 1, which is then attenuated by the absorptance of the atmosphere before contributing to the TOA measurement. But these factors can all be introduced into the equations.

### Full Color Solution

What we have produced so far is a solution for each monochromatic wavelength, λ.

Also, we haven’t explicitly stated the fact that the optical thickness depends on the concentration and “capture cross section” **of each absorber** for that wavelength. So the optical thickness, and transmittance, for each height requires combining the effects of each active molecule.

The solution for flux, W/m², requires integrating the equations across all wavelengths.

Wow. Conceptually straightforward. But computationally very expensive – check out Figure 1 – the absorption characteristics of each radiatively-active gas vary significantly with wavelength.
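As a toy illustration of combining absorbers – the cross-sections and column amounts below are invented, not real spectral data – optical thicknesses at each wavelength simply add, so transmittances multiply:

```python
import math

# Hypothetical cross-sections (m² per molecule) for two invented gases on a
# coarse wavelength grid — real spectra (see Figure 1) vary far more finely.
wavelengths_um = [13.0, 14.0, 15.0, 16.0, 17.0]
sigma_a = [1e-26, 5e-26, 4e-25, 5e-26, 1e-26]   # gas A: strong band near 15 μm
sigma_b = [2e-26] * 5                           # gas B: flat absorber
n_a = n_b = 1e25          # column amounts, molecules per m² (illustrative)

transmittances = []
for lam, sa, sb in zip(wavelengths_um, sigma_a, sigma_b):
    tau_total = sa * n_a + sb * n_b    # optical thicknesses of absorbers add...
    t_total = math.exp(-tau_total)     # ...so their transmittances multiply
    transmittances.append(t_total)
    print(f"λ = {lam} μm: τ = {tau_total:.2f}, t = {t_total:.3f}")
```

A real line-by-line calculation does exactly this, but over hundreds of thousands of spectral lines and many atmospheric layers – hence the computational expense.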

### Conclusion

The equation for radiative transfer is commonly known (in differential form) as Schwarzschild’s Equation. It relies on fundamental physics.

To solve the equation requires some maths.

To solve the equation in practical terms the plane parallel assumption is used. This relies on the fact that variations in temperature and pressure (and therefore density) are negligible in the horizontal direction compared with the vertical direction.

The equation could be solved without the plane parallel assumption, but the horizontal variations in pressure and temperature are so slight that the same result would be obtained – detecting any difference would require extremely high quality data on temperature, pressure, density and concentration of absorbers.

To solve the equation in practical terms we need to know:

- the temperature (vs height) in the atmosphere
- the concentration of each absorber vs height
- the absorption characteristics of each absorber vs wavelength

In any practical field, the “proof of the pudding is in the eating”, and so take a look at Theory and Experiment – Atmospheric Radiation – where theoretical and practical results are compared.

And lastly, the Stefan-Boltzmann equation, correct and accurate though it is (check out Planck, Stefan-Boltzmann, Kirchhoff and LTE) is not used in the actual *equations of radiative transfer* in the atmosphere. Nor is any assumption of “unrealistic blackbodies”.

I only note these last points due to the high quantity (but not high quality), of blog articles and comments demonstrating the writers haven’t actually read a textbook on the subject, but still feel qualified to pass judgement on this field of scientific endeavor.

*Other articles:*

*Part One – a bit of a re-introduction to the subject*

*Part Two – introducing a simple model, with molecules pH2O and pCO2 to demonstrate some basic effects in the atmosphere. This part – absorption only*

*Part Three – the simple model extended to emission and absorption, showing what a difference an emitting atmosphere makes. Also very easy to see that the “IPCC logarithmic graph” is not at odds with the Beer-Lambert law.*

*Part Four – the effect of changing lapse rates (atmospheric temperature profile) and of overlapping the pH2O and pCO2 bands. Why surface radiation is not a mirror image of top of atmosphere radiation.*

*Part Five – a bit of a wrap up so far as well as an explanation of how the stratospheric temperature profile can affect “saturation”*

*Part Seven – changing the shape of the pCO2 band to see how it affects “saturation” – the wings of the band pick up the slack, in a manner of speaking*

*And Also –*

*Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.*

### Notes

**Note 1**: There are many formulations of the Beer-Lambert law and even much dispute about who exactly the law should be attributed to.

Other formulations include using the density of the gas and a matching coefficient for the effectiveness of the gas at absorbing.

**Note 2**: When considering solar radiation (shortwave), scattering is important. When considering terrestrial radiation (longwave), scattering can be neglected. In this article, we will ignore scattering, so the results will be appropriate for longwave but not correct for shortwave.


on February 8, 2011 at 2:25 am | jyyh
Just thanks for your efforts. Though I’m not good with maths, I know people who are more skeptical of the details of the greenhouse effect and more skilled in maths than me, and I’ve occasionally directed them to your blog.

on February 8, 2011 at 7:54 am | Ernest
One of the important consequences of The Equation is that it describes what we will “see” when looking at a given object. For instance, the temperature of the core of the Sun is around 13.6 million K, but what we see is an object of temperature approximately 5778 K.

Thus, we are not able to see the hotter part of the object if radiation from that part is absorbed on its way to the outer parts of the object. The radiation leaving the top of the atmosphere thus gives information mainly about the temperature of the deepest part of the object that can be seen, as well as about the absorption properties of the matter from this area up to the outer part of the object.

On the contrary, if we are inside the object then we will not be able to see the colder, outer parts of the object. The only thing we will see – here, feel – is the temperature of the part of the object closest to us. Changes in the temperature of the outer parts (for example due to a change in their composition) will, however, influence the rate of heat exchange between the inner and outer parts of the object, which will affect the temperature (and its variations) of the part closest to us.

In order to estimate the temperature of the parts that we are not able to see, we have to perform a lot of thinking and calculations, which “science of doom” is showing us in a very pedagogical manner. Thanks for that.

on February 8, 2011 at 3:30 pm | mkelly
“With optical thickness, τ = 1, transmittance, t = 0.37 – which means that 63% of the radiation is absorbed along the path and 37% is transmitted.

With optical thickness, τ = 10, t=4.5×10^{-5} – that is, 45 ppm will be transmitted through the path.”

Can you please explain this better? How can you use the same formula with different inputs, 1 vs 10, and go from a percent to a ppm?

on February 8, 2011 at 8:50 pm | scienceofdoom
mkelly:

They are different ways of writing the same number. A lot of people less familiar with maths have trouble “seeing” what 4.5×10^{-5} actually is.

So 0.000045 = 4.5×10^{-5} = 0.0045% = 45 parts per million.

on February 8, 2011 at 9:27 pm | mkelly
That is what I thought. I just wanted to make sure. Also please don’t be quite so condescending. I have a degree in mechanical engineering so I follow what you say. I use my J.P. Holman “Heat Transfer” textbook for light reading as follow-up to reading here.

But I think you should keep the same units when explaining things.

I enjoy this blog.

on February 8, 2011 at 10:25 pm | scienceofdoom
Sorry, it wasn’t intended that way.

I was attempting to explain that the reason that at times I write non-standard descriptions is because many people can’t visualize a number like 4.5×10^{-5}. It wasn’t aimed at you.

on February 8, 2011 at 11:10 pm | Frank
SOD: Very nice. My only complaint is that you used Kirchhoff’s Law and emissivity in this derivation. There should be no need to invoke Kirchhoff’s law or any of the “old-fashioned” physics of bulk materials that was developed in the Dark Ages before we knew that molecules existed and developed quantum mechanics to explain their behavior. The absorption coefficient or cross-section, σ – more properly written as σ(λ) – is the probability from quantum mechanics that applies to absorption and emission. These two processes are the “time reverse” of each other so the same probability, σ or more properly σ(λ), should apply to both:

1) a molecule in a high energy state emitting photon of a given wavelength and lowering its energy by hc/λ.

2) a same molecule in a lower energy state absorbing a passing photon and increasing its energy by hc/λ.

Another problem with Kirchhoff’s Law is that it was meant to describe absorption and emission from bodies or surfaces in thermal equilibrium. For surfaces – the material for which Kirchhoff developed his law – radiation that isn’t absorbed is reflected (or scattered?). For gases, radiation that isn’t absorbed is transmitted (or scattered). For liquids, all of these processes may apply.

Above you say: “The absorptance of a gas, aλ = 1-tλ =emittance of a gas, ελ.” Then you equate the “emittance” (which appears to really be “transmission” and should have nothing to do with emission of photons) with “emissivity”, which opens the door for introducing Planck’s function into Equation 10a. If this hand waving isn’t correct and if σ is the constant that applies to the quantum mechanics for both absorption and emission, how does B(λ,T) get into the equation for emission?

If there are N molecules in a state capable of absorbing a photon and n molecules in the higher energy state produced by absorption, then n could equal N*B(λ,T). If this were the case, we might expect Maxwell-Boltzmann statistics to apply to the situation. The Planck function, however, is derived for photons in otherwise empty space from Bose-Einstein statistics. Including the photon gives two states: State 1, a molecule with a photon near enough to be absorbed. State 2, a molecule in a higher energy state. Then B(λ,T) would be the relative populations of these two states. Perhaps I should withdraw my comments about “old-fashioned physics”, “Dark Ages” and “hand waving”, because it is becoming clear that I don’t understand the situation. All my attempts to find a QM explanation for why Planck’s function appears in Schwarzschild’s equation on the web have failed. (I did see derivations similar to SOD’s) Can anyone help?

on February 9, 2011 at 12:25 am | scienceofdoom
Frank:

Kirchhoff’s law proves that bodies in Thermodynamic Equilibrium have emissivity equal to absorptivity.

Experimental work subsequently proves that the material properties of bodies don’t change when they are out of Thermodynamic Equilibrium, as explained in Planck, Stefan-Boltzmann, Kirchhoff and LTE.

Kirchhoff’s law is true and can be applied, so why not use the simple explanation?

on February 9, 2011 at 10:11 pm | Frank
SOD (and DeWitt): I have now resolved my conceptual problems equating the behavior of solid surfaces and gases. For other skeptics, Kirchhoff’s Law was originally devised to explain the behavior of a surface at thermal equilibrium inside an enclosure. If absorption didn’t equal emission, the surface would warm or cool compared to the surroundings. If the surface is replaced with a volume of gas in a transparent vessel inside the same enclosure (and the other air is removed), the same reasoning appears to be true – even though the non-absorbed radiation is reflected in the first case and transmitted in the second. In both cases, however, Kirchhoff’s “Law” is derived from common sense and should be verified. (My comments about your confusing transmissivity and emissivity don’t make sense on further review.)

However, I hate (perhaps irrationally*) the idea of treating absorption and emission as two fundamentally different physical processes, when they are simply the time-reverse of each other. At a microscopic level, why is Planck’s function (which has been derived by QM) the right factor for joining absorption to emission? The semi-classical derivation of Planck’s function begins with a box of oscillators at a given temperature and seems (if I understand correctly) to derive the energy flux (per unit volume per solid angle, if my dimensional analysis is correct) emitted in all directions from that volume – without reference to the nature of the oscillators. More fundamentally, Planck’s function is derived from Bose-Einstein statistics, wherein the relative number of particles in a higher energy state E = hv is proportional to (exp(E/kT-1))/exp(E/kT). Compared with Planck’s function, I see an extra exp(E/kT) term to cancel the Maxwell distribution of energy between the two states of the GHG. Though I don’t really understand this physics, some of the right mathematical terms needed to derive Schwarzschild’s equation seem to be present and “begging” to be joined. Do we really have to go to a macroscopic scale – and invoke Kirchhoff’s Law – to derive Schwarzschild’s equation? (It seems clear that this is what some textbooks do, so I’m not criticizing anyone.) Or is Kirchhoff’s Law, like so many other macroscopic “laws”, a consequence of more fundamental physics?

*My aversion to the macroscopic descriptions is that they often get misapplied: Slabs of atmosphere don’t always emit black- or gray-body radiation. Emissivity is not a constant for gases. DLR isn’t contrary to the 2LoT. In my dreams, we would: Derive the correct laws from the microscopic scale, then show in which situations these fundamental laws do and do not produce the “laws” used to describe macroscopic behavior. This doesn’t mean I don’t appreciate SOD’s (generally heroic) efforts and the useful comments of others.

on February 9, 2011 at 10:02 am | Ernest
Frank,

I interpret the appearance of the Planck function in the Schwarzschild equation as the “background” radiation, which, if one so desires, might be set to zero. In such a case, we will get the pure Beer law.

But generally, we have a background radiation from the object A of temperature T and the irradiation from the object B. If both objects are at the same temperature then the irradiation from B will not contribute to the change of the intensity of radiation from the object A when being measured by a detector in some given direction. A and B will only exchange the same power density (W/m^2) with each other which does not affect the outgoing radiation in the direction to the detector.

If the temperature of B is higher than that of A then we will have an excess of power density from B to A and it is this excess (which might be called the signal) that is denoted by I. The intensity of the signal I diminishes exponentially along its way of propagation through the object A in the direction of the detector until it drops to the level of the background radiation. After that, the signal will not be detectable.

On the molecular level we have the molecular excitations due to the collisions between the molecules in the sample. The incoming signal increases the fraction of the excited molecules within the given volume from n to n+dn. A part of the extra excited molecules, dn, will re-emit photons while a part will lose the energy in collisions, which will increase the kinetic energy of the molecules instead (the absorption and thermalizing process). The process of thermalizing is related to the absorptivity of the medium with respect to the given wavelength (photon energy). The absorptivity is thus a function both of the absorption properties of the molecules and of the rate of molecular collisions. Thus the absorption coefficient of the molecules is not the same as the absorption properties of the sample; these different absorptions should not be confused with each other. But it is correct that the absorption coefficient of the molecules is equal to their emission coefficient for the given wavelength (in accordance with Einstein’s theory of the interaction of matter with radiation), and that you can describe this formally as the reversal of the absorption process in time.

There will be fewer and fewer photons in the signal. The propagation of the signal (photons above the background level) thus creates a temperature gradient along the direction of propagation of the signal. Conduction, convection and radiation processes will then strive to bring this induced gradient to zero.

In this interpretation, the background radiation given by the Planck function plays a similar role to mc^2 in the special theory of relativity, m being the rest mass of the object. In classical physics, mc^2 is set to zero, as the Planck function is set to zero in the Beer law.

on February 9, 2011 at 4:39 pm | DeWitt Payne
Frank,

The derivation of the Planck function in Appendix A of Introduction to Atmospheric Radiation by K-N Liou starts with this phrase:

In accordance with Boltzmann statistics,…

Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration. Low concentration means densities much less than a white dwarf. I suspect high temperature means anything well above absolute zero. IOW, a gas or a solid object on the Earth’s surface at 300 K is going to follow Maxwell-Boltzmann statistics.


on February 9, 2011 at 10:36 pm | Frank
Now we can understand where the “old-fashioned” physics of bulk materials – especially black- and grey-body radiation – comes from and how it gets MISAPPLIED to GHGs. In equation 11 above, if light has been passing through a homogeneous material (gas, liquid, or solid) for long enough that emission and absorption have reached equilibrium, we get

dI/ds = nσ[B(λ,T) – I] = 0 [11a]

(I, σ and ε vary with λ.) In this case, either σ is zero or I = B(λ,T). For materials where σ is not zero at any wavelength, emitted radiation follows Planck’s Law, I = B(λ,T), and the emission integrated over all wavelengths follows the Stefan-Boltzmann Law, W = oT^4. (o is the Stefan-Boltzmann constant and σ is still the absorption coefficient.) When σ is zero at some wavelengths, there is no light absorbed or emitted at these wavelengths and any light coming through the material was emitted from behind. In this case, when we integrate over all wavelengths (the intensity is either B(λ,T) or zero), we come up with a modified form of the Stefan-Boltzmann Law, W = εoT^4, where ε, emissivity, is a constant between 0 and 1 that corrects for all of those wavelengths where σ is zero.
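Frank’s step from I = B(λ,T) at every wavelength to W = oT^4 can be checked numerically. The sketch below is my own, using standard constants; the 0.1–100 μm integration limits are an arbitrary choice that captures almost all the emission at 300 K:

```python
import math

h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
kB = 1.380649e-23         # Boltzmann constant, J/K
sigma_SB = 5.670374419e-8 # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(lam, T):
    """Spectral radiance B(lambda, T) per unit wavelength."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def integrated_flux(T, n=20000):
    """pi * trapezoid-rule integral of B over 0.1-100 um, in W/m^2."""
    lo, hi = 1e-7, 1e-4
    dl = (hi - lo) / n
    vals = [planck(lo + i * dl, T) for i in range(n + 1)]
    return math.pi * dl * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

T = 300.0
print(integrated_flux(T), sigma_SB * T**4)  # agree to within ~1%
```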

For solids and liquids, radiation traveling through the material usually has passed by enough molecules so that emission and absorption have reached equilibrium. When radiation leaves their surfaces, we say the material emits “blackbody” or “graybody” radiation. This behavior is just Schwarzschild’s equation applied to a particular situation, not a fundamental “law” of physics.

This “law” is not true for gases. Many people (not SOD) like to pretend that we can discuss the radiation leaving a “slab of atmosphere” in terms of blackbody radiation. Those who wish to be technically correct will call it an “optically thick” slab, meaning that the slab is thick enough so that equilibrium between absorption and emission has been reached. “Optically thick” doesn’t apply to the Earth’s atmosphere, especially above the tropopause, where radiative equilibrium determines temperature. There are some wavelengths, such as the 15 um CO2 band, where the Earth’s atmosphere acts optically thick, but radiative forcing comes from the flanks of these strong bands – which are not optically thick! No wonder there is so much confusion.

Worst of all, these same people like to say that the energy emitted by a slab of atmosphere is oT^4 or εoT^4, thereby allowing us to believe that emission doesn’t depend on the number of GHG molecules in the slab. As everyone recognizes, absorption does depend on the number of GHG molecules in the slab. Thus we create the widespread illusion that GHGs TRAP energy trying to leave the atmosphere and don’t emit it. Perhaps SOD will want to discuss this illusion some day.

on February 9, 2011 at 11:14 pm | DeWitt Payne

Frank,

Then there are the people who want it both ways. Well, one person anyway. The essence of the argument, as near as I can tell, is that the integrated emissivity over all wavelengths for CO2 for a 1 m path length through the atmosphere is small, but increasing the path length doesn’t change the emissivity. In other words, the slab is optically thin and thick at the same time.

on February 10, 2011 at 9:11 pm | Frank

If I’m the “one person” you are referring to above, I’d really like to have it one way – the scientifically correct way – assuming I can see past my prejudices clearly enough to recognize it. From what I’ve learned, the enhanced greenhouse effect exists, but “trapping” is an extremely poor description of the mechanism.

I appreciate your help and SOD’s when I do get off-track.

on February 10, 2011 at 9:18 pm | scienceofdoom

I’m guessing that DeWitt Payne is referring to his amazingly patient debate in Lunar Madness and Physics Basics.

on February 10, 2011 at 10:04 pm | DeWitt Payne

SoD,

That would be it. I didn’t want to mention the name as names have power and might attract unwanted attention (and I was too lazy to look up a link).

That actually proved to be useful as it gave me a better understanding of the mechanics of the Hottel/Leckner approach to calculating radiant heat transfer in gases.

on June 29, 2011 at 3:38 pm | wooky

make that two people

on February 10, 2011 at 4:06 am | scienceofdoom

Frank on February 9, 2011 at 10:36 pm:

Perhaps I have misunderstood what you are getting at..

Let’s just consider vertically travelling radiation to simplify things (otherwise I have to amend my drawing I’m about to show and add some integrals).

The height of the thin layer = dz.

dI/dz ≠ 0 (upward radiation through the layer)

dI’/dz ≠ 0 (downward radiation through the layer)

Even in equilibrium.

If there is no convection then in equilibrium:

dI/dz – dI’/dz = 0 (with appropriate clarification of directions of I and I’).

With convection then we have to add in a convection term and sum to zero.

Even more important is that the I in this conservation of energy equation is not I_λ. It is the integral over all wavelengths.

on February 10, 2011 at 9:01 pm | Frank

For misapplication of blackbody radiation, try Figure 6.8 of FW Taylor’s book (which you recommended to me). Of course, he doesn’t say that this multi-slab model applies to the earth. You could add Figure 3.5, but this refers to an optically thin slab, which he says emits oT^4. It needs an emissivity term that varies with the concentration of GHG. For me (at least), this drawing suggests that absorption depends on GHG concentration and emission does not.

I do believe that misapplication of blackbody radiation leads to the myth that the mechanism of the enhanced greenhouse effect is that GHGs “trap” energy in the atmosphere. Emission is constant; absorption increases. GHGs both emit and absorb – emission is the means of getting rid of the extra energy they absorb. More GHGs speed up the transfer of energy, but reduce the distance over which the transfer is made. Whether more GHGs at a location cause warming or cooling is a complicated function of the temperatures at the multiple locations between which energy is being transferred.

The comments of David Reeves to part 5 are illustrative: Why doesn’t more CO2 increase DLR (since we all know CO2 traps energy)?

If CO2 traps energy, why does radiative forcing occur mostly in the wings of the 15 um band?

I should make it clear that I have no doubt about the existence of the enhanced greenhouse effect – just how it is explained. This series of posts is explaining it the right way. “Trapping” does not.

on February 10, 2011 at 10:50 am | Ernest

A simpler form of (16) is

I = Io[exp(-krz)] + B[1-exp(-krz)] (16a)

representing intensity of the transmitted radiation of a particular wavelength after the passage of the distance z through the absorbing medium.

Io = intensity of the incoming radiation at this particular wavelength

k = absorption coefficient

r = density of the absorbing/emitting gas

B = Planck function defining the intensity of the thermal radiation at the given wavelength.

You can regroup this equation to the form

I = (Io-B)exp(-krz) + B (16b)

or, introducing Is = Io – B, where Is is the intensity of the signal

I = Is*exp(-krz) + B (16c)

In this form, it is easier to discuss the transmission properties of the medium. It should be observed that B might represent either a black body or a gray body.
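Ernest’s form (16c) is easy to explore numerically. A minimal sketch, with hypothetical values for Io, B, k and r (arbitrary units, chosen only for illustration):

```python
import math

# Form (16c): the "signal" Is = Io - B decays exponentially with distance,
# so the transmitted intensity relaxes toward the thermal background B.
def transmitted_intensity(Io, B, k, r, z):
    Is = Io - B                      # signal above the thermal background
    return Is * math.exp(-k * r * z) + B

Io, B = 10.0, 4.0                    # hypothetical intensities
k, r = 0.5, 1.2                      # hypothetical absorption coeff, density
for z in [0.0, 1.0, 5.0, 50.0]:
    print(z, transmitted_intensity(Io, B, k, r, z))
# At z = 0 the result is Io; for large krz it tends to B.
```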

on February 10, 2011 at 10:31 pm | Frank

To avoid confusion by changing symbols, you can copy and paste Greek symbols (but not sub- or superscripting) from SOD’s post into your comments.

Optical thickness τ is probably not the same thing as krz because it is the result of an integration in equation 4. If the atmosphere were homogeneous, τ would be krz, but n decreases with z. My guess is that your simpler equation only applies to a homogeneous atmosphere whose pressure doesn’t decrease with height.

Optical thickness is a confusing concept for me, but dimensional analysis and equation 4 make it appear to be a quantity of GHG – the number of molecules in a long, thin cylinder with a circular base of area σ (the absorption cross-section) long enough to absorb 1/e of the radiation passing through its length. It doesn’t matter whether the GHGs are distributed evenly or decrease as you move from one end of the cylinder to the other. Using the term “optical thickness” also suggests a length for this cylinder, but it doesn’t have units of length. When I think of τ as a quantity or length, I have trouble with the concept of integrating from one τ to another. SOD’s explanation of the terms in Equation 16 in red, blue and green is elegant.

τ also describes a cylinder extending from space down into the earth’s atmosphere that contains enough GHG to absorb 1/e of the radiation passing through its length. This cylinder extends to infinity at one end, but the other end defines an altitude from which 1-1/e of the emitted photons escape to space – called the characteristic emission level.
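Frank’s picture of τ and the characteristic emission level can be sketched for an idealized atmosphere with an exponentially decaying absorber density (all numbers below are hypothetical, chosen only for illustration):

```python
import math

sigma = 1e-24   # absorption cross-section, m^2/molecule (hypothetical)
n0 = 1e21       # absorber number density at the surface, molecules/m^3
H = 8000.0      # density scale height, m

def tau_from_space(z):
    """Optical depth from space down to height z:
    integral of sigma * n0 * exp(-z'/H) dz' from z to infinity,
    which has the closed form sigma * n0 * H * exp(-z/H)."""
    return sigma * n0 * H * math.exp(-z / H)

# Height where the optical depth seen from space equals 1 - roughly the
# "characteristic emission level" described above.
z_emit = H * math.log(sigma * n0 * H)
print(z_emit, tau_from_space(z_emit))  # tau is exactly 1 at this height
```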

on February 10, 2011 at 11:53 pm | Frank

Correction to above: τ is a dimensionless quantity. σ has units of area/molecule. τ can’t be an exponent with units.

on February 11, 2011 at 9:46 am | Ernest

Frank.

Thank you for a tip about copying Greek symbols from SOD’s text, I was afraid that this would not work.

In the case when there is a temperature or density gradient (or both) then the Schwarzschild equation for the transmission of radiation at the particular wavelength is given by

I = Io[exp(-σns)] + B(Te)[1 – exp(-σn’s)] (16)

where n’ and (Te) refer to the density of the absorbing molecules and temperature, respectively, at the end of the air layer, compare http://www.barrettbellamyclimate.com/page45.htm

This gives

I = [Io – B(Te)]exp(-σns) + B(Te)[exp(-σns) – exp(-σn’s)] + B(Te) (16a)

or

I = Is*exp(-σns) + B(Te)[exp(-σns) – exp(-σn’s)] + B(Te) (16b)

where

Is = Io – B(Te)

is the intensity of radiation (ie intensity of the “signal”) above the background thermal radiation (ie above the “noise”).

The extra term B(Te)[exp(-σns) – exp(-σn’s)] describes the decrease of the “noise” along the way of propagation of the “signal”.

on February 10, 2011 at 7:09 pm | Frank

SOD: There appears to be a QM derivation of Schwarzschild’s equation on a microscopic scale that parallels the macroscopic derivation that you provided. (The derivation below originates with me, so there is no guarantee what follows is all or even partially right.) The derivation of Planck’s Law tells us the (energy) density of photons of a given wavelength, λ, present around one or more GHGs inside an enclosure at a given temperature, T, is given by: u(λ,T) = 4Pi*B(λ,T). The rate of absorption of photons is determined by the number of GHGs (n), the density of photons and a QM probability of absorption (r). Absorption (per unit volume) = rn*4Pi*B(λ,T). At equilibrium inside the enclosure, the rate of absorption must be equal to the rate of emission, so emission must also be rn*4Pi*B(λ,T). (The QM term r is required to be the same in both directions.) When we remove the enclosure and allow the photons to escape, the density of photons can take on any value, but emission will continue at a rate of nr*4Pi*B(λ,T).

Taking the derivative of these equations with respect to position, s, gives dr/ds = σ, the cross-section for absorption. The density of photons becomes the flux of photons (per unit area) and the 4Pi term is eliminated when radiance in all directions is converted to flux in one direction. Kirchhoff’s Law would then be a consequence of the QM requirement that r be the same in both directions. There would be no need for separate bulk properties, ε and σ.

on February 10, 2011 at 10:30 pm | DeWitt Payne

Frank,

I think your derivation is overly simplistic. I don’t think you can use a derivation that depends on the presence of a black body in a perfectly reflecting container to derive the radiation field for a collection of molecules which may only approximate a black body at certain wavelengths depending on the size of the box and the partial pressure of the gas. Your r is going to be a function of temperature and pressure as well as wavelength as you have both line strength and line shape to deal with as well as the superposition of multiple lines. I seriously doubt that u(λ,T) = 4πB(λ,T) at a wavelength where r is much much less than 1.

on February 11, 2011 at 11:06 pm | Frank

DeWitt: I’m sure my derivation is overly simplistic. However, all of your caveats about r varying apply on both the macroscopic scale and the quantum scale.

I simply tried to use the density of a photon gas (u(λ,T)) inside an enclosure to calculate an absorption rate (which must be equal to an emission rate at equilibrium). With the assistance of Kirchhoff, SOD uses the flux from such a photon gas on a macroscopic scale to demonstrate the ε must be nσ in Equation 10.

The standard model for radiative transfer breaks down when the distance traveled (and the size of the enclosure used to derive Planck’s Law) isn’t bigger than wavelength. http://web.mit.edu/press/2009/heat-0729.html

on February 10, 2011 at 9:40 pm | scienceofdoom

Frank:

I’m glad you got the book.

I returned my copy to the library, but for reference I did scan a few pages, luckily this includes Chapter 3.

I have Figure 3.5.

There is indeed a mistake in the diagram. But if you read the text and equations that is clear:

eσT_E^4 = 2eσT_S^4

Where T_S is the temperature of the stratosphere and e is the emissivity of the stratosphere.

The optically thin stratosphere is not radiating like a blackbody.

I don’t have chapter 6.

However, I do remember reading the book and many of Taylor’s excellent explanations are helping the reader to develop a conceptual understanding, which means starting from overly simplistic models and gradually making them more realistic.

I recall a number of simplified “greenhouse” models where he says things like “..and so this gives us a surface temperature of 320K.. clearly out by a factor of … not bad for such a simplistic model..” and so on.

If you want formal and complete derivations with consequently much more work for the student, try the expensive “Radiation and Climate” by Vardavas and Taylor, Oxford Science Publications (2007).

on February 11, 2011 at 10:21 pm | Frank

SOD: Taylor solves your equation (eσT_E^4 = 2eσT_S^4) to find that the stratosphere is 215 degK by assuming e is the same for both layers – even though one layer is the surface of the earth combined with the troposphere. Since e varies with the density of GHGs and is very different for the surface of the earth, this appears to be incorrect.

Ignoring the fact that emissivity for a gas, unlike traditional surfaces, varies with the composition of the gas is the cause of much confusion in climate physics. Picture a photon at the tropopause trying to escape to space through 1X or 2X GHG. The photon is more likely to be “trapped” in the atmosphere, so the earth will warm. Now remember that there are 2X GHGs emitting photons at the tropopause. The earth doesn’t warm. Memo to all proCAGWers: Avoid reminding anyone that emissivity depends on GHG concentration. (The proper physics is too hard to explain.)

Even worse (I hadn’t noticed before), Taylor goes on to calculate a no-feedbacks climate sensitivity of 18 degK in Table 7.3 and implies that the current absence of such a large temperature rise could be explained by a massive change in albedo! Improper treatment of emissivity appears to be the cause.

on February 12, 2011 at 2:00 am | scienceofdoom

Frank on February 11, 2011 at 10:21 pm:

You are not correct about this assumption.

Note the added red text to correct the diagram.

What is the energy absorbed by the layer? It is the earth’s surface/tropospheric radiation x absorptivity of the layer

Ein = eσT_E^4, where e = emissivity = absorptivity of the layer

What is the energy radiated by the layer? It radiates from two “surfaces”, so:

Eout = 2eσT_S^4

Therefore, in equilibrium, Ein = Eout and so:

T_S = T_E/2^(1/4)

There is no assumption that the emissivity of the stratosphere is equal to the emissivity of the earth’s surface + troposphere.
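The algebra is a one-liner to verify numerically: the layer emissivity e cancels from Ein = Eout, so T_S depends only on T_E. Taking T_E = 255 K (the usual effective emitting temperature; my choice of example value):

```python
# From e*sigma*T_E^4 = 2*e*sigma*T_S^4, the emissivity e and the
# Stefan-Boltzmann constant cancel, leaving T_S = T_E / 2^(1/4).
T_E = 255.0
T_S = T_E / 2**0.25
print(T_S)  # ~214.4 K, matching the ~215 K quoted for the stratosphere
```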

Once you start working on the assumption that most of the content of books by Professors of Physics is correct – even, amazingly, those involved in climate science – you will probably find that you learn more from their books.

on February 12, 2011 at 4:12 am | scienceofdoom

Frank:

Can you scan and post somewhere? Or email to me, scienceofdoom – you know what goes here – gmail.com.

I doubt that your criticisms will survive review. Maybe even your own review.

on February 13, 2011 at 8:11 pm | Frank

SOD: You are correct that I should start out with the assumption that the mathematics and physics of most books by physics professors is correct. You are completely correct (as usual) that emissivity is not misused on p47 as written. The “error” on this page is that emissivity is assumed to be 1 for the earth+troposphere layer used in the model, but this error is compensated for by choosing a “surface temperature” of 255 degK.

When I wrote my comment, I had been spending my time looking at why emissivity seems to be absent from many of the equations in chapter 6 (radiative transfer) and chapter 7 (the greenhouse effect). I saw the equation concerning the stratosphere in your post from chapter 3 and, to save time, mistakenly used this as an example. (A poor excuse for carelessness. Since I don’t have the time to master every detail, I often use words like “appears to” or “seems to”, but sure didn’t in this case.)

With regard to Table 7.3 (which I have just emailed), Professor Taylor certainly doesn’t use the phrase “no-feedbacks climate sensitivity”. He calls it “Model results for surface temperature corresponding to different values of Earth’s albedo, A, and different atmospheric CO2 mixing ratios”. The model results are surface temperatures, so one has to subtract to come up with climate sensitivity. So I think I fairly characterized his results. I have no reason to think that Taylor’s math will be found to be incorrect for the model he presents, but the final answer is absurdly wrong: a no-feedbacks climate sensitivity of 18 degK with 1/3 of this rise predicted for the “current” value of 367 ppm of CO2. He goes on to explain that some of the difference (20-50%) is due to thermal inertia of the ocean. Then he suggests that changing albedo could account for some of the discrepancy, using albedos of 0.25, 0.30 and 0.35! Albedo may be changing, but I’ve never seen changes this large.

As best I can tell, a significant amount of confusion and distortion in climate physics arises from models that treat emissivity as a constant that doesn’t vary with GHG concentration. This leads to the idea that increasing GHG absorb, but don’t emit, more outgoing radiation and thereby “trap” heat in the atmosphere. However, I’m no longer sure that this is the fundamental cause of Taylor’s misleading results.

Although we can usually count on professors of physics to get their mathematics correct, we apparently can’t count on them to be fully candid about the limitations of their work. In this case, the professor seems to be obscuring the limitations of his model with unreasonably? large changes in albedo and misdirection about climate sensitivity with and without feedbacks. I was unable to find what I understand to be the truth in his book – that the no-feedbacks climate sensitivity for 2X CO2 from our best models is about 1.2 degK, not 18 degK. Taylor does say that his result is six times larger than the IPCC’s value, but the IPCC’s value includes feedbacks (a concept Taylor introduces later).

Stephen Schneider, a pioneering climate scientist, once said: “as scientists, we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts”. Then he went on to explain what type of activities were appropriate when talking to the public in order to “make the world a better place”, which includes “telling scary stories”. Is Table 7.3 presented in a manner appropriate for “science” or is it a “scary story”?

on February 12, 2011 at 7:39 pm | DirkH

Frank is annoyed by the usage of “heat-trapping gases”. I googled for the term and found it being used by the EPA, the NYT and on Scott Mandia’s blog (I could go on). And here by Dessler, North et al.:

http://www.chron.com/disp/story.mpl/editorial/outlook/6900556.html

(from March 6, 2010)

“Heat-trapping gases are very likely responsible for most of the warming observed over the past half century”

Draw your own conclusions.

on February 12, 2011 at 7:52 pm | DeWitt Payne

I don’t find that any more misleading than the term greenhouse effect. It’s an analogy. All analogies fail when examined closely. It’s like trying to explain all of quantum mechanics using only English. You can’t.

on February 12, 2011 at 7:47 pm | DeWitt Payne

Frank,

Kirchhoff’s Law and Planck’s derivation of his own function also break down if there isn’t an actual black body in a perfectly reflective cavity.

See: http://www.ptep-online.com/index_files/2009/PP-19-01.PDF

Absent a hole, which acts as a source of radiation, or the presence of a black body, it’s not clear that a photon gas with Planck properties exists in a perfectly reflective cavity. Since a molecular gas is very far from being black or even gray, it would be surprising to me if the photon gas in a sealed reflective cavity containing only a molecular gas had a black body spectral distribution.

on February 13, 2011 at 9:03 am | Ernest

Science of Doom,

In my opinion, the use of the plane geometry, Figures 2 and 4 above, when studying the outgoing radiation is an unnecessary complication as compared to the spherical geometry representing the actual form of the atmosphere.

The plane geometry might be useful when studying the incoming radiation from the Sun, but in such a case one must take additionally into consideration the tilting of the absorbing surface in relation to the direction of propagation of irradiation and not only the increase of the path of propagation related to 1/μ, μ being given by [8].

In such an interpretation, Figure 2 would indicate that the radiation from the Sun is falling on the area dA*μ along the path s = z/μ (the arrow Iλ being directed down). The incoming radiation is absorbed, thermalized and emitted upward. It is sufficient to regard the propagation of the upward (long wave) radiation along the axis z, which is due to the spherical form of the surface of the Earth. This is similar to when studying the radiation from the Sun.

However, the calculations presented by “science of doom” are still valuable if assuming that the parameter τ(z1,z2)/μ in [9] relates to n/μ where μ is not necessarily a cos function (but nothing prohibits us from treating μ as a cos function, either). n/μ represents the change of n, as, for example, when enriching the atmosphere by a given kind of absorbing molecules as the result of human industrial activity.

In particular, the doubling of concentration of the absorbing molecules from the level n corresponding to τ(z1,z2) = 0.1 to the level 2n = n/cos(60) corresponding now to τ(z1,z2) = 0.2 will change the transmittance of the layer (z1,z2) from about t = 0.9 to about t = 0.82, see Figure 5. Certainly, when n goes to infinity, which corresponds to cos(90), the slab (z1,z2) will become completely opaque to the given wavelength.
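Ernest’s transmittance figures can be reproduced directly from t = exp(-τ) (a quick numerical check, nothing more):

```python
import math

# Doubling the absorber doubles the optical thickness tau; the
# transmittance t = exp(-tau) falls accordingly, but is not halved.
for tau in [0.1, 0.2]:
    print(tau, math.exp(-tau))
# tau = 0.1 gives t ~ 0.905 and tau = 0.2 gives t ~ 0.819, i.e.
# "about 0.9" and "about 0.82" as stated in the comment.
```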

One can relate the optical thickness to the distance (z1,z2) at which the transmittance drops by 1/e, as Frank had mentioned. In my opinion, the optical thickness should be related to the distance (z1,z2) at which the intensity of the signal drops to zero, which corresponds to the case when the intensity of the incoming radiation drops to the level B(λ,T).

on February 13, 2011 at 7:46 pm | scienceofdoom

Frank:

Thanks for sending me the relevant chapter from Taylor’s book.

Why don’t I write an article explaining many aspects? And then more people can learn and ask questions and, of course, criticize.

on February 13, 2011 at 9:16 pm | Frank

SOD: A separate post on how Taylor arrives at the results in Table 7.3 would be more useful than a discussion here. I initially grabbed his book to find an example of a misleading model with an optically thick slab atmosphere, but I don’t fully understand why his calculations appear to be so far off.

How about another post on the theme “GHGs redirect, but do not trap, infrared radiation”? Stratospheric cooling proves that “trapping” is not a general property of GHGs. With more GHG’s, there are more photons traveling shorter distances. It is easy to see why this cools the stratosphere, but less obvious why it warms the troposphere.

on February 13, 2011 at 9:31 pm | Frank

SOD wrote:

The absorptance of a gas, aλ = 1-tλ =emittance of a gas, ελ.

For a very small change in monochromatic radiation due to emission:

dIλ = ελBλ(T) .ds [10a]

Therefore:

dIλ = nσBλ(T) .ds [10b]

In equations 10a and 10b, you equate ελ and nσ. Emissivity is often a dimensionless number between 0 and 1. nσ appears to have units of inverse distance and the potential to be >1. What am I missing?

on February 14, 2011 at 2:45 am | DeWitt Payne

I think you missed

nσ = τ

For very small τ a = ε = τ

on February 14, 2011 at 6:43 pm | Frank

DeWitt:

τ = ∫σn(s).ds [4]

So τ is not equal to nσ, dτ/ds = nσ. This affords the correct units (1/distance). The slope of a function is not equal to the value of the function, but when we are dealing with an exponential function like n(s), the slope is proportional to the value of the function. So we could say τ = knσ, but with no obvious reason to say that k=1. Am I being obtuse?

on February 14, 2011 at 10:42 pm | DeWitt Payne

I went back to Petty on the derivation of the Schwarzschild equation. I think the confusion arises because SoD is using ελ, and ε is usually emissivity. In fact, the emissivity in SoD’s notation is actually ελds, so ελ in SoD’s notation isn’t dimensionless; it also has dimension (1/distance). Petty avoids this confusion by using βa as the extinction coefficient. The absorptivity a is then βa ds. The extinction and emission coefficients are equal.

on February 14, 2011 at 10:45 pm | DeWitt Payne

Which means ελ in SoD’s notation is the emission coefficient, not the emissivity.

on February 15, 2011 at 7:42 am | scienceofdoom

Frank:

DeWitt Payne is spot on.

There are many ways to write some parts of the equations and I have generated confusion by a lack of clarity and especially by an incorrect definition at the start.

Many others might also be confused for different reasons.

Let me review and explain a few steps, including questions not asked – for others. Repeated content from the article in italics.

1. Early on:

..with n=number of absorbing molecules per unit distance, and σ=capture cross-section..

INCORRECT definition for n:

n = number of absorbing molecules per unit volume, so the units are 1/m^3.

σ = units are m^2.

** Article will be updated **

2. Therefore considering this first equation:

dI_λ = -nσI_λ.ds [1]

nσ.ds has units of (1/m^3) . (m^2) . (m) = dimensionless

Of course, we need this to be dimensionless (ask if it isn’t clear why).

3. The absorptance of a gas, a_λ = 1-t_λ = emittance of a gas, ε_λ.

Note the subscript λ is explaining that the relationship is ONLY true for monochromatic radiation (radiation at one wavelength). Often, in derivations, the λ subscript is dropped just to make the equations clearer (less clutter). I don’t do that here because it’s very easy for newcomers to think that an identity is true across all wavelengths.

So ε_λ = a_λ = 1-t_λ

Let’s calculate ε_λ across a very small distance, ds:

ε_λ = 1 – exp(-nσ.ds)

How do we manipulate this equation?

4. Using the Taylor expansion – a mathematical manipulation method – we find that

exp(-x) ≈ 1-x, for very small x

And so ε_λ = 1 – exp(-nσ.ds) = 1 – (1 – nσ.ds) = nσ.ds

This expression is dimensionless, as already demonstrated, which is what we want.

5. Therefore:

For a very small change in monochromatic radiation due to emission:

dI_λ = ε_λB_λ(T) .ds [10a]

Therefore:

dI_λ = nσB_λ(T) .ds [10b]

on February 16, 2011 at 10:59 pm | Frank

SOD and DeWitt: Isn’t ε dimensionless in this reply everywhere but equation 10a, suggesting something is wrong somewhere? There could be a problem with infinitesimals in step 3; one side of the equation is an infinitesimal and the other side isn’t. (If we limit the discussion to a single wavelength, the λs can be omitted for clarity.)

ε =? 1 – exp(-nσ.ds) (dubious from SOD above )

dε =? 1 – exp(-nσ.ds) (this isn’t right either)

a = 1 – exp(-nσs) (homogeneous only)

ε = 1 – exp(-nσs) (Kirchhoff)

dε/ds = nσ*exp(-nσs)

dε = nσ*exp(-nσs) .ds

I = ε * B(T,λ). B(T,λ) is constant with position. If there is a dI with respect to position, there must be a dε. Therefore

dI = nσ*exp(-nσs) * B(T,λ) .ds (homogeneous only)

dI = nσ * B(T,λ) .ds (when s =0)

Before this reply, Equation 10a (with apparent units of 1/distance for ε) seemed to appear out of nowhere. In a previous (possibly erroneous*) comment, I made the assumption that Equation 10a was equivalent to a definition of emissivity written in differential form.

I = ε * B(T,λ) definition of emissivity

dI = ε * B(T,λ) .ds Eqn 10a

However,

dI/ds = dε/ds * B(T,λ) + ε * dB(T,λ)/ds

dI/ds = dε/ds * B(T,λ) + 0

dI = dε/ds * B(T,λ) .ds

Not: dI = ε * B(T,λ) .ds [Equation 10a]

When DeWitt tells me that SOD and I are using different meanings for ε, the second meaning could be dε/ds. For exponential functions like ε, dε/ds is proportional to ε, but not equal to ε. The units are also different. However, as best I can tell, SOD and I use the same ε everywhere.

*In my earlier comment, I assumed SOD had gotten Equation 10a by treating ε as a constant when writing a differential form of I = ε*B(T,λ). If he had properly differentiated both sides of this equation with respect to s and ε were constant, both dε/ds and dI/ds would have been zero.

on February 15, 2011 at 5:07 pm | Frank

Equation 10a appears to be wrong. I = ε*B(T,λ), but this equation can’t be differentiated with respect to s because ε = a and absorptivity/absorbance is certainly a function of s. (Yet another example demonstrating that “all” problems with climate physics originate with the assumption that emissivity is a constant for gases. 🙂 )

I = a * B(T,λ) = [1 – exp(-τ)] * B(T,λ)

Differentiating with respect to s:

dI/ds = B(T,λ) * exp(-τ) * dτ/ds

For an infinitesimal path ds, τ approaches 0 and exp(-τ) approaches 1. τ = Int[n(s)σ.ds], so dτ/ds = nσ

dI/ds = B(T,λ) * nσ

dI = nσB(T,λ) .ds (I = emission) [10b]
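Frank’s limiting step, that I = [1 – exp(-nσs)]*B approaches nσB.ds as the path shrinks, can be confirmed numerically (nσ and B below are hypothetical values in arbitrary units):

```python
import math

# Emission from a thin homogeneous layer: I = [1 - exp(-n*sigma*s)] * B.
# As s -> 0 this approaches the linear form n*sigma*B*s of equation 10b.
n_sigma = 2.0   # n * sigma, 1/m (hypothetical)
B = 5.0         # Planck intensity at this wavelength (hypothetical)

def emitted(s):
    return (1.0 - math.exp(-n_sigma * s)) * B

for s in [1e-1, 1e-3, 1e-6]:
    exact = emitted(s)
    linear = n_sigma * B * s
    print(s, exact, linear, abs(exact - linear) / exact)
# The relative difference shrinks in proportion to s, as expected for a
# first-order Taylor expansion.
```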

on February 15, 2011 at 10:33 pm | DeWitt Payne

But your ε is not the same as SoD’s. You’re using ε to mean emissivity. SoD isn’t. The integrated form of 10a in SoD’s terminology assuming n is constant in the layer and the layer thickness is s:

I =[1 – exp(-ελs)]Bλ(T)

I don’t know of anyone other than he who shall not be named and his followers that think that emissivity isn’t a function of path length.

on February 16, 2011 at 7:14 pm | Frank

Above, I tried to offer a derivation of Equation 10b without going through Equation 10a. Unfortunately, I may have made the explanation too short when editing. Expressing Kirchhoff’s Law for a volume of gas gives:

a = ε

a = 1- I/I_0 = 1 – exp(-τ) I = transmitted light

ε = I / B(T,λ) I = emitted light

1 – exp(-τ) = I / B(T,λ) I = emitted from here on

I = a*B(T,λ) = [1-exp(-τ)]*B(T,λ) “Kirchhoff’s Law”

Differentiating with respect to s:

dI/ds = B(T,λ) * exp(-τ) * dτ/ds

For an infinitesimal path ds, τ approaches 0 and exp(-τ) approaches 1. τ = Int[n(s)σ.ds], so dτ/ds = nσ

dI/ds = B(T,λ) * nσ

dI = nσB(T,λ) .ds (I = emission) [10b]

on February 16, 2011 at 8:45 pm | DeWitt Payne

Frank,

On further consideration, you’re right and 10a is wrong. ελ is defined as emittance which is a function of path length and is dimensionless, so its appearance undifferentiated in 10a is wrong.

on February 16, 2011 at 8:49 pm | DeWitt Payne

In fact ελ is emissivity rather than emittance. Emittance is equal to:

ελBλ(T)

and has units of W/m^2.

on February 17, 2011 at 10:16 pm | Frank

A convert! Your comments drove me to make a cheat sheet with all of the terms, defining equations and units. If logic prevailed, emittance would be a synonym for emissivity.

I've always thought of absorption cross-section in units of m2/molecule rather than SoD's m2, and density in terms of molecules/m3 rather than 1/m3. "Molecule(s)" could be individual molecules, moles, grams, mixing ratio*density, etc. Are SoD's units the traditional ones for this field?

on February 18, 2011 at 3:55 am | DeWitt Payne: Absorptance and transmittance are ratios, so they're equivalent to absorptivity and transmissivity. Emittance isn't a ratio. So SoD's statement that:

"The absorptance of a gas, aλ = 1 - tλ = emittance of a gas, ελ."

is wrong. It’s absorptivity equals emissivity. So 10a is wrong and unnecessary. It should be eliminated and 10b renamed 10.

Petty doesn't use absorptivity and emissivity when he derives the Schwarzschild equation. He uses the extinction coefficient. Emittance isn't even in the index, just emission and emissivity.

on February 16, 2011 at 12:32 am | Antiquated Tory: Just went back and read that thread, DeWitt Payne. Nyarlathotep's balls, He Who Shall Not Be Named was a ===== (moderator's note, please observe the Etiquette, painful though that might be).

A certain regular commenter took him seriously, too.

What happened to Mark? Would be sad if he gave up reading this blog.

on February 16, 2011 at 10:36 am | Miklos Zagoni: SoD,

You say:

“The value σ is a material property and so constant for one gas at one wavelength.”

and:

“The intensity of radiation at wavelength λ is reduced as it travels through the atmosphere according to the total amount of the absorber along the path.”

The period at the end of your statements suggests that this is the full explanation. But I think you should say a word about the fact that σ = σ (T, p).

Therefore I think the full (I mean, the correct) statements would be these:

“The value σ is a material property and so constant for one gas at one wavelength at one temperature and one pressure, but varies with the temperature and pressure of the gas.”

and:

“The intensity of radiation at wavelength λ is reduced as it travels through the atmosphere according to the total amount of the absorber along the path, and according to the temperature and pressure distribution of the gas along the path.”
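[Editorial aside] Miklos's point amounts to keeping σ inside the path integral: τλ = ∫ n(s)·σ(s) ds, with σ varying through σ(T, p) along the path. A minimal numerical sketch with an assumed exponential density profile and a purely illustrative (made-up) pressure scaling of σ:

```python
import math

H = 8000.0        # assumed scale height (m)
n0 = 2.0e23       # assumed surface number density of the absorber (1/m^3)
sigma0 = 5.0e-27  # assumed cross-section at surface conditions (m^2)

def n(z):
    return n0 * math.exp(-z / H)    # density falls off with height

def sigma(z):
    # illustrative stand-in for sigma(T, p): scaled by the local pressure ratio
    return sigma0 * math.exp(-z / H)

def tau(z_top, use_constant_sigma=False, steps=100000):
    # tau = integral of n(z) * sigma(z) dz, midpoint rule
    dz = z_top / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz
        s = sigma0 if use_constant_sigma else sigma(z)
        total += n(z) * s * dz
    return total

tau_varying = tau(50000.0)                          # sigma varies along the path
tau_fixed = tau(50000.0, use_constant_sigma=True)   # sigma (wrongly) held at surface value
print(tau_varying, tau_fixed, math.exp(-tau_varying))
```

With these made-up numbers, holding σ at its surface value roughly doubles the computed optical thickness, so the transmissivity exp(-τ) comes out badly wrong; real line-by-line codes evaluate σ(T, p) layer by layer.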

on February 16, 2011 at 10:56 am | scienceofdoom: Miklos Zagoni:

You are correct and I have added this note to the article.

I am working on another article to explain this subject more fully.

on February 20, 2011 at 3:18 am | scienceofdoom: Frank on February 16, 2011 at 7:14 pm (and also DeWitt Payne on his follow-up).

Sorry it has taken a few days to respond. Unfortunately it has been a very busy week and some questions need mental energy that has been in short supply.

For some reason I started using the term emittance instead of emissivity.

I have no idea why – except that I was focused on the maths. Emittance is definitely the wrong term.

I will update the article soon and also address your maths points at the same time.

on March 12, 2011 at 1:20 pm |Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Nine « The Science of Doom[…] as transmittance decreases. Check out the heading Optical Thickness & Transmittance in Part Six if this isn’t […]

on March 12, 2011 at 1:21 pm |Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Nine « The Science of Doom[…] previous articles in this series I created two fictitious molecules, pCO2 and pH20, and solved the radiative transfer equations for a variety of conditions for these two molecules through the […]

on March 26, 2011 at 9:08 am | scienceofdoom: On another blog I have said that I will provide evidence that the article writer has made false claims.

Here is an extract from Grant Petty, A First Course in Atmospheric Radiation, demonstrating that the atmosphere is not considered as a "blackbody" with an emissivity = 1.

on March 26, 2011 at 11:30 pm | scienceofdoom: Referencing my comment above, here is an extract from Radiation & Climate, Vardavas & Taylor (2007). Note the highlighted equation.

The term S is the “source function” – which, when the atmosphere is in local thermodynamic equilibrium (LTE), is equal to the Planck function.

So if the emissivity was assumed to be 1 – a blackbody – then this would NOT be multiplied by the emissivity term, exp(-(t2λ-t1λ)/μ).

And by the way, this equation is the solution at one wavelength.

on April 2, 2011 at 3:48 am | scienceofdoom: Finally following up on the earlier comments (e.g. February 20, 2011 at 3:18 am) about the correct terms.

Having consulted 5 textbooks covering a period of 4 decades it appears that the term emittance is a problem term and used differently by different authors.

There is no confusion in the basics of heat transfer in these books, as they define their terms.

Emittance – if consistency was observed – would relate to emissivity the same way that transmittance relates to transmissivity. Essentially, transmittance is the ratio of transmitted radiation, as is transmissivity. The first is the extensive term, the second is the intensive term.

Following this convention – as some have tried to do including Lienhard (2008) – emittance would be the fraction of radiation from a real surface as a proportion of blackbody radiation.

However, most people have come to use emittance as the actual emitted flux.

Therefore, I will use emissivity and am correcting the text accordingly.

So as a result we have “Absorptance = Emissivity” which I have changed to “Absorptivity = Emissivity” so as not to confuse readers more than necessary(?).

on April 5, 2011 at 11:26 pm |Simple Atmospheric Models – Part One « The Science of Doom[…] Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations – the actual equations of radiative transfer (no blackbodies or Stefan-Boltzmann equations to be seen) […]

on April 7, 2011 at 11:12 am |Simple Atmospheric Models – Part Two « The Science of Doom[…] Note 1: Optical thickness is proportional to the number of absorbers (molecules that absorb radiation) in the path. So as the atmosphere thins out the density reduces and, therefore, the optical thickness must also reduce. You can read more about the equations of optical thickness in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations […]


on April 21, 2011 at 7:55 am |Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Eleven – Heating Rates « The Science of Doom[…] The only way to answer these questions is to solve the Schwarzschild equation – see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. […]

on April 22, 2011 at 7:57 am |The Mystery of Tau – Miskolczi « The Science of Doom[…] can find a more complete explanation of optical thickness in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations, which I definitely recommend reading even though it has many equations. (Actually, because it has […]

on April 26, 2011 at 5:21 am |The Mystery of Tau – Miskolczi – Part Three – Kinetic Energy « The Science of Doom[…] calculate this value, you need to solve the radiative transfer equations, shown in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. These equations have no “analytic” solution but are readily solvable using numerical […]

on May 15, 2011 at 7:15 am |The Mystery of Tau – Miskolczi – Part Five – Equation Soufflé « The Science of Doom[…] simple – and is not used to really calculate anything of significant for our climate. See Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations for the real […]

on May 20, 2011 at 5:13 pm | Flash: I think the Greenhouse effect is much simpler than this explanation.

The essential simplified model to understand 'global' warming of the Greenhouse effect is that of a 'filtered blackbody' (fBB).

Imagine a theoretical blackbody (BB) at 255K. It is emitting power right across the theoretical BB spectrum.

Now what happens if you wrap the BB with a filter that covers the 600-700 wavenumber range?

The fBB quickly changes to a new dynamic thermal equilibrium: it emits more radiation over the non-filtered wavenumbers but overall emits the same total power as before.

So it 'looks' to an outside observer like a BB at 295K but with a chunk of the spectrum missing. And at the macro (global) level this is fairly much what would happen.

You would be able to say that the ‘Greenhouse’ effect results in a global temperature rise of 40K.

Now look at an emissions spectrum for Earth.

This looks fairly much like an fBB to me. Visually you can see that it looks like something at about 295K with some ‘chunks’ missing and has the equivalent power output of a true BB at 255K.

Next question is what would be the impact on the above graph of doubling CO2 concentration in the atmosphere?

I haven't been able to find an answer, but what I have seen so far indicates, well, not much would happen. Absorption at CO2 and H2O wavenumbers is near saturated and while there might be an effect in the 'wings' of the CO2 absorption spectrum this would be very minimal.

Apparently some radiance might be filtered out round about wn740 … but how much would the power output across the rest of the spectrum need to increase to compensate?

We would be talking about a very minor impact indeed on the total Greenhouse effect.
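[Editorial aside] Flash's filtered-blackbody model can be put into numbers directly: find the temperature at which emission outside the 600-700 cm⁻¹ band matches the total output of a 255 K blackbody. A sketch (the sharp band edges, the grid, and the single filtered band are all simplifying assumptions):

```python
import numpy as np

h, c, kB, SB = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def planck_flux(nu_cm, T):
    # hemispheric spectral flux per wavenumber, W/(m^2 cm^-1)
    nu = nu_cm * 100.0                                  # cm^-1 -> m^-1
    B = 2 * h * c**2 * nu**3 / np.expm1(h * c * nu / (kB * T))
    return np.pi * B * 100.0

nu = np.linspace(1.0, 5000.0, 20000)        # wavenumber grid (cm^-1)
window = (nu < 600.0) | (nu > 700.0)        # everything outside the filter

def integrate(y, x):
    # plain trapezoid rule (avoids np.trapz, which NumPy 2.0 renamed)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def flux_outside_band(T):
    F = planck_flux(nu, T)
    F[~window] = 0.0
    return integrate(F, nu)

target = SB * 255.0**4     # total emission of the unfiltered 255 K blackbody

lo, hi = 255.0, 330.0      # bisection for the filtered body's temperature
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if flux_outside_band(mid) < target:
        lo = mid
    else:
        hi = mid
print(f"filtered-blackbody temperature ~ {mid:.1f} K")
```

The `window` mask can be widened or given multiple bands to mimic overlapping absorbers; the solved temperature shifts accordingly.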

on May 20, 2011 at 10:45 pm | scienceofdoom: Flash:

It depends on what you mean by “not much”.

The answer is given in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers.

An insight into the effect of CO2 across its spectrum is shown (along with the calculations) in Part Nine – you can see that absorption is not “saturated” along with the calculation of transmittance through the atmosphere with a doubling of CO2.

And you can find explanation about the question of “saturation” in CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.

on May 20, 2011 at 11:16 pm | Flash: There's some complex theory on 'wings' and 'shoulders' and stuff. A bit over my head, but the general gist seems to be a widening of the potential absorption band by a little bit (we're talking 2-15 wavenumbers here depending on source and only in one direction, because the other way we're into the already saturated H2O spectrum).

In practice, there are some observations of the change in the 'filtered' spectrum between 1970 and 1997. Some of that was due to increased H2O absorption but some apparently due to the CO2 'wing' effect in wavenumbers ~735 (so it would appear to be a real and observable phenomenon). But not by a lot (a change in intensity over a narrow band of wavenumbers by no more than 1.5 Teff). This was from a 'warmists' site so I assume is the worst case … reality might be less. Dunno, needs more observational data.

I took the suggested extra absorptions over the 27 years (1970-1997) and then multiplied them by 10(!) to represent the standard CO2 doubling, and then factored them into a new Greenhouse Effect as per my filtered blackbody (fBB) model. Net impact was about 0.2°C.

Does anyone have a model of the emissions spectrum they think that Earth will have if CO2 doubles? That will tell us all we need to know.

on May 20, 2011 at 11:36 pm | scienceofdoom: Yes, see the above link in my earlier comment: CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.

on May 20, 2011 at 11:38 pm | scienceofdoom: See Understanding Atmospheric Radiation and the "Greenhouse" Effect – Part Twelve – Curve of Growth

And also: Part Nine.

on May 20, 2011 at 11:49 pm | Flash: Hi SoD,

Lots of (probably interesting) but not overly relevant stuff.

My question is … can you show me what the models say the emissions spectrum of Earth looks like with double 1970’s CO2?

Take this as a reference point …

I want something that shows how this changes. How does this change if there is 2*70’s CO2?

This is the ONLY thing that tells me what the impact of CO2 is on the Greenhouse Effect.

Thanks,

Andrew

on May 21, 2011 at 12:04 am | Flash: Hi SoD,

Yeah. So, okay, the 'Doppler' effect as the recipient moves forward and backward smears the absorbable quantum wavenumber out a bit. It still does not open up much of a 'wing' in the filtered spectrum. Doesn't this just affect absorption in neighbouring wavenumbers?

As per my fBB model Greenhouse warming is a factor of the total absorbed spectrum ‘v’ the unabsorbed ‘window’.

on May 21, 2011 at 3:09 am | scienceofdoom: Flash:

This was in the link of “not overly relevant stuff”.

It is relevant because it directly answers your question.

Strictly speaking it doesn’t actually answer the question you asked – because you asked about doubling 1970’s CO2. Whereas the answer everyone else poses and considers (for consistency) is what happens when we double CO2 from pre-industrial levels.

on May 21, 2011 at 7:19 am | Flash: I picked 1970 as a base year because we seem to have some records from then!

No point starting with a point we cannot measure reliably.

Pre-industrial we don't have any records of what Earth's transmission spectrum looked like.

The diagrams above are not an Earth’s emission spectrum … which is what I am looking for.

Does anyone have a model emissions spectrum of what they think it will look like if CO2 levels go up?

If the answer to this is NO, no-one has ever done this that would be interesting too.

A.

on May 28, 2011 at 8:55 am |The Mystery of Tau – Miskolczi – Part Six – Minor GHG’s « The Science of Doom[…] of 289K. The diffusivity approximation was used to estimate total hemispherical transmittance (see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations). The wavenumber step, Δν = 1 cm-1. The calculations were done from 100 cm to 2500 cm (4μm […]

on August 12, 2012 at 2:58 am |Temperature Profile in the Atmosphere – The Lapse Rate « The Science of Doom[…] the atmosphere is not gray so this is not a simple problem, but it can be solved using the radiative transfer equations with numerical […]

on January 3, 2013 at 9:29 pm |Visualizing Atmospheric Radiation – Part One « The Science of Doom[…] Radiation does go in all directions. The plane parallel assumption has very strong justification and – in simple terms – mathematically resolves to a vertical solution with a correction factor. You can see the plane parallel assumption and the derivation of the equations of radiative transfer in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. […]

on February 2, 2013 at 7:58 am |Kiehl & Trenberth and the Atmospheric Window « The Science of Doom[…] What’s the Palaver? – Kiehl and Trenberth 1997 Do Trenberth and Kiehl understand the First Law of Thermodynamics? Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The… […]

on August 19, 2013 at 7:45 pm | JWR: SoD

http://www.tech-know-group.com/papers/Beer-Lambert.pdf

I wrote a paper on my doubts about the Beer-Lambert hypothesis and the resulting Schwarzschild equation.

on August 20, 2013 at 8:52 am | scienceofdoom: JWR,

It’s nice to see someone attempt to address the real physics.

I haven’t been very active on the blog for some months due to “real life” but will take some time to review your paper.

Being curious about how anyone could dispute the (>150 year old) Beer-Lambert law of absorption I had a quick scan and noticed you said:

"Yet the equation has the dependent term I_λ, which is the source radiation, which depends on the temperature of the source."

Do you have a comment on this simple point?

I will comment more fully in the next few days.

on August 20, 2013 at 4:10 pm | Pekka Pirilä: Beer-Lambert law gives directly relevant results under very specific circumstances only. What's required is that:

1) radiation is monochromatic or the absorptivity of the absorbing material is independent of the wavelength over the range of wavelengths considered.

2) emission by the absorbing material can be disregarded as very weak in comparison to the incoming radiation that passes through the full thickness of the absorber.

The second condition is commonly true for SW radiation (visible, UV, X-ray, gamma) but not for LWIR.

When LWIR in the atmosphere is considered, no serious source assumes that Beer-Lambert is directly applicable. Specifically, this series of posts on understanding atmospheric radiation is not in the least based on assuming applicability of the Beer-Lambert law.
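[Editorial aside] Pekka's second condition can be seen by integrating both equations through a single isothermal layer: Beer-Lambert only attenuates, while the Schwarzschild equation, dI/ds = nσ(B - I), relaxes the intensity toward the Planck source function B. All the numbers below are arbitrary illustrations:

```python
B = 100.0       # Planck source function of the isothermal layer (arbitrary units)
I0 = 20.0       # incident intensity, weaker than B (typical of LWIR)
k = 0.5         # extinction per unit path, n * sigma (arbitrary, 1/m)
ds, steps = 0.01, 2000   # integrate over 20 m of path (total tau = 10)

I_beer, I_schw = I0, I0
for _ in range(steps):
    I_beer += -k * I_beer * ds          # Beer-Lambert: absorption only
    I_schw += k * (B - I_schw) * ds     # Schwarzschild: absorption + emission

# analytic checks: I_beer -> I0*exp(-tau), I_schw -> B + (I0 - B)*exp(-tau)
print(I_beer, I_schw)
```

After ten optical depths the Beer-Lambert intensity has essentially vanished, while the Schwarzschild intensity has converged to B (the layer emits like a blackbody at its own temperature), which is why Beer-Lambert alone cannot describe LWIR in the atmosphere.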

on August 21, 2013 at 7:59 pm | DeWitt Payne: The Beer-Lambert Law originated from observations in the laboratory. The derivation came later. Are you saying then that spectrophotometric analysis is somehow flawed? That would be news to a lot of chemists.

on August 22, 2013 at 9:45 pm | JWR: @DeWitt

The discussions with SoD are already much further.

Try to give a positive contribution.

on August 23, 2013 at 6:04 pm | DeWitt Payne: JWR,

Here’s a positive contribution: Do a bit more research before you start spouting off about things you clearly don’t understand very well.

on August 20, 2013 at 5:54 pm | JWR: @SoD

in the Beer-Lambert constitutive equation which I write here as

dI_lamda = -N*alfa_lambda*I_lamda

it is generally understood that I_lamda is the current value at point z.

I forgive you for the remark since you did it in a hurry.

@Pekka

Instead of alfa_lambda you can write alfa, assuming that the absorption is independent of wavelength.

In the time you and your CPU are spending with the "code" (ref [5]) as described in the post, I see tau and exp(-tau) all over the place!

on August 20, 2013 at 8:48 pm | Pekka Pirilä: JWR,

tau and exp(-tau) may appear in numerous texts that try to explain, what’s going on in the atmosphere, but they are not used in the actual calculations. It’s not uncommon that a detail of the final results of such an actual calculation is expressed using these concepts through a backwards calculation.

on August 20, 2013 at 10:17 pm | scienceofdoom: JWR,

I_λ(z) is the current value of intensity at point z. You are correct.

It came from a source and therefore at its starting point, 0, the intensity I_λ(0) depended on the source temperature. During its journey it is attenuated by the absorptivity of the absorbing material and this is not dependent on the temperature of the absorber.

Therefore I_λ(z) has a dependency on the temperature of the source (the Planck function for the source temperature at that wavelength x the emissivity of the material at that wavelength), and not on the temperature of the absorber.

So back to your statement:

You seem to have misunderstood the Beer-Lambert hypothesis. The Beer-Lambert hypothesis does not make this claim.

on August 22, 2013 at 7:08 am | JWR: @SoD

Thank you for your explanation that I(z) does not represent the real intensity at level z but that part which remains of the surface flux of the Prevost type, I(0) = eps*sigma*T^4. Our different interpretations are the reason for this misunderstanding. You think in terms of Prevost fluxes up and down; I make my reasoning according to the generalized Stefan-Boltzmann equation as explained in my reference [3]. I should have interpreted your I(z) in the way you use it. Sorry for that.

What I discovered from our discussion is that the Schwarzschild approach with the Beer-Lambert relation is in fact the same as my chicken-wire approach, [1] and [2]. The Beer-Lambert relation gives the distribution of the absorbers based on a geometrical cross-section. Discretized by means of nodes in the z-direction, it represents the distribution, and the thickness of the wires represents the absorption coefficient f. No further physics involved.

The quantification of the emission = absorption is described by the Planck function in the Schwarzschild approach: exchange of heat between nodes: sum(B(Ti) - B(Tj)). However, it is not written down explicitly; SoD starts from the Prevost upward flux Iup(0), and calculates the part of that Prevost surface flux which remains at position z, using the geometrical absorber distribution and Planck absorption (= emission):

Iup(z) = Iup(0) - function1(B)

And the same for Idown(z) = Idown(TOA) + function2(B). In fact the exchange of heat becomes in some way: function(Bi - Bj).

In the chicken-wire model the exchange of heat between stations i and j is calculated immediately by sum(fi*fj*(Ti^4 - Tj^4)). The CPU consumption in the SoD code is reported to be important; the chicken-wire model is very fast: on my laptop the results are immediate. The SoD code of course uses a lot of CPU extracting data from HITRAN. But the SoD code is also not very strict in saving work. In fact for Iup and Idown the contribution of the Planck function B is the same (it is written also in a comment that they are the same, but it has been programmed twice in order to change it in the future).

My suggestion to SoD is to look for the similarity in the variation of Iup and Idown, and indeed look for pairs in the process such that somewhere B(Ti) - B(Tj) appears, so that for Ti = Tj there is no exchange of heat, according to the second law.

CONCLUSION

I will change the title of the paper about the Beer-Lambert. It is indeed the same as my chicken-wire representation. The Schwarzschild equation was a nice way in 1905 to attack the problem by means of an analytical solution. The integral in that solution had to be determined by numerical methods, but you can play with it, for teaching purposes, by inserting functions B which can be integrated analytically. The introduction of auxiliary variables Iup and Idown is not necessary anymore in 2013. We can write down immediately the discretized equations for the heat balance in a node. When there is a discrepancy in a node in the radiation heat balance we know it is from convection, [2] and [3].

In September I will publish the MATLAB program (2 A4) with the overlaying finite elements [2].

on August 23, 2013 at 12:56 pm | Pekka Pirilä: JWR,

The Matlab code of SoD described in

https://scienceofdoom.com/2013/01/10/visualizing-atmospheric-radiation-part-five-the-code/

and discussed in the whole series of postings is essentially written to perform the kind of calculation you are planning to present. It may be a little more complex as it takes into account the detailed absorption spectrum of all atmospheric gases that absorb and emit IR at a significant level. The basic idea is, however, the same.

on September 13, 2013 at 7:26 pm | JWR: @SoD

I have used your explanation concerning Beer-Lambert to correct my earlier paper. Thank you once more.

http://www.tech-know-group.com/papers/planckabsorption.pdf

In the above link I have studied in more detail your equations and your code.

I repeated your analysis for the fictive upward flux, and I added a similar one for the fictive downward flux.

I think I have detected an error, in the sense that two different tau distributions should be used, one for the upward flux and another for the downward flux. The error is easy to correct, as I have indicated in the paper.

I have also included an appendix, in which for a planet with an atmosphere modelled with two screens, the two-way Schwarzschild formulation and the one-way finite element formulation indeed give the same equations.

The conclusion is simple: back-radiation is not a physical phenomenon and the surface is not radiating very much, apart from the window.

Convection of sensible and latent heat to higher levels is the mechanism.

At higher levels IR-sensitive molecules emit the heat to outer space.

on September 13, 2013 at 9:05 pm | DeWitt Payne: JWR,

If that’s the case, then please explain to me how an IR thermometer actually works and why when you point it at the sky, you measure an apparent temperature.

Also, convection from the surface to the atmosphere is mostly latent heat transfer from the evaporation and condensation of water. But the scale height of water vapor in the atmosphere is only 2 km compared to the scale height of all the other gases of 8 km. There simply isn’t enough physical circulation of the atmosphere present to transport the quantities of energy involved.

on August 20, 2013 at 5:56 pm | JWR: correction

dI_lamda = -N*alfa_lambda*I_lamda*dz

on August 20, 2013 at 6:16 pm | JWR: @SoD

Take your time.

For me it is now Tuesday 20h00

I am leaving for a family trip early Thursday morning.

I will be back on September 10.

I take my laptop with me but I don’t know whether I find time to answer your eventual comments.

on September 17, 2013 at 5:28 pm | JWR: @DeWitt

Usually I write “back-radiation of heat” does not exist.

I forgot to add “of heat” in the answer to SoD.

In my papers, in particular ref [3], I give the generalized Stefan-Boltzmann equation, which gives the heat flow between two surfaces with temperatures and emission coefficients (T1, eps1) and (T2, eps2) respectively:

q = eps12*sigma*(T1^4 - T2^4)

1/eps12 = 1/eps1 + 1/eps2 - 1

In plain words: two remote surfaces exchange information with each other on their temperatures and emissivities, on the basis of which a heat flow passes from the warmer surface to the colder.

There is no back-radiation of heat from the colder surface to the warmer.

See also the examples given in my ref [3].

The pyrgeometers use above equation.

Surface 1 is the sensor surface of the pyrgeometer,

with known temperature T1, known eps1, known sigma,

known electrical input q.

Unknowns are the data to be measured of a remote surface: eps2 and T2.

The manufacturers are clever enough to include e.g. an eps2 in the chip to measure the remote T2, or they make two measurements with a different set (T1, eps1, q).

That is the choice of the different manufacturers.

As concerns your remark that there is not enough convection available to evacuate 109 W/m^2: that remark is addressed to all the others, who work with back-radiation of heat. I only find from my analyses that there is no back-radiation of heat, and that the LW emission is much smaller. In fact the K&T diagrams of NASA say the same.

on September 17, 2013 at 10:11 pm | Pekka Pirilä: JWR,

In the spirit of classical thermodynamics your way of describing radiative heat transfer makes sense, but that’s not a valid argument against anyone who prefers to consider separately radiation from body A to body B and from B to A. Both approaches are correct when used systematically. Thus you are justified in saying that you can describe these processes without back-radiation, but you are wrong in saying that back-radiation does not exist even if you add “of heat” to that.

Classical thermodynamics is not the only truth, it's just one way of describing those phenomena that it can describe. It cannot describe all important phenomena; therefore other theories are in many ways better.

It's easier to do the calculations treating the back-radiation as another subprocess. It's not necessary to know where the radiation is going to calculate the emission from a surface. This is a huge advantage in practice, and not the only advantage. Therefore no working scientist follows your approach.
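[Editorial aside] Pekka's point, that the two bookkeepings agree, can be checked for two infinite parallel plates: summing the two directed radiation streams, including all multiple reflections, reproduces the net-exchange formula JWR quotes. The temperatures and emissivities below are arbitrary:

```python
sigma = 5.670e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)
T1, T2 = 300.0, 280.0      # arbitrary plate temperatures (K)
e1, e2 = 0.9, 0.7          # arbitrary emissivities

E1, E2 = sigma * T1**4, sigma * T2**4   # blackbody emissive powers

# "Stream" bookkeeping: radiosities J1, J2 include emission plus all
# multiple reflections; iterate to the fixed point.
J1 = J2 = 0.0
for _ in range(200):
    J1 = e1 * E1 + (1 - e1) * J2
    J2 = e2 * E2 + (1 - e2) * J1

net_streams = J1 - J2      # forward stream minus the "back-radiation" stream

# "Net-exchange" bookkeeping: the formula JWR quotes
net_formula = sigma * (T1**4 - T2**4) / (1 / e1 + 1 / e2 - 1)

print(net_streams, net_formula)   # the two agree
```

The "back-radiation" stream J2 is nonzero and measurable (it is what a pyrgeometer reads), yet the net heat flow still runs from hot to cold; both statements are true at once.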

on September 20, 2013 at 1:07 pm | DeWitt Payne: JWR,

So all that money spent on the SURFRAD ( http://www.esrl.noaa.gov/gmd/grad/surfrad/ ) network is wasted? They aren’t really measuring anything?

But forget pyrgeometers and absolute cavity radiometers. How about an FT-IR spectrophotometer such as this one: http://www.arm.gov/instruments/aeri ? How come you can point one at the sky and record a spectrum if there is no such thing as back radiation? Even better, you can calculate that spectrum using a line-by-line radiative transfer program with the HITRAN molecular spectroscopy database and have the measured and calculated spectra agree within a few percent.

on September 20, 2013 at 8:20 pm | DeWitt Payne: JWR,

Something else you might want to consider is the Rosseland diffusion approximation for radiative energy transfer in optically thick media. It’s used to model energy transfer in solar atmospheres.

on September 21, 2013 at 10:14 am | scienceofdoom: JWR from 2013/09/13 at 7:26 pm:

I had a look at your paper.

This, as in the previous version of your paper, highlights a question. The answer to this basic question is not clear from your paper (at least not to me) so I hope you can resolve my confusion via this simple thought experiment.

We have a box containing a gas, and from left to right the transmissivity at wavelength λ is 0.5. Scattering is negligible at this wavelength.

1. I propose, therefore, that this box has a transmissivity from right to left of 0.5 at this same wavelength.

– True or False?

2. When we shine a beam from left to right, I(lr)_λ, the absorption of this beam is 50%, the transmission is 50% (zero reflection).

– True or False?

3. With conditions 1 & 2 in place, we now shine a beam, of the same wavelength, from right to left, I(rl)_λ. The absorption of this beam is 50%, the transmission is 50%.

– True or False?

4. As a result of condition 3, the total absorption is double the absorption as a result of condition 2.

– True or False?

on September 21, 2013 at 4:27 pm | JWR: SoD

I made an update of the paper with an appendix 1, in which I compare analytically, for an atmosphere consisting of two layers, the two-way Schwarzschild formulation with the one-way finite element formulation.

I address there the absorption of the flux.

In order to get a real flux I have to subtract: U-D.

I think I have to subtract the downward absorption from the upward absorption! The upward flux U is too high, by an amount D, and the absorption due to D has to be subtracted in order to get the real absorption of (U-D).

At least I go in the direction of the one-way absorption from the FE formulation.

Your box thought experiment is therefore more or less treated in the appendix of the version of 19 September.

http://www.tech-know-group.com/papers/planckabsorption.pdf

on September 22, 2013 at 8:42 pm | scienceofdoom: JWR,

Perhaps it’s only clear to you, but Appendix 1 does not answer the above questions.

It raises more questions.

5. (continuing my numbering from earlier) Is Appendix 1 the basis of the paper (the premise), or does it follow from the paper?

6. a) You list, in Appendix 1 in figure 1: "Imaginary downward flux" and "Imaginary upward flux". Do you think flux is not a real physical property?

b) You then state, in Appendix 1: "This OLR is in fact the only real value in the Schwarzschild formulation." However you have just claimed it. How did OLR go from "Imaginary upward flux", whatever that is, to being a real physical property, while "Imaginary downward flux", whatever that is, is treated as still imaginary at the end?

You have a curious statement: "U3 [equation] ..real value for U3 since D3=0". What is the physics basis for this claim?

And just to help newcomers who might have reviewed your paper: one of the terms in the imaginary upward flux (then real OLR) is f3.θ3. One of the terms in the imaginary downward flux (then still imaginary back radiation) is also f3.θ3. Both are the emission of radiation from node 3 in this simplified atmosphere. One is the emission upward from the node, one is the emission downward from the node.

c) Are both real, both imaginary, or somehow does the node radiate something real upwards and imaginary downwards?

When claiming a serious attempt at overturning basic physics it is important to be clear and precise about the basis for each step in the formulation.

And this is why I asked my questions 1-4 on September 21, 2013 at 10:14 am – because I need to understand what you believe about emission and absorption of thermal radiation.

7. You state:

"Absorption remains a problem anyhow with the Schwarzschild formulation."

– what do you mean by this and what is the basis for this claim?

And perhaps it is all made clear by the preceding statement:

"The absorption of the downward flux should therefor be subtracted from the upward flux absorption!"

This is completely false.

Absorption of radiation is absorption of radiation. Downward radiation which is absorbed in a body is added to upward radiation which is absorbed in the same body.

[Clarification note added a few minutes later in case my writing wasn't clear: Downward radiation absorbed in a body is added to absorbed upward radiation in the same body.]

This is the essence of the first law of thermodynamics.

It’s pointless explaining what’s wrong with your paper if you don’t agree on basic premises of physics – so please take a few minutes to help me, and others understand – by answering questions 1-4 from earlier, and these questions from today.

on September 23, 2013 at 7:10 am | JWR

Thank you for having studied the appendix in detail.

It is completely my fault if I have not been able to anticipate your questions and remarks.

I will repeat your remarks and questions, and answer them by putting JWR in front and finishing with endJWR. I tried italics and bold and colours but it did not work.

Perhaps it’s only clear to you, but Appendix 1 does not answer the above questions.

It raises more questions.

5 (continuing my numbering from earlier) Is Appendix 1 the basis of the paper (the premise), or does it follow from the paper?

JWR

Appendix 1 is a comparison of an analytical example for 2 layers.

Analytical because for 2 layers we can follow the numerical process and express it in algebraic form.

endJWR

6. a) You list, in Appendix 1 in figure 1: Imaginary downward flux and Imaginary upward flux. Do you think flux is not a real physical property?

JWR

A flux is of course a real quantity.

When I put the adjective imaginary in front of it, it is no longer real.

endJWR

b) You then state, in Appendix 1, This OLR is in fact the only real value in the Schwarzschild formulation. However you have just claimed it. How did OLR go from “Imaginary upward flux” whatever that is, to being a real physical property, while “Imaginary downward flux” whatever that is, is treated as still imaginary at the end?

You have a curious statement: U3 [equation] ..real value for U3 since D3=0. What is the physics basis for this claim?

JWR

The imaginary downward flux is positive when going down.

The imaginary upward flux is positive when going up.

The real flux is therefore U - D.

At the TOA, OLR = U3 - D3 = U3 (since at TOA the incoming D3 = 0).

Indeed the OLR calculated by the two-way Schwarzschild formulation is equal to the OLR calculated by the one-way FE formulation.

endJWR

And just to help newcomers who might have reviewed your paper, one of the terms in the imaginary upward flux (then real OLR) is f3.θ3. One of the terms in the imaginary downward flux (then still imaginary back radiation) is also f3.θ3. Both are the emission of radiation from node 3 in this simplified atmosphere. One is the emission upward from the node, one is the emission downward from the node.

JWR

Our discussion is about the proposal of Prevost, who in 1771 suggested that every surface emits as if it were looking at outer space at zero K.

Fourier at that time did not agree.

We are now 250 years further.

And the discussion still goes on.

You are a follower of Prevost.

I am more inclined to Fourier, by saying that the real emission depends also on the temperature of the surface it is looking at.

If you are interested in the history of back-radiation go to www.tech-know-group.com, where you can find papers of Keespies.

This more or less also answers your question 6c).

endJWR

c) Are both real, both imaginary, or somehow does the node radiate something real upwards and imaginary downwards?

When claiming a serious attempt at overturning basic physics it is important to be clear and precise about the basis for each step in the formulation.

And this is why I asked my questions 1-4 on September 21, 2013 at 10:14 am – because I need to understand what you believe about emission and absorption of thermal radiation.

JWR

See also my answer to 6b)

I believe that the generalized Stefan-Boltzmann equation (10) is the real law for heat exchange between two walls.

The most transparent way is described in my ref [3] where I have used it.

If SB is written as eps1,2*sigma*(T1^4 - T2^4), see also Wikipedia, that relation can only be explained in words by: surfaces at a distance exchange information with each other concerning their temperatures and concerning their surface conditions, on the basis of which a heat flow is established from the warmer surface to the colder one. No heat is going from the colder to the warmer. Only information between the plates, thus also information from the colder surface to the warmer one, but no heat from the colder to the warmer.

As already said in my ref [3] I give the examples from Christiansen 1883, but you can find it in Wikipedia.

endJWR

7. You state:

Absorption remains a problem anyhow with the Schwarzschild formulation.

– what do you mean by this and what is the basis for this claim?

JWR

As said earlier, in order to be able to follow up the numerical processes, in Appendix 1 I give an example of an atmosphere represented by two nodes.

I follow the fluxes in both the one-way finite element formulation and the two-way Schwarzschild formulation.

I compare the quantities U - D, for TOA and for the surface, between the two formulations. OLR at TOA is the same.

At the surface U1 - D1 is nearly the same as qs in the one-way FE formulation, but a factor f1 makes a slight difference (for f1 ≠ 1).

I also compare the absorption in the two formulations, and I see differences.

(I jump now to your next statement)

endJWR

And perhaps it is all made clear by the preceding statement:

The absorption of the downward flux should therefor be subtracted from the upward flux absorption!

This is completely false.

Absorption of radiation is absorption of radiation. Downward radiation which is absorbed in a body is added to upward radiation which is absorbed in the same body.

This is the essence of the first law of thermodynamics.

It’s pointless explaining what’s wrong with your paper if you don’t agree on basic premises of physics – so please take a few minutes to help me, and others understand – by answering questions 1-4 from earlier, and these questions from today.

JWR

Now we start to have more serious different interpretations.

I say explicitly that in order that the absorption of the two-way formulation goes in the direction of the absorption of the one-way formulation, I have to subtract the absorption of the downward flux D in formulation 2way from the absorption of the upward flux from formulation 2way.

To me also, at first sight, that looked strange!

But it is not; the upward flux is too high, i.e. it is higher than the real flux, because we have to subtract the downward flux: real flux = U - D.

In other words the absorption of the upward flux U is too high!

When I subtract the absorption of the downward flux D, then the difference of the two absorptions is closer to the one of U - D.

And indeed I get the same expression as for the FE formulation , except of course the difference due to the imaginary back-radiation.

And – strangely enough – the back-radiation flux of the two-way formulation appears in the analytical expression for total absorption by means of the one-way FE formulation!

You say that it is the essence of the first law!

But the splitting up of the real flux in U and in D is not very physical.

It is more a mathematical trick to be able to write an analytical solution.

That is the reason I call them imaginary fluxes; only their difference can give real physical values.

Or sometimes I also call them emissions of the Prevost type, which are only real when the surface emits to outer space at zero K.

I am not going to repeat here the paper.

I hope that I have given now enough arguments and explanations for a better understanding of my views.

And SoD, I want to thank you once more for having explained to me that I have to interpret Beer-Lambert in the sense of the two-way formulation.

on September 23, 2013 at 12:24 pm | Pekka Pirilä

All simple equations like the Stefan-Boltzmann law, Beer-Lambert law and Schwarzschild formula are very limited in their applicability. They are true only when certain assumptions are true, but those assumptions are seldom strictly true. All these equations can be derived from more detailed micro level physical theory. From the micro level physical theory, it’s also possible to derive the limits of applicability of these equations.

Classical Thermodynamics is also an extremely limited theory. It was conceived at a time when microscopic understanding of physics didn’t exist. At the time it was not possible to study how heat is transferred. Classical Thermodynamics is an abstract mathematical theory that abstracts and summarizes a number of physical realities, and is extremely successful in that. Due to the abstraction inherent in Classical Thermodynamics, the concept of heat was defined stating that it can be observed only as the balancing energy transferred between bodies needed to satisfy the First Law of Thermodynamics.

In the very limited approach of Classical Thermodynamics, it’s correct to say that only the net heat transfer is real. Dividing that as heat flowing from A to B and at the same time from B to A is contrary to the rules of abstraction of Classical Thermodynamics. In that limited world it does not make sense to say that radiation can transfer heat simultaneously in both directions.

Present day understanding of physics is not restricted by the abstractions of Classical Thermodynamics. Now we do understand that radiation goes both ways, and that radiation transfers energy simultaneously in both directions. Both fluxes are real, and neither imaginary. For black and gray bodies of uniform temperature we can use the (generalized) Stefan-Boltzmann law to calculate the net heat transfer, but for heat transfer in atmosphere that’s not possible. For atmosphere we must use more detailed theories of interaction of radiation with matter. The theory must take into account the emissivity/absorptivity of the molecules of the gas. It must also be applicable for a material where the temperature changes continuously with altitude.

When we want to learn about the radiative heat transfer in atmosphere we must forget the Stefan-Boltzmann law. Beer-Lambert law is also applicable only with large reservations. The Schwarzschild formulation is more useful, but only for each wavelength separately. Dividing radiation to upward and downward is a possibility and may lead to reasonably accurate results, but a fully correct calculation requires that all directions are considered separately.
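Pekka’s point that the Schwarzschild formulation is useful “only for each wavelength separately” can be illustrated with a minimal upward pass through a layered column. The layer temperatures and optical thicknesses below are made-up illustrative values, not a real atmospheric profile:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam, temp):
    """Planck spectral radiance per unit wavelength, W/(m^2 sr m)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * temp))

def upward_radiance(lam, surface_temp, layer_temps, layer_taus):
    """One upward pass of the monochromatic Schwarzschild equation.

    Each layer transmits a fraction exp(-tau) of the incoming radiance
    and adds its own emission B(T) * (1 - exp(-tau)) at this wavelength.
    """
    radiance = planck(lam, surface_temp)
    for temp, tau in zip(layer_temps, layer_taus):
        trans = math.exp(-tau)
        radiance = radiance * trans + planck(lam, temp) * (1.0 - trans)
    return radiance

# Illustrative 3-layer column that cools with height.
lam = 15e-6   # 15 micron, in the CO2 band
toa = upward_radiance(lam, 288.0, [270.0, 250.0, 230.0], [0.5, 0.5, 0.5])
surface = planck(lam, 288.0)
```

With the layer optical depths set to zero the surface radiance passes through unchanged; with absorbing, colder layers the radiance reaching TOA at this wavelength is reduced, which is the monochromatic picture of the greenhouse effect.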

on September 23, 2013 at 11:13 pm | scienceofdoom

JWR,

You say:

“You are a follower of Prevost. I am more inclined to Fourier…”

Science is not about arcane historical disputes and our difference of opinion is not about being a follower of someone in the 1700s.

This blog states in the Etiquette:

“This blog accepts the standard field of physics as proven. Arguments which depend on overturning standard physics … are not interesting until such time as a significant part of the physics world has accepted that there is some merit to them.”

The reason for this statement is to avoid lengthy disputes over basic science.

The last 100 years of physics uncontroversially accepts the reality of thermal emission of radiation. This emission is dependent on temperature and material properties of the body radiating. It is not dependent on the temperature of any body towards which it is radiating.

If you can find a few undergraduate physics textbooks from the last few decades that agree with your understanding of radiation, please provide the details.

The confusion of your approach would be clearer to you and to others if you answered the specific questions 1-4 already raised. If you answer “True” to questions 3 and 4, then it contradicts your lengthy explanation of atmospheric radiation.

It seems that you think – just having a stab at this myself because you haven’t stated your answers – that the answer to item 3 is -50% and the answer to item 4 is not double, but zero. Or maybe the proposed answer is that when we turn on the second source of radiation (from right to left) there is no radiation any more, because radiation works differently from how it’s described in all physics textbooks.

This (suggested) “result” is contradicted by all experimental evidence.

—

Note: You can see how to format comments in Comments and Moderation.

on September 24, 2013 at 1:14 pm | DeWitt Payne

JWR,

Only in your mind and those others who refuse to accept modern physics. That discussion was settled definitively when Planck formulated his equation for the spectrum of black body radiation. There is no term in that equation that is a function of the temperature of any other surface. You have no physical mechanism of black body emission to support Fourier.

You and Claes Johnson should get together, just not here.

on December 9, 2013 at 8:40 pm | JWR

@SoD

I have now updated my earlier paper.

In appendix 2 but more in appendix 3, I compare the results of the Schwarzschild procedure as I found in your “equations” and the “code” with the one-way FEM model of a stack.

I can’t find your results, so I programmed the original Schwarzschild procedure (according to your equations and code) and I find different results.

It is not due to the numerics, in fact I added two numerical procedures and they give the same results as the original ones: it is the Schwarzschild procedure.

I modified the Schwarzschild procedure with two ameliorations and the Schwarzschild results come closer to the FEM results.

Conclusion.

It has nothing to do with quantum mechanics; I use the work of Christiansen from 1883.

I find a solution for the 240 W/m^2 for ftot = 0.86 (optical thickness).

The atmospheric window factor 1 - ftot = 0.14, which gives qwindow = 53.

I do not see any results from your code.

Where can I find them?

I have included the MATLAB program with the “green” comments.

http://www.tech-know-group.com/papers/planckabsorption.pdf

on December 11, 2013 at 7:02 am | scienceofdoom

JWR,

Many results from my code are in Visualizing Atmospheric Radiation – Part Two and subsequent articles in that series.

Your old paper is no longer available. Can you provide the old one along with a list of what specifically has changed (deletions, additions and changed equations)?

In my comments of September 22, 2013 at 8:42 pm I asked 4 basic questions because your apparent assumptions about basic physics were wrong.

Can you please confirm what those answers are?

Definitions

When I start reading your revised version it is clearly not precise in many important areas. This makes it less interesting to try and follow all the way through, especially without the ability to see what old mistakes have been corrected and what new mistakes have been added (from the last version).

For example:

1. The physical properties (U_{λ}, D_{λ}) you are discussing here are not flux. Flux is the spectral intensity integrated over all wavelengths and directions in the direction normal to the surface. This is measured in W/m^{2}. The property you are describing is the spectral hemispheric emissive power, which is in W/(m^{2}.μm).

Often when people use terminology in non-precise ways I don’t try and pick it up, and many times I try not to use strict terminology so the blog is clear and readable to non-scientific people.

However, given that you are trying to overturn basic physics, it is important to be precise.

2. What is “fictive” about radiation? When you write fictive it means it doesn’t exist.

You could write “hypothetical” if you are unsure about an idea.

From recollection your earlier paper switched properties from fictive to non-existent and others from fictive to real and measurable with no explanation as to why, or how it was determined that a given property was real or not real.

3. Is this the premise? Or the conclusion?

If it is the premise then you need to establish it. Claiming it doesn’t make it true.

If it is the conclusion then you need to demonstrate it, and it would be appropriate to introduce it to the reader as “the conclusion that will be proven via the following equations … or pages”.

4. This is the Planck function for emission. This is not a function for absorption.

The material properties you might be thinking of are emissivity and absorptivity. Emissivity = Absorptivity for the same wavelength for a given material.

5. It will be easy to make mistakes if you confuse wavelength and wavenumber. You might be doing that in your paper.

In your parameters you are apparently using wavenumber, e.g.:

Spectral emissive power is usually written like this when using wavenumber, although in these units “cm” should be cm^{-1}. Possibly you are using wavelength measured in cm?

Planck’s law for emission of radiation is a completely different formula if it is in terms of wavelength, λ, compared with wavenumber, ν, so you need to ensure you have the right formula.
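The consistency between the two forms can be checked numerically. In the sketch below (SI units throughout, so wavenumber is in m^-1 rather than the spectroscopist’s cm^-1), the two formulas give the same radiance once the Jacobian λ² of the change of variable ν = 1/λ is included:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_wavelength(lam, temp):
    """B_lambda: spectral radiance per unit wavelength, W/(m^2 sr m)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * temp))

def planck_wavenumber(nu, temp):
    """B_nu: spectral radiance per unit wavenumber (nu in m^-1)."""
    return (2.0 * H * C**2 * nu**3) / math.expm1(H * C * nu / (K * temp))

# Same radiance in any band: B_lambda dlambda = B_nu dnu with nu = 1/lambda,
# so B_nu = lambda^2 * B_lambda -- the formulas differ, the physics agrees.
lam, temp = 15e-6, 288.0
b_lam = planck_wavelength(lam, temp)
b_nu = planck_wavenumber(1.0 / lam, temp)
```

This is exactly why using the wavelength formula with a wavenumber argument (or cm instead of cm^-1) gives wildly wrong spectral values.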

6. … and the integral of the three are in each case the fluxes, not the total intensity. It’s possible to get everything right with the maths, while using the wrong terminology (even though it will confuse readers). I’m just commenting as I go.

Ditching the Schwarzschild Equation before Using It?

7. Then you switch to a new approach – a “more modern technique” as you describe it – without explaining how you can make it work, or its relationship to the previous equations.

Firstly, you have introduced exchange of radiation between two surfaces with no absorption by the intervening atmosphere. Secondly, you have introduced a general emissivity term which is not wavelength dependent.

The only way to solve the problem of radiative transfer in the atmosphere, where the gases have strongly varying absorptivity/emissivity with wavelength, is to keep track of the spectral power at each wavelength and each height in the atmosphere.

So you created the Schwarzschild equation but then ditched it before solving it and instead used the aggregated formula for exchange of radiation between two surfaces?

Then having not used the equations you derived you decide that your result proves those equations give the wrong result??

I have to admit to being lost at this point, but equations 6, 9, 10 and following are not relevant to solving the problem.

To be clear, they are not capable of solving the problem at hand.

Have you used the HITRAN database to get the spectral dependency of absorptivity/emissivity for each gas and then used the concentration of each gas with height?

At this point I gave up trying to understand your approach. I suspect you have created equation soufflé with no validity in physics.

By the way, if you take a look at Part Twelve of the above linked series you can see “heating” curves for the atmosphere:

which contains the enigmatic statement underneath:

“Notice that the heating rate is mostly negative, so the atmosphere is cooling via radiation.”

on December 11, 2013 at 3:16 pm | DeWitt Payne

A concept which G&T fail to grasp when they condemn energy balance diagrams as scientific fraud.

on December 11, 2013 at 10:48 am | scienceofdoom

JWR,

I read a bit further on in what is clearly a fantasy paper:

[Bold emphasis added].

Readers with zero physics knowledge might go along with JWR.

Anyone with a passing familiarity with the theory of radiation & heat transfer will know this is “invented physics”.

The reason this is against “SoD etiquette” is because inventing basic physics is not an interesting subject for discussion at this blog.

It is a personal preference of the blog owner to discuss Climate Science within the Frame of Physics, rather than to discuss Climate Science within the Frame of Fantasy Physics & The Easter Bunny.

Anyway, once again, in a futile attempt, I ask JWR to produce a physics textbook where his inspiration is accepted.

As I requested on September 23, 2013 at 11:13 pm:

As any readers who review his paper can see, he has produced equations for radiative transfer as a function of wavelength (7) & (8), and then for flux exchange between two bodies (9) and then, without reference to any textbooks, papers, or experimental work to support his assertion, simply determined that flux from a surface is a completely different value from those equations. That is, he has derived equations and then claimed they are wrong.

This is called, in technical terms: making up random stuff and asking people to believe in it because it sounds nice.

Well, don’t forget the Easter Bunny.

on December 11, 2013 at 11:47 am | Pekka Pirilä

The idea that downwelling radiation would not be real, and that there might instead be an effect that reduces the upwards radiation, is a rather common one. There seem to be two separate origins for this kind of thinking.

1) Classical thermodynamics and definition of heat as a net effect with the additional notion that this is the full definition, and that it’s not possible to divide it to further parts.

Using this argument is based on the fallacy that Classical Thermodynamics would present a comprehensive description of physics for the related phenomena. The truth is, of course, that present physics covers very much that Classical Thermodynamics cannot even discuss and that one physical theory cannot set limits for what other theories can describe.

2) Theory of electromagnetic radiation. Studying electromagnetic radiation on the basis of Maxwell’s equations, the approach is built on electric and magnetic fields that form a totality where the Poynting vector provides a single (net) directional energy flux density at every point. This observation seems to be the basis for the more sophisticated arguments for the idea in the first paragraph of this comment.

We know that photons are also electromagnetic radiation. Thus it’s natural to think that the energy fluxes related to photons should be understood based on the Poynting vector. The photons cannot, however, be understood from Maxwell’s theory alone, but are fundamentally quantum mechanical objects. The theory of radiation before quantum mechanics could not be made self-consistent, but led to the ultraviolet catastrophe. Planck invented a trick to get the correct radiation formula, but could not explain it from fundamentals, because only QM has been able to provide the needed fundamentals. That required the use of a method called second quantization, where particles (photons) can be created and destroyed. This approach was refined and made (essentially) self-consistent by Quantum Electrodynamics (QED).

In QED it’s seen that photons interact very weakly with each other. Therefore every single photon behaves as if it were the only photon. That makes every photon equally real. Some of them go down and some up. All of them must be considered, and they cannot cancel each other. Each of them has its own Poynting vector. The electromagnetic fields or Poynting vectors cannot be summed up, only the energy fluxes that result from them. Mathematically that’s true because the quantum mechanical phases are uncorrelated (incoherent).

on December 12, 2013 at 5:41 pm | Bryan

SoD,

As Pekka says

“Classical thermodynamics and definition of heat as a net effect with the additional notion that this is the full definition, and that it’s not possible to divide it to further parts.”

The greenhouse theory in a nutshell is: solar radiation (UV => IR) makes it easily through the atmosphere, but the long wave IR from the surface is absorbed by greenhouse gases and partially radiated back to the surface, warming it up in the process.

There is little controversy about the solar radiation effects.

However, since the IR radiation and its interactions can be explained by classical physics, an approach using Maxwell’s Equations and Gibbs Thermodynamics should be valid.

That as I understand it is the approach adopted by JWR.

He introduced his paper in a Tallbloke thread.

Tim Folkers went through the numbers and found to his surprise that they were in the ‘right ball park.’

So was your remark, “Well, don’t forget the Easter Bunny”, really appropriate?

Perhaps you were having a ‘DeWitt Payne’ moment.

on December 12, 2013 at 8:43 pm | DeWitt Payne

Bryan,

Please cite a reference that calculates the IR absorption spectrum of CO2 using only classical physics.

on December 13, 2013 at 3:40 pm | Bryan

DeWitt and Pekka,

The classical physics approach would be via bulk thermodynamic quantities such as the specific heat of individual gases.

For instance, in the range 250 K to 350 K the specific heat of CO2 increases by 13%; this reflects the increasing radiative activity.

Whereas the specific heat of N2 is almost constant over this range.

There is no need to look at individual wavelengths to investigate heat transport in the troposphere.

The radiative properties of the molecules are naturally included in the bulk quantities.

on December 13, 2013 at 4:28 pm | Pekka Pirilä

Bryan,

Changes in the specific heat of CO2 do not affect anything in the atmosphere significantly. That’s a really insignificant change, and it is not directly connected to the GH effect. Both the changes in specific heat and the GHG properties of CO2 are due to the quantum mechanical properties of CO2 molecules. Neither can be explained without them. They are linked through this, but adding the change of specific heat to the classical thermodynamic description of the atmosphere tells nothing about the GH effect.

To understand what happens it’s absolutely necessary to consider also the wavelength dependence of absorption and emission of IR.

The radiative properties of molecules are not included in any classical description of bulk properties.

It seems that you erred fully on every count of your comment.

on December 13, 2013 at 5:58 pm | Bryan

Pekka, you say

“That’s a really insignificant change, and is not directly connected to the GH effect”

But then perhaps there is no greenhouse effect.

How would you account for the increased Specific Heat of CO2 other than by increasing vibrational modes?

Other bulk quantities include the transport coefficients.

The thermodynamic effect of an object radiating on another object has already been taken into account by the coefficient of thermal conductivity which, despite its name, measures all kinds of diffusive heat transport including radiation.

G&T write the following in their first falsification paper:

“A physicist starts his analysis of the problem by pointing his attention to two fundamental thermodynamic properties, namely the thermal conductivity, a property that determines how much heat per time unit and temperature difference flows in a medium;”

In their reply to Halpern et al. they write:

“Speculations that consider the conjectured atmospheric CO2 greenhouse effect as an “obstruction to cooling” disregard the fact that in a volume the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients.”

If you want to know the thermodynamic effect of doubling the CO2 concentration you only need to measure the changes in the transport coefficients. These changes will of course be unmeasurable (although there is probably some tiny factual difference).

No need for any redundant radiative transfer calculations.

on December 13, 2013 at 6:15 pm | Pekka Pirilä

Bryan,

That you present an erroneous claim does not make GHE any less likely.

As I wrote explicitly: the same QM phenomena are behind both, but as I also said, you cannot pick one of the consequences and plant it into classical thermodynamics and make it give the right results.

Thermal conductivity does not normally include radiative effects. The radiative heat transfer cannot be described fully by the equations of thermal conductivity, and exactly this deviation is crucial for GHE.

G&T write so much rubbish, that I don’t usually comment on them. What they say about what a physicist does first proves absolutely nothing. Their answer to Halpern is simply untrue.

The GHE is finally determined by outgoing radiation at TOA. Transport coefficients don’t tell anything on this most important factor.

on December 14, 2013 at 11:25 am | Bryan

Pekka,

This total conviction you have in the physics behind the greenhouse gas theory is not universally shared.

Professor Mehmet Kanoglu indicates three stages of interaction of IR active gases in a radiation field.

1. Low temperature – non participating

2. Moderate temperatures – participate as absorber

3. High temperature (furnaces) – participate as absorber and emitter.

Page 35, Heat and Mass Transfer, Chapter 13, Fourth Edition (2011).

Professor Alfred Schack, the author of a standard textbook on industrial heat transfer, was the first scientist to point out, in the 1920s, that the infrared light absorbing fire gas components carbon dioxide (CO2) and water vapour (H2O) may be responsible for a higher heat transfer in the combustion chamber at high burning temperatures through an increased emission in the infrared.

He estimated the emissions by measuring the spectral absorption capacity of carbon dioxide and water vapour.

In the year 1972 Schack published a paper in Physikalische Blatter entitled

“The influence of the carbon dioxide content of the air on the world’s climate”.

With his article he got involved in the climate discussion and emphasized the important role of water vapour.

Schack discussed the CO2 contribution only under the aspect that CO2 acts as an absorbent medium.

Schack decided not to use the radiation transport calculations.

G&T agree with this decision on the basis that the transport formulas were derived for stars and cannot simply be applied to the Earth’s atmosphere, due to several physical differences.

Page 72 of their falsification paper.

So what about the “bite” around 15 μm in SoD’s post below, perhaps you will say?

It’s my opinion that emission energy leakage to the much more probable lower energy H2O bands is responsible.

Notice that the graph shows that observed value is higher than the theory value.

So, back to my original point.

Given that CO2 does not seem to track with climate temperatures, a little soul-searching might be appropriate from climate science.

Perhaps going back again to classical thermodynamics, as in the case of JWR, might not be such a bad idea.

on December 14, 2013 at 12:22 pm | Pekka Pirilä

Bryan,

Everything I have written in these comments is standard content of textbook physics and is universally shared by physicists who have studied these issues enough to have a personal view. Being universally shared does not mean that it would be impossible to find a few individuals who declare themselves physicists and who do not share these views.

Here we are not discussing 98% vs 2%, but rather something like 99.99% vs 0.01%. How can I be so sure? Because that is true for everything contained in physics textbooks of the level needed here. There are questions where physics does not give equally certain answers. There are many such questions in atmospheric physics as well, but nothing that we have discussed here depends on those less certain areas.

What you tell as counter-evidence is either written for some other purpose and valid only under assumptions accurate enough for that purpose, or examples of the 0.01% that I mentioned above.

We are clearly moving towards issues that SoD has declared as being excluded from this site. He writes:

This blog accepts the standard field of physics as proven. Arguments which depend on overturning standard physics, e.g. disproving quantum mechanics, are not interesting until such time as a significant part of the physics world has accepted that there is some merit to them.

Claiming that modern physics should be excluded and QM dismissed in favor of Classical Thermodynamics alone is against his policy. Therefore I refrain from discussing such points, but I may continue to explain, if relevant questions are presented, the way QM operates here.

on December 14, 2013 at 3:27 pm | DeWitt Payne

Pekka,

I would go further and say that the thermal conductivity of a gas at normal temperatures never includes radiative absorption/emission. The standard method for measuring thermal conductivity is by determining the reduction in temperature (increase in resistance) of an electrically heated thin wire caused by the presence of a gas at different temperatures and pressures. Using a transient technique to eliminate error caused by induced convection means that the thermal diffusion boundary layer is so thin that any absorption/emission is infinitesimal. And, of course, optically transparent gases like helium and the other noble gases do have thermal conductivity.

In meteorology, the thermal conductivity of the atmosphere is considered small enough to be ignored, except at the surface where the thermal gradient can be high enough that the flux from conduction is significant. Even then, most heat transfer from the surface to the atmosphere is latent rather than sensible.

In engineering calculations, radiative heat transfer is calculated separately from conductive/convective heat transfer. Rather than using the radiative transfer equation directly, graphs, tables or fitted equations of total emissivity have been constructed originally by measurement but now by using this procedure for a band model. The total emissivity is calculated at different partial pressures and path lengths such that the product of path length and partial pressure is constant and extrapolated to zero partial pressure/infinite path length. The individual lines on the graph are for different amounts of CO2 for a unit area expressed as bar cm. A bar cm is the amount of pure gas at one bar pressure that would have a thickness of 1 cm for a given unit area. Converting total emissivity at zero partial pressure to a real situation requires the use of another equation.

Bar cm, by the way is similar to Dobson Units for total column ozone. The Dobson unit is a thickness of 10 μm at 1 atmosphere rather than 1 cm at 1 bar and the path length is the entire atmosphere, ~8 km at 1 atmosphere. So 200 Dobson Units would be 2 mm of ozone in 8 km. Total column CO2 is about 300 bar cm, if I remember correctly. Eyeballing the graph linked above, that would be a total emissivity of ~0.2.
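The unit bookkeeping above can be sketched in a few lines. This is an illustrative calculation, not from the comment: it assumes an effective atmospheric column of ~8 km at 1 atm (as quoted above) and a CO2 mixing ratio of ~390 ppm, roughly the value at the time of writing.

```python
# Column amounts expressed as "atm cm": the thickness the pure gas would
# occupy at 1 atm. Assumes an effective 8 km column at 1 atm (sketch values).
COLUMN_CM = 8.0e5  # ~8 km of atmosphere at 1 atm, in cm

def ppm_to_atm_cm(ppm):
    """Convert a mixing ratio in ppm to a column amount in atm cm."""
    return ppm * 1e-6 * COLUMN_CM

def dobson_to_cm(dobson_units):
    """1 Dobson Unit = 10 um of pure gas at 1 atm = 1e-3 cm."""
    return dobson_units * 1e-3

co2_atm_cm = ppm_to_atm_cm(390)  # ~312 atm cm, close to the ~300 quoted above
ozone_cm = dobson_to_cm(200)     # 200 DU -> 0.2 cm = 2 mm, as in the comment
```

(The comment quotes the column in bar cm; 1 bar and 1 atm differ by ~1%, which is below the precision of these round numbers.)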

on December 14, 2013 at 4:13 pm | Pekka Pirilä

DeWitt,

I don’t go further to the actual subject, but make only one comment.

It’s really a pity that we have all these units. First figuring out what they mean, and then keeping track of the conditions used in defining each of them, is just wasted effort.

Units based on column height have a simple relationship to the total amount of air in a vertical column and concentration expressed in ppm (or ppb), but the exact relationship depends on the pressure (atm or bar) and temperature (0C, 15C, 20C or 25C) used in the definition, and all this essentially for no useful purpose. Standardization takes too long.

on December 14, 2013 at 9:58 pm | DeWitt Payne

Pekka,

It did indeed take me a while to understand the meaning of atm cm, or in the case of MODTRAN, atm cm/km, when I first encountered it, but it is convenient that the individual components add up to a nice round 100 for a cubic meter of air at STP rather than 44.64… moles, which, I think, would be the correct SI unit.

on December 12, 2013 at 8:44 pm | Pekka Pirilä

Bryan,

Interactions of IR radiation with matter cannot be explained by classical physics. Quantum mechanics is essential in that, and even quantum mechanics without Quantum Field Theory has some problems of self-consistency in that.

on December 12, 2013 at 9:17 pm | scienceofdoom

Bryan says:

Was this taught as a method of proving or disproving a theory when you did your physics degree?

I hope so.

Onto serious stuff, can you demonstrate how equations 9-12 can be derived from the correct fundamental equations 7 & 8. (And to make this a super-hard problem, we’ll go “old-school” which means the method of proof “I met a guy down the pub who took a look and reckoned it wasn’t bad” is excluded, valid though it may be in some enlightened circles).

on December 14, 2013 at 9:38 pm | DeWitt Payne

Bryan,

The irony also gets rather thick when you take me to task for citing Denker on thermodynamics while lauding G&T on the greenhouse effect. All Denker wants to do is rationalize the terminology to minimize confusion when teaching the subject, not overthrow the fundamentals.

on December 12, 2013 at 10:03 pm | DeWitt Payne

Bryan,

By the way, given the difficulty Roy Spencer had with creating identically behaved boxes when trying to replicate the Wood experiment, Vaughan Pratt’s observation of the large temperature gradient inside his boxes, making thermometer placing critical, and ‘he who shall not be named for fear of moderation’s’ use of insulation on his box with the IR transparent cover while not insulating the other boxes when ‘replicating’ the Wood experiment, do you still believe that Wood actually used all of his formidable experimental skills when doing his experiment and that it is still definitive?

One of these days, if I can catch up on my DVR backlog and don’t have anything better to do, I’ll finish building what I think will be a better mousetrap, as it were. I’ll put an electric heater on the bottom of the box, rather than using sunlight, and turn it upside down to minimize convection. Then I should even be able to use an uncovered box as well as covers with different spectral properties. I have all the pieces.

From your comment at Roy Spencer’s:

It’s all about differences. DLR does not warm the surface, it raises the surface temperature necessary to maintain the same upward energy flux. The surface inside a glass covered box does not get as cold at night as fast as it does in a box with an IR transparent cover, provided you don’t get dew or frost on the IR transparent cover, drastically increasing IR absorption. The cooling curves are quite different. With an IR transparent cover, the interior surface cools faster than the cover, much like what happens on calm, clear sky winter nights when there is a strong temperature inversion within a few meters of the surface. In a glass covered box, the cover cools faster than the interior surface.

Horace de Saussure in 1767 built a three glass layer hotbox that reached an internal surface temperature of 230F or 383K. That’s equivalent to 1220 W/m² for an emissivity of 1. In my version of de Saussure’s experiment the interior surface reached a temperature of ~410K. Channel 1 is ambient air temperature, 2 is the interior surface temperature and 4, 5 and 6 are the temperatures of the innermost glass cover, the intermediate polycarbonate cover and the outside acrylic cover. That’s equivalent to 1600 W/m² at an emissivity of 1 for the interior surface. That’s far above the ~1000W/m² of solar radiation that reaches the Earth’s surface on a normal day. And that doesn’t include the reflection and absorption losses of incident radiation at each of the three covers. You can’t explain a temperature that high based on just preventing convection.
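The blackbody-equivalent flux figures quoted above can be checked directly from the Stefan-Boltzmann law. A quick numerical check, not part of the original comment:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_flux(T_kelvin, emissivity=1.0):
    """Radiant exitance of a surface at temperature T (Stefan-Boltzmann law)."""
    return emissivity * SIGMA * T_kelvin**4

# 383 K (230 F) -> ~1220 W/m^2 and 410 K -> ~1600 W/m^2, as quoted above.
```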

M. D. H. Jones and A. Henderson-Sellers should be embarrassed to have perpetrated all this on the scientific community by resurrecting Wood’s brief note from its well deserved repose since it was eviscerated by Charles Greeley Abbot shortly after its publication.

on December 14, 2013 at 3:08 pm | Bryan

Pekka,

Nothing I have said contradicts quantum mechanics.

Quantum mechanics is certainly required for many aspects but perhaps long wave infra red is not one of them.

Classical physics did not have the catastrophe until it dealt with the UV.

There is a lot to be said for using the simplest tools at hand.

The passage plans of space probes are worked out using Newtonian Mechanics rather than Relativistic Mechanics.

If, however, asking whether the Schwarzschild equations are appropriate for the Earth’s atmosphere is going beyond the pale, then I will terminate this discussion.

on December 14, 2013 at 3:37 pm | Pekka Pirilä

Bryan,

It has been told to you several times that photons are a QM concept. They do not exist at all without QM. Everything in understanding the interaction of IR with matter depends on QM and photons. If you disagree, you should tell us who has been capable of explaining the IR absorption spectrum of CO2 or any other gas without QM, and where.

You keep on insisting that the analysis should be done without the only tools that are valid for it.

You are contesting basic physics, whether you admit that or not.

on December 12, 2013 at 9:47 pm | scienceofdoom

For non-technical readers who can’t follow maths, I explain briefly the (first) confusion of the paper.

The problem we are trying to solve is calculating radiation absorption and emission in the atmosphere.

Now let’s make our desired outcome very simple, and say that we don’t care what the (upward) spectrum at the top of atmosphere looks like and we don’t care what the (downward) spectrum at the surface looks like. That is, we only want to calculate the upward flux at TOA and the downward flux at the surface. This flux is power per unit area, which is in the units W/m².

The difficulty is that different components of this flux are absorbed at completely different rates by certain gases. So for example, 95% of radiation at 15 μm is absorbed within 1 m (at surface pressure) by CO2. But very little of 9.8 μm is absorbed within even a 12 km path through the troposphere (lower atmosphere) by any gases. And there is everything in between these extremes. Here is an example of transmission through CO2 at different wavelengths:

The only way that I know of to keep track of the intensity of wavelength 12 μm (for example) is by keeping track of the intensity of wavelength 12 μm on its path up through the atmosphere, and separately down through the atmosphere.

This is what equations 7 & 8 actually do. They let you track each wavelength at each height in the atmosphere. Then we say – how much CO2, how much water vapor exists at each height in the atmosphere, and then we can work out (from a database called HITRAN) what the absorption of wavelength 12 μm is through the first km of the atmosphere and the second km through the atmosphere and so on. And we can also work out what the emission of wavelength 12 μm is from each of these layers in the atmosphere.

When we’ve calculated the absorption and emission through the atmosphere we get the values for each wavelength at the top of atmosphere. Then we can sum it all up into the total flux. We also have the added benefit of knowing what the spectrum looks like. For example:
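The layer-by-layer bookkeeping described above can be sketched for a single wavelength. This is an illustrative toy, not SoD’s actual code: the layer temperatures and optical thicknesses below are made-up values, and a real calculation takes per-layer absorption from HITRAN line data.

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # Planck, c, Boltzmann

def planck(wl_um, T):
    """Spectral radiance B(lambda, T) in W/m^2/sr/um (Planck's law)."""
    wl = wl_um * 1e-6
    return (2.0*H*C**2 / wl**5) / math.expm1(H*C / (wl*K*T)) * 1e-6

def upward_intensity(wl_um, layer_temps, layer_taus, T_surface):
    """March one wavelength up through the atmosphere, layer by layer:
    each layer transmits exp(-tau) of what enters it from below and adds
    its own thermal emission (1 - exp(-tau)) * B(lambda, T_layer)."""
    I = planck(wl_um, T_surface)   # surface emitting ~as a blackbody
    for T, tau in zip(layer_temps, layer_taus):
        t = math.exp(-tau)         # layer transmissivity at this wavelength
        I = I * t + (1.0 - t) * planck(wl_um, T)
    return I

# Made-up 4-layer atmosphere, temperature falling with height:
temps = [280.0, 260.0, 240.0, 220.0]
I_window = upward_intensity(9.8, temps, [0.0]*4, 288.0)   # transparent wavelength
I_opaque = upward_intensity(15.0, temps, [5.0]*4, 288.0)  # strongly absorbed
```

At the transparent wavelength the surface radiance emerges unchanged; at the opaque one the emerging radiance is essentially the Planck radiance of the cold topmost layer, which is the shape of the 15 μm "bite" in observed spectra.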

From Theory and Experiment – Atmospheric Radiation

Let’s say instead that we dispense with that convoluted approach and just track total flux.

First problem – in the journey of this flux from the surface up to 1 km, what is the absorption by the atmosphere? Well, we are going to have to calculate a general absorptivity term. It’s one number. We have to do this by averaging the absorptivity over all the wavelengths of interest for all the gases present in their respective concentrations (water vapor, CO2, CH4 etc).

And we can’t just do a “flat average”, we have to weight the averaging process because the intensity of radiation from the surface at 10 μm is much higher than the intensity of radiation at 25 μm.

This is a soluble problem but I can’t see whether JWR has actually done this, or even understands that this has to be done.

Second problem – this one is an insoluble problem, by the way – now we have a flux value for upwards radiation at, let’s say, 5 km. Now we want to work out the “absorptivity” of the atmosphere between 5-6 km. So we can work out the concentration of each gas, work out the total number of molecules, look up the absorptivity at each wavelength for each type (in the HITRAN database) and do some averaging?

No. We can’t.

The reason we can’t is we don’t know what the spectrum looks like so we can’t work out what weights to assign to the different components of absorptivities. Here is an example upwards spectrum at 20km (top graph):

From Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Ten

When we know what the spectrum looks like we can see how to weight the different absorptivities to see the effect on total flux. But when we don’t know, we can’t calculate a “total flux absorptivity”. And that is why we have to keep track of each wavelength and only sum up at the end.
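The weighting problem can be made concrete with a toy calculation. The absorptivities below are invented for illustration, not HITRAN values: the point is that the same set of monochromatic absorptivities averages to a different single number depending on the spectrum doing the weighting.

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # Planck, c, Boltzmann

def planck(wl_um, T):
    """Spectral radiance B(lambda, T) in W/m^2/sr/um (Planck's law)."""
    wl = wl_um * 1e-6
    return (2.0*H*C**2 / wl**5) / math.expm1(H*C / (wl*K*T)) * 1e-6

def weighted_absorptivity(wavelengths_um, absorptivities, T):
    """One-number absorptivity, weighted by a Planck spectrum at temperature T."""
    w = [planck(wl, T) for wl in wavelengths_um]
    return sum(a*b for a, b in zip(absorptivities, w)) / sum(w)

wls = [10.0, 15.0, 25.0]      # sample wavelengths, um
absorp = [0.05, 0.95, 0.40]   # invented monochromatic absorptivities

a_surface = weighted_absorptivity(wls, absorp, 288.0)  # surface-like spectrum
a_aloft = weighted_absorptivity(wls, absorp, 220.0)    # colder spectrum aloft
```

The two averages differ appreciably, so a single “flux absorptivity” computed for the surface spectrum is wrong a few kilometres up – and once absorption has eaten into some bands, the real spectrum is no longer a Planck shape at all, which is the insoluble-problem point above.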

on December 20, 2013 at 5:36 pm | JWR

@SoD

http://www.tech-know-group.com/papers/Planckabsorption.pdf

I have updated the paper.

1) I changed “fictive” to “hypothetical” as you suggested. Thank you.

2) As concerns the textbooks, I refer to my lecture notes of 1953 from the University of Delft, in which it was suggested to follow up the work of Christiansen of 1883 to define the heat exchange by radiation between two surfaces.

Indeed Christiansen used, back in 1883, forward and backward radiation to end up with a converging geometrical series.

You can find it also in Wikipedia, emission, with the same proof.

In my reference [3] I have explained it by means of practical examples.

In fact reference [3] shows how radiation works according to engineers.

3) The stack model is a monochromatic one.

It has been monochromatic from the very beginning.

In my very first paper [1], in which I have written the equations on the basis of a finite difference scheme, I changed the concentration and the distribution of IR-sensitive trace gases in such a way that OLR = 240, as reported by K&T type publications (see [1] to find the references of the papers by which I validated the monochromatic model).

I deduced that ftot = 0.86! And ftot is the optical thickness at the surface.

No other data has been used, just logical thinking.

The monochromatic model uses as variables theta = sigma*T^4. It is shown that indeed theta is the monochromatic equivalence of the Planck function B.

In fact, by changing the concentration and the distribution (with respect to the temperature distribution) of IR-sensitive trace gases, it was found that the full-color model is not of a high priority: the monochromatic model is indeed within 10% of what you can expect of full-color models. See [1].

4) Nevertheless I want to go to a full-color model.

As I have indicated, by introducing parallel finite elements. And since not everybody is familiar with finite elements, I have now shown in the update that, by creating for each wavelength interval k a “monochromatic” matrix relation, it is sufficient to add them.

It will take me some time to extract from the various databases the concentration of IR-sensitive trace gases, and other parameters which define B(T(z), … , …). There is not a consensus on which one is correct.

5) My study of the Schwarzschild equations is done for the monochromatic case; in fact SoD is doing the same in “the equations” [4] and in “the code” [5]. And I have done it for ftot = 0.001 to 0.86 and for different distributions, indicated by the parameter “m”: as argued above, I have made numerical experiments to simulate a full-color model.

I have applied the Schwarzschild procedure to an atmosphere which I discretize as a stack of gauzes. SoD also introduces sampling points, or nodes, to simulate a continuous problem. I like to call my model the “chicken wire” model, just to show that plain engineering intuition brings you fast to a result. You do not need to mention quantum mechanics just to make an impression on the reader.

Application of the Schwarzschild procedure – dividing up the radiation into a hypothetical upward flux and a hypothetical downward flux, determining the OLR from the Prevost surface flux as boundary condition, and multiplying per layer the upward flux by (1-f) – turns out not to be a correct procedure. There is no physical argumentation in the Schwarzschild procedure.

6) Instead, admitting a heat exchange between pairs of surfaces, it turns out that for a temperature distribution defined by an environmental lapse rate, the layers absorb less LW heat than they emit, and for a steady-state temperature distribution the difference of heat should come from mechanisms other than LW radiation: convection of sensible and latent heat, absorption of SW radiation by aerosols, etc. That is the physics of the one-way model.

I was glad to read that at least the blogger Bryan said that he, and others he referred to, judge that they understood it and that I make a point there.

I have included in the paper the listing of the MATLAB program. Everybody with MATLAB on his PC can do numerical experiments with it. A 30-layer model runs in a few seconds. One does not need any external files.

The scope of the paper is to show that CO2 is not a dangerous gas; it is a fertilizer for plants. Doubling the concentration gives an increase of surface temperature of 0.1 K, according to the model.

http://www.tech-know-group.com/papers/Planckabsorption.pdf

on December 21, 2013 at 4:07 pm | DeWitt Payne

JWR,

You do know that this isn’t exactly news; see, for example, the TFK09 energy balance, which is discussed at length here. It proves nothing about one-way or two-way radiation emission/absorption, as the energy balance is exactly the same in either case.
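The point that the net energy balance is identical under one-way or two-way bookkeeping is just algebra; a sketch, assuming blackbody surfaces for simplicity:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def net_flux_one_way(T_hot, T_cold):
    """'One-way' bookkeeping: a single heat flow from hot to cold."""
    return SIGMA * (T_hot**4 - T_cold**4)

def net_flux_two_way(T_hot, T_cold):
    """'Two-way' (Prevost) bookkeeping: both surfaces emit; the net is the
    hot surface's emission minus what it absorbs from the cold one."""
    return SIGMA * T_hot**4 - SIGMA * T_cold**4
```

A measurement of net flux alone cannot distinguish the two descriptions; the distinction only shows up in what each picture implies about the underlying physics.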

One could probably construct a thought experiment that proves that one-way transfer violates causality. Consider infinite parallel planes separated by a large distance, say one light second. Now vary the temperature of one of the planes. Solving the energy transfer problem with standard two-way physics is no problem.

Referring to Bryan favorably doesn’t help your cause. If I read his recent correspondence correctly, he doesn’t believe that there is any radiative exchange in the atmosphere. It’s all convection. Unfortunately for him, the known movement of air in the atmosphere and the 2 km scale factor for water vapor means that convection alone cannot be sufficient. Radiative transfer as a percentage of the total energy flux increases rapidly with altitude.

on December 21, 2013 at 6:46 pm | Pekka Pirilä

The point that the net influence of LW absorption and emission is cooling in the troposphere was discussed in the recent post https://scienceofdoom.com/2013/01/30/visualizing-atmospheric-radiation-part-twelve-heating-rates/. That’s actually the fundamental property that makes the troposphere different from the stratosphere.

on December 21, 2013 at 7:20 pm | Bryan

DeWitt Payne says:

“Referring to Bryan favorably doesn’t help your cause. If I read his recent correspondence correctly, he doesn’t believe that there is any radiative exchange in the atmosphere.”

Point to any statement of mine that comes anywhere near what you claim.

Of course there is radiative exchange in the atmosphere.

What I have said is that in the troposphere the radiative contribution is already included in bulk thermodynamic quantities like specific heat capacity.

The radiative component of neighboring volumes is largely self cancelling.

Above the troposphere the radiative components will depart from the bulk.

For long wave radiation > 5 μm a classical approach should be valid because the law of energy equipartition still largely holds.

The quantum approach is dropped by De Witt when convenient to the global warming story.

Not that long ago (here on SoDs site) I said that for quantum reasons blue light is not absorbed in pure water.

De Witt was then arguing instead the ready thermalisation of all light in pure water.

on December 21, 2013 at 8:22 pm | Pekka Pirilä

Bryan,

I don’t fully understand what you mean by many of your sentences, as they do not have any meaning in the description of the atmosphere, oceans and radiation in textbook physics.

1) Excitation of vibrational modes affects both the emission/absorption of IR and the specific heat of the atmosphere. The latter effect is, however, really negligible, while absorption and emission of IR affect the atmosphere a lot. The negligible effect cannot explain the important one.

It’s true that changes in convection compensate, in some situations within the troposphere, for changes in radiative heat transfer, but other effects of emission and absorption of IR remain large, in particular the influence on OLR at the top of the troposphere and downwelling radiation to the surface.

2) Classical physics cannot explain any part of emission and absorption of IR by gas molecules. If you disagree, please tell how it explains the emission and absorption spectra, which are totally essential for correct calculation of heat transfer.

3) All wavelengths of SW are absorbed in deep water; the only difference between wavelengths concerns the depth where that occurs. Well less than 0.1% of solar SW penetrates deeper than 1 km, about 6% passes more than 100 m in pure water, and about 23% more than 10 m.
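These depth figures illustrate that broadband attenuation in water is not a single exponential, because each wavelength band obeys Beer-Lambert with its own absorption coefficient. A two-band sketch – the coefficients below are invented for illustration, not measured values, and real seawater has a continuous absorption spectrum:

```python
import math

# (band name, fraction of incident solar flux, absorption coeff per metre)
# Invented illustrative values: red absorbed strongly, blue only weakly.
BANDS = [
    ("red",  0.5, 0.30),
    ("blue", 0.5, 0.005),
]

def surviving_fraction(depth_m):
    """Beer-Lambert law applied per band, then summed over the bands."""
    return sum(frac * math.exp(-k * depth_m) for _, frac, k in BANDS)
```

After 10 m virtually all the “red” flux is gone while most of the “blue” remains, so the decay of the total flux with depth steadily flattens: the surviving light is increasingly dominated by the weakly absorbed band.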

on December 21, 2013 at 9:05 pm | DeWitt Payne

Bryan,

Umm, No, I wasn’t. Here’s the start of the exchange:

https://scienceofdoom.com/roadmap/atmospheric-radiation-and-the-greenhouse-effect/#comment-20180

Where in that exchange did I drop the quantum approach? You were wrong then and you’re still wrong. Your hypothesis that absorption of blue wavelengths in sea water is only by suspended or dissolved organic matter is simply wrong, as is your claim that there are no energy levels in liquid water in the blue part of the spectrum:

on December 22, 2013 at 11:35 am | scienceofdoom

JWR:

No one is in disagreement about heat exchange by radiation between two surfaces.

In fact, I have frequently provided examples from Fundamentals of Heat & Mass Transfer, by Incropera & DeWitt (2007) to demonstrate the basics of radiation (and also conduction). Their work, along with every other engineering textbook on radiative heat transfer, includes as the staple – how to calculate heat transfer between surfaces using exactly the method you describe.

Well, there are two questions relating to this particular subject for you:

1. Where is the second surface? (Earth’s surface is surface 1)

2. The specifics of the calculations of radiative heat transfer between two surfaces neglect absorption by the intervening atmosphere – where is your demonstration that atmospheric absorption can be neglected?

My questions here are very simple – you derive equations 7 & 8 from first principles, so:

3. Just to be clear – you are now claiming these equations (7&8 in your paper) are wrong? Please confirm.

4. Given the answer to question 3 above, is the derivation incorrect or are your assumptions incorrect?

5. Why do you not address these points in your paper?

In simple terms, I am confused by your paper. It has the merit of deriving equations, and it does not disprove these equations, yet your commentary on your own paper is that these equations are not correct or not applicable.

I’m sure there is a technical term for this approach, but in plain English your paper makes no sense unless it addresses these points.

As a further explanation for you (my working assumption is you don’t actually understand the basics of this subject), the Planck equation gives the monochromatic (wavelength by wavelength) “output” of a “black body”.

The monochromatic emissivity (values 0-1) shows how the material property departs from a black body at each wavelength.

When you integrate the Planck equation over all wavelengths and all directions it results in the “blackbody” Stefan-Boltzmann equation – σT⁴. That is, the SB equation is the “aggregation” of all wavelengths.

When you integrate the Planck equation × the monochromatic emissivity over all wavelengths you get the modified SB equation – εσT⁴, where ε is the “average” emissivity for that body at that temperature.

In summary, the SB equation used for radiative transfer between bodies is the simplified version of the fundamental equation – and whenever there is any confusion about the applicability of a result always start with the fundamental equations and determine what simplifications can be made. This determination needs to be explicitly stated. You don’t start with the simplified version of a fundamental result, assume it is the actual real physics and then write off the fundamentals because your simplified solution produces a different result.
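The statement that integrating the Planck function over all wavelengths and directions yields σT⁴ can be checked numerically; a quick sketch using the midpoint rule over a wavelength range wide enough for terrestrial temperatures:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4
H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # Planck, c, Boltzmann

def planck(wl_m, T):
    """Spectral radiance B(lambda, T) in W per m^2 per steradian per metre."""
    return (2.0*H*C**2 / wl_m**5) / math.expm1(H*C / (wl_m*K*T))

def total_flux(T, wl_min=1.0e-7, wl_max=1.0e-3, n=20000):
    """Midpoint-rule integral of pi * B(lambda, T) d(lambda).
    The factor pi comes from integrating the radiance over a hemisphere
    of directions for a diffuse (Lambertian) emitter."""
    step = (wl_max - wl_min) / n
    total = 0.0
    for i in range(n):
        wl = wl_min + (i + 0.5) * step
        total += planck(wl, T) * step
    return math.pi * total
```

At 288 K this comes out close to σ × 288⁴ ≈ 390 W/m², to within about 1% – the “aggregation” of all wavelengths described above.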

In case what I am writing is still not clear – if the simplified result derived from the fundamental physics disproves the fundamental physics, then the simplified result has also been disproven (because it depends on the fundamental equations).

And if the paragraph above is not clear I applaud your total lack of comprehension of logic, science and the last 500 years of the enlightenment and turn to other matters.

on December 26, 2013 at 2:58 pm | JWR

@SoD

I will follow your remarks.

Remark by SoD

No one is in disagreement about heat exchange by radiation between two surfaces.

In fact, I have frequently provided examples from Fundamentals of Heat & Mass Transfer, by Incropera & DeWitt (2007) to demonstrate the basics of radiation (and also conduction). Their work, along with every other engineering textbook on radiative heat transfer includes as the staple – how to calculate heat transfer between surfaces using exactly the method you describe.

Well, there are two questions relating to this particular subject for you:

1. Where is the second surface? (Earth’s surface is surface 1)

2. The specifics of the calculations of radiative heat transfer between two surfaces neglect absorption by the intervening atmosphere – where is your demonstration that atmospheric absorption can be neglected?

answer by JWR

Ad 1

You agree with the work of Christiansen of 1883, in particular, I suppose, the exchange of heat by radiation between surfaces with different emission coefficients!

I repeat it in plain words:

“two surfaces exchange information concerning their temperatures and their surface conditions on the basis of which heat is exchanged from the warmer to the colder surface”:

q(1->2) = eps12 * sigma * (T1^4 - T2^4)

In my reference [3], I give examples of applications of these equations. It is shown there that the heat exchange by radiation is an one-way traffic from the higher temperature to the lower temperature. Trying to apply the proposal of Prevost is not working, in particular not for two surfaces with different emission coefficients. See my reference [3].
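For two infinite parallel gray plates, the exchange factor eps12 in the formula above comes out of the converging geometric series of inter-reflections that Christiansen summed – a standard textbook result. A sketch, assuming diffuse gray surfaces:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def eps12_parallel_plates(eps1, eps2):
    """Exchange emissivity for two infinite parallel gray plates:
    summing the geometric series of inter-reflections gives
    1 / (1/eps1 + 1/eps2 - 1)."""
    return 1.0 / (1.0/eps1 + 1.0/eps2 - 1.0)

def q_net(T1, T2, eps1, eps2):
    """Net heat exchange per unit area from plate 1 to plate 2."""
    return eps12_parallel_plates(eps1, eps2) * SIGMA * (T1**4 - T2**4)
```

For two black plates (eps = 1) the exchange factor is 1; for two plates of emissivity 0.5 it drops to 1/3, reflecting the radiation bounced back and forth before absorption.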

In my first paper on the stack model [1], I use the one-way heat flow between grids, which are plates with holes.

Already at that time I anticipated your question “where is surface 2?”.

I insisted in [1] that I was analyzing stacks of grids in vacuum, or in air without IR-sensitive trace gases. I postponed the discussion of whether the stack of gauzes represents a semi-transparent atmosphere with IR-sensitive trace gases until numerical results of the stack model were obtained.

I made models with stacks with different absorption coefficients f, where (1-f) represents the holes in the gauze, and with different numbers of layers – the usual numerical experiments to test the convergence of procedures.

Only at the end of paper [1] did I suggest that a stack with f = beta*delta_z looks like an atmosphere with absorption beta, and I validated it by means of the published results of 3 K&T type papers.

This answers your question of what surface 2 is: it is the discretized layer of an atmosphere with f = beta*delta_z. In [2] it is also presented how, by changing a parameter “m” of the beta distribution, the validation with K&T papers could be carried out.

Ad 2

In [1] the equations were developed by means of a finite difference mesh. The mesh points were the layers. Between two adjacent layers there were no IR-sensitive trace gases; they are concentrated in the mesh points.

In a second paper [2] I introduced the concept of finite elements. In the elements with nodes belonging to adjacent layers, the resulting equations are the same.

The finite elements are overlapping, and in elements with nodes of layers not adjacent to each other, a viewfactor was introduced. See [2].

It turned out that the viewfactors do not have a big influence on OLR. The reason is that in the K&T global and annual mean atmosphere the heat transport from the surface to the atmosphere is not governed by radiation but rather by convection. I understood that Bryan is also making this point.

Question by SoD

My questions here are very simple – you derive equations 7 & 8 from first principles, so:

3. Just to be clear – you are now claiming these equations (7&8 in your paper) are wrong? Please confirm.

4. Given the answer to question 3 above, is the derivation incorrect or are your assumptions incorrect?

5. Why do you not address these points in your paper?

Answer by JWR

ad 3 and ad 4

The paper we are discussing, with the MATLAB listing, contains both the results of [1] and [2]. They are presented in appendix 4.

The finite difference equations in [1] were not wrong; they used a windowF vector. The finite element equations in [2] are a refinement with the viewfactorF matrix.

In option 6 the different viewfactor matrices and the window vectors are depicted graphically.

Conclusion: the viewfactorF matrix is to be preferred; it followed from the more transparent finite element method of developing equations.

The Schwarzschild method gives rise to a viewfactorS matrix, which is also depicted in option 6 of the MATLAB program.

In appendix 2 where the equations for a two-layer model are written explicitly, you can observe the similar structure of the FEM equations and of the Schwarzschild proposal. Only the viewfactor matrices are different: viewfactorF and viewfactorS respectively. Inserting the viewfactorS matrix in the FEM stack model gives the same result as the Schwarzschild procedure: OLR is not decreasing for increasing ftot = optical thickness at the surface.

The only drawback of the present stack model is that the Planck functions for a layer are represented by the monochromatic theta = sigma*T^4. As said previously, introducing the wavelength-dependent Planck functions did not have a high priority for me. The dependence of OLR on the distribution of beta*delta_z was studied in [1]. It turned out not to be important. I am now shopping around to find Planck functions B(T(z),…,…) to replace the theta distribution, as indicated in the paper.

It is clear from what I write in my papers and in this blog that I am trying to find out what is the scope of SoD code.

From my models I concluded already in the December 2012 paper [1] that the heat evacuation from the surface of the planet to higher layers of the atmosphere is mainly by convection, the FEM paper [2] of April 2013 confirmed it even more.

If radiation is determining the heat evacuation (as in the case of a hypothetical atmosphere without a heat exchange between the 99% bulk and the IR-sensitive trace gases), it follows that the trace gases are much colder than the experimentally observed temperature with the corresponding environmental lapse rate.

How come you claim that radiation is heating up the atmosphere? My model is not showing that.

What is the reason of the huge number of iterations?

What are you iterating for in SoD code?

In my paper of 2012, reference [1], I found that two-way heat flow between a stack of plates gives spurious absorption.

From your code, I see that the absorption of the hypothetical up-going component is used to heat up the atmosphere.

From SoD code, my reference [5]:

% the upwards radiation leaving the layer, then a heating

Eabs(i)=Eabs(i)+(radu(i,j)-radu(i+1,j))*dv;

And in a similar way you add the term from the hypothetical downward flow:

% accumulate energy change per second

Eabs(i)=Eabs(i)+(radd(i+1,j)-radd(i,j))*dv;
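For readers trying to follow the two quoted fragments, here is a minimal Python sketch of what such an accumulation loop does. This is an illustration only, not the actual SoD MATLAB code: the array names radu, radd (up- and down-going monochromatic flux at each level) and the interval width dv are taken from the quoted lines; everything else is assumed.

```python
import numpy as np

def accumulate_absorption(radu, radd, dv):
    """Sum, per layer i, the net energy deposited by the discretized
    up-going stream (radu) and down-going stream (radd) over all
    frequency intervals j of width dv (hypothetical sketch)."""
    nlevels, nfreq = radu.shape
    Eabs = np.zeros(nlevels - 1)          # one entry per layer
    for i in range(nlevels - 1):
        for j in range(nfreq):
            # upward flux lost between the bottom and top of layer i ...
            Eabs[i] += (radu[i, j] - radu[i + 1, j]) * dv
            # ... plus downward flux lost crossing the same layer
            Eabs[i] += (radd[i + 1, j] - radd[i, j]) * dv
    return Eabs
```

Summed over all layers, the two terms telescope, so the total absorption equals what enters the column minus what leaves it; the disagreement in this thread is about whether counting both streams this way produces spurious heating.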

My conclusion in my 2012 paper [1], that the two-way heat flow gives far too high an absorption, is confirmed.

I tried to draw your attention to it earlier, but your answer was that it is based on “first principles”.

It is my opinion that the Schwarzschild procedure, as far as I could deduce it from your “equations” and your “code”, violates both the first and the second law.

on December 26, 2013 at 3:40 pm, Pekka Pirilä: JWR,

You are fighting against the part of physics where the agreement between theory and experiment has been verified perhaps most accurately of all parts of physics, namely Quantum Electrodynamics, developed by Feynman, Dyson, Tomonaga, and Schwinger based on the work of other great scientists from Planck to Dirac and beyond. Their theory tells us that radiation between bodies is a two-way phenomenon. It's well known that attempts to get correct results with any other approach are hopeless. In a formal sense an alternative formulation is possible, but it is known to lead to the same results, and the only way the calculations can actually be done is exactly the one that is best described as a two-way exchange of photons.

Everything that you discuss concerns phenomena which can be analyzed fully by this very well known and verified theory. Whenever you get a different result, your result cannot but be wrong, because the alternative has been verified. When you get the same result, your calculation may be correct, but even then it is almost certainly just confusing.

on December 26, 2013 at 4:11 pm, DeWitt Payne: JWR,

Two surfaces with different emissivities can't exchange information by any means other than the exchange of photons. A surface with an emissivity ε less than unity must have a reflectivity equal to 1-ε; that is, it absorbs or reflects photons. The result of this is that if the two surfaces are at the same temperature, the energy distribution of the photons inside the box does not depend on the emissivities of the walls but is identical to the distribution that would result if ε were equal to 1 for all surfaces. That's why a box with a hole in it, with the walls at constant temperature, is such a good approximation of a black body. I don't see how this could happen if energy exchange is only one way. How do the inside surfaces of the box 'know' that there is a small hole and emit just enough radiation in exactly the right direction to give the appearance of a black body spectrum? It's much simpler to have photons with a black body energy distribution always present.

This is another place where causality raises its head. Suppose the box is very large, with the walls one light second apart. Now put a hole in one wall. Do you immediately observe black body radiation, or do you have to wait two seconds for the opposite wall to detect that there are no photons coming from the new hole? Immediate detection with one-way transmission would seem to require that information be transferred by means other than photons, faster than light.

on December 26, 2013 at 5:52 pm, DeWitt Payne: JWR,

Then you’re not doing it correctly. It works just fine for me as long as you remember to add reflected radiation to the emitted radiation from each surface before calculating absorption by the other surface. An iterative approach in a spreadsheet rapidly converges to the correct solution for infinite parallel planes at temperatures T1 and T2 with emissivities ε1 and ε2:

Q = σ(T1^4-T2^4)/(1+(1-ε1)/ε1+(1-ε2)/ε2)

1-ε1 is equal to the reflectivity of surface 1.
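The iterative-spreadsheet procedure and the closed form can be checked against each other in a few lines. A sketch in Python (function names are mine): each plate's radiosity is its own emission plus the reflected part of the other plate's radiosity, and the iteration converges because the contraction factor (1-ε1)(1-ε2) is below one.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux_iterative(T1, T2, eps1, eps2, n_iter=200):
    """Iterate the radiosities of two infinite parallel plates.
    Each plate emits eps*sigma*T^4 and reflects (1-eps) of what
    it receives from the other plate."""
    E1, E2 = SIGMA * T1**4, SIGMA * T2**4
    J1, J2 = E1, E2  # initial guess: black-body emission
    for _ in range(n_iter):
        J1 = eps1 * E1 + (1 - eps1) * J2
        J2 = eps2 * E2 + (1 - eps2) * J1
    return J1 - J2  # net flux from plate 1 to plate 2

def net_flux_closed(T1, T2, eps1, eps2):
    # The closed form quoted above; the denominator equals 1/eps1 + 1/eps2 - 1
    return SIGMA * (T1**4 - T2**4) / (1 + (1 - eps1)/eps1 + (1 - eps2)/eps2)
```

With ε1 = ε2 = 1 the denominator reduces to 1 and the black-body result σ(T1^4-T2^4) is recovered.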

on December 26, 2013 at 6:51 pm, DeWitt Payne: JWR,

In your Appendix 5 you state:

Not very well. The trace gases absorb strongly at some wavelengths and hardly at all at others. A semi-transparent grid absorbs and emits equally at all wavelengths. This means that energy absorbed by the grid is then emitted with a Planck spectrum over all wavelengths rather than the wavelengths appropriate to the specific trace gases. In addition, as pressure and temperature decrease with altitude, the molecular lines narrow so the average emissivity decreases. A semi-transparent grid, therefore, is only a crude approximation to a real atmosphere and calculations based on this approximation prove nothing about the real atmosphere.

on December 26, 2013 at 7:13 pm, DeWitt Payne: JWR,

Your reference 3 claims to prove that surfaces with different emissivities at the same temperature are not in radiative equilibrium with two way exchange of radiation. That reference is wrong because it neglects reflection. Instead of each surface emitting only εσT^4, the surface emits that amount of radiation and reflects (1-ε)σT^4. The sum is then σT^4, identical to a black body.

This is a really, really dumb error. The fact that you didn’t catch it is strongly indicative.

And again, what mechanism other than photons could a surface possibly use to exchange information on its temperature and emissivity with another surface?

on December 27, 2013 at 12:40 am, DeWitt Payne: JWR,

Then there’s the premise that pyrgeometers don’t actually measure radiant flux, they measure something else. But that something else produces exactly the same result as if it were radiant flux. I believe that falls under the logical fallacy called a distinction without a difference.

on December 28, 2013 at 5:07 am, scienceofdoom: My comments following JWR on December 26, 2013 at 2:58 pm are posted at the end of the comment section to break them into manageable pieces.

on December 22, 2013 at 11:47 am, scienceofdoom: JWR,

Additionally, did you address this problem highlighted on December 12, 2013 at 9:47 pm:

on December 21, 2013 at 9:06 pm, Bryan: Pekka

I was quite specific.

Blue light is not absorbed in pure water.

You reply

All wavelengths of SW are absorbed in deep water

What depth of PURE water is required to absorb blue light?

on December 21, 2013 at 9:32 pm, Pekka Pirilä: The largest penetration depth of any wavelength in pure water is about 220m. That’s the average depth of the point of absorption for the most penetrating violet light (wavelength about 412 nm). A small fraction penetrates several penetration depths deep.

Blue light penetrates about half as deep as that most penetrating wavelength.

on December 21, 2013 at 11:56 pm, DeWitt Payne: Bryan,

Your question reflects a lack of understanding of absorption spectrometry. There is no single depth. Absorption is exponential in path length, see for example the Beer-Lambert Law. You need to specify how much of the initial intensity is absorbed. 100% is not an option as the logarithm of zero is undefined. The 1/e depth at 400nm is ~200m. So even at a path length of 4 km, some tiny fraction of the initial intensity is still present. But it’s quite small, ~2E-7%. For every ~460m of path length, the intensity is reduced by 90%.
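The numbers above follow directly from the Beer-Lambert exponential. A quick check in Python (the 200 m 1/e depth is the figure quoted in the comment; the helper name is mine):

```python
import math

def transmitted_fraction(depth_m, e_folding_m=200.0):
    """Beer-Lambert attenuation: I/I0 = exp(-depth / e-folding depth).
    200 m is the ~1/e depth near 400 nm quoted above."""
    return math.exp(-depth_m / e_folding_m)

# 90% of the intensity is lost every ln(10) * 200 m, i.e. roughly 460 m:
decade_depth = math.log(10) * 200.0

# After a 4 km path, the remaining fraction is exp(-20), on the order of 1e-9:
frac_4km = transmitted_fraction(4000.0)
```

This reproduces the figures in the comment: ~460 m per factor of ten, and roughly 2e-7 percent of the initial intensity left after 4 km.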

on December 22, 2013 at 9:41 am, Bryan: De Witt and Pekka

Attenuation of a beam of blue light by pure water is not the same as absorption of the blue light.

The light is scattered.

That’s why pure water appears to an observer to have a blue tint.

But all this is drifting away from JWRs paper.

It is a reasonable approach to use classical physics for atmospheric heat transport in the troposphere.

For long wave IR >5um equipartition theory largely holds.

Tim Folkerts (known to some of you as a working physicist who supports IPCC science in the main) did the numbers and found them in the same ballpark as measured values.

Much of the theory used in climate science anyway predates quantum theory (1905):

Beer–Lambert Law (1729)

Kirchhoff’s Law (1862)

Stefan–Boltzmann law (1886)

Also, spectroscopy was not born as a result of quantum theory; we will not go back as far as Newton or Herschel (although we could).

Spectroscopy, let’s say, was born in 1801, when the British scientist William Wollaston discovered the existence of dark lines in the solar spectrum.

Thirteen years later, Fraunhofer repeated Wollaston’s work and hypothesized that the dark lines were caused by an absence of certain wavelengths of light.

In 1859 Kirchhoff was able to successfully purify substances and conclusively show that each pure substance produces a unique light spectrum; with that, analytical spectroscopy was born.

Kirchhoff went on to develop a technique for determining the chemical composition of matter using spectroscopic analysis that he, along with Robert Bunsen, used to determine the chemical makeup of the sun.

on December 22, 2013 at 1:46 pm, Pekka Pirilä: Bryan,

Many things have not changed with the introduction of QM; many other things have changed.

All experimental evidence tells us that every time QM differs from older theories, QM is correct. Because you don’t know a priori where the theories differ, you must check every time what QM tells us, and use the older theories only when the theories agree.

What is the equipartition theory for LWIR > 5µm? If you can first tell me that, then are you sure that it agrees with the correct QM-based theory?

Actually it’s certain that no classical theory can agree with the thoroughly verified QM-based theory, because the absorption spectrum is a purely QM result, and a result that’s essential for the correct understanding of radiation in the atmosphere. Your statement lacks all merit. Or where is the merit to be found?

As long as you stick to outdated or otherwise wrong theories, you have no chance of understanding the atmosphere.

JWR’s paper is not much better than before, but too much effort has already been spent in pointing out its weaknesses.

on December 22, 2013 at 4:17 pm, DeWitt Payne: Bryan,

The reduction in intensity by scattering is also exponential with path length. However, if it were only scattering, i.e. no absorption at all, the diffuse scattered flux upward at the surface would be a large fraction of the initial intensity, much as a cloud reflects sunlight in the visible spectrum by scattering from individual droplets. The ocean would appear to glow, the way diffuse solar radiation from molecular scattering makes the sky glow blue. In addition, when looking at the source from a depth, it would appear to be shifted to the red end of the spectrum, like the sun near the horizon, because the rest of the visible spectrum is absorbed, not scattered, more strongly than the blue and near UV. But the ocean doesn’t glow, because the scattering coefficient is small compared to the absorption coefficient. Clouds appear white because the path length is too short to absorb significantly in the visible, but the near IR is strongly absorbed.

on December 22, 2013 at 6:30 pm, Bryan: De Witt

So now sea water is the same as pure water?

You will need to try harder.

on December 22, 2013 at 6:25 pm, DeWitt Payne: Bryan,

You have no actual evidence for your assertion that blue light is attenuated only by scattering and not absorption in pure water. Just because there are no fundamental resonances near 400 nm, where attenuation in pure water is at a minimum, does not mean that there can be no absorption. A molecule can absorb at overtone frequencies that are ~2, 3, 4… times the fundamental frequency. There are combination bands as well. See this reference. The probability of absorption of a photon by an overtone band is much lower than for the fundamental resonance, but it’s not zero. The transition at 401 nm, a·v1 + b·v3 with a + b = 8, means that the transition is a combination of the symmetric stretch, v1, and asymmetric stretch, v3, with the sum of the quantum number changes equal to 8. So you could have v1 + 7v3, 2v1 + 6v3, 3v1 + 5v3, 4v1 + 4v3, etc. What you don’t know about molecular spectroscopy would fill books. I suggest you read one.
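As an illustration of the counting, the possible (a, b) pairs with a + b = 8 can be enumerated and the harmonic (anharmonicity-free) sums computed from the gas-phase water fundamentals v1 ≈ 3657 cm^-1 and v3 ≈ 3756 cm^-1. These fundamental values are standard gas-phase figures, not taken from the comment; anharmonicity lowers the real transition energies, which is what puts the observed combination band near 401 nm.

```python
# Gas-phase water fundamentals (cm^-1): v1 (symmetric stretch) ~ 3657,
# v3 (asymmetric stretch) ~ 3756. Enumerate a*v1 + b*v3 with a + b = 8.
V1, V3 = 3657.0, 3756.0

combos = [(a, 8 - a) for a in range(9)]            # (0,8), (1,7), ..., (8,0)
harmonic_cm = [a * V1 + b * V3 for a, b in combos]  # harmonic energy sums
wavelengths_nm = [1e7 / w for w in harmonic_cm]     # convert cm^-1 to nm

# The harmonic sums land near 333-342 nm; the pure overtones are the
# endpoint pairs (0,8) and (8,0), the rest are combination transitions.
```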

on December 22, 2013 at 7:18 pm, Pekka Pirilä: Bryan,

Several different outcomes are possible for a photon that hits the sea surface:

1) it can be reflected from the surface

2) it can be absorbed in sea water

3) it can reach the bottom and be absorbed there

4) it can enter the sea and be refracted or scattered in a way that brings it back to the surface, where it exits the water again

The alternatives (1) and (4) contribute to the albedo, (4) very much less than (1). Alternative (4) does not affect any calculation of the energy balance at a level that would be significant. Arguing further on that is irrelevant.

on December 21, 2013 at 9:49 pm, Bryan: Pekka, perhaps you should re-read the link below:

http://hyperphysics.phy-astr.gsu.edu/Hbase/chemical/watabs.html

Further on in the previous discussion we found out that any absorption was by harmonics of the fundamental.

This reminds us of the wave properties of the photon.

on December 21, 2013 at 10:16 pm, Pekka Pirilä: Bryan,

That page tells about the absorption by water molecules in the gas phase. I based my numbers on measured absorption in pure liquid water.

on December 21, 2013 at 11:29 pm, DeWitt Payne: Bryan,

Does the relative absorption graph in your link go to zero anywhere? Obviously, it doesn’t. You’re neglecting the effect of the wings of the strong absorption bands at shorter and longer wavelengths. You get pressure broadening of absorption lines in a gas. The effect of the structure of a liquid, particularly water where there is strong hydrogen bonding, is going to be even greater. Water is not perfectly transparent anywhere. In fact, there is nothing in the real world that is perfectly transparent, perfectly reflective or perfectly absorptive. The path length in intergalactic space is very long, but not infinite.

on December 22, 2013 at 1:56 pm, Pekka Pirilä: A link where the absorption of radiation in liquid water is also discussed is this:

http://www1.lsbu.ac.uk/water/vibrat.html

It has links to additional data sources on the absorption of radiation in liquid water. My own calculations of the shares of radiation that penetrate to various depths are based on those sources, together with some standard data on the spectrum of solar radiation at the Earth’s surface.

The link I gave contains a curve that shows the absorption coefficient in liquid water. The minimum of about 0.00005 1/cm is in the UV. The inverse of that is about 200m (more accurately, from the numerical data, the value is 220m).
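The arithmetic in that last sentence is just the reciprocal of the absorption coefficient; a quick check (values from the comment):

```python
# 1/e penetration depth from the quoted minimum absorption coefficient.
alpha_cm = 5e-5                   # absorption coefficient, 1/cm (from the comment)
depth_m = 1.0 / alpha_cm / 100.0  # 1/alpha = 20000 cm, i.e. 200 m
```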

on December 22, 2013 at 4:22 pm, Bryan: Pekka

You will need to look at my post and answer the points I make, rather than making up your own straw man to answer.

Is that too hard to do?

I said

“Attenuation of a beam of blue light by pure water is not the same as absorption of the blue light. The light is scattered.

That’s why pure water appears to an observer to have a blue tint.”

You say

“A link where also the absorption of radiation in liquid water is discussed”

You finish by saying

“As long as you stick to outdated or otherwise wrong theories you have no change of understanding the atmosphere.”

Nowhere have I questioned quantum mechanics, as the blue light in pure water example proves.

However, anyone who used relativity theory rather than Newton to calculate the speed of a car on a road would be considered a crank.

It’s true that the crank would get a more accurate answer, if it could be measured.

The point I make is that it is reasonable to use classical physics to deal with radiation > 5um

Are you trying to imply that nobody knew about radiation before 1905 or dealt with it in heat transfer problems?

Likewise a correct consistent theory of climate involving quantum mechanics would be marginally more accurate in this area.

However, you must be very complacent if you think that such a consistent theory presently exists.

Perhaps testing each theory by experiment and comparing might give valuable insights.

But I forgot IPCC science is perfect in all respects and beyond the need for any such testing.

Advocates of IPCC science are apt to act like propagandists for a cause rather than scientists.

on December 22, 2013 at 5:18 pm, Pekka Pirilä: Bryan,

Please tell me how classical physics can be used to get anything that’s even remotely correct on radiative heat transfer in the atmosphere.

People had learned tricks to handle radiative heat transfer until QM provided an explanation. Planck made an intermediate step by making an ad hoc assumption to derive the right formula, but even that was an unexplainable trick at the time. It could not and cannot be understood without QM.

Not a single atomic or molecular spectrum can be understood at all without QM.

People use classical physics to “disprove” the GHE. Disproving something requires the use of a theory confirmed as valid in the particular field of physics, but classical physics is not confirmed for these applications; it’s disconfirmed. All these “proofs” are the result of using theories known to be totally wrong.

The question is not about minor adjustments, it’s about the difference between correct and totally wrong.

Most of thermal IR is at > 5µm; the 15µm CO2 absorption peak is not far from the peak of the IR spectrum. Nothing about the influence of that peak can be understood without the shape of the absorption/emission spectrum. It’s no wonder that people who don’t accept that create “proofs” that have no real value.
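The claim that the 15 µm CO2 band sits near the peak of the terrestrial spectrum can be checked with Wien's displacement law (the 288 K surface temperature used below is my assumption, not stated in the comment):

```python
# Wien's displacement law: lambda_max = b / T, with b ~ 2898 micrometre-kelvin.
WIEN_B = 2898.0  # um*K

def peak_wavelength_um(T):
    """Wavelength of peak black-body emission, in micrometres."""
    return WIEN_B / T

# For a ~288 K surface, emission peaks near 10 um, so the 15 um CO2 band
# lies well inside the thermal IR, close to the spectral peak.
lam = peak_wavelength_um(288.0)
```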

on December 22, 2013 at 7:10 pm, Bryan: Pekka

The specific heat capacity of CO2 drops by 13% from the Earth’s surface to the tropopause.

Since the mass does not change, the 13% represents the radiative energy lost in the vertical column.

It will be passed on vertically, since the horizontal components self-cancel.

Similarly for H2O, but much more significant than for CO2.

How would you account for the increased specific heat of CO2 other than by increasing vibrational modes?

Other bulk quantities include the transport coefficients.

The thermodynamic effect of an object radiating on another object has already been taken into account by the coefficient of thermal conductivity which, despite its name, measures all kinds of diffusive heat transport including radiation.

G&T write the following in their first falsification paper:

“A physicist starts his analysis of the problem by pointing his attention to two fundamental thermodynamic properties, namely the thermal conductivity, a property that determines how much heat per time unit and temperature difference flows in a medium and the isochoric thermal diffusivity, a property that determines how rapidly a temperature change will spread, expressed in terms of an area per time unit.”

In their reply to Halpern et al. they write:

“Speculations that consider the conjectured atmospheric CO2 greenhouse effect as an “obstruction to cooling” disregard the fact that in a volume the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients.”

If you want to know the thermodynamic effect of doubling the CO2 concentration you only need to measure the changes in the transport coefficients. These changes will of course be unmeasurable (although there is probably some tiny factual difference).

No need for any redundant radiative transfer calculations.

http://arxiv.org/pdf/0707.1161

on December 22, 2013 at 7:42 pm, Pekka Pirilä: Bryan,

G&T have been discussed in separate threads on this site already. Their work doesn’t count as support for anything.

Otherwise we are just repeating the same arguments. Thus continuing is of no value.

on December 23, 2013 at 12:22 am, DeWitt Payne: Bryan,

No, they’re not. That is an assertion by G&T that is unsupported by any literature citations or data. As a result, it amounts to an argument from authority and proves nothing.

By the way, nice goal post move in your reply to me above. You’re the one who asked about the absorption of radiation at the blue end of the spectrum by pure water. Yes, sea water is not pure and attenuates more strongly than pure water. Your contention, however, that in the absence of impurities, water does not absorb radiation with wavelengths near 400nm is still false. You have yet to admit that.

on December 23, 2013 at 9:11 am, Bryan: De Witt

As G&T say

“the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients”

How could they not be?

If you take a mole of CO2 at, say, 500C, it will have a much higher Cp than one at -50C.

This is because a much higher % of the molecules will have their vibrational modes occupied.

If this mole is placed inside an IR-transparent container and the container placed in outer space, then the CO2 in the container will emit IR radiation as the temperature drops.

The Cp will show a corresponding drop.

How could it not?

on December 23, 2013 at 9:25 am, Pekka Pirilä: Cp is not a radiative property. The share of molecules in a vibrationally excited state is not a radiative property. Absorptivity and emissivity are not determined by that share; they are determined by a different mechanism. The only direct connection is that both require the existence of a vibrationally excited state, but the numerical values are not directly related.

Both can, however, be explained by QM based on the same model of molecules. Neither can be explained without it.

on December 23, 2013 at 9:20 am, Bryan: De Witt

“Your contention, however, that in the absence of impurities, water does not absorb radiation with wavelengths near 400nm is still false. ”

The hyperphysics link says it for me rather well!

“It doesn’t absorb in the wavelength range of visible light, roughly 400-700 nm, because there is no physical mechanism which produces transitions in that region ”

If you want to ignore the valuable insights offered by quantum mechanics in the last 100 years, then that’s up to you.

http://hyperphysics.phy-astr.gsu.edu/Hbase/chemical/watabs.html

on December 23, 2013 at 9:35 am, Pekka Pirilä: Bryan,

That’s not true for liquid water. See the link that I gave. Hyperphysics does not refer to the liquid at all in this section.

Your ignorance of physics does not make the reality what you imagine it to be.

on December 23, 2013 at 9:48 am, Bryan: Pekka

What makes you say that the link refers to water vapour rather than liquid water?

Nothing in the article refers to water vapour.

You might as well say they were talking about ice.

on December 23, 2013 at 11:05 am, Pekka Pirilä: The statements are true for water molecules in gas, not for the liquid. That’s a good reason to conclude that it refers to gas.

In general, the interaction of water molecules in gas is discussed all around, because the details of that are so important; the details of absorption of radiation in liquid water are important only for some specific applications, and therefore people often even forget to mention that they are discussing water in gas.

From extensive study and also research in physics, I happen to know enough to understand correctly what other physicists are trying to tell, even when they forget to mention some implied assumptions. That also helps me identify physics-related crap as crap, even when the author has some apparent credentials that would lead one to expect that he knows something. G&T are a case in point. They claim to know physics, and have some credentials, but evidently don’t understand much at all about the issues they write about.

on December 23, 2013 at 5:18 pm, DeWitt Payne: Bryan,

There is a physical mechanism for absorption by liquid water from 400-700 nm. It just requires a very long path length to observe. Even if the combination transitions with quantum number change greater than six do not contribute significantly, the wing of the long-wavelength vibrational band must eventually overlap the wing of the short-wavelength electronic transition band. Absorption lines are broadened in liquids so much that you generally only observe bands of overlapping lines, not the lines themselves. As a result of the broadening, those bands have long tails; they don’t have a brick-wall cutoff. You continue to ignore the spectrum on the link you cite, which does not go to zero anywhere and looks exactly like the overlap of two bands in the visible. The y axis is labeled relative absorption, not attenuation.

The closest thing to a brick-wall cutoff in spectrometry are the absorption edges observed in x-ray spectrometry. But even those have fine structure which can be used for analysis.

on December 23, 2013 at 11:16 am, Bryan: Pekka

I said

“Attenuation of a beam of blue light by pure water is not the same as absorption of the blue light. The light is scattered. That’s why pure water appears to an observer to have a blue tint.”

Nowhere have I questioned quantum mechanics, as the blue light in pure water saga proves.

Do we always have to use quantum mechanics?

No!

Do we now use relativity theory to calculate the speed of a car on a road ?

It’s true that any pompous twit who so insisted would get a more accurate answer, if it could be measured.

However, this is getting well away from JWR’s paper, so I refuse to be distracted further by irrelevances.

Stick to the main point and don’t flit about like a butterfly because you are unable to address the substantive points that JWR and Schact and G&T make.

How can you separate radiation from the physical causes that produce radiation?

The substantive point I make is this:

It is reasonable to use classical physics to deal with radiation > 5um.

You disagreed

Are you trying to imply that nobody knew about radiation before 1905 or dealt with it in heat transfer problems?

A correct consistent theory of climate involving quantum mechanics would be marginally more accurate in this area.

However, you must be very complacent if you think that such a consistent theory presently exists.

17 years of the recent temperature record shows no link to increasing CO2.

There is no historic link either.

Has the thought never occurred to you that you are pushing a FAULTY interpretation of quantum mechanics ?

Perhaps testing each theory by experiment and comparing might give valuable insights.

IPCC science is ’settled’ and beyond the need for any such testing, you say.

However, reality cannot be ignored.

on December 23, 2013 at 11:26 am, Pekka Pirilä: Do we always have to use QM?

No!

Do we have to use QM, when it makes a difference?

Yes!

Does it make a difference for interaction of radiation with matter and radiative transfer in gases?

Yes, it leads to totally different conclusions (what else would be the reason skeptics so often want to ignore QM?).

on December 23, 2013 at 11:30 am, Pekka Pirilä: Actually, I should correct my previous comment.

There’s no classical theory of radiative heat transfer in gases and no consistent classical theory of the interaction of radiation with matter. Thus all supposed classical theories are either old and known to lack self-consistency, or new inventions that have never been more than products of the imagination of a few people.

on December 23, 2013 at 12:09 pm, Bryan: Pekka

Classical physics was quite smug around 1900, thinking all that was left to do was to get the physical constants to a few more decimal places.

The vibrational modes of IR gases are still treated with a semi-classical visual model.

Vibrating spheres connected by springs is often used as an analogy.

All this concerns radiation > 5um

For radiation < 1um, Quantum Mechanics is required as my blue light in pure water example shows.

In 200 years’ time perhaps there will be a consistent, correct QM theory of the climate.

If calculations are then made and compared to classical theory, the results will be roughly the same, and I agree that the QM result will be more accurate.

on December 23, 2013 at 5:35 pm, DeWitt Payne: Bryan,

That analogy is only used for instruction, not for actual calculations. Most of the observed transitions that are tabulated in the HITRAN database can be calculated by QM ab initio. It is already possible to calculate atmospheric emission spectra that agree with observed spectra to a precision of about 1%. The limit of the precision is the knowledge of the atmospheric temperature and partial pressure profiles, not QM, with the possible exception of water vapor continuum absorption. Radiative transfer is defined by QM and is considered to be well understood. Climate is more complicated as the movement of air and liquid is involved. QM has nothing to say about that.

on December 23, 2013 at 7:54 pm, Bryan: Pekka and De Witt

If there was a prize for misinterpreting other people’s posts, then you would be joint winners.

You both have the knack of getting the ‘wrong end of the stick’.

Wood would not be seen for the trees. (Pun intended.)

Insignificant side comments are examined forensically to avoid dealing with any matter of substance.

Experimental reality does not matter in your make believe world.

R. W. Wood’s experiment falsifying the effect has not been seriously challenged.

There is no historical evidence of CO2 driving the climate.

In fact, quite the reverse.

The recent 17 years confirm the historic record.

No serious physics or thermodynamics textbook mentions the ‘greenhouse effect’.

Perhaps it is time to look at alternatives like the paper by JWR to see if it has any answers.

My more substantive point has been sidestepped.

For LWIR >5um, the heat transferred by a gas cooling down by radiation only would work out about the same for both classical and QM calculations.

on December 23, 2013 at 10:59 pm, DeWitt Payne: Bryan,

That depends on who’s voting. SoD, Pekka and I would give the prize to you hands down.

That’s rich coming from you. But then irony always increases.

It was, in fact, challenged immediately, as it was counter to all previous experimental results starting with de Saussure in 1767. Read Abbot’s rejoinder to Wood published in the same journal shortly after Wood’s note.

The actual reality is that Wood’s experimental results have never been replicated, in part because his description of the experiment is sadly lacking in details. Roy Spencer and Vaughan Pratt produced the opposite results from Wood as have I. The one published result claiming to reproduce Wood was fatally flawed as should be obvious to even you from the picture of the experimental apparatus. Guess which box had the IR transparent cover. Wood also did not attempt to defend his experimental results after Abbot’s critique. That should be a critical point for you as you denigrate Pratt’s results because you claim that he is not defending them from criticism by every Tom, Dick and Harry.

Undergraduate physics textbooks in general only spend a few pages on absorption and emission of radiation by atoms or molecules. The Feynman Lectures on Physics, for example, devotes ~2 1/2 pages to Einstein’s laws of radiation (Volume I, chapter 42-5) and a few pages more to the general theory of radiation absorption derived from Maxwell’s equations. The specifics of radiative transfer through the atmosphere are too specialized a subject for an introductory physics text, or chemistry text for that matter. The same should hold true for introductory thermodynamics textbooks. Many textbooks for Atmospheric and Oceanic science courses will contain sections on the greenhouse effect. A classic textbook on heat transfer mentions the greenhouse effect but doesn’t go into detail (see below).

If you apply a 65 year moving average to the instrumental record, you get a curve that looks much like the increase in ghg forcing over that same time period. That doesn’t prove anything, but it is evidence. There is evidence of long term periodicity in the instrumental record with a period of ~65 years and a magnitude that explains the recent slow down in global temperature as well as the one between ~1950-1970 without invoking the aerosol kludge.

And precisely how are you going to calculate the emissivity of the gas in question without QM? Then how are you going to calculate total emission without the Planck equation or its integrated form, the Stefan-Boltzmann equation? The Planck equation was formulated because classical physics failed dismally to predict emission from a black body. There is no complete classical description of radiative cooling of a gas. The classical solution to the radiative transfer problem requires that you know the source function. Without QM, you don’t know it.
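The point that total emission follows from integrating the Planck equation can be checked numerically. A minimal sketch (the integration limits and grid are illustrative choices; they just need to bracket the thermal peak):

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
kB = 1.380649e-23       # Boltzmann constant, J/K
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_nu(nu, T):
    """Planck spectral radiance B_nu(T), W m^-2 Hz^-1 sr^-1."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

T = 288.0
# Integrate over frequency; the factor pi converts radiance to hemispheric flux
nu = np.linspace(1e10, 5e14, 200_000)
B = planck_nu(nu, T)
flux = np.pi * np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(nu))  # trapezoid rule
print(flux, sigma * T**4)  # both ~390 W/m^2
```

The numerical integral reproduces the Stefan-Boltzmann result, which is exactly the relationship DeWitt describes: the S-B equation is the integrated form of Planck’s law.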

All modern emissivity tables use radiative transfer models to calculate effective emissivity. See Modest, Radiative Heat Transfer, Third Edition, for example. As I mentioned above, the greenhouse effect is mentioned on pages 2 and 96 of this standard text on heat transfer. In the past, Hottel measured emissivities at different temperatures, pressures and path lengths and was fairly close. The calculated results, which are ultimately traceable back to line-by-line models, are more accurate.

From Modest, page 2:

Please don’t pull a G&T and pick nits about how CO2 doesn’t absorb solar energy directly. The energy absorbed by CO2 does come from the sun, just not directly. You’re always going to be able to pick nits with any one sentence description of the effect.

on December 24, 2013 at 10:37 am | Bryan

De Witt,

Here is a simple example of the difficulty of separating radiation from the particles that cause the radiation.

We have a cooling IR-active gas like CO2 losing heat by radiation, conduction and convection.

All three methods involve the specific heat.

Do we use the specific heat that includes a radiative component after stripping out the radiative energy lost?

Perhaps a new specific heat value is used in which the radiative contribution has been removed.

In this case I would be very interested in the experimental arrangement to calculate the SHC of CO2 with no radiation at a particular temperature.

Pick either Cp or Cv.

on December 24, 2013 at 11:36 am | Pekka Pirilä

Bryan,

As long as we have a stationary state where overall cooling and warming are equally strong, the specific heat makes no difference.

When the temperature is changing, the relevant specific heat is Cp of air including all its components at the local temperature. CO2 has a very small influence on that proportional to its concentration.

Specific heat of CO2 is influenced by the energy levels of vibrational excitations, but not by the emissivity/absorptivity related to these levels. The existence of those levels does not tell about the transition probabilities between the ground level and the excited levels due to IR radiation. The occupation rate of the levels is determined by collisional excitation and deexcitation, not by emission and absorption. The dependence of the occupation rate on temperature affects specific heat, emission and absorption have no influence on that.

Rate of emission depends on the temperature according to Planck’s law, but the emissivity coefficient in the formula for the emission strength is determined by the coupling constant, which is an independent quantity, as I wrote above.
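Pekka’s point that the specific heat depends on the vibrational energy levels, but not on the emissivity, can be illustrated by computing the vibrational contribution to Cv with the harmonic-oscillator (Einstein) heat-capacity function. The mode wavenumbers below are standard spectroscopic values for CO2; the harmonic treatment is an approximation:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light in cm/s, so wavenumbers in cm^-1 work
kB = 1.380649e-23    # Boltzmann constant, J/K
R = 8.314462618      # gas constant, J/(mol K)

def einstein_cv(wavenumber, T):
    """Heat-capacity contribution (per mole) of one harmonic vibrational mode."""
    x = h * c * wavenumber / (kB * T)
    return R * x**2 * math.exp(x) / math.expm1(x)**2

# CO2 fundamentals in cm^-1: bend (doubly degenerate), symmetric stretch
# (IR inactive, yet it still contributes to Cv), asymmetric stretch
modes = [667.0, 667.0, 1388.0, 2349.0]

T = 300.0
cv_vib = sum(einstein_cv(w, T) for w in modes)
cv_total = 2.5 * R + cv_vib  # 3/2 R translation + R rotation + vibration
print(cv_total)  # ~29 J/(mol K), close to the tabulated Cv of CO2 near 300 K
```

Note that the IR-inactive symmetric stretch contributes to Cv just like the IR-active modes: only the level positions enter, not the transition probabilities, which is exactly the separation Pekka describes.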

on December 24, 2013 at 11:59 am | Bryan

Pekka,

You have misunderstood my post to De Witt.

The gas is not air but CO2.

Let’s make it a little more concrete.

Let’s say we have one kilogram of CO2 at 350K in a one cubic metre container with IR-transparent walls of negligible heat capacity, which is itself in a vacuum.

How much heat is lost by the gas if it cools down to 250K?

on December 24, 2013 at 12:05 pm | Pekka Pirilä

I don’t think that there’s any disagreement on that as long as everyone knows whether that happens at constant pressure or constant volume.

Why should anyone be interested in that question?

When you can show the relevance of that question to the discussion on radiative energy transfer in the atmosphere and into or out from the atmosphere, we can consider that. Bringing in totally irrelevant issues has no other effect than confusing the discussion. That seems to be your goal.

on December 24, 2013 at 1:16 pm | Bryan

Pekka,

I think in your rather grudging reply you have confirmed that the answer would be exactly the same worked out by classical physics or quantum mechanics.

on December 24, 2013 at 1:38 pm | Pekka Pirilä

As long as classical physics cannot give any answer at all, that’s a moot proposal that cannot be true. Claiming that I would have confirmed it is ridiculous – how far can you go in your misrepresentation of others?

As I have written many times, classical physics does not have any self-consistent description of the required physics. It does not give any answer to these questions.

If you disagree, you must be able to tell, how the answer can be obtained from classical physics or where we can find a valid calculation based on classical physics. Making unsubstantiated claims on what you believe on the state of physics around year 1900 is not enough.

Physicists were able to make some rough calculations based on their knowledge, as the work of Arrhenius shows. Those calculations were, however, not fully consistent, and must be considered only early and preliminary estimates.

All self-consistent calculations depend on quantization and the resulting concept of the photon. That theory is also thoroughly verified empirically.

on December 24, 2013 at 3:41 pm | Bryan

Pekka says:

“If you disagree, you must be able to tell, how the answer can be obtained from classical physics ”

Easy

Specific Heat in this case is Cv

No work is done in cooling

Work out the loss of internal energy between 350K and 250K

Since the only way to lose energy is by radiation then it’s all radiative loss.

No mention of photons required.

Now that’s me signing off for the season.

You can work out the same problem using QM

The answers will be the same.

on December 24, 2013 at 5:56 pm | DeWitt Payne

Bryan,

The amount of energy lost from the temperature change is not the issue. It’s the rate of loss that’s important. That tells you how much energy needs to be supplied to maintain a constant temperature. Please detail how you would calculate the rate of energy loss using only classical physics. Then you can also calculate the cooling rate of dry air containing 280 ppmv CO2 and an equal mass containing 560 ppmv CO2.

Don’t let the door hit you on the way out.
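The amount-versus-rate distinction can be sketched numerically. The container area, the gray-emitter cooling law and the emissivity values below are toy assumptions for illustration, not a real radiative calculation for CO2:

```python
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
m, cv = 1.0, 657.0      # kg and J/(kg K); rough constant-volume value for CO2
A = 6.0                 # m^2, surface area of a 1 m^3 cubic container (toy value)

# The *amount* of energy lost cooling 350 K -> 250 K needs only heat capacity:
Q = m * cv * (350.0 - 250.0)  # 65700 J, ~66 kJ

def time_to_cool(eps, T0=350.0, T1=250.0, dt=0.01):
    """Toy gray-body cooling in vacuum: dT/dt = -eps*sigma*A*T^4 / (m*cv)."""
    T, t = T0, 0.0
    while T > T1:
        T -= eps * sigma * A * T**4 / (m * cv) * dt
        t += dt
    return t

# The same energy is lost either way, but the *rate* (and hence the time taken)
# scales with the emissivity -- a quantity that heat capacity cannot supply:
t_black, t_gray = time_to_cool(1.0), time_to_cool(0.01)
print(Q, t_black, t_gray)  # the eps = 0.01 gas takes ~100x longer to cool
```

This is the substance of the objection: classical thermodynamics hands you Q, but the cooling time requires the emissivity, which comes from spectroscopy.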

on December 24, 2013 at 8:18 pm | Pekka Pirilä

Bryan,

I hope that you understood that you didn’t give any real answer at all. If not, then there’s even less hope that this discussion will lead anywhere.

on December 24, 2013 at 10:44 pm | DeWitt Payne

Bryan,

Heat capacity at constant pressure is important to the structure of the atmosphere because it determines the value of the dry adiabatic lapse rate. It also helps to determine the cooling rate by radiation of a parcel of gas with a given mass, but only in conjunction with the rate of emission of radiation. The cooling rate of a kilogram of argon at 350K is going to be many orders of magnitude slower than the cooling rate of a kilogram of CO2 even though the difference in heat capacity between argon and carbon dioxide is less than a factor of two and gets smaller as the temperature drops. Classical thermodynamics will not tell you this.

on December 26, 2013 at 5:01 pm | Pekka Pirilä

The recent discussions with JWR and Bryan tell once more how hopeless it is to resolve issues by pointing out errors and weaknesses in a highly incomplete “theory”. Such “theories” redefine concepts, and do that in a way that allows new redefinitions when errors are shown with use of the earlier ones.

It’s, of course, impossible that an individual could create a fully specified alternative for the present theories. Thus the vagueness of the presentation is not an additional weakness of the theory, but rather a factor that makes constructive discussion impossible, when it’s in some way accepted that the rules of discussion are set by the creator of the new theory.

We know from our own learning experience that the only working alternative is to build the understanding on the work of past scientists, and to a major part on the knowledge learned from good textbooks or teaching based on such textbooks.

Discussing separate issues in the interpretation of established theories and results of scientific research is illuminating and productive, but discussion of these new theories seems to lead nowhere. If a person refuses to accept the standard approach, not by pointing out where he sees a problem, but by proposing something very different as an alternative, the only result seems to be an endless discussion.

on December 28, 2013 at 4:36 am | scienceofdoom

[Picking up a few different threads from the above discussion with JWR, from his most recent comment of December 26, 2013 at 2:58 pm which followed my comments and questions of December 22, 2013 at 11:35 am]

JWR,

Originally your point appeared to be:

And I picked this up because a common theme in the more physics-challenged blog world is “engineers know how to do radiation heat transfer calculations and climate science ignores it to invent new stuff because it doesn’t know how to do the basics”.

So my point was – engineers don’t use the equations you provide to do a calculation with one surface and an atmosphere. In fact, engineers who have to consider atmospheric absorption for, say, furnace calculations refer to Goody (for example) for how to do calculations of radiative transfer in the atmosphere. And engineers who have to deal with an atmosphere that absorbs have to make use of calculated absorptivities at a given temperature and a given CO2 concentration.

It is quite legitimate to attempt to use basic radiation exchange between surfaces by putting one layer of the atmosphere as a surface – so long as you understand the limitations and approximations involved.

Just don’t claim this is the “engineering way” because it’s not – or please cite a paper or textbook for how engineers do radiative transfer through the atmosphere.

on December 28, 2013 at 4:44 am | scienceofdoom

[Picking up a few different threads from the above discussion with JWR, from his most recent comment of December 26, 2013 at 2:58 pm which followed my comments and questions of December 22, 2013 at 11:35 am]

JWR said:

I gather from your comment that Christiansen 1883 or your interpretation of said text is in opposition to every radiative heat transfer textbook (all engineering textbooks of the last 50-100 years) and all physics textbooks for the last 100 years.

Therefore, I rest my case.

Physics textbooks – all of them – confirm that thermal emission of radiation is a real two way process, not a process where the hotter surface works out how much radiation to emit based on its understanding of the temperature of the colder surface.

I can help you rewrite your paper if you like.

You should start it something like:

Your readers would then understand.

I suspect that you do not even realize this is the claim of your paper.

And given that DeWitt has already replied on this specific subject explaining the error in your calculation, perhaps you can respond to him on this.

on December 28, 2013 at 5:05 am | scienceofdoom

[Picking up a few different threads from the above discussion with JWR, from his most recent comment of December 26, 2013 at 2:58 pm which followed my comments and questions of December 22, 2013 at 11:35 am]

I had asked:

Answer by JWR:

What does this mean? You don’t know? You don’t understand the question?

Your earlier comment: “It is shown there that the heat exchange by radiation is an one-way traffic from the higher temperature to the lower temperature.” states – in opposition to a century of uncontroversial basic physics – that radiation is NOT a two way process (see note 1).

You derived the Schwarzschild equations, which rely on the well-proven fact that there IS a two-way process.

Your paper before last (I haven’t checked this one) just blithely claims that absorption of radiation by the atmosphere is only absorption of the NET radiation, not absorption by radiation in any direction. This is in OPPOSITION to the Schwarzschild equations and to all experimental spectroscopy for the last 100 years.

At this point I believe readers who have stayed this far (if there are any) and struggle with radiative physics can draw comfort – there is a point to this paper.

People who comment favorably on this paper, and blogs who discuss it without pointing out these obvious “smack around the side of the head with a large brick” shortcomings should be avoided if the intent of the reader is to learn anything about reality.

JWR,

It doesn’t seem that you understand the first point about the subject you are writing a paper on. Your paper is not about Finite Element Analysis, of which you probably have some understanding.

It needs correct equations to go into an FEA.

It needs an understanding of how to derive one equation from another, what constitutes proof, and what constitutes assumptions.

I cannot help you.

[Note 1 – Just to be clear, radiation goes in all directions. As explained in this article, in the atmosphere the plane parallel assumption works pretty well, and therefore the problem can be resolved down to a manageable set of equations. That is why we refer to the problem as two way (up and down) – it is shorthand terminology.]

on January 2, 2014 at 9:27 am | JWR

@de Witt

Thank you for correcting me: I forgot to take into account the reflection in a statement about Prevost. It gave me a hint to find a more elegant proof of the Christiansen law of 1883.

As usual we write theta = sigma*T^4; that makes things easier to edit.

Considering two surfaces with theta1 and theta2 and with emissivities eps1 and eps2, we can write for a hypothetical q1 (in the Prevost sense, from 1 in the direction of 2) and a hypothetical q2 (from 2 in the direction of 1):

q1 = eps1*theta1 + (1-eps1)*q2 (1)
q2 = eps2*theta2 + (1-eps2)*q1 (2)

The reflection (1-eps1)*q2 for surface 1 is taken into account, and in the same way for surface 2, (1-eps2)*q1.

We have two simultaneous linear equations for q1 and q2 which can be solved analytically. No need for iterations!

With a = 1, b = -(1-eps1), c = -(1-eps2), d = 1:

│ a b ││q1│   │eps1*theta1│
│ c d ││q2│ = │eps2*theta2│

Solution:

│q1│           │ d -b││eps1*theta1│
│q2│ = (1/det) │-c  a││eps2*theta2│

det = a*d - b*c = eps1 + eps2 - eps1*eps2 (3)

For the two hypothetical fluxes q1 and q2 we obtain:

q1 = (eps1*theta1 + eps2*(1-eps1)*theta2)/det (4)
q2 = (eps1*(1-eps2)*theta1 + eps2*theta2)/det (5)

The real heat flux from 1 to 2, for theta1 > theta2:

q(1→2) = q1 - q2 = eps12*(theta1 - theta2) (6)

1/eps12 = 1/eps1 + 1/eps2 - 1 = det/(eps1*eps2) (7)

This is the relation from Christiansen from 1883, including reflection. For theta1 = theta2 we find q(1→2) = 0.
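Equations (1)-(7) can be checked numerically: solve the 2x2 system for the hypothetical fluxes and compare their difference with the Christiansen form. The temperatures and emissivities below are illustrative values, not from the comment:

```python
import numpy as np

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def prevost_fluxes(T1, T2, eps1, eps2):
    """Solve equations (1)-(2) for the two hypothetical fluxes q1, q2."""
    th1, th2 = sigma * T1**4, sigma * T2**4
    A = np.array([[1.0, -(1.0 - eps1)],
                  [-(1.0 - eps2), 1.0]])
    rhs = np.array([eps1 * th1, eps2 * th2])
    return np.linalg.solve(A, rhs)

T1, T2, eps1, eps2 = 300.0, 280.0, 0.9, 0.6
q1, q2 = prevost_fluxes(T1, T2, eps1, eps2)

# The net flux should match the Christiansen form, equations (6)-(7)
eps12 = 1.0 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
q_net = eps12 * sigma * (T1**4 - T2**4)
print(q1 - q2, q_net)  # identical to rounding
```

Note that both individual fluxes q1 and q2 are non-zero; only their difference takes the (theta1 - theta2) form, which is the two-way picture the thread keeps returning to.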

The important lesson is that we always find the combination (theta1-theta2): for emission alone, for a combination of emission and reflection, and so on. It reflects the second law: the heat flux is zero when the temperatures are equal.

In case of an atmosphere, discretized in N levels, we have N*(N-1)/2 pairs (theta(i), theta(j)). In order that the second law is satisfied, it is heuristic that the physical phenomena can be described by fe(i,j)*(theta(i)-theta(j)). That is what the stack model is doing: defining the heat transport by radiation between pairs of layers. The concept of finite elements gives an elegant and transparent way to introduce pairs. The physics consists of the determination of the factors fe(i,j).

And if we want to introduce wave-number dependent Planck functions then we consider for each wave number k: fe(i,j,k)*(B(T(i),k) - B(T(j),k)). In this way, for T(i) = T(j), the contribution is zero for each wave-number interval k.

Once more, my apologies for the slip of the pen in forgetting the contribution of reflection in a statement concerning Prevost. The slip of the pen was towards the end of the paper.

What about the other examples in ref 3? In the two slabs separated by a vacuum with eps1 = eps2 = 1, fluxes and temperatures are continuous at the borders between the slabs and the vacuum; there is no place for absorption of the Prevost fluxes between the two slabs and emission of the heat back by means of back-radiation.

And what about the stack of black slabs in ref 1, which in the two-way formulation absorb a huge amount of heat? Like the Schwarzschild procedure described in “the code” and in “the equations” of SoD, where a too high absorption is noticed.

on January 2, 2014 at 12:09 pm | Pekka Pirilä

JWR,

You must note that in a multilayer model you have:

1) energy transfer between all pairs of layers

2) energy transfer between the surface and each layer

3) energy loss from each layer and surface to space

4) all of the above are influenced by absorption in intervening layers

5) all the above have wavelength and azimuthal angle dependent coefficients. Nothing can be calculated correctly by Stefan-Boltzmann law, because the wavelength dependence acts in a way that makes it necessary to use Planck’s law for emission as well as wavelength dependent emissivities/absorptivities for all layers. Only the surface may be considered to be close enough to gray without essentially distorting the results.

Trying to do that with one way heat transfer is essentially bound to fail, as it cannot explain the radiative warming of gas layers by the sum of radiation from below and radiation from above. (Perhaps someone may propose an extremely complex description that succeeds in that, but why should anyone bother when the standard explanation based on two way radiative heat transfer explains everything correctly in a simpler, more intuitive – and at least in normal thinking physically more correct – way.)

(I added the words “normal thinking” in the above, because I could imagine that a formally correct alternative could be presented, but that would give the same final results.)

on January 2, 2014 at 9:32 am | JWR

@SoD

I have already answered de Witt and made my apologies for the statement that in case of different emission coefficients the Christiansen law would indicate that the heat transport by radiation is a one-way traffic. The heat transport is a one-way traffic, but it does not follow from the Christiansen law. I made that statement in my reference [3] at the end of the paper. I suppose that de Witt agrees with the other conclusions of my reference [3].

As concerns your remarks: in my last paper I repeat equations which have been derived in earlier papers, with a finite difference scheme in [1] of December 2012 and with a FEM technique in [2] of April 2013.

My point is that heat transport is proportional to (theta(i)-theta(j)), and in a wavelength dependent analysis it is proportional to (B(T(i)) - B(T(j))) for each wave-number interval.

In reference [1], I show that with the two way heat transport with Prevost fluxes, a stack of 100 slabs absorbs in the lower slab 100 times the amount that the one way formulation gives.

In your “code” I see that you also absorb what you call the back-radiation.

I try to summarize my opinions so that we can polish up the discussion.

1) The Schwarzschild procedure looks like the stack model. I use in that stack model a viewfactorF. I have found that the Schwarzschild procedure, as I found in your “code” and “equations”, can also be written by means of a viewfactorS matrix. In the paper I give a 3D picture where I compare the two matrices. Needless to repeat that I think that the viewfactorF is more correct. It might be that Schwarzschild used his method for cases where the difference did not matter.

2) Back-radiation which is calculated by the Schwarzschild procedure (multiplying by (1-f) in the downward direction) looks like the “back-radiation expression” of the stack model based on a one-way heat transport by radiation. In the stack model it is not a flow of heat; it is the sum of the terms with a negative sign of the LW surface flux. In the paper it is always indicated by highlighting those terms.

3) The big objection to SoD’s code and equations, and this has nothing to do with the smaller mistake discussed under 1), is the huge absorption which SoD and other IPCC authors and bloggers conclude from what they call the fundamental physics. The stack model, which I sometimes call the chicken wire model, does not show these absorptions, for a temperature distribution corresponding to the environmental lapse rate. It has been validated in [1] by comparison with the K&T type of diagrams, however, without the back-radiation.

on January 2, 2014 at 6:14 pm | DeWitt Payne

JWR,

I don’t have the patience of SoD to wade through all your math to find all your mistakes. Suffice it to say that if you end up with energy flows in and out of individual layers not balancing using the two way approach, you’re making mistakes somewhere.

In the atmosphere, it’s well known that radiation is not the only method of energy transfer. In fact only about 40%, on average, of the net energy flow from the Earth’s surface is by radiation. The rest is by latent and sensible heat transfer, convection for short. In the TFK09 energy balance, 97 W/m² of the 161 W/m² of global annual average incoming solar radiation absorbed by the surface is lost by convection and only 63 W/m² is lost by radiation. This leaves a radiative imbalance at the surface of ~1 W/m², so energy must be accumulating. We see that, in fact, it is, although at a lesser rate, from the measured increase in ocean heat content over the years.

If you try to create a purely radiative system with an opaque surface and a partially transparent atmosphere, you will get a temperature discontinuity at the surface. The surface temperature will be much warmer than the atmosphere immediately above it. And the temperature in the atmosphere will decline at a rate higher than the real atmosphere or even the dry adiabatic lapse rate.

Because most of the convective energy transfer in the atmosphere is by latent heat transfer and the scale height of water vapor in the atmosphere is only 2 km compared to 8 km for dry air, convective flux declines rapidly with altitude and is effectively zero at the tropopause.

on January 4, 2014 at 9:32 pm | JWR

@de Witt

You are saying exactly what I am saying!

From the 168 W/m^2, 109 is by convection, 52 through the window and 6 by LW radiation into the atmosphere.

You seem to agree with Bryan and myself!

I suggest reading my papers from December 2012 (ref [1] in my last paper), from April 2013 (ref [3] in my last paper), and my last paper of December 2013.

I am not claiming to give the exact figures, I only claim to find figures which are in the right ball game, as a friend of Bryan described it.

Since you now argue that the heat evacuation from the surface to higher layers is by convection, which I have been telling you for more than a year, you have the same difficulties as I do with the SoD approach based on what is claimed to be “fundamental Schwarzschild physics”.

I wish you all the best for 2014.

on April 5, 2014 at 1:22 am | RW

SoD,

“The intensity at the top of atmosphere equals.. The surface radiation attenuated by the transmittance of the atmosphere, plus.. The sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere”

How does this relate to the net transmittance (or net transparency) of the whole mass of the atmosphere from the surface all the way through to the TOA? This seems to be the critical point that is eluding me for some reason.

I understand that each layer emitting up ultimately has a fraction of its power absorbed and transmitted to space. I just don’t understand how the effects of each layer are summed together and somehow quantifies the net transparency of the power radiated from the surface (i.e. the fraction of surface emitted power that is transmitted to space and the fraction that is absorbed by the atmosphere).

on April 5, 2014 at 7:24 am | Pekka Pirilä

RW,

The relationship between the intensity at TOA that results after all absorption and emission within the atmosphere and the transmittance of atmosphere from surface to TOA is complex.

It’s simple to calculate what happens to radiation from the surface in each thin layer of the atmosphere. It’s only a little more difficult to calculate what that thin layer does to the total intensity of upwards radiation, given the intensity of radiation that enters that layer from below and the temperature of that layer. In both calculations the radiation from below must be known in detail – not only the total intensity, but separately the intensity for each wavelength and each azimuthal angle of the direction of the radiation.

These two differential effects of a thin layer are also rather simply related for every wavelength and azimuthal angle separately. Summing over all wavelengths and azimuthal angles is enough to make the relationship complex.

The relationship involves in addition the temperature of the layer, and this temperature depends on altitude.

To conclude: There is a rather simple relationship between quantities that are inside a multiple integral, but that relationship depends strongly on the variables to be integrated over. When the integration has been done, the results are not related in a simple manner. Increasing the CO2 concentration affects both in the same direction, but with a very different strength that can be determined only by a detailed calculation of the type described in the thread Visualizing Atmospheric Radiation – Part Five – The Code and the related posts of that series.

on April 5, 2014 at 8:54 am | scienceofdoom

RW,

It is great to see you wrestling with conceptual problems surrounding radiative transfer.

When I first tried to understand radiative transfer I had many conceptual problems as well.

Many of these articles are written with the aim of giving insight into a complex problem where multiple variables interact in different (and non-linear) ways.

Being mathematically correct is essential. But whether they succeed in providing any enlightenment is more the concern.

This is always a challenge.

Let me suggest a few ways of thinking about the problem.

1. Suppose that net transmittance is not really that important. Suppose if you never knew the value of net transmittance it wouldn’t matter.

Suppose that, given:

a) the surface temperature and therefore the surface upward flux

b) the temperature profile in the atmosphere

c) the concentration profile of various GHGs

– you could determine the TOA flux. And suppose that you could graph out the change in TOA flux as the other values changed, so you could get a bit of a feel for how TOA changed with more water vapor, colder atmospheres, more CO2, a higher surface temperature..

Would that be useful even if you never knew this interesting value of net transmittance?

2.

There is no simple relationship between the radiative transfer and the net transmittance. This is because there is a very strong dependence of TOA flux (the dependent variable) on the atmospheric temperature. So for the same surface temperature and the same GHG concentrations you can get very different TOA flux as the atmospheric temperature profile changes.

So with an isothermal atmosphere, the TOA flux stays the same regardless of GHG concentrations, ie regardless of transmissivity. Because emission and absorption are equal at each point in the atmosphere.

For very high transmissivities, the atmospheric temperature profile won’t have much effect, because all of the surface radiation escapes to the TOA without being absorbed, so it doesn’t much matter how much is emitted (update to clarify – it doesn’t matter how the temperature changes because the emission change will be small in relation to the total TOA flux – the emissivity of the atmosphere is low in this case).
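The isothermal case can be verified with a toy gray-layer march – a crude stand-in for the Schwarzschild integration, where the layer count, transmissivities and lapse rate below are illustrative assumptions:

```python
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def toa_flux(T_surface, layer_temps, t):
    """March the surface flux upward through gray layers, Schwarzschild-style:
    each layer transmits a fraction t and adds its own emission (1-t)*sigma*T^4."""
    I = sigma * T_surface**4
    for T in layer_temps:
        I = I * t + (1.0 - t) * sigma * T**4
    return I

Ts = 288.0
# Isothermal atmosphere: the TOA flux equals the surface flux whatever t is,
# because each layer emits exactly as much as it absorbs
print(toa_flux(Ts, [Ts] * 10, 0.5))  # = sigma*288^4, ~390 W/m^2

# Colder aloft: the TOA flux falls below the surface flux, and falls further
# the more strongly the layers absorb (smaller t)
cold = [Ts - 6.5 * k for k in range(1, 11)]  # crude 6.5 K/km lapse, 10 layers
print(toa_flux(Ts, cold, 0.9), toa_flux(Ts, cold, 0.5))
```

Playing with `t` and the temperature list reproduces both points above: GHG concentration only matters to the TOA flux because the atmosphere is colder than the surface.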

3. The net transmittance is only one part of the formula – the first part (the blue bit in the equation). The other bits can dominate.

The local heating or cooling of each part of the atmosphere is determined by absorption of solar radiation + convective energy received from below + the absorption of longwave radiation received from below – radiative cooling to space.

Each part of the atmosphere may not be in balance. If the net effect locally is negative then that part of the atmosphere cools down. And the converse.

When you see heating curves (really cooling curves), as shown in Part Eleven – Heating Rates, you start to appreciate that the atmosphere has to be in a state of radiative cooling because it receives convective energy from the surface, and you start to appreciate that locally each part of the atmosphere has quite unique cooling attributes dependent on the amount of water vapor and the temperature of the atmosphere.

Hope some of this helps. It might not.

I found two things to be very helpful in gaining insights –

a) reading more than one textbook, because each explanation approaches the topic differently and multiple explanations can give the conceptual insight that is missing.

b) playing around with some simple models or simple graphs – what happens if I do this – what is the result? Then follow the cause and effect.

Just my $2 worth.

on April 5, 2014 at 10:11 am | Pekka Pirilä

Playing around could be started by a model that has

– two or three layers in the atmosphere

– two or three wavelengths or bands of IR with very different absorptivities/emissivities. By very different I mean that the transmissivity of one band in a single layer is perhaps 10 % while that of another is 90 %.

Playing would then mean varying those numbers and varying temperatures of the layers compared to the surface.
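A minimal Python sketch of the kind of toy model described here; the temperatures, band split and transmissivities are illustrative placeholders to be varied while playing:

```python
# Toy radiative model: two atmospheric layers, two spectral bands with very
# different single-layer transmissivities (10% vs 90%). All numbers are
# illustrative, not measured values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def planck_flux(T, band_fraction):
    """Flux a black surface emits into one band (crude: a fixed fraction of
    sigma*T^4, ignoring the shift of the Planck curve with temperature)."""
    return band_fraction * SIGMA * T**4

def toa_flux(T_surf, layer_temps, band_transmissivities, band_fractions):
    """March surface emission upward through each layer in each band.
    Each layer transmits a fraction t of what enters from below and adds
    its own emission (1 - t) * B(T_layer) in that band (Kirchhoff's law)."""
    total = 0.0
    for t_band, frac in zip(band_transmissivities, band_fractions):
        flux = planck_flux(T_surf, frac)
        for T_layer in layer_temps:
            flux = t_band * flux + (1.0 - t_band) * planck_flux(T_layer, frac)
        total += flux
    return total

# Nearly opaque band (t = 0.1) vs nearly transparent band (t = 0.9), each
# carrying half the surface emission; layers colder than the surface:
print(toa_flux(288.0, [260.0, 230.0], [0.1, 0.9], [0.5, 0.5]))
# With an isothermal atmosphere at the surface temperature, the TOA flux
# equals the surface flux regardless of transmissivity:
print(toa_flux(288.0, [288.0, 288.0], [0.1, 0.9], [0.5, 0.5]))
```

Varying the layer temperatures and transmissivities then shows directly how the TOA flux depends on the temperature profile, not just on the GHG amount.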


on January 1, 2017 at 10:13 pm | Frank

The discussion in this pdf provides an unusually clear explanation of the difference between thermodynamic temperature and Planck temperature and then explains why the Schwarzschild equation (and Kirchhoff’s Law) depends on the existence of LTE.

http://www.inscc.utah.edu/~tgarrett/6020/Radiation/LTE.pdf

on January 2, 2017 at 10:10 pm | RW

Interesting read, but this gets back to a previous discussion about variants of the manifestation of LTE, in particular that of a radiating gas such as the Earth’s atmosphere. No doubt for Kirchhoff’s law to be satisfied LTE is required, i.e. for the radiant energy flux absorbed to equal the radiant flux emitted (or for the emissivity to equal the absorptivity). And this is a fundamental tenet of the established view on atmospheric radiation.

However, at least one wiki source claims the LTE manifestation (for a gas) does not necessarily require the absorbed EM radiation be converted, i.e. transferred, into the linear kinetic energy of the gas molecules in motion, and subsequently back into emitted EM radiation, for LTE to still exist. This was the subtle, nuanced point GW was making sometime back that everyone dismissed.

As a precursor, the way LTE is being defined in this field and in that document you cite is equal distribution of all storage modes by collisions, including absorbed EM radiation in this case. Generally, this is the assumed physical manifestation of LTE, and its physics applies the same way as it would in the liquid or solid. Right? Or at least there is no implied differentiation.

The smoking gun that demonstrates this manifestation of LTE is not (or is at least arguably not) occurring in the atmosphere is the following hypothetical experiment:

Let’s say we have a device that can emit a stream of IR photons of only one wavelength, and we point the device toward a container with liquid water in it (in a state of thermal equilibrium) so the stream of photons is absorbed by the liquid water, causing an energy imbalance. That is, the water is receiving more (net) energy flux than it’s radiating away, causing the water to warm and radiate more. Is the additional radiation emitted by the water (from the warming of the water) all re-radiated in the same wavelength as the single wavelength emitting device? Or is it re-radiated as a broad band spectrum based on the increased temperature of the water according to Planck’s law?

If your answer is no to the former and yes to the latter, do you then agree that what is occurring is a process of narrow band absorption being converted into broad band (Planck) emission? Surely, the answer is no to the former and yes to the latter.

It’s my understanding talking to you (and others here and elsewhere) that it is thought that the same exact fundamental physical processes are at work in the gases of the atmosphere as they are (or would be) in liquid or solid so far as absorbed radiant energy being thermalized by collisions and manifesting LTE. For a liquid or a solid, there is universally only one line of processes by which all absorbed forms of EM energy can be converted back into the EM form, and that is via broad band Planck emission (based on the temperature of the liquid or solid). That is, in a liquid or solid, the absorbed EM energy is universally converted entirely, i.e. thermalized, into the mechanical energy of molecules in motion and the only way back to EM form is via broad band (Planck) emission according to Planck’s law. There is no scenario, for a liquid or solid, where narrow band EM absorbed can be converted back into same narrow band EM emitted (yet this is what occurs in the atmosphere, except if absorbed by the water or ice in clouds). In short, if the absorbed IR in the atmosphere is being thermalized by collisions as claimed, GW is looking for there to be a conversion of narrow band absorption into broad band Planck emission, as it should be happening if the energy is being shared and transferred by collisions, but it’s not happening (and not matching what should be the wavelength intensities observed at the TOA if the conversion were occurring).

Now, the mainstream view has apparently gotten around this by applying Kirchhoff’s law to each wavelength independently. This is the only way they can accurately predict the correct spectrum. Now of course, the emitted spectrum is itself broadband in that it consists of multiple wavelength intensities, but those emitted intensities per wavelength are specifically proportional to the absorbed intensities per the same wavelength, right?

Now, whether this has any validity or not — I really don’t know, but this seemed to be the point and argument GW was making — if I understood it correctly (which I may not have). If it were true, nothing would change so far as how the IR intensity changes as it moves through the absorbing and emitting layers (predicted by the Schwarzschild eqn.), and it would get the same final result as the mainstream model would (and in fact, GW gets the same results). It would however mean that gases are not emitting according to their temperature, if by ‘temperature’ you specifically mean solely the direct result of the speed of its molecules in motion; however, the measured temperature and emission rates would still be the same as they are observed to be.

So what’s the big deal? What’s so spectacularly wrong? Even GW himself said his proposed dominant mechanism of emission by GHGs would only be a slight adjustment of established theory and not a complete re-write. And the gas and its subsequent emission are still in LTE.

The concept of ‘temperature’ in a thin radiating gas such as the atmosphere is a fuzzy one, both conceptually and in terms of emission rate as a result of measured ‘temperature’.

on January 2, 2017 at 11:00 pm | RW

Some quotes from your source:

“Translational energy is what we sense as temperature.”

“This energy is what we interpret as “temperature” in daily life (more on this later). It is the kinetic energy of the molecules that causes the pressure on our skin that we interpret as heat.”

In the case of a liquid or solid, then yes it is the kinetic energy of molecules in motion that we primarily sense as ‘temperature’, but in a radiating gas isn’t what we sense as ‘temperature’ always the combination of a flux of incident EM radiation and that of a locally present kinetic flux of molecules in motion? The same goes for a thermometer measuring ‘temperature’. If dipped in an opaque liquid, the measurement is essentially entirely that of the kinetic energy of molecules in motion in the liquid. If placed in the gases of the atmosphere, it’s always measuring a combination of flux of incident photons and locally present kinetic flux of molecules in motion. The thermometer cannot distinguish one from the other. Hence, the ambiguity of what constitutes so-called ‘temperature’ in the atmosphere.

I note there also seems to be some belief that the mechanism of emission in the atmosphere somehow affects how the IR intensity changes as it moves through the absorbing and emitting layers of the atmosphere. I know of no reason why that would be the case. The atmosphere as a whole mass passes more IR to the surface than out the TOA (about a ratio of 2 to 1) because the rate of emission decreases with height, which itself is independent of the mechanism initiating the emission. So even if what’s proposed were occurring, its effect on IR intensity at the surface, the TOA, or anywhere in between, would be zero.
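The claim that intensity depends only on the temperature profile and optical depths, not on what excites the emitters, can be illustrated with a crude layer-by-layer integration in the spirit of the Schwarzschild equation. This is a gray-atmosphere sketch and the temperature profile is made up:

```python
# The Schwarzschild equation dI/dtau = B(T) - I, integrated layer by layer.
# Nothing in the recursion refers to how the emitting states were excited:
# only T(tau) and the optical depths enter. All numbers are illustrative.
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def schwarzschild_upward(T_surface, layer_temps, layer_taus):
    """Upward flux-like intensity at TOA (W/m^2, gray atmosphere).
    Each layer attenuates by exp(-tau) and emits (1 - exp(-tau)) * sigma*T^4."""
    intensity = SIGMA * T_surface**4
    for T, tau in zip(layer_temps, layer_taus):
        trans = math.exp(-tau)
        intensity = intensity * trans + (1.0 - trans) * SIGMA * T**4
    return intensity

# Ten layers cooling from 285 K at 6.5 K per layer, equal optical depth each:
temps = [285.0 - 6.5 * k for k in range(10)]
print(round(schwarzschild_upward(288.0, temps, [0.2] * 10), 1))
```

Because emission decreases with height (colder layers), the TOA value comes out below the surface emission, while an isothermal profile reproduces the surface flux exactly.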

on January 2, 2017 at 11:56 pm | Mike M.

RW,

You wrote: “(yet this is what occurs in the atmosphere, except if absorbed by the water or ice in clouds)”.

That is entirely wrong. Up to that point, you seem to have made only one subtle error that led you astray. Broadband thermal emission is not the Planck spectrum. It is the Planck spectrum multiplied by the emissivity.

on January 3, 2017 at 1:20 am | DeWitt Payne

Sucked in again.

RW,

No.

Kirchhoff’s Law only requires that emissivity is equal to absorptivity. There is no such requirement that absorption equal emission except at thermodynamic equilibrium. But the atmosphere is not at thermodynamic equilibrium. The atmosphere emits more radiation than it absorbs directly. Approximate energy balance in the atmosphere requires net convective heat transfer from the surface.

on January 3, 2017 at 7:27 pm | Frank

RW: You have written far too many words to compose a sensible reply. However, in one spot above you wrote:

“In the case of a liquid or solid, then yes it is the kinetic energy of molecules in motion that we primarily sense as ‘temperature’, but in a radiating gas isn’t what we sense as ‘temperature’ always the combination of a flux of incident EM radiation and that of a locally present kinetic flux of molecules in motion?”

When LTE doesn’t exist, then two different CONCEPTS for temperatures are used – NOT A COMBINATION:

1) Thermodynamic temperature – proportional to mean kinetic energy

2) Planck Temperature – proportional to the fraction of molecules in an excited state. The rate at which photons are emitted depends on the number of molecules that are in an excited state – the Planck temperature.

When LTE exists, collisions redistribute energy between kinetic (translation) and molecular excited states (electronic, vibrational, rotational) much faster than photons are emitted. This creates a Boltzmann distribution of energy between translational and molecular excited states. In that case, Planck and thermodynamic temperature are the same.

Planck derived his law by POSTULATING a Boltzmann distribution of states. So Planck’s Law assumes that thermodynamic and “Planck temperature” are the same. His Law was based on one CONCEPT of temperature. (I don’t know why they put Planck’s name on a concept for temperature where a Boltzmann’s distribution doesn’t exist.)
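For concreteness, the Boltzmann factor that ties the excited-state population to the kinetic temperature can be evaluated for the CO2 bending mode near 667 cm^-1 (the 15 micron band). This is a sketch of the relative population only; degeneracy and higher levels are ignored:

```python
# Boltzmann factor exp(-E/kT): in LTE, the local kinetic temperature fixes
# the relative population of an excited state, regardless of what photons
# were recently absorbed. Sketch for the CO2 bending mode at ~667 cm^-1.
import math

H = 6.62607e-34   # Planck constant, J s
C = 2.99792e10    # speed of light in cm/s (to match wavenumber units)
KB = 1.38065e-23  # Boltzmann constant, J/K

def boltzmann_fraction(wavenumber_cm, T):
    """Relative population exp(-E/kT) of a state at energy h*c*nu."""
    energy = H * C * wavenumber_cm
    return math.exp(-energy / (KB * T))

# The same kinetic temperature fixes this ratio everywhere locally:
for T in (220.0, 250.0, 288.0):
    print(T, round(boltzmann_fraction(667.0, T), 4))
```

A few percent of molecules sit in the excited bending state at tropospheric temperatures, and that fraction, hence the emission, tracks the thermodynamic temperature when LTE holds.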

****** LTE exists everywhere in the atmosphere below 70 km. The atmosphere above 70 km is UNIMPORTANT to climate. OLR doesn’t change above 70 km. DLR from above 70 km is negligible. *****

So, you are both right and wrong. For climate (the purpose of this blog), the incident flux of radiation is unimportant because collisional excitation and relaxation are much faster than emission of photons. That means that thermodynamic and Planck temperature are equal – that the “flux of incident EM” hasn’t made Planck temperature any higher than the thermodynamic temperature. When discussing radiating atmospheres at this blog, you would avoid confusing yourself AND OTHERS by sticking with the practical principle: emission depends only on thermodynamic temperature. Many skeptics believe that the only way a molecule can emit a photon is to absorb a photon. They think photons are trapped, re-emitted, and conserved. They act as if the only concept that exists is Planck temperature (though they don’t use this term). This is insanity.

A blog on the thermosphere or interstellar gases would enjoy discussing the difference between Planck and thermodynamic temperature.

on January 4, 2017 at 5:53 am | RW

Frank,

“RW: You have written far too many words to compose a sensible reply.”

OK, maybe I was a little overly long winded. The rest of your post I did read a couple of times. I understand what you’re saying, but you’re really just more or less decreeing various things. There’s nothing inherently wrong with that per se, but how about a little healthy skepticism or open mindedness? It wouldn’t kill you (or anyone else here, BTW).

I think the best way to go about this is with baby steps, to avoid confusion or misunderstanding.

First of all, even if what I’ve proposed (or really what GW proposed) is the actual physical reality, several important things DO NOT change, i.e. do not differ from the mainstream view or mainstream model of atmospheric radiation, in anyway at all.

They are:

1) The final result produces the exact same macroscopic view of measured temperature and emissions (even down to the individual wavelength).

2) The Schwarzschild eqn. and what it predicts so far as how the IR intensity changes as it moves through the absorbing and emitting layers of the atmosphere holds 100% exactly the same. There is zero difference.

Is this clear from the outset?

on January 4, 2017 at 10:35 am | Frank

RW: I tried to pick one sentence that seemed to contain a valuable thought:

“but in a radiating gas isn’t what we sense as ‘temperature’ always the combination of a flux of incident EM radiation and that of a locally present kinetic flux of molecules in motion?”

In the macroscopic world, temperature seems to be a simple concept that can be measured with a thermometer. In a microscopic world of constantly colliding molecules, temperature is quite complicated. Check out the wikipedia article on temperature. One concept of temperature begins with the kinetic theory of gases and that pressure is produced by collisions with walls and that macroscopic temperature is proportional to the mean kinetic (translational) energy of the molecules. Another concept begins with entropy, in which dS = dq/T. Why is the T in the kinetic theory of gases the same T used in entropy? Statistical mechanics adds the idea that entropy is related to molecular disorder. And Planck’s Law provides a way to convert radiation intensity at any wavelength into a third idea of temperature. This third approach calculates a different temperature from the first two concepts when LTE doesn’t exist.

If you and George want to re-invent this area of physics and communicate with others, it helps to understand what is already known.

For climate change, the answer to your question is NO: You only need to think of temperature as kinetic energy; not kinetic energy plus radiation.

on January 4, 2017 at 4:01 pm | RW

Frank,

“If you and George want to re-invent this area of physics and communicate with others, it helps to understand what is already known.”

Sure. However, GW at least claims there’s not any radical new transformative knowledge in anything he claims, including this particular component on atmospheric radiation, LTE, etc., which he says would only constitute a very slight adjustment or refinement to already well established theory.

What this all boils down to BTW are subtle but significant nuances that relate to the GHE, the underlying physics driving it, the conceptualization of those physics, and how they relate to accurately estimating climate sensitivity to increased GHGs. Now it’s true that his methods derive very low sensitivity, as his best estimate for 2xCO2 is around 0.35C. Surely, I don’t think you or anyone would argue this is a physical impossibility. In reality, he actually derives the same feedback factor as Lindzen and Choi do, i.e. about 0.7C per 3.7 W/m^2 of forcing. It is brought down to 0.35C due to his claim of a factor of 2 error in the quantification of the initial forcing, so far as how it applies to surface warming. However, this claim has to do with the application of the RT calculated 3.7 W/m^2 — not the calculation itself, which he has done himself from scratch and also gets about 3.6-3.7 W/m^2 (which is the net increase in optical thickness looking up through the whole, converted into W/m^2, or the difference between the reduced IR intensity at the TOA and the increased IR intensity at the surface, calculated via the Schwarzschild eqn.).

Now, all this aside, let’s start with specifically how you (and the field) are defining LTE. Can you provide a clear and specific definition that we can work from?

on January 4, 2017 at 8:42 pm | Frank

LTE: A large group of molecules are in LTE when collisions transfer energy within the group faster than any other process (especially radiation) brings energy into or out of the group.

In LTE, the behavior of the group depends on their temperature (mean kinetic energy of the group), not their past history. The fact that some molecules in the group absorbed or emitted photons a few seconds (or milliseconds) ago is irrelevant if I know the group temperature. For molecules, the Boltzmann distribution determines how energy is partitioned within the group.
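A back-of-envelope check of this criterion for the lower atmosphere: compare the mean time between molecular collisions (from hard-sphere kinetic theory, air-like values) with the radiative lifetime of the CO2 bending state. The ~1 s lifetime is a commonly quoted order of magnitude, taken here as an assumption:

```python
# LTE criterion check: collisions must redistribute energy much faster than
# radiation removes it. Hard-sphere collision time vs an assumed ~1 s
# radiative lifetime for the CO2 bending state. Molecular diameter and mass
# are generic air-like values, not precise data.
import math

KB = 1.38065e-23  # Boltzmann constant, J/K

def collision_time(pressure_pa, T, diameter_m=3.6e-10, mass_kg=4.8e-26):
    """Mean time between collisions for a hard-sphere gas."""
    n = pressure_pa / (KB * T)  # number density, 1/m^3
    v_mean = math.sqrt(8 * KB * T / (math.pi * mass_kg))
    rate = math.sqrt(2) * math.pi * diameter_m**2 * n * v_mean
    return 1.0 / rate

t_coll = collision_time(1.0e5, 288.0)  # surface pressure and temperature
radiative_lifetime = 1.0               # seconds, assumed order of magnitude
print(t_coll)                          # ~1e-10 s: collisions dominate
print(radiative_lifetime / t_coll)     # billions of collisions per emission
```

With billions of collisions per radiative event near the surface, the condition in the definition above is overwhelmingly satisfied in the troposphere.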

on January 4, 2017 at 8:46 pm | Frank

RW: As best I can tell, George is interpreting data in light of existing theories, not creating new theories. I don’t remember much except that his fit depends on what happens at low temperature in polar regions. Most of the planet is 270-300 K and the data is very noisy in that relevant range.

Lindzen’s low climate sensitivity is derived mostly from the large temperature change during large El Nino events. His conclusions depend on accepting a lagged relationship between reflected SWR and temperature. Current temperature correlates with less reflection of SWR today (positive feedback) and more reflection of SWR 3-4 months in the future (negative feedback). Both correlations are weak (R2 = 0.25) and unconvincing. Large El Ninos ARE associated with some immediate negative feedback in the LWR channel, and the relationship between TOA OLR and temperature is robust. In any case, El Nino warming (focused in equatorial regions) is a dubious model for global warming (with polar amplification).

In both cases, a graph that plots W/m2 vs K produces something that has the units of the reciprocal of climate sensitivity (W/m2/K), but there may be little connection between the slope and a climate sensitivity relevant to global warming – a rise in temperature everywhere, with more at higher latitudes.

on January 5, 2017 at 4:18 pm | RW

Frank,

“LTE: A large group of molecules are in LTE when collisions transfer energy within the group faster than any other process (especially radiation) brings energy into or out of the group. In LTE, the behavior of the group depends on their temperature (mean kinetic energy of the group), not their past history. The fact that some molecules in the group absorbed or emitted photons a few seconds (or milliseconds) ago is irrelevant if I know the group temperature. For molecules, the Boltzmann distribution determines how energy is partitioned within the group.”

OK, this is a good start. Now, will you further say and/or agree that this manifestation of LTE you’re describing is independent of the material, i.e. the matter, it’s applied to? That is, the physics occurring are the same whether the matter is in the form of a liquid, a solid, or a gas?

on January 6, 2017 at 3:36 am | Frank

RW: This definition of LTE should work for all materials – but the converse is not true – all materials are not in LTE. Normally we can only get significant amounts of visible light from materials that are above 1000 K (Planck’s Law): the sun and tungsten filaments. However, we have learned to create devices that are not in LTE: “fluorescent” and LED lights and lasers, for example.

on January 8, 2017 at 6:27 pm | RW

BTW, for anyone interested, we, i.e. myself and Frank (and many others), are now discussing a lot of this stuff over here, where GW has just recently posted a new guest essay:

https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/comment-page-1/

on January 10, 2017 at 5:07 am | RW

Frank,

“RW: This definition of LTE should work for all materials – but the converse is not true – all materials are not in LTE.”

OK good. This is a clear starting point from which to work. Now, I want to move in very, very baby steps here and really clarify terms being used in a lot of detail before applying such terms to the discussion.

I would like to avoid a repeat of the scenario where it took 5000 posts in 50 different threads over 5 years to establish that absorption ‘A’ is IR optical thickness looking up, and transmittance ‘T’ is 1-‘A’.

on January 3, 2017 at 3:53 pm | RW

DeWitt,

“No. Kirchhoff’s Law only requires that emissivity is equal to absorptivity. There is no such requirement about absorption and emission except at thermodynamic equilibrium.”

Right, but in order for that to be true it requires the condition of LTE.

“But the atmosphere is not at thermodynamic equilibrium. The atmosphere emits more radiation than it absorbs directly. Approximate energy balance in the atmosphere requires net convective heat transfer from the surface.”

Yes, but it is said to be in *local* thermodynamic equilibrium, i.e. LTE. At least for the bulk of the troposphere.

on January 3, 2017 at 4:11 pm | Mike M.

RW wrote: “in order for that to be true it requires the condition of LTE.”

No, emission = absorption requires equilibrium throughout the system, not just LTE in part of the system.

Consider a blackbody in LTE at some high temperature. It is located somewhere out in interstellar space. Clearly, emission is greater than absorption.
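Quantitatively, via Stefan-Boltzmann (a sketch; the 1000 K body temperature is an arbitrary illustrative choice):

```python
# The hot blackbody in interstellar space, made quantitative: against a
# ~2.7 K background it emits vastly more than it absorbs, while still being
# in LTE internally. The 1000 K value is arbitrary and illustrative.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def net_emission(T_body, T_background=2.7):
    """Emitted minus absorbed flux per unit area for a blackbody, W/m^2."""
    return SIGMA * (T_body**4 - T_background**4)

print(net_emission(1000.0))  # ~5.67e4 W/m^2: emission >> absorption
```

So LTE alone does not force emission to equal absorption; that equality needs equilibrium of the whole system, which is the point being made here.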

on January 3, 2017 at 4:36 pm | RW

Mike,

We’re talking about *local* thermodynamic equilibrium, and thus corresponding local absorption and emission. Sorry if I didn’t make this clear.

on January 4, 2017 at 12:54 am | Mike M.

RW: “the end result is B, i.e. different than A. Basic logic dictates there must be some difference of physical processes occurring that accounts for this.”

No, what is needed is a difference in properties. A and B might be at different temperatures, therefore producing different emissions. Or they might have different emissivities.

I can see through glass, but not through wood. Same physics, different result.

on January 4, 2017 at 5:34 amRW“RW: “the end result is B, i.e. different than A. Basic logic dictates there must some difference of physical processes occurring that accounts for this.”No, what is needed is a difference in properties. A and B might be at different temperatures, therefore producing different emissions. Or they might have different emissivities.I didn’t specify a specific temperature or emissivity, because it’s not required to illustrate the point, i.e. the difference. All matter in LTE that subsequently absorbs more photons than it’s currently emitting, does not have an infinite capacity to store this additional absorbed energy. It must eventually convert at least some of that absorbed EM energy back into EM energy via increased photonic emission. Of course some of the additionally absorbed EM energy can be lost via non-radiant means, but the point is the same.

“I can see through glass, but not through wood. Same physics, different result.”

No, different physics, different result. The end result is not the same. Otherwise you would be able to see through both (and equally well).

on January 3, 2017 at 4:34 pm | RW

Mike,

“You wrote: ‘(yet this is what occurs in the atmosphere, except if absorbed by the water or ice in clouds)’. That is entirely wrong.”

How so? Then why the need to apply Kirchhoff’s law to each wavelength independently in order to accurately predict the emitted spectrum? You don’t have to do this with the water or ice in clouds, right?

“Up to that point, you seem to have made only one subtle error that led you astray. Broadband thermal emission is not the Planck spectrum. It is the Planck spectrum multiplied by the emissivity.”

Right, but the point is you cannot predict the correct spectrum based on Planck’s law per its temperature and emissivity like you can for a liquid or solid. You have to scale the emissivity per each individual wavelength’s absorptivity in order to predict the correct spectrum emitted. If the absorbed radiant energy was being converted into the mechanical energy of molecules in motion via collisions with other gas molecules (as claimed), as it is in a liquid or solid at LTE, you could (or should be able to) predict the spectrum based on Planck’s law in the same way. But you can’t. GW is seeing this alone as overt falsification that the absorbed radiant energy is being transferred by collisions to the non-GHG molecules; however, he does not think this means the gas and emission from the gas is therefore non-LTE (and in a seeming violation of Kirchhoff’s law), as it appears the field of climate science at least does.

As long as the linear kinetic energy of the GHGs and non-GHGs are equalized amongst each other by collisions, LTE still exists even if the absorbed photons only have very little of their energy transferred via collisions to non-GHG molecules. Now, even if this scenario is true, collisions can still trigger emissions to some degree — it’s just that it isn’t the dominant mechanism triggering emissions from GHG molecules. Now, it’s important to note that GW claims there is no real or clear mechanism by which the energy of an absorbed photon by a GHG molecule, whose energy is stored as internal vibration energy, will transfer this energy upon collision with another GHG molecule or non-GHG molecule into linear kinetic energy, in the way it does in a liquid or solid. So it’s not claimed the collisions don’t occur — they do, but only that there is largely no transfer of energy from internal vibration to linear kinetic. As long as the GHGs and non-GHGs have their linear kinetic energy equalized by collisions, Kirchhoff’s law can still be fully satisfied under such conditions; and thus so can the condition of LTE (or vice versa).

Here is the wiki excerpt on this:

“It is important to note that this local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.”

So again, even if a case can be made here for this subtle difference in the manifestation of LTE, what’s the big deal? This, even if true, will have no effect on how the IR intensity changes as it moves through the absorbing and emitting layers of the atmosphere. This is what everyone here seemed to think, as I recall.

on January 3, 2017 at 4:51 pm | RW

It’s important to note here that the central tenet of LTE in the mainstream’s view on atmospheric radiation is fully agreed to be correct. It’s only being argued that the physical manifestation of the condition of LTE itself established by the mainstream view is incorrect, and that it has gotten around this incorrect physical manifestation of LTE by applying Kirchhoff’s law to each wavelength independently. This, it’s claimed, compensates for the error of not modeling the actual physics occurring and allows the mainstream model to get the correct answer.

But again, even if true, this is not going to radically transform our understanding of the universe as it seemed many of you were implying. It would have no effect on how the intensity changes, though it *may* reveal some nuances about dynamics.

on January 3, 2017 at 4:53 pm | RW

Here is the wiki link, BTW:

https://en.wikipedia.org/wiki/Thermodynamic_equilibrium#Local_and_global_equilibrium

on January 3, 2017 at 5:57 pm | Mike M.

RW,

You wrote: “the point is you cannot predict the correct spectrum based on Planck’s law per its temperature and emissivity like you can for a liquid or solid.”

No, it is exactly like a liquid or a solid. Since you won’t listen, there is no point in saying more.

on January 3, 2017 at 11:42 pm | RW

Mike,

“No, it is exactly like a liquid or a solid. Since you won’t listen, there is no point in saying more.”

Then why the need to apply Kirchhoff’s law to each wavelength independently in order to predict the correct emitted spectrum?

It’s because the emission rate of each wavelength is proportional to the absorptive rate of each wavelength, right? Where LTE exists.

This is not the same process that occurs in a liquid or solid, where even if a single wavelength IR flux is absorbed affecting the liquid’s or solid’s temperature, i.e. increasing it, the incremental increased IR emitted flux due to the warming (in LTE) is NOT proportional to the single wavelength flux being absorbed. That is, increased IR emitted flux will NOT be solely an increase in the absorbed single wavelength.

You have scenario 1, where the process of absorbed IR energy is converted back into IR emitted flux and the end result is A, and you have scenario 2, where the process of absorbed IR energy is converted back into IR emitted flux and the end result is B, i.e. different than A. Basic logic dictates there must be some difference of physical processes occurring that accounts for this.

on January 3, 2017 at 11:55 pm | RW

How do you not see that it’s being claimed the same set of physical processes are at work in the gases of the atmosphere as they are (or would be) in a liquid or solid, manifesting LTE via the transfer by collisions of absorbed IR energy into the kinetic energy of molecules in motion and then back into IR emitted energy, yet one results in a different end result than the other, i.e. the A and B end results from scenarios 1 and 2 I outlined above?

If the physics are the same, as they are claimed to be, this would not be the case in those scenarios.

on January 4, 2017 at 12:57 am | Mike M.

RW,

You need to think about the meaning of “local” in “local thermodynamic equilibrium”. And the fact that the mean free path of photons can be very different from the mean free path for molecules.

on January 10, 2017 at 10:43 pm | Frank

RW wrote: “I would like to avoid a repeat of the scenario where it took 5000 posts in 50 different threads over 5 years to establish that absorption ‘A’ is IR optical thickness looking up, and transmittance ‘T’ is 1-‘A’.”

I don’t want to participate in any further discussion where absorption and transmittance are used in connection with an atmosphere that emits a significant amount of radiation. Nor would I want to even if you used the technically correct term absorbance. So don’t write for me.

One reason: George White’s recent Figure 2 at WUWT with A = 0.75 doesn’t produce anything close to the observed value for DLR. He and you are applying the wrong physics (the S-B equation) and getting the wrong answer.

on January 10, 2017 at 11:52 pm | RW

Frank,

I guess you missed my reply to you here:

https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/comment-page-1/#comment-2393978

His Ps*A/2 is NOT a quantification of actual DLR at the surface. It would surely be wrong if it were.

The problem is you don’t seem to be making any effort to understand the methods he’s employing. Like so many others, you’re just more or less going into ‘shout down’ mode. He’s not modeling the actual thermodynamics, or the thermodynamic path manifesting the energy balance. It would all surely be spectacularly wrong if that were what he were doing.

It’s actually not all that hard to understand the foundation behind black-box-derived equivalent modeling and system analysis, but you have to step back and say to yourself, “You know what, I don’t understand this thing; maybe this guy really has something here and I’m missing it.”

Yes, it’s counterintuitive in the sense that what you’re looking at in the derived model is not what is actually happening. Instead, the claim is only that the flow of energy in and out of the whole system would be the same if it were what was happening. Absolutely nothing more than this is being claimed. It doesn’t tell us why the balance is what it is (or how it came to be what it is), nor does it describe or quantify the complex, highly non-linear path the system takes from one equilibrium state to another.

Again, if it were claiming to do this, it would all be spectacularly wrong. Your instinct that it cannot possibly do this is 100% correct. It can’t — it’s not even close.

The question becomes: if it’s not doing this, then what is it doing? But you have to step back, acknowledge you don’t understand, and from that make an effort to. And again, maybe there’s an error somewhere, as certainly anyone can be wrong. But you’ve got to make some effort first.

on January 10, 2017 at 11:56 pm | RW

And yes, you’re right. Absorptance is the right term.

on January 11, 2017 at 6:28 am | RW

Frank,

The methods of system analysis GW is employing are widely used in the private sector in highly critical applications where a high level of accuracy and precision is required. It makes no sense that they would be so widely used if the methods were not valid and didn’t consistently produce accurate and reliable results. In the private sector, they generally don’t employ the kind of system analysis that climate science uses for an application like climate sensitivity, such as GCMs or relating TOA net flux changes to surface temperature changes, as there are just way too many heuristics and inherent inaccuracies, or ‘go wrongs’, involved in such methods.

Of course, if what were actually being claimed were what everyone here thinks is being claimed, it would surely be spectacular nonsense. No doubt that’s what people here have concluded.

But, black box system analysis and a subsequently derived equivalent model is only an abstraction:

https://en.wikipedia.org/wiki/Black_box

“The black box is an abstraction representing a class of concrete open system which can be viewed solely in terms of its stimuli inputs and output reactions: The constitution and structure of the box are altogether irrelevant to the approach under consideration, which is purely external or phenomenological. In other words, only the behavior of the system will be accounted for.”

The ‘Ps*A/2’ is only an abstraction, i.e. the simplest construct or model that results in the same rates of joules gained and lost at the surface and the TOA, given its inputs and required outputs at its boundaries (the surface and the TOA) per the black box atmosphere.

The foundation behind black-box-derived equivalent modeling is that there are an infinite number of equivalent states that can have the same average, or an infinite number of physical manifestations that can have the same average.

Whether you operate as though Ps*A/2 is occurring, or instead model the steady-state atmosphere by approximating the actual thermodynamics to whatever degree of success, the final flow of energy in and out of the whole system is the same. You can even construct the actual thermodynamic model out to more and more micro complexity, but it’s still bound to the same end point, i.e. the same rates of joules gained and lost at the surface and the TOA; otherwise the model is wrong.

Thus, Ps*A/2 is just as valid at quantifying the aggregate end result, or aggregate dynamics, of the complex thermodynamic path actually manifesting the steady-state energy balance. Nothing is missing from all the physics occurring, because the manifested boundary fluxes themselves are the net result of all of the physical effects, radiant and non-radiant, known and unknown, going into and out of the black box. This is why the model accurately quantifies the aggregate dynamics of the steady state, even though it’s not modeling the actual behavior, i.e. the actual thermodynamic path.

It’s only from the quantitative equivalence of Ps*A/2, given the required inputs and outputs from the black box, that the deduction is being made that only about half the IR power absorbed by the atmosphere from the surface acts to ultimately warm the surface within the highly complex and non-linear thermodynamic path actually manifesting the balance, whereas the other half acts to ultimately cool the system and surface within that path. It doesn’t quantify the thermodynamic path itself or tell us why the surface balance is what it is; it only quantifies this effect within it, i.e. the ultimate contribution to enhanced surface warming.
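As a sketch only, here is the Ps*A/2 bookkeeping described above made explicit, with illustrative inputs (a 288 K surface, and A = 0.75 from the George White figure mentioned earlier in the thread). This is not an endorsement of the model; it simply shows the arithmetic of the abstraction.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def ps_a_over_2(T_surface, absorptance):
    """Bookkeeping for the black-box 'Ps*A/2' abstraction.

    Ps is the Stefan-Boltzmann surface emission; a fraction A is
    absorbed by the atmosphere, and the abstraction splits that
    absorbed power equally between a surface-warming half and a
    space-cooling half. Illustrative numbers only.
    """
    ps = SIGMA * T_surface**4        # surface emission, W/m^2
    absorbed = absorptance * ps      # IR absorbed by the atmosphere
    window = ps - absorbed           # transmitted directly to space
    to_surface = absorbed / 2.0      # half counted as warming the surface
    to_space = absorbed / 2.0        # half counted as cooling to space
    olr = window + to_space          # the model's outgoing longwave
    return ps, to_surface, olr

ps, to_surface, olr = ps_a_over_2(288.0, 0.75)
```

Note that with these inputs the surface-directed term comes out near 146 W/m², far below measured downward longwave radiation, which is the substance of Frank’s objection; RW’s reply is that the term is not meant to quantify actual DLR in the first place.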

What is it that you think precludes these techniques from being applied to the climate system? I don’t see anything that does.

on January 11, 2017 at 8:42 am | scienceofdoom

RW

On May 1st, 2016 I said:

Finally, after many “no progress” comments from RW 8 months ago, I reach the end of patience.

For people interested in RW’s point of view, please read his 5,000+ comments.

George White has a view on atmospheric radiation that is completely unsupported by the physics of the last 60+ years. And George doesn’t realize it. Or can’t.

I don’t really care.

Anyhow, no more.

Interested parties can find out more about George White and RW and their insights elsewhere, or here, preserved for posterity.