Many blogs write about over-simplifications of the radiative effects in climate. Many of these blog articles review simple explanations of how it is possible for atmospheric radiative effects to increase the surface temperature – e.g. the “blackbody shell” model.
As a result many people are confused and imagine that climate science hasn’t got past “first base” with how radiation interacts with atmospheric gases.
In any field the “over-simplified analysis” is designed to help the beginner to gain conceptual understanding of the field. Not to present the complete field of scientific endeavor.
This article will try to “bridge the gap” between the over-simplified models and the very detailed theory.
Note – it isn’t possible to cover the whole subject in one blog article and a decent treatment of radiative transfer consumes many chapters of a textbook.
There will be some maths. But I will also try to provide a non-mathematical explanation of “the maths” – or “the process”.
If you find maths daunting or incomprehensible that is understandable, but there is a lot that can be learned by trying to grasp some of the basic concepts.
Monochromatic Radiation
This means we need to treat each wavelength separately. Why? Because absorption and emission are wavelength-dependent processes.
For example, here is one part of the absorption spectrum of NO2:
Figure 1
So when we consider radiation “zooming” through the atmosphere we have to take it “one wavelength” at a time.
There isn’t really any such thing as monochromatic radiation, or any way to take “one wavelength at a time” – but that doesn’t stop us analyzing the problem.
A Digression on “Calculus”
How does the world of science and engineering deal with continuous change?
If a force, or a ray of radiation, or a movement of the atmosphere is a continuously changing value, how do we define it? How do we deal with it?
Calculus is the answer. This branch of mathematics allows us to deal with small changes and continuous changes and provide theorems, answers and equations.
For example, if we know something about the variable distance, s, with respect to time, t, then an equation defines the relationship between velocity, v, and these other variables:
v = ds/dt
where the “d”s at the beginning means: “the rate of change of”, so the formula means – in English:
Velocity = the rate of change of Distance with respect to Time
Generally when you see something like “da/db” it means “the rate of change of variable a with respect to variable b“.
It is also common to see Δx and δx – meaning “a small change in x”. This is different from “the continuous change of x”, but the specific rationale behind when we use “dx” and “Δx” isn’t so important for this article.
The other important area of calculus is “summing” results when again there is continuous change. If someone travels at 10 km/hr for 1 hour and then at 20km /hr for 1 hr they will have traveled 10km + 20km = 30km. That’s an easy calculation. But if velocity has continuously changed with time – how do we calculate the total distance traveled?
This means, in (harder to understand) English:
Distance = the integral of Velocity with respect to Time, between the limits of time = t1 and time = t2.
The integral is like the summation of each of the tiny distances covered in each very small time period (between t1 and t2). The integral is also often referred to as “the area under the curve”.
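The “summing under continuous change” idea can be sketched numerically. Here is a minimal illustration with a made-up velocity profile (the numbers are purely illustrative), summing tiny distances over tiny time steps – exactly what the integral does in the limit:

```python
import numpy as np

# Illustrative only: velocity changing continuously with time,
# v(t) = 10 + 5t km/hr, from t = 0 to t = 2 hours.
t = np.linspace(0.0, 2.0, 100_001)   # fine grid of times (hours)
v = 10.0 + 5.0 * t                   # velocity at each time (km/hr)

# The integral ("area under the curve") via the trapezoid rule:
distance = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))

# Calculus gives the exact answer: s = 10t + 2.5t^2, so s(2) = 30 km
print(distance)  # → 30.0 (to numerical precision)
```

The trapezoid rule is exact for a straight-line velocity, which is why the numerical sum lands on the calculus answer here.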
..end of digression
Absorption of Radiation
Let’s define a monochromatic beam of radiation, Iλ, travelling through the atmosphere:
Figure 2
We have some information we can use:
The rate of absorption of the beam of radiation as it travels through the atmosphere is proportional to the amount of absorbers at that wavelength and the ability of that absorber to absorb radiation of that wavelength
This is known as the Beer-Lambert law, and is written like this (note 1):
dIλ = -nσIλ .ds [1]
which means the same thing in mathematical terms, with n=number of absorbing molecules per unit volume (corrected), and σ=capture cross-section (or effectiveness at absorbing at that wavelength), and the subscript λ indicates that this equation is only true for the radiation at this wavelength
The value σ is a material property and so constant for one gas at one wavelength at one temperature and one pressure, but varies with the temperature and pressure of the gas (see comment). The value n will depend on location in the atmosphere. If we solve this equation between two arbitrary points, s1 and s2, we get:
Iλ(s2) = Iλ(s1). exp [ -∫σn(s).ds ] [2]
where the integral is between the limits s1 and s2
What it means in English:
The intensity of radiation at wavelength λ is reduced as it travels through the atmosphere according to the total amount of the absorber along the path. “exp” is e, or 2.718, to the power of the value in the square brackets.
If the concentration of the gas doesn’t change along the path the equation becomes a simpler version:
Iλ(s2) = Iλ(s1). exp [ -σn.(s2 – s1) ] [3]
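As a sketch, equation [3] can be evaluated directly. The cross-section and concentration below are made-up round numbers for illustration, not real gas data:

```python
import math

# Beer-Lambert attenuation over a uniform path (equation [3]).
sigma = 1.0e-22   # illustrative capture cross-section (m^2 per molecule)
n = 1.0e20        # illustrative absorber concentration (molecules per m^3)
path = 200.0      # path length s2 - s1 (m)

tau = sigma * n * path    # optical thickness of the path
I_ratio = math.exp(-tau)  # I(s2) / I(s1)

print(tau)      # → 2.0
print(I_ratio)  # → ~0.135, i.e. ~13.5% of the radiation is transmitted
```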
Optical Thickness & Transmittance
These are some important properties to understand.
Optical thickness, usually written as τ, is the property inside the exponential in equation [2].
τ = ∫σn(s).ds [4]
where the integral is between the limits s1 and s2
Transmittance, usually written with a weird T symbol not available in WordPress, but with “t” here, is the amount of radiation “getting through” along the path we are interested in.
t(s1,s2) = exp [-τ(s1,s2)] [5]
also written as:
t(s1,s2) = e^-τ(s1,s2)
The optical thickness is “dimensionless” as is the transmittance.
So we can rewrite equation [2] as:
Iλ(s2) = Iλ(s1).t(s1,s2) [6]
The transmittance can be a minimum of zero – although it can never actually get to zero – and a maximum of 1. So it is simply the proportion of radiation at that wavelength which emerges through the section of atmosphere in question:
Figure 3
With optical thickness, τ = 1, transmittance, t = 0.37 – which means that 63% of the radiation is absorbed along the path and 37% is transmitted.
With optical thickness, τ = 10, t = 4.5×10^-5 – that is, 45 ppm will be transmitted through the path.
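These two numbers follow directly from equation [5]; a two-line check:

```python
import math

# Transmittance t = exp(-tau) for the two optical thicknesses quoted above.
for tau in (1, 10):
    t = math.exp(-tau)
    print(tau, t, 1 - t)
# tau = 1  → t ≈ 0.37 (37% transmitted, 63% absorbed)
# tau = 10 → t ≈ 4.5e-5 (about 45 ppm transmitted)
```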
A note on definitions – optical thickness is usually defined as τ = 0 at the top of atmosphere, where z is a maximum, and as a maximum (τ = τm) at the surface, where z = 0:
Figure 4
So τ increases as z decreases, and vice versa.
Absorptance
In the absence of scattering (note 2), absorptance, a = 1 -t.
That is, whatever doesn’t get transmitted gets absorbed.
Plane Parallel Assumption
If you refer back to Figure 2, you see that the radiation is not travelling vertically upwards, but at an angle θ to the vertical.
Using simple trigonometry, ds = dz / cos θ [7]
It’s always an advantage if we can simplify a problem and relating everything to only the vertical height through the atmosphere helps to solve the equations.
Atmospheric properties vary much more in the vertical direction than in the horizontal direction. For example, go up 10 km and the pressure drops by a factor of 5 – from 1000 mbar to 200 mbar. But travel 10 km horizontally and the pressure will have changed by less than 1 mb. Temperature typically changes 100 times faster in the vertical direction than in the horizontal direction.
And as air density is determined by pressure and absolute temperature, we can make the reasonable assumption that the density at a given height z is the same whether we look directly upwards or at an angle of 45°.
Of course, by the time we are considering an angle close to 90° – i.e., horizontal – the assumption is likely to be invalid. However, the transmissivity of the atmosphere at angles very close to the horizon is extremely low anyway, as we will see.
Therefore, making the assumption of a plane parallel atmosphere is a good approximation.
Let’s review the earlier equations using a mathematical identity that reduces “equation clutter”:
μ = cos θ [8]
And rewrite equation [5]:
t(z1,z2) = exp [-τ(z1,z2)/μ] [9]
Notice that the equations are now rewritten in terms of the optical thickness between two vertical heights and the angle of the radiation.
It might help to see it in graphical form – and note here that the optical thickness, τ, is for the vertical direction (otherwise the graph would make no sense):
Figure 5
This simply demonstrates that as the angle increases the radiation has to travel through more atmosphere.
So suppose the optical thickness vertically through the atmosphere, τ = 1, then for:
- a vertically travelling ray the transmittance = 0.37
- for a ray at 45° the transmittance = 0.24
- for a ray at 70° the transmittance = 0.05
- for a ray at 80° the transmittance = 0.003
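The four transmittances in the list come straight from equation [9] with vertical optical thickness τ = 1; a quick check:

```python
import math

tau_vertical = 1.0  # vertical optical thickness of the whole atmosphere

for theta_deg in (0, 45, 70, 80):
    mu = math.cos(math.radians(theta_deg))   # mu = cos(theta), equation [8]
    t = math.exp(-tau_vertical / mu)         # slant-path transmittance, equation [9]
    print(theta_deg, round(t, 4))
# → 0.3679, 0.2431, 0.0537, 0.0032
```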
Emission
Using Kirchhoff’s law, absorptivity of a material = emissivity of a material for the same wavelength and direction. For diffuse surfaces – and for gases – direction does not affect these material properties, so they are only a function of wavelength. (And for all intents and purposes, absorptance is the same term as absorptivity, and transmittance is the same term as transmissivity-see comment).
Emission of radiation at any given wavelength for a blackbody (a perfect emitter) is given by Planck’s law, which is usually annotated as Bλ(T), where T = temperature.
The absorptivity of a gas, aλ = 1-tλ =emissivity of a gas, ελ. (Corrected)
For a very small change in monochromatic radiation due to emission:
dIλ = ελBλ(T) .ds [10a]
Therefore:
dIλ = nσBλ(T) .ds [10b]
So if we now combine emission and absorption, equations [1] & [10b]:
dIλ/ds = nσ.(Bλ(T) – Iλ) [11]
If we combine this with our definition of optical thickness, from equation [4]:
dIλ/dτ = Iλ – Bλ(T) [12]
which is also known as Schwarzschild’s Equation – and is the fundamental description of changes in radiation as it passes through an absorbing (and non-scattering) atmosphere.
It says, in not so easy to understand English:
The change in monochromatic radiation with respect to optical thickness is equal to the difference between the intensity of the radiation and the Planck (blackbody) function at the atmospheric temperature
Sorry it’s not clearer in English.
In more vernacular and less precise terms:
As radiation travels through the atmosphere, the intensity increases if the Planck blackbody emission is greater than the incoming radiation and reduces if the Planck blackbody emission is less than the incoming radiation
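Equation [11] can be marched numerically to see this behaviour. A minimal sketch with made-up numbers, not real atmospheric values:

```python
# Forward-Euler integration of dI/ds = n*sigma*(B - I), equation [11],
# through an isothermal layer. All values are illustrative only.
B = 8.0          # Planck function B(T) of the layer (arbitrary intensity units)
I = 2.0          # incoming monochromatic intensity (same units)
n_sigma = 0.05   # n*sigma, absorption per metre (1/m)
ds = 1.0         # step size (m)

for _ in range(200):             # march 200 m; total optical thickness = 10
    I += n_sigma * (B - I) * ds  # intensity relaxes toward B(T)

print(I)  # → ≈ 8.0: the radiation has "forgotten" its starting value
```

Because B is greater than I here, the intensity grows toward the Planck value; start with I greater than B and it decays toward the same value.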
Solving Schwarzschild’s Equation
Notice that this important equation contains the Planck term, which is for blackbody radiation (i.e., radiation from a perfect emitter), yet the atmosphere is not a perfect emitter. We definitely haven’t assumed that the atmosphere is a blackbody and yet the Planck term appears in the equation. It’s just how the equation “pans out”.
I mention this only because so many people have come to believe that there is some “big blackbody assumption” in climate science and they might be concerned by this term. Nothing to worry about, this has not been assumed.
Let’s find a solution to the equation if we are measuring the TOA (top of atmosphere) radiation. Refer to Figure 4:
- at TOA, z=zm and τ=0
- at the surface, z=0 and τ = τm
Now some maths manipulation – skip to the end people who don’t like maths..
First we note a handy party piece:
d/dτ [Iλ.e^-τ] = e^-τ.dIλ/dτ – Iλ.e^-τ [13]
Now we multiply both sides of equation [12] by e^-τ:
e^-τ.dIλ/dτ = e^-τ.Iλ – e^-τ.Bλ(T) [14]
Re-arranging:
e^-τ.dIλ/dτ – e^-τ.Iλ = – e^-τ.Bλ(T) [14a]
And substituting “handy party piece” [13] into [14a]:
d/dτ [Iλ.e^-τ] = – e^-τ.Bλ(T) [15]
Now we integrate [15] between τ=0 and τ=τm:
Iλ(τm).e^-τm – Iλ(0) = – ∫ Bλ(T).e^-τ dτ [16]
where the integral is between the limits of 0 and τm
And re-arranging we get:
Iλ(0) = Iλ(τm).e^-τm + ∫ Bλ(T).e^-τ dτ [16]
..end of maths manipulation
Which – for those who haven’t followed the intense maths:
Iλ(0) = Iλ(τm).e^-τm + ∫ Bλ(T).e^-τ dτ [16]
The intensity at the top of atmosphere equals..
The surface radiation attenuated by the transmittance of the atmosphere, plus..
The sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere
Don’t worry about the maths, but it is definitely worth spending some time thinking about the words in colors – to get a conceptual understanding of how atmospheric radiation “works”.
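As a sketch of how the two terms combine, here is equation [16] evaluated for a made-up isothermal atmosphere (so Bλ is constant with τ), with purely illustrative numbers:

```python
import numpy as np

# Equation [16]: I(TOA) = I(surface).e^-tau_m + integral of B.e^-tau dtau (0 to tau_m)
tau_m = 2.0       # total optical thickness of the atmosphere
I_surface = 10.0  # surface radiation at this wavelength (arbitrary units)
B = 6.0           # constant atmospheric Planck term (same units)

# Numerical integral of B.e^-tau dtau from 0 to tau_m (trapezoid rule):
tau = np.linspace(0.0, tau_m, 100_001)
integrand = B * np.exp(-tau)
atm_term = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tau))

I_toa = I_surface * np.exp(-tau_m) + atm_term
# For constant B the integral is exactly B.(1 - e^-tau_m), so
# I_toa = 10 x 0.135 + 6 x 0.865 ≈ 6.54
print(I_toa)
```

The first term is the attenuated surface radiation; the second is the summed, attenuated atmospheric emission – the two contributions described in words above.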
It’s Not Over Yet – Conversion from Intensity to Flux and the Diffusivity Approximation
Remember the note about the Plane Parallel Assumption ?
Getting equations into WordPress is painful and time-consuming so a quick explanation followed by the result, especially as maths fatigue will have set in among most readers, if any made it this far.
Equation [16] is for spectral intensity. That is, one direction rather than the complete hemispherical power (flux).
To calculate spectral emissive power (flux per unit wavelength) we need to integrate the equation over one hemisphere of solid angle. We re-write equation [16] in the form of equation [9] so that the optical thickness references vertical height, z and μ, which is the cosine of the angle from the vertical. Then we integrate from μ=0 (θ=0°) to μ=1 (θ=90°).
Transmittance, t(z,0) = 2 ∫ exp[-τ(z,0)/μ].μ.dμ
where the integral is from 0 to 1
This equation doesn’t have an “analytical” solution, meaning we can’t rewrite it in a nice equation form without the integral. But with some clever maths that I haven’t tried to follow – but have checked – we can produce an approximation which is known as the diffusivity approximation:
2 ∫ exp[-τ(z,0)/μ].μ.dμ ≈ exp[-τ(z,0)/μ’]
Where μ’ is the “effective angle” which produces a close approximation to the actual answer without needing to integrate across all angles (for each wavelength). The best value of μ’ = 0.6.
Here is the calculated integral (left side of the equation) vs the approximation with μ’ = 0.6, as a function of optical thickness, τ, demonstrating the usefulness of the approximation:
Figure 6
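The comparison in Figure 6 can be reproduced numerically – the hemispheric integral on the left-hand side against the diffusivity approximation with μ’ = 0.6:

```python
import numpy as np

# Flux transmittance 2 * integral of exp(-tau/mu)*mu dmu (mu from 0 to 1)
# versus the diffusivity approximation exp(-tau/0.6).
mu = np.linspace(1e-6, 1.0, 200_001)  # the integrand vanishes as mu → 0

for tau in (0.1, 0.5, 1.0, 2.0):
    integrand = np.exp(-tau / mu) * mu
    exact = 2.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(mu))
    approx = np.exp(-tau / 0.6)
    print(tau, round(exact, 4), round(approx, 4))
```

The two columns stay close in absolute terms across this range of optical thickness, which is why a single “effective angle” can stand in for the full angular integration at each wavelength.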
There are some other refinements needed – for example, the reflection of atmospheric radiation from a surface with emissivity < 1, which is then attenuated by the absorptance of the atmosphere before contributing to the TOA measurement. But these factors can all be introduced into the equations.
Full Color Solution
What we have produced so far is a solution for each monochromatic wavelength, λ.
Also, we haven’t explicitly stated the fact that the optical thickness depends on the concentration and “capture cross section” of each absorber for that wavelength. So the optical thickness, and transmittance, for each height requires combining the effects of each active molecule.
The solution for flux, W/m², requires integrating the equations across all wavelengths.
Wow. Conceptually straightforward. But computationally very expensive – check out Figure 1 – the absorption characteristics of each radiatively-active gas vary significantly with wavelength.
Conclusion
The equation for radiative transfer is commonly known, (in differential form) as Schwarzschild’s Equation. It relies on fundamental physics.
To solve the equation requires some maths.
To solve the equation in practical terms the plane parallel assumption is used. This relies on the fact that variations in temperature and pressure (and therefore density) are negligible in the horizontal direction compared with the vertical direction.
The equation could be solved without the plane parallel assumption, but the horizontal variations in pressure and temperature are so slight that the same result would be obtained – unless extremely high quality data on temperature, pressure, density and concentration of absorbers were available.
To solve the equation in practical terms we need to know:
- the temperature (vs height) in the atmosphere
- the concentration of each absorber vs height
- the absorption characteristics of each absorber vs wavelength
In any practical field, the “proof of the pudding is in the eating”, and so take a look at Theory and Experiment – Atmospheric Radiation – where theoretical and practical results are compared.
And lastly, the Stefan-Boltzmann equation, correct and accurate though it is (check out Planck, Stefan-Boltzmann, Kirchhoff and LTE) is not used in the actual equations of radiative transfer in the atmosphere. Nor is any assumption of “unrealistic blackbodies”.
I only note these last points due to the high quantity (but not high quality), of blog articles and comments demonstrating the writers haven’t actually read a textbook on the subject, but still feel qualified to pass judgement on this field of scientific endeavor.
Other articles:
Part One – a bit of a re-introduction to the subject
Part Two – introducing a simple model, with molecules pH2O and pCO2 to demonstrate some basic effects in the atmosphere. This part – absorption only
Part Three – the simple model extended to emission and absorption, showing what a difference an emitting atmosphere makes. Also very easy to see that the “IPCC logarithmic graph” is not at odds with the Beer-Lambert law.
Part Four – the effect of changing lapse rates (atmospheric temperature profile) and of overlapping the pH2O and pCO2 bands. Why surface radiation is not a mirror image of top of atmosphere radiation.
Part Five – a bit of a wrap up so far as well as an explanation of how the stratospheric temperature profile can affect “saturation”
Part Seven – changing the shape of the pCO2 band to see how it affects “saturation” – the wings of the band pick up the slack, in a manner of speaking
And Also –
Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.
Notes
Note 1: There are many formulations of the Beer-Lambert law and even much dispute about who exactly the law should be attributed to.
Other formulations include using the density of the gas and a matching coefficient for the effectiveness of the gas at absorbing.
Note 2: When considering solar radiation (shortwave), scattering is important. When considering terrestrial radiation (longwave), scattering can be neglected. In this article, we will ignore scattering, so the results will be appropriate for longwave but not correct for shortwave.
Just thanks for your efforts, though I’m not good with maths, I know people that are more skeptical of the details of the greenhouse effect and more skilled in maths than me, and I’ve occasionally directed them to your blog.
One of the important consequences of The Equation is that it describes what we will “see” when looking at a given object. For instance, the temperature of the core of the Sun is around 13.6 million K but what we see is the object of temperature approximately 5778K.
Thus, we are not able to see the hotter part of the object if radiation from that part is absorbed on its way to the outer parts of the object. The radiation leaving the top of the atmosphere gives thus mainly the information about the temperature of the deepest part of the object that can be seen as well as about the absorption properties of the matter from this area and up to the outer part of the object.
To the contrary, if we are inside the object then we will not be able to see the colder, outer parts of the object. The only thing we will see, here feel, is the temperature of the closest to us part of the object. The changes in the temperature of the outer parts (for example due to the change in their composition) will, however, influence the rate of heat exchange between the inner and outer parts of the object, which will affect the temperature (and its variations) of the closest to us part of the object.
In order to estimate the temperature of the parts that we are not able to see, we have to perform a lot of thinking and calculations, which “science of doom” is showing us in a very pedagogical manner. Thanks for that.
With optical thickness, τ = 1, transmittance, t = 0.37 – which means that 63% of the radiation is absorbed along the path and 37% is transmitted.
With optical thickness, τ = 10, t = 4.5×10^-5 – that is, 45 ppm will be transmitted through the path.
Can you please explain this better. How can you use the same formula with different inputs,1 vs 10, and go from a percent to a ppm?
mkelly:
They are different ways of writing the same number. A lot of people less familiar with maths have trouble “seeing” what 4.5×10^-5 actually is.
So 0.000045 = 4.5×10^-5 = 0.0045% = 45 parts per million.
That is what I thought. I just wanted to make sure. Also please don’t be quite so condescending. I have a degree in mechanical engineering so I follow what you say. I use my J.P. Holman “Heat Transfer” text book for light reading as followup to reading here.
But I think you should keep the same units when explaining things.
I enjoy this blog.
Sorry, it wasn’t intended that way.
I was attempting to explain that the reason that at times I write non-standard descriptions is because many people can’t visualize a number like 4.5×10-5. It wasn’t aimed at you.
SOD: Very nice. My only complaint is that you used Kirchhoff’s Law and emissivity in this derivation. There should be no need to invoke Kirchhoff’s law or any of the “old-fashioned” physics of bulk materials that was developed in the Dark Ages before we knew that molecules existed and developed quantum mechanics to explain their behavior. The absorption coefficient or cross-section, σ – more properly written as σ(λ) – is the probability from quantum mechanics that applies to absorption and emission. These two processes are the “time reverse” of each other so the same probability, σ or more properly σ(λ), should apply to both:
1) a molecule in a high energy state emitting photon of a given wavelength and lowering its energy by hc/λ.
2) a same molecule in a lower energy state absorbing a passing photon and increasing its energy by hc/λ.
Another problem with Kirchhoff’s Law is that it was meant to describe absorption and emission from bodies or surfaces in thermal equilibrium. For surfaces – the material for which Kirchhoff developed his law – radiation that isn’t absorbed is reflected (or scattered?). For gases, radiation that isn’t absorbed is transmitted (or scattered). For liquids, all of these processes may apply.
Above you say: “The absorptance of a gas, aλ = 1-tλ =emittance of a gas, ελ.” Then you equate the “emittance” (which appears to really be “transmission” and should have nothing to do with emission of photons) with “emissivity”, which opens the door for introducing Planck’s function into Equation 10a. If this hand waving isn’t correct and if σ is the constant that applies to the quantum mechanics for both absorption and emission, how does B(λ,T) get into the equation for emission?
If there are N molecules in a state capable of absorbing a photon and n molecules in the higher energy state produced by absorption, then n could equal N*B(λ,T). If this were the case, we might expect Maxwell-Boltzmann statistics to apply to the situation. The Planck function, however, is derived for photons in otherwise empty space from Bose-Einstein statistics. Including the photon gives two states: State1, a molecule with photon near enough to be absorbed. State2, a molecule in a higher energy state. Then B(λ,T) would be the relative populations of these two states. Perhaps I should withdraw my comments about “old-fashioned physics”, “Dark Ages” and “hand waving”, because it is becoming clear that I don’t understand the situation. All my attempts to find a QM explanation for why Planck’s function appears in Schwarzschild’s equation on the web have failed. (I did see derivations similar to SOD’s) Can anyone help?
Frank:
Kirchhoff’s law proves that bodies in Thermodynamic Equilibrium have emissivity equal to absorptivity.
Experimental work subsequently proves that the material properties of bodies don’t change when they are out of Thermodynamic Equilibrium, as explained in Planck, Stefan-Boltzmann, Kirchhoff and LTE.
Kirchhoff’s law is true and can be applied, so why not use the simple explanation?
SOD (and DeWitt): I have now resolved my conceptual problems equating the behavior of solid surfaces and gases. For other skeptics, Kirchhoff’s Law was originally devised to explain the behavior of a surface at thermal equilibrium inside an enclosure. If absorption didn’t equal emission, the surface would warm or cool compared to the surroundings. If the surface is replaced with a volume of gas in a transparent vessel inside the same enclosure (and the other air is removed), the same reasoning appears to be true – even though the non-adsorbed radiation is reflected in the first case and transmitted in the second. In both cases, however, Kirchhoff’s “Law” is derived from common sense and should be verified. (My comments about your confusing transmissivity and emissivity don’t make sense on further review.)
However, I hate (perhaps irrationally*) the idea of treating absorption and emission as two fundamentally different physical processes, when they are simply the time-reverse of each other. At a microscopic level, why is Planck’s function (which has been derived by QM) the right factor for joining absorption to emission? The semi-classical derivation of Planck’s function begins with a box of oscillators at a given temperature and seems (if I understand correctly) to derive the energy flux (per unit volume per solid angle, if my dimensional analysis is correct) emitted in all directions from that volume – without reference to the nature of the oscillators. More fundamentally, Planck’s function is derived from Bose-Einstein statistics, wherein the relative number of particles in a higher energy state E = hv is proportional to (exp(E/kT-1))/exp(E/kT). Compared with Planck’s function, I see an extra exp(E/kT) term to cancel Maxwell distribution of energy between the two states of the GHG. Though I don’t really understand this physics, some of the right mathematical terms needed to derive Schwarzschild’s equation seem to be present and “begging” to be joined. Do we really have to go to a macroscopic scale – and invoke Kirchhoff’s Law – to derive Schwarzschild’s equation? (It seems clear that this is what some textbooks do, so I’m not criticizing anyone.) Or is Kirchhoff’s Law, like so many other macroscopic “laws”, a consequence of more fundamental physics?
*My aversion to the macroscopic descriptions is that they often get misapplied: Slabs of atmosphere don’t always emit black- or gray-body radiation. Emissivity is not a constant for gases. DLR isn’t contrary to the 2LoT. In my dreams, we would: Derive the correct laws from the microscopic scale, then show in which situations these fundamental laws do and do not produce the “laws” used to describe macroscopic behavior. This doesn’t mean I don’t appreciate SOD’s (generally heroic) efforts and the useful comments of others.
Frank.
I interpret the appearing of the Planck function in the Schwarzschild equation as the ”background” radiation, which, if one so desires, might be put to zero. In such a case, we will get the pure Beer law.
But generally, we have a background radiation from the object A of temperature T and the irradiation from the object B. If both objects are at the same temperature then the irradiation from B will not contribute to the change of the intensity of radiation from the object A when being measured by a detector in some given direction. A and B will only exchange the same power density (W/m^2) with each other which does not affect the outgoing radiation in the direction to the detector.
If the temperature of B is higher than that of A then we will have an excess of power density from B to A and it is this excess (which might be called signal) that is denoted by I. The intensity of the signal I is diminishing exponentially along it way of propagation through the object A in the direction to the detector until it drops to the level of the background radiation. After that, the signal will not be detectable.
On the molecular level we have the molecular excitations due to the collisions between the molecules in the sample. The incoming signal increases the fraction of the excited molecules within the given volume from n to n+dn. A part of the extra excited molecules, dn, will re-emit photons while a part will lose the energy in collisions, which will increase the kinetic energy of the molecules, instead (the absorption and thermalizing process). The process of thermalizing is associated to the absorptivity of the medium in respect to the given wavelength (photon energy). The absorptivity is thus a function both of absorption properties of the molecules and of the rate of molecular collisions. Thus the absorption coefficient of molecules is not the same as the absorption properties of the sample, these different absorptions should be not confused with each other. But it is correct that the absorption coefficient of the molecules is equal to their emission coefficient for the given wavelength (in accordance to Einstein theory of interaction of matter with radiation), and that you can describe this formally as the reversion of the absorption process in time.
There will be less and less photons in the signal. The propagation of the signal (photons above the background level) creates thus the temperature gradient along the direction of propagation of the signal. Conduction, convection and radiation processes will then strive to zero thus induced gradient.
In this interpretation, the background radiation given by the Planck function plays the similar role as mc^2 plays in the special theory of relativity, m being the rest mass of the object. In the classical physics, mc^2 is put to zero as the Planck function is put to zero in the Beer law.
Frank,
The derivation of the Planck function in Appendix A of Introduction to Atmospheric Radiation by K-N Liou starts with this phrase:
In accordance with Boltzmann statistics,…
Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration. Low concentration means densities much less than a white dwarf. I suspect high temperature means anything well above absolute zero. IOW, a gas or a solid object on the Earth’s surface at 300 K is going to follow Maxwell-Boltzmann statistics.
Now we can understand where the “old-fashioned” physics of bulk materials – especially black- and grey-body radiation – comes from and how it gets MISAPPLIED to GHGs. In equation 11 above, if light has been passing through a homogeneous material (gas, liquid, or solid) for long enough that emission and absorption have reached equilibrium, we get
dI/ds = nσ[B(λ,T) – I] = 0 [11a]
(I, σ and ε vary with λ.) In this case, either σ is zero or I = B(λ,T). For materials where σ is not zero at any wavelength, emitted radiation follows Planck’s Law, I = B(λ,T) and the emission integrated over all wavelengths follows the Stefan-Boltzmann Law, W = oT^4. (o is the Stefan-Boltzmann constant and σ is still the absorption coefficient.) When σ is zero at some wavelengths, there is no light absorbed or emitted at this wavelength and any light coming through the material was emitted from behind. In this case, when we integrate over all wavelengths (the intensity is either B(λ,T) or zero), we come up with a modified form of the Stefan-Boltzmann Law, W = εoT^4, where ε, emissivity, is a constant between 0 and 1 that corrects for all of those wavelengths where σ is zero.
For solids and liquids, radiation traveling through the material usually has passed by enough molecules so that emission and absorption have reached equilibrium. When radiation leaves their surfaces, we say the material emits “blackbody” or “graybody” radiation. This behavior is just Schwarzschild’s equation applied to a particular situation, not a fundamental “law” of physics.
This “law” is not true for gases. Many people (not SOD) like to pretend that we can discuss the radiation leaving a “slab of atmosphere” in terms of blackbody radiation. Those who wish to be technically correct will call it an “optically thick” slab, meaning that the slab is thick enough so that equilibrium between absorption and emission has been reached. “Optically thick” doesn’t apply to the Earth’s atmosphere, especially above the tropopause, where radiative equilibrium determines temperature. There are some wavelengths, such as the 15 um CO2 band, where the Earth’s atmosphere acts optically thick, but radiative forcing comes from the flanks of these strong bands – which are not optically thick! No wonder there is so much confusion.
Worst of all, these same people like to say that the energy emitted by a slab of atmosphere is oT^4 or εoT^4, thereby allowing us to believe that emission doesn’t depend on the number of GHG molecules in the slab. As everyone recognizes, absorption does depend on the number of GHG molecules in the slab. Thus we create the widespread illusion that GHGs TRAP energy trying to leave the atmosphere and don’t emit it. Perhaps SOD will want to discuss this illusion some day.
Frank,
Then there’s the people that want it both ways. Well one person anyway. The essence of the argument as near as I can tell is that the integrated emissivity over all wavelengths for CO2 for a 1 m length path through the atmosphere is small, but increasing the path length doesn’t change the emissivity. In other words, the slab is optically thin and thick at the same time.
If I’m the “one person” you are referring to above, I’d really like to have it one way – the scientifically correct way – assuming I can see past my prejudices clearly enough to recognize it. From what I’ve learned, the enhanced greenhouse effect exists, but “trapping” is an extremely poor description of the mechanism.
I appreciate your help and SOD’s when I do get off-track.
I’m guessing that DeWitt Payne is referring to his amazingly patient debate in Lunar Madness and Physics Basics.
SoD,
That would be it. I didn’t want to mention the name as names have power and might attract unwanted attention (and I was too lazy to look up a link).
That actually proved to be useful as it gave me a better understanding of the mechanics of the Hottel/Leckner approach to calculating radiant heat transfer in gases.
make that two people
Frank on February 9, 2011 at 10:36 pm:
Perhaps I have misunderstood what you are getting at..
Let’s just consider vertically travelling radiation to simplify things (otherwise I have to amend my drawing I’m about to show and add some integrals).
The height of the thin layer = dz.
dI/dz ≠ 0 (upward radiation through the layer)
dI’/dz ≠ 0 (downward radiation through the layer)
Even in equilibrium.
If there is no convection then in equilibrium:
dI/dz – dI’/dz = 0 (with appropriate clarification of directions of I and I’).
With convection then we have to add in a convection term and sum to zero.
Even more important is that the I in this conservation of energy equation is not Iλ. It is the integral over all wavelengths.
For misapplication of blackbody radiation, try Figure 6.8 of FW Taylor’s book (which you recommended to me). Of course, he doesn’t say that this multi-slab model applies to the earth. You could add Figure 3.5, but this refers to an optically thin slab, which he says emits oT^4. It needs an emissivity term that varies with the concentration of GHG. For me (at least), this drawing suggests that absorption depends on GHG concentration and emission does not.
I do believe that misapplication of blackbody radiation leads to the myth that the mechanism of the enhanced greenhouse effect is that GHGs “trap” energy in the atmosphere. Emission is constant; absorption increases. GHGs both emit and absorb – they are the means of getting rid of the extra energy they absorb. More GHGs speed up the transfer of energy, but reduce the distance over which the transfer is made. Whether more GHGs at a location causes warming or cooling is a complicated function of the temperatures at the multiple locations between which energy is being transferred.
The comments of David Reeves to part 5 are illustrative: Why doesn’t more CO2 increase DLR (since we all know CO2 traps energy)?
If CO2 traps energy, why does radiative forcing occur mostly in the wings of the 15 um band?
I should make it clear that I have no doubt about the existence of the enhanced greenhouse effect – just how it is explained. This series of posts is explaining it the right way. “Trapping” does not.
A more simple form of (16) is
I = Io[exp(-krz)] + B[1-exp(-krz)] (16a)
representing intensity of the transmitted radiation of a particular wavelength after the passage of the distance z through the absorbing medium.
Io = intensity of the incoming radiation at this particular wavelength
k = absorption coefficient
r = density of the absorbing/emitting gas
B = Planck function defining the intensity of the thermal radiation at the given wavelength.
You can regroup this equation to the form
I = (Io-B)exp(-krz) + B (16b)
or, introducing Is = Io – B, where Is is the intensity of the signal,
I = Is*exp(-krz) + B (16c)
In this form, it is easier to discuss the transmission properties of the medium. It should be observed that B might represent either black body or the gray body.
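The regrouping from 16a to 16c is pure algebra and easy to confirm numerically. A quick check with arbitrary illustrative values (none of these numbers describe a real gas):

```python
import math

Io, B = 120.0, 85.0        # incoming intensity and Planck intensity (illustrative)
k, r, z = 0.5, 1.2, 2.0    # absorption coefficient, density, path length (illustrative)

t = math.exp(-k * r * z)   # transmittance of the path

I_16a = Io * t + B * (1.0 - t)   # eq. 16a: transmitted incoming + emitted thermal
I_16b = (Io - B) * t + B         # eq. 16b: regrouped
Is = Io - B                      # the "signal" above the thermal background
I_16c = Is * t + B               # eq. 16c
```

All three forms give the same intensity; 16c makes it obvious that the medium attenuates only the excess of the signal above the thermal background B.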
To avoid confusion by changing symbols, you can copy and paste Greek symbols (but not sub- or superscripting) from SOD’s post into your comments.
Optical thickness τ is probably not the same thing as krz because it is the result of an integration in equation 4. If the atmosphere were homogeneous, τ would be krz, but n decreases with z. My guess is that your simpler equation only applies to a homogeneous atmosphere whose pressure doesn’t decrease with height.
Optical thickness is a confusing concept for me, but dimensional analysis and equation 4 make it appear to be a quantity of GHG – the number of molecules in a long, thin cylinder with a circular base of area σ (the absorption cross-section) long enough to absorb all but 1/e of the radiation passing through its length. It doesn’t matter whether the GHGs are distributed evenly or decrease as you move from one end of the cylinder to the other. Using the term “optical thickness” also suggests a length for this cylinder, but it doesn’t have units of length. When I think of τ as a quantity or length, I have trouble with the concept of integrating from one τ to another. SOD’s explanation of the terms in Equation 16 in red, blue and green is elegant.
τ also describes a cylinder extending from space down into the earth’s atmosphere that contains enough GHG to absorb all but 1/e of the radiation passing through its length. This cylinder extends to infinity at one end, but the other end defines an altitude from which a fraction 1/e of the emitted photons escape to space – called the characteristic emission level.
Correction to above: τ is a dimensionless quantity. σ has units of area/molecule. τ can’t be an exponent with units.
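Frank’s point that τ equals krz only for a homogeneous path can be illustrated numerically. With an exponentially decaying density n(z) = n0·exp(-z/H), the integral in equation 4 gives τ = σn0H[1 − exp(-z/H)], which is smaller than the homogeneous value σn0z. A sketch with made-up values for σ, n0 and the scale height H:

```python
import math

def tau_integral(sigma, n0, H, z_top, steps=100_000):
    """Midpoint-rule integration of tau = integral of sigma * n(z') dz'
    from 0 to z_top, with n(z') = n0 * exp(-z'/H)."""
    dz = z_top / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        total += sigma * n0 * math.exp(-zp / H) * dz
    return total

# Made-up illustrative values: sigma in m^2, n0 in 1/m^3, H and z in m.
sigma, n0, H, z_top = 1.0e-29, 2.5e25, 8000.0, 20000.0

tau_num = tau_integral(sigma, n0, H, z_top)
tau_analytic = sigma * n0 * H * (1.0 - math.exp(-z_top / H))
tau_homog = sigma * n0 * z_top   # what "k*r*z" would give for a homogeneous path
```

For these numbers the true τ is about 1.84, while the homogeneous formula would claim 5.0 — which is why the simpler equation 16a only applies to a well-mixed, constant-density path.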
Frank.
Thank you for a tip about copying Greek symbols from SOD’s text, I was afraid that this would not work.
In the case when there is a temperature or density gradient (or both) then the Schwarzschild equation for the transmission of radiation at the particular wavelength is given by
I = Io[exp(-σns)] + B(Te)[1 – exp(-σn’s)] (16)
where n’ and (Te) refer to the density of the absorbing molecules and temperature, respectively, at the end of the air layer, compare http://www.barrettbellamyclimate.com/page45.htm
This gives
I = [Io – B(Te)]exp(-σns) + B(Te)[exp(-σns) – exp(-σn’s)] + B(Te) (16a)
or
I = Is*exp(-σns) + B(Te)[exp(-σns) – exp(-σn’s)] + B(Te) (16b)
where
Is = Io – B(Te)
is the intensity of radiation (ie intensity of the “signal”) above the background thermal radiation (ie above the “noise”).
The extra term B(Te)[exp(-σns) – exp(-σn’s)] describes the decrease of the “noise” along the way of propagation of the “signal”.
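The regrouping of 16 into 16a can again be confirmed numerically for arbitrary n ≠ n’. The numbers below are illustrative only:

```python
import math

Io, B = 140.0, 95.0          # incoming intensity and Planck intensity B(Te), illustrative
sigma, s = 3.0e-23, 1000.0   # cross-section (m^2) and path length (m), illustrative
n, n_end = 2.0e20, 1.5e20    # density at the start and at the end (n') of the layer

t_abs = math.exp(-sigma * n * s)       # transmittance seen by the incoming signal
t_emit = math.exp(-sigma * n_end * s)  # factor governing the layer's own emission

I_16 = Io * t_abs + B * (1.0 - t_emit)                 # eq. 16
I_16a = (Io - B) * t_abs + B * (t_abs - t_emit) + B    # eq. 16a, regrouped
```

The two expressions agree term for term; the middle term of 16a vanishes when n = n’, recovering the homogeneous form.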
SOD: There appears to be a QM derivation of Schwarzschild’s equation on a microscopic scale that parallels the macroscopic derivation that you provided. (The derivation below originates with me, so there is no guarantee what follows is all or even partially right.) The derivation of Planck’s Law tells us the (energy) density of photons of a given wavelength, λ, present around one or more GHGs inside an enclosure at a given temperature, T, is given by: u(λ,T) = 4Pi*B(λ,T). The rate of absorption of photons is determined by the number of GHGs (n), the density of photons and a QM probability of absorption (r). Absorption (per unit volume) = rn*4Pi*B(λ,T). At equilibrium inside the enclosure, the rate of absorption must be equal to the rate of emission, so emission must also be rn*4Pi*B(λ,T). (The QM term r is required to be the same in both directions.) When we remove the enclosure and allow the photons to escape, the density of photons can take on any value, but emission will continue at a rate of rn*4Pi*B(λ,T).
Taking the derivative of these equations with respect to position, s, gives dr/ds = σ, the cross-section for absorption. The density of photons becomes the flux of photons (per unit area) and the 4Pi term is eliminated when radiance in all directions is converted to flux in one direction. Kirchhoff’s Law would then be a consequence of the QM requirement that r be the same in both directions. There would be no need for separate bulk properties, ε and σ.
Frank,
I think your derivation is overly simplistic. I don’t think you can use a derivation that depends on the presence of a black body in a perfectly reflecting container to derive the radiation field for a collection of molecules which may only approximate a black body at certain wavelengths depending on the size of the box and the partial pressure of the gas. Your r is going to be a function of temperature and pressure as well as wavelength as you have both line strength and line shape to deal with as well as the superposition of multiple lines. I seriously doubt that u(λ,T) = 4πB(λ,T) at a wavelength where r is much much less than 1.
Dewitt: I’m sure my derivation is overly simplistic. However, all of your caveats about r varying apply to both macroscopic scale and quantum scale.
I simply tried to use the density of a photon gas (u(λ,T)) inside an enclosure to calculate an absorption rate (which must be equal to an emission rate at equilibrium). With the assistance of Kirchhoff, SOD uses the flux from such a photon gas on a macroscopic scale to demonstrate the ε must be nσ in Equation 10.
The standard model for radiative transfer breaks down when the distance traveled (and the size of the enclosure used to derive Planck’s Law) isn’t bigger than the wavelength. http://web.mit.edu/press/2009/heat-0729.html
Frank:
I’m glad you got the book.
I returned my copy to the library, but for reference I did scan a few pages, luckily this includes Chapter 3.
I have Figure 3.5.
There is indeed a mistake in the diagram. But if you read the text and equations that is clear:
eσTE^4 = 2eσTS^4
Where TS is the temperature of the stratosphere and e is the emissivity of the stratosphere.
The optically thin stratosphere is not radiating like a blackbody.
I don’t have chapter 6.
However, I do remember reading the book and much of Taylor’s excellent explanations are helping the reader to develop a conceptual understanding, which means starting from overly simplistic models and gradually making them more realistic.
I recall a number of simplified “greenhouse” models where he says things like “..and so this gives us a surface temperature of 320K.. clearly out by a factor of … not bad for such a simplistic model..” and so on.
If you want formal and complete derivations with consequently much more work for the student, try the expensive “Radiation and Climate” by Vardavas and Taylor, Oxford Science Publications (2007).
SOD: Taylor solves your equation (eσTE^4 = 2eσTS^4) to find that the stratosphere is 215 degK by assuming e is the same for both layers – even though one layer is the surface of the earth combined with the troposphere. Since e varies with the density of GHGs and is very different for the surface of the earth, this appears to be incorrect.
Ignoring the fact that emissivity for a gas, unlike traditional surfaces, varies with the composition of the gas is the cause of much confusion in climate physics. Picture a photon at the tropopause trying to escape to space through 1X or 2X GHG. The photon is more likely to be “trapped” in the atmosphere, so the earth will warm. Now remember that there are 2X GHGs emitting photons at the tropopause. The earth doesn’t warm. Memo to all proCAGWers: Avoid reminding anyone that emissivity depends on GHG concentration. (The proper physics is too hard to explain.)
Even worse (I hadn’t noticed before), Taylor goes on to calculate a no-feedbacks climate sensitivity of 18 degK in Table 7.3 and implies that the current absence of such a large temperature rise could be explained by a massive change in albedo! Improper treatment of emissivity appears to be the cause.
Frank on February 11, 2011 at 10:21 pm:
You are not correct about this assumption.
Note the added red text to correct the diagram.
What is the energy absorbed by the layer? It is the earth’s surface/tropospheric radiation x absorptivity of the layer
Ein = eσTE^4, where e = emissivity = absorptivity of the layer
What is the energy radiated by the layer? It radiates from two “surfaces”, so:
Eout = 2eσTS^4
Therefore, in equilibrium, Ein = Eout and so:
TS = TE/2^(1/4)
There is no assumption that the emissivity of the stratosphere is equal to the emissivity of the earth’s surface + troposphere.
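The cancellation is easy to see in numbers: the layer’s own e appears on both sides of Ein = Eout, so TS depends only on TE. A one-line check, using TE = 255 K (the usual effective emitting temperature, as discussed in the surrounding comments):

```python
# Ein = e*sigma*TE^4 (absorbed) balances Eout = 2*e*sigma*TS^4 (two faces),
# so the layer's emissivity e cancels, leaving TS = TE / 2**0.25.
TE = 255.0            # effective emitting temperature of earth + troposphere, K
TS = TE / 2 ** 0.25   # about 214 K, consistent with the ~215 K quoted from Taylor
```

The result is independent of e precisely because absorptivity and emissivity of the same thin layer are equal — no blackbody assumption for the layer is needed.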
Once you start working on the assumption that most of the content of books by Professors of Physics is correct – even, amazingly, those involved in climate science – you will probably find that you learn more from their books.
Frank:
Can you scan and post somewhere? Or email to me, scienceofdoom – you know what goes here – gmail.com.
I doubt that your criticisms will survive review. Maybe even your own review.
SOD: You are correct that I should start out with the assumption that the mathematics and physics of most books by physics professors is correct. You are completely correct (as usual) that emissivity is not misused on p47 as written. The “error” on this page is that emissivity is assumed to be 1 for the earth+troposphere layer used in the model, but this error is compensated for by choosing a “surface temperature” of 255 degK.
When I wrote my comment, I had been spending my time looking at why emissivity seems to be absent from many of the equations in chapter 6 (radiative transfer) and chapter 7 (the greenhouse effect). I saw the equation concerning the stratosphere in your post from chapter 3 and, to save time, mistakenly used this as an example. (A poor excuse for carelessness. Since I don’t have the time to master every detail, I often use words like “appears to” or “seems to”, but sure didn’t in this case.)
With regard to Table 7.3 (which I have just emailed), Professor Taylor certainly doesn’t use the phrase “no-feedbacks climate sensitivity”. He calls it “Model results for surface temperature corresponding to different values of Earth’s albedo, A, and different atmospheric CO2 mixing ratios”. The model results are surface temperatures, so one has to subtract to come up with climate sensitivity. So I think I fairly characterized his results. I have no reason to think that Taylor’s math will be found to be incorrect for the model he presents, but the final answer is absurdly wrong: a no-feedbacks climate sensitivity of 18 degK with 1/3 of this rise predicted for the “current” value of 367 ppm of CO2. He goes on to explain that some of the difference (20-50%) is due to thermal inertia of the ocean. Then he suggests that changing albedo could account for some of the discrepancy, using albedos of 0.25, 0.30 and 0.35! Albedo may be changing, but I’ve never seen changes this large.
As best I can tell, a significant amount of confusion and distortion in climate physics arises from models that treat emissivity as a constant that doesn’t vary with GHG concentration. This leads to the idea that increasing GHG absorb, but don’t emit, more outgoing radiation and thereby “trap” heat in the atmosphere. However, I’m no longer sure that this is the fundamental cause of Taylor’s misleading results.
Although we can usually count on professors of physics to get their mathematics correct, we apparently can’t count on them to be fully candid about the limitations of their work. In this case, the professor seems to be obscuring the limitations of his model with unreasonably(?) large changes in albedo and misdirection about climate sensitivity with and without feedbacks. I was unable to find what I understand to be the truth in his book – that the no-feedbacks climate sensitivity for 2X CO2 from our best models is about 1.2 degK, not 18 degK. Taylor does say that his result is six times larger than the IPCC’s value, but the IPCC’s value includes feedbacks (a concept Taylor introduces later).
Steven Schneider, a pioneering climate scientist, once said: “as scientists, we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts”. Then he went on to explain what type of activities were appropriate when talking to the public in order to “make the world a better place”, which includes “telling scary stories”. Is Table 7.3 presented in a manner appropriate for “science” or is it a “scary story”?
Frank is annoyed by the usage of “heat-trapping gases”. I googled for the term and found it being used by the EPA, the NYT and on Scott Mandia’s blog (I could go on). And here by Dessler, North et al.:
http://www.chron.com/disp/story.mpl/editorial/outlook/6900556.html
(from March 6, 2010)
“Heat-trapping gases are very likely responsible for most of the warming observed over the past half century”
Draw your own conclusions.
I don’t find that any more misleading than the term greenhouse effect. It’s an analogy. All analogies fail when examined closely. It’s like trying to explain all of quantum mechanics using only English. You can’t.
Frank,
Kirchhoff’s Law and Planck’s derivation of his own function also breaks down if there isn’t an actual black body in a perfectly reflective cavity.
See: http://www.ptep-online.com/index_files/2009/PP-19-01.PDF
Absent a hole, which acts as a source of radiation, or the presence of a black body, it’s not clear that a photon gas with Planck properties exists in a perfectly reflective cavity. Since a molecular gas is very far from being black or even gray, it would be surprising to me if the photon gas in a sealed reflective cavity containing only a molecular gas had a black body spectral distribution.
Science of doom
In my opinion, the use of the plane geometry, Figures 2 and 4 above, when studying the outgoing radiation is an unnecessary complication as compared to the spherical geometry representing the actual form of the atmosphere.
The plane geometry might be useful when studying the incoming radiation from the Sun, but in such a case one must take additionally into consideration the tilting of the absorbing surface in relation to the direction of propagation of irradiation and not only the increase of the path of propagation related to 1/μ, μ being given by [8].
In such an interpretation, Figure 2 would indicate that the radiation from the Sun is falling on the area dA*μ along the path s = z/μ (the arrow Iλ being directed down). The incoming radiation is absorbed, thermalized and emitted upward. It is sufficient to regard the propagation of the upward (long wave) radiation along the axis z, which is due to the spherical form of the surface of the Earth. This is similar to the case of studying the radiation from the Sun.
However, the calculations presented by “science of doom” are still valuable if one assumes that the parameter τ(z1,z2)/μ in [9] relates to n/μ, where μ is not necessarily a cos function (but nothing prohibits us from treating μ as a cos function, either). n/μ represents the change of n, as, for example, when enriching the atmosphere by a given kind of absorbing molecules as the result of human industrial activity.
In particular, the doubling of concentration of the absorbing molecules from the level n corresponding to τ(z1,z2) = 0.1 to the level 2n = n/cos(60) corresponding now to τ(z1,z2) = 0.2 will change the transmittance of the layer (z1,z2) from about t = 0.9 to about t = 0.82, see Figure 5. Certainly, when n goes to infinity, which corresponds to cos(90), the slab (z1,z2) will become completely opaque to the given wavelength.
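The transmittance figures quoted above follow directly from t = exp(-τ):

```python
import math

t1 = math.exp(-0.1)   # tau = 0.1  ->  t is about 0.905
t2 = math.exp(-0.2)   # doubled absorber, tau = 0.2  ->  t is about 0.819
```

Note also that doubling the absorber squares the transmittance (t2 = t1^2), which is just Beer’s law applied twice over the same path.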
One can relate the optical thickness to the distance (z1,z2) at which the transmittance drops to 1/e, as Frank had mentioned. In my opinion, the optical thickness should be related to the distance (z1,z2) at which the intensity of the signal drops to zero, which corresponds to the case when the intensity of the incoming radiation drops to the level B(λ,T).
Frank:
Thanks for sending me the relevant chapter from Taylor’s book.
Why don’t I write an article explaining many aspects? And then more people can learn and ask questions and, of course, criticize.
SOD: A separate post on how Taylor arrives at the results in Table 7.3 would be more useful than a discussion here. I initially grabbed his book to find an example of a misleading model with an optically thick slab atmosphere, but I don’t fully understand why his calculations appear to be so far off.
How about another post on the theme “GHGs redirect, but do not trap, infrared radiation”? Stratospheric cooling proves that “trapping” is not a general property of GHGs. With more GHG’s, there are more photons traveling shorter distances. It is easy to see why this cools the stratosphere, but less obvious why it warms the troposphere.
SOD wrote:
The absorptance of a gas, aλ = 1-tλ = emittance of a gas, ελ.
For a very small change in monochromatic radiation due to emission:
dIλ = ελBλ(T) .ds [10a]
Therefore:
dIλ = nσBλ(T) .ds [10b]
In equations 10a and 10b, you equate ελ and nσ. Emissivity is often a dimensionless number between 0 and 1. nσ appears to have units of inverse distance and the potential to be >1. What am I missing?
I think you missed
nσ = τ
For very small τ, a = ε = τ
DeWitt:
τ = ∫σn(s).ds [4]
So τ is not equal to nσ; dτ/ds = nσ. This affords the correct units (1/distance). The slope of a function is not equal to the value of the function, but when we are dealing with an exponential function like n(s), the slope is proportional to the value of the function. So we could say τ = knσ, but with no obvious reason to say that k=1. Am I being obtuse?
I went back to Petty on the derivation of the Schwarzschild equation. I think the confusion arises because SoD is using ελ and ε is usually emissivity. In fact, the emissivity in SoD’s notation is actually ελds, so ελ in SoD’s notation isn’t dimensionless; it also has dimension (1/distance). Petty avoids this confusion by using βa as the extinction coefficient. The absorptivity a is then βa ds. The extinction and emission coefficients are equal.
Which means ελ in SoD’s notation is the emission coefficient, not the emissivity.
Frank:
DeWitt Payne is spot on.
There are many ways to write some parts of the equations and I have generated confusion by a lack of clarity and especially by an incorrect definition at the start.
Many others might also be confused for different reasons.
Let me review and explain a few steps, including questions not asked – for others. Repeated content from the article in italics.
1. Early on:
..with n=number of absorbing molecules per unit distance, and σ=capture cross-section..
INCORRECT definition for n:
n = number of absorbing molecules per unit volume, so the units are 1/m^3.
σ = units are m^2.
** Article will be updated **
2. Therefore considering this first equation:
dIλ = -nσIλ .ds [1]
nσ .ds has units of 1/m^3 . m^2 . m = dimensionless
Of course, we need this to be dimensionless (ask if it isn’t clear why).
3.
The absorptance of a gas, aλ = 1-tλ = emittance of a gas, ελ.
Note the subscript λ is explaining that the relationship is ONLY true for monochromatic radiation (radiation at one wavelength). Often, in derivations, the λ subscript is dropped just to make the equations clearer (less clutter). I don’t do that here because it’s very easy for newcomers to think that an identity is true across all wavelengths.
So ελ = aλ = 1-tλ
Let’s calculate ελ across a very small distance, ds:
ελ = 1 – exp(-nσ.ds)
How do we manipulate this equation?
4.
Using the Taylor expansion – a mathematical manipulation method – we find that
exp(-x) ≈ 1-x, for very small x
And so ελ = 1 – exp(-nσ.ds) = 1 – (1 – nσ.ds) = nσ.ds
This expression is dimensionless, as already demonstrated, which is what we want.
5. Therefore:
For a very small change in monochromatic radiation due to emission:
dIλ = ελBλ(T) .ds [10a]
Therefore:
dIλ = nσBλ(T) .ds [10b]
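Step 4’s first-order approximation is easy to see numerically: the relative error of nσ.ds against the exact 1 − exp(-nσ.ds) shrinks with the path element. The values of n and σ below are illustrative only:

```python
import math

n, sigma = 1.0e19, 1.0e-22   # illustrative density (1/m^3) and cross-section (m^2)

errors = []
for ds in (1.0, 0.01, 0.0001):        # shrinking path element, metres
    x = n * sigma * ds
    exact = -math.expm1(-x)           # 1 - exp(-x), computed without cancellation
    approx = x                        # the first-order value n*sigma*ds
    errors.append(abs(exact - approx) / exact)
```

The leading error term is x/2, so each hundredfold shrink of ds cuts the relative error a hundredfold — which is why the approximation becomes exact in the limit ds → 0 used in the derivation.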
SOD and DeWitt: Isn’t ε dimensionless in this reply everywhere but equation 10a, suggesting something is wrong somewhere? There could be a problem with infinitesimals in step 3; one side of the equation is an infinitesimal and the other side isn’t. (If we limit the discussion to a single wavelength, the λs can be omitted for clarity.)
ε =? 1 – exp(-nσ.ds) (dubious, from SOD above)
dε =? 1 – exp(-nσ.ds) (this isn’t right either)
a = 1 – exp(-nσs) (homogeneous only)
ε = 1 – exp(-nσs) (Kirchhoff)
dε/ds = nσ*exp(-nσs)
dε = nσ*exp(-nσs) .ds
I = ε * B(T,λ). B(T,λ) is constant with position. If there is a dI with respect to position, there must be a dε. Therefore
dI = nσ*exp(-nσs) * B(T,λ) .ds (homogeneous only)
dI = nσ * B(T,λ) .ds (when s =0)
Before this reply, Equation 10a (with apparent units of 1/distance for ε) seemed to appear out of nowhere. In a previous (possibly erroneous*) comment, I made the assumption that Equation 10a was equivalent to a definition of emissivity written in differential form.
I = ε * B(T,λ) definition of emissivity
dI = ε * B(T,λ) .ds Eqn 10a
However,
dI/ds = dε/ds * B(T,λ) + ε * dB(T,λ)/ds
dI/ds = dε/ds * B(T,λ) + 0
dI = dε/ds * B(T,λ) .ds
Not: dI = ε * B(T,λ) .ds [Equation 10a]
When DeWitt tells me that SOD and I are using different meanings for ε, the second meaning could be dε/ds. For exponential functions like ε, dε/ds is proportional to ε, but not equal to ε. The units are also different. However, as best I can tell, SOD and I use the same ε everywhere.
*In my earlier comment, I assumed SOD had gotten Equation 10a by treating ε as a constant when writing a differential form of I = ε*B(T,λ). If he had properly differentiated both sides of this equation with respect to s and ε were constant, both dε/ds and dI/ds would have been zero.
Equation 10a appears to be wrong. I = ε*B(T,λ), but this equation can’t be differentiated with respect to s because ε = a and absorptivity/absorbance is certainly a function of s. (Yet another example demonstrating that “all” problems with climate physics originate with the assumption that emissivity is a constant for gases. 🙂 )
I = a * B(T,λ) = [1 – exp(-τ)] * B(T,λ)
Differentiating with respect to s:
dI/ds = B(T,λ) * exp(-τ) * dτ/ds
For an infinitesimal path ds, τ approaches 0 and exp(-τ) approaches 1. τ = Int[n(s)σ.ds], so dτ/ds = nσ
dI/ds = B(T,λ) * nσ
dI = nσB(T,λ) .ds (I = emission) [10b]
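Frank’s derivative can be verified with a finite difference at s = 0: the slope of I(s) = [1 − exp(-nσs)]·B(T,λ) there is nσB, which is equation 10b. Illustrative values only:

```python
import math

n, sigma, B = 1.0e19, 1.0e-22, 10.0   # illustrative values

def emitted(s):
    """I(s) = [1 - exp(-n*sigma*s)] * B for a homogeneous isothermal path."""
    return -math.expm1(-n * sigma * s) * B   # expm1 avoids cancellation for tiny s

h = 1.0e-6                                    # small path element, metres
numeric_slope = (emitted(h) - emitted(0.0)) / h
analytic_slope = n * sigma * B                # the n*sigma*B(T,lambda) of eq. 10b
```

The finite-difference slope matches nσB to better than one part in a million, supporting the claim that 10b holds exactly in the limit of an infinitesimal layer.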
But your ε is not the same as SoD’s. You’re using ε to mean emissivity. SoD isn’t. The integrated form of 10a in SoD’s terminology assuming n is constant in the layer and the layer thickness is s:
I = [1 – exp(-ελs)]Bλ(T)
I don’t know of anyone other than he who shall not be named and his followers that think that emissivity isn’t a function of path length.
Above, I tried to offer a derivation of Equation 10b without going through Equation 10a. Unfortunately, I may have made the explanation too short when editing. Expressing Kirchhoff’s Law for a volume of gas gives:
a = ε
a = 1 – I/Io = 1 – exp(-τ)   (I = transmitted light)
ε = I/B(T,λ)   (I = emitted light)
1 – exp(-τ) = I/B(T,λ)   (I = emitted from here on)
I = a*B(T,λ) = [1-exp(-τ)]*B(T,λ) “Kirchhoff’s Law”
Differentiating with respect to s:
dI/ds = B(T,λ) * exp(-τ) * dτ/ds
For an infinitesimal path ds, τ approaches 0 and exp(-τ) approaches 1. τ = Int[n(s)σ.ds], so dτ/ds = nσ
dI/ds = B(T,λ) * nσ
dI = nσB(T,λ) .ds (I = emission) [10b]
Frank,
On further consideration, you’re right and 10a is wrong. ελ is defined as emittance which is a function of path length and is dimensionless, so its appearance undifferentiated in 10a is wrong.
In fact ελ is emissivity rather than emittance. Emittance is equal to:
ελBλ(T)
and has units of W/m2.
A convert! Your comments drove me to make a cheat sheet with all of the terms, defining equations and units. If logic prevailed, emittance would be a synonym for emissivity.
I’ve always thought of absorption cross-section in units of m^2/molecule rather than SOD’s m^2, and density in terms of molecules/m^3 rather than 1/m^3. “Molecule(s)” could be individual molecules, moles, grams, mixing ratio*density, etc. Are SOD’s units the traditional ones for this field?
Absorptance and Transmittance are ratios, so they’re equivalent to absorptivity and transmissivity. Emittance isn’t a ratio. So SoD’s statement that:
“The absorptance of a gas, aλ = 1-tλ =emittance of a gas, ελ.”
is wrong. It’s absorptivity equals emissivity. So 10a is wrong and unnecessary. It should be eliminated and 10b renamed 10.
Petty doesn’t use absorptivity and emissivity when he derives the Schwarzschild equation. He uses the extinction coefficient. Emittance isn’t even in the index, just emission and emissivity.
Just went back and read that thread, DeWitt Payne. Nyarlathotep’s balls, He Who Shall Not Be Named was a ===== (moderator’s note, please observe the Etiquette, painful though that might be).
A certain regular commenter took him seriously, too.
What happened to Mark? Would be sad if he gave up reading this blog.
SoD,
You say:
“The value σ is a material property and so constant for one gas at one wavelength.”
and:
“The intensity of radiation at wavelength λ is reduced as it travels through the atmosphere according to the total amount of the absorber along the path.”
The period at the end of your statements suggests that this is the full explanation. But I think you should say a word about the fact that σ = σ (T, p).
Therefore I think the full (I mean, the correct) statements would be these:
“The value σ is a material property and so constant for one gas at one wavelength at one temperature and one pressure, but varies with the temperature and pressure of the gas.”
and:
“The intensity of radiation at wavelength λ is reduced as it travels through the atmosphere according to the total amount of the absorber along the path, and according to the temperature and pressure distribution of the gas along the path.”
Miklos Zagoni:
You are correct and I have added this note to the article.
I am working on another article to explain this subject more fully.
Frank on February 16, 2011 at 7:14 pm (and also DeWitt Payne on his followup).
Sorry it has taken a few days to respond. Unfortunately it has been a very busy week and some questions need mental energy that has been in short supply.
For some reason I started using the term emittance instead of emissivity.
I have no idea why – except that I was focused on the maths. Emittance is definitely the wrong term.
I will update the article soon and also address your maths points at the same time.
[…] as transmittance decreases. Check out the heading Optical Thickness & Transmittance in Part Six if this isn’t […]
[…] previous articles in this series I created two fictitious molecules, pCO2 and pH20, and solved the radiative transfer equations for a variety of conditions for these two molecules through the […]
On another blog I have said that I will provide evidence that the article writer has made false claims.
Here is an extract from Grant Petty, A First Course in Atmospheric Radiation, demonstrating that the atmosphere is not considered as a “blackbody” with an emissivity = 1:
Referencing my comment above, here is an extract from Radiation & Climate, Vardavas & Taylor (2007):
Note the highlighted equation.
The term S is the “source function” – which, when the atmosphere is in local thermodynamic equilibrium (LTE), is equal to the Planck function.
So if the emissivity were assumed to be 1 – a blackbody – then this would NOT be multiplied by the emissivity term, exp(-(t2λ-t1λ)/μ).
And by the way, this equation is the solution at one wavelength.
Finally following up on the earlier comments (e.g. February 20, 2011 at 3:18 am) about the correct terms.
Having consulted 5 textbooks covering a period of 4 decades, it appears that the term emittance is a problem term, used differently by different authors.
There is no confusion in the basics of heat transfer in these books, as they define their terms.
Emittance – if consistency was observed – would relate to emissivity the same way that transmittance relates to transmissivity.
Essentially, both transmittance and transmissivity are the ratio of transmitted to incident radiation; the first is the extensive term, the second the intensive term.
Following this convention – as some have tried to do including Lienhard (2008) – emittance would be the fraction of radiation from a real surface as a proportion of blackbody radiation.
However, most people have come to use emittance as the actual emitted flux.
Therefore, I will use emissivity and am correcting the text accordingly.
So as a result we have “Absorptance = Emissivity” which I have changed to “Absorptivity = Emissivity” so as not to confuse readers more than necessary(?).
[…] Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations – the actual equations of radiative transfer (no blackbodies or Stefan-Boltzmann equations to be seen) […]
[…] Note 1: Optical thickness is proportional to the number of absorbers (molecules that absorb radiation) in the path. So as the atmosphere thins out the density reduces and, therefore, the optical thickness must also reduce. You can read more about the equations of optical thickness in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations […]
[…] The only way to answer these questions is to solve the Schwarzschild equation – see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. […]
[…] can find a more complete explanation of optical thickness in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations, which I definitely recommend reading even though it has many equations. (Actually, because it has […]
[…] calculate this value, you need to solve the radiative transfer equations, shown in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. These equations have no “analytic” solution but are readily solvable using numerical […]
[…] simple – and is not used to really calculate anything of significance for our climate. See Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations for the real […]
I think the Greenhouse effect is much simpler than this explanation.
The essential simplified model for understanding ‘global’ warming from the Greenhouse effect is that of a ‘filtered blackbody’ (fBB).
Imagine a theoretical blackbody (BB) at 255K. It is emitting power right across the theoretical BB spectrum.
Now what happens if you wrap the BB with a filter that covers the 600-700 wavenumber range?
The fBB quickly changes to a new dynamic thermal equilibrium: it emits more radiation over the non-filtered wavenumbers but overall emits the same total power as before.
So it ‘looks’ to an outside observer like a BB at 295K but with a chunk of the spectrum missing. And at the macro (global) level this is pretty much what would happen.
You would be able to say that the ‘Greenhouse’ effect results in a global temperature rise of 40K.
Now look at an emissions spectrum for Earth.
This looks pretty much like an fBB to me. Visually you can see that it looks like something at about 295K with some ‘chunks’ missing and has the equivalent power output of a true BB at 255K.
Next question is what would be the impact on the above graph of doubling CO2 concentration in the atmosphere?
I haven’t been able to find an answer, but what I have seen so far indicates, well, not much would happen. Absorption at CO2 and H2O wavenumbers is near saturated and while there might be an effect in the ‘wings’ of the CO2 absorption spectrum this would be very minimal.
Apparently some radiance might be filtered out round about wn740 … but how much would the power output across the rest of the spectrum need to increase to compensate?
We would be talking about a very minor impact indeed on the total Greenhouse effect.
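The fBB picture is easy to put to a numerical test. The sketch below (illustrative only: a perfectly opaque filter over 600-700 cm⁻¹ and nothing else in the spectrum) finds the temperature at which the filtered body emits the same total power as the unfiltered 255 K blackbody, so a reader can check the claimed numbers for themselves.

```python
import numpy as np

H, C, K, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def integrate(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def planck_cm(nu_cm, T):
    """Blackbody spectral exitance in W/m^2 per cm^-1 at temperature T."""
    nu = nu_cm * 100.0  # cm^-1 -> m^-1
    return 100.0 * 2.0 * np.pi * H * C**2 * nu**3 / np.expm1(H * C * nu / (K * T))

nu = np.linspace(1.0, 3000.0, 30000)     # wavenumber grid, cm^-1
blocked = (nu >= 600.0) & (nu <= 700.0)  # the hypothetical opaque filter band

def filtered_power(T):
    """Total emitted power with the 600-700 cm^-1 band fully blocked."""
    spec = planck_cm(nu, T)
    spec[blocked] = 0.0
    return integrate(spec, nu)

target = SIGMA * 255.0**4                # power of the unfiltered 255 K blackbody
# Bisect for the temperature at which the filtered body emits the same total
T_lo, T_hi = 255.0, 400.0
for _ in range(60):
    T_mid = 0.5 * (T_lo + T_hi)
    if filtered_power(T_mid) < target:
        T_lo = T_mid
    else:
        T_hi = T_mid
T_new = 0.5 * (T_lo + T_hi)
```

Printing T_new tells you where the filtered equilibrium actually lands, which is the number to compare against the 295K claim.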
Flash:
It depends on what you mean by “not much”.
The answer is given in CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers.
An insight into the effect of CO2 across its spectrum is shown (along with the calculations) in Part Nine – you can see that absorption is not “saturated” along with the calculation of transmittance through the atmosphere with a doubling of CO2.
And you can find explanation about the question of “saturation” in CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.
There’s some complex theory on ‘wings’ and ‘shoulders’ and stuff. A bit over my head, but the general gist seems to be a widening of the potential absorption band by a little bit (we’re talking 2-15 wavenumbers here depending on source and only in one direction because the other way we’re into the already saturated H2O spectrum).
In practice, there are some observations of the change in the ‘filtered’ spectrum between 1970 and 1997. Some of that was due to increased H2O absorption but some apparently due to the CO2 ‘wing’ effect in wavenumbers ~735 (so it would appear to be a real and observable phenomenon). But not by a lot (a change in intensity over a narrow band of wavenumbers by no more than 1.5 Teff). This was from a ‘warmists’ site so I assume is the worst case … reality might be less. Dunno, needs more observational data.
I took the suggested extra absorptions over the 27 years (1970-1997) and then multiplied them by 10(!) to represent the standard CO2 doubling and then factored them in to a new Greenhouse Effect as per my filtered Blackbody (fBB) model. Net impact was about 0.2°C.
Does anyone have a model of the emissions spectrum they think that Earth will have if CO2 doubles? That will tell us all we need to know.
Yes, see the above link in my earlier comment: CO2 – An Insignificant Trace Gas? – Part Eight – Saturation.
See Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Twelve – Curve of Growth
And also: Part Nine.
Hi SoD,
Lots of (probably interesting) but not overly relevant stuff.
My question is … can you show me what the models say the emissions spectrum of Earth looks like with double 1970’s CO2?
Take this as a reference point …
I want something that shows how this changes. How does this change if there is 2*70’s CO2?
This is the ONLY thing that tells me what the impact of CO2 is on the Greenhouse Effect.
Thanks,
Andrew
Hi SoD,
Yeah. So, okay, the ‘Doppler’ effect as the recipient moves forward and backward smears the absorbable quantum wavenumber out a bit. It still does not open up much of a ‘wing’ in the filtered spectrum. Doesn’t this just affect absorption in neighbouring wavenumbers?
As per my fBB model Greenhouse warming is a factor of the total absorbed spectrum ‘v’ the unabsorbed ‘window’.
Flash:
This was in the link of “not overly relevant stuff”.
It is relevant because it directly answers your question.
Strictly speaking it doesn’t actually answer the question you asked – because you asked about doubling 1970’s CO2. Whereas the question everyone else poses and considers (for consistency) is what happens when we double CO2 from pre-industrial levels.
I picked 1970 as a base year because we seem to have some records from then!
No point starting with a point we cannot measure reliably.
Pre-industrial we don’t have any records of what Earth’s transmission spectrum looked like.
The diagrams above are not an Earth’s emission spectrum … which is what I am looking for.
Does anyone have a model emissions spectrum of what they think it will look like if CO2 levels go up?
If the answer to this is NO, no-one has ever done this that would be interesting too.
A.
[…] of 289K. The diffusivity approximation was used to estimate total hemispherical transmittance (see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations). The wavenumber step, Δν = 1 cm-1. The calculations were done from 100 cm-1 to 2500 cm-1 (4μm […]
[…] the atmosphere is not gray so this is not a simple problem, but it can be solved using the radiative transfer equations with numerical […]
[…] Radiation does go in all directions. The plane parallel assumption has very strong justification and – in simple terms – mathematically resolves to a vertical solution with a correction factor. You can see the plane parallel assumption and the derivation of the equations of radiative transfer in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. […]
[…] What’s the Palaver? – Kiehl and Trenberth 1997 Do Trenberth and Kiehl understand the First Law of Thermodynamics? Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The… […]
SoD
Click to access Beer-Lambert.pdf
I wrote a paper on my doubts about the Beer-Lambert hypothesis and the resulting Schwarzschild equation.
JWR,
It’s nice to see someone attempt to address the real physics.
I haven’t been very active on the blog for some months due to “real life” but will take some time to review your paper.
Being curious about how anyone could dispute the (>150 year old) Beer-Lambert law of absorption I had a quick scan and noticed you said:
Yet the equation has the dependent term Iλ, which is the source radiation, which depends on the temperature of the source.
Do you have a comment on this simple point?
I will comment more fully in the next few days.
Beer-Lambert law gives directly relevant results under very specific circumstances only. What’s required is that:
1) radiation is monochromatic or the absorptivity of the absorbing material is independent of the wavelength over the range of wavelengths considered.
2) emission by the absorbing material can be disregarded as very weak in comparison to the incoming radiation that passes trough the full thickness of absorber
The second condition is commonly true for SW radiation (visible, UV, X-ray, gamma) but not for LWIR.
When LWIR in the atmosphere is considered, no serious source assumes that Beer-Lambert is directly applicable. Specifically, this series of posts on understanding atmospheric radiation is not in the least based on assuming applicability of the Beer-Lambert law.
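Both of these conditions can be seen in one line for an isothermal slab in LTE: the Schwarzschild solution is the attenuated incident beam plus the slab's own emission, and Beer-Lambert is the B → 0 special case. A minimal sketch:

```python
import math

def slab_intensity(I0, B, tau):
    """Schwarzschild solution for an isothermal slab in LTE: the attenuated
    incident beam plus the slab's own emission (Planck source function B)."""
    return I0 * math.exp(-tau) + B * (1.0 - math.exp(-tau))

def beer_lambert(I0, tau):
    """Pure Beer-Lambert attenuation: the B -> 0 limit of the slab solution."""
    return I0 * math.exp(-tau)
```

For SW radiation B is negligible next to I0 and the two coincide (condition 2); for LWIR the slab's own emission dominates an optically thick path and Beer-Lambert alone is useless.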
The Beer-Lambert Law originated from observations in the laboratory. The derivation came later. Are you saying then that spectrophotometric analysis is somehow flawed? That would be news to a lot of chemists.
@DeWitt
The discussions with SoD are already much further.
Try to give a positive contribution.
JWR,
Here’s a positive contribution: Do a bit more research before you start spouting off about things you clearly don’t understand very well.
@SoD
in the Beer-Lambert constitutive equation, which I write here as
dI_lambda = -N*alpha_lambda*I_lambda
it is generally understood that I_lambda is the current value at point z.
I forgive you for the remark since you did it in a hurry.
@Pekka
Instead of alpha_lambda you can write alpha, assuming that the absorption is independent of wavelength.
In the time you and your CPU are spending with the “code” (ref [5]) as described in the post, I see tau and exp(-tau) all over the place!
JWR,
tau and exp(-tau) may appear in numerous texts that try to explain what’s going on in the atmosphere, but they are not used in the actual calculations. It’s not uncommon that a detail of the final results of such an actual calculation is expressed using these concepts through a backwards calculation.
JWR,
Iλ(z) is the current value of intensity at point z. You are correct.
It came from a source and therefore at its starting point, 0, the intensity Iλ(0) depended on the source temperature. During its journey it is attenuated by the absorptivity of the absorbing material and this is not dependent on the temperature of the absorber.
Therefore Iλ(z) has a dependency on the temperature of the source (the Planck function for the source temperature at that wavelength x the emissivity of the material at that wavelength), and not on the temperature of the absorber.
So back to your statement:
You seem to have misunderstood the Beer-Lambert hypothesis. The Beer-Lambert hypothesis does not make this claim.
@SoD
Thank you for your explanation that I(z) does not represent the real intensity at level z but that part which remains of the surface flux of the Prevost type I(0) = eps*sigma*T^4.
Our different interpretations are the reason for this misunderstanding. You think in terms of Prevost fluxes up and down; I make my reasoning according to the generalized Stefan-Boltzmann equation as explained in my reference [3]. I should have interpreted your I(z) in the way you use it. Sorry for that.
What I discovered from our discussion is that the Schwarzschild approach with the Beer-Lambert relation is in fact the same as my chicken-wire approach [1] and [2].
The Beer-Lambert relation gives the distribution of the absorbers based on a geometrical cross-section. Discretized by means of nodes in the z-direction, it represents the distribution, and the thickness of the wires the absorption coefficient f. No further physics involved.
The quantification of the emission = absorption is described by the Planck function in the Schwarzschild approach:
exchange of heat between nodes: sum(B(Ti) - B(Tj))
Though it is not written down explicitly, SoD starts from the Prevost upward flux Iup(0), and calculates the part of that Prevost surface flux which remains at position z, using the geometrical absorber distribution and Planck absorption (= emission):
Iup(z) = Iup(0) - function1(B)
And the same for Idown(z) = Idown(TOA) + function2(B).
In fact the exchange of heat becomes in some way:
function(Bi - Bj)
In the chicken-wire model the exchange of heat between stations i and j is calculated immediately by sum(fi*fj*(Ti^4 - Tj^4))
The CPU consumption of the SoD code is reported to be significant; the chicken-wire model is very fast: on my laptop the results appear immediately. The SoD code of course uses a lot of CPU extracting data from HITRAN. But the SoD code is also not very strict in saving work. In fact for Iup and Idown the contribution of the Planck function B is the same (it is written in a comment that they are the same, but it has been programmed twice in order to change it in the future).
My suggestion to SoD is to look for the similarity in the variation of Iup and Idown, and indeed look for pairs in the process such that somewhere B(Ti) - B(Tj) appears, so that for Ti = Tj there is no exchange of heat, according to the second law.
CONCLUSION
I will change the title of the paper about the Beer-Lambert law. It is indeed the same as my chicken-wire representation.
The Schwarzschild equation was a nice way in 1905 to attack the problem by means of an analytical solution. The integral in that solution had to be determined by numerical methods, but you can play with it, for teaching purposes, by inserting functions B which can be integrated analytically.
The introduction of auxiliary variables Iup and Idown is not necessary anymore in 2013. We can write down immediately the discretized equations for the heat balance in a node. When there is a discrepancy in the radiation heat balance of a node we know it is from convection. [2] and [3].
In September I will publish the MATLAB program (2 A4 pages) with the overlaying finite elements [2].
JWR,
The Matlab code of SoD described in
https://scienceofdoom.com/2013/01/10/visualizing-atmospheric-radiation-part-five-the-code/
and discussed in the whole series of postings is essentially written to perform the kind of calculation you are planning to present. It may be a little more complex as it takes into account the detailed absorption spectrum of all atmospheric gases that absorb and emit IR at a significant level. The basic idea is, however, the same.
@SoD
I have used your explanation concerning Beer-Lambert to correct my earlier paper. Thank you once more.
Click to access planckabsorption.pdf
In the above link I have studied in more detail your equations and your code.
I repeated your analysis for the fictive upward flux , and I added a similar one for the fictive downward flux.
I think I have detected an error, in the sense that two different tau distributions should be used: one for the upward flux and another for the downward flux. The error is easy to correct, as I have indicated in the paper.
I have also included an appendix, in which for a planet with an atmosphere modelled with two screens, the two-way Schwarzschild formulation and the one-way finite element formulation indeed give the same equations.
The conclusion is simple: back-radiation is not a physical phenomenon and the surface is not radiating very much, apart from the window.
Convection of sensible and latent heat to higher levels is the mechanism.
At higher levels IR-sensitive molecules emit the heat to outer space.
JWR,
If that’s the case, then please explain to me how an IR thermometer actually works and why when you point it at the sky, you measure an apparent temperature.
Also, convection from the surface to the atmosphere is mostly latent heat transfer from the evaporation and condensation of water. But the scale height of water vapor in the atmosphere is only 2 km compared to the scale height of all the other gases of 8 km. There simply isn’t enough physical circulation of the atmosphere present to transport the quantities of energy involved.
correction
dI_lambda = -N*alpha_lambda*I_lambda*dz
@SoD
Take your time.
For me it is now Tuesday 20h00
I am leaving for a family trip early Thursday morning.
I will be back on september 10
I take my laptop with me but I don’t know whether I find time to answer your eventual comments.
@DeWitt
Usually I write “back-radiation of heat” does not exist.
I forgot to add “of heat” in the answer to SoD.
In my papers, in particular ref [3], I give the generalized Stefan-Boltzmann equation which gives the heat flow between two surfaces with temperatures and emissivities (T1, eps1) and (T2, eps2) respectively:
q = eps1,2*sigma*(T1^4 - T2^4)
1/eps1,2 = 1/eps1 + 1/eps2 - 1.
In plain words: two remote surfaces exchange information with each other about their temperatures and their emissivities, on the basis of which a heat flow is exchanged from the warmer surface to the colder.
There is no back-radiation of heat from the colder surface to the warmer.
See also the examples given in my ref [3].
The pyrgeometers use above equation.
Surface 1 is the sensor surface of the pyrgeometer,
with known temperature T1, known eps1, known sigma,
known electrical input q.
Unknowns are the data to be measured of a remote surface: eps2 and T2.
The manufacturers are clever enough to include e.g. an eps2 in the chip to measure the remote T2, or they make two measurements with a different set (T1, eps1, q).
That is the choice of the different manufacturers.
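The inversion described here can be sketched in a few lines, taking the quoted eps1,2 combination at face value (the temperatures and emissivities in the test are hypothetical numbers for illustration):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def remote_temperature(q, T1, eps1, eps2):
    """Invert q = eps1,2 * sigma * (T1^4 - T2^4) for the remote temperature T2,
    given the sensor temperature T1, the two emissivities, and the measured
    net heat flow q (positive from sensor to remote surface)."""
    eps12 = 1.0 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
    return (T1**4 - q / (eps12 * SIGMA)) ** 0.25
```

Running it forward and then backward (compute q from a known T2, then recover T2 from q) confirms the algebra is self-consistent, whatever one thinks of the interpretation.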
As concerns your remark that there is not enough convection available to evacuate 109 Watt/m^2, that remark is addressed just as much to all the others, who do have back-radiation of heat.
I only find from my analyses that there is no back-radiation of heat, and that the LW emission is much smaller. In fact the K&T diagrams of NASA say the same.
JWR,
In the spirit of classical thermodynamics your way of describing radiative heat transfer makes sense, but that’s not a valid argument against anyone who prefers to consider separately radiation from body A to body B and from B to A. Both approaches are correct when used systematically. Thus you are justified in saying that you can describe these processes without back-radiation, but you are wrong in saying that back-radiation does not exist even if you add “of heat” to that.
Classical thermodynamics is not the only truth, it’s just one way of describing those phenomena that it can describe. It cannot describe all important phenomena, therefore other theories are in many ways better.
It’s easier to do the calculations treating the back-radiation as another subprocess. It’s not necessary to know where the radiation is going to calculate the emission from a surface. This is a huge advantage in practice, and not the only advantage. Therefore no working scientist follows your approach.
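The point that both bookkeepings are correct when used systematically can be demonstrated for two infinite parallel gray plates: iterate the explicit two-way exchange (each plate emits and reflects) and compare with the closed-form relation quoted earlier. A sketch:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def net_flux_closed_form(T1, eps1, T2, eps2):
    """Net heat flow written directly, as in the relation quoted above:
    q = eps1,2 * sigma * (T1^4 - T2^4), 1/eps1,2 = 1/eps1 + 1/eps2 - 1."""
    eps12 = 1.0 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
    return eps12 * SIGMA * (T1**4 - T2**4)

def net_flux_two_way(T1, eps1, T2, eps2, n_iter=200):
    """The same net flow from explicit two-way bookkeeping: each plate's
    radiosity is its own emission plus the reflected part of what arrives
    from the other plate; the net flow is the difference of the two
    one-way fluxes."""
    J1 = J2 = 0.0
    for _ in range(n_iter):  # fixed-point iteration on the radiosities
        J1 = eps1 * SIGMA * T1**4 + (1.0 - eps1) * J2
        J2 = eps2 * SIGMA * T2**4 + (1.0 - eps2) * J1
    return J1 - J2
```

The two functions return identical numbers for any temperatures and emissivities: the "back-radiation" flux in the second version is a piece of bookkeeping that cancels out of the net flow, not a violation of anything.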
JWR,
So all that money spent on the SURFRAD ( http://www.esrl.noaa.gov/gmd/grad/surfrad/ ) network is wasted? They aren’t really measuring anything?
But forget pyrgeometers and absolute cavity radiometers. How about an FT-IR spectrophotometer such as this one: http://www.arm.gov/instruments/aeri ? How come you can point one at the sky and record a spectrum if there is no such thing as back radiation? Even better, you can calculate that spectrum using a line-by-line radiative transfer program with the HITRAN molecular spectroscopy database and have the measured and calculated spectra agree within a few percent.
JWR,
Something else you might want to consider is the Rosseland diffusion approximation for radiative energy transfer in optically thick media. It’s used to model energy transfer in solar atmospheres.
JWR from 2013/09/13 at 7:26 pm:
I had a look at your paper.
This, as in the previous version of your paper, highlights a question. The answer to this basic question is not clear from your paper (at least not to me) so I hope you can resolve my confusion via this simple thought experiment.
We have a box containing a gas, and from left to right the transmissivity at wavelength λ is 0.5. Scattering is negligible at this wavelength.
1. I propose, therefore, that this box has a transmissivity from right to left of 0.5 at this same wavelength.
– True or False?
2. When we shine a beam from left to right, I(lr)λ, the absorption of this beam is 50% and the transmission is 50%. (Zero reflection).
– True or False?
3. With condition 1 & 2 in place, we now shine a beam, of the same wavelength, from right to left, I(rl)λ. The absorption of this beam is 50%, the transmission is 50%.
– True or False?
4. As a result of condition 3, the total absorption is double the absorption as a result of condition 2.
– True or False?
SoD
I made an update of the paper with an appendix 1, in which I compare analytically, for an atmosphere consisting of two layers, the two-way Schwarzschild formulation with the one-way finite element formulation.
I address there the absorption of the flux.
In order to get a real flux I have to subtract: U-D.
I think I have to subtract the downward absorption from the upward absorption! The upward flux U is too high, by an amount D, and the absorption due to D has to be subtracted in order to get the real absorption of (U-D).
At least I go in the direction of the one-way absorption from the FE formulation.
Your box thought experiment is therefore more or less treated in the appendix of the version of 19 September.
Click to access planckabsorption.pdf
JWR,
Perhaps it’s only clear to you, but Appendix 1 does not answer the above questions.
It raises more questions.
5 (continuing my numbering from earlier) Is Appendix 1 the basis of the paper (the premise), or does it follow from the paper?
6. a) You list, in Appendix 1 in figure 1: Imaginary downward flux and Imaginary upward flux. Do you think flux is not a real physical property?
b) You then state, in Appendix 1, This OLR is in fact the only real value in the Schwarzschild formulation. However you have just claimed it. How did OLR go from “Imaginary upward flux” whatever that is, to being a real physical property, while “Imaginary downward flux” whatever that is, is treated as still imaginary at the end?
You have a curious statement: U3 [equation] ..real value for U3 since D3=0. What is the physics basis for this claim?
And just to help newcomers who might have reviewed your paper, one of the terms in the imaginary upward flux (then real OLR) is f3.θ3. One of the terms in the imaginary downward flux (then still imaginary back radiation) is also f3.θ3. Both are the emission of radiation from node 3 in this simplified atmosphere. One is the emission upward from the node, one is the emission downward from the node.
c) Are both real, both imaginary, or somehow does the node radiate something real upwards and imaginary downwards?
When claiming a serious attempt at overturning basic physics it is important to be clear and precise about the basis for each step in the formulation.
And this is why I asked my questions 1-4 on September 21, 2013 at 10:14 am – because I need to understand what you believe about emission and absorption of thermal radiation.
7. You state:
– what do you mean by this and what is the basis for this claim?
And perhaps it is all made clear by the preceding statement:
This is completely false.
Absorption of radiation is absorption of radiation. Downward radiation which is absorbed in a body is added to upward radiation which is absorbed in the same body.
[ Clarification note added a few minutes later in case my writing wasn’t clear: Downward radiation absorbed in a body is added to absorbed upward radiation in the same body.]
This is the essence of the first law of thermodynamics.
It’s pointless explaining what’s wrong with your paper if you don’t agree on basic premises of physics – so please take a few minutes to help me, and others understand – by answering questions 1-4 from earlier, and these questions from today.
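On the standard physics described here, the answers to questions 1-4 are all "True", and the bookkeeping fits in a few lines (a sketch with a hypothetical transmissivity of 0.5 and zero reflection, as in the thought experiment):

```python
def total_absorbed(I_lr, I_rl, transmissivity=0.5):
    """Energy absorbed by the gas from two independent beams through the box.
    The transmissivity is the same in both directions (questions 1 and 3),
    there is no reflection, so absorptivity = 1 - transmissivity, and the
    absorbed energies simply add (question 4, first law of thermodynamics)."""
    a = 1.0 - transmissivity
    return a * I_lr + a * I_rl
```

With the second beam switched on, the total absorption doubles, which is question 4.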
Thank you for having studied the appendix in detail.
It is completely my fault if I have not been able to anticipate your questions and remarks.
I will repeat your remarks and questions, and answer them by putting JWR in front and finishing with endJWR. I tried italics and bold and colours but it did not work.
Perhaps it’s only clear to you, but Appendix 1 does not answer the above questions.
It raises more questions.
5 (continuing my numbering from earlier) Is Appendix 1 the basis of the paper (the premise), or does it follow from the paper?
JWR
Appendix 1 is a comparison via an analytical example for 2 layers.
Analytical because for 2 layers we can follow the numerical process and express it in algebraic form.
endJWR
6. a) You list, in Appendix 1 in figure 1: Imaginary downward flux and Imaginary upward flux. Do you think flux is not a real physical property?
JWR
A flux is of course a real quantity.
When I put the adjective imaginary in front of it, it is no longer real.
endJWR
b) You then state, in Appendix 1, This OLR is in fact the only real value in the Schwarzschild formulation. However you have just claimed it. How did OLR go from “Imaginary upward flux” whatever that is, to being a real physical property, while “Imaginary downward flux” whatever that is, is treated as still imaginary at the end?
You have a curious statement: U3 [equation] ..real value for U3 since D3=0. What is the physics basis for this claim?
JWR
The imaginary downward flux is positive when going down.
The imaginary upward flux is positive when going up.
The real flux is therefore U – D.
At the TOA, OLR = U3 - D3 = U3 (since at TOA the incoming D3 = 0).
Indeed the OLR, calculated by the two-way Schwarzschild formulation, is equal to the OLR calculated by the one-way FE formulation.
endJWR
And just to help newcomers who might have reviewed your paper, one of the terms in the imaginary upward flux (then real OLR) is f3.θ3. One of the terms in the imaginary downward flux (then still imaginary back radiation) is also f3.θ3. Both are the emission of radiation from node 3 in this simplified atmosphere. One is the emission upward from the node, one is the emission downward from the node.
JWR
Our discussion is about the proposal of Prevost, who in 1771 suggested that every surface emits as if it were looking at outer space at zero K.
Fourier at that time did not agree.
We are now 250 years further.
And the discussion still goes on.
You are a follower of Prevost.
I am more inclined to Fourier, by saying that the real emission depends also on the temperature of what the surface is looking at.
If you are interested in the history of back-radiation go to www.tech-know-group.com, where you can find papers of Keespies.
This more or less answers also your question 6c).
endJWR
c) Are both real, both imaginary, or somehow does the node radiate something real upwards and imaginary downwards?
When claiming a serious attempt at overturning basic physics it is important to be clear and precise about the basis for each step in the formulation.
And this is why I asked my questions 1-4 on September 21, 2013 at 10:14 am – because I need to understand what you believe about emission and absorption of thermal radiation.
JWR
See also my answer to 6b)
I believe that the generalized Stefan-Boltzmann equation (10) is the real law for heat exchange between two walls.
The most transparent way is described in my ref [3] where I have used it.
With SB written as eps1,2*sigma*(T1^4 – T2^4), see also Wikipedia, that relation can only be explained in words by: surfaces at a distance exchange information with each other concerning their temperatures and their surface conditions, on the basis of which a heat flow is established from the warmer surface to the colder one. No heat is going from the colder to the warmer. Only information between the plates, thus also information from the colder surface to the warmer one, but no heat from the colder to the warmer.
As already said, in my ref [3] I give the examples from Christiansen 1883, but you can find it in Wikipedia.
endJWR
7. You state:
Absorption remains a problem anyhow with the Schwarzschild formulation.
– what do you mean by this and what is the basis for this claim?
JWR
As said earlier, in order to be able to follow the numerical processes,
in appendix 1 I give an example of an atmosphere represented by two nodes.
I follow the fluxes in both the one-way finite element formulation and the two-way Schwarzschild formulation.
I compare the quantities U-D , for TOA and for the surface, between the two formulations. OLR at TOA is the same.
At the surface U1-D1 is nearly the same as qs in the one-way FE formulation, but a factor f1 makes a slight difference (for f1 ≠ 1).
I also compare the absorption in the two formulations, and I see differences.
(I jump now to your next statement)
endJWR
And perhaps it is all made clear by the preceding statement:
The absorption of the downward flux should therefor be subtracted from the upward flux absorption!
This is completely false.
Absorption of radiation is absorption of radiation. Downward radiation which is absorbed in a body is added to upward radiation which is absorbed in the same body.
This is the essence of the first law of thermodynamics.
It’s pointless explaining what’s wrong with your paper if you don’t agree on basic premises of physics – so please take a few minutes to help me, and others understand – by answering questions 1-4 from earlier, and these questions from today.
JWR
Now we start to have more serious different interpretations.
I say explicitly that in order that the absorption of the two-way formulation goes in the direction of the absorption of the one-way formulation, I have to subtract the absorption of the downward flux D in formulation 2way from the absorption of the upward flux from formulation 2way.
Also to me, at first sight, that looked strange!
But it is not: the upward flux is too high, i.e. it is higher than the real flux, because we have to subtract the downward flux: real flux = U - D.
In other words the absorption of the upward flux U is too high!
When I subtract the absorption of the downward flux D,
then the difference of the two absorptions is closer to that of U-D.
And indeed I get the same expression as for the FE formulation, except of course the difference due to the imaginary back-radiation.
And – strangely enough – the back-radiation flux of formulation 2way appears in the analytical expression for total absorption by means of the one-way FE formulation 1way!
You say that it is the essence of the first law!
But the splitting up of the real flux into U and into D is not very physical.
It is more a mathematical trick to be able to write an analytical solution.
That is the reason I call them imaginary fluxes; only their difference can give real physical values.
Or sometimes I also call them emissions of the Prevost type, which are only real when the surface emits to outer space at zero K.
I am not going to repeat here the paper.
I hope that I have given now enough arguments and
explanations for a better understanding of my views.
And SoD, I want to thank you once more for having explained to me that I have to interpret Beer-Lambert in the sense of the two-way formulation.
All simple equations like the Stefan-Boltzmann law, Beer-Lambert law and Schwarzschild formula are very limited in their applicability. They are true only when certain assumptions are true, but those assumptions are seldom strictly true. All these equations can be derived from more detailed micro level physical theory. From the micro level physical theory, it’s also possible to derive the limits of applicability of these equations.
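As one concrete case of those limiting assumptions, the Beer-Lambert law applies to a homogeneous, purely absorbing (non-emitting) path at a single wavelength; a minimal sketch, with illustrative numbers for the cross-section and number density:

```python
import math

def transmitted(i0, cross_section, number_density, path):
    """Beer-Lambert: I = I0 * exp(-k * n * L) for a homogeneous,
    purely absorbing (non-emitting) path at a single wavelength.

    cross_section: absorption cross-section per molecule (m^2)
    number_density: absorber molecules per m^3
    path: path length (m)"""
    return i0 * math.exp(-cross_section * number_density * path)

# Illustrative values: optical depth k*n*L = 10 over a 1 m path.
t1 = transmitted(1.0, 1.0e-24, 1.0e25, 1.0)  # transmittance over 1 m
t2 = transmitted(1.0, 1.0e-24, 1.0e25, 2.0)  # transmittance over 2 m
# Doubling the path squares the transmittance: t2 == t1**2
```

Once the gas itself emits significantly at the wavelength in question, this exponential decay alone no longer describes the radiation field, which is exactly why the Schwarzschild formulation is needed.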
Classical Thermodynamics is also an extremely limited theory. It was conceived at a time when microscopic understanding of physics didn't exist, and when it was not possible to study how heat is transferred. Classical Thermodynamics is an abstract mathematical theory that abstracts and summarizes a number of physical realities, and is extremely successful in that. Due to the abstraction inherent in Classical Thermodynamics, the concept of heat was defined such that it can be observed only as the balancing energy transferred between bodies that is needed to satisfy the First Law of Thermodynamics.
In the very limited approach of Classical Thermodynamics, it’s correct to say that only the net heat transfer is real. Dividing that as heat flowing from A to B and at the same time from B to A is contrary to the rules of abstraction of Classical Thermodynamics. In that limited world it does not make sense to say that radiation can transfer heat simultaneously in both directions.
Present day understanding of physics is not restricted by the abstractions of Classical Thermodynamics. Now we do understand that radiation goes both ways, and that radiation transfers energy simultaneously in both directions. Both fluxes are real, and neither imaginary. For black and gray bodies of uniform temperature we can use the (generalized) Stefan-Boltzmann law to calculate the net heat transfer, but for heat transfer in atmosphere that’s not possible. For atmosphere we must use more detailed theories of interaction of radiation with matter. The theory must take into account the emissivity/absorptivity of the molecules of the gas. It must also be applicable for a material where the temperature changes continuously with altitude.
When we want to learn about radiative heat transfer in the atmosphere we must forget the Stefan-Boltzmann law. The Beer-Lambert law is also applicable only with large reservations. The Schwarzschild formulation is more useful, but only for each wavelength separately. Dividing radiation into upward and downward components is a possibility and may lead to reasonably accurate results, but a fully correct calculation requires that all directions are considered separately.
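A minimal sketch of what a per-wavelength Schwarzschild calculation looks like: dI/ds = k(B - I), marched through an isothermal layer with explicit Euler steps. The absorption coefficient, step count and path length here are illustrative only; a real calculation uses line-by-line or band absorption data and a temperature profile.

```python
import math

def planck_wavelength(wl, temp):
    """Planck spectral radiance B(wl, T), wl in metres, W/(m^2 sr m)."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl**5) / (math.exp(h * c / (wl * kb * temp)) - 1)

def schwarzschild_march(i0, wl, temp, kappa, n_steps=1000, total_path=1000.0):
    """Integrate dI/ds = kappa * (B - I) through an isothermal layer.

    kappa: absorption coefficient per metre (illustrative value);
    emission and absorption are both included, unlike Beer-Lambert."""
    ds = total_path / n_steps
    b = planck_wavelength(wl, temp)
    i = i0
    for _ in range(n_steps):
        i += kappa * (b - i) * ds  # absorption removes I, emission adds B
    return i

# Through an optically thick isothermal layer (optical depth 50 here),
# the emerging intensity relaxes to the Planck function of the layer:
b15 = planck_wavelength(15e-6, 260.0)
i_out = schwarzschild_march(0.0, 15e-6, 260.0, kappa=0.05)
```

The point of the sketch is the structure of the equation: in an optically thick region the intensity forgets its boundary value and approaches the local Planck function, which is why the strongly absorbed wavelengths emit to space from high, cold levels.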
JWR,
You say:
Science is not about arcane historical disputes and our difference of opinion is not about being a follower of someone in the 1700s.
This blog states in the Etiquette:
The reason for this statement is to avoid lengthy disputes over basic science.
The last 100 years of physics uncontroversially accepts the reality of thermal emission of radiation. This emission depends on the temperature and material properties of the radiating body. It does not depend on the temperature of any body towards which it is radiating.
If you can find a few undergraduate physics textbooks from the last few decades that agree with your understanding of radiation please provide the details.
The confusion of your approach would be clearer to you and to others if you answered the specific questions 1-4 already raised.
If you answer “True” to questions 3 and 4, then it contradicts your lengthy explanation of atmospheric radiation.
It seems that you think – just having a stab at this myself because you haven’t stated your answers – that the answer to item 3 is -50% and the answer to item 4 is not double, but zero. Or maybe the proposed answer is when we turn on the second source of radiation (from right to left) there is no radiation any more because radiation works differently from how it’s described in all physics textbooks.
This (suggested) “result” is contradicted by all experimental evidence.
—
Note: You can see how to format comments in Comments and Moderation.
JWR,
Only in your mind and those others who refuse to accept modern physics. That discussion was settled definitively when Planck formulated his equation for the spectrum of black body radiation. There is no term in that equation that is a function of the temperature of any other surface. You have no physical mechanism of black body emission to support Fourier.
You and Claes Johnson should get together, just not here.
@SoD
I have now updated my earlier paper.
In appendix 2 but more in appendix 3, I compare the results of the Schwarzschild procedure as I found in your “equations” and the “code” with the one-way FEM model of a stack.
I can’t find your results, so I programmed the original Schwarzschild procedure (according to your equations and code) and I find different results.
It is not due to the numerics; in fact I added two numerical procedures and they give the same results as the original ones: it is the Schwarzschild procedure.
I modified the Schwarzschild procedure with two improvements and the Schwarzschild results come closer to the FEM results.
Conclusion.
It has nothing to do with quantum mechanics; I use the work of Christiansen from 1883.
I find a solution for the 240 W/m^2 for ftot = 0.86 (optical thickness).
The atmospheric window factor 1-ftot = 0.14, which gives qwindow = 53.
I do not see any results from your code.
Where can I find them?
I have included the MATLAB program with the “green” comments.
Click to access planckabsorption.pdf
JWR,
Many results from my code are in Visualizing Atmospheric Radiation – Part Two and subsequent articles in that series.
Your old paper is no longer available. Can you provide the old one along with a list of what specifically has changed (deletions, additions and changed equations)?
In my comments of September 22, 2013 at 8:42 pm I asked 4 basic questions because your apparent assumptions about basic physics were wrong.
Can you please confirm what those answers are?
Definitions
When I start reading your revised version it is clearly not precise in many important areas. This makes it less interesting to try and follow all the way through, especially without the ability to see what old mistakes have been corrected and what new mistakes have been added (from the last version).
For example:
1.
The physical properties (Uλ, Dλ) you are discussing here are not flux. Flux is the spectral intensity integrated over all wavelengths and directions in the direction normal to the surface. This is measured in W/m2.
The property you are describing is the spectral hemispheric emissive power, which is in W/(m2.μm).
Often when people use terminology in non-precise ways I don’t try and pick it up and many times I try not to use strict terminology so the blog is clear and readable to non-scientific people.
However, given that you are trying to overturn basic physics, it is important to be precise.
2.
What is “fictive” about radiation? When you write fictive it means it doesn’t exist.
You could write “hypothetical” if you are unsure about an idea.
From recollection your earlier paper switched properties from fictive to non-existent and others from fictive to real and measurable with no explanation as to why, or how it was determined that a given property was real or not real.
3.
Is this the premise? Or the conclusion?
If it is the premise then you need to establish it. Claiming it doesn’t make it true.
If it is the conclusion then you need to demonstrate it, and it would be appropriate to introduce it to the reader as “the conclusion that will be proven via these following equations … or pages..”
4.
This is the Planck function for emission. This is not a function for absorption.
The material properties you might be thinking of are emissivity and absorptivity. Emissivity = Absorptivity for the same wavelength for a given material.
5. It will be easy to make mistakes if you confuse wavelength and wavenumber. You might be doing that in your paper.
In your parameters you are apparently using wavenumber, e.g.:
Spectral emissive power is usually written like this when using wavenumber, although in these units "cm" should be cm-1.
Possibly you are using wavelength measured in cm?
Planck’s law for emission of radiation is a completely different formula in terms of wavelength, λ, compared with wavenumber, ν, so you need to ensure you have the right formula.
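The two forms are related by B_ν(ν)|dν| = B_λ(λ)|dλ| with ν = 1/λ, so B_ν(1/λ) = λ² B_λ(λ); a quick numerical consistency check (constants rounded, wavelength and temperature illustrative):

```python
import math

# Rounded physical constants: Planck, speed of light, Boltzmann
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def b_wavelength(wl, t):
    """Planck spectral radiance per unit wavelength (wl in m)."""
    return (2 * H * C**2 / wl**5) / (math.exp(H * C / (wl * KB * t)) - 1)

def b_wavenumber(nu, t):
    """Planck spectral radiance per unit wavenumber (nu in 1/m)."""
    return (2 * H * C**2 * nu**3) / (math.exp(H * C * nu / (KB * t)) - 1)

# Because |dnu| = dwl / wl^2, the forms must satisfy
# b_wavenumber(1/wl, t) == wl^2 * b_wavelength(wl, t):
wl = 15e-6   # 15 micrometres, the CO2 band
t = 288.0    # a typical surface temperature, K
lhs = b_wavenumber(1.0 / wl, t)
rhs = wl**2 * b_wavelength(wl, t)
```

Mixing up the two forms (or the units of wavenumber, cm⁻¹ vs m⁻¹) shifts both the shape and the peak of the spectrum, which is exactly the kind of error SoD is warning about.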
6.
and the integrals of the three are in each case the fluxes, not the total intensity. It’s possible to get everything right with the maths, while using the wrong terminology (even though it will confuse readers). I’m just commenting as I go.
Ditching the Schwarzschild equation before Using It?
7. Then you switch to a new approach – a “more modern technique” as you describe it – without explaining how you can make it work. Or its relationship to the previous equations.
Firstly, you have introduced exchange of radiation between two surfaces with no absorption by the intervening atmosphere.
Secondly, you have introduced a general emissivity term which is not wavelength dependent.
The only way to solve the problem of radiative transfer in the atmosphere where the gases have strongly varying absorptivity/emissivity with wavelength is to keep track of the spectral power at each wavelength and each height in the atmosphere.
So you created the Schwarzschild equation but then ditched it before solving it and instead used the aggregated formula for exchange of radiation between two surfaces?
Then having not used the equations you derived you decide that your result proves those equations give the wrong result??
I have to admit to being lost at this point, but equations 6, 9, 10 and following are not relevant to solving the problem.
To be clear, they are not capable of solving the problem at hand.
Have you used the HITRAN database to get the spectral dependency of absorptivity/emissivity for each gas and then used the concentration of each gas with height?
At this point I gave up trying to understand your approach. I suspect you have created an equation soufflé with no validity in physics.
By the way, if you take a look at Part Twelve of the above linked series you can see “heating” curves for the atmosphere:
which contains the enigmatic statement underneath: Notice that the heating rate is mostly negative, so the atmosphere is cooling via radiation.
A concept which G&T fail to grasp when they condemn energy balance diagrams as scientific fraud.
JWR,
I read a bit further on in what is clearly a fantasy paper:
[Bold emphasis added].
Readers with zero physics knowledge might go along with JWR.
Anyone with a passing familiarity with the theory of radiation & heat transfer will know this is “invented physics”.
The reason this is against “SoD etiquette”! is because inventing basic physics is not an interesting subject for discussion at this blog.
It is a personal preference of the blog owner to discuss Climate Science within the Frame of Physics.
Rather than to discuss Climate Science within the Frame of Fantasy Physics & The Easter Bunny.
Anyway, once again, in a futile attempt, I ask JWR to produce a physics text book where his inspiration is accepted.
As I requested on September 23, 2013 at 11:13 pm:
As any readers who review his paper can see, he has produced equations for radiative transfer as a function of wavelength (7) & (8), and then for flux exchange between two bodies (9) and then, without reference to any textbooks, papers, or experimental work to support his assertion, simply determined that flux from a surface is a completely different value from those equations. That is, he has derived equations and then claimed they are wrong.
This is called, in technical terms:
Making up random stuff and asking people to believe in it because it sounds nice.
Well, don’t forget the Easter Bunny.
The idea that downwelling radiation would not be real, and that there might instead be an effect that reduces the upwards radiation is a rather common one. There seem to be two separate origins for this kind of thinking.
1) Classical thermodynamics and definition of heat as a net effect with the additional notion that this is the full definition, and that it’s not possible to divide it to further parts.
Using this argument is based on the fallacy that Classical Thermodynamics would present a comprehensive description of physics for the related phenomena. The truth is, of course, that present physics covers very much that Classical Thermodynamics cannot even discuss and that one physical theory cannot set limits for what other theories can describe.
2) Theory of electromagnetic radiation. When electromagnetic radiation is studied on the basis of Maxwell’s equations, the approach is built on electric and magnetic fields that form a totality where the Poynting vector provides a single (net) directional energy flux density at every point. This observation seems to be the basis of the more sophisticated arguments for the idea in the first paragraph of this comment.
We know that photons are also electromagnetic radiation. Thus it’s natural to think that the energy fluxes related to photons should be understood based on the Poynting vector. The photons cannot, however, be understood from Maxwell’s theory alone, but are fundamentally quantum mechanical objects. The theory of radiation before quantum mechanics could not be made self-consistent, but led to the ultraviolet catastrophe. Planck invented a trick to get the correct radiation formula, but could not explain it from fundamentals, because only QM has been able to provide the needed fundamentals. That required the use of a method called second quantization, where particles (photons) can be created and destroyed. This approach was refined and made (essentially) self-consistent by Quantum Electrodynamics (QED).
In QED it’s seen that photons interact very weakly with each other. Therefore every single photon behaves as if it were the only photon. That makes every photon equally real. Some of them go down and some up. All of them must be considered, and they cannot cancel each other. Each of them has its own Poynting vector. The electromagnetic fields or Poynting vectors cannot be summed up, only the energy fluxes that result from them. Mathematically that’s true because the quantum mechanical phases are uncorrelated (incoherent).
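The incoherence argument can be illustrated numerically: averaging the squared magnitude of two superposed fields over a uniformly distributed relative phase leaves just the sum of the individual intensities. A toy model, with arbitrary amplitudes:

```python
import math

def mean_intensity(a1, a2, n_phases=10000):
    """Average |a1 + a2*e^{i*phi}|^2 over uniformly spaced relative
    phases phi - a toy model of incoherent superposition of two fields."""
    total = 0.0
    for k in range(n_phases):
        phi = 2 * math.pi * k / n_phases
        re = a1 + a2 * math.cos(phi)   # real part of the summed field
        im = a2 * math.sin(phi)        # imaginary part
        total += re * re + im * im     # instantaneous intensity
    return total / n_phases

# The cross term 2*a1*a2*cos(phi) averages to zero, so
# <|E1 + E2|^2> = |E1|^2 + |E2|^2 for incoherent beams:
i_sum = mean_intensity(1.0, 2.0)  # approaches 1 + 4 = 5
```

This is the mathematical content of "fluxes add, fields don't": the interference term survives only when the phases are correlated, which is not the case for thermal radiation from independent emitters.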
SoD
As Pekka says
“Classical thermodynamics and definition of heat as a net effect with the additional notion that this is the full definition, and that it’s not possible to divide it to further parts.”
The greenhouse theory in a nutshell, is solar radiation (UV => IR) makes it easily through the atmosphere but the long wave IR from surface is absorbed by greenhouse gases and partially radiated back to surface warming it up in the process.
There is little controversy about the solar radiation effects.
However, since the IR radiation and its interactions can be explained by classical physics, an approach using Maxwell’s Equations and Gibbs Thermodynamics should be valid.
That as I understand it is the approach adopted by JWR.
He introduced his paper in a Tallbloke thread.
Tim Folkers went through the numbers and found to his surprise that they were in the ‘right ball park.’
So was your remark , “Well, don’t forget the Easter Bunny” really appropriate?
Perhaps you were having a ‘DeWitt Payne’ moment.
Bryan,
Please cite a reference that calculates the IR absorption spectrum of CO2 using only classical physics.
DeWitt and Pekka
The classical physics approach would be via the bulk thermodynamic quantities of such as the specific heat of individual gases.
For instance in the range 250K to 350K the specific heat of CO2 increases by 13%; this reflects the increasing radiative activity.
Whereas the SH of N2 is almost constant over this range.
There is no need to look at individual wavelengths to investigate heat transport in the troposphere.
The radiative properties of the molecules are naturally included in the bulk quantities.
Bryan,
Changes in the specific heat of CO2 do not affect anything in the atmosphere significantly. That’s a really insignificant change, and it is not directly connected to the GH effect. Both the changes in specific heat and the GHG properties of CO2 are due to the quantum mechanical properties of CO2 molecules. Neither can be explained without them. They are linked through this, but adding the change of specific heat to the classical thermodynamic description of the atmosphere tells nothing about the GH effect.
To understand what happens it’s absolutely necessary to consider also the wavelength dependence of absorption and emission of IR.
The radiative properties of molecules are not included in any classical description of bulk properties.
It seems that you erred fully on every count of your comment.
Pekka you say
“That’s a really insignificant change, and is not directly connected to the GH effect”
But then perhaps there is no greenhouse effect.
How would you account for the increased specific heat of CO2 other than by increasing vibrational modes?
Other bulk quantities include the transport coefficients.
The thermodynamic effect of an object radiating on another object has already been taken into account by the coefficient of thermal conductivity which, despite its name, measures all kinds of diffusive heat transport including radiation.
G&T write the following in their first falsification paper:
“A physicist starts his analysis of the problem by pointing his attention to two fundamental thermodynamic properties, namely the thermal conductivity, a property that determines how much heat per time unit and temperature difference flows in a medium;”
In their reply to Halpern et al. they write:
“Speculations that consider the conjectured atmospheric CO2 greenhouse effect as an “obstruction to cooling” disregard the fact that in a volume the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients.”
If you want to know the thermodynamic effect of doubling the CO2 concentration you only need to measure the changes in the transport coefficients. These changes will of course be unmeasurable (although there is probably some tiny factual difference).
No need for any redundant radiative transfer calculations.
Bryan,
That you present an erroneous claim does not make GHE any less likely.
As I wrote explicitly: the same QM phenomena are behind both, but, as I also said, you cannot pick one of the consequences, plant it in classical thermodynamics, and make it give the right results.
Thermal conductivity does not normally include radiative effects. The radiative heat transfer cannot be described fully by the equations of thermal conductivity, and exactly this deviation is crucial for GHE.
G&T write so much rubbish, that I don’t usually comment on them. What they say about what a physicist does first proves absolutely nothing. Their answer to Halpern is simply untrue.
The GHE is finally determined by the outgoing radiation at TOA. Transport coefficients tell nothing about this most important factor.
Pekka
This total conviction you have in the physics behind the greenhouse gas theory is not universally shared.
Professor Mehmet Kanoglu indicates three stages of interaction of IR active gases in a radiation field.
1. Low temperature – non-participating
2. Moderate temperatures – participates as an absorber
3. High temperature (furnaces) – participates as absorber and emitter
Page 35 Heat and Mass Transfer Chapter 13 Fourth Edition(2011)
Professor Alfred Schack, the author of a standard textbook on industrial heat transfer was the first scientist who pointed out in the twenties of the past century that the infrared light absorbing fire gas components carbon dioxide (CO2) and water vapour (H2O) may be responsible for a higher heat transfer in the combustion chamber at high burning temperatures through an increased emission in the infrared.
He estimated the emissions by measuring the spectral absorption capacity of carbon dioxide and water vapour.
In the year 1972 Schack published a paper in Physikalische Blatter entitled
“The influence of the carbon dioxide content of the air on the world’s climate”.
With his article he got involved in the climate discussion and emphasized the important role of water vapour.
Schack discussed the CO2 contribution only under the aspect that CO2 acts as an absorbent medium.
Schack decided not to use the radiation transport calculations.
G&T agree with this decision on the basis that the transport formulas were derived for Stars and cannot simply be applied to the Earths atmosphere for several physical differences.
Page 72 of their falsification paper.
So what about the “bite” around 15 μm in SoD’s post below, perhaps you will say?
It’s my opinion that emission energy leakage to the much more probable lower energy H2O bands is responsible.
Notice that the graph shows that observed value is higher than the theory value.
So back to my original point.
Given that CO2 does not seem to track with climate temperatures, a little soul-searching might be appropriate from climate science.
Perhaps going back again to classical thermodynamics as in the case of JWR might not be such a bad idea
Bryan,
Everything I have written in these comments is standard content of textbook physics and is universally shared by physicists who have studied these issues enough to have a personal view. Being universally shared does not mean that it would be impossible to find a few individuals who declare themselves as physicists, and who do not share these views.
Here we are not discussing 98% vs 2%, but rather something like 99.99% vs. 0.01%. How can I be so sure? Because that is true for everything contained in physics textbooks of the level needed here. There are questions, where physics does not give equally certain answers. There are many such questions in atmospheric physics as well, but nothing that we have discussed here depends on those less certain areas.
What you tell as counter-evidence is either written for some other purpose and valid only under assumptions accurate enough for that purpose, or examples of the 0.01% that I mentioned above.
We are clearly moving towards issues that SoD has declared as being excluded from this site. He writes: This blog accepts the standard field of physics as proven. Arguments which depend on overturning standard physics, e.g. disproving quantum mechanics, are not interesting until such time as a significant part of the physics world has accepted that there is some merit to them. Claiming that modern physics should be excluded and QM dismissed in favor of Classical Thermodynamics alone is against his policy. Therefore I refrain from discussing such points, but I may continue to explain, if relevant questions are presented, the way QM operates here.
Pekka,
I would go further and say that the thermal conductivity of a gas at normal temperatures never includes radiative absorption/emission. The standard method for measuring thermal conductivity is by determining the reduction in temperature (increase in resistance) of an electrically heated thin wire caused by the presence of a gas at different temperatures and pressures. Using a transient technique to eliminate induced convection caused error means that the thermal diffusion boundary layer is so thin that any absorption/emission is infinitesimal. And, of course, optically transparent gases like helium and the other noble gases do have thermal conductivity.
In Meteorology, the thermal conductivity of the atmosphere is considered small enough to be ignored, except at the surface where the thermal gradient can be high enough that the flux from conduction is significant. Even then, most heat transfer from the surface to the atmosphere is latent rather than sensible.
In engineering calculations, radiative heat transfer is calculated separately from conductive/convective heat transfer. Rather than using the radiative transfer equation directly, graphs, tables or fitted equations of total emissivity have been constructed originally by measurement but now by using this procedure for a band model. The total emissivity is calculated at different partial pressures and path lengths such that the product of path length and partial pressure is constant and extrapolated to zero partial pressure/infinite path length. The individual lines on the graph are for different amounts of CO2 for a unit area expressed as bar cm. A bar cm is the amount of pure gas at one bar pressure that would have a thickness of 1 cm for a given unit area. Converting total emissivity at zero partial pressure to a real situation requires the use of another equation.
Bar cm, by the way is similar to Dobson Units for total column ozone. The Dobson unit is a thickness of 10 μm at 1 atmosphere rather than 1 cm at 1 bar and the path length is the entire atmosphere, ~8 km at 1 atmosphere. So 200 Dobson Units would be 2 mm of ozone in 8 km. Total column CO2 is about 300 bar cm, if I remember correctly. Eyeballing the graph linked above, that would be a total emissivity of ~0.2.
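The unit arithmetic above can be sketched as follows. The ~8 km equivalent column of air at 1 atm and the 390 ppm CO2 mixing ratio are rough illustrative figures from the comment, not exact standards:

```python
# Rough conversions for column amounts of well-mixed gases.
COLUMN_CM = 8.0e5  # whole atmosphere as ~8 km of air at 1 atm, in cm

def atm_cm(mixing_ratio_ppm):
    """Column amount of a well-mixed gas, in atm cm (gas at 1 atm)."""
    return COLUMN_CM * mixing_ratio_ppm * 1e-6

def dobson_to_mm(du):
    """1 Dobson Unit = 10 micrometres of pure gas at 1 atm."""
    return du * 10e-6 * 1000.0  # thickness in metres -> millimetres

co2_column = atm_cm(390.0)      # ~312 atm cm for ~390 ppm CO2,
                                # consistent with "about 300" above
ozone_mm = dobson_to_mm(200.0)  # 200 DU -> 2 mm, as in the comment
```

Both units are just ways of expressing "how much absorber the beam passes through" with the pressure dependence normalized away, which is why the defining conditions (1 atm vs 1 bar, reference temperature) matter when comparing sources.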
DeWitt,
I don’t go further to the actual subject, but make only one comment.
It’s really a pity that we have all these units. First figuring out what they mean and then keeping track of the conditions used in defining each of them is just wasted effort.
Units based on column height have a simple relationship to the total amount of air in a vertical column and concentration expressed in ppm (or ppb), but the exact relationship depends on the pressure (atm or bar) and temperature (0C, 15C, 20C or 25C) used in the definition, and all this essentially for no useful purpose. Standardization takes too long.
Pekka,
It did indeed take me a while to understand the meaning of atm cm, or in the case of MODTRAN, atm cm/km, when I first encountered it, but it is convenient that the individual components add up to a nice round 100 for a cubic meter of air at STP rather than 44.64… moles, which, I think, would be the correct SI unit.
Bryan,
Interactions of IR radiation with matter cannot be explained by classical physics. Quantum mechanics is essential in that, and even quantum mechanics without Quantum Field Theory has some problems of self-consistency in that.
Bryan says:
Was this taught as a method of proving or disproving a theory when you did your physics degree?
I hope so.
Onto serious stuff, can you demonstrate how equations 9-12 can be derived from the correct fundamental equations 7 & 8. (And to make this a super-hard problem, we’ll go “old-school” which means the method of proof “I met a guy down the pub who took a look and reckoned it wasn’t bad” is excluded, valid though it may be in some enlightened circles).
Bryan,
The irony also gets rather thick when you take me to task for citing Denker on thermodynamics while lauding G&T on the greenhouse effect. All Denker wants to do is rationalize the terminology to minimize confusion when teaching the subject, not overthrow the fundamentals.
Bryan,
By the way, given the difficulty Roy Spencer had with creating identically behaved boxes when trying to replicate the Wood experiment, Vaughan Pratt’s observation of the large temperature gradient inside his boxes, making thermometer placing critical, and ‘he who shall not be named for fear of moderation’s’ use of insulation on his box with the IR transparent cover while not insulating the other boxes when ‘replicating’ the Wood experiment, do you still believe that Wood actually used all of his formidable experimental skills when doing his experiment and that it is still definitive?
One of these days, if I can catch up on my DVR backlog and don’t have anything better to do, I’ll finish building what I think will be a better mousetrap, as it were. I’ll put an electric heater on the bottom of the box, rather than using sunlight, and turn it upside down to minimize convection. Then I should even be able to use an uncovered box as well as covers with different spectral properties. I have all the pieces.
From your comment at Roy Spencer’s:
It’s all about differences. DLR does not warm the surface; it raises the surface temperature necessary to maintain the same upward energy flux. The surface inside a glass covered box does not get as cold at night, or as quickly, as it does in a box with an IR transparent cover, provided you don’t get dew or frost on the IR transparent cover, drastically increasing IR absorption. The cooling curves are quite different. With an IR transparent cover, the interior surface cools faster than the cover, much like what happens on calm, clear-sky winter nights when there is a strong temperature inversion within a few meters of the surface. In a glass covered box, the cover cools faster than the interior surface.
Horace de Saussure in 1767 built a three glass layer hotbox that reached an internal surface temperature of 230F or 383K. That’s equivalent to 1220 W/m² for an emissivity of 1. In my version of de Saussure’s experiment the interior surface reached a temperature of ~410K. Channel 1 is ambient air temperature, 2 is the interior surface temperature and 4, 5 and 6 are the temperatures of the innermost glass cover, the intermediate polycarbonate cover and the outside acrylic cover. That’s equivalent to 1600 W/m² at an emissivity of 1 for the interior surface. That’s far above the ~1000 W/m² of solar radiation that reaches the Earth’s surface on a normal day. And that doesn’t include the reflection and absorption losses of incident radiation at each of the three covers. You can’t explain a temperature that high based on just preventing convection.
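The flux equivalents quoted here are just the Stefan-Boltzmann law at emissivity 1; a quick check (constant rounded):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_flux(temp_k):
    """Equivalent blackbody emission sigma * T^4 at emissivity 1."""
    return SIGMA * temp_k**4

f383 = blackbody_flux(383.0)  # de Saussure's 230 F box: ~1220 W/m^2
f410 = blackbody_flux(410.0)  # the ~410 K replication: ~1600 W/m^2
```

Both quoted figures check out, which is the basis of the argument: the interior emits far more than the incident solar flux, so suppressed convection alone cannot account for the temperature.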
M. D. H. Jones and A. Henderson-Sellers should be embarrassed to have perpetrated all this on the scientific community by resurrecting Wood’s brief note from its well deserved repose since it was eviscerated by Charles Greeley Abbot shortly after its publication.
Pekka
Nothing I have said contradicts quantum mechanics.
Quantum mechanics is certainly required for many aspects but perhaps long wave infra red is not one of them.
Classical physics did not run into catastrophe until it dealt with the UV (the “ultraviolet catastrophe”).
There is a lot to be said for using the simplest tools at hand.
The passage plans of space probes are worked out using Newtonian Mechanics rather than Relativistic Mechanics.
If, however, asking whether the Schwarzschild equations are appropriate for the Earth’s atmosphere is going beyond the pale, then I will terminate this discussion.
Bryan,
You have been told several times that photons are a QM concept. They do not exist at all without QM. Everything in understanding the interaction of IR with matter depends on QM and photons. If you disagree, you should tell us who has been capable of explaining the IR absorption spectrum of CO2, or of any other gas, without QM, and where.
You keep on insisting that the analysis should be done without the only tools that are valid for it.
You are contesting basic physics, whether you admit that or not.
For non-technical readers who can’t follow maths I explain briefly the (first) confusion of the paper.
The problem we are trying to solve is calculating radiation absorption and emission in the atmosphere.
Now let’s make our desired outcome very simple, and say that we don’t care what the (upward) spectrum at the top of atmosphere looks like and we don’t care what the (downward) spectrum at the surface looks like.
That is, we only want to calculate the upward flux at TOA and the downward flux at the surface. This flux is power per unit area, in units of W/m².
The difficulty is that different components of this flux are absorbed at completely different rates by different gases. So for example, 95% of radiation at 15 μm is absorbed within 1m (at surface pressure) by CO2. But very little radiation at 9.8 μm is absorbed by any gases, even over a 12km path through the troposphere (lower atmosphere). And there is everything in between these extremes. Here is an example of transmission through CO2 at different wavelengths:
The only way that I know of to keep track of the intensity at wavelength 12 μm (for example) is to follow it on its path up through the atmosphere, and separately down through the atmosphere.
This is what equations 7 & 8 actually do. They let you track each wavelength at each height in the atmosphere. Then we say – how much CO2, how much water vapor exists at each height in the atmosphere – and then we can work out (from a database called HITRAN) what the absorption of wavelength 12 μm is through the first km of the atmosphere, and the second km, and so on. And we can also work out what the emission of wavelength 12 μm is from each of these layers in the atmosphere.
When we’ve calculated the absorption and emission through the atmosphere we get the values for each wavelength at the top of atmosphere. Then we can sum it all up into the total flux. We also have the added benefit of knowing what the spectrum looks like. For example:
From Theory and Experiment – Atmospheric Radiation
Let’s say instead that we dispense with that convoluted approach and just track total flux.
First problem – in the journey of this flux from the surface up to 1km, what is the absorption by the atmosphere? Well, we are going to have to calculate a general absorptivity term. It’s one number. We have to do this by averaging the absorptivity over all the wavelengths of interest for all the gases present in their respective concentrations (water vapor, CO2, CH4 etc).
And we can’t just do a “flat average”, we have to weight the averaging process because the intensity of radiation from the surface at 10 μm is much higher than the intensity of radiation at 25 μm.
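As a sketch of what that weighting means, here is a minimal Python illustration. The three wavelengths and absorptivity values are invented for the example; only the Planck weighting itself is the point:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck const, speed of light, Boltzmann const (SI)

def planck(wl, T):
    """Planck spectral radiance B(lambda, T) for wavelength wl in metres."""
    return (2*H*C**2 / wl**5) / (math.exp(H*C/(wl*KB*T)) - 1.0)

def planck_weighted_absorptivity(wavelengths, absorptivities, T):
    """Average absorptivity weighted by the source's Planck spectrum.
    A flat average would over-weight wavelengths where little is emitted."""
    w = [planck(wl, T) for wl in wavelengths]
    return sum(a*b for a, b in zip(absorptivities, w)) / sum(w)

# Hypothetical absorptivities at 10, 15 and 25 um for a 288 K surface:
wls = [10e-6, 15e-6, 25e-6]
absorptivity = [0.05, 0.95, 0.30]
print(planck_weighted_absorptivity(wls, absorptivity, 288.0))  # ~0.41, vs a flat average of 0.43
```

With only three wavelengths the flat and weighted averages are close; with real spectra (tens of thousands of lines) the difference is large.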
This is a soluble problem but I can’t see whether JWR has actually done this, or even understands that this has to be done.
Second problem – this one is an insoluble problem by the way – now we have a flux value for upwards radiation at let’s say 5km. Now we want to work out the “absorptivity” of the atmosphere between 5-6km. So we can work out the concentration of each gas, work out the total number of molecules, look up the absorptivity at each wavelength for each type (in the HITRAN database) and do some averaging?
No. We can’t.
The reason we can’t is we don’t know what the spectrum looks like so we can’t work out what weights to assign to the different components of absorptivities. Here is an example upwards spectrum at 20km (top graph):
From Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Ten
When we know what the spectrum looks like we can see how to weight the different absorptivities to see the effect on total flux. But when we don’t know we can’t calculate “total flux absorptivity”.
And that is why we have to keep track of each wavelength and only sum up at the end.
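To make the bookkeeping concrete, here is a toy Python sketch of tracking one wavelength upward through a few layers; each layer transmits exp(-τ) of the incoming beam and adds its own emission (Kirchhoff’s law). This is a sketch, not SoD’s actual code, and the optical thicknesses and layer emissions are invented numbers:

```python
import math

def march_upward(I_surface, layer_taus, layer_B):
    """Propagate monochromatic intensity up through discrete layers:
    each layer transmits exp(-tau) of the incoming beam and adds its
    own emission (1 - exp(-tau)) * B(T_layer)."""
    I = I_surface
    for tau, B in zip(layer_taus, layer_B):
        t = math.exp(-tau)
        I = I*t + (1.0 - t)*B
    return I

# Strongly absorbed wavelength (think 15 um and CO2): the TOA value is set
# by the cold upper layers, not the surface.
print(march_upward(100.0, [5.0]*5, [80, 60, 40, 30, 20]))   # ~20
# Window wavelength: the TOA value stays close to the surface value.
print(march_upward(100.0, [0.01]*5, [80, 60, 40, 30, 20]))  # ~97
```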
@SoD
Click to access Planckabsorption.pdf
I have updated the paper.
1) I changed “fictive” to “hypothetical” as you suggested. Thank you.
2) What concerns the textbooks, I refer to my lecture notes of 1953 of the
University of Delft, in which it was suggested to follow up the work of
Christiansen of 1883 to define the heat exchange by radiation between
two surfaces.
Indeed Christiansen used, back in 1883, forward and backward radiation
to end up with a converging geometrical series.
You can find it also in Wikipedia, emission, with the same proof.
In my reference [3] I have explained it by means of practical examples.
In fact reference [3] shows how radiation works according to engineers.
3) The stack model is a monochromatic one.
It has been monochromatic from the very beginning.
In my very first paper [1], in which I have written the equations
on the basis of a finite difference scheme, I changed the concentration and
the distribution of IR-sensitive trace gases in such a way that
OLR=240 as reported by K&T type publications (see [1], to find the
references of the papers by which I validated the monochromatic model.)
I deduced that ftot = 0.86! And ftot is the optical thickness at the surface.
No other data has been used, just logical thinking.
The monochromatic model uses as variables theta = sigma*T^4.
It is shown that theta is indeed the monochromatic equivalent
of the Planck function B.
In fact by changing the concentration and the distribution
( with respect to the temperature distribution)
of IR-sensitive trace gases, it was found that the full-color model is not
of high priority: the monochromatic model is indeed within 10% of what
you can expect from full-color models. See [1].
4) Nevertheless I want to go to a full-color model.
As I have indicated by introducing parallel finite elements.
And since not everybody is familiar with finite elements,
I have now shown in the update that, by creating for each wave
length interval k a “monochromatic” matrix relation,
it is sufficient to add them.
It will take me some time to extract from the various data bases the
concentration of IR-sensitive trace gases, and other parameters which
define B(T(z), … , … ). There is no consensus on which one is correct.
5) My study of the Schwarzschild equations is done for the monochromatic case,
in fact SoD is doing the same in “the equations” [4] and in “the code”[5].
And I have done it for ftot = 0.001 to 0.86 and for different distributions,
indicated by the parameter “m”; as argued above, I have made numerical
experiments to simulate a full-color model.
I have applied the Schwarzschild procedure to an atmosphere which I discretize
as a stack of gauzes. SoD also introduces sampling points, or nodes, to simulate a continuous problem. I like to call my model the “chicken wire” model, just to
show that plain engineering intuition brings you fast to a result.
You do not need to mention quantum mechanics just to make an impression on the reader.
Application of the Schwarzschild procedure – dividing up the radiation into a
hypothetical upward flux and a hypothetical downward flux, determining
the OLR from the Prevost surface flux as boundary condition, and multiplying
the upward flux per layer by (1-f) – turns out not to be a correct procedure.
There is no physical argumentation in the Schwarzschild procedure.
6) Instead, admitting a heat exchange between pairs of surfaces, it turns out that for a temperature distribution defined by an environmental lapse rate, the layers absorb
less LW heat than they emit, and for a steady-state temperature distribution the
difference in heat should come from mechanisms other than LW radiation:
convection of sensible and latent heat, absorption of SW radiation by aerosols, etc.
That is the physics of the one-way model.
I was glad to read that at least the blogger Bryan said that he, and others he referred to, judge that they understood it and that I make a point there.
I have included in the paper the listing of the MATLAB program.
Everybody with MATLAB on their PC can do numerical experiments with it.
A 30 layer model runs in a few seconds.
One does not need any external files.
The scope of the paper is to show that CO2 is not a dangerous gas, it is a fertilizer for plants. Doubling the concentration gives an increase of surface temperature of 0.1 K, according to the model.
JWR,
You do know that this isn’t exactly news; see, for example, the TFK09 energy balance which is discussed at length here. It proves nothing about one-way or two-way radiation emission/absorption, as the energy balance is exactly the same in either case.
One could probably construct a thought experiment that proves that one-way transfer violates causality. Consider infinite parallel planes separated by a large distance, say one light second. Now vary the temperature of one of the planes. Solving the energy transfer problem with standard two-way physics is no problem; with one-way transfer, each plane would somehow have to know the instantaneous temperature of the other, a light second away, before deciding whether to emit.
Referring to Bryan favorably doesn’t help your cause. If I read his recent correspondence correctly, he doesn’t believe that there is any radiative exchange in the atmosphere. It’s all convection. Unfortunately for him, the known movement of air in the atmosphere and the ~2 km scale height of water vapor mean that convection alone cannot be sufficient. Radiative transfer as a percentage of the total energy flux increases rapidly with altitude.
The point that the net influence of LW absorption and emission is cooling in the troposphere was discussed in the recent post https://scienceofdoom.com/2013/01/30/visualizing-atmospheric-radiation-part-twelve-heating-rates/ . That’s actually the fundamental property that makes the troposphere different from the stratosphere.
DeWitt Payne says
“Referring to Bryan favorably doesn’t help your cause. If I read his recent correspondence correctly, he doesn’t believe that there is any radiative exchange in the atmosphere.”
Point to any statement of mine that comes anywhere near what you claim.
Of course there is radiative exchange in the atmosphere.
What I have said is that in the troposphere the radiative contribution is already included in bulk thermodynamic quantities like specific heat capacity.
The radiative component of neighboring volumes is largely self cancelling.
Above the troposphere the radiative components will depart from the bulk.
For long wave radiation > 5um a classical approach should be valid because the law of energy equipartition still largely holds.
The quantum approach is dropped by De Witt when convenient to the global warming story.
Not that long ago (here on SoDs site) I said that for quantum reasons blue light is not absorbed in pure water.
De Witt was then arguing instead the ready thermalisation of all light in pure water.
Bryan,
I don’t fully understand what you mean by many of your sentences, as they do not have any meaning in the description of atmosphere, oceans and radiation in textbook physics.
1) Excitation of vibrational modes affects both the emission/absorption of IR and the specific heat of the atmosphere. The latter effect is, however, really negligible, while absorption and emission of IR affects the atmosphere a lot. The negligible effect cannot explain the important one.
It’s true that changes in convection compensate, in some situations within the troposphere, for changes in radiative heat transfer, but other effects of emission and absorption of IR remain large, in particular the influence on OLR at the top of the troposphere and on downwelling radiation to the surface.
2) Classical physics cannot explain any part of emission and absorption of IR by gas molecules. If you disagree, please tell how it explains the emission and absorption spectra, which are totally essential for correct calculation of heat transfer.
3) All wavelengths of SW are absorbed in deep water; the only difference between wavelengths concerns the depth where that occurs. Well less than 0.1% of solar SW penetrates deeper than 1 km, about 6% passes more than 100m in pure water, and about 23% more than 10m.
Bryan,
Umm, No, I wasn’t. Here’s the start of the exchange:
https://scienceofdoom.com/roadmap/atmospheric-radiation-and-the-greenhouse-effect/#comment-20180
Where in that exchange did I drop the quantum approach? You were wrong then and you’re still wrong. Your hypothesis that absorption of blue wavelengths in sea water is only by suspended or dissolved organic matter is simply wrong, as is your claim that there are no energy levels in liquid water in the blue part of the spectrum:
JWR:
No one is in disagreement about heat exchange by radiation between two surfaces.
In fact, I have frequently provided examples from Fundamentals of Heat & Mass Transfer, by Incropera & DeWitt (2007) to demonstrate the basics of radiation (and also conduction). Their work, along with every other engineering textbook on radiative heat transfer includes as the staple – how to calculate heat transfer between surfaces using exactly the method you describe.
Well, there are two questions relating to this particular subject for you:
1. Where is the second surface? (Earth’s surface is surface 1)
2. The specifics of the calculations of radiative heat transfer between two surfaces neglect absorption by the intervening atmosphere – where is your demonstration that atmospheric absorption can be neglected ?
My questions here are very simple – you derive equations 7 & 8 from first principles, so:
3. Just to be clear – you are now claiming these equations (7&8 in your paper) are wrong? Please confirm.
4. Given the answer to question 3 above, is the derivation incorrect or are your assumptions incorrect?
5. Why do you not address these points in your paper?
In simple terms, I am confused by your paper. It has the merit of deriving equations, it does not disprove these equations, yet your commentary on your own paper is that these equations are not correct or not applicable.
I’m sure there is a technical term for this approach, but in plain English your paper makes no sense unless it addresses these points.
As a further explanation for you (my working assumption is you don’t actually understand the basics of this subject), the Planck equation gives the monochromatic (wavelength by wavelength) “output” of a “black body”.
The monochromatic emissivity (values 0-1) shows how the material property departs from a black body at each wavelength.
When you integrate the Planck equation over all wavelengths and all directions it results in the “blackbody” Stefan-Boltzmann equation – σT^4. That is, the SB equation is the “aggregation” of all wavelengths.
When you integrate the Planck equation x the monochromatic emissivity over all wavelengths you get the modified SB equation – εσT^4, where ε is the “average” emissivity for that body at that temperature.
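This aggregation can be checked numerically. A rough Python sketch: trapezoidal integration of the Planck function over a log-spaced wavelength grid, with the factor π converting radiance to hemispheric flux:

```python
import math

H, C, KB = 6.62607e-34, 2.99792e8, 1.38065e-23  # SI constants
SIGMA = 5.6704e-8

def planck(wl, T):
    """Planck spectral radiance B(lambda, T), wl in metres."""
    return (2*H*C**2 / wl**5) / (math.exp(H*C/(wl*KB*T)) - 1.0)

def total_flux(T, wl_min=1e-7, wl_max=1e-3, n=20000):
    """pi * integral of B over wavelength ~ sigma*T^4.
    A log-spaced grid handles the huge range of wavelengths."""
    xs = [wl_min*(wl_max/wl_min)**(i/(n-1)) for i in range(n)]
    ys = [planck(x, T) for x in xs]
    integral = sum(0.5*(y0 + y1)*(x1 - x0)
                   for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))
    return math.pi * integral

T = 288.0
print(total_flux(T), SIGMA*T**4)  # both ~390 W/m^2
```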
In summary, the SB equation used for radiative transfer between bodies is the simplified version of the fundamental equation and whenever there is any confusion about the applicability of a result always start with the fundamental equations and determine what simplifications can be made.
This determination needs to be explicitly stated. You don’t start with the simplified version of a fundamental result, assume it is the actual real physics and then write off the fundamentals because your simplified solution produces a different result.
In case, what I am writing is still not clear – if the simplified result derived from the fundamental physics disproves the fundamental physics then the simplified result has also been disproven (because it depends on the fundamental equations).
And if the paragraph above is not clear I applaud your total lack of comprehension of logic, science and the last 500 years of the enlightenment and turn to other matters.
@SoD
I will follow your remarks
Remark by SoD
No one is in disagreement about heat exchange by radiation between two surfaces.
In fact, I have frequently provided examples from Fundamentals of Heat & Mass Transfer, by Incropera & DeWitt (2007) to demonstrate the basics of radiation (and also conduction). Their work, along with every other engineering textbook on radiative heat transfer includes as the staple – how to calculate heat transfer between surfaces using exactly the method you describe.
Well, there are two questions relating to this particular subject for you:
1. Where is the second surface? (Earth’s surface is surface 1)
2. The specifics of the calculations of radiative heat transfer between two surfaces neglect absorption by the intervening atmosphere – where is your demonstration that atmospheric absorption can be neglected ?
answer by JWR
Ad 1
You agree with the work of Christiansen of 1883, in particular, I suppose, the exchange of heat by radiation between surfaces with different emission coefficients!
I repeat it in plain words:
“two surfaces exchange information concerning their temperatures and their surface conditions on the basis of which heat is exchanged from the warmer to the colder surface”:
q(1–>2)=eps12*sigma*(T1^4-T2^4)
In my reference [3], I give examples of applications of these equations. It is shown there that the heat exchange by radiation is one-way traffic from the higher temperature to the lower temperature. Trying to apply the proposal of Prevost does not work, in particular not for two surfaces with different emission coefficients. See my reference [3].
In my first paper on the stack model [1], I use the one way heat flow between grids, that are plates with holes.
Already at that time I anticipated your question “where is surface 2?”.
I insisted in [1] that I was analyzing stacks of grids in vacuum, or in air without IR-sensitive trace gases. I postponed the discussion of whether the stack of gauzes represents a semi-transparent atmosphere with IR-sensitive trace gases until numerical results of the stack model were obtained.
I made models with stacks with different absorption coefficients f, where (1-f) represent the holes in the gauze, different number of layers, the usual numerical experiments to test the convergence of procedures.
Only at the end of paper [1] did I suggest that a stack with f=beta*delta_z looks like an atmosphere with absorption beta, and I validated it by means of the published results of 3 K&T type papers.
This answers your question about what surface 2 is: it is the discretized layer of an atmosphere with f=beta*delta_z. In [2] it is also presented how, by changing a parameter “m” of the beta distribution, the validation with K&T papers could be carried out.
Ad 2
In [1] the equations were developed by means of a finite difference mesh. The mesh points were the layers. Between two adjacent layers there were no IR-sensitive trace gases; they are concentrated in the mesh points.
In a second paper [2] I introduced the concept of finite elements. In the elements with nodes belonging to adjacent layers, the resulting equations are the same.
The finite elements are overlapping, and in elements with nodes of layers not adjacent to each other, a viewfactor was introduced. See [2].
It turned out that the viewfactors do not have a big influence on OLR. The reason is that in the K&T global and annual mean atmosphere the heat transport from the surface to the atmosphere is not governed by radiation but rather by convection. I understood that Bryan is also making this point.
Question by Sod
My questions here are very simple – you derive equations 7 & 8 from first principles, so:
3. Just to be clear – you are now claiming these equations (7&8 in your paper) are wrong? Please confirm.
4. Given the answer to question 3 above, is the derivation incorrect or are your assumptions incorrect?
5. Why do you not address these points in your paper?
Answer by JWR
ad 3 and ad 4
The paper we are discussing, with the MATLAB listing, contains both the results of [1] and of [2]. They are presented in appendix 4.
The finite difference equations in [1] were not wrong; they used a windowF vector. The finite element equations in [2] are a refinement with the viewfactorF matrix.
In option 6 the different viewfactor matrices and the window vectors are depicted graphically.
Conclusion:
the viewfactorF matrix is to be preferred; it followed from the more transparent finite element method of developing the equations.
The Schwarzschild method gives rise to a viewfactorS matrix, which is also depicted in option 6 of the MATLAB program.
In appendix 2 where the equations for a two-layer model are written explicitly, you can observe the similar structure of the FEM equations and of the Schwarzschild proposal. Only the viewfactor matrices are different: viewfactorF and viewfactorS respectively. Inserting the viewfactorS matrix in the FEM stack model gives the same result as the Schwarzschild procedure: OLR is not decreasing for increasing ftot = optical thickness at the surface.
The only drawback of the present stack model is that the Planck functions for a layer are represented by the monochromatic theta=sigma*T^4. As said previously, introducing the wavelength-dependent Planck functions did not have a high priority for me. The dependence of OLR on the distribution of beta*delta_z was studied in [1]. It turned out not to be important. I am now shopping around to find Planck functions B(T(z),…,…) to replace the theta distribution, as indicated in the paper.
It is clear from what I write in my papers and in this blog that I am trying to find out what is the scope of SoD code.
From my models I concluded already in the December 2012 paper [1] that the heat evacuation from the surface of the planet to higher layers of the atmosphere is mainly by convection, the FEM paper [2] of April 2013 confirmed it even more.
If radiation determines the heat evacuation (as in the case of a hypothetical atmosphere without a heat exchange between the 99% bulk and the IR-sensitive trace gases), it follows that the trace gases are much colder than the experimentally observed temperature with its corresponding environmental lapse rate.
How come you claim that radiation is heating up the atmosphere? My model is not showing that.
What is the reason for the huge number of iterations?
What are you iterating for in the SoD code?
In my paper of 2012, reference [1], I found out that two-way heat flow between a stack of plates gives spurious absorption.
From your code , I see that the absorption of the hypothetical up-going component is used to heat up the atmosphere.
From SoD code, my reference [5]
% the upwards radiation leaving the layer, then a heating
Eabs(i)=Eabs(i)+(radu(i,j)-radu(i+1,j))*dv;
And a similar way you add the term from the hypothetical down-ward flow:
% accumulate energy change per second
Eabs(i)=Eabs(i)+(radd(i+1,j)-radd(i,j))*dv;
My conclusion in my 2012 paper [1], that two-way heat flow gives far too high an absorption, is confirmed.
I tried to draw your attention to it earlier, but your answer was that it is based on “first principles”.
It is my opinion that the Schwarzschild procedure, as far as I could deduce it from your “equations” and your “code”, violates both the first and the second law.
JWR,
You are fighting against that part of physics where the agreement between theory and experiment is verified perhaps most accurately of all parts of physics, namely Quantum Electrodynamics, developed by Feynman, Dyson, Tomonaga, and Schwinger based on the work of other great scientists from Planck to Dirac and beyond. Their theory tells us that radiation between bodies is a two-way phenomenon. It’s well known that attempts to get correct results with any other approach are hopeless. In a formal sense an alternative description is possible, but it’s known to lead to the same results, and the only way the calculations can actually be done is the one best described as a two-way exchange of photons.
Everything that you discuss concerns phenomena which can be analyzed fully by this very well known and verified theory. Whenever you get a different result, your result cannot be anything but wrong, because the alternative has been verified. When you get the same result, your calculation may be correct, but even then it is almost certainly just confusing.
JWR,
Two surfaces with different emissivities can’t exchange information by any means other than the exchange of photons. A surface with an emissivity ε less than unity must have a reflectivity equal to 1-ε, that is it absorbs or reflects photons. The result of this is that if the two surfaces are the same temperature, the energy distribution of the photons inside the box is not dependent on the emissivities of the walls but is identical to the distribution that would result if ε was equal to 1 for all surfaces. That’s why a box with a hole in it with the walls at constant temperature is such a good approximation of a black body. I don’t see how this could happen if energy exchange is only one way. How do the inside surfaces of the box ‘know’ that there is a small hole and emit just enough radiation in exactly the right direction to give the appearance of a black body spectrum? It’s much simpler to have photons with a black body energy distribution always present.
This is another place where causality raises its head. Suppose the box is very large with the walls one light second apart. Now put a hole in one wall. Do you immediately observe black body radiation or do you have to wait two seconds for the opposite wall to detect that there are no photons coming from the new hole? Immediate detection with one-way transmission would seem to require that information be transferred by means other than photons faster than light.
JWR,
Then you’re not doing it correctly. It works just fine for me as long as you remember to add reflected radiation to the emitted radiation from each surface before calculating absorption by the other surface. An iterative approach in a spreadsheet rapidly converges to the correct solution for infinite parallel planes at temperatures T1 and T2 with emissivities ε1 and ε2:
Q = σ(T1^4-T2^4)/(1+(1-ε1)/ε1+(1-ε2)/ε2)
1-ε1 is equal to the reflectivity of surface 1.
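For anyone who wants to check, here is a small Python sketch of that iteration (each plate’s radiosity = its own emission plus reflection of the other’s), converging to the closed-form result quoted above. Note that the quoted denominator 1+(1-ε1)/ε1+(1-ε2)/ε2 simplifies to the textbook form 1/ε1+1/ε2-1:

```python
SIGMA = 5.67e-8  # W/m^2/K^4

def net_flux_iterative(T1, T2, e1, e2, iters=200):
    """Two-way exchange between infinite parallel plates: each plate's
    radiosity J is its own emission plus the reflected part of the
    other's radiosity, iterated to convergence; net flux is J1 - J2."""
    E1, E2 = e1*SIGMA*T1**4, e2*SIGMA*T2**4
    r1, r2 = 1.0 - e1, 1.0 - e2  # reflectivities
    J1, J2 = E1, E2
    for _ in range(iters):
        J1, J2 = E1 + r1*J2, E2 + r2*J1
    return J1 - J2

def net_flux_closed(T1, T2, e1, e2):
    """Q = sigma(T1^4 - T2^4)/(1/e1 + 1/e2 - 1)."""
    return SIGMA*(T1**4 - T2**4) / (1.0/e1 + 1.0/e2 - 1.0)

print(net_flux_iterative(300.0, 280.0, 0.9, 0.6))  # ~62.3 W/m^2
print(net_flux_closed(300.0, 280.0, 0.9, 0.6))     # same value
```

The geometric series of reflections (factor r1*r2 per round trip) is exactly the converging series Christiansen used.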
JWR,
In your Appendix 5 you state:
Not very well. The trace gases absorb strongly at some wavelengths and hardly at all at others. A semi-transparent grid absorbs and emits equally at all wavelengths. This means that energy absorbed by the grid is then emitted with a Planck spectrum over all wavelengths rather than the wavelengths appropriate to the specific trace gases. In addition, as pressure and temperature decrease with altitude, the molecular lines narrow so the average emissivity decreases. A semi-transparent grid, therefore, is only a crude approximation to a real atmosphere and calculations based on this approximation prove nothing about the real atmosphere.
JWR,
Your reference 3 claims to prove that surfaces with different emissivities at the same temperature are not in radiative equilibrium with two way exchange of radiation. That reference is wrong because it neglects reflection. Instead of each surface emitting only εσT^4, the surface emits that amount of radiation and reflects (1-ε)σT^4. The sum is then σT^4, identical to a black body.
This is a really, really dumb error. The fact that you didn’t catch it is strongly indicative.
And again, what mechanism other than photons could a surface possibly use to exchange information on its temperature and emissivity with another surface?
JWR,
Then there’s the premise that pyrgeometers don’t actually measure radiant flux, they measure something else. But that something else produces exactly the same result as if it were radiant flux. I believe that falls under the logical fallacy called a distinction without a difference.
My comments following JWR on December 26, 2013 at 2:58 pm are posted at the end of the comment section to break them into manageable pieces.
JWR,
Additionally, did you address this problem highlighted on December 12, 2013 at 9:47 pm:
Pekka
I was quite specific.
Blue light is not absorbed in pure water.
You reply
All wavelengths of SW are absorbed in deep water
What depth of PURE water is required to absorb blue light?
The largest penetration depth of any wavelength in pure water is about 220m. That’s the average depth at which the most penetrating violet light (wavelength about 412 nm) is absorbed. A small fraction penetrates several penetration depths deep.
Blue light penetrates about half as deep as that most penetrating wavelength.
Bryan,
Your question reflects a lack of understanding of absorption spectrometry. There is no single depth. Absorption is exponential in path length, see for example the Beer-Lambert Law. You need to specify how much of the initial intensity is absorbed. 100% is not an option as the logarithm of zero is undefined. The 1/e depth at 400nm is ~200m. So even at a path length of 4 km, some tiny fraction of the initial intensity is still present. But it’s quite small, ~2E-7%. For every ~460m of path length, the intensity is reduced by 90%.
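Those numbers follow directly from the exponential law; a two-line Python check, taking the ~200 m 1/e depth at 400 nm as given:

```python
import math

def transmitted_fraction(path_m, e_folding_depth_m):
    """Beer-Lambert attenuation: I/I0 = exp(-L/d), where d is the 1/e depth."""
    return math.exp(-path_m / e_folding_depth_m)

d = 200.0                                  # ~1/e depth of pure water near 400 nm
print(transmitted_fraction(4000.0, d))     # ~2e-9, i.e. ~2E-7 %
print(math.log(10.0) * d)                  # ~460 m per factor-of-10 (90%) reduction
```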
De Witt and Pekka
Attenuation of a beam of blue light by pure water is not the same as absorption of the blue light.
The light is scattered.
That’s why pure water appears to an observer to have a blue tint.
But all this is drifting away from JWRs paper.
It is a reasonable approach to use classical physics for atmospheric heat transport in the troposphere.
For long wave IR >5um equipartition theory largely holds.
Tim Folkerts (known to some of you as a working physicist who supports IPCC science in the main) did the numbers and found them in the same ballpark as measured values.
Much of the theory used in climate science anyway predates the quantum theory (1905)
Beer Lambert Law (1729)
Kirchhoff‘s Law (1862)
Stefan–Boltzmann law (1886)
Also, spectroscopy was not born as a result of quantum theory; we will not go back as far as Newton or Herschel (although we could).
Spectroscopy lets say was born in 1801, when the British scientist William Wollaston discovered the existence of dark lines in the solar spectrum.
Thirteen years later, Fraunhofer repeated Wollaston’s work and hypothesized that the dark lines were caused by an absence of certain wavelengths of light.
In 1859 Kirchhoff was able to purify substances and conclusively show that each pure substance produces a unique light spectrum; with that, analytical spectroscopy was born.
Kirchhoff went on to develop a technique for determining the chemical composition of matter using spectroscopic analysis that he, along with Robert Bunsen, used to determine the chemical makeup of the sun.
Bryan,
Many things have not changed with the introduction of QM, many other things have changed.
All experimental evidence tells that every time QM differs from older theories, QM is correct. Because you don’t know a priori where the theories differ, you must check every time what QM tells, and use the older theories only when the theories agree.
What is the equipartition theory for LWIR > 5µm? If you can first tell that then are you sure that it agrees with the correct QM based theory?
Actually it’s certain that no classical theory can agree with the thoroughly verified QM based theory, because the absorption spectrum is a purely QM result, and a result that’s essential for the correct understanding of radiation in the atmosphere. Your statement lacks all merit – or where is the merit to be found?
As long as you stick to outdated or otherwise wrong theories you have no chance of understanding the atmosphere.
JWR’s paper is not much better than before, but too much effort has already been spent in pointing out its weaknesses.
Bryan,
The reduction in intensity by scattering is also exponential with path length. However, if the attenuation were due only to scattering, i.e. no absorption at all, the diffuse scattered flux upward at the surface would be a large fraction of the initial intensity, much as a cloud reflects sunlight in the visible spectrum by scattering from individual droplets: the ocean would glow the way molecular scattering makes the sky glow blue. In addition, the source, viewed from depth, would appear shifted toward the red end of the spectrum, like the sun near the horizon. It doesn't, because the rest of the visible spectrum is absorbed, not scattered, more strongly than the blue and near UV; and the ocean doesn't glow, because the scattering coefficient is small compared to the absorption coefficient. Clouds appear white because the path length is too short to absorb significantly in the visible, while the near IR is strongly absorbed.
De Witt
So now sea water is the same as pure water?
You will need to try harder.
Bryan,
You have no actual evidence for your assertion that blue light is attenuated only by scattering and not absorption in pure water. Just because there are no fundamental resonances near 400 nm, where attenuation in pure water is at a minimum, does not mean that there can be no absorption. A molecule can absorb at overtone frequencies that are ~2, 3, 4… times the fundamental frequency. There are combination bands as well. See this reference. The probability of absorption of a photon by an overtone band is much lower than for the fundamental resonance, but it's not zero. The transition at 401 nm, a·v1 + b·v3 with a + b = 8, means that the transition is a combination of the symmetric stretch, v1, and the asymmetric stretch, v3, with the sum of the quantum number changes equal to 8. So you could have v1 + 7v3, 2v1 + 6v3, 3v1 + 5v3, 4v1 + 4v3, etc. What you don't know about molecular spectroscopy would fill books. I suggest you read one.
Bryan,
Several different outcomes are possible for a photon that hits the sea surface:
1) it can be reflected from the surface
2) it can be absorbed in sea water
3) it can reach the bottom and be absorbed there
4) it can enter the sea and be refracted or scattered in a way that brings it back to the surface, where it exits the water again
The alternatives (1) and (4) contribute to the albedo, (4) very much less than (1). Alternative (4) does not affect any calculation of the energy balance at a significant level. Arguing further on that is irrelevant.
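For scale, the size of contribution (1) can be estimated from the Fresnel equations. A sketch at normal incidence, assuming a textbook refractive index of about 1.34 for water (both the value and the normal-incidence restriction are my assumptions, for illustration only):

```python
def fresnel_normal_reflectance(n1, n2):
    """Fresnel reflectance for unpolarized light at normal incidence,
    going from a medium of index n1 into one of index n2."""
    return ((n2 - n1) / (n2 + n1)) ** 2

# air -> water at normal incidence
r = fresnel_normal_reflectance(1.0, 1.34)
print(f"reflectance: {r:.3f}")  # about 0.02, i.e. roughly 2% of incident light
```

At grazing incidence the reflectance rises steeply toward 1, which is why (1) dominates the ocean's specular albedo at low sun angles.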
Pekka perhaps you should re-read the link below
http://hyperphysics.phy-astr.gsu.edu/Hbase/chemical/watabs.html
Further on in the previous discussion we found that any absorption was by harmonics of the fundamental.
This reminds us of the wave properties of the photon.
Bryan,
That page tells about the absorption by water molecules in gas. I based my numbers on measured absorption in pure liquid water.
Bryan,
Does the relative absorption graph in your link go to zero anywhere? Obviously, it doesn’t. You’re neglecting the effect of the wings of the strong absorption bands at shorter and longer wavelengths. You get pressure broadening of absorption lines in a gas. The effect of the structure of a liquid, particularly water where there is strong hydrogen bonding, is going to be even greater. Water is not perfectly transparent anywhere. In fact, there is nothing in the real world that is perfectly transparent, perfectly reflective or perfectly absorptive. The path length in intergalactic space is very long, but not infinite.
A link where also the absorption of radiation in liquid water is discussed is this
http://www1.lsbu.ac.uk/water/vibrat.html
It has links to additional data sources on absorption of radiation in liquid water. My own calculations on the shares of radiation that penetrate to various depths are based on those sources, together with some standard data on the spectrum of solar radiation at the Earth's surface.
The link I gave contains a curve that shows the absorption coefficient in liquid water. The minimum of about 0.00005 1/cm is in the UV. The inverse of that is about 200m (more accurately from the numerical data the value is 220m).
Pekka
You will need look at my post and answer the points I make rather than make up your own straw man to answer.
Is that too hard to do?
I said
“Attenuation of a beam of blue light by pure water is not the same as absorption of the blue light. The light is scattered.
That’s why pure water appears to an observer to have a blue tint.”
You say
“A link where also the absorption of radiation in liquid water is discussed”
You finish by saying
“As long as you stick to outdated or otherwise wrong theories you have no chance of understanding the atmosphere.”
Nowhere have I questioned quantum mechanics as the blue light in pure water proves.
However, anyone who used relativity theory rather than Newton to calculate the speed of a car on a road would be considered a crank.
It's true that the crank would get a more accurate answer, if it could be measured.
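The car analogy is easy to quantify: the relativistic correction scales as the Lorentz factor gamma - 1, roughly (1/2)(v/c)^2. A quick sketch (the 30 m/s speed is just an illustrative choice):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma_minus_one(v):
    """Fractional relativistic correction for a speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2) - 1.0

v_car = 30.0  # roughly 108 km/h
corr = lorentz_gamma_minus_one(v_car)
print(f"relativistic correction: {corr:.1e}")  # ~5e-15
```

At a few parts in 10^15 the correction is far below any speedometer's resolution, which is the sense in which the classical answer is "exact enough" here.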
The point I make is that it is reasonable to use classical physics to deal with radiation > 5um
Are you trying to imply that nobody knew about radiation before 1905 or dealt with it in heat transfer problems?
Likewise a correct consistent theory of climate involving quantum mechanics would be marginally more accurate in this area.
However you must be very complacent if you think that that such a consistent theory presently exists.
Perhaps testing each theory by experiment and comparing might give valuable insights.
But I forgot IPCC science is perfect in all respects and beyond the need for any such testing.
Advocates of IPCC science are apt to act like propagandists for a cause rather than scientists.
Bryan,
Please tell, how classical physics can be used to get anything that’s even remotely correct on the radiative heat transfer in the atmosphere.
People had learned tricks to handle radiative heat transfer until QM provided an explanation. Planck made an intermediate step by making an ad hoc assumption to derive the right formula, but even that was an unexplainable trick at the time. It could not and cannot be understood without QM.
Not a single atomic or molecular spectrum can be understood at all without QM.
People use classical physics to “disprove” GHE. Disproving something requires the use of a theory confirmed as valid in the particular field of physics, but classical physics is not confirmed for these applications, it's disconfirmed. All these “proofs” are the result of using theories known to be totally wrong.
The question is not about minor adjustments, it’s about the difference between correct and totally wrong.
Most of thermal IR is at > 5µm; the 15µm CO2 absorption peak is not far from the peak of the IR spectrum. Nothing about the influence of that peak can be understood without the shape of the absorption/emission spectrum. It's no wonder that people who don't accept that create “proofs” that have no real value.
Pekka
The specific heat capacity of CO2 drops by 13% from the Earth's surface to the tropopause.
Since the mass does not change, the 13% represents the radiative energy lost in the vertical column.
It will be passed on vertically since the horizontal components self cancel
Similarly for H2O but much more significant than CO2
How would you account for the increased Specific Heat of CO2 other than by increasing vibrational modes?
Other bulk quantities include the transport coefficients.
The thermodynamic effect of an object radiating on another object has already been taken into account by the coefficient of thermal conductivity which, despite its name, measures all kinds of diffusive heat transport including radiation.
G&T write the following in their first falsification paper:
“A physicist starts his analysis of the problem by pointing his attention to two fundamental thermodynamic properties, namely the thermal conductivity, a property that determines how much heat per time unit and temperature difference flows in a medium and the isochoric thermal diffusivity, a property that determines how rapidly a temperature change will spread, expressed in terms of an area per time unit.”
In their reply to Halpern et al. they write:
“Speculations that consider the conjectured atmospheric CO2 greenhouse effect as an “obstruction to cooling” disregard the fact that in a volume the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients.”
If you want to know the thermodynamic effect of doubling the CO2 concentration you only need to measure the changes in the transport coefficients. These changes will of course be unmeasurable (although there is probably some tiny factual difference).
No need for any redundant radiative transfer calculations.
http://arxiv.org/pdf/0707.1161
Bryan,
G&T have been discussed in separate threads on this site already. Their work doesn’t count as support for anything.
Otherwise we are just repeating the same arguments. Thus continuing is of no value.
Bryan,
No, they’re not. That is an assertion by G&T that is unsupported by any literature citations or data. As a result, it amounts to an argument from authority and proves nothing.
By the way, nice goal post move in your reply to me above. You’re the one who asked about the absorption of radiation at the blue end of the spectrum by pure water. Yes, sea water is not pure and attenuates more strongly than pure water. Your contention, however, that in the absence of impurities, water does not absorb radiation with wavelengths near 400nm is still false. You have yet to admit that.
De Witt
As G&T say
“the radiative contributions are already included in the measurable thermodynamical properties, in particular, transport coefficients”
How could they not be?
If you take a mole of CO2 at say 500C, it will have a much higher Cp than one at -50C.
This is because a much higher % of the molecules will have their vibrational modes occupied.
If this mole is placed inside an IR-transparent container and the container placed in outer space, then the CO2 in the container will emit IR radiation as the temperature drops.
The Cp will show a corresponding drop.
How could it not?
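The claim that Cp of CO2 is much higher at 500C than at -50C can indeed be checked numerically, but only via the Einstein (quantized harmonic oscillator) heat-capacity formula, which is itself a QM result. A rough sketch, with mode wavenumbers taken from standard spectroscopy tables (an ideal-gas, harmonic-oscillator approximation, not anyone's measured data):

```python
import math

R = 8.314           # gas constant, J/(mol K)
HC_OVER_K = 1.4388  # h*c/k_B in cm*K (second radiation constant)

# CO2 normal modes (wavenumbers in 1/cm); the 667 bend is doubly degenerate
MODES = [667.0, 667.0, 1388.0, 2349.0]

def einstein_mode_cv(wavenumber, T):
    """Heat-capacity contribution of one harmonic mode, in units of R."""
    x = HC_OVER_K * wavenumber / T
    e = math.exp(x)
    return x * x * e / (e - 1.0) ** 2

def cp_co2(T):
    """Molar Cp of CO2 (J/mol/K): 7/2 R translation+rotation plus vibration."""
    vib = sum(einstein_mode_cv(w, T) for w in MODES)
    return R * (3.5 + vib)

print(f"Cp at 223 K (-50 C): {cp_co2(223.0):.1f} J/mol/K")  # roughly 33
print(f"Cp at 773 K (500 C): {cp_co2(773.0):.1f} J/mol/K")  # roughly 51
```

The large increase comes entirely from the vibrational terms filling in as kT approaches the level spacings; by itself this fixes the heat capacity but says nothing about the transition probabilities that determine emissivity.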
Cp is not a radiative property. The share of molecules in a vibrationally excited state is not a radiative property. Absorptivity and emissivity are not determined by that share; they are determined by a different mechanism. The only direct connection is that both require the existence of a vibrationally excited state, but the numerical values are not directly related.
Both can, however, be explained by QM based on the same model of molecules. Neither can be explained without.
De Witt
“Your contention, however, that in the absence of impurities, water does not absorb radiation with wavelengths near 400nm is still false. ”
The hyperphysics link says it for me rather well!
“It doesn’t absorb in the wavelength range of visible light, roughly 400-700 nm, because there is no physical mechanism which produces transitions in that region.”
If you want to ignore the valuable insights offered by quantum mechanics in the last 100 years then that's up to you.
http://hyperphysics.phy-astr.gsu.edu/Hbase/chemical/watabs.html
Bryan,
That’s not true for liquid water. See the link that I gave. Hyperphysics does not refer to the liquid at all in this section.
Your ignorance of physics does not make the reality what you imagine it to be.
Pekka
What makes you say that the link refers to water vapour rather than water?
Nothing in the article refers to water vapour.
You might as well say they were talking about ice.
The statements are true for water molecules in the gas phase, not for the liquid. That's a good reason to conclude that it refers to gas.
In general, the interactions of water molecules in the gas phase are discussed all over, because their details are so important; the details of absorption of radiation in liquid water are important only for some specific applications. Therefore people often even forget to mention that they are discussing water in the gas phase.
From extensive studies and also research in physics I happen to know enough to understand correctly what other physicists are trying to tell, even when they forget to mention some implied assumptions. That also helps me to identify physics-related crap as crap, even when the author has some apparent credentials that would lead one to expect that he knows something. G&T are a case in point. They claim to know physics, and have some credentials, but evidently don't understand much at all about the issues they write about.
Bryan,
There is a physical mechanism for absorption by liquid water from 400-700 nm. It just requires a very long path length to observe. Even if the quantum number change greater than six combination transitions do not contribute significantly, the wing of the long wavelength vibrational band must eventually overlap the wing of the short wavelength electronic transition band. Absorption lines are broadened in liquids so much that you generally only observe bands of overlapping lines, not the lines themselves. As a result of the broadening, those bands have long tails. They don’t have a brick wall cutoff. You continue to ignore the spectrum on the link you cite which does not go to zero anywhere and looks exactly like the overlap of two bands in the visible. The y axis is labeled relative absorption, not attenuation.
The closest thing to a brick wall cutoff in spectrometry are the absorption edges observed in x-ray spectrometry. But even those have fine structure which can be used for analysis.
Pekka
I said
“Attenuation of a beam of blue light by pure water is not the same as absorption of the blue light. The light is scattered. That’s why pure water appears to an observer to have a blue tint.”
Nowhere have I questioned quantum mechanics as the blue light in pure water saga proves.
Do we always have to use quantum mechanics?
No!
Do we now use relativity theory to calculate the speed of a car on a road ?
It's true that any pompous twit who so insisted would get a more accurate answer, if it could be measured.
However this is getting well away from JWR's paper so I refuse to be distracted further by irrelevances.
Stick to the main point and don't flit about like a butterfly because you are unable to address the substantive points that JWR, Schack and G&T make.
How can you separate radiation from the physical causes that produce radiation?
The substantive point I make is that ;
It is reasonable to use classical physics to deal with radiation > 5um.
You disagreed
Are you trying to imply that nobody knew about radiation before 1905, or dealt with it in heat transfer problems?
A correct consistent theory of climate involving quantum mechanics would be marginally more accurate in this area.
However you must be very complacent if you think that that such a consistent theory presently exists.
17 years of the recent temperature record shows no link to increasing CO2.
There is no historic link either.
Has the thought never occurred to you that you are pushing a FAULTY interpretation of quantum mechanics ?
Perhaps testing each theory by experiment and comparing might give valuable insights.
IPCC science is ‘settled’ and beyond the need for any such testing, you say.
However reality cannot be ignored
Do we always have to use QM?
No!
Do we have to use QM, when it makes a difference?
Yes!
Does it make a difference for interaction of radiation with matter and radiative transfer in gases?
Yes, it leads to totally different conclusions (what other reason would skeptics have for so often wanting to ignore QM?).
Actually I should correct my previous comment.
There’s no classical theory of radiative heat transfer in gases and no consistent classical theory of interaction of radiation with matter. Thus all supposed classical theories are either old and known to lack self-consistency, or new inventions that have never been more than products of imagination of a few people.
Pekka
Classical physicists were quite smug around 1900, thinking all that was left to do was to determine the physical constants to a few more decimal places.
The vibrational modes of IR gases are still treated with a semi-classical visual model.
Vibrating spheres connected by springs is often used as an analogy.
All this concerns radiation > 5um
For radiation < 1um, Quantum Mechanics is required as my blue light in pure water example shows.
In 200 years time perhaps there will be a consistent, correct QM theory of the climate.
If calculations are then made and compared to classical theory, the results will be roughly the same, and I agree that the QM result will be more accurate.
Bryan,
That analogy is only used for instruction, not for actual calculations. Most of the observed transitions that are tabulated in the HITRAN database can be calculated by QM ab initio. It is already possible to calculate atmospheric emission spectra that agree with observed spectra to a precision of about 1%. The limit of the precision is the knowledge of the atmospheric temperature and partial pressure profiles, not QM, with the possible exception of water vapor continuum absorption. Radiative transfer is defined by QM and is considered to be well understood. Climate is more complicated as the movement of air and liquid is involved. QM has nothing to say about that.
Pekka and De Witt
If there were a prize for misinterpreting other people's posts, then you would be joint winners.
You both have the knack of getting the ‘wrong end of the stick’.
The woods would not be seen for the trees. (Pun intended)
Insignificant side comments are examined forensically to avoid dealing with any matter of substance.
Experimental reality does not matter in your make believe world.
R. W. Wood's experiment falsifying the effect has not been seriously challenged.
There is no historical evidence of CO2 driving the climate.
In fact quite the reverse.
The recent 17 years confirms the historic record
No serious physics or thermodynamics textbook mentions the ‘greenhouse effect’.
Perhaps it is time to look at alternatives like the paper by JWR to see if it has any answers.
My more substantive point has been sidestepped.
For LWIR >5um the heat transferred by a gas cooling down by radiation only would work out about the same for both classical and QM calculation.
Bryan,
That depends on who’s voting. SoD, Pekka and I would give the prize to you hands down.
That’s rich coming from you. But then irony always increases.
It was, in fact, challenged immediately, as it was counter to all previous experimental results starting with de Saussure in 1767. Read Abbot’s rejoinder to Wood published in the same journal shortly after Wood’s note.
The actual reality is that Wood’s experimental results have never been replicated, in part because his description of the experiment is sadly lacking in details. Roy Spencer and Vaughan Pratt produced the opposite results from Wood as have I. The one published result claiming to reproduce Wood was fatally flawed as should be obvious to even you from the picture of the experimental apparatus. Guess which box had the IR transparent cover. Wood also did not attempt to defend his experimental results after Abbot’s critique. That should be a critical point for you as you denigrate Pratt’s results because you claim that he is not defending them from criticism by every Tom, Dick and Harry.
Undergraduate physics textbooks in general only spend a few pages on absorption and emission of radiation by atoms or molecules. The Feynman Lectures on Physics, for example, devotes ~2 1/2 pages to Einstein's laws of radiation (Volume I, chapter 42-5) and a few pages more to the general theory of radiation absorption derived from Maxwell's equations. The specifics of radiative transfer through the atmosphere are too specialized a subject for an introductory physics text, or chemistry text for that matter. The same holds true for introductory thermodynamics textbooks. Many textbooks for Atmospheric and Oceanic science courses will contain sections on the greenhouse effect. A classic textbook on heat transfer mentions the greenhouse effect but doesn't go into detail (see below).
If you apply a 65 year moving average to the instrumental record, you get a curve that looks much like the increase in ghg forcing over that same time period. That doesn’t prove anything, but it is evidence. There is evidence of long term periodicity in the instrumental record with a period of ~65 years and a magnitude that explains the recent slow down in global temperature as well as the one between ~1950-1970 without invoking the aerosol kludge.
And precisely how are you going to calculate the emissivity of the gas in question without QM? Then how are you going to calculate total emission without the Planck equation or its integrated form, the Stefan-Boltzmann equation? The Planck equation was formulated because classical physics failed dismally to predict emission from a black body. There is no complete classical description of radiative cooling of a gas. The classical solution to the radiative transfer problem requires that you know the source function. Without QM, you don’t know it.
All modern emissivity tables use radiative transfer models to calculate effective emissivity. See Modest, Heat Transfer, Third Edition, for example. As I mentioned above, the greenhouse effect is mentioned on page 2 and 96 of this standard text on heat transfer. In the past, Hottel measured emissivities at different temperatures, pressures and path lengths and was fairly close. The calculated results, which are ultimately traceable back to line-by-line models, are more accurate.
From Modest, page 2:
Please don’t pull a G&T and pick nits about how CO2 doesn’t absorb solar energy directly. The energy absorbed by CO2 does come from the sun, just not directly. You’re always going to be able to pick nits with any one sentence description of the effect.
De Witt
Here is a simple example of the difficulty of separating radiation from the particles that cause the radiation.
We have a cooling IR-active gas like CO2 losing heat by radiation, conduction and convection.
All three methods involve the specific heat.
Do we use the specific heat that includes a radiative component after stripping out the radiative energy lost?
Perhaps a new specific heat value is used in which the radiative contribution has been removed.
In this case I would be very interested in the experimental arrangement to determine the SHC of CO2 with no radiation at a particular temperature.
Pick either Cp or Cv.
Bryan,
As long as we have a stationary state where overall cooling and warming are equally strong, the specific heat makes no difference.
When the temperature is changing, the relevant specific heat is Cp of air including all its components at the local temperature. CO2 has a very small influence on that proportional to its concentration.
Specific heat of CO2 is influenced by the energy levels of vibrational excitations, but not by the emissivity/absorptivity related to these levels. The existence of those levels does not tell about the transition probabilities between the ground level and the excited levels due to IR radiation. The occupation rate of the levels is determined by collisional excitation and deexcitation, not by emission and absorption. The dependence of the occupation rate on temperature affects specific heat; emission and absorption have no influence on that.
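The temperature-dependent occupation rate described here can be estimated with a Boltzmann factor. A two-level sketch for CO2's doubly degenerate 667 cm^-1 bending mode (the wavenumber is an assumed value from standard spectroscopy tables; higher levels are neglected):

```python
import math

HC_OVER_K = 1.4388  # h*c/k_B in cm*K (second radiation constant)

def excited_fraction(wavenumber_cm, T, degeneracy=2):
    """Two-level Boltzmann estimate of the fraction of molecules in the
    first excited vibrational state (higher levels neglected)."""
    boltz = degeneracy * math.exp(-HC_OVER_K * wavenumber_cm / T)
    return boltz / (1.0 + boltz)

# CO2 bending mode at a typical near-surface temperature
f = excited_fraction(667.0, 288.0)
print(f"excited fraction at 288 K: {f:.1%}")  # a few percent
```

Collisions maintain this population at its thermal value; in the troposphere, absorption and emission of photons are far too slow to perturb it, which is why Cp depends on the level energies but not on the emissivity.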
Rate of emission depends on the temperature according to Planck's law, but the emissivity coefficient in the formula for the emission strength is determined by the coupling constant, which is an independent quantity as I wrote above.
Pekka
You have misunderstood my post to De Witt.
The gas is not air but CO2
Let's make it a little more concrete.
Let's say we have one kilogram of CO2 at 350K in a one cubic metre container with IR-transparent walls of negligible heat capacity, which is itself in a vacuum.
How much heat is lost by the gas if it cools down to 250K?
I don’t think that there’s any disagreement on that as long as everyone knows whether that happens at constant pressure or constant volume.
Why should anyone be interested in that question?
When you can show the relevance of that question to the discussion on radiative energy transfer in the atmosphere and into or out of the atmosphere, we can consider it. Bringing in totally irrelevant issues has no other effect than confusing the discussion. That seems to be your goal.
Pekka
I think in your rather grudging reply you have confirmed that the answer would be exactly the same worked out by classical physics or quantum mechanics.
As long as classical physics cannot give any answer at all, that's a moot proposal that cannot be true. Claiming that I would have confirmed it is ridiculous – how far can you go in your misrepresentation of others?
As I have written many times, classical physics does not have any self-consistent description of the required physics. It does not give any answer on these questions.
If you disagree, you must be able to tell, how the answer can be obtained from classical physics or where we can find a valid calculation based on classical physics. Making unsubstantiated claims on what you believe on the state of physics around year 1900 is not enough.
Physicists were able to make some rough calculations based on their knowledge, as the work of Arrhenius shows. Those calculations were, however, not fully consistent, and must be considered only early and preliminary estimates.
All self-consistent calculations depend on quantization and the resulting concept of the photon. That theory is also thoroughly verified empirically.
Pekka says
“If you disagree, you must be able to tell, how the answer can be obtained from classical physics ”
Easy
Specific Heat in this case is Cv
No work is done in cooling
Work out the loss of internal energy between 350K and 250K
Since the only way to lose energy is by radiation then it’s all radiative loss.
No mention of photons required.
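For what it's worth, the recipe above can be pushed to a number, though the temperature-dependent Cv it needs already comes from the quantized (Einstein) treatment of the vibrational modes. A sketch under ideal-gas, harmonic-oscillator assumptions (constants and mode wavenumbers are standard tabulated values, not figures from this thread):

```python
import math

R = 8.314             # gas constant, J/(mol K)
HC_OVER_K = 1.4388    # h*c/k_B in cm*K
MOLAR_MASS = 0.04401  # kg/mol for CO2
MODES = [667.0, 667.0, 1388.0, 2349.0]  # CO2 normal modes, 1/cm

def cv_co2(T):
    """Molar Cv of CO2 (J/mol/K): 5/2 R translation+rotation plus vibration."""
    vib = 0.0
    for w in MODES:
        x = HC_OVER_K * w / T
        e = math.exp(x)
        vib += x * x * e / (e - 1.0) ** 2
    return R * (2.5 + vib)

def heat_lost(mass_kg, t_hot, t_cold, steps=1000):
    """Integrate n * Cv(T) dT from t_cold to t_hot (trapezoid rule)."""
    n = mass_kg / MOLAR_MASS
    dT = (t_hot - t_cold) / steps
    temps = [t_cold + i * dT for i in range(steps + 1)]
    cvs = [cv_co2(t) for t in temps]
    return n * dT * (sum(cvs) - 0.5 * (cvs[0] + cvs[-1]))

q = heat_lost(1.0, 350.0, 250.0)
print(f"energy radiated away: {q / 1000:.0f} kJ")  # around 65 kJ
```

The total energy lost is straightforward; the harder question, as noted in the thread, is the *rate* of loss, which requires the emission spectrum and cannot be obtained from Cv alone.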
Now that's me signing off for the season.
You can work out the same problem using QM
The answers will be the same.
Bryan,
The amount of energy lost from the temperature change is not the issue. It’s the rate of loss that’s important. That tells you how much energy needs to be supplied to maintain a constant temperature. Please detail how you would calculate the rate of energy loss using only classical physics. Then you can also calculate the cooling rate of dry air containing 280 ppmv CO2 and an equal mass containing 560 ppmv CO2.
Don’t let the door hit you on the way out.
Bryan,
I hope that you understood that you didn’t give any real answer at all. If not, then there’s even less hope that this discussion will lead anywhere.
Bryan,
Heat capacity at constant pressure is important to the structure of the atmosphere because it determines the value of the dry adiabatic lapse rate. It also helps to determine the cooling rate by radiation of a parcel of gas with a given mass, but only in conjunction with the rate of emission of radiation. The cooling rate of a kilogram of argon at 350K is going to be many orders of magnitude slower than the cooling rate of a kilogram of CO2 even though the difference in heat capacity between argon and carbon dioxide is less than a factor of two and gets smaller as the temperature drops. Classical thermodynamics will not tell you this.
The recent discussions with JWR and Bryan tell once more how hopeless it is to resolve issues by pointing out errors and weaknesses in a highly incomplete “theory”. Such “theories” redefine concepts, and do that in a way that allows new redefinitions when errors are shown with the use of the earlier ones.
It’s, of course, impossible that an individual could create a fully specified alternative for the present theories. Thus the vagueness of the presentation is not an additional weakness of the theory, but rather a factor that makes constructive discussion impossible, when it’s in some way accepted that the rules of discussion are set by the creator of the new theory.
We know from our own learning experience that the only working alternative is to build understanding on the work of past scientists, and to a major part on the knowledge learned from good textbooks or from teaching based on such textbooks.
Discussing separate issues in the interpretation of established theories and results of scientific research is illuminating and productive, but discussion of these new theories seems to lead nowhere. If a person refuses to accept the standard approach, not by pointing out where he sees a problem, but by proposing something very different as alternative, the only result seems to be an endless discussion.
[Picking up a few different threads from the above discussion with JWR, from his most recent comment of December 26, 2013 at 2:58 pm which followed my comments and questions of December 22, 2013 at 11:35 am]
JWR,
Originally your point appeared to be:
And I picked this up because a common theme in the more physics-challenged blog world is “engineers know how to do radiation heat transfer calculations and climate science ignores it to invent new stuff because it doesn’t know how to do the basics”
So my point was – engineers don’t use the equations you provide to do a calculation with one surface and an atmosphere. In fact, engineers who have to consider atmospheric absorption for, say, furnace calculations refer to Goody (for example) for how to do calculations of radiative transfer in the atmosphere. And engineers who have to deal with an atmosphere that absorbs have to make use of calculated absorptivities at a given temperature and a given CO2 concentration.
It is quite legitimate to attempt to use basic radiation exchange between surfaces by putting one layer of the atmosphere as a surface – so long as you understand the limitations and approximations involved.
Just don’t claim this is the “engineering way” because it’s not – or please cite a paper or textbook for how engineers do radiative transfer through the atmosphere.
[Picking up a few different threads from the above discussion with JWR, from his most recent comment of December 26, 2013 at 2:58 pm which followed my comments and questions of December 22, 2013 at 11:35 am]
JWR said:
I gather from your comment that Christiansen 1883 or your interpretation of said text is in opposition to every radiative heat transfer textbook (all engineering textbooks of the last 50-100 years) and all physics textbooks for the last 100 years.
Therefore, I rest my case.
Physics textbooks – all of them – confirm that thermal emission of radiation is a real two way process, not a process where the hotter surface works out how much radiation to emit based on its understanding of the temperature of the colder surface.
I can help you rewrite your paper if you like.
You should start it something like:
Your readers would then understand. I suspect that you do not even realize this is the claim of your paper.
And given that DeWitt has already replied on this specific subject explaining the error in your calculation, perhaps you can respond to him on this.
[Picking up a few different threads from the above discussion with JWR, from his most recent comment of December 26, 2013 at 2:58 pm which followed my comments and questions of December 22, 2013 at 11:35 am]
I had asked:
Answer by JWR:
What does this mean? You don’t know? You don’t understand the question?
Your earlier comment: “It is shown there that the heat exchange by radiation is an one-way traffic from the higher temperature to the lower temperature.” states – in opposition to a century of uncontroversial basic physics – that radiation is NOT a two way process (see note 1).
You derived the Schwarzschild equations which rely on the well-proven fact that there IS a two-way process.
Your paper before last (I haven’t checked this one) just blithely claims that absorption of radiation by the atmosphere is only absorption of the NET radiation, not absorption by radiation in any direction. This is in OPPOSITION to the Schwarzschild equations and to all experimental spectroscopy for the last 100 years.
At this point I believe readers who have stayed this far (if there are any) and struggle with radiative physics can draw comfort – there is a point to this paper.
People who comment favorably on this paper, and blogs who discuss it without pointing out these obvious “smack around the side of the head with a large brick” shortcomings should be avoided if the intent of the reader is to learn anything about reality.
JWR,
It doesn’t seem that you understand the first point about the subject you are writing a paper on. Your paper is not about Finite Element Analysis, of which you probably have some understanding.
It needs correct equations to go into an FEA.
It needs an understanding of how to derive one equation from another, what constitutes proof, and what constitutes assumptions.
I cannot help you.
[Note 1 – Just to be clear, radiation goes in all directions. As explained in this article, in the atmosphere the plane parallel assumption works pretty well, and therefore the problem can be resolved down to a manageable set of equations. That is why we refer to the problem as two way (up and down) – it is shorthand terminology. ]
@DeWitt
Thank you for correcting me: I forgot to take the reflection into account in a statement about Prevost. It gave me a hint for a more elegant proof of the Christiansen law of 1883.
As usual we write theta = sigma*T^4; that makes things easier to edit.
Considering two surfaces with theta1 and theta2 and with emissivities eps1 and eps2, we can write for a hypothetical q1 (in the Prevost sense, from 1 in the direction of 2) and a hypothetical q2 (from 2 in the direction of 1):
q1 = eps1*theta1 + (1-eps1)*q2 (1)
q2 = eps2*theta2 + (1-eps2)*q1 (2)
The reflection (1-eps1)*q2 at surface 1 is taken into account, and in the same way (1-eps2)*q1 at surface 2.
We have two simultaneous linear equations for q1 and q2 which can be solved analytically. No need for iterations!
With a = 1, b = -(1-eps1), c = -(1-eps2), d = 1:

a*q1 + b*q2 = eps1*theta1
c*q1 + d*q2 = eps2*theta2

Solution:

q1 = (1/det)*( d*eps1*theta1 - b*eps2*theta2)
q2 = (1/det)*(-c*eps1*theta1 + a*eps2*theta2)

det = a*d - b*c = eps1 + eps2 - eps1*eps2 (3)
For the two hypothetical fluxes q1 and q2 we obtain:
q1 = (eps1*theta1 + eps2*(1-eps1)*theta2)/det (4)
q2 = (eps1*(1-eps2)*theta1 + eps2*theta2)/det (5)
The real heat flux from 1 to 2, for theta1 > theta2:
q(1→2) = q1 - q2 = eps12*(theta1 - theta2) (6)
1/eps12 = 1/eps1 + 1/eps2 - 1 = det/(eps1*eps2) (7)
This is the relation of Christiansen from 1883, including reflection. For theta1 = theta2 we find q(1→2) = 0.
The important lesson is that we always find the combination (theta1 - theta2): for emission alone, and for the combination of emission and reflection. It reflects the second law: the heat flux is zero when the temperatures are equal.
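For readers who want to check the algebra, the two-equation system above can be solved directly and compared with the Christiansen relation (6)–(7); a minimal sketch, with arbitrary example temperatures and emissivities:

```python
import numpy as np

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def exchange(T1, T2, eps1, eps2):
    """Net radiative heat flux from surface 1 to surface 2 (parallel gray surfaces)."""
    theta1, theta2 = sigma * T1**4, sigma * T2**4
    # q1 - (1-eps1)*q2 = eps1*theta1 ;  -(1-eps2)*q1 + q2 = eps2*theta2
    A = np.array([[1.0, -(1.0 - eps1)],
                  [-(1.0 - eps2), 1.0]])
    rhs = np.array([eps1 * theta1, eps2 * theta2])
    q1, q2 = np.linalg.solve(A, rhs)
    return q1 - q2

# Check against the Christiansen relation (6)-(7):
T1, T2, eps1, eps2 = 300.0, 280.0, 0.9, 0.7
christiansen = sigma * (T1**4 - T2**4) / (1/eps1 + 1/eps2 - 1)
assert abs(exchange(T1, T2, eps1, eps2) - christiansen) < 1e-9
```

The assertion confirms that solving the linear system for the two Prevost fluxes and taking their difference reproduces formula (6) term by term.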
In the case of an atmosphere discretized into N levels, we have N*(N-1)/2 pairs (theta(i), theta(j)). In order that the second law is satisfied, it is natural to describe the physical phenomena by fe(i,j)*(theta(i) - theta(j)).
That is what the stack model is doing: defining the heat transport by radiation between pairs of layers.
The concept of finite elements gives an elegant and transparent way to introduce pairs.
The physics consists of the determination of the factors fe(i,j).
And if we want to introduce wave-number dependent Planck functions, then we consider for each wave number k: fe(i,j,k)*(B(T(i),k) - B(T(j),k)). In this way, for T(i) = T(j), the contribution is zero for each wave number interval k.
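The pairwise bookkeeping described above can be sketched in a few lines. The exchange factors fe(i,j) below are uniform placeholders, not physically derived values; the point is only the structure of the sum and its second-law properties:

```python
import numpy as np

sigma = 5.670374419e-8  # W/m^2/K^4

T = np.array([288.0, 275.0, 260.0, 245.0])  # example layer temperatures, K
N = len(T)
theta = sigma * T**4
fe = np.full((N, N), 0.1)  # hypothetical, uniform exchange factors

# Net radiative heating of layer i: sum of pairwise terms fe(i,j)*(theta_j - theta_i)
heating = np.array([sum(fe[i, j] * (theta[j] - theta[i])
                        for j in range(N) if j != i)
                    for i in range(N)])

assert abs(heating.sum()) < 1e-9   # pairwise exchange conserves energy
assert heating[0] < 0              # the warmest layer is a net loser of heat
```

Because every term appears once with each sign, the total over all layers cancels exactly, and every term vanishes individually when the stack is isothermal.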
Once more, my apologies for the slip of the pen in forgetting the contribution of reflection in a statement concerning Prevost. The slip of the pen was towards the end of the paper.
What about the other examples in ref 3?
In the case of two slabs separated by a vacuum with eps1 = eps2 = 1, fluxes and temperatures are continuous at the borders between the slabs and the vacuum; there is no place for absorption of the Prevost fluxes between the two slabs and re-emission of the heat by means of back-radiation.
And what about the stack of black slabs in ref 1, which in the two-way formulation absorbs a huge amount of heat? The same holds for the Schwarzschild procedure described in "the code" and in "the equations" of SoD, where a too-high absorption is noticed.
JWR,
You must note that in a multilayer model you have:
1) energy transfer between all pairs of layers
2) energy transfer between the surface and each layer
3) energy loss from each layer and surface to space
4) all of the above are influenced by absorption in intervening layers
5) all the above have wavelength and azimuthal angle dependent coefficients. Nothing can be calculated correctly by Stefan-Boltzmann law, because the wavelength dependence acts in a way that makes it necessary to use Planck’s law for emission as well as wavelength dependent emissivities/absorptivities for all layers. Only the surface may be considered to be close enough to gray without essentially distorting the results.
Trying to do that with one-way heat transfer is essentially bound to fail, as it cannot explain the radiative warming of gas layers by the sum of radiation from below and radiation from above. (Perhaps someone may propose an extremely complex description that succeeds in that, but why should anyone bother, when the standard explanation based on two-way radiative heat transfer explains everything correctly in a simpler, more intuitive, and at least in normal thinking physically more correct way.)
(I added the words “normal thinking” in the above, because I could imagine that a formally correct alternative could be presented, but that would give the same final results.)
@SoD
I have already answered DeWitt and made my apologies for the statement that, in the case of different emission coefficients, the Christiansen law would indicate that the heat transport by radiation is a one-way traffic. The heat transport is a one-way traffic, but that does not follow from the Christiansen law.
I made that statement in my reference [3] at the end of the paper. I suppose that DeWitt agrees with the other conclusions of my reference [3].
As concerns your remarks on my last paper: I repeat equations which were derived in earlier papers, with a finite difference scheme in [1] of December 2012 and with a FEM technique in [2] of April 2013.
My point is that heat transport is proportional to (theta(i) - theta(j)), and in a wavelength dependent analysis it is proportional to (B(T(i)) - B(T(j))) for each wave-number interval.
In reference [1], I show that with the two-way heat transport with Prevost fluxes, a stack of 100 slabs absorbs in the lowest slab 100 times the amount given by the one-way formulation.
In your "code" I see that you also absorb what you call the back-radiation.
Let me summarize my opinions so that we can sharpen the discussion.
1)
The Schwarzschild procedure looks like the stack model. I use in that stack model a viewfactorF. I have found that the Schwarzschild procedure, as I found it in your "code" and "equations", can also be written by means of a viewfactorS matrix. In the paper I give a 3D picture where I compare the two matrices. Needless to repeat that I think that the viewfactorF is more correct. It might be that Schwarzschild used his method for cases where the difference did not matter.
2)
Back-radiation calculated by the Schwarzschild procedure (multiplying by (1-f) in the downward direction) looks like the "back-radiation expression" of the stack model based on a one-way heat transport by radiation. In the stack model it is not a flow of heat; it is the sum of the terms with a negative sign in the LW surface flux. In the paper those terms are always indicated by highlighting.
3)
The big objection to SoD's code and equations, and this has nothing to do with the smaller mistake discussed under 1), is the huge absorption which SoD and other IPCC authors and bloggers conclude from what they call the fundamental physics. The stack model, which I sometimes call the chicken-wire model, does not show these absorptions for a temperature distribution corresponding to the environmental lapse rate. It has been validated in [1] by comparison with the K&T type of diagrams, however without the back-radiation.
JWR,
I don’t have the patience of SoD to wade through all your math to find all your mistakes. Suffice it to say, that if you end up with energy flows in and out in individual layers not balancing using the two way approach, you’re making mistakes somewhere.
In the atmosphere, it’s well known that radiation is not the only method of energy transfer. In fact only about 40%, on average, of the net energy flow from the Earth’s surface is by radiation. The rest is by latent and sensible heat transfer, convection for short. In the TFK09 energy balance, 97 W/m² of the 161 W/m² of the global annual average incoming solar radiation absorbed by the surface is lost by convection and only 63 W/m² is lost by radiation. This leaves a radiative imbalance at the surface of ~1 W/m², so energy must be accumulating. We see that, in fact, it is, although at a lesser rate, from the measured increase in ocean heat content over the years.
If you try to create a purely radiative system with an opaque surface and a partially transparent atmosphere, you will get a temperature discontinuity at the surface. The surface temperature will be much warmer than the atmosphere immediately above it. And the temperature in the atmosphere will decline at a rate higher than the real atmosphere or even the dry adiabatic lapse rate.
Because most of the convective energy transfer in the atmosphere is by latent heat transfer, and the scale height of water vapor in the atmosphere is only 2 km compared to 8 km for dry air, convective flux declines rapidly with altitude and is effectively zero at the tropopause.
@DeWitt
You are saying exactly what I am saying!
From the 168 W/m^2, 109 W/m^2 is by convection, 52 through the window and 6 by LW radiation into the atmosphere.
You seem to agree with Bryan and myself!
I do suggest reading my papers from December 2012 (ref [1] in my last paper), from April 2013 (ref [3] in my last paper), and my last paper of December 2013.
I am not claiming to give the exact figures; I only claim to find figures which are in the right ball park, as a friend of Bryan described it.
Since you now argue that the heat evacuation from the surface to higher layers is by convection, which I have been telling you for more than a year, you have difficulties, like myself, with the SoD approach based on what is claimed to be "fundamental Schwarzschild physics".
I wish you all the best for 2014.
SoD,
“The intensity at the top of atmosphere equals..
The surface radiation attenuated by the transmittance of the atmosphere, plus..
The sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere”
How does this relate to the net transmittance (or net transparency) of the whole mass of the atmosphere from the surface all the way through to the TOA? This seems to be the critical point that is eluding me for some reason.
I understand that each layer emitting up ultimately has a fraction of its power absorbed and transmitted to space. I just don’t understand how the effects of each layer are summed together to somehow quantify the net transparency of the power radiated from the surface (i.e. the fraction of surface emitted power that is transmitted to space and the fraction that is absorbed by the atmosphere).
RW,
The relationship between the intensity at TOA that results after all absorption and emission within the atmosphere and the transmittance of atmosphere from surface to TOA is complex.
It’s simple to calculate what happens to radiation from the surface in each thin layer of the atmosphere. It’s only a little more difficult to calculate what that thin layer does to the total intensity of upward radiation, given the intensity of the radiation that enters that layer from below and the temperature of that layer. In both calculations the radiation from below must be known in detail: not only the total intensity, but separately the intensity for each wavelength and each azimuthal angle of the direction of the radiation.
These two differential effects of a thin layer are also rather simply related for every wavelength and azimuthal angle separately. Summing over all wavelengths and azimuthal angles is enough to make the relationship complex.
The relationship involves in addition the temperature of the layer, and this temperature depends on altitude.
To conclude: There is a rather simple relationship between quantities that are inside a multiple integral, but that relationship depends strongly on the variables to be integrated over. When the integration has been done, the results are not related in a simple manner. Increasing the CO2 concentration affects both in the same direction, but with a very different strength that can be determined only by a detailed calculation of the type described in the thread Visualizing Atmospheric Radiation – Part Five – The Code and the related posts of that series.
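The layer-by-layer summation quoted at the start of this exchange can be sketched at a single wavelength and a single angle. This is only a gray toy model with illustrative per-layer emissivities, not a real atmosphere; per Kirchhoff's law each layer absorbs a fraction eps of what enters from below and adds its own emission eps*sigma*T^4:

```python
sigma = 5.670374419e-8  # W/m^2/K^4

def toa_flux(T_surface, T_layers, eps_layers):
    """Upward flux at TOA: attenuated surface emission plus each layer's
    emission attenuated by the layers above it (listed bottom to top)."""
    up = sigma * T_surface**4          # black-surface emission
    for T, eps in zip(T_layers, eps_layers):
        up = (1 - eps) * up + eps * sigma * T**4
    return up

# Three gray layers, each absorbing 30 % of what enters from below:
flux = toa_flux(288.0, [270.0, 250.0, 230.0], [0.3, 0.3, 0.3])
```

Unrolling the loop reproduces the quoted statement exactly: the surface term is multiplied by the product of all the (1-eps) transmittances, and each layer's emission is multiplied by the transmittances of only the layers above it.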
RW,
It is great to see you wrestling with conceptual problems surrounding radiative transfer.
When I first tried to understand radiative transfer I had many conceptual problems as well.
Many of these articles are written with the aim of giving insight into a complex problem where multiple variables interact in different (and non-linear) ways.
Being mathematically correct is essential. But whether they succeed in providing any enlightenment is more the concern.
This is always a challenge.
Let me suggest a few ways of thinking about the problem.
1. Suppose that net transmittance is not really that important. Suppose if you never knew the value of net transmittance it wouldn’t matter.
Suppose that, given:
a) the surface temperature and therefore the surface upward flux
b) the temperature profile in the atmosphere
c) the concentration profile of various GHGs
– you could determine the TOA flux. And suppose that you could graph out the change in TOA flux as the other values changed, so you could get a bit of a feel for how TOA changed with more water vapor, colder atmospheres, more CO2, a higher surface temperature..
Would that be useful even if you never knew this interesting value of net transmittance?
2.
There is no simple relationship between the radiative transfer and the net transmittance. This is because there is a very strong dependence of TOA flux (the dependent variable) on the atmospheric temperature. So for the same surface temperature and the same GHG concentrations you can get very different TOA flux as the atmospheric temperature profile changes.
So with an isothermal atmosphere, the TOA flux stays the same regardless of GHG concentrations, ie regardless of transmissivity. Because emission and absorption are equal at each point in the atmosphere.
For very high transmissivities, the atmospheric temperature profile won’t have much effect, because nearly all of the surface radiation escapes to the TOA without being absorbed, so it doesn’t much matter how much the atmosphere emits (update to clarify – it doesn’t matter how the temperature changes because the emission change will be small in relation to the total TOA flux – the emissivity of the atmosphere is low in this case).
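The isothermal claim in point 2 can be checked numerically with a gray two-layer toy model (layer emissivities are arbitrary illustrative values):

```python
sigma = 5.670374419e-8  # W/m^2/K^4

def toa(T_s, T_atm, eps_list):
    """TOA flux for a black surface under gray layers all at temperature T_atm."""
    up = sigma * T_s**4
    for eps in eps_list:
        up = (1 - eps) * up + eps * sigma * T_atm**4
    return up

T = 288.0
# Isothermal atmosphere: TOA flux is independent of the layer emissivities,
# because each layer emits exactly as much as it absorbs.
for eps in (0.0, 0.2, 0.5, 0.9):
    assert abs(toa(T, T, [eps, eps]) - sigma * T**4) < 1e-9

# With a colder atmosphere the dependence on emissivity reappears:
assert toa(T, 250.0, [0.5, 0.5]) < sigma * T**4
```

Within each layer the update (1-eps)*up + eps*sigma*T^4 leaves `up` unchanged whenever the layer temperature equals the temperature setting `up`, which is the "emission and absorption are equal at each point" statement in miniature.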
3. The net transmittance is only one part of the formula – the first part (the blue bit in the equation). The other bits can dominate.
The local heating or cooling of each part of the atmosphere is determined by absorption of solar radiation + convective energy received from below + the absorption of longwave radiation received from below – radiative cooling to space.
Each part of the atmosphere may not be in balance. If the net effect locally is negative then that part of the atmosphere cools down. And the converse.
When you see heating curves (really cooling curves), as shown in Part Eleven – Heating Rates you start to appreciate that the atmosphere has to be in a state of radiative cooling because it receives convective energy from the surface, and you start to appreciate that locally each part of the atmosphere has quite unique cooling attributes dependent on the amount of water vapor and the temperature of the atmosphere.
Hope some of this helps. It might not.
I found two things to be very helpful in gaining insights –
a) reading more than one textbook, because each explanation approaches the topic differently and multiple explanations can give the conceptual insight that is missing.
b) playing around with some simple models or simple graphs – what happens if I do this – what is the result? Then follow the cause and effect.
Just my $2 worth.
Playing around could be started by a model that has
– two or three layers in the atmosphere
– two or three wavelengths or bands of IR with very different absorptivities/emissivities. By very different I mean that the transmissivity of one band in a single layer is perhaps 10 % while that of another is 90 %.
Playing would then mean varying those numbers and varying temperatures of the layers compared to the surface.
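One hedged sketch of the kind of playing suggested above: three layers and one band at a time, with per-layer transmissivities of 90 % in one band and 10 % in the other. All temperatures and splits are illustrative (each band is assumed to carry half the surface emission):

```python
sigma = 5.670374419e-8  # W/m^2/K^4

def toa_band(T_s, T_layers, trans):
    """Upward TOA flux in one band with per-layer transmissivity `trans`."""
    eps = 1.0 - trans                  # Kirchhoff: emissivity = absorptivity
    up = 0.5 * sigma * T_s**4          # half the surface flux assigned to this band
    for T in T_layers:
        up = trans * up + eps * 0.5 * sigma * T**4
    return up

T_layers = [270.0, 250.0, 230.0]       # bottom to top
weak = toa_band(288.0, T_layers, trans=0.9)    # nearly transparent band
strong = toa_band(288.0, T_layers, trans=0.1)  # strongly absorbing band

# The strongly absorbing band radiates to space mostly from the cold upper
# layers, so it carries less flux to the TOA than the weakly absorbing band:
assert strong < weak
```

Varying `T_layers` and `trans` and watching the two band fluxes respond is exactly the cause-and-effect exploration SoD recommends.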
[…] 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this […]
[…] The equations are crystal clear and no one over the age of 10 could possibly be confused. I show the equations for radiative transfer (and their derivation) in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations: […]
The discussion in this pdf provides an unusually clear explanation of the difference between thermodynamic temperature and Planck temperature and then explains why the Schwarzschild equation (and Kirchhoff’s Law) depends on the existence of LTE.
Click to access LTE.pdf
Interesting read, but this gets back to a previous discussion about variants of the manifestation of LTE, in particular that of a radiating gas such as the Earth’s atmosphere. No doubt for Kirchhoff’s law to be satisfied LTE is required, i.e. for the radiant energy flux absorbed to equal the radiant flux emitted (or for the emissivity to equal the absorptivity). And this is a fundamental tenet of the established view on atmospheric radiation.
However, at least one wiki source claims the LTE manifestation (for a gas) does not necessarily require the absorbed EM radiation be converted, i.e. transferred, into the linear kinetic energy of the gas molecules in motion, and subsequently back into emitted EM radiation, for LTE to still exist. This was the subtle, nuanced point GW was making sometime back that everyone dismissed.
As a precursor, the way LTE is being defined in this field and in that document you cite is equal distribution of all storage modes by collisions, including absorbed EM radiation in this case. Generally, this is the assumed physical manifestation of LTE, and its physics applies the same way as it would in the liquid or solid. Right? Or at least there is no implied differentiation.
The smoking gun that demonstrates this manifestation of LTE is not (or is at least arguably not) occurring in the atmosphere is the following hypothetical experiment:
Let’s say we have a device that can emit a stream of IR photons of only one wavelength, and we point the device toward a container with liquid water in it (in a state of thermal equilibrium) so the stream of photons is absorbed by the liquid water, causing an energy imbalance. That is, the water is receiving more (net) energy flux than it’s radiating away, causing the water to warm and radiate more. Is the additional radiation emitted by the water (from the warming of the water) all re-radiated in the same wavelength as the single wavelength emitting device? Or is it re-radiated as a broad band spectrum based on the increased temperature of the water according to Planck’s law?
If your answer is no to the former and yes to the latter, do you then agree that what is occurring is a process of narrow band absorption being converted into broad band (Planck) emission? Surely, the answer is no to the former and yes to the latter.
It’s my understanding talking to you (and others here and elsewhere) that it is thought that the same exact fundamental physical processes are at work in the gases of the atmosphere as they are (or would be) in liquid or solid so far as absorbed radiant energy being thermalized by collisions and manifesting LTE. For a liquid or a solid, there is universally only one line of processes by which all absorbed forms of EM energy can be converted back into the EM form, and that is via broad band Planck emission (based on the temperature of the liquid or solid). That is, in a liquid or solid, the absorbed EM energy is universally converted entirely, i.e. thermalized, into the mechanical energy of molecules in motion and the only way back to EM form is via broad band (Planck) emission according to Planck’s law. There is no scenario, for a liquid or solid, where narrow band EM absorbed can be converted back into same narrow band EM emitted (yet this is what occurs in the atmosphere, except if absorbed by the water or ice in clouds). In short, if the absorbed IR in the atmosphere is being thermalized by collisions as claimed, GW is looking for there to be a conversion of narrow band absorption into broad band Planck emission, as it should be happening if the energy is being shared and transferred by collisions, but it’s not happening (and not matching what should be the wavelength intensities observed at the TOA if the conversion were occurring).
Now, the mainstream view has apparently gotten around this by applying Kirchhoff’s law to each wavelength independently. This is the only way they can accurately predict the correct spectrum. Now of course, the emitted spectrum is itself broadband in that it consists of multiple wavelength intensities, but those emitted intensities per wavelength are specifically proportional to the absorbed intensities per the same wavelength, right?
Now, whether this has any validity or not — I really don’t know, but this seemed to be the point and argument GW was making — if I understood it correctly (which I may not have). If it were true, nothing would change so far as how the IR intensity changes as it moves through the absorbing and emitting layers (predicted by the Schwarzschild eqn.), and it would get the same final result as the mainstream model would (and in fact, GW gets the same results). It would however mean that gases are not emitting according to their temperature, if by ‘temperature’ you specifically mean as solely the direct result of the speed of its molecules in motion; however, the measured temperature and emission rates would still be the same as they are observed to be.
So what’s the big deal? What’s so spectacularly wrong? Even GW himself said his proposed dominant mechanism of emission by GHGs would only be a slight adjustment of established theory and not a complete re-write. And the gas and subsequent emission is LTE.
The concept of ‘temperature’ in a thin radiating gas such as the atmosphere is a fuzzy one, both conceptually and in terms of emission rate as a result of measured ‘temperature’.
Some quotes from your source:
“Translational energy is what we sense as temperature.”
“This energy is what we interpret as “temperature” in daily life (more on this later). It is the kinetic
energy of the molecules that causes the pressure on our skin that we interpret as heat.”
In the case of a liquid or solid, then yes it is the kinetic energy of molecules in motion that we primarily sense as ‘temperature’, but in a radiating gas isn’t what we sense as ‘temperature’ always the combination of a flux of incident EM radiation and that of a locally present kinetic flux of molecules in motion? The same goes to a thermometer measuring ‘temperature’. If dipped in an opaque liquid, the measurement is essentially entirely that of the kinetic energy of molecules in motion in the liquid. If placed in the gases of the atmosphere, it’s always measuring a combination of flux of incident photons and locally present kinetic flux of molecules in motion. The thermometer cannot distinguish one from the other. Hence, the ambiguity of what constitutes so-called ‘temperature’ in the atmosphere.
I note there also seems to be some belief that the mechanism of emission in the atmosphere somehow affects how the IR intensity changes as it moves through the absorbing and emitting layers of the atmosphere. I know of no reason why that would be the case. The atmosphere as a whole mass passes more IR to the surface than out the TOA (a ratio of about 2 to 1) because the rate of emission decreases with height, which itself is independent of the mechanism initiating the emission. So even if what’s proposed were occurring, its effect on IR intensity at the surface, the TOA, or anywhere in between, would be zero.
RW,
You wrote: “(yet this is what occurs in the atmosphere, except if absorbed by the water or ice in clouds)”.
That is entirely wrong. Up to that point, you seem to have made only one subtle error that led you astray. Broadband thermal emission is not the Planck spectrum. It is the Planck spectrum multiplied by the emissivity.
Sucked in again.
RW,
No.
Kirchhoff’s Law only requires that emissivity is equal to absorptivity. There is no such requirement that emission equal absorption except at thermodynamic equilibrium. But the atmosphere is not at thermodynamic equilibrium. The atmosphere emits more radiation than it absorbs directly. Approximate energy balance in the atmosphere requires net convective heat transfer from the surface.
RW: You have written far too many words to compose a sensible reply. However, in one spot above you wrote:
“In the case of a liquid or solid, then yes it is the kinetic energy of molecules in motion that we primarily sense as ‘temperature’, but in a radiating gas isn’t what we sense as ‘temperature’ always the combination of a flux of incident EM radiation and that of a locally present kinetic flux of molecules in motion?”
When LTE doesn’t exist, then two different CONCEPTS for temperatures are used – NOT A COMBINATION:
1) Thermodynamic temperature – proportional to mean kinetic energy
2) Planck Temperature – proportional to the fraction of molecules in an excited state. The rate at which photons are emitted depends on the number of molecules that are in an excited state – the Planck temperature.
When LTE exists, collisions redistribute energy between kinetic (translation) and molecular excited states (electronic, vibrational, rotational) much faster than photons are emitted. This creates a Boltzmann distribution of energy between translational and molecular excited states. In that case, Planck and thermodynamic temperature are the same.
Planck derived his law by POSTULATING a Boltzmann distribution of states. So Planck’s Law assumes that thermodynamic and “Planck temperature” are the same. His Law was based on one CONCEPT of temperature. (I don’t know why they put Planck’s name on a concept for temperature where a Boltzmann’s distribution doesn’t exist.)
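The Boltzmann distribution Frank invokes can be made concrete with a rough back-of-envelope calculation: the fraction of CO2 molecules with the 667 cm^-1 bending mode excited at a near-surface temperature. This treats the mode as a non-degenerate two-level system, which is a simplification (the bending mode is actually doubly degenerate), so the number is only indicative:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light in cm/s, so E = h*c*nu for nu in cm^-1
k = 1.380649e-23     # Boltzmann constant, J/K

def excited_fraction(wavenumber_cm, T):
    """Boltzmann fraction in the upper state of a two-level system."""
    E = h * c * wavenumber_cm
    boltz = math.exp(-E / (k * T))
    return boltz / (1.0 + boltz)

f = excited_fraction(667.0, 288.0)  # a few percent of molecules excited at 288 K
```

Under LTE, collisions maintain this population regardless of the radiation field, which is why emission tracks the thermodynamic temperature alone.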
****** LTE exists everywhere in the atmosphere below 70 km. The atmosphere above 70 km is UNIMPORTANT to climate. OLR doesn’t change above 70 km. DLR from above 70 km is negligible. *****
So, you are both right and wrong. For climate (the purpose of this blog), the incident flux of radiation is unimportant because collisional excitation and relaxation are much faster than emission of photons. That means that thermodynamic and Planck temperature are equal – that the “flux of incident EM” hasn’t made the Planck temperature any higher than the thermodynamic temperature. When discussing radiating atmospheres at this blog, you would avoid confusing yourself AND OTHERS by sticking with the practical principle: Emission depends only on thermodynamic temperature. Many skeptics believe that the only way a molecule can emit a photon is to absorb a photon. They think photons are trapped, re-emitted, and conserved. They act as if the only concept that exists is Planck temperature (though they don’t use this term). This is insanity.
A blog on the thermosphere or interstellar gases would enjoy discussing the difference between Planck and thermodynamic temperature.
Frank,
“RW: You have written far too many words to compose a sensible reply.”
OK, maybe I was a little overly long winded. The rest of your post I did read a couple of times. I understand what you’re saying, but you’re really just more or less decreeing various things. There’s nothing inherently wrong with that per se, but how about a little healthy skepticism or open-mindedness? It wouldn’t kill you (or anyone else here, BTW).
I think the best way to go about this is with baby steps, to avoid confusion or misunderstanding.
First of all, even if what I’ve proposed (or really what GW proposed) is the actual physical reality, several important things DO NOT change, i.e. do not differ from the mainstream view or mainstream model of atmospheric radiation in any way at all.
They are:
1) The final result produces the exact same macroscopic view of measured temperature and emissions (even down to the individual wavelength).
2) The Schwarzschild eqn. and what it predicts, so far as how the IR intensity changes as it moves through the absorbing and emitting layers of the atmosphere, holds 100% exactly the same. There is zero difference.
Is this clear from the outset?
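For reference, the Schwarzschild eqn. mentioned in point 2 says dI/dτ = B − I along the path. A gray (wavelength-averaged) toy sketch – the layer temperatures and per-layer optical depths below are illustrative assumptions, since the real calculation is done per wavelength:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def schwarzschild_march(i_up, temps, dtau):
    """Euler march of dI/dtau = B - I upward through gray layers.

    temps: layer temperatures from bottom to top; dtau: optical depth per
    layer. Each layer absorbs a fraction dtau of the incoming intensity
    and adds its own emission B*dtau, as the Schwarzschild equation says.
    """
    i = i_up
    for t in temps:
        b = SIGMA * t ** 4   # gray-body stand-in for the integrated Planck term
        i += (b - i) * dtau
    return i

# Illustrative: 288 K surface under 20 layers cooling from 285 K to 230 K
surface = SIGMA * 288.0 ** 4
layers = [285.0 - (285.0 - 230.0) * k / 19 for k in range(20)]
olr = schwarzschild_march(surface, layers, 0.1)
```

The emerging intensity ends up between the surface emission and the emission of the coldest layers – which is exactly the behavior both sides of this exchange agree on.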
RW: I tried to pick one sentence that seemed to contain a valuable thought:
“but in a radiating gas isn’t what we sense as ‘temperature’ always the combination of a flux of incident EM radiation and that of a locally present kinetic flux of molecules in motion?”
In the macroscopic world, temperature seems to be a simple concept that can be measured with a thermometer. In a microscopic world of constantly colliding molecules, temperature is quite complicated. Check out the Wikipedia article on temperature. One concept of temperature begins with the kinetic theory of gases, in which pressure is produced by collisions with walls and macroscopic temperature is proportional to the mean kinetic (translational) energy of the molecules. Another concept begins with entropy, in which dS = dq/T. Why is the T in the kinetic theory of gases the same T used in entropy? Statistical mechanics adds the idea that entropy is related to molecular disorder. And Planck’s Law provides a way to convert radiation intensity at any wavelength into a third idea of temperature. This third approach calculates a different temperature from the first two concepts when LTE doesn’t exist.
If you and George want to re-invent this area of physics and communicate with others, it helps to understand what is already known.
For climate change, the answer to your question is NO: You only need to think of temperature as kinetic energy; not kinetic energy plus radiation.
Frank,
“If you and George want to re-invent this area of physics and communicate with others, it helps to understand what is already known.”
Sure. However, GW at least claims there’s not any radical new transformative knowledge from anything he claims, including this particular component on atmospheric radiation, LTE, etc., which he says would only constitute a very slight adjustment or refinement to already well established theory.
What this all boils down to, BTW, are subtle but significant nuances that relate to the GHE, the underlying physics driving it, the conceptualization of those physics, and how they relate to accurately estimating climate sensitivity to increased GHGs. Now it’s true that his methods derive very low sensitivity, as his best estimate for 2xCO2 is around 0.35C. Surely, I don’t think you or anyone would argue this is a physical impossibility. In reality, he actually derives the same feedback factor as Lindzen and Choi do, i.e. about 0.7C per 3.7 W/m^2 of forcing. It is brought down to 0.35C due to his claim of a factor of 2 error in the quantification of the initial forcing, so far as how it applies to surface warming. However, this claim has to do with the application of the RT-calculated 3.7 W/m^2 – not the calculation itself, which he has done himself from scratch and also gets about 3.6-3.7 W/m^2 (which is the net increase in optical thickness looking up through the whole atmosphere, converted into W/m^2, or the difference between the reduced IR intensity at the TOA and the increased IR intensity at the surface, calculated via the Schwarzschild eqn.).
Now, all this aside, let’s start with specifically how you (and the field) are defining LTE. Can you provide a clear and specific definition that we can work from?
LTE: A large group of molecules are in LTE when collisions transfer energy within the group faster than any other process (especially radiation) brings energy into or out of the group.
In LTE, the behavior of the group depends on their temperature (mean kinetic energy of the group), not their past history. The fact that some molecules in the group absorbed or emitted photons a few seconds (or milliseconds) ago is irrelevant if I know the group temperature. For molecules, the Boltzmann distribution determines how energy is partitioned within the group.
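The “faster than any other process” condition can be checked with kinetic-theory numbers. A rough sketch – the molecular diameter, mass, and the ~1 s radiative lifetime for the CO2 bending mode are assumed round figures, not values stated above:

```python
import math

K = 1.380649e-23  # Boltzmann constant, J/K

def collision_rate(pressure_pa, temp_k, diameter_m, mass_kg):
    """Kinetic-theory collision frequency for one molecule, s^-1:
    sqrt(2) * pi * d^2 * n * <v>."""
    n = pressure_pa / (K * temp_k)                        # number density, m^-3
    v_mean = math.sqrt(8 * K * temp_k / (math.pi * mass_kg))
    return math.sqrt(2) * math.pi * diameter_m**2 * n * v_mean

# Illustrative air-like values at sea level (diameter and mass are
# assumed round numbers for an N2-like molecule)
rate = collision_rate(101325.0, 288.0, 3.7e-10, 4.8e-26)

# The radiative lifetime of the CO2 bending mode is on the order of 1 s
# (an assumed round number here), so collisions outpace spontaneous
# emission by roughly a factor of a billion in the lower atmosphere.
lifetime_s = 1.0
ratio = rate * lifetime_s
```

That factor of ~10^9 is the quantitative content of the LTE definition above: an excited molecule is collided with a billion times before it would spontaneously emit.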
RW: As best I can tell, George is interpreting data in light of existing theories, not creating new theories. I don’t remember much except that his fit depends on what happens at low temperature in polar regions. Most of the planet is 270-300 K and the data is very noisy in that relevant range.
Lindzen’s low climate sensitivity is derived mostly from the large temperature change during large El Nino events. His conclusions depend on accepting a lagged relationship between reflected SWR and temperature. Current temperature correlates with less reflection of SWR today (positive feedback) and more reflection of SWR 3-4 months in the future (negative feedback). Both correlations are weak (R2 = 0.25) and unconvincing. Large El Ninos ARE associated with some immediate negative feedback in the LWR channel, and the relationship between TOA OLR and temperature is robust. In any case, El Nino warming (focused in equatorial regions) is a dubious model for global warming (with polar amplification).
In both cases, a graph that plots W/m2 vs K produces something that has the units of the reciprocal of climate sensitivity (W/m2/K), but there may be little connection between the slope and a climate sensitivity relevant to global warming – a rise in temperature everywhere, with more at higher latitudes.
Frank,
“LTE: A large group of molecules are in LTE when collisions transfer energy within the group faster than any other process (especially radiation) brings energy into or out of the group.
In LTE, the behavior of the group depends on their temperature (mean kinetic energy of the group), not their past history. The fact that some molecules in the group absorbed or emitted photons a few seconds (or milliseconds) ago is irrelevant if I know the group temperature. For molecules, the Boltzmann distribution determines how energy is partitioned within the group.”
OK, this is a good start. Now, will you further say and/or agree that this manifestation of LTE you’re describing is independent of the material, i.e. the matter, it’s applied to? That is, the physics occurring are the same whether the matter is in the form of a liquid, a solid, or a gas?
RW: This definition of LTE should work for all materials – but the converse is not true – all materials are not in LTE. Normally we can only get significant amounts of visible light from materials that are above 1000 K (Planck’s Law): the sun and tungsten filaments. However, we have learned to create devices that are not in LTE: fluorescent and LED lights and lasers, for example.
BTW, for anyone interested, we, i.e. myself and Frank (and many others), are now discussing a lot of this stuff over here, where GW has just recently posted a new guest essay:
https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/comment-page-1/
Frank,
“RW: This definition of LTE should work for all materials – but the converse is not true – all materials are not in LTE.”
OK good. This is a clear starting point from which to work. Now, I want to move in very, very baby steps here and really clarify the terms being used in a lot of detail before applying such terms to the discussion.
I would like to avoid a repeat of the scenario where it took 5000 posts in 50 different threads over 5 years to establish that absorption ‘A’ is IR optical thickness looking up, and transmittance ‘T’ is 1-‘A’.
Dewitt,
“No.
Kirchhoff’s Law only requires that emissivity is equal to absorptivity. There is no such requirement about absorption and emission except at thermodynamic equilibrium.”
Right, but in order for that to be true it requires the condition of LTE.
“But the atmosphere is not at thermodynamic equilibrium. The atmosphere emits more radiation than it absorbs directly. Approximate energy balance in the atmosphere requires net convective heat transfer from the surface.”
Yes, but it is said to be in *local* thermodynamic equilibrium, i.e. LTE. At least for the bulk of the troposphere.
RW wrote: “in order for that to be true it requires the condition of LTE.”
No, emission = absorption requires equilibrium throughout the system, not just LTE in part of the system.
Consider a blackbody in LTE at some high temperature. It is located somewhere out in interstellar space. Clearly, emission is greater than absorption.
Mike,
We’re talking about *local* thermodynamic equilibrium, and thus corresponding local absorption and emission. Sorry if I didn’t make this clear.
RW: “the end result is B, i.e. different than A. Basic logic dictates there must be some difference of physical processes occurring that accounts for this.”
No, what is needed is a difference in properties. A and B might be at different temperatures, therefore producing different emissions. Or they might have different emissivities.
I can see through glass, but not through wood. Same physics, different result.
“RW: “the end result is B, i.e. different than A. Basic logic dictates there must be some difference of physical processes occurring that accounts for this.”
No, what is needed is a difference in properties. A and B might be at different temperatures, therefore producing different emissions. Or they might have different emissivities.
I didn’t specify a specific temperature or emissivity, because it’s not required to illustrate the point, i.e. the difference. All matter in LTE that subsequently absorbs more photons than it’s currently emitting, does not have an infinite capacity to store this additional absorbed energy. It must eventually convert at least some of that absorbed EM energy back into EM energy via increased photonic emission. Of course some of the additionally absorbed EM energy can be lost via non-radiant means, but the point is the same.
I can see through glass, but not through wood. Same physics, different result.”
No, different physics, different result. The end result is not the same. Otherwise you would be able to see through both (and equally well).
Mike,
“RW,
You wrote: “(yet this is what occurs in the atmosphere, except if absorbed by the water or ice in clouds)”.
That is entirely wrong.”
How so? Then why the need to apply Kirchhoff’s law to each wavelength independently in order to accurately predict the emitted spectrum? You don’t have to do this with the water or ice in clouds, right?
“Up to that point, you seem to have made only one subtle error that led you astray. Broadband thermal emission is not the Planck spectrum. It is the Planck spectrum multiplied by the emissivity.”
Right, but the point is you cannot predict the correct spectrum based on Planck’s law per its temperature and emissivity like you can for a liquid or solid. You have to scale the emissivity per each individual wavelength’s absorptivity in order to predict the correct spectrum emitted. If the absorbed radiant energy were being converted into the kinetic energy of molecules in motion via collisions with other gas molecules (as claimed), as it is in a liquid or solid at LTE, you could (or should be able to) predict the spectrum from Planck’s law in the same way. But you can’t. GW sees this alone as overt falsification of the claim that the absorbed radiant energy is being transferred by collisions to the non-GHG molecules; however, he does not think this means the gas and the emission from the gas are therefore non-LTE (and in seeming violation of Kirchhoff’s law), as it appears the field of climate science at least does.
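The per-wavelength scaling under discussion – set emissivity equal to absorptivity at each wavelength, then multiply by the Planck radiance there – can be sketched as follows (the absorptivities and radiances below are made-up illustrative numbers, not data):

```python
def emitted_spectrum(absorptivity, planck_radiance):
    """Kirchhoff applied per wavelength: emissivity(lam) = absorptivity(lam),
    so emitted radiance = a(lam) * B(lam, T) at each wavelength."""
    return [a * b for a, b in zip(absorptivity, planck_radiance)]

# Illustrative: a gas that absorbs strongly only near one line.
a = [0.05, 0.90, 0.05]          # assumed absorptivities at three wavelengths
b = [7.0e6, 8.0e6, 7.5e6]       # assumed Planck radiances, W m^-3 sr^-1
spec = emitted_spectrum(a, b)   # emission is large only where absorption is
```

This is the sense in which a gas spectrum is “spiky” while a solid’s is smooth: the physics (ε = a at each wavelength, times the Planck curve) is the same; only a(λ) differs.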
As long as the linear kinetic energy of the GHGs and non-GHGs is equalized amongst each other by collisions, LTE still exists even if the absorbed photons have only very little of their energy transferred via collisions to non-GHG molecules. Now, even if this scenario is true, collisions can still trigger emissions to some degree – it’s just that this isn’t the dominant mechanism triggering emissions from GHG molecules. It’s important to note that GW claims there is no real or clear mechanism by which a GHG molecule that has absorbed a photon, whose energy is stored as internal vibrational energy, will transfer this energy upon collision with another GHG molecule or non-GHG molecule into linear kinetic energy, the way it does in a liquid or solid. So it’s not claimed that the collisions don’t occur – they do – only that there is largely no transfer of energy from internal vibration to linear kinetic. As long as the GHGs and non-GHGs have their linear kinetic energy equalized by collisions, Kirchhoff’s law can still be fully satisfied under such conditions, and thus so can the condition of LTE (or vice versa).
Here is the wiki excerpt on this:
“It is important to note that this local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas need not be in thermodynamic equilibrium with each other or with the massive particles of the gas in order for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.”
So again, even if a case can be made here for this subtle difference in the manifestation of LTE, what’s the big deal? This, even if true, will have no effect on how the IR intensity changes as it moves through the absorbing and emitting layers of the atmosphere. This is what everyone here seemed to think, as I recall.
It’s important to note here that the central tenet of LTE in the mainstream’s view on atmospheric radiation is fully agreed to be correct. It’s only being argued that the physical manifestation of the condition of LTE itself established by the mainstream view is incorrect, and that the mainstream has gotten around this incorrect physical manifestation of LTE by applying Kirchhoff’s law to each wavelength independently. This, it’s claimed, compensates for the error of not modeling the actual physics occurring and allows the mainstream model to get the correct answer.
But again, even if true, this is not going to radically transform our understanding of the universe, as it seemed many of you were implying. It would have no effect on how the intensity changes, though it *may* reveal some nuances about the dynamics.
Here is the wiki link, BTW:
https://en.wikipedia.org/wiki/Thermodynamic_equilibrium#Local_and_global_equilibrium
RW,
You wrote: “the point is you cannot predict the correct spectrum based on Planck’s law per its temperature and emissivity like you can for a liquid or solid.”
No, it is exactly like a liquid or a solid. Since you won’t listen, there is no point in saying more.
Mike,
“No, it is exactly like a liquid or a solid. Since you won’t listen, there is no point in saying more.”
Then why the need to apply Kirchhoff’s law to each wavelength independently in order to predict the correct emitted spectrum?
It’s because the emission rate at each wavelength is proportional to the absorption rate at each wavelength, right? Where LTE exists.
This is not the same process that occurs in a liquid or solid, where even if a single-wavelength IR flux is absorbed, affecting the liquid’s or solid’s temperature, i.e. increasing it, the incremental increase in emitted IR flux due to the warming (in LTE) is NOT proportional to the single-wavelength flux being absorbed. That is, the increased emitted IR flux will NOT be solely an increase at the absorbed wavelength.
You have scenario 1, where the process of absorbed IR energy is converted back into emitted IR flux and the end result is A, and you have scenario 2, where the process of absorbed IR energy is converted back into emitted IR flux and the end result is B, i.e. different than A. Basic logic dictates there must be some difference of physical processes occurring that accounts for this.
How do you not see that it’s being claimed that the same set of physical processes is at work in the gases of the atmosphere as in a liquid or solid – manifesting LTE via the transfer by collisions of absorbed IR energy into the kinetic energy of molecules in motion and then back into emitted IR energy – yet one gives a different end result than the other, i.e. the A and B end results from scenarios 1 and 2 I outlined above?
If the physics are the same, as they are claimed to be, this would not be the case in those scenarios.
RW,
You need to think about the meaning of “local” in “local thermodynamic equilibrium”. And the fact that the mean free path of photons can be very different from the mean free path for molecules.
RW wrote: “I would like to avoid a repeat of the scenario where it took 5000 posts in 50 different threads over 5 years to establish that absorption ‘A’ is IR optical thickness looking up, and transmittance ‘T’ is 1-‘A’.”
I don’t want to participate in any further discussion where absorption and transmittance are used in connection with an atmosphere that emits a significant amount of radiation. Nor would I want to even if you used the technically correct term, absorptance. So don’t write for me.
One reason: George White’s recent Figure 2 at WUWT with A = 0.75 doesn’t produce anything close to the observed value for DLR. He and you are applying the wrong physics (the S-B equation) and getting the wrong answer.
Frank,
I guess you missed my reply to you here:
https://wattsupwiththat.com/2017/01/05/physical-constraints-on-the-climate-sensitivity/comment-page-1/#comment-2393978
His Ps*A/2 is NOT a quantification of actual DLR at the surface. It would surely be wrong if it were.
The problem is you don’t seem to be making any effort to understand the methods he’s employing. Like so many you’re just more or less going into ‘shout down’ mode. He’s not modeling the actual thermodynamics and thermodynamic path manifesting the energy balance. It would all surely be spectacularly wrong if this is what he were doing.
It’s actually not all that hard to understand the foundation behind black-box derived equivalent modeling and system analysis, but you have to step back and say to yourself…”you know what, I don’t understand this thing…maybe this guy really has something here and I’m missing it”.
Yes, it’s counterintuitive in the sense that what you’re looking at in the derived model is not what is actually happening; instead, it’s only being claimed that the flow of energy in and out of the whole system would be the same if it were what was happening. There is absolutely nothing more than this being claimed. It doesn’t tell us why the balance is what it is (or has manifested to what it is), nor does it describe and quantify the complex, highly non-linear path the system takes from one equilibrium state to another.
Again, if it were claiming to do this, it would all be spectacularly wrong. Your instinct that it cannot possibly do this is 100% correct. It can’t — it’s not even close.
The question becomes: if it’s not doing this, then what is it doing? But you have to step back and acknowledge you don’t understand, and from that make an effort to. And again, maybe there’s an error somewhere, as certainly anyone can be wrong. But you’ve got to make some effort first.
And yes, you’re right. Absorptance is the right term.
Frank,
The methods of system analysis GW is employing are widely used in the private sector in highly critical applications where a high level of accuracy and precision is required. It makes no sense that they would be so widely used if the methods were not valid and didn’t consistently produce accurate and reliable results. In the private sector, they generally don’t employ the kind of system analysis that climate science is doing for an application like climate sensitivity, like GCMs or comparing TOA net flux changes to surface temperature changes, as there are just way too many heuristics and inherent inaccuracies or ‘go wrongs’ involved in such methods.
Of course, if what were actually being claimed was what everyone here thinks is being claimed with it, it would surely be spectacular nonsense. As no doubt that’s what people here have concluded.
But black-box system analysis and the subsequently derived equivalent model are only an abstraction:
https://en.wikipedia.org/wiki/Black_box
“The black box is an abstraction representing a class of concrete open system which can be viewed solely in terms of its stimuli inputs and output reactions:
The constitution and structure of the box are altogether irrelevant to the approach under consideration, which is purely external or phenomenological. In other words, only the behavior of the system will be accounted for.”
The ‘Ps*A/2’ is only an abstraction, i.e. the simplest construct or model that results in the same rates of joules gained and lost at the surface and the TOA, given its inputs and required outputs at its boundaries (the surface and the TOA) per the black box atmosphere.
The foundation behind black box derived equivalent modeling is there are an infinite number of equivalent states that can have the same average, or there are an infinite number of physical manifestations that can have the same average.
Whether you operate as though Ps*A/2 is occurring, or to whatever degree you can successfully model the steady-state atmosphere by approximating the actual thermodynamics, the final flow of energy in and out of the whole system is the same. You can even build the actual thermodynamic model out to more and more micro complexity, but it’s still bound to the same end point, i.e. the same rates of joules gained and lost at the surface and the TOA; otherwise the model is wrong.
Thus, Ps*A/2 is just as valid at quantifying the aggregate end result or aggregate dynamics of the complex thermodynamic path actually manifesting the steady-state energy balance. There is nothing missing from all the physics occurring, because the manifested boundary fluxes themselves are the net result of all of the physical effects, radiant and non-radiant, known and unknown, going into and out of the black box. This is why the model accurately quantifies the aggregate dynamics of the steady-state, even though it’s not modeling the actual behavior, i.e. modeling the actual thermodynamic path.
It’s only from the quantitative equivalence of Ps*A/2, given the required inputs and outputs from the black box, that the deduction is then being made that only about half the IR power absorbed by the atmosphere from the surface is acting to ultimately warm the surface within the highly complex and non-linear thermodynamic path actually manifesting the balance, whereas the other half is acting to ultimately cool the system and surface within that path. It doesn’t quantify the thermodynamic path itself or tell us why the surface balance is what it is, but rather only quantifies its effect within it, so far as its ultimate contribution to enhanced surface warming.
What is it that you think precludes these techniques from being applied to the climate system? I don’t see that there is anything.
RW
On May 1st, 2016 I said:
Finally, after many “no progress” comments from RW 8 months ago, I reach the end of patience.
For people interested in RW’s point of view, please read his 5,000+ comments.
George White has a view on atmospheric radiation that is completely unsupported by the physics of the last 60+ years. And George doesn’t realize it. Or can’t.
I don’t really care.
Anyhow, no more.
Interested parties can find out more about George White and RW and their insights elsewhere, or here, preserved for posterity.
I have a comment regarding the “diffusivity approximation”, which is one step toward transforming, for instance, the upward solution to Schwarzschild’s equation from spectral radiance in units of W/m^2 per wavenumber per steradian to an expression for OLR flux in W/m^2. (One can first integrate over wavenumber using, say, the “Planck Body Calculator” in SpectralCalc so as to remove the inverse wavenumber dependence. At that point one has W/m^2 per steradian.)
Then one way to proceed further is to use the approximation referred to in the diffusivity plot above at Science of Doom, i.e. replace vertical paths by paths at some angle relative to vertical. (The best angle may depend on the criteria used; some use 60 degrees, although the most common choice is 54 degrees.) Pierrehumbert uses 60 degrees (Principles of Planetary Climate, p. 191).
I suggest an interesting feature of the 60-degree choice. In all the equations for transmittance I have seen in textbooks, the symbol for concentration q is combined with cos(theta), where theta is the angle chosen relative to vertical. The two symbols always appear together in the form F(q/cos theta). Rather than operate on the optical thickness expression in the usual way for the 60-degree angle choice, make use of the fact that for theta = 60 degrees, q/cos 60 = 2q/cos 0. Then save time and spreadsheets: use the vertical path from the outset and merely double the concentration.
(So, for the present-day 400 ppm concentration of CO2, the 60-degree path is the vertical path for 800 ppm CO2, and for 800 ppm CO2 the 60-degree path is the vertical path for 1600 ppm CO2.) For water vapor, q is not independent of altitude; most water vapor is within a couple of km of ground level. But you can still double the U.S. Standard Atmosphere water vapor scale factor and pull the same trick, IMO.
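The equivalence behind this time-saving trick follows directly from the Beer–Lambert form of the transmittance: since cos 60° = 1/2, a slant path through absorber amount q sees exactly the same optical thickness as a vertical path through 2q. A quick check (k and q are arbitrary illustrative values):

```python
import math

def transmittance(k, q, theta_deg):
    """Beer-Lambert transmittance along a slant path: exp(-k * q / cos(theta))."""
    return math.exp(-k * q / math.cos(math.radians(theta_deg)))

k, q = 0.8, 1.3  # arbitrary absorption coefficient and absorber amount
slant_60 = transmittance(k, q, 60.0)            # 60-degree slant path, amount q
vertical_2q = transmittance(k, 2.0 * q, 0.0)    # vertical path, amount 2q
```

The two values agree to machine precision, which is why the doubled-concentration vertical run reproduces the 60-degree slant run.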
Comments?
I floated the above comment because I am completely self-taught in the subject of atmospheric physics (although I had a long research career in condensed matter/x-ray physics), and therefore this blog is about the only way to get a critique. No one I can talk to. No one in my university knows thing one about atmospheric physics.
Dealing with the solution to Schwarzschild’s equations (S.E.)… Is this o.k.? In the S of D discussion of equation 16 in Part Six, “The Equations”, his equation 16 is equivalent to this from Grant Petty’s textbook:
1. I^(Z) = I^(0) t*(0,Z) + integral from 0 to Z of [ B(z) dt*(z,Z) ]
Here one simply uses the equations from the top of Petty p. 211 and uses W^(z) = dt*(z,Z)/dz. Multiply by dz in the integral to get dt*(z,Z). [I am using Z as a point high in the atmosphere – the upper boundary point – and z as an altitude that starts at z = 0.] My equation 1 above is equivalent to equation 4.22, p. 47 in Houghton’s “The Physics of Atmospheres”, 2007. Then, because B(z) is a slowly varying function of z, you can – at least in most circumstances – bring B(z) inside the differential, so that the z = 0 to Z integral of d[ t*(z,Z) x B(z) ] is to be obtained.
[One wishes to stop the integration short of the upper limit Z to avoid including the upper boundary point. But that will not influence this discussion.] One then obtains a solution of:
3. I^(Z) = I^(0) t*(0,Z) + { B(Z) minus B(0) x t*(0,Z) }, using t*(Z,Z) = 1.
There is no integration to do, and one does not need to determine an absorption coefficient. You just read the contribution from the atmosphere (the second term to the right of the equals sign in 3 above) by looking at the EXCEL plot you generate. Then you need to find the band-integrated solution that gives flux for the OLR, not just the spectral intensity. You can do all this with SpectralCalc transmittance values plus the diffusivity factor correction (Armstrong, J. Quant. Spectrosc. Radiat. Transfer 8, 1968, p. 1577). For the CO2 bending-mode band between 500 and 850 wavenumbers, I get OLR vs. altitude of about 90% of ModtranChicago at lower altitudes, and maybe 95% of ModtranChicago for altitudes in the tropopause.
Is this O.K.?
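One way to sanity-check the “bring B(z) inside the differential” step is to compare the resulting two-term expression against a brute-force evaluation of the integral of B dt*. A sketch with an artificial slowly varying B and an exponential transmittance to the top Z (all values illustrative, not from any real atmosphere):

```python
import math

def two_term(b0, bz, t0z):
    """Closed form after bringing slowly varying B inside the differential:
    B(Z) * t*(Z,Z) - B(0) * t*(0,Z), with t*(Z,Z) = 1."""
    return bz - b0 * t0z

def numeric_integral(b, t, z_grid):
    """Trapezoidal evaluation of the integral of B(z) dt*(z,Z) over the grid."""
    total = 0.0
    for i in range(len(z_grid) - 1):
        total += 0.5 * (b(z_grid[i]) + b(z_grid[i + 1])) \
                 * (t(z_grid[i + 1]) - t(z_grid[i]))
    return total

Z, k = 20.0, 1.0
b = lambda z: 100.0 + 1.0 * z         # B drifts ~1% per unit optical depth
t = lambda z: math.exp(-k * (Z - z))  # t*(z, Z)
grid = [Z * i / 2000 for i in range(2001)]

two_term_val = two_term(b(0.0), b(Z), t(0.0))
integral_val = numeric_integral(b, t, grid)
```

For this slow drift the two agree to within about a percent; the discarded piece is the integral of t* dB, which grows with the slope of B, so the shortcut degrades where B varies quickly over an optical depth.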
Curious: You are operating at a more sophisticated level than I am. Trying to avoid brute-force numerical integration was confusing (and unrewarding) for me. I did try using MODTRAN over the first 0.01 km to see what was happening at strongly absorbed wavelengths, wondering if the output was valid for short distances.
The first km could be tricky: thermal inversions, water vapor continuum, interface between boundary layer and free atmosphere. Different people could be using different assumptions.
Hi Frank,
Thank you for your response. I had asked essentially the same thing by email of Grant Petty, and after some exchange of information he suggested I download some software he has developed and follow along with exercises in his graduate-level atmospheric physics class. I think that analyzing this is not a trivial matter, and perhaps neither you, nor Grant, nor Science of Doom might have time to analyze it either… I have in the past – say three years ago or so – asked questions on the Skeptical Science website developed by John Cook and learned a great deal there, but recently the feedback there has been that the kinds of questions I have been asking as I learned more are hard to deal with in a blog for educating the public.
This being said, if either you or Science of Doom has a suggestion I would appreciate it. There is also this: if you guys would be o.k. with it, I will email you the results I have obtained on this question from SpectralCalc, ModtranChicago, and Modtran6.
(Also, ModtranChicago does not allow you to query anything at distances closer to the earth’s surface than 1 km, and unless you use the rather tricky gas cell app in SpectralCalc the same limitation applies there. So did you mean Modtran6 for the first 0.01 km study you mention?)
I am using the U.S. Standard Atmosphere; therefore, the temperature structure used as input probably could not correspond to a thermal inversion. I zero out the water vapor in the input and use only CO2, no other GHG.
I am leaving today, Sunday a.m., with my wife to go watch spring training baseball games for a week (without computer access), and after that I could interact some more on SOD if you guys have any thoughts.
Regards,
Curious
Having trouble with the Schwarzschild equation in the 1st km. Help, anyone?
Working from Petty’s textbook, p. 211: The contribution of the atmosphere to the upward intensity is obtained as follows: 1. Find the derivative of the transmittance t*(z,Z) with respect to z, where Z is a high-altitude point and might be symbolized by the infinity sign. 2. Multiply by the radiance term B(z) and integrate that product from z = 0 to z = Z.
Then the total upward intensity is the integral described above plus I^(0)t*.
Consider CO2. I use SpectralCalc transmittance values for the bending-mode band, 500 to 850 wavenumbers. For transmittance values with z > 1 km, the plots of t*(z,Z) are fit well by a polynomial, and then I can just take the derivative of the polynomial. I integrate using the trapezoidal rule. The result seems reasonable.
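The band integration just described can be sketched end to end. The transmittance table below is made up for illustration (in practice it would be read from SpectralCalc for the 500–850 wavenumber band), and the radiances assume a gray sigma*T^4/pi with a 6.5 K/km lapse rate, isothermal above 11 km – rough standard-atmosphere stand-ins:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def b_of_z(z_km):
    """Illustrative gray radiance sigma*T^4/pi with a 6.5 K/km lapse rate,
    isothermal above 11 km (rough standard-atmosphere assumptions)."""
    t = 288.0 - 6.5 * z_km if z_km < 11.0 else 216.5
    return SIGMA * t ** 4 / math.pi

# Assumed sample transmittances t*(z, Z) at altitudes z (km); illustrative,
# not SpectralCalc output.
z  = [1.0, 3.0, 5.0, 8.0, 12.0, 16.0, 20.0]
tz = [0.30, 0.45, 0.57, 0.70, 0.82, 0.91, 0.97]

# Trapezoidal evaluation of the integral of B(z) dt*(z, Z): each layer
# contributes its mean B times its increment in transmittance.
atm_term = sum(0.5 * (b_of_z(z[i]) + b_of_z(z[i + 1])) * (tz[i + 1] - tz[i])
               for i in range(len(z) - 1))
```

The atmospheric term lands between (min B) x (total change in t*) and (max B) x (total change in t*), as it must, since the increments in t* act as weights on B(z).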
BUT, what the professionals get coming out of the 1 km altitude is an intensity almost the same as the intensity coming off the surface as determined by the Stefan–Boltzmann law. This agrees with ModtranChicago, the SpectralCalc atmospheric path radiance output (a new, cool feature of SpectralCalc), and also with Modtran6. Since the I^(0) term now has a t* that is not one (as is the case for z = 0), it is hard to understand this result. And I do not see it in the equations.
I think I recalled Frank saying something in a previous exchange to the effect that what Modtran gets coming out of the first km is hard to understand.
Any suggestions?
Curiousd
Although this doesn’t answer the question posted above, I am wondering about the following: when you use the approach of going from upward intensity to upward flux, the approach taken is to (1) multiply the intensity by pi and then (2) replace a vertical path by a straight-line angled path at, say, 54 degrees relative to vertical.
But within the first km, the thermal IR near the bending mode frequency of CO2 is strongly refracted, so the radiation does not travel at all in a straight line. So perhaps a related question is: does the “diffusivity approximation” work close to the earth’s surface? The figure on page 171 of Petty is for straight-line paths. Could it be that the approach of replacing a vertical path by an angled straight-line path breaks down in the first km?
Maybe the last two posts here are presenting related questions?
(A bit of off-topic levity: I hike a lot and there is a trail where graffiti person A wrote on a gate “Ask Questions.” This was eventually followed by newer graffiti from B: “Why”)
Curious
For the website “Modtran Infrared Light in the Atmosphere” (MILA) user output is displayed with an underlying Planck Distribution having a range between 100 wn and 1500 wn. I accidentally discovered that this wavenumber range is not merely a graphical convenience; the underlying program uses this rather limited range of wavenumbers. In addition, the computer program used assumes an Earth emissivity for thermal IR of 0.98.
I have developed corrections to MILA for several special cases. The suggestions could be incorporated as instructions for students; therefore rewriting the program would not be necessary to utilize the corrections. An authorship proposal was submitted to the Bulletin of the American Meteorological Society. The Editors made the following comment.
“Your corrections to Modtran sound promising for a wide audience. However, the Editors feel that BAMS is not the appropriate venue for vetting and disseminating this information. For proper exposure and discussion, they would be more productively disseminated directly within the Modtran user community.”
Following the Editor’s suggestion, I will post the corrections in Science of Doom one at a time, awaiting vetting comments before posting subsequent corrections. I will do similar posting on Skeptical Science.
Correction One:
Consider the following settings for Modtran Infrared Light in the Atmosphere (MILA): 1. All greenhouse gases set to zero. 2. U.S. Standard Atmosphere, no clouds. 3. Looking down from 70 km. 4. Temperature offset minus 33.2 K, giving a surface temperature of 288.2 K – 33.2 K = 255 K. The outgoing long wave radiation (OLR) given by MILA is then 225.075 W/m².
From the “Black Body Calculator” feature of SpectralCalc, at 255 K and emissivity 0.98, the band radiance in the 2 wn to 100 wn range is 0.553602 W/m²/sr. Multiply by π steradians for the Lambertian emission case to obtain 1.742 W/m².
Similarly the band from 1500 wn to 2200 wn contributes 6.3437 watts/meter squared.
By expanding the underlying Planck Distribution to 2 wn – 2200 wn, the total OLR becomes 225.075 + 1.742 + 6.3437 = 233 W/m².
If instead of an emissivity of 0.98 one uses an emissivity of unity, then by a similar procedure one obtains 1.777 + 231.144 + 6.473 = 239 W/m². Note that the MILA output of 225.075 W/m² for the 100 wn to 1500 wn band increases to 231.144 W/m² on changing the emissivity from 0.98 to 1.00.
The context of these results is the following: treatments of elementary environmental physics such as those written by Archer or Wolfson show that for an Earth with no greenhouse effect, but assuming a best estimate of cloud albedo, the Earth is in thermal equilibrium with the incoming solar radiation if the Earth surface temperature is 255 K. This temperature corresponds to an OLR of 239.7 W/m² if one applies the Stefan-Boltzmann law.
A sophisticated satellite analysis for the top of the atmosphere was obtained by Trenberth, Fasullo, and Kiehl (BAMS 90, 2009, 311–323) as 239 W/m². These authors assume the Earth surface emissivity is one.
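For anyone wanting to reproduce the band corrections above, here is a sketch using my own Planck integration in wavenumber (not SpectralCalc’s “Black Body Calculator” itself): integrate the Planck radiance over the band at 255 K, then multiply by π for a Lambertian surface and by the 0.98 emissivity.

```python
import numpy as np

# Physical constants (SI 2019 exact values).
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def band_flux(wn1_cm, wn2_cm, T, emissivity):
    """Flux (W/m^2) emitted by a gray Lambertian surface between two wavenumbers (cm^-1)."""
    nu = np.linspace(wn1_cm * 100.0, wn2_cm * 100.0, 20001)        # wavenumbers in m^-1
    B = 2.0 * h * c**2 * nu**3 / np.expm1(h * c * nu / (k * T))    # Planck radiance per m^-1
    radiance = np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(nu))        # trapezoidal integration
    return emissivity * np.pi * radiance                            # hemisphere -> factor pi

print(band_flux(2, 100, 255, 0.98))      # ~1.74 W/m^2, cf. the 1.742 quoted above
print(band_flux(1500, 2200, 255, 0.98))  # ~6.34 W/m^2, cf. the 6.3437 quoted above
```

It reproduces the 1.742 W/m² and 6.3437 W/m² corrections quoted above to within rounding.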
curiousdp,
There’s your first mistake. You can’t.
Take a look at the atmospheric spectrum at zero altitude looking up. If all greenhouse gases were set to zero, there would be no intensity. But, in fact, the intensity is 27.987 W/m² for the Tropical Atmosphere. If you want to see what’s changed and what hasn’t, look at the raw data. A quick glance shows that CO and all the CFCs are still there, as well as Aerosol 1, whatever that is.
Hi DeWitt,
Note I say that the observer is looking down, not looking up. Then with no gases in the atmosphere the earth will emit upward flux given just by the Stefan-Boltzmann law. Thus something at temperature T with nothing but space surrounding it emits power per unit surface area, at all wavelengths, given by P = emissivity × (Stefan-Boltzmann constant) × T⁴, with T in kelvin.
curiousdp,
The difference in the emission spectrum from looking up from the surface and looking down from 70km is greater. But the spectrum looking down from 70km is still not a pure Planck spectrum from a uniform temperature surface with an emissivity of 0.98. 7.52% of the radiation emitted from the surface is absorbed by the atmosphere. The atmosphere emits ~30W/m² upward and downward. That’s for the Tropical Atmosphere. The amount varies depending on the atmosphere you select.
In order to proceed further with the corrections to Modtran I need to include some background material. Here are my results comparing the SpectralCalc atmospheric path radiance output for CO2, major isotopologue only, with Modtran using an effective path angle of 54 degrees as described by Science of Doom above. The 500–850 wn band is used (CO2 bending mode), with transparent bands from 100 wn to 500 wn and 850 wn to 1500 wn added.
Alt (km)   SpectralCalc   Modtran
0          360.2          360.2
1          357.5          357.6
2          353.1          352.9
3          348.5          348.5
4          344.5          344.4
5          340.2          341.0
6          336.4          337.5
7          333.1          334.4
8          329.9          331.2
9          327.1          328.7
10         324.6          326.5
11         322.6          324.6
12         322.2          324.6
……
18         320.8          322.4
CO2 is the only GHG; for SpectralCalc, only the major isotopologue of CO2 is used.
For Modtran Infrared Light in the Atmosphere (MILA), leave all default greenhouse gas settings in place but choose the U.S. Standard Atmosphere. No clouds, of course. The uncorrected clear sky output flux at 70 km is 260.2 W/m².
Using the same SpectralCalc atmospheric path radiance application described in the previous two posts, set the water vapor scale to 1.7 instead of the default 1.0. This corresponds to the diffusivity approximation with an effective angle of 54 degrees to vertical. Adding the 2 wn to 100 wn band plus the 1500 wn to 2200 wn band, the corrected output assuming an emissivity of 0.98 is 266.7 W/m². If the emissivity is changed to 1.0 the corrected output is 272.1 W/m². This compares to the clear sky OLR value of 273.74 W/m² from Chen et al.
See Chen et al., “Comparisons of Clear-Sky Outgoing Far IR Flux Inferred from Satellite Observations…”, Table 3, Journal of Climate, Vol. 30, No. 9, May 2017.
Regarding the above comments by DeWitt Payne on correction one.
My “correction one” is simply to adjust the output of ModtranChicago so that it better calculates the OLR from an earth with albedo at the best measured value of about 0.3, but with the “Greenhouse Effect Turned Off”.
Basic courses in Environmental Science (Archer, Wolfson) routinely perform this calculation and obtain a surface temperature of 255 K. At this temperature, the Stefan-Boltzmann law can be used to show that the outgoing flux is 239.7 W/m² for an earth surface emissivity of one and 234.9 W/m² for an emissivity of 0.98. I have now looked at the underlying program of ModtranChicago more carefully and determined that it not only calculates using an underlying Planck Distribution limited to the 100 wn to 1500 wn range, but also assumes an earth surface emissivity of 0.98.
To correct for the missing wavenumbers between 2 wn and 100 wn, and between 1500 wn and 2200 wn, I use the black body calculator of SpectralCalc at a temperature of 255 K to compute the OLR from 2 wn to 100 wn assuming emissivity 0.98, do the same for the window between 1500 wn and 2200 wn, and then add the sum of these corrections to the present OLR obtained from Modtran assuming a U.S. Standard Atmosphere with a temperature offset of minus 33.2 K, which lowers the U.S. Standard temperature from 288.2 K to 255 K.
I clearly should use no GHG, since Archer and Wolfson in their basic environmental classes make the same assumption.
The MODTRAN output as it stands, under these settings, is 225.075 W/m².
The corrected output is 233 W/m².
What you “should” get for emissivity 0.98 and a 255 K surface temperature of the U.S. Standard Atmosphere (obtained by the –33.2 K offset), with no GHG, is 234.9 W/m².
My reference to the work by Kiehl and Trenberth was misleading. That result is the culmination of much effort to account for the total TOA OLR, clouds and all, and they now get 239 W/m² assuming an emissivity of one, although they state that clearly it should be less than one.
So I will not mention the Kiehl and Trenberth result when I communicate with David Archer about these issues; that extra piece of information just confuses things.
Why not just use the Stefan-Boltzmann equation? That’s the integral of the Planck equation over all wavelengths. For an emissivity of 0.98 and a surface temperature of 255K you get 234.95 W/m². Your missing energy is because there is still some emission at frequencies greater than 2200cm-1.
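DeWitt’s numbers are easy to check directly from the Stefan-Boltzmann law, P = ε·σ·T⁴; nothing is assumed here beyond the value of σ.

```python
# Stefan-Boltzmann check of the fluxes discussed above.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_flux(T, emissivity=1.0):
    """Total hemispheric flux from a gray surface at temperature T (kelvin)."""
    return emissivity * SIGMA * T**4

print(sb_flux(255, 0.98))  # ~234.96 W/m^2, DeWitt's 234.95 (he used sigma = 5.67e-8)
print(sb_flux(288.2))      # ~391.19 W/m^2, cf. the 391.164 quoted elsewhere in the thread
```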
Here is my correction number two.
S of D suggested to me that it would be interesting to work up a correction to ModtranChicago for the clear sky OLR and compare to the best results, and I will acknowledge this suggestion when I finally contact David Archer at ModtranChicago; I never heard about the clear sky OLR before taking part in this blog.
Clear sky OLR measurements from satellites are of considerable interest since the properties of the atmosphere are being studied without the complications of cloud cover. For “Modtran Infrared Light in the Atmosphere” (MILA), leave all default greenhouse gas settings in place but choose the U.S. Standard Atmosphere with no clouds. The uncorrected clear sky output flux observed by the virtual observer at 70 km is 260.2 W/m².
It should be kept in mind in what follows that unlike CO2, which maintains a constant concentration up to ~100 km, the water vapor content is concentrated close to the Earth’s surface. (This may be seen by clicking the “temperature” button underneath the plot of altitude versus temperature in MILA and comparing CO2 and water vapor on the drop-down menu.)
Using the SpectralCalc atmospheric path radiance application, I set the scale of the water vapor path to 1.7 instead of the default 1.0. This corresponds to the diffusivity approximation with an effective angle to vertical of 54 degrees. These outputs are associated with water vapor, since over the “correction” wavelength ranges the atmosphere is essentially transparent to CO2 but water vapor is a strong absorber. For a 0.98 emissivity, the corrected OLR is 266.7 W/m². This value compares to the clear sky OLR of 273.74 W/m² obtained by Chen, Huang, Loeb, and Wei using the AIRS spectrometer.
I am unclear about how these authors handle the emissivity issue. If I were to assume emissivity one instead of 0.98 the agreement between ModtranChicago and AIRS would improve.
{The factor of 1.7 referred to in the above paragraphs is analogous, for a calculation by the same means applied to an atmosphere with CO2 as the only GHG, to using an effective angle of 54 degrees with 400 ppm CO2. To use the diffusivity approximation in the SpectralCalc atmospheric path radiance app, I use a vertical path with 683 ppm of CO2 (400 ppm scaled by ~1.7) instead of a path at 54 degrees with a concentration of 400 ppm. (See the results in my post above of April 8, 2017, 10:26 P.M. That calculation limits the underlying Planck Distribution to the range 100 wn to 1500 wn, as does ModtranChicago/MILA. The good agreement with MILA shows that my method of using the SpectralCalc atmospheric path radiance app with a vertical path and a water vapor scale factor of 1.7 can be trusted, and can therefore be applied to “correction two” for the clear sky approximation.)}
Reference: Chen et al., “Comparisons of Clear-Sky Outgoing Far IR Flux Inferred from Satellite Observations…”, Table 3, Journal of Climate, Vol. 30, No. 9, May 2017.
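As a side note on the factor of 1.7: the diffusivity approximation replaces the exact angular average of transmittance over the upward hemisphere with a single effective slant path. The sketch below (my own illustration, not how SpectralCalc or MODTRAN implement it) compares the exact flux transmittance, 2·∫₀¹ μ·exp(−τ/μ) dμ, with exp(−Dτ) for D = 1/cos 54° ≈ 1.70.

```python
import numpy as np

def flux_transmittance(tau, n=200001):
    """Exact flux transmittance: average exp(-tau/mu) over the upward hemisphere."""
    mu = np.linspace(1e-9, 1.0, n)          # mu = cosine of zenith angle
    f = mu * np.exp(-tau / mu)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(mu))  # trapezoidal rule

D = 1.0 / np.cos(np.deg2rad(54.0))          # effective-angle diffusivity factor, ~1.70
for tau in (0.1, 0.5, 1.0):
    print(tau, flux_transmittance(tau), np.exp(-D * tau))
```

At small optical depth the two agree reasonably well; at larger τ the single-angle value drifts from the exact hemispheric average, which is consistent with the later comment in this thread that the diffusivity correction is harder to apply than commonly believed.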
Regarding the last comment by DeWitt Payne, who states:
“Your missing energy is because there is still some emission at frequencies greater than 2200 cm-1.”
Yes I agree. I will also include this suggestion for a correction to my correction.
I wish to be consistent, and clearly in my correction two above I cannot just use the Stefan Boltzmann equation. Therefore I also include the method of adding in the missing bands, even for correction one.
I now move on to my last correction, which deals with a potentially large error the user may encounter when applying the Stefan-Boltzmann equation to Modtran output; this error can be more than a few percent.
Correction number three:
Here is a trap I fell into using MILA output as the standard against which to compare my own efforts to compute the CO2-only, no-feedback climate sensitivity using SpectralCalc and Schwarzschild’s equation. (The atmospheric path radiance tool in SpectralCalc has only recently become available.) I wanted OLR as a function of altitude. As far as I have been able to find, MILA is the ONLY source for such data. Occasionally I had looked into the “Show Raw Model Output” button, but became quickly discouraged, feeling that this was leading me into source code details I would be unable to unravel. Instead I made the bad, but understandable, choice of computing my own emissivity by:
1. Obtaining the upward flux in MILA for a U.S. Standard Atmosphere as seen from zero altitude. This is 360.472 watts per square meter.
2. Doing exactly as Dewitt Payne suggests above, which is to use the Stefan Boltzmann law for 288.2 K and assuming emissivity unity. One thus obtains the value 391.164 watts per square meter.
3. Dividing 360.472 W/m² by 391.164 W/m² to obtain an emissivity of 0.92!
4. Attempting (for ~two years) to get something reasonable for my climate sensitivity from the MILA values at low altitude, particularly at 1 km, where I found no success at all. This is because I was using an impossibly small value of emissivity, which I now realize; but keep in mind I am completely self-taught on this and did not know about the constraints on the Earth surface emissivity two years ago.
MILA should put out the information that they use an emissivity of 0.98 right on the output screen. This is an important correction and can be made by a single, simple step.
I didn’t say use an emissivity of one with the S-B equation. I said use 0.98. Here’s a piece on integrating the Planck equation over a range of wavelengths: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19680008986.pdf
Using Table C1 and a temperature of 288.2K, at 2200cm-1 there’s still about 0.45% of the total energy at higher frequencies. At 100cm-1, 0.47% of the integrated emission is from frequencies less than 100 cm-1 and at 1500cm-1, 5.53% of the energy is at frequencies higher than 1500cm-1. Conveniently, this adds to 6%, just what you need to close the balance.
The other thing is that zero altitude isn’t zero. If you leave all the GHGs in, the average transmittance is 0.8919, not 1.000. That means some of the emission is coming from the atmosphere, not the surface.
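DeWitt’s band fractions can be checked with a direct Planck integration (my own calculation, not NASA’s Table C1): compute the share of 288.2 K blackbody emission that falls inside the 100–1500 cm⁻¹ window, and hence the share a window-limited calculation misses.

```python
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI 2019 constants
SIGMA = 5.670374419e-8                                 # Stefan-Boltzmann constant

def band_fraction(wn1_cm, wn2_cm, T):
    """Fraction of total blackbody flux emitted between wn1 and wn2 (cm^-1)."""
    nu = np.linspace(wn1_cm * 100.0, wn2_cm * 100.0, 50001)      # wavenumbers in m^-1
    B = 2.0 * h * c**2 * nu**3 / np.expm1(h * c * nu / (k * T))  # Planck radiance
    radiance = np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(nu))      # trapezoidal rule
    return np.pi * radiance / (SIGMA * T**4)                     # normalize by sigma*T^4

missing = 1.0 - band_fraction(100, 1500, 288.2)
print(missing)   # ~0.06: the roughly 6% DeWitt says is needed to close the balance
```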
To proceed further I need to say a bit about three packages: Modtran Chicago, Modtran6, and SpectralCalc.
                      ModtranChicago      SpectralCalc     Modtran6
Price                 Free                $                $$$$ or more
Altitude resolution   1 km                1 km             about 1 cm
What it gives you     OLR, but only that  Transmittance,   Transmittance,
                                          radiant flux     radiant flux,
                                                           scattering; any
                                                           distance or wave
                                                           number band, no limits
SpectralCalc gives a “one million point limit exceeded” error message if you pick altitudes too high or wave number ranges too large.
The altitude resolution of ModtranChicago is much better than 1km. You can enter decimal fractions of a kilometer. For example, if I enter 0.002km looking down, the average transmittance decreases from 0.8919 at 0km to 0.8524. That’s at the bottom of the raw data output. Looking up, the downward IR flux on the main page decreases from 258.171W/m² at 0 km to 258.139W/m² at 0.002km. There is no difference in transmittance and downward IR flux from 0km at a setting of 0.001km. I suspect that means that the actual height at zero is closer to 1m than 0m.
The program here messed up my crude table. SpectralCalc gives transmittance values and radiant flux values. Modtran6 gives these but also scattering. ModtranChicago gives OLR, but only that.
Hi Dewitt,
The atmospheric path radiance apps in both SpectralCalc and Modtran6 can give you OLR values. The usual assumption is that within the atmosphere itself, scattering can be neglected. This assumption can be debated, and Modtran6 probably has a feature for radiation scattered by particles, but I do not know how to use that (advanced) feature yet. If one neglects scattering in the atmosphere one has the “two stream approximation” to Schwarzschild’s equation. The upstream solution, which SpectralCalc and Modtran6 solve to give the atmospheric path radiance, is: the upward intensity I(Z) equals I^(0)t* plus an integral from the initial point on the earth’s surface to the chosen point Z higher in the atmosphere. I^(0) contains the earth surface emissivity, and t* is the transmittance, over the chosen band of wavenumbers, of the user’s chosen atmosphere along the path from zero to Z. So the only emissivity you NEED is the emissivity at the earth’s surface.
I thank Dewitt Payne for his useful feedback, helping me sharpen these supplements to MILA. For the same reason I thank Frank and Science of Doom; especially for the suggestion of Science of Doom to check into the clear sky OLR. I think the items related to the limited wavenumber band of MILA have now been vetted by both Skeptical Science and Science of Doom and I am on my way with this.
I have had an exciting time, nearly a month, collaborating with David Archer in updating his “Modtran Infrared Light in the Atmosphere.” I am grateful to David Archer for this opportunity. I thank the folks who contributed their feedback on this site.
We are finished updating the website for now. There is a difficult issue that remains to simmer, but do check out what has been done to date.
Some of the new results:
1. The underlying Planck distribution has been extended to encompass 2 wn to 2200 wn.
2. Now if one calls up the U.S. Standard Atmosphere, no GHG, with the temperature offset by minus 33.2 degrees to produce a surface at 255 K, whereas you used to obtain 225 W/m², you now obtain 233 W/m². If you use the Stefan-Boltzmann law and an emissivity of 0.98 you obtain 234 W/m².
3. The clear sky OLR obtained using the default GHGs used to be 260 W/m². It is now 267 W/m². I assume an observer distance of 70 km. If the emissivity were changed from 0.98 to 1, one would obtain an OLR of 272 W/m². The AIRS spectrometer obtains 274 W/m².
It is difficult for me to deduce what AIRS uses for emissivity; in actuality the Earth surface emissivity is a function of wave number.
4. A “Freon Scale button” has been added.
5. Now if you look downward from the Earth surface you obtain an upward flux for the U.S. Standard Atmosphere of 382.14 W/m². If you use the Stefan-Boltzmann law assuming emissivity one you get 391 W/m². Dividing, 382.14/391 = 0.97. This is a natural procedure a user would go through to determine the program’s assumed emissivity, if that user felt it unlikely that he/she would understand the contents of the “Show Raw Model Output” button. The procedure gives 0.97 instead of 0.98, probably because even the window between 2 wn and 2200 wn has a truncation error.
The procedure described in the previous paragraph, using the old version of “Modtran Infrared Light in the Atmosphere,” would yield an emissivity of 0.92, which is impossibly small.
6. The incident insolation for the Tropical setting lies between 300 W/m² and 320 W/m² (Petty, p. 4). In the clear sky (no clouds or rain) case, this should equal the upward thermal IR power, which is now 297 W/m², and would convert to 306 W/m² if the emissivity were changed to one.
There is one unresolved issue. What is the best way to create a reasonable altitude-versus-temperature plot in the chart to the right of the plot of intensity versus wavelength? Previously the shape of such plots, given a large temperature offset of the surface, was quite complex, and based on a model that is lost in the mists of time. We believe we have some idea of a means toward creating realistic plots for positive temperature offsets, but not negative offsets. Therefore, for reasons of consistency and simplicity, the assumption made for now is that both the surface and the atmosphere are given the same temperature offset. Changing the stratosphere temperature in this manner is more than questionable, but a suitable alternative is not obvious to us.
This issue is found in other packages. SpectralCalc simply goes to the opposite extreme, so that the Earth surface offset is decoupled from the temperature of the atmosphere, and if this procedure results in an abrupt temperature discontinuity, so be it.
An update:
The improvement of Modtran Infrared Light in the Atmosphere is complete for now and can be found by googling Modtran and clicking “Modtran Infrared Light in the Atmosphere”. This development is now written up and will be published as part of the symposium on education at the annual meeting of the American Meteorological Society, which is being held in Austin, Texas in January.
“Updates to Modtran Infrared Light in the Atmosphere”, Douglas M. Pease and David Archer. The text ends with the acknowledgement “D.P. is grateful for discussions with Steve Carson of Science of Doom regarding cloud free OLR studies”.
curiousdp,
Thanks for the acknowledgement. Especially as I know that I haven’t provided much feedback for your many questions, due to time pressures from non-blog activities.
[…] the derivation see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. If you don’t agree it is correct then explain […]
The write-up at the American Meteorological Society of the improvements in MODTRAN Infrared Light in the Atmosphere by me and David Archer is now at the URL
file:///C:/Users/Doug/Downloads/Pease_Extended_Abstract%20Best.pdf
SOD, Mike, DeWitt, Curiousdp and others: Since I need to write an article for Wikipedia, I’m trying to write an entry on the Schwarzschild equation, but finding there are still things I don’t understand as well as I want. Most of the confusion comes from the fact that the Planck function has steradians in its units and I usually think of flux in terms of W/m2. Take equation 11:
dI/ds = nσ(B – I)
Clearly there is something wrong if I and dI are in W/m2 and B is in W/sr/m3. Assuming I am correct in saying that the Schwarzschild equation is used to calculate changes along a line of sight from point A to point B, this appears(?) to be radiation moving perpendicular to a plane (W/m2). It seems like what we really need is
dI/ds = nσ[kB – I]
where k has the needed units, sr·m, and perhaps k = 1. If I imagine two infinite planes along a line of sight from A to B separated by an increment of distance ds, perhaps half of the flux towards the forward hemisphere (2π steradians) reaches the second plane, but I’m not sure how to account for the viewing angles. So k may equal 2π.
Now maybe the problem is that the intensity of radiation (I) entering the increment ds is a spectral radiance measured in W/sr/m3 (just like B) and not a spectral flux density (or spectral irradiance) measured in W/m2. Working with climate science makes one think everything is measured in W/m2 (or W/m3 before integrating over all wavelengths) moving perpendicular to a surface.
For non-native speakers, Wikipedia has a handy summary of terminology here. https://en.wikipedia.org/wiki/Radiant_intensity
Also when I look at Figures 2 and 4, they seem “backwards”. If I want to know the TOA flux through 1 m², say 100 km above the surface, then I need to integrate the flux from the whole surface through that 1 m². I don’t see why the mathematics for the return direction should be any different, or why DLR isn’t the sum of all rays from space to the single point in Figure 2 or 4.
Frank,
You need to integrate the Planck function over a hemisphere to get from watts per square meter per steradian to watts per square meter. See Lambert’s cosine law: https://en.wikipedia.org/wiki/Lambert%27s_cosine_law
That amounts to multiplying by pi.
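DeWitt’s point can be verified numerically: for a Lambertian (direction-independent) radiance L, the cosine-weighted integral over the hemisphere gives a flux of exactly π·L.

```python
import math

def hemispheric_flux(L, n=200000):
    """Flux from a Lambertian radiance L: integral of L*cos(theta) over the hemisphere."""
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta               # midpoint rule in theta
        total += math.cos(theta) * math.sin(theta) * dtheta
    return 2.0 * math.pi * L * total             # phi integral contributes 2*pi

print(hemispheric_flux(1.0))   # ~3.14159: flux = pi * radiance, as Lambert's cosine law says
```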
Thanks for the reply, DeWitt. SOD does an integration over a hemisphere in the section called “It’s Not Over Yet – Conversion from Intensity to Flux and the Diffusivity Approximation”.
dI = nσ(Bλ(T) – Iλ)ds
In the end, we have a quadruple integration: from all sources (two dimensions), along a straight line path of ds increments and over all wavelengths. I’m beginning to realize that Iλ must be expressed in units of W/sr/m3. I just have difficulty understanding how the intensity of the radiation entering the increment ds can be expressed in units of W/sr/m3.
This is especially true if I claim that the Sch eqn describes the intensity of radiation traveling a straight line from point A to point B. Integration then sums up the contribution of all lines originating at the surface and passing through point B at the edge of space.
Frank,
It’s possible I don’t understand your confusion. So maybe this answer doesn’t help at all. Let’s see..
If you take the simple case of a diffuse emitter from a surface and you want to find the flux perpendicular to the surface (W/m2) then you have to integrate radiance (W/sr/m2) x cosθ over the hemisphere.
That’s one situation. You end up with a multiple of π
I can email you a scan of the chapter of Incropera and DeWitt which works through the exciting details.
If you work out the radiation through the atmosphere in the Schwarzschild equation it is a different equation. But once again we have to integrate over the solid angle. It’s just a different equation that we are integrating.
SOD: Thanks for the reply. Trying to ask a sensible question has gradually helped me refine the question. What follows are my thoughts that appear to answer my first post.
dI/ds = nσ(B – I)
1) Is I measured in units of W/sr/m3? B traditionally is given in W/sr/m3, because emission goes out in all directions. Then integrating over a hemisphere gives the flux in W/m2. So the answer to question 1) had better be “yes” or I’ll have to return my academic degrees.
Now let’s imagine we are using a spectrophotometer with a light source that is 3000 K. The emission term is negligible.
dI/ds = -nσI
Integrate and get Beer’s Law. Now, I is (normally) measured in W/m2. When I began asking questions, I believed intensity (I) in BOTH equations was measured in W/m2. Now I believe this to be a mistake.
Let’s do a little dimensional analysis, first in W/m2: nσ is (molecules/volume) × (area), or molecules/distance. A beam of area A goes through nσ molecules/m no matter how wide the beam is. When the beam travels an increment ds, it passes through nσ·ds molecules. B and I tell us how many photons/s are emitted or absorbed per molecule. Integrating nσ·ds along a line from point P0 to P1 is simple. [Later I realize this only applies when emission is negligible.]
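The emission-free limit in the dimensional analysis above integrates to Beer’s law, I(s) = I0·exp(−nσs). A minimal sketch, with made-up numbers for n, σ, and the path:

```python
import math

# All numbers below are illustrative, not from any instrument or database.
n = 2.5e25        # number density, molecules per m^3 (made up)
sigma = 1.0e-25   # absorption cross-section, m^2 per molecule (made up)
I0 = 100.0        # incident beam intensity, W/m^2
s = 0.5           # path length, m

# Beer's law: the analytic solution of dI/ds = -n*sigma*I.
I = I0 * math.exp(-n * sigma * s)
print(I)          # the beam attenuated by exp(-n*sigma*s) = exp(-1.25)
```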
Now, how do I do the same analysis is spherical coordinates? I start at point P and go to P+ds. I can’t conceive of radiation measured in W/sr/m3 without picturing a sphere with a center somewhere. Do I envision the center of the sphere at point P (the beginning of the increment, ds) or a P0 (the start of my line to P1)? [I find out below, I’m really lost and neither answer is right.]
Let’s go back to W/m2. I’ve cheated. Radiation is actually traveling in all three dimensions, but I’ve ignored the dimensions perpendicular to the line from P0 to P1 and separately focused on the fluxes from P0 to P1 and from P1 to P0. This supposedly is the “two-stream” approach. In the atmosphere, the line P0 to P1 is in the +z direction and I substitute dz for ds.
How do I implement a two-stream approach in spherical coordinates? Do I make the line P0 to P1 have theta equal to zero and P1 to P0 have theta equal to π? Now ds becomes dr. [Nope, this is badly wrong.]
Eureka! The Schwarzschild equation isn’t used to calculate the change in intensity (in W/m2) traveling along a LINE from P0 to P1. That only works when emission is unimportant. However, for climate we want to calculate how the emission from a SURFACE changes as it passes through the atmosphere to a detector located Z km above the surface. (The math is easier when the detector is located so that the surface can be approximated by an infinite plane rather than a finite sphere.)
So my difficulty arises from how I formulated the problem. Absorption and transmission normally involve a beam of radiation (W/m2) traveling along a LINE. The Schwarzschild equation has nothing to do with line of sight; it has to do with flux from a SURFACE to a detector (or point).
If I had been properly educated (or remembered that education), I would immediately formulate real world emission problems in terms of a surface and a detector. Chemists have the luxury of working with mathematically simpler beams (lines) in a laboratory.
(I’m still working on the questions about the Wiki article on radiative transfer.)
Wikipedia already has an article on radiative transfer written at an uncomfortably high level for most interested in climate science. I presume the equation at the end of the section on LTE is just the integrated form of the Schwarzschild equation – something I see little point in writing explicitly since it must be solved by numerical methods in any case. I may (or may not) understand the merits of using tau instead of altitude (z), but it reduces clarity for those who don’t.
B(T) = j/α (or j/σ?) is also somewhat unfamiliar, since I’m used to thinking of one cross-section that applies to both emission and absorption. Written as shown below, dI/ds is the net result of an emission term using an emission coefficient and an absorption term using an absorption coefficient.
dI/ds = nj – nσI = nσB – nσI
As best I can tell, SOD and Eddington are both using the “two stream approximation”, but SOD is using the diffusivity approximation instead of the Eddington approximation. Is the Eddington approximation needed when scattering is important? If you wish to contact me directly, use frankwhobbs and add icloud.com.
My Wikipedia article about Schwarzschild’s equation is in process, but being mostly self-taught, I need to be sure to separate my thoughts (“original research”) from the scientific consensus Wikipedia is looking for. Since the day I first saw Schwarzschild’s equation, I’ve always thought it provided an excellent explanation for the GHE. The change, dI, in passing through a slab of thickness ds is:
dI = nσ[B(T) – I]ds
For an UPWARD flux, the term in brackets is likely to be negative because the incoming radiation, I, has been emitted from a lower altitude that is usually warmer, or from the surface (which is warmer and has higher emissivity). Since incoming and outgoing fluxes must be equal at the TOA and since dI is negative for most slabs of atmosphere, the surface will be warmer than without an atmosphere. This approach also directly links the existence of a GHE to decreasing temperature with altitude. It also explains why rising GHGs are cooling the stratosphere (rising temperature with altitude) and may be associated with limited or even no warming in central Antarctica (where temperature decreases with altitude during the summer and increases in the winter). I have references to the above narrow topics, but not to the more general ideas below:
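The slab-by-slab reasoning above can be sketched numerically. Below is a minimal grey-atmosphere toy of my own (not from any textbook): the absorption coefficient n_sigma and the 6.5 K/km lapse rate are assumed round numbers, and B(T) is the grey radiance sigma*T^4/pi rather than a spectral Planck function.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_radiance(T):
    """Grey radiance B(T) = sigma*T^4/pi in W/m^2/sr (grey-atmosphere toy)."""
    return SIGMA * T**4 / math.pi

def schwarzschild_step(I, T, n_sigma, ds):
    """One slab of the differential form: dI = n*sigma*[B(T) - I]*ds."""
    return I + n_sigma * (blackbody_radiance(T) - I) * ds

# Upward beam starting from a 288 K surface (emissivity 1), climbing through
# ten 1 km slabs that cool at an assumed 6.5 K/km lapse rate.
I = blackbody_radiance(288.0)
n_sigma = 0.2                        # assumed absorption per km, NOT a real value
for z in range(10):
    T = 288.0 - 6.5 * z              # slab temperature, K
    I = schwarzschild_step(I, T, n_sigma, 1.0)

print(I < blackbody_radiance(288.0))
```

Because B(T) in each higher, colder slab sits below the incoming I, every slab makes dI negative, which is exactly the sign argument in the comment above; the final print therefore reports True.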
1) Schwarzschild’s equation provides a rationale for the GHE.
2) One can quantify the GHE by the difference between surface OLR and TOA OLR.
3) The existence of a GHE depends on temperature decreasing with altitude. I found one casual statement in Pierrehumbert’s textbook, p. 202. For Antarctica I have:
Schmithüsen, Holger; Notholt, Justus; König-Langlo, Gert; Lemke, Peter; Jung, Thomas (16 December 2015). “How increasing CO2 leads to an increased negative greenhouse effect in Antarctica”. Geophysical Research Letters. 42 (23): 10,422–10,428. doi:10.1002/2015GL066749.
Sejas, Sergio A.; Taylor, Patrick C.; Cai, Ming (11 July 2018). “Unmasking the negative greenhouse effect over the Antarctic Plateau”. npj Climate and Atmospheric Science. 1 (1). doi:10.1038/s41612-018-0031-y.
Frank,
Have you seen this?
http://www.barrettbellamyclimate.com/page47.htm
It seems pretty comprehensive to me.
Hi,
I would like to look into this but I have been having problems logging into Science of Doom. I am presently working on a method of demonstrating a numerical solution to S.E. for a journal that publishes papers on geosciences education, as well as for the 2019 AMS meeting, where I will show how one may use David Archer’s free online program “Modtran Infrared Light in the Atmosphere” to demonstrate a numerical solution to S.E. using only a spreadsheet as a tool. The problem I am having with S.E. is that I believe the “diffusivity constant” correction is much more difficult to apply than commonly believed, and recent research by others indicates that my finding is indeed correct. If you use the standard D.F. of one over cos 54 degrees you will be lucky to get an answer that comes close. One must use an appropriate Gaussian quadrature – and those are difficult to find… perhaps they are even trade secrets.
DeWitt and Doug: I recommended BB as a source for those without access to any of the textbooks I cited.
I used my article as a vehicle for discussing (perhaps resolving for some people) some controversies in terms of SE. So I have a section on what SE predicts about saturated wavelengths – no radiative forcing. And a section on SE and the GHE – and the importance of the lapse rate. And a section about its importance and use in climate science. Two stream approximation yielding OLR and DLR. Even the existence of a diffusivity factor shortcut. Linked the online MODTRAN and Spectracalc. How SE fits in with Planck’s Law/SB eqn, Beer’s Law, and more fundamental equations of radiation transfer.
As I’m sure you know from my struggles immediately above with W/sr/m2 vs W/m2 and numerous other times I’ve ventured beyond my competence and learned something new thanks to the generosity of readers, I may not have gotten everything exactly right. In fact, I’m sure I haven’t. I will appreciate any corrections and criticism whenever the article goes live. Or you can write me now at frankwhobbs and icloud.com . When the article goes live, I can add links to other articles and perhaps enlighten people who would never hear the term Schwarzschild’s Equation anywhere else.
Doug: I presume you have seen SOD’s own calculations about the diffusivity factor above. Wikipedia has a listing of radiation transfer codes, some of which may not use a diffusivity factor or have the option of not using a diffusivity factor. I don’t know if any of these programs are easy to use.
https://en.wikipedia.org/wiki/Atmospheric_radiative_transfer_codes
Hi Frank,
Please see Zhao, J.Q. and Shi, G.Y. (2013) “An Accurate Approximation to the Diffusivity Factor.” Infrared Physics and Technology, 56, 21–24.
The optical depth of the chosen path is the parameter of choice. They first point out two limiting cases: (1) if the optical depth is quite large, the limiting D.F. is unity (no correction); (2) if the optical depth is small, the greatest possible D.F. is used, which is that corresponding to angle theta = 60 degrees.
Zhao and Shi derive a complicated equation for intermediate optical depths. The canonical choice, for instance in Petty and in Houghton, is the D.F. corresponding to angle ~54 degrees. But you can find situations where 54 degrees is not near optimum. Thus, consider the situation where the band width is between 2 wn and 2200 wn. Apply this to, say, 400 ppm of CO2. Since it is a pretty good approximation to consider all the CO2 absorption to be between 500 wn and 850 wn, with all the other wn corresponding to the black body case with no GHG, you will find that – even for a path from ground to 70 km – the transmittance is over 0.9, by Modtran6. Consider then vertical paths. The negative of Ln 1 is zero, which is the optical depth for zero absorption. The negative of Ln 0.9 is ~0.1 for CO2 between 500 wn and 850 wn. But say you used a window in the vicinity of the CO2 bending mode resonance. You could easily get a transmittance of, say, 0.01 or less. The negative of Ln 0.01 is then ~4.6 and the best D.F. will be more in the direction of 1, i.e. no D.F. correction at all.
Furthermore, alone amongst my textbooks, Pierrehumbert considers 60 degrees his default D.F. angle, not 54 degrees. See his page 191, where the relative merits of other choices of this angle are discussed. Apparently, Zhao and Shi have gone beyond Pierrehumbert’s text, as will happen with any textbook. Perhaps there will be a new text written someday that includes the Zhao–Shi formula.
As far as a Wikipedia article is concerned, I know nothing of how one adds something to Wikipedia. But the last time I looked, if one looks up the Wikipedia article on Schwarzschild himself, the only contribution mentioned was the “other” Schwarzschild equation, which is some kind of solution for a spherical black hole in general relativity. The fact that Schwarzschild’s equation of radiative transfer is crucial to our understanding of the Sun, and to all present day climate science, is apparently not considered to be of sufficient importance to include. A “low hanging fruit” contribution to Wikipedia might be to correct this glaring omission in the Wikipedia article on Schwarzschild himself.
Hi Frank.
I put in a reply to your comment, but it would not post. Maybe if I wait a day. If it is not up by then, perhaps we can figure out some other way to communicate with you.
Doug: You can reach me at frankwhobbs based at icloud.com.
I found your comment in the s&$m queue. No idea why WordPress decided to do that. If your comments disappear you can always email me directly – scienceofdoom you know what goes here gmail.com
Please see Zhao, J.Q. and Shi, G.Y. (2013) “An Accurate Approximation to the Diffusivity Factor.” Infrared Physics and Technology, 56, 21–24.
They point out two limiting cases: 1. If the optical depth is quite large, the limiting D.F. is one. 2. If the optical depth is small, the limiting D.F. corresponds to 60 degrees relative to vertical, so that 1/cos theta is 2. Consider 400 ppm CO2. If the wn band chosen is 2 wn through 2200 wn, and since the CO2 absorption is pretty much limited to the range 500 wn to 850 wn, even for a vertical path from ground to 70 km by Modtran6 the transmittance is over 0.9. So using 60 degrees is not bad. For a range near the CO2 bending mode region – around 670 wn – you can get a transmittance of less than 0.1 and the D.F. is ~ 1. Zhao and Shi derive a complicated formula which gives the D.F. for the general case.
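The two limiting cases can be illustrated with the transmittances quoted in this thread (0.9 and 0.01). One caution: the tau > 1 switch below is my own crude placeholder for selecting between the two limits; it is NOT the Zhao–Shi interpolation formula.

```python
import math

def optical_depth(transmittance):
    """Vertical optical depth from vertical-path transmittance: tau = -ln(t)."""
    return -math.log(transmittance)

def limiting_diffusivity_factor(tau):
    """The two limits quoted from Zhao and Shi (2013): D -> 1 for large
    optical depth, D -> 1/cos(60 deg) = 2 for small optical depth.
    The tau > 1 switch is a crude placeholder, not their formula."""
    return 1.0 if tau > 1.0 else 2.0

tau_weak = optical_depth(0.9)     # weakly absorbed band: tau ~ 0.1
tau_strong = optical_depth(0.01)  # near the CO2 bending mode: tau ~ 4.6
print(round(tau_weak, 3), limiting_diffusivity_factor(tau_weak))
print(round(tau_strong, 3), limiting_diffusivity_factor(tau_strong))
```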
BTW Pierrehumbert uses 60 degrees for his default theta, with a discussion of the merits of other angles. See p. 821. Petty/Houghton use 54 degrees.
With Zhao and Shi one can tailor the D.F. to a particular case.
I am presenting a paper on Schwarzschild’s Equation at the Annual Meeting of the AMS in Phoenix in January. The paper describes how to take transmittance output from the free online program Modtran Infrared Light in the Atmosphere and compute the Outgoing Longwave Intensity (OLI) via S.E. using an Excel spreadsheet. (I use OLI to distinguish from the Outgoing Longwave Radiative Flux commonly represented by OLR.) It would help the reader to have access to the spreadsheet but there are formatting restrictions which may make it difficult to include the spreadsheet.
Here is a copied and pasted spreadsheet for computing the atmospheric contribution to the OLI for 400 ppm CO2, U.S. Standard Atmosphere, looking down from 70 km.
Alt  trans400  deriv  temp  Radiance  Rad x deriv  trap rule  A.C.  trans800
0 0.8842 0.0091 288.15 124.425 1.13226 1.043135 0.8701
1 0.8933 0.0084 281.65 113.572 0.954 0.870094 7.4 0.8801
2 0.9017 0.0076 275.15 103.445 0.78618 0.722131 0.8891
3 0.9093 0.007 268.65 94.0112 0.65808 0.6018 0.8974
4 0.9163 0.0064 262.15 85.2377 0.54552 0.49633 0.9051
5 0.9227 0.0058 255.65 77.0931 0.44714 0.407868 0.9122
6 0.9285 0.0053 249.15 69.5466 0.3686 0.33759 0.9186
7 0.9338 0.0049 242.65 62.5682 0.30658 0.273968 0.9246
8 0.9387 0.0043 236.15 56.1286 0.24135 0.218565 0.93
9 0.943 0.0039 229.65 50.1993 0.19578 0.176206 0.9349
10 0.9469 0.0035 223.15 44.7528 0.15663 0.141936 0.9393
11 0.9504 0.0032 216.65 39.7619 0.12724 0.123262 0.9431
12 0.9536 0.003 216.65 39.7619 0.11929 0.119286 0.9466
13 0.9566 0.003 216.65 39.7619 0.11929 0.117298 0.9499
14 0.9596 0.0029 216.65 39.7619 0.11531 0.115309 0.9529
15 0.9625 0.0029 216.65 39.7619 0.11531 0.115309 0.9559
16 0.9654 0.0029 216.65 39.7619 0.11531 0.113321 0.9588
17 0.9683 0.0028 216.65 39.7619 0.11133 0.111333 0.9616
18 0.9711 0.0028 216.65 39.7619 0.11133 0.105369 0.9645
19 0.9739 0.0025 216.65 39.7619 0.0994 0.097417 0.9674
20 0.9764 0.0024 216.65 39.7619 0.09543 0.094291 0.9702
21 0.9788 0.0023 217.65 40.5011 0.09315 0.087827 0.9729
22 0.9811 0.002 218.65 41.2506 0.0825 0.07906 0.9755
23 0.9831 0.0018 219.65 42.0104 0.07562 0.074173 0.9779
24 0.9849 0.0017 220.65 42.7807 0.07273 0.069035 0.9802
25 0.9866 0.0015 221.65 43.5615 0.06534 0.061501 0.9823
26 0.9881 0.0013 222.65 44.353 0.05766 0.055923 0.9842
27 0.9894 0.0012 223.65 45.1552 0.05419 0.050077 0.9859
28 0.9906 0.001 224.65 45.9682 0.04597 0.04638 0.9874
29 0.9916 0.001 225.65 46.7922 0.04679 0.042447 0.9888
30 0.9926 0.0008 226.65 47.6272 0.0381 0.036017 0.99
31 0.9934 0.0007 227.65 48.4733 0.03393 0.034231 0.9911
32 0.9941 0.0007 228.65 49.3307 0.03453 0.030214 0.992
33 0.9948 0.0005 231.45 51.7918 0.0259 0.026534 0.9929
34 0.9953 0.0005 234.25 54.3439 0.02717 0.027833 0.9936
35 0.9958 0.0005 237.05 56.9891 0.02849 0.023207 0.9943
36 0.9963 0.0003 239.85 59.7298 0.01792 0.021473 0.9949
37 0.9966 0.0004 242.65 62.5682 0.02503 0.02234 0.9954
38 0.997 0.0003 245.45 65.5065 0.01965 0.020108 0.9959
39 0.9973 0.0003 248.25 68.5471 0.02056 0.017451 0.9963
40 0.9976 0.0002 251.05 71.6924 0.01434 0.014664 0.9967
41 0.9978 0.0002 253.85 74.9447 0.01499 0.015325 0.997
42 0.998 0.0002 256.65 78.3064 0.01566 0.016009 0.9973
43 0.9982 0.0002 259.45 81.78 0.01636 0.012446 0.9976
44 0.9984 0.0001 262.25 85.3678 0.00854 0.013176 0.9978
45 0.9985 0.0002 265.05 89.0725 0.01781 0.013552 0.998
46 0.9987 1E-04 267.85 92.8964 0.00929 0.009487 0.9982
47 0.9988 1E-04 270.65 96.8421 0.00968 0.009684 0.9984
48 0.9989 1E-04 270.65 96.8421 0.00968 0.009684 0.9985
49 0.999 1E-04 270.65 96.8421 0.00968 0.009684 0.9987
50 0.9991 1E-04 270.65 96.8421 0.00968 0.009684 0.9988
51 0.9992 1E-04 270.65 96.8421 0.00968 0.004842 0.9989
52 0.9993 0 267.85 92.8964 0 0.004454 0.999
53 0.9993 1E-04 265.05 89.0725 0.00891 0.004454 0.9991
54 0.9994 0 262.25 85.3678 0 0.004089 0.9992
55 0.9994 0.0001 259.45 81.78 0.00818 0.004089 0.9992
56 0.9995 0 256.65 78.3064 0 0.003747 0.9993
57 0.9995 1E-04 253.85 74.9447 0.00749 0.003747 0.9994
58 0.9996 0 251.05 71.6924 0 0 0.9994
59 0.9996 0 248.25 68.5471 0 0.003254 0.9995
60 0.9996 1E-04 245.05 65.0805 0.00651 0.003254 0.9995
61 0.9997 0 242.65 62.5682 0 0 0.9995
62 0.9997 0 239.85 59.7298 0 0 0.9996
63 0.9997 0 237.05 56.9891 0 0.002717 0.9996
64 0.9997 1E-04 234.25 54.3439 0.00543 0.002717 0.9996
65 0.9998 0 231.45 51.7918 0 0 0.9997
66 0.9998 0 228.65 49.3307 0 0 0.9997
67 0.9998 0 225.85 46.9583 0 0 0.9997
68 0.9998 0 223.05 44.6726 0 0 0.9997
69 0.9998 0 222.25 44.0351 0 0 0.9998
70 0.9998 0 217.45 40.3524 0 0 0.9998
Column 1 is the target altitude from 70 km, and column 2 contains the transmittance values for Tr(0,70), Tr(1,70), …, Tr(69,70). By boundary value constraints Tr(70,70) must be one. Column 3 is the derivative dTr/dAlt. Column 5 is the radiance, which can be well approximated in this particular case by (sigma T^4)/3.1416. Column 6 is the product of the radiance and the derivative. Column 7 is the trapezoidal rule applied to column 6. The atmospheric contribution to the OLI is A.C. in column 8; it is 7.4 W/m2 Sr and is obtained by summing the trap rule column.
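The first few rows of the table can be reproduced independently. This sketch applies the same forward differences and trapezoidal rule, but only to the 0–4 km rows, so the printed partial sum is a fraction of the full 7.4 W/m2 Sr atmospheric contribution.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# First five rows of the table: altitude (km), transmittance Tr(z,70), temp (K)
alt   = [0, 1, 2, 3, 4]
trans = [0.8842, 0.8933, 0.9017, 0.9093, 0.9163]
temp  = [288.15, 281.65, 275.15, 268.65, 262.15]

def radiance(T):
    """Column 5: grey radiance (sigma*T^4)/pi, W/m^2/sr."""
    return SIGMA * T**4 / math.pi

# Column 3: forward difference dTr/dAlt; column 6: radiance times derivative
deriv = [trans[i + 1] - trans[i] for i in range(len(trans) - 1)]
integrand = [radiance(temp[i]) * deriv[i] for i in range(len(deriv))]

# Column 7: trapezoidal rule; summing gives the atmospheric contribution
contribution = sum(0.5 * (integrand[i] + integrand[i + 1])
                   for i in range(len(integrand) - 1))
print(round(contribution, 3))  # partial sum over 0-4 km only
```

The intermediate values match the table: radiance(288.15) reproduces the 124.425 entry, and the three trapezoid terms reproduce 1.043, 0.870, and 0.722.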
The emissivity is 0.971 and the wave number band lies between 2 wn and 2200 wn. The resulting OLI (~114 W/m2 Sr) is in excess of that predicted by SpectralCalc (~109 W/m2 Sr) by a relative error of ~4%.
The OLI result must indeed exceed the correct value because the diffusivity factor correction has not yet been applied. The application of the D.F. approximation is complex and still under study.
Friends and Critics: My article on the Schwarzschild equation has appeared in Wikipedia. I would greatly appreciate any comments and corrections, either here or by email at frankwhobbs hosted at icloud.com.
I used this article as a vehicle to address many mistaken ideas about radiative transfer calculations that are common in the blogosphere. Since few recognize the equation by name, I tried to show how it fits in with other equations about radiation. And why this equation unavoidably (IMO) ensures that rising GHGs must cause some warming if nothing else changes. This has been a challenging “self-assignment” that I’ve turned in with apprehension. After New Year’s, I’ll probably add some links to my article from climate science articles and that will undoubtedly attract some scrutiny and critics. However, there is a need for better information about radiative transfer calculations in climate science.
https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer
Great job, Frank. Much to read and learn. Even if it gets a little over my head, I get an impression.
NK: The technical aspects of integrating the flux from all points on the surface of the Earth to an imaginary detector at the top of the atmosphere (say 70 km above the surface) are complicated. Is anything else horribly confusing?
Before seeing the Schwarzschild equation, I always wondered why doubling CO2 doubled emission and doubled absorption, but still resulted in a reduction in radiative cooling to space. I can look at the differential form of the equation and immediately recognize the answer to that question and many others. However, I don’t know what others “see” when looking at an equation. I struggle to interpret the integrated form of the equation, because the dimensionless quantity tau demands all my attention. Some sources call the integrated version “Schwarzschild’s equation”.
Consider the atmospheric contribution term, which is the second term, involving the derivative of the transmittance relative to altitude. Let z be the varying altitude and Z be a fixed highest altitude in the sum. Say one increments z by one km steps. Then one deals with T(z = 0, Z); T(z = 1, Z); … and at the end of the series T(z = Z−1, Z); T(z = Z, Z). But T(z = Z, Z) must be one since there is now no absorbing material between z and Z.
The boundary condition T(Z,Z) = one is important to the qualitative nature of the numerical integration. Consider two atmospheres, one containing X ppm CO2 as the only GHG and one containing Y ppm CO2 as the only GHG. Let Z = 10 km.
Model the transmittance by the following function which is not at all a good approximation to the real case, but which contains the essence of the argument.
1. T(z,Z)X = exp[−C(10 − z)], where T(z,Z)X is the transmittance from altitude z up to Z for X ppm. C is a constant I set to 0.5 for X ppm.
Then: T(z,Z)X = 0.0068 for z = 0 km; 0.0825 for z = 5 km; 0.607 for z = 9 km; 0.779 for z = 9.5 km; 0.951 for z = 9.9 km, approaching T(z,Z)X = 1 at 10 km.
Then say that for concentration Y the constant C is set to 0.52 instead of 0.5.
Then T(z,Z)Y = 0.0055 for z = 0 km; 0.074 for z = 5 km; 0.592 for z = 9 km;
0.771 for z = 9.5 km; 0.949 for z = 9.9 km, approaching T(z,Z)Y = 1 at 10 km.
The total change in the transmittance between z = 0 and z = 10 km is therefore greater for the more absorbing sample Y than for the less absorbing sample X. Therefore the contribution of the atmosphere to the upward emitted outgoing long wave intensity (which I call the OLI) is greater for sample Y than for X.
But on the other hand, the Iup = radiance up x emissivity x t* must be less for the sample Y than for sample X.
A. Therefore the radiation leaving the ground and heading upward is more absorbed and therefore less intense for Y than X.
B. But the upward contribution by the atmosphere itself is greater for Y than for X.
Effect A is greater than effect B; therefore an increase in CO2 concentration produces an overall decrease in the total OLI radiated upward.
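The X-versus-Y argument can be checked in a few lines of code. The sketch below uses the same toy transmittance exp[−C(10 − z)] with C = 0.5 and 0.52; the radiances (B_surface = 1.0, B_atm = 0.8, constant with height) are my own arbitrary units, chosen only so that the surface is brighter than the atmosphere.

```python
import math

def transmittance(z, C, Z=10.0):
    """Toy model T(z,Z) = exp(-C*(Z - z)); T(Z,Z) = 1 automatically."""
    return math.exp(-C * (Z - z))

B_surface = 1.0   # surface radiance, arbitrary units
B_atm = 0.8       # constant atmospheric radiance, arbitrary (< B_surface)

totals = {}
for label, C in (("X", 0.5), ("Y", 0.52)):
    surface_term = B_surface * transmittance(0.0, C)   # effect A: ground term
    atm_term = B_atm * (1.0 - transmittance(0.0, C))   # effect B: atmosphere
    totals[label] = surface_term + atm_term
    print(label, round(totals[label], 5))

# More absorbing Y loses more surface radiation (A) than it gains in
# atmospheric emission (B), so its total OLI is smaller.
```

With a constant atmospheric radiance the atmospheric term telescopes exactly to B_atm*(1 − T(0,Z)), so the total is T(0,Z)*(B_surface − B_atm) + B_atm: more absorption (larger C) gives a smaller total, i.e. effect A wins whenever the surface is brighter than the atmosphere.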
curiousdp: When analyzing DLR traveling downward from 10 km to 0 km, would you express things in terms of T (transmission)?
I don’t know the proper definitions, but I try to avoid using the concept of transmission when emission is significant. Transmission IS one term in the integrated form of Schwarzschild’s equation, but it doesn’t deal with transmission of the photons emitted by the medium in an equivalent manner.
In some drafts of my article, I wrote that radiation transfer is about the amount of energy arriving at a destination, not the fraction of the photons that make the journey (transmission).
IIRC, and if I understood correctly, somewhere at SOD, DeWitt or SOD was kind enough to report on a radiation transfer calculation that used distance increments smaller than the mean free path for the most strongly absorbed wavelengths. For every 100 photons leaving the surface, something like 900 photons were emitted by GHGs in the atmosphere and 940 were absorbed.
Hi Frank,
I only know the differential form of S.E. as in Grant Petty’s textbook, and extensions of that form. This form is also what Zhao and Shi use in their article on the diffusivity factor (DF) correction to that equation, a correction which is motivated by the fact that the commonly used form of the S.E. is, strictly speaking, only valid for monochromatic radiation. I have now calculated solutions to the outgoing long wave intensity (OLI) using a spreadsheet and transmittance values obtained from SpectralCalc, and find detailed agreement between the OLI values obtained by the radiance application of the atmospheric paths in SpectralCalc and the DF corrections I calculate using the formula derived by Zhao and Shi.
So here is my take on the downward longwave intensity (DLI). Grant Petty’s expression for the DLI, which, again, is limited to the monochromatic case, is (unsure of sign here) (dt(z,0)/dz) x B(z) integrated from z = Z to z = 0. The assumption usually made here is that, assuming Z is the “top of the atmosphere” at, say, 100 km, the atmospheric contribution to the downward flux is the only contribution because, unless one is looking upward at the sun or moon, there is no extraterrestrial source that corresponds to Iup(0) x emissivity x t* for the OLI.
Actually, that assumption is probably not valid, because it cannot be assumed that the extraterrestrial source of long wave intensity is isotropic. The temperature at 100 km might be so low that it can be neglected; however, there will be incoming rays at a slanted angle which – with a path length the same as a vertical ray between z = 0 and z = 100 km – will originate from a region of space with a higher temperature. For this reason, Modtran6 uses a method of dealing with non-isotropic sources for their DLI, a method invented by Knut Stamnes and his co-authors and applied to Modtran6 by Lex Berk. David Archer’s Modtran Infrared Light in the Atmosphere (MILIA) outputs outgoing long wave radiation (OLR) flux values that are in good agreement with Modtran6. But the downward long wave radiation (DLR) from Modtran6 is significantly greater than the DLR from MILIA, and the larger Modtran6 value has been experimentally confirmed.
I am interested in exploring the DLI obtained from the SpectralCalc transmittance values in a manner analogous to what I have done with the OLI. But I must submit the OLI article to a journal first. At present I do not know enough about the DLI to say more than: yes, I think it must be expressed in terms of transmittance. I cannot yet with confidence refer to a calculation I have gone through on my own and do not wish to express a further opinion yet.
Finally, IMO the really hard thing to get one’s head around is converting from OLI in Watts/msqd Sr to OLR diffuse flux in Watts per msqd. If you go by SpectralCalc, one should take their vertical Watts per msqd Sr and use a proper Gaussian quadrature to find the Watts per msqd diffuse flux. They give no example of such a quadrature. It is my feeling that proper application of the Gaussian quadrature method may border on a trade secret, varying from project to project and author to author.
Best,
Curiousd
Curiousdp wrote: “Finally, IMO the really hard thing to get one’s head around is converting from OLI in Watts/msqd Sr to OLR diffuse flux in Watts per msqd.”
Exactly. I have negligible confidence in what I wrote about this subject in the final section (Part 8 Application to Climate Science) of my wikipedia article about Schwarzschild’s Equation. I’m hoping someone will say the discussion is reasonable or flawed – and then explain the flaws.
https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer
Every source I relied upon expresses things with slightly different terminology and organizes material differently. For example, Grant Petty’s derivation of Schwarzschild’s equation (p 205) from Kirchhoff’s Law that I adapted is spread over two chapters (one for absorption/absorptivity and a second on emission/emissivity) and is expressed in terms of absorption and emission coefficients, not absorption cross-sections. I wanted to express concepts in the form of: 1) a density of molecules/GHG’s and 2) a measure of how strongly these molecules interact with radiation (a cross-section), but Petty rarely used these terms.
Hi Frank,
For educational purposes there is a good reason for expressing S.E. in terms of transmittance values. A student can access one of several packages that give information on Outgoing Longwave Intensity (OLI), Outgoing Longwave Flux / Radiation (OLR), and transmittance values T(z,Z). That student can then explore: “Under what circumstances, and using which techniques, can one use the T(z,Z) values and S.E. to determine the OLI?”
Using given transmittance values, the student can skip the necessity of dealing with the HITRAN tables!!
Curiousd
Hi Frank,
I think your Wikipedia article is great. There is one place near the end of the article where you discuss what happens at 667 wavenumbers for 400 ppm… but you don’t say 400 ppm CO2. I don’t want my IP address out to the public and I could not figure out how to register on Wikipedia. It seems to already know my curiousd user name, so I cannot use this as a “new” registration because the name is already used. Perhaps there is a different person with the curiousd user name already registered there, but I doubt it.
https://commons.wikimedia.org/wiki/Main_Page has a link to a help desk.
A question for anyone:
In the common derivation, let Qe be the power in the far IR going out to space and Qs be the power from the Sun incident on the Earth, assuming no GHG, and taking into account that for Earth radius Re, the Sun’s radiant power is absorbed by an effective flat disk of radius Re, and averaged day and night. Assume an albedo of 0.3, that the emitted IR comes from a sphere of radius Re, and that the emissivity of the Earth’s surface is one. Then the Earth’s effective emission temperature comes out at roughly 255 K (an outgoing flux of roughly 240 W/m2). Here it is clear that the sun’s incoming visible and shortwave power on the Earth (Qs) must equal the Earth’s outgoing far IR power (Qe).
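The no-GHG balance described above is easy to verify numerically; this sketch assumes round values for the solar constant (1361 W/m2) and albedo (0.3), neither of which is specified in the comment.

```python
SOLAR_CONSTANT = 1361.0   # W/m^2, assumed round value
ALBEDO = 0.3
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W/m^2/K^4

# Absorption on a disk (pi*Re^2), emission from a sphere (4*pi*Re^2),
# averaged day and night: divide the post-albedo flux by 4.
absorbed_flux = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0
T_eff = (absorbed_flux / SIGMA) ** 0.25
print(round(absorbed_flux), round(T_eff))   # ~238 W/m^2 and ~255 K
```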
It is less clear to me that with GHG, clouds, oceans, and circulating currents of water and air that Qe still must equal Qs. This is always assumed, but why? Consider.
From the first law of thermodynamics, Qs – Qe = delta U, where delta U is the change in internal energy of the Earth and its atmosphere. Note that delta U could be due partly to an increase of mechanical energy in the atmosphere.
Say that instead of a changing Sun, a sudden spike in a GHG were to take place, such as a burst of methane. In the standard language this produces a “forcing” delta U , and it is assumed that by the time the Earth and Sun were again in thermal equilibrium, the delta U would be entirely expressed in the increased temperature at the Earth’s surface.
But why must this be the case? I postulate that, during the time before the Earth and Sun again reached thermal equilibrium (after the spike of methane),
some of that forcing delta U would have gone into – say – mechanical energy, such as making one hurricane.
The power going into an increase of temperature T would then be lessened by an amount approximately equal to the total integrated energy of the hurricane divided by the time the hurricane was going on.
Or perhaps the entire forcing goes almost completely into a temperature increase, simply because adiabatic processes dominate when an increase of temperature increases the mechanical energy of the atmosphere.
But I think some such justification is necessary if one believes the entire delta U always goes into a temperature increase.
Curious: If there is a positive inward radiative imbalance at the TOA, then the law of conservation of energy tells us the extra energy must build up somewhere below the TOA. In the simplest case, that energy becomes internal energy – higher temperature – somewhere, most likely the ocean.
There are other possibilities, including kinetic energy of a hurricane. Friction involving winds and currents simply converts their kinetic energy to heat. During the carboniferous era, a lot of energy was stored in plant material – but that was still only a tiny fraction of incoming SWR. TdS is a possibility. I don’t understand dissipation.
There are several hundred J/m3 in the kinetic energy of the Gulf Stream and its eddies. https://journals.ametsoc.org/doi/pdf/10.1175/JPO-D-15-0235.1 However, the heat capacity of water is vastly bigger: 4 million J/m3/K. My gut feeling is that internal energy dominates everything else.
A +1 W/m2 radiative imbalance at the TOA is enough to warm a 50 m mixed layer of the ocean plus the atmosphere at an initial rate of 0.2 K/year, if none of that energy penetrated below the mixed layer. However, as soon as the planet starts warming, the planet will start radiating (OLR) and reflecting (OSR) about 1-2.5 W/m2 per K of surface warming (depending on your preferred choice for ECS). So a 1 W/m2 forcing would become a 0.5 W/m2 radiative imbalance in 1-2.5 years if no heat were entering the deep ocean. At the moment, our radiative forcing is about 2.5 W/m2 and 0.8 W/m2 is flowing into the ocean and melting ice. That means that 1.7 W/m2 must be escaping to space (or not entering as SWR) in response to about 1 K of warming. (This is the rationale for an EBM.)
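The mixed-layer arithmetic can be written out explicitly. The heat capacities below are round-number assumptions of mine (4.18 MJ/m3/K for seawater and ~1e7 J/m2/K for the whole atmospheric column); with them, a 1 W/m2 imbalance gives roughly 0.15 K/yr, the same order as the ~0.2 K/year quoted above.

```python
SECONDS_PER_YEAR = 3.156e7
C_WATER = 4.18e6           # volumetric heat capacity of seawater, J/m^3/K (approx)
MIXED_LAYER_DEPTH = 50.0   # m
C_ATMOSPHERE = 1.0e7       # heat capacity of the air column, J/m^2/K (approx)

heat_capacity = C_WATER * MIXED_LAYER_DEPTH + C_ATMOSPHERE  # J/m^2/K
imbalance = 1.0            # W/m^2 radiative imbalance at the TOA

rate = imbalance * SECONDS_PER_YEAR / heat_capacity
print(round(rate, 2))      # K/year of uniform warming if no heat escapes deeper
```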
SOD I posted a comment – I think in the correct way – but it did not post.
curiousdp,
Glad you highlighted it – I found it in the sp&m queue; no idea why WordPress made that decision. It’s now resurrected.
Hi Frank
1. Again I agree with you that the amount of forcing going into a non-thermal, i.e. mechanical, component in the final internal energy is probably small.
2. As far as I know, this issue is not even discussed in any text book.
3. It is not “obvious” to a student, who might point out that if she heats up a container of water there may be convective circulation of the water as well as a temperature increase. What I am looking for is a textbook or paper which discusses the issue.
4. So there is no disagreement between us, except on whether this is “obvious to the student”.
Speaking of “obvious to the student”, here is a numerical solution to S.E. I presented at the Phoenix Annual Meeting of the AMS; one cannot find such a numerical solution in any textbook.
https://ams.confex.com/ams/2019Annual/meetingapp.cgi/Paper/349424
Hi Frank,
All I am saying is that the “standard” way of teaching this in all textbooks is to say that, as far as the OLI is concerned, a loss in OLI due to a gain in – say – methane always results in an equivalent increase in temperature. I believe Pierrehumbert points out that the power in the atmosphere is larger than can be accounted for by considering the energy balance of the solar flux; presumably the downward OLI accounts for some of this.
My emphasis here is to suggest that the standard way global warming is taught in textbooks is oversimplified. All these texts make the tacit assumption, without justification, that a forcing due to increased GHG is always reflected in an increase of temperature.
Frank I think you agree with me that the standard assumption in these texts is oversimplified. If we both agree on this point, then the question becomes…..is there anyplace in the literature that deals quantitatively with this issue?
Curious wrote: “All these texts make the tacit assumption without justification that a forcing due to increased GHG is always reflected in an increase of temperature.”
Climate change is physics. The fundamental principle is conservation of energy. If the power from a positive forcing is not going anywhere else, it must be raising internal energy somewhere beneath the TOA! If you want to be a devil’s advocate, it might be worth briefly considering how much power might be consumed by other processes: power our society uses; kinetic energy in ocean currents; chemical reactions/photosynthesis (which ends up as heat when vegetation decays or we eat food); hurricanes (accumulated cyclone energy, ACE, per month during hurricane season). Winds and currents can only consume power if they are accelerating. This might explain why we make your “tacit assumption”, which is justified as best I can tell.
Energy is proportional to a temperature change; power translates to a RATE of temperature change. Power per unit area translates to a rate of uniform warming in a mixed layer (a depth, which times area is a volume) of the ocean and the mass of the atmosphere. A 1 W/m2 radiative imbalance is a massive amount of power: In a closed system, 1 W/m2 would warm a 50 m mixed layer of the ocean and the atmosphere at a rate of 0.2 K/year. Since 1970, radiative forcing has been increasing at 0.4 W/m2/decade. It is difficult for me to imagine where, below the TOA, this much energy could be hiding. IMO, it is obvious that this much power must be going into the deep ocean or out to space (due to higher temperature), because the mixed layer and atmosphere certainly aren’t warming at 0.8 K/decade.
These calculations can be made more tangible for students by discussing seasonal warming in response to a seasonal change in irradiation. 1000 W/m2 of post-albedo SWR comes straight down on the Tropic of Cancer in summer and at an angle of 46 degrees in winter. Lambert's cosine law says the power delivered is reduced by the cosine of 46 degrees. That's a maximum seasonal difference of about 300 W/m2 and an average seasonal difference of perhaps 200 W/m2. That is enough to warm the atmosphere and mixed layer of the ocean at a rate of 40 K/yr, or 3 K/month, changing from winter to summer. (Pretty decent for a back-of-the-envelope calculation.)
BTW, converting a forcing to an amount of warming creates a problem with units (or dimensional analysis). Warming is a change in internal energy; forcing is measured in terms of power. As the planet approaches a new steady state over time, the radiative IMBALANCE approaches zero. The radiative imbalance integrated over time provides the energy that is required for warming.
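These back-of-the-envelope rates are easy to check in a few lines of Python. A minimal sketch, assuming a closed system with no feedbacks or heat export; the 50 m depth is from the comment above and the heat capacities are round textbook values:

```python
# Closed-system sketch: a radiative imbalance uniformly warms the ocean
# mixed layer plus the atmosphere. Round textbook values throughout.
SECONDS_PER_YEAR = 3.156e7

c_ocean = 50 * 1000 * 4186        # 50 m * density (kg/m3) * c_p water (J/kg/K)
c_atm = (1.013e5 / 9.81) * 1004   # column mass (kg/m2) * c_p air (J/kg/K)

def warming_rate(forcing_w_m2, heat_capacity_j_m2_k):
    """Uniform warming rate in K/year for a sustained imbalance."""
    return forcing_w_m2 * SECONDS_PER_YEAR / heat_capacity_j_m2_k

rate = warming_rate(1.0, c_ocean + c_atm)        # for the 1 W/m2 imbalance
seasonal = warming_rate(200.0, c_ocean + c_atm)  # for the 200 W/m2 seasonal difference
print(f"{rate:.2f} K/yr, {seasonal:.0f} K/yr")
```

This gives roughly 0.14 K/yr for 1 W/m2 and roughly 29 K/yr for the seasonal case, the same order as the 0.2 K/yr and 40 K/yr quoted above; a shallower seasonal mixed layer would push the numbers up toward the quoted values.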
1. I agree that the amount of increase in the mechanical portion of the final internal energy is probably small relative to the increase due to temperature.
2. I think our disagreement is whether or not this is obvious to the student. After all, if I heat a container of water with a blow torch, the temperature will increase, but I may also produce motion in the water from convection.
3. I seek anyplace in any text book which discusses the point.
4. By the way, there are no numerically worked-out solutions to Schwarzschild's equation in any text. To slightly alleviate this deficit I published the following in the Bulletin of the 2019 Phoenix annual meeting of the AMS.
https://ams.confex.com/ams/2019Annual/meetingapp.cgi/Paper/349424
Curious: For students, it might be appropriate to say that no one has proposed a location where a significant fraction of the energy from a radiative imbalance could be hiding below the TOA, but they will be famous if they find one.
Limiting cases for the Schwarzschild equation have trivial, but commonsense, analytical solutions. When temperature is low and B(lambda, T) is much smaller than the incoming radiation I, the equation reduces to Beer's Law. When radiation has passed far enough through a homogeneous medium that absorption and emission have come into equilibrium, I = B(lambda, T). Finally, radiation not in thermodynamic equilibrium with a homogeneous medium through which it is traveling has an intensity that approaches (via a negative exponential) blackbody intensity at a rate proportional to n*o (so that n*o*s is optical depth). If B(lambda, T) is independent of position, one can replace B-I with a new variable W, so that dI/ds = dB/ds - dW/ds = -dW/ds, since dB/ds = 0. However, I don't have an intuitive interpretation for this substitution.
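Those limiting cases can be checked against the closed-form solution of the Schwarzschild equation for constant B(lambda, T). A sketch with arbitrary illustrative values for I, B and n*sigma, not real line data:

```python
import math

def schwarzschild_const_B(I0, B, n_sigma, s):
    """Closed-form solution of dI/ds = n*sigma*(B - I) for constant B:
    I(s) = B + (I0 - B) * exp(-n*sigma*s)."""
    return B + (I0 - B) * math.exp(-n_sigma * s)

I0, B, n_sigma = 10.0, 2.0, 0.5   # arbitrary illustrative values

# Limit 1: B << I reduces to Beer's Law, I = I0 * exp(-optical depth)
beer = schwarzschild_const_B(I0, 0.0, n_sigma, 3.0)

# Limit 2: large optical depth drives I to blackbody intensity B
deep = schwarzschild_const_B(I0, B, n_sigma, 50.0)

print(beer, I0 * math.exp(-1.5))   # identical
print(deep)                        # essentially equal to B
```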
Questions:
1) Doesn’t the Schwarzschild equation for radiative transfer leave out one energetic pathway?
2) Can someone check my premise and math below? Thanks.
The Schwarzschild equation for radiative transfer properly includes the process by which absorbed radiation excites vibrational and rotational mode quantum states, and that energy then flows to translational mode of other atmospheric molecules via vibrational-translational and rotational-translational collisional processes in accord with 2LoT.
But I don’t see a term for the opposite process… during which the combined translational mode energy of two colliding molecules is sufficient (in accord with 2LoT and the Equipartition Theorem) to excite vibrational mode or rotational mode quantum states via translational-vibrational collisional processes.
Is this included in the Schwarzschild equation in the radiative term?
At sufficiently high combined translational mode energy of two colliding molecules (one CO2, one any other molecule), this would increase the time duration during which CO2 is vibrationally excited, and thus the probability that it would emit.
We know this takes place:
Click to access 725111.pdf
“The absorbed energy in Reaction (33) once again comes from translation. Two reactions of type (33) must occur for every one of the type indicated by Reaction (32) to maintain the CO2 in thermal equilibrium. The removal of energy from the translational modes by Reactions (32) and (33) cools the CO2 molecular system, and, concomitantly, the air.”
Figuring for the lowest vibrational mode quantum state of carbon dioxide, CO2{v21(1)} at 667.4 cm-1, we find that this equates to a quantum energy of about 0.08280 eV.
Dividing the energy necessary to excite that CO2 quantum state from the ground state in two (since we have two molecules colliding, each molecule would carry half the translational mode energy necessary to excite the vibrational mode) gives about 0.04140 eV of kinetic energy per molecule needed to excite CO2's vibrational mode quantum states via t-v (translational-vibrational) processes.
Taking that as the mean translational kinetic energy, (3/2)kT, this equates to a temperature of about 320.3 K.
Per the Maxwell-Boltzmann speed distribution, a CO2 molecule at that temperature would have:
Most probable speed: 347.7 m/s
Mean speed: 392.4 m/s
RMS speed: 425.9 m/s
The Maxwell-Boltzmann speed distribution at 288 K (the stated average global temperature) gives a CO2 gas fraction of 52.73% for molecular speeds from 347.7 m/s up to 1650 m/s (an arbitrarily high cutoff chosen to encompass essentially all molecules with sufficient kinetic energy to vibrationally excite CO2 upon molecular collision).
An N2 molecule at that temperature would have:
Most probable speed: 435.8 m/s
Mean speed: 491.8 m/s
RMS speed: 533.8 m/s
The Maxwell-Boltzmann speed distribution at 288 K gives an N2 gas fraction of 52.73% for molecular speeds from 435.8 m/s up to 1650 m/s (the same arbitrarily high cutoff).
Therefore, at ~288 K, there are more atmospheric molecules with kinetic energy sufficient to vibrationally excite CO2 via translational-vibrational (t-v) collisional processes than not.
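The speed-distribution fractions quoted above can be reproduced with Python's math.erf. A sketch using rounded molar masses; the 320.3 K threshold is the figure derived above:

```python
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66053907e-27   # atomic mass unit, kg

def v_mp(T, mass_amu):
    """Most probable speed of the Maxwell-Boltzmann distribution."""
    return math.sqrt(2 * kB * T / (mass_amu * amu))

def frac_above(v, T, mass_amu):
    """Fraction of molecules with speed above v at temperature T."""
    x = v / v_mp(T, mass_amu)
    return 1 - (math.erf(x) - 2 * x * math.exp(-x * x) / math.sqrt(math.pi))

# Threshold = that gas's most probable speed at 320.3 K; population at 288 K
f_co2 = frac_above(v_mp(320.3, 44.0), 288.0, 44.0)
f_n2 = frac_above(v_mp(320.3, 28.0), 288.0, 28.0)
print(f"CO2: {f_co2:.4f}  N2: {f_n2:.4f}")   # both ~0.527
```

Note the two fractions are identical by construction: because each threshold is defined as that gas's most probable speed at 320.3 K, the fraction depends only on the temperature ratio 320.3/288, not on the molecular mass.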
This increases the time duration during which CO2 is vibrationally excited and therefore the probability that it will radiatively emit.
The conversion of translational mode energy (which we sense as temperature) to vibrational mode energy is, by definition, a cooling process.
The emission of the resultant radiation to space is, by definition, a cooling process.
You will sometimes read "CO2 doesn't have time to emit IR because the radiative de-excitation time is much longer than the mean time between collisions". In conditions where collisions dominate (i.e., below the tropopause), CO2 will indeed often vibrationally de-excite via v-t (vibrational-translational) collisional processes. But by the same token it will also often vibrationally excite via t-v (translational-vibrational) collisional processes, at a rate dependent upon the fraction of atmospheric molecules carrying sufficient kinetic energy to excite CO2's vibrational modes, as calculated above.
The mere fact that a vibrationally-excited CO2 molecule undergoes a collision with another molecule (in conditions where the translational mode energy of the two colliding molecules is higher than CO2's vibrational mode energy, so that energy cannot flow from vibrational to translational mode) doesn't reset the "clock" on CO2's radiative de-excitation time. Given that, of the three most abundant molecular constituents of our atmosphere (N2, O2, CO2), only CO2 can radiatively emit and break LTE, the net energy flow is to CO2 via t-v collisional processes above ~288 K.
The energetic pathways detailed above:
X (at ~288K+) + CO2{v20(0)} (at ~288K+) –(t-v)–> X + CO2{v21(1)} –> CO2{v20(0)} + 667.4 cm-1
X (at ~288.1K+) + CO2{v21(1)} (at ~288.1K+) –(t-v)–> X + CO2{v22(2)} –> CO2{v21(1)} + 667.8 cm–1 –> CO2{v20(0)} + 667.4 cm-1
X (at ~288.2K+) + CO2{v22(2)} (at ~288.2K+) –(t-v)–> X + CO2{v23(3)} –> CO2{v22(2)} + 668.10 cm–1 –> CO2{v21(1)} + 667.8 cm–1 –> CO2{v20(0)} + 667.4 cm-1
X denotes any atmospheric molecule.
Now, granted, not all molecular collisions are going to be head-on, and the kinetic energy imparted to vibrational mode quantum states is dependent upon angle of collision, but the data above shows that at and above ~288 K (the stated average global temperature), the majority of the molecular constituents of the atmosphere carry sufficient kinetic energy to begin significantly vibrationally exciting CO2 via t-v collisional processes.
We can use the Boltzmann Factor to determine the vibrationally excited population of CO2{v21(1)} due to translational-vibrational (t-v) collisional processes.
1 cm-1 = 11.9624 J mol-1
667.4 cm-1 = 667.4 * 11.9624 / 1000 = 7.98370576 kJ mol-1
The Boltzmann factor at 288 K is exp(-E/RT) = exp(-7983.7 / (8.314 × 288)) ≈ 0.036, which means that roughly 3.6% of CO2 molecules are in the CO2{v21(1)} vibrationally excited state due to translational-vibrational (t-v) processes (about 7% if the two-fold degeneracy of the bending mode is counted).
At 288 K, this leaves roughly 96% of CO2 molecules available to absorb radiation. The rest are already vibrationally excited, and thus cannot absorb radiation unless that radiation is of sufficient energy to excite CO2 to the CO2{v22(2)} or CO2{v23(3)} vibrational mode quantum states.
The ratio of vibrationally excited to ground-state CO2 will skew in proportion to atmospheric temperature, leaving fewer molecules able to absorb radiation and more molecules able to radiatively de-excite as temperature increases, and vice versa.
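For checking, the Boltzmann factor for a vibrational level can be computed directly from its wavenumber using the second radiation constant hc/kB ≈ 1.4388 cm·K:

```python
import math

HC_OVER_K = 1.438777  # second radiation constant hc/kB, in cm*K

def boltzmann_factor(wavenumber_cm, T):
    """exp(-E/kT) for a vibrational level at the given wavenumber (cm^-1)."""
    return math.exp(-HC_OVER_K * wavenumber_cm / T)

f = boltzmann_factor(667.4, 288.0)   # CO2 v2 bending mode at 288 K
print(f"{f:.4f}")                    # ~0.036, i.e. ~3.6% excited
```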
—–
The above doesn’t even take into account the other two energetic pathways by which CO2 can act as a net atmospheric coolant above ~288 K:
X (at ~288K+) + N2{v1(0)} (at ~288K+) –(t-v)–> X + N2{v1(1)} –> N2{v1(1)} + CO2{v20(0)} –(v-v)–> N2{v1(0)} + CO2{v3(1)} –> CO2{v1(1)} + 961.54 cm-1
X (at ~288K+) + N2{v1(0)} (at ~288K+) –(t-v)–> X + N2{v1(1)} –> N2{v1(1)} + CO2{v20(0)} –(v-v)–> N2{v1(0)} + CO2{v3(1)} –> CO2{v20(2)} + 1063.83 cm-1
X denotes any atmospheric molecule.
We can use the Boltzmann Factor to determine the vibrationally excited population of N2 due to translational-vibrational (t-v) collisional processes.
N2{v1(1)} (stretch) mode at 2345 cm-1 (4.26439 µm), correcting for anharmonicity, centrifugal distortion and vibro-rotational interaction
1 cm-1 = 11.9624 J mol-1
2345 cm-1 = 2345 * 11.9624 / 1000 = 28.051828 kJ mol-1
The Boltzmann factor at 288 K is exp(-E/RT) = exp(-28052 / (8.314 × 288)) ≈ 8 × 10^-6, which means that only about 0.0008% of N2 molecules are in the N2{v1(1)} vibrationally excited state due to translational-vibrational (t-v) processes.
Given that CO2 constitutes 0.041% of the atmosphere (410 ppm) and N2 constitutes 78.08% (780800 ppm), roughly 6 ppm of air is vibrationally excited N2, versus roughly 0.003 ppm excited to CO2's {v3} mode via the same (t-v) processes: a ratio of about 1900 vibrationally excited N2 per vibrationally excited CO2, essentially the same as the overall N2:CO2 ratio of 1904:1, since the two vibrational energies (2345 and 2349 cm-1) are nearly equal. Because the N2 vibrational mode quantum state is metastable and relatively long-lived (N2 is a homonuclear diatomic with no dipole moment and thus cannot radiatively emit), energy stored in vibrationally excited N2 flows on collision to vibrationally ground-state CO2.
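The excited populations can be checked directly with the same Boltzmann-factor calculation (N2 stretch at 2345 cm-1, CO2 v3 asymmetric stretch at 2349 cm-1, concentrations as above):

```python
import math

HC_OVER_K = 1.438777  # second radiation constant hc/kB, in cm*K

def excited_ppm(total_ppm, wavenumber_cm, T=288.0):
    """ppm of air thermally excited to a vibrational level at this wavenumber."""
    return total_ppm * math.exp(-HC_OVER_K * wavenumber_cm / T)

n2 = excited_ppm(780800, 2345.0)   # N2 v1 stretch
co2 = excited_ppm(410, 2349.0)     # CO2 v3 asymmetric stretch
print(n2, co2, n2 / co2)
```

This gives roughly 6 ppm of excited N2 and roughly 0.003 ppm of v3-excited CO2, a ratio near 1900:1, essentially the overall N2:CO2 concentration ratio because the two vibrational energies are nearly equal.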
The conversion of translational mode energy (which we sense as temperature) to vibrational mode energy of N2 is, by definition, a cooling process.
The transfer of that N2 vibrational mode energy to vibrational mode energy of CO2, then that energy being emitted to space as radiation is, by definition, a cooling process. The resultant radiation from the last two energetic pathways is in the Infrared Atmospheric Window, thus any upwelling radiation has a nearly unfettered path out to space.
An increased atmospheric CO2 concentration will increase the likelihood of vibrationally-excited N2 colliding with CO2, thereby increasing the likelihood of CO2 radiatively emitting, thereby increasing the radiative cooling effect.
All impressive maths but totally useless. The Sun's energy comes in at the speed of light and heats the Earth's surface on one hemisphere at a time, not two like the models say. Part of that energy goes back to space at the speed of light in the form of IR, since N2 and O2 are totally transparent to IR. The atmosphere is a gas and not a black body, so the Stefan-Boltzmann law does not apply. Part of the energy is absorbed by the air at the surface by conduction and can't be shed, since N2 and O2, for the same reasons they can't absorb IR, can't emit IR. So the only way for that energy to be released is by convection and conduction all the way up to the top of the troposphere. You therefore need a kinetic component to your climate model. So, the climate models are total fiction.
Pierre,
So the atmosphere doesn’t absorb and emit radiation. Is that what you are saying?
The maths is correct but irrelevant because O2 and N2 are transparent to IR? You didn’t mention CO2 or water vapor. Do they absorb and emit radiation?
The speed of light. Wow. That’s fast. Completely irrelevant to the question of absorption and emission.
“One hemisphere at a time..”. Wow. Who knew? Completely irrelevant for the question of absorption and emission of terrestrial radiation (which is emitted from the surface area of a sphere) by the atmosphere.
“..not 2 like the models say” Wow. You are on a roll! Except.. no.. they don’t. Provide a reference for your “model”.
For other readers, here’s theory (the equations in the article) and observation at top of atmosphere – note the two curves have been offset to make it easier to see. Somehow the calculation by wavelength of absorption and emission match experimental results (graph in next comment due to wordpress randomness):
Pierre: In the case of radiative transfer of energy, the key quantity is power, not velocity. Power is measured in Watts, or more commonly W/m2. Each photon of frequency ν delivers hν of energy; n photons per second of the same frequency deliver nhν of power. To get total power, you need to integrate over all frequencies.
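As an illustration of the hν bookkeeping, here is the photon energy and the photon rate carried by 1 W at 15 µm (a wavelength near the CO2 band, chosen arbitrarily here):

```python
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength = 15e-6                 # a 15 um thermal-IR photon
energy = h * c / wavelength        # J per photon, E = h*nu = h*c/lambda
photons_per_sec = 1.0 / energy     # photon rate carried by 1 W at this wavelength
print(f"{energy:.3e} J, {photons_per_sec:.2e} photons/s")
```

Each thermal-IR photon carries so little energy that even 1 W corresponds to tens of quintillions of photons per second, which is why radiative transfer is treated as a continuous intensity rather than photon by photon.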
There is no need to worry about day and night. The surface of the ocean (70% of the planet) varies only about 1 K between day and night, and the upper atmosphere doesn't change that much either. Near-surface land temperature varies about 10 K between day and night, only about 3% in kelvin. Referring to an average surface temperature is an approximation suitable for most purposes.
All gas molecules are colliding and constantly exchanging kinetic energy. Temperature is proportional to the mean kinetic energy. Occasionally a collision will excite a CO2 or H2O molecule into a higher vibrational state, and that state can emit a photon, leaving less kinetic energy to be shared by all molecules, not just the CO2 and H2O. In most of the atmosphere, the fraction of GHGs in an excited state varies with temperature according to the Boltzmann distribution. Most of the time, excited states are created and relaxed by collisions many thousands of times faster than by emission or absorption of photons. So the rate of emission of photons (radiative cooling) varies with the temperature of the atmosphere, including the nitrogen and oxygen molecules, but not with the number of photons being absorbed.
If there were no convection, the surface temperature of our planet would be about 350 K in order to drive 240 W/m2 of LWR through our IR-opaque atmosphere and balance the 240 W/m2 of incoming SWR that isn't reflected back to space. However, the atmosphere is unstable toward buoyancy-driven convection when the rate of temperature fall per km of altitude (the lapse rate) is too great. Convection removes heat from the surface fast enough to maintain a marginally stable lapse rate (averaging 6.5 K/km), reducing the average surface temperature to 288 K. Near the surface, about 100 W/m2 of the vertical flux of energy is provided by convection (mostly latent heat of evaporated water molecules that will condense higher in the atmosphere) and about 60 W/m2 by thermal IR. As that thermal IR travels through the atmosphere, some photons are absorbed and some are emitted, and the change in radiation spectral intensity is given by the Schwarzschild equation discussed above. The reduction of radiative cooling to space from 2XCO2 (radiative forcing) is calculated by numerically integrating from the surface to space over all wavelengths. This is why this physics is important.
dI(λ) = nσ*B(λ,T)*ds – nσ*I(λ)*ds
Since convection involves turbulent mixing, the large grid cells in AOGCMs can't properly resolve convective features the size of a summer thunderstorm; convection is instead handled by parameterization. This doesn't make AOGCMs "total fiction", but the parameters are tuned rather than derived from fundamental physics. So AOGCMs are approximations that are "wrong", but we can't say how wrong.
In many comments at SoD, I've pointed out that the standard derivation of Planck's Law starts by assuming that absorption and emission are in equilibrium (with a medium of quantized oscillators). For this reason, I've asserted that we can't reliably apply Planck's Law and the S-B equation to the atmosphere, because at some altitudes and wavelengths GHGs don't absorb and emit often enough to create such an equilibrium. They certainly don't absorb and emit significantly at "atmospheric window" wavelengths. In these locations, we must rely on the Schwarzschild equation (Equation 11 in this post), which describes how the spectral intensity of radiation changes as it passes an incremental distance ds through the atmosphere.
dI = emission from ds – absorption by ds
dI = n*o*B(lambda,T)*ds – n*o*I*ds = n*o*{B(lambda,T) – I}*ds
Translated into words, this equation says that the net result of absorption and emission of radiation changes the spectral intensity of radiation traveling through an atmosphere toward “blackbody intensity”, B(lambda,T) for the local temperature. The rate (with distance traveled) at which it approaches blackbody intensity depends on the density of GHGs (n) and their absorption cross-section (o). Unfortunately, this explanation has always struck me as not being very tangible.
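One way to make this more tangible is to march the equation forward with a crude Euler step and watch I relax toward blackbody intensity. The n*o and B values below are illustrative only, not real line data:

```python
# March dI = n*sigma*(B - I)*ds with a simple Euler step and watch I
# relax toward blackbody intensity B for the local temperature.
n_sigma = 0.8    # absorber density times cross-section, per unit distance
B = 10.0         # blackbody spectral intensity at the local temperature
I = 2.0          # incoming spectral intensity
ds = 0.01        # step size

for step in range(2000):          # total path 20 units -> optical depth 16
    I += n_sigma * (B - I) * ds

print(round(I, 4))                # essentially equal to B
```

Starting from an intensity above B gives the mirror-image result: I decays down to B, which is the "approach to blackbody intensity" described in words above.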
I stumbled across a way to make this explanation more tangible using Modtran. At altitudes where the spectral intensity of DLR and OLR are equal, absorption and emission are in equilibrium and both have “blackbody spectral intensity”. For example, if you print out and overlay the spectrum of OLR looking down from 9 km and DLR looking up from 9 km, the spectral intensity of both is about 10 W/m2/um. (Tropical atmosphere, standard GHG concentration, no clouds, wavelength, not wavenumber.) The temperature at 9 km is 243.6 K and the center of the CO2 band is just above the blackbody curve for 240 K.
This equality is harder to observe in the bands due to water vapor, where intensity varies quickly with wavelength. However, you can simply look and see how the intensity of DLR at any altitude matches the blackbody spectra for 280, 260, 240, 220 and 200 K. These are the temperatures at 3.57, 6.57, 9.57, 12.57 and 15.57 km above the tropical surface. Looking up from 3.57 km, DLR is almost perfectly superimposed on the 280 K blackbody spectrum at and below 5 um, above 25 um, at peaks between 22 and 25 um (all water vapor), and at 14 to 16 um (CO2). If you want, you can look at a single GHG at a time to assign bands to a particular GHG. Then you can look up from 3.57 km and see where OLR has blackbody spectral intensity for 280 K. At some wavelengths, it is higher than this intensity because the atmosphere is transparent to thermal IR emitted by the surface at these wavelengths.
If you check out 3.57, 6.57, 9.57, 12.57 and 15.57 km, you’ll find that water bands have BB intensity through 6.57 km, but most have dropped below that by 9.57 km. The CO2 band has BB intensity up through the tropopause (17 km), but narrows somewhat. By 24.5 km (in the stratosphere), the temperature has risen to 220 K and the strongest absorption (now a narrow line) is nearly touching the 220K BB curve. By 34 km (240K), this line is below the 220K curve. The density of CO2 at this altitude doesn’t produce enough absorption and emission to maintain equilibrium at this (or any) wavelength.
In conclusion, you can use DLR from Modtran to see what altitudes and wavelengths do and don’t have radiation with BB intensity appropriate for the temperature at that altitude.