During a discussion following one of the six articles on Ferenc Miskolczi someone pointed to an article in E&E (Energy & Environment). I took a look and had a few questions.

The article in question is *The Thermodynamic Relationship Between Surface Temperature And Water Vapor Concentration In The Troposphere*, by William C. Gilbert, from 2010. I'll call this WG2010. I encourage everyone to read the whole paper for themselves.

Actually this E&E edition is a potential collector’s item because they announce it as: *Special Issue – Paradigms in Climate Research*.

The author comments in the abstract:

The key to the physics discussed in this paper is the understanding of the relationship between water vapor condensation and the resulting PV work energy distribution under the influence of a gravitational field.

Which sort of implies that no one studying atmospheric physics has considered the influence of gravitational fields, or at least the author has something new to offer which hasn’t previously been understood.

### Physics

Note that I have added a WG prefix to the equation numbers from the paper, for ease of referencing:

First let’s start with the basic process equation for the first law of thermodynamics

(Note that all units of measure for energy in this discussion assume intensive properties, i.e., per unit mass):

dU = dQ – PdV ….[WG1]

where dU is the change in total internal energy of the system, dQ is the change in thermal energy of the system and PdV is work done to or by the system on the surroundings.

This is (almost) fine. The author later mixes up Q and U. dQ is the heat added to the system. dU is change in internal energy which includes the thermal energy.

But equation (1) applies to a system that is not influenced by external fields.

Since the atmosphere is under the influence of a gravitational field the first law equation must be **modified** to account for the potential energy portion of internal energy that is due to position:

dU = dQ + gdz – PdV ….[WG2]

where g is the acceleration of gravity (9.8 m/s²) and z is the mass particle vertical elevation relative to the earth’s surface.

[Emphasis added. Also I changed “*h*” into “*z*” in the quotes from the paper to make the equations easier to follow later].

This equation is incorrect, which will be demonstrated later.

The thermal energy component of the system (dQ) can be broken down into two distinct parts: 1) the molecular thermal energy due to its kinetic/rotational/ vibrational internal energies (CvdT) and 2) the intermolecular thermal energy resulting from the phase change (condensation/evaporation) of water vapor (Ldq). Thus the first law can be rewritten as:

dU = CvdT + Ldq + gdz – PdV ….[WG3]

where Cv is the specific heat capacity at constant volume, L is the latent heat of condensation/evaporation of water (2257 J/g) and q is the mass of water vapor available to undergo the phase change.

Ouch. dQ is heat added to the system, and it is dU which is the internal energy which should be broken down into changes in thermal energy (temperature) and changes in latent heat. This is demonstrated later.

Later, the author states:

This ratio of thermal energy released versus PV work energy created is the crux of the physics behind the troposphere humidity trend profile versus surface temperature. But what is it that controls this energy ratio? It turns out that the same factor that controls the pressure profile in the troposphere also controls the tropospheric temperature profile and the PV/thermal energy ratio profile. That factor is gravity. If you take equation (3) and modify it to remove the latent heat term, and assume for an adiabatic, ideal gas system CpT = CvT + PV,

you can easily derive what is known in the various meteorological texts as the "**dry adiabatic lapse rate**":

dT/dz = –g/Cp = 9.8 K/km ….[WG5]

[Emphasis added]

**Unfortunately, with his starting equations you can’t derive this result**.

What am I talking about?

### The Equations Required to Derive the Lapse Rate

Most textbooks on atmospheric physics include some derivation of the lapse rate. We consider a parcel of air of one mole. (*Some terms are defined slightly differently to WG2010 – note 1*).

There are 5 basic equations:

The **hydrostatic equilibrium** equation:

dp/dz = -ρg ….[1]

where p = pressure, z = height, ρ = density and g = acceleration due to gravity (=9.8 m/s²)

The ideal gas law:

pV = RT ….[2]

where V = volume, R = the gas constant, T = temperature in K, and this form of the equation is for 1 mole of gas

The equation for **density**:

ρ = M/V ….[3]

where M = mass of one mole

The **First Law of Thermodynamics**:

dU = dQ + dW ….[4]

where dU = change in internal energy, dQ = heat added to the system, dW = work added to the system

..rewritten for dry atmospheres as:

dQ = C_{v}dT + pdV ….[4a]

where C_{v} = heat capacity at constant volume (for one mole), dV = change in volume

And the (less well-known) equation which links **heat capacity** at constant volume with heat capacity at constant pressure (derived from statistical thermodynamics and experimentally verifiable):

C_{p} = C_{v} + R ….[5]

where C_{p} = heat capacity (for one mole) at constant pressure

With an adiabatic process no heat is transferred between the parcel and its surroundings. This is a reasonable assumption with typical atmospheric movements. As a result, we set dQ = 0 in equation 4 & 4a.

Using these 5 equations we can solve to find the **dry adiabatic lapse rate (DALR)**:

**dT/dz = -g/c_{p} ….[6]**

where dT/dz = the change in temperature with height (the lapse rate), g = acceleration due to gravity, and c_{p} = specific heat capacity (per unit mass) at constant pressure

dT/dz ≈ -9.8 K/km
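As a quick numerical check, the DALR can be evaluated directly. A minimal sketch (the dry-air value c_p ≈ 1004 J/(kg·K) is a standard figure I'm assuming; it isn't stated in the post):

```python
# Quick numerical check of the dry adiabatic lapse rate, dT/dz = -g/c_p
g = 9.8       # acceleration due to gravity, m/s^2
c_p = 1004.0  # specific heat of dry air at constant pressure, J/(kg K) -- assumed standard value

lapse_rate_per_km = -g / c_p * 1000.0  # K per km
print(lapse_rate_per_km)               # approximately -9.76 K/km
```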

Knowing that many readers are not comfortable with maths I show the derivation in The Maths Section at the end.

*And also for those not so familiar with maths & calculus, the “d” in front of a term means “change in”. So, for example, “dT/dz” reads as: “the change in temperature as z changes”.*

### Fundamental “New Paradigm” Problems

There are two basic problems with his fundamental equations:

- he confuses *internal energy* and *heat added*, producing a sign error
- he adds a term for gravitational potential energy when it is already implicitly included via the pressure change with height

A sign error might seem unimportant but given the claims later in the paper (with no explanation of how these claims were calculated) it is quite possible that the wrong equation was used to make these calculations.

These problems will now be explained.

### Under the New Paradigm – Sign Error

Because William Gilbert mixes up internal energy and heat added, the result is a sign error. Consult a standard thermodynamics textbook and the first law of thermodynamics will be represented something like this:

dU = dQ + dW

Which in words means:

The change in internal energy equals the heat added plus the work done on the system.

And if we talk about dW as the work done **by** the system then the sign in front of dW will change. So, if we rewrite the above equation:

dU = dQ – pdV

By the time we get to [WG3] we have two problems.

Here is [WG3] for reference:

dU = CvdT + Ldq + gdz – PdV ….[WG3]

The first problem is that for an adiabatic process, no heat is added to (or removed from) the system. So **dQ = 0**. The author says dU = 0 and makes dQ the change in internal energy (= CvdT + Ldq).

Here is the demonstration of the problem using his equation.

If we have no phase change then **Ldq = 0**. The **gdz** term is a mistake – for later consideration – but if we consider an example with no change in height in the atmosphere, we would have (using his equation):

CvdT – PdV = 0 ….[WG3a]

So if the parcel of air expands, doing work on its environment, what happens to temperature?

dV is positive because the volume is increasing. So to keep the equation valid, dT must be positive, which means the temperature must **increase**.

This means that as the parcel of air does work on its environment, using up energy, its temperature increases – adding energy. A violation of the first law of thermodynamics.

Hopefully, everyone can see that this is not correct. But it is the consequence of the incorrectly stated equation. In any case, I will use both the flawed and the fixed version to demonstrate the second problem.
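To make the sign problem concrete with numbers, here is a minimal sketch (the work value is invented purely for illustration, and c_v ≈ 717 J/(kg·K) is an assumed standard dry-air figure):

```python
c_v = 717.0        # specific heat of dry air at constant volume, J/(kg K) -- assumed value
work_done = 500.0  # P*dV per kg as the parcel expands, J/kg (illustrative number)

# [WG3a]: Cv dT - P dV = 0  =>  the expanding parcel *warms* (unphysical)
dT_wg3a = work_done / c_v

# Correct adiabatic form: Cv dT + P dV = 0  =>  the expanding parcel cools
dT_correct = -work_done / c_v

print(dT_wg3a, dT_correct)  # ~ +0.70 K vs ~ -0.70 K
```

The first result has the parcel warming while it spends energy on expansion, which is exactly the first-law violation described above.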

### Under the New Paradigm – Gravity x 2

This problem won’t appear so obvious, which is probably why William Gilbert makes the mistake himself.

In the list of 5 equations, I wrote:

dQ = C_{v}dT + pdV ….[4a]

This is for dry atmospheres, to keep it simple (no **Ldq** term for water vapor condensing). If you check the *Maths Section* at the end, you can see that using [4a] we get the result that everyone agrees with for the lapse rate.

I **didn’t** write:

dQ = C_{v}dT + Mgdz + pdV ….[should this instead be 4a?]

[*Note that my equations consider 1 mole of the atmosphere rather than 1 kg which is why “M” appears in front of the gdz term*].

So how come I ignored the effect of gravity in the atmosphere yet got the correct answer? Perhaps the derivation is wrong?

The effect of gravity already shows itself via the increase in pressure as we get closer to the surface of the earth.
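Integrating the hydrostatic equation [1] with the ideal gas law [2] for an isothermal layer gives p(z) = p₀·exp(−Mgz/RT), so gravity sets the pressure profile directly. A sketch (the mean layer temperature chosen here is an illustrative assumption):

```python
import math

R = 8.314       # gas constant, J/(mol K)
M = 0.029       # molar mass of dry air, kg/mol
g = 9.8         # acceleration due to gravity, m/s^2
T = 260.0       # representative mean layer temperature, K (assumed)
p0 = 101325.0   # surface pressure, Pa

H = R * T / (M * g)                 # scale height, roughly 7.6 km for these values
p_5km = p0 * math.exp(-5000.0 / H)  # pressure at 5 km altitude

print(H, p_5km)
```

Pressure falls to roughly half its surface value by 5 km without any explicit gdz term: gravity is already there, inside dp/dz = −ρg.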

Atmospheric physics has **not** been ignoring the effect of gravity and making elementary mistakes. Now for the proof.

If you consult the Maths Section, near the end we have reached the following equation and not yet inserted the equation for the first law of thermodynamics:

pdV – Mgdz = (C_{p}-C_{v})dT ….[10]

Using [10] and “my version” of the first law I successfully derive dT/dz = -g/cp (the right result). Now we will try using William Gilbert’s equation [WG3], with Ldq = 0, to derive the dry adiabatic lapse rate.

0 = CvdT + gdz – PdV ….[WG3b]

and rewriting for one mole instead of 1 kg (and using my terms, see note 1):

pdV = C_{v}dT + Mgdz ….[WG3c]

Inserting WG3c into [10]:

C_{v}dT + Mgdz – Mgdz = (C_{p}-C_{v})dT ….[11]

which becomes:

C_{v} = (C_{p}-C_{v}) ↠ **C_{p} = 2C_{v} ….[11a]**

A New Paradigm indeed!

Now let’s fix up the sign error in WG3 and see what result we get:

0 = CvdT + gdz **+** PdV ….[WG3d]

and again rewriting for one mole instead of 1 kg (and again using my terms, see note 1):

pdV = -C_{v}dT – Mgdz ….[WG3e]

Inserting WG3e into [10]:

-C_{v}dT – Mgdz – Mgdz = (C_{p}-C_{v})dT ….[12]

which becomes:

-C_{v}dT – 2Mgdz = C_{p}dT – C_{v}dT ….[12a]

and canceling the -C_{v}dT term from each side:

-2Mgdz = C_{p}dT ….[12b]

So:

dT/dz = -2Mg/C_{p}, and because specific heat capacity, c_{p} = C_{p}/M

**dT/dz = -2g/c_{p} ….[12c]**

**The result of "correctly including gravity" is that the dry adiabatic lapse rate ≈ -19.6 K/km.**

Note the factor of 2. This is because we are now including gravity twice. The pressure in the atmosphere reduces as we go up – this is because of gravity. When a parcel of air expands due to its change in height, it does work on its surroundings and therefore reduces in temperature – adiabatic expansion. Gravity is already taken into account with the hydrostatic equation.
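Numerically, the double-counted version fails immediately against observation. A sketch using the same assumed c_p as before:

```python
g = 9.8       # m/s^2
c_p = 1004.0  # J/(kg K), assumed standard dry-air value

dalr = g / c_p * 1000.0          # correct magnitude, ~9.8 K/km
dalr_gravity_twice = 2.0 * dalr  # the "gravity included twice" result, ~19.5 K/km

print(dalr, dalr_gravity_twice)
```

Measured tropospheric lapse rates average about 6.5 K/km, with the dry adiabat near 9.8 K/km as the upper bound for stable dry air; nothing remotely like 19.6 K/km is observed.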

### The Physics of Hand-Waving

The author says:

As we shall see, PV work energy is very important to the understanding of this thermodynamic behavior of the atmosphere, and the thermodynamic role of water vapor condensation plays an important part in this overall energy balance.

But this is unfortunately **often overlooked or ignored in the more recent climate science literature**. The atmosphere is a very dynamic system and cannot be adequately analyzed using static, steady state mental models that primarily focus only on thermal energy.

Emphasis added. This is an unproven assertion because it comes with no references.

In the next stage of the “physics” section, the author doesn’t bother with any equations, making it difficult to understand exactly what he is claiming.

Keeping this gravitational steady state equilibrium in mind, let’s look again at what happens when latent heat is released (condensation) during air parcel ascension.

Latent heat release immediately increases the parcel temperature. But that also results in rapid PV expansion which then results in a drop in parcel temperature. Buoyancy results and the parcel ascends and is driven by the descending pressure profile created by gravity.

The rate of ascension, and the parcel temperature, is a function of the quantity of latent heat released and the PV work needed to overcome the gravitational field to reach a dynamic equilibrium. The more latent heat that is released, the more rapid the expansion / ascension. And the more rapid the ascension, the more rapid is the adiabatic cooling of the parcel. Thus the PV/thermal energy ratio should be a function of the amount of latent heat available for phase conversion at any given altitude. The corresponding physics shows the system will try to force the convecting parcel to approach the dry adiabatic or “gravitational” lapse rate as internal latent heat is released.

For the water vapor remaining uncondensed in the parcel, saturation and subsequent condensation will occur at a more rapid rate if more latent heat is released. In fact if the cooling rate is sufficiently large, super saturation can occur, which can then cause very sudden condensation in greater quantity. Thus the thermal/PV energy ratio is critical in determining the rate of condensation occurring. The higher this ratio, the more complete is the condensation in the parcel, and the lower the specific humidity will be at higher elevations.

I tried (unsuccessfully) to write down some equations to reflect the above paragraphs. The correct approach for the author would be:

- A. Here is what atmospheric physics states now (with references)
- B. Here are the flaws/omissions due to theoretical consideration i), ii), etc
- C. Here is the new derivation (with clear statement of physics principles upon which the new equations are based)

One point I think the author is claiming is that the speed of ascent is a critical factor. Yet the equation for the moist adiabatic lapse rate doesn’t allow for a function of time in the equation.

The (standard) equation has the form (note 2):

dT/dz = -g/c_{p} {[1+Lq*/RT]/[1+βLq*/c_{p}]} ….[13]

where q* is the saturation specific humidity and is a function of p & T (i.e. not a constant), and β = 0.067/°C. (See, for example: *Atmosphere, Ocean & Climate Dynamics* by Marshall & Plumb, 2008)
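As a sketch of why equation [13] has no time dependence, here is a direct evaluation for warm, saturated surface conditions. The saturation specific humidity q* is estimated from an integrated Clausius–Clapeyron approximation, which is my assumption for illustration rather than anything taken from the paper:

```python
import math

g, c_p = 9.8, 1004.0     # m/s^2; J/(kg K), assumed dry-air value
L = 2.5e6                # latent heat of vaporisation, J/kg
R_d, R_v = 287.0, 461.5  # specific gas constants for dry air and water vapour, J/(kg K)
beta = 0.067             # 1/K, as given in the text

T = 300.0                # parcel temperature, K (illustrative)
p = 101325.0             # pressure, Pa

# Approximate saturation vapour pressure (integrated Clausius-Clapeyron)
e_s = 611.0 * math.exp((L / R_v) * (1.0 / 273.15 - 1.0 / T))
q_star = 0.622 * e_s / p  # saturation specific humidity, approximate form

# Equation [13], moist adiabatic lapse rate
dTdz_km = -(g / c_p) * (1.0 + L * q_star / (R_d * T)) / (1.0 + beta * L * q_star / c_p) * 1000.0
print(dTdz_km)  # roughly -3.4 K/km for these inputs
```

Every input is a state variable (p, T); the speed of ascent never enters.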

And this means that if the ascent is – for example – twice as fast, the amount of water vapor condensed at any given height will still be the same. It will happen in half the time, but why will this change any of the thermodynamics of the process?

It might, but it’s not clearly stated, so who can determine the “new physics”?

I can see that something else is claimed to do with the ratio CvdT/pV but I don't know what it is, or what is behind the claim.

Writing the equations down is important so that other people can evaluate the claim.

And the **final “result”** of the hand waving is what appears to be the crux of the paper – *more humidity at the surface will cause so much “faster” condensation of the moisture that the parcel of air will be drier higher up in the atmosphere*. (Where “faster” could mean dT/dt, or could mean dT/dz).

Assuming I understood the claim of the paper correctly it has not been proven from any theoretical considerations. (And I’m not sure I **have** understood the claim correctly).

### Empirical Observations

The heading is actually “Empirical Observations to Verify the Physics”. A more accurate title is “Empirical Observations”.

The author provides 3 radiosonde profiles from Miami. Here is one example:

*Figure 1 – “Thermal adiabat” in the legend = “moist adiabat”*

With reference to the 3 profiles, a higher surface humidity apparently leads to complete condensation at a lower altitude.

This is, of course, interesting. **This would mean a higher humidity at the surface leads to a drier upper troposphere**.

But it’s just 3 profiles. From one location on two different days. Does this prove something or should a few more profiles be used?

A few statements that need backing up:

The lower troposphere lapse rate decreases (slower rate of cooling) with increasing system surface humidity levels, as expected. But the differences in lapse rate are far less than expected based on the relative release of latent heat occurring in the three systems.

What equation determines “than expected”? What result was calculated vs measured? What implications result?

The amount of PV work that occurs during ascension increases markedly as the system surface humidity levels increase, especially at lower altitudes.

How was this calculated? What specifically is the claim? Equation [4a], under adiabatic conditions, with the addition of latent heat, reads:

C_{v}dT + Ldq + pdV = 0 ….[4b]

Was this equation solved from measured variables of pressure, temperature & specific humidity?

Latent heat release is effectively complete at 7.5 km for the highest surface humidity system (20.06 g/kg) but continues up to 11 km for the lower surface humidity systems (18.17 and 17.07 g/kg). The higher humidity system has seen complete condensation at a lower altitude, and a significantly higher temperature (−17 ºC) than the lower humidity systems (∼ −40 ºC) despite the much greater quantity of latent heat released.

How was this determined?

If it’s true, perhaps the highest humidity surface condition ascended into a colder air front and therefore lost all its water vapor due to the lower temperature?

Why is this (obvious) possibility not commented on or examined?

### Textbook Stuff and Why Relative Humidity doesn’t Increase with Height

The radiosonde profiles in the paper are not necessarily following one “parcel” of air.

Consider a parcel of air near saturation at the surface. It rises, cools and soon reaches saturation. So condensation takes place, the release of latent heat causes the air to be more buoyant and so it keeps rising. As it rises water vapor is continually condensing and the air (of this parcel) will be at 100% relative humidity.

Yet relative humidity doesn’t increase with height, it reduces:

*Figure 2*

Standard textbook stuff on typical temperature profiles vs dry and moist adiabatic profiles:

*Figure 3*

And explaining why the atmosphere under convection doesn’t always follow a moist adiabat:

*Figure 4*

The atmosphere has descending dry air as well as rising moist air. Mixing of air takes place, which is why relative humidity reduces with height.

### Conclusion

The “theory section” of the paper is not a theory section. It has a few equations which are incorrect, followed by some hand-waving arguments that might be interesting if they were turned into equations that could be examined.

It is elementary to prove the errors in the few equations stated in the paper. If we use the author’s equations we derive a final result which contradicts known fundamental thermodynamics.

The empirical results consist of 3 radiosonde profiles with many claims that can’t be tested because the method by which these claims were calculated is not explained.

If it turned out that – all other conditions remaining the same – higher specific humidity at the surface translated into a drier upper troposphere, this would be really interesting stuff.

But 3 radiosonde profiles in support of this claim is not sufficient evidence.

### The Maths Section – Real Derivation of Dry Adiabatic Lapse Rate

There are a few ways to get to the final result – this is just one approach. Refer to the original 5 equations under the heading: *The Equations Required to Derive the Lapse Rate*.

From [2], pV = RT, take the differential of both sides:

↠ d(pV) = d(RT)

The left hand side expands as Vdp + pdV, and the right hand side is RdT (as R is a constant).

↠ Vdp + pdV = RdT ….[7]

Insert [5], C_{p} = C_{v} + R, into [7]:

Vdp + pdV = (C_{p}-C_{v})dT ….[8]

From [1] & [3]:

Vdp = -Mgdz ….[9]

Insert [9] into [8]:

pdV – Mgdz = (C_{p}-C_{v})dT ….[10]

From [4a], under adiabatic conditions, dQ = 0, so C_{v}dT + pdV = 0, i.e. pdV = -C_{v}dT. Substituting into [10]:

-C_{v}dT – Mgdz = C_{p}dT – C_{v}dT

and adding C_{v}dT to both sides:

-Mgdz = C_{p}dT, or dT/dz = -Mg/C_{p} ….[14]

and specific heat capacity, c_{p} = C_{p}/M, so:

dT/dz = -g/c_{p} ….[14a]

The correct result, stated as equation [6] earlier.
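For anyone who wants the algebra checked mechanically, this sketch reproduces the final substitution with sympy, treating the differentials dT and dz as ordinary symbols:

```python
import sympy as sp

dT, dz, M, g, Cp, Cv = sp.symbols('dT dz M g C_p C_v')

p_dV = -Cv * dT                                   # [4a] with dQ = 0: Cv dT + p dV = 0
eq10 = sp.Eq(p_dV - M * g * dz, (Cp - Cv) * dT)   # equation [10]

dT_solution = sp.solve(eq10, dT)[0]               # dT in terms of dz
lapse = sp.simplify(dT_solution / dz)             # dT/dz

# Confirms dT/dz = -M g / C_p, i.e. -g/c_p in mass-specific form
assert sp.simplify(lapse + M * g / Cp) == 0
print(lapse)
```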

### Notes

**Note 1**: Definitions in equations. WG2010 has:

- **P** = pressure, while this article has **p** = pressure (lower case instead of upper case)
- C_{v} = heat capacity for **1 kg**, while this article has C_{v} = heat capacity for **one mole**, and c_{v} = heat capacity for 1 kg

**Note 2**: The moist adiabatic lapse rate is calculated using the same approach but with an extra term, Ldq, in equation 4a, which accounts for the latent heat released as water vapor condenses.

on June 12, 2011 at 12:14 pm | Bryan

SoD,

The internal energy (U) is a state function and contains any relevant energy in the given situation:

- kinetic energy of molecules (but not motion of the centre of mass)
- potential energy (in this case gravitational)
- perhaps chemical and electrical energy (if appropriate) – here the mL factor (latent heat)

and so on.

Perhaps W C Gilbert thought that to expand the Internal Energy collection (by removing the brackets) so to speak might illustrate the mechanics of the system.

I would agree with you that great care needs to be exercised here to avoid double counting.

However I thought that the main contribution that W C Gilbert and Hans Jelbring brought to the debate is that the greenhouse effect plays no part whatsoever in the dry adiabatic lapse rate.

That this is now accepted by all rational people with the exception of some unreconstructed greenhouse diehards is due to their popularising of the issue.

on June 12, 2011 at 3:08 pm | Neal J. King

Bryan,

I’ve never seen anyone claim that the dry adiabatic lapse rate has anything to do with the greenhouse effect. I’ve seen several derivations; none of them even mention carbon dioxide.

So that is no contribution of Gilbert & Jelbring.

on June 12, 2011 at 4:00 pm | Bryan

Neal J. King,

When did you see your first derivation of the dry adiabatic lapse rate?

I can assure you that “out there”, several commentators who consider themselves quite expert are astonished to find that the “greenhouse effect” plays no part in the basic temperature profile of the troposphere.

I think that this is the first post that SoD has headlined it.

W C Gilbert and Leonard Weinstein were major contributors to a posting on the atmosphere of Venus.

SoD did not raise any issues with W C Gilbert then, as far as my memory goes.

Perhaps SoD will give you a link to that posting, I can't remember the exact title.

on June 12, 2011 at 4:15 pm | Neal J. King

Bryan,

The first derivation I saw was around 1977, from a book first published in 1936: Fermi's book on thermodynamics. A simple two-page derivation, based only on the adiabatic gas law and hydrostatic equilibrium.

on June 12, 2011 at 5:09 pm | Bryan

Neal J. King,

I agree there was nothing new in the thermodynamics calculation of Gilbert & Jelbring.

Rather it has been “overlooked” by people following a basic climate science course.

Otherwise I cannot account for the astonishment I find when it is brought to the attention of greenhouse enthusiasts.

My clinching argument if the point comes up is to say that even SoD accepts this fact.

I was hoping you could have found it in an early climate science textbook.

The famous historical “greenhouse” experiment by R W Wood was similarly overlooked by some quite eminent authors of thermodynamics textbooks.

I have some textbook examples of an actual greenhouse being used to describe the greenhouse effect.

Even now some people (thankfully fewer and fewer) think that R W Wood got it wrong.

on June 12, 2011 at 5:21 pm | Neal J. King

Bryan,

From what SoD documents, there is not much that is right about G&J’s calculation.

I have stumbled across other derivations of the adiabatic lapse rate, but I’ve never seen any suggestion that it depended in any way on the greenhouse effect.

on June 12, 2011 at 10:03 pm | omnologos

It would be nice to get SoD to confirm (or deny) the absence of any role by the "greenhouse effect" in determining the temperature profile in the troposphere (apart, perhaps, from effects on the height of the troposphere itself?).

on June 12, 2011 at 10:49 pm | scienceofdoom

omnologos:

The subject gets confused by people wanting simple answers to non-precise questions.

Many times the question is asked by people who don't grasp the difference between the more specific questions that can be posed. I am not making any comment about you by the way – just the reason for my lengthy answer.

1. If you asked the question: "Is the adiabatic lapse rate determined by any radiation considerations?" – the answer is *No*, radiation has absolutely nothing to do with the adiabatic lapse rate.

2. To the precise question you actually asked, maybe without realizing it, the answer is – *Deny* (= deny the absence of any role by the "greenhouse effect" in determining the temperature profile in the troposphere apart, perhaps, from effects on the height of the troposphere itself).

3. To the question you probably meant to ask – and let me rephrase it – "Is dT/dz of the atmosphere determined by the greenhouse effect?" – the answer is *Yes, but not in the way you might think*. Without a greenhouse effect, dT/dz is on average = 0; with a greenhouse effect, dT/dz = the adiabatic lapse rate.

4. To a more precise phrasing of your question "Would the top of the troposphere be at the surface with no greenhouse effect?" – the answer is *Yes*, i.e., there would be no troposphere (although a more difficult subject), which is why I answered "Deny" to your actual question.

Note that points 3 & 4 depend on the definition of the troposphere, which is usually taken to be the region where convection operates.

To understand the answers, first a citation from an earlier article, followed by a more lengthy explanation.

From Things Climate Science has Totally Missed? – Convection

With no radiatively-active gases it is difficult to see how the lapse rate can be sustained. The lapse rate is not determined by any radiative physics. But if no process exists to *initiate* convection then you will not see the adiabatic lapse rate operating.

The stratosphere, for example, does not follow the adiabatic lapse rate. Why not? The same laws of physics operate in the stratosphere. The reason is because radiation is a more effective mover of energy than convection. So convection does not take place (see note 1).

So in an atmosphere where no gases absorb longwave radiation, the question becomes “what initiates convection?”

At the moment, in our atmosphere, convection is a *net* mover of energy from the surface into the troposphere. And radiation from the atmosphere to the surface returns this energy.

If the surface radiated all of the absorbed solar radiation back into space (as would eventually happen with no "greenhouse" gases), then *if* convection became a net mover of energy from the surface, what returns this energy to the surface?

There was an interesting discussion about this in Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion. You can see the view of Leonard Weinstein who (massive over-simplification coming, sorry Leonard, just trying to keep my explanation short) sees the mechanical energy of planetary rotation plus differential solar heating as providing energy to initiate & sustain convection (both up and down obviously).

Sorry for my long answer to your short question. Perhaps I should have written an article.

*Note 1* – Some convection does take place in the stratosphere. But as a general rule the vertical convection that we see in the troposphere does not operate in the stratosphere.

*Note 2* – Often I write "the inappropriately-named 'greenhouse' effect" and nearly always use quotes around "greenhouse" effect. Due to multiple quote marks I removed them for readability.

on June 13, 2011 at 12:21 am | omnologos

Thank you SoD for your kind answer(s). Somehow I think if there were more of you and less of realclimaters or skepticalscientists there would be no controversy at all. 😎

I shall read the other link, most likely will be back for more q’s.

on June 12, 2011 at 11:30 pm | scienceofdoom

And the short answer:

The adiabatic lapse rate describes what happens to air that is convected.

The adiabatic lapse rate does not determine whether convection takes place.

If no convection takes place, dT/dz will therefore not be determined by the adiabatic lapse rate. If convection does take place, dT/dz = adiabatic lapse rate.

Adiabatic lapse rate is not determined by radiative processes.

Convection (whether or not it takes place) is affected by radiative processes.

Note: by common convention we write dT/dz = -Γ, where Γ is a positive number. Γ = 9.8 K/km for dry air and between about 3 K/km and 9.8 K/km for moist air.

Γ is the greek capital Gamma.

on June 13, 2011 at 2:20 am | williamcg

SOD,

Thank you for spending the time to review my paper. Feedback is always welcome (with the exception of the water vapor kind). This has also been very helpful to me in better understanding the broken paradigm that so many of the radiation heat transfer aficionados seem to suffer from. It appears to be simply a general lack of understanding of basic thermodynamics outside of radiation heat transfer. So be it.

I began the section on “Physics” with people such as yourself in mind when I said:

There is nothing controversial in this entire section – it is just basic physics. It has been reviewed by several physicists both before and after publication and the only criticism I received was that it was too basic to be included in a scientific paper. But I felt it was necessary to keep it in since it would probably be read by some “Climate Scientists” and they would need all the help they could get. It seems that I was right. (Note: there is a misstatement on pg. 266 where I said for adiabatic processes dU = 0. I tried to correct it before publication but was too late. But it does not affect the validity of the rest of the discussion).

I will not try to address all your points – they are a jumbled mess – but I will address a few key items to help you get back on track.

First, you seem to tie yourself in knots over the sign of the work term in the first law. It is very simple and you explained it correctly in your discussion. The term is positive if work is being done on the system *by* the surroundings. The term is negative if the system is doing work *on* the surroundings. I show a negative sign since I am dealing with upward convection and the parcel is expanding and doing work on the surroundings. If I was dealing with downward convection (subsidence) I would show a positive sign for work energy since the parcel is being compressed by the surroundings. It's all just common sense. Unfortunately the bulk of your discussion on this point is non-sense.

Did you actually read the paper? Just in case, here is a very brief synopsis. If the system (parcel) does work on the surroundings, the parcel expands (−PdV). This energy has to come from somewhere and since the process is assumed to be adiabatic, the energy will come from CvdT and the temperature will *decrease*. But the parcel is now more buoyant and it will rise; and gdz will increase, offsetting the loss in CvdT. Thus, according to the first law dU = 0. But you don't "believe" in "gdz" so it is pointless to spend more time on this. Once you get things figured out, and know the difference between dQ and dU, I can spend some time going into "free energy" and maybe you will understand better.

Second, you seem to have had a problem with my statement:

You said:

Let me help you. Here is my equation (3):

dU = CvdT + Ldq + gdh – PdV (3)

Remove the latent heat term (the parcel is dry) and you get my equation (2):

dU = CvdT + gdh – PdV (2)

But

CvT + PdV = CpT

How? (I skipped that part in my paper because I thought everyone knew it already)

Cp – Cv = R and

PV = nRT

Substituting:

Cp – Cv = PV/T

CpT – CvT = PV

CpT = CvT + PV

You can ignore the (n) if you convert the terms to intensive units (J/Kg). Also you need to use the correct sign for PV (work is being done on the parcel in this case). Now equation (2) becomes:

dU = CpdT + gdh

But since dH = 0 (read the paper);

dT/dh = – g/Cp

Voila!

I have no idea what you were doing when you came up with Cp = Cv/2. But it shows you need a refresher course in basic physics (why “gdh” is important for instance – that pesky gravity thing). Now, if you want to derive the same relationship using 5 equations instead of 1 (first law), go right ahead. But I would prefer going through the Panama Canal rather than round Cape Horn (I hear it is a very treacherous journey).

By the way, the first law is generally written in the form of the next to last equation above: dU = CpdT + gdh. I purposely used the Cv version to point out the role of PV work in atmospheric thermodynamics. The energy distribution in the atmosphere is constantly shifting between the four terms in equation (3); thermal energy (CvdT), potential thermal energy (Ldq and gdh) and work energy (PdV). The radiation heat transfer aficionados generally forget (or are unaware of) this basic fact. But it is the distribution of these forms of energy at any point in time that determines the radiative properties of the atmosphere. (Repeat that last sentence 10 times before you go to bed at night – and continue to repeat each night until it has sunk in).

Just a few other comments concerning your article and I will return to my Sunday plans:

• I strongly advise you not to use molar quantities when dealing with thermodynamic equations. Molar quantities are useful in balancing chemical equations but are pretty worthless elsewhere (one of my degrees is in chemistry and I know from experience). Using molar quantities in thermodynamic equations can lead to awkward results – as you have aptly demonstrated.

• You make the following statement concerning one of my graphs:

No! The “thermal adiabat” is defined as I describe it in the paper. If you would read and understand that very salient point, you may understand the rest of it.

• As with the Miskolczi paper, you seem to struggle with the concept of “empirical”. As an example, you said this:

It was determined directly from the radiosonde data – it’s called “measurement”. That is also why this section was titled “Empirical Observations”. You do realize that a lot of the equations you worship in the textbooks were actually the result of “empirical observations” don’t you?

One last comment. You say:

Yes, that is the whole point! Reread the paper. Good luck!

Bill Gilbert

P.S., Contrary to your statement, the dry adiabatic lapse rate does not require convection to exist. It can exist in a static as well as a dynamic atmosphere. Convection is the means for maintaining the dry adiabatic lapse rate, not for creating it. It’s that gravity thing again. But I will save that discussion for another time. But if anyone wants to pursue the topic it can be found in the SOD thread starting at:

https://scienceofdoom.com/2010/06/22/venusian-mysteries-part-two/#comment-3472

For some reason SOD does not usually refer back to this thread.

on June 13, 2011 at 2:49 am |Bryan

SoD says

……”And the (less well-known) equation which links heat capacity at constant volume with heat capacity at constant pressure (derived from statistical thermodynamics and experimentally verifiable):

Cp = Cv + R ….[5]”…..

In fact this derivation is done in one page from the classical thermodynamic theory as taught to first year physics students.

No need to complicate matters here.

See University Physics Young and Freedman page 547
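For anyone who prefers numbers to citations, the relation is easy to sanity-check with tabulated dry-air values. A sketch of mine; the specific constants are assumed from standard tables, not taken from the thread:

```python
# Sanity check of Cp = Cv + R for dry air, using commonly tabulated
# specific (per-kg) values -- assumed here for illustration:
cp = 1005.0  # specific heat at constant pressure, J/(kg K)
cv = 718.0   # specific heat at constant volume, J/(kg K)
R = 287.0    # specific gas constant for dry air, J/(kg K)

# Cp - Cv should reproduce R
print(cp - cv)  # 287.0
assert abs((cp - cv) - R) < 1e-9
```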

on June 13, 2011 at 2:51 am |scienceofdoom

williamcg,

Thanks for replying. I will take it one point at a time.

This is my first point of difference.

The fact that you explain it differently in words doesn’t mean your sign error has gone away in the equation. We both agree with the physics explanation of adiabatic cooling. We have a different equation. It is easy to prove which one is correct.

When dV of the parcel increases (expansion), what happens to dT? Well, the parcel cools, so dT is negative.

dV +ve, dT -ve.

An equation with CvdT + pdV = 0 fulfils this.

An equation with CvdT – pdV = 0 cannot fulfil this.

That’s a sign error in your equation.
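The sign question settles itself numerically. A minimal sketch of my own, using assumed textbook constants for dry air: step an adiabatic expansion forward under CvdT + PdV = 0 and watch the temperature fall as the volume grows.

```python
# Stepping an adiabatic expansion with Cv*dT + P*dV = 0 (per unit mass,
# ideal gas, assumed dry-air constants): dV > 0 forces dT < 0.
R = 287.0    # specific gas constant, J/(kg K)
cv = 718.0   # specific heat at constant volume, J/(kg K)

T = 288.0                # starting temperature, K
V = R * T / 100000.0     # specific volume at 1000 hPa, m^3/kg
dV = 1e-5                # small expansion step, m^3/kg

for _ in range(1000):
    P = R * T / V        # ideal gas law
    dT = -P * dV / cv    # Cv*dT + P*dV = 0  =>  dT < 0 when dV > 0
    T += dT
    V += dV

print(T < 288.0)  # True: the expanding parcel has cooled
```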

I haven’t said you don’t understand adiabatic cooling, I have said your equation is incorrect.

Would you like to explain how CvdT – pdV = 0 can be valid with expansion and cooling?

on June 13, 2011 at 3:11 am |scienceofdoom

And for the second step..

I was substituting your equation 2 [ 0 = CvdT + gdh – PdV ] into the correct derivation.

Thank you for showing how you got your result and for your kind words. Both have made my day.

Just to get other readers thinking, I will wait to see who claims first prize for spotting your rather large error – in the section between “Here is my equation (3):” and “Voila!”

on June 13, 2011 at 6:32 am |scienceofdoom

The Panama Canal method: Recommended for all budding maths students who want to reduce the time to the result, perhaps compromising very slightly on mathematical integrity, but, heh, let’s not get hung up on details..

Your equation 2 for dry conditions: CvdT + gdh – PdV = 0 … [2]

You explain your derivation of the dry adiabatic lapse rate thus – and I’ve got to tell you, it is awesome:

[Note: I added “[4]” as a reference]

So you substitute PdV in [2] with the value for PV from [4]. Tidy!

First, the sign in [2] changes as I already said it should, so equation 2 is rewritten as:

CvdT + gdh + PdV = 0 [2a]

Substitute via the Panama Canal method:

CvdT + gdh + CpT – CvT = 0 [2a]

And using said “Panama Canal” method:

Cv.dT – Cv.T = 0? (I’m guessing here)

⇒ gdh = – CpT, we’ll just call this one CpdT, via the “let’s be friends” argument and VOILA:

gdh = -CpdT and so we have proven dT/dh = -g/Cp !!!!!!!

I remember (a long time ago) a friend of mine telling me about one of his first year maths classes on vectors, where at the end of the first year course, with half an hour remaining in the lecture, the lecturer said “Ok, so we have a bit of time left, shall we start our end of term party, or are there any questions?”

A slight pause and one guy shouted out from the back – “Yes, I’ve got a question. All term I’ve noticed that you put a line under some of the letters, but not under others.. why is that?”

It’s a hilarious maths joke but not that funny for anyone who wasn’t there, or who doesn’t know how vectors are written in maths.. But I was reminded of it when I saw the Panama Canal method.

William, what do you think the “d’s” actually mean in all these equations?

I agree that it’s quicker to just drop them wherever convenient for expediency, but – and call me old-fashioned – I think “VOILA” is a little premature.

P x V does not equal P x dV.

Cv x T does not equal Cv x dT.

If you check the derivation in the Maths Section of my article you will find that it is mathematically correct and follows sound thermodynamic principles.

And the result that we both agree on can only be derived with the first law of thermodynamics written as:

Cv.dT + P.dV = 0

– for reasons already explained.

Unless – and you have made my month with this – you use the Panama Canal method.

on June 13, 2011 at 3:56 am |scienceofdoom

And on the third point of empirical observations.

First, thank you again for your kind words, it does you credit.

Let’s take the first two ridiculous questions I asked demonstrating my confusion over the concept of empirical measurements:

“Less than expected” is not a measurement term I have seen before. I always understood it to mean a comparison between an empirical result and a theory.

What value did you measure and what theoretical value did you compare it with? How did you calculate this theoretical value? What are the actual numbers – was it like 99% of the expected theoretical value or 50% of the value – for example?

Forgive my huge confusion over the meaning of empirical. Let’s review my 2nd question on this topic:

Obviously, from your response you measured PV work. What model radiosonde was it?

Forgive my confusion, I thought you had calculated this value from some other measured parameters and wondered how you did the calculation. Now that I know the radiosonde did the measurement, all is well.

My last question that you chose was less important. It was really about how you determined that “complete condensation” had been reached? I’m not sure what “complete condensation” means. Specific humidity = 0? Specific humidity = saturation humidity for that pressure and temperature? If so, how did you calculate it?

I’m just curious about these things and like to understand them.

on June 13, 2011 at 7:32 am |Bryan

SoD says

……”And the (less well-known) equation which links heat capacity at constant volume with heat capacity at constant pressure (derived from statistical thermodynamics and experimentally verifiable):

Cp = Cv + R ….[5]”…..

In fact this derivation is done in one page from the classical thermodynamic theory as taught to first year physics students.

No need to complicate matters here.

In fact it is derived in 3 lines.

SoD it is quite clear that you have not completed a first year physics course or you would have known that.

You are well outside your comfort zone.

Stop before you make more basic errors.

on June 13, 2011 at 10:05 am |havinasnus

Bryan, are you quite sure that “less well-known” isn’t meant to be ironic?

on June 13, 2011 at 10:09 pm |Neal J. King

Bryan,

Actually, if you’ve been following this thread, you would notice that SoD is more than holding his own with respect to the physics.

Yes, he should have remembered the

Cp = Cv + R

with less strain, but thermo is a somewhat unintuitive subject for most physicists, because it is purely mathematical.

I recall the remark made by one unquestionably great physicist as he was cleaning off some thermodynamic equations from the blackboard for a lecture: “That’s the subject where you can only remember the equations when you’re taking the class, or teaching the class.”

on June 13, 2011 at 10:43 pm |scienceofdoom

Actually I thought many of my readers wouldn’t know it.

Most articles which point out flaws in “popular” papers get all manner of criticism. People challenge anything and everything. Given that this E&E paper has made a dog’s breakfast of fundamental physics but reaches a popular conclusion I expected people to challenge:

a) the ideal gas law – the atmosphere is not an ideal gas

b) hydrostatic equilibrium – the atmosphere is not static

c) where does that Cp = Cv + R equation come from? you made that up!

..

and so on (even though the author also needs to use them).

In fact I even started writing a bunch of stuff on why the hydrostatic equilibrium equation was valid, and why the adiabatic condition was valid (but deleted it due to length). Such is the tribal mentality.

I look forward to further ravaging attacks.

on June 13, 2011 at 10:45 pm |scienceofdoom

And Bryan is usually the one who challenges basic science. Kirchhoff’s law, Stefan-Boltzmann law, ability of a body to absorb radiation.. let’s not draw up a list and point to past sins. It’s a happy day when Bryan accepts stuff in textbooks. Let’s leave it there.

on June 14, 2011 at 10:55 am |Bryan

Neal J. King

You say

…”but thermo is a somewhat unintuitive subject for most physicists, because it is purely mathematical.”…..

I would disagree with that.

Most actual physicists will acquire a realistic feeling for the topic they are dealing with.

Thermodynamics is not remote from the real world.

Practitioners in the field will know well the formulas and also the background that gives rise to the formulas.

They will have a feel for when some formula should be accurate and when it is at best an approximation.

W C Gilbert spent a lifetime working with thermodynamics in the real world and has demonstrated exactly that here.

Contrast this with SoD

It appears that he has never taken a physics course at university even at a first year level.

He is like an American tourist who goes to France with no knowledge of the language but hopes to get by with a tourist phrase book.

Some of the time it works but if the transaction departs from the predictable he is lost.

For an in depth examination of this topic read

https://scienceofdoom.com/2010/06/22/venusian-mysteries-part-two/#comment-3472

This set of exchanges showed the SoD site at its best.

The dialog on all sides of the discussion contributed in a non polemical way and there even appeared to be a consensus at the end.

Months later SoD throws all that constructive dialog aside and decides to launch a “hatchet” job on W C Gilbert.

This in a way was so predictable and follows a path as SoD puts “right” the “mistakes” of Gerlich & Tscheuschner, Nicol, Miskolczi and so on.

SoD drops some clangers along the way but he still stumbles on.

Ask him if it’s possible that heat can transfer spontaneously from a colder object to an object at a higher temperature.

on June 14, 2011 at 3:45 pm |Neal J. King

Bryan:

wrt the unintuitive nature of thermo, you say: “I would disagree with that.”

You can disagree if you like. My quote was from Richard Feynman – and I heard it first-hand: “That’s the subject where you can only remember the equations when you’re taking the class, or teaching the class.”

Gilbert’s ability to manipulate basic equations seems to be highly questionable – as amply demonstrated by SoD’s deconstruction of it.

The paper by G&T is laughable; I’m still in discussion with Miskolczi, so I won’t talk about that now.

And, yes, it’s obviously possible to transfer heat spontaneously from a colder to a hotter body: So long as the NET transfer is from hotter to colder. Put a dim lightbulb next to a bright lightbulb: Both will transfer light & heat to the other; but the NET transfer will be from the hotter to the colder.

on June 14, 2011 at 5:09 pm |Bryan

Neal J. King

So many mistakes, so little time!

“And, yes, it’s obviously possible to transfer heat spontaneously ”

What is your definition of heat?

on June 15, 2011 at 7:09 am |Bryan

Neal J. King

….”And, yes, it’s obviously possible to transfer heat spontaneously from a colder to a hotter body: “……

Not according to Clausius and his famous second law.

Perhaps you have a different definition of heat.

Heat as defined in the classic thermodynamics textbooks never moves spontaneously from a lower temperature object to a higher temperature object.

on June 15, 2011 at 12:16 pm |Neal J. King

Bryan,

Your interpretation of Clausius is quite wrong.

Just to quote the wiki article on Clausius’ statement on thermodynamics:

“No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.”

Note the words “SOLE RESULT”: This fits in precisely with what I was saying before. There is no problem with cooler objects transmitting heat to warmer objects, provided the warmer object is transmitting MORE heat to the cooler. In fact, when this heat is transmitted by radiation, it’s practically unavoidable.
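The two-lightbulb example can be put in numbers. An illustration of mine, with assumed filament temperatures: each blackbody surface radiates σT⁴ toward the other, so the cold one does send energy to the hot one, but the net flow runs the way Clausius requires.

```python
# Numeric version of the two-lightbulb example (my own illustration):
# each blackbody surface radiates sigma*T^4 toward the other, so the
# cold one does send energy to the hot one -- but the NET flow obeys
# Clausius and runs from hot to cold.
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
T_hot, T_cold = 2800.0, 1800.0  # assumed filament temperatures, K

to_cold = SIGMA * T_hot**4   # emitted by the hot surface
to_hot = SIGMA * T_cold**4   # emitted by the cold surface: not zero
net = to_cold - to_hot

print(to_hot > 0.0)  # True: the colder body does transfer energy
print(net > 0.0)     # True: the net transfer is hot -> cold
```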

on June 15, 2011 at 12:57 pm |scienceofdoom

For people reading Bryan’s comment:

Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics. And see the comments in that article from Bryan..

on June 15, 2011 at 1:05 pm |cynicus

Oh my, Bryan proves parallel universes really do exist…

on June 15, 2011 at 8:46 pm |Neal J. King

Bryan,

I don’t have much interest in getting caught up in word games. But basically, heat is energy transferred by methods not reversible by adjustment of an external parameter. Mathematically:

dQ = dU + dW

where:

dW = pdV + (magnetic terms) + (stretching terms) + etc.; all these work terms can be reversed by changing the sign of the parameter. Systems that include magnetic terms incorporate magnetic moments, rubber bands include a stretching term, etc.

Traditionally, modes of heat transfer are listed as conduction, convection and radiation.

The concept of heat itself is a bit fuzzy, because there is no such thing as “the amount of heat” in a system, that can be meaningfully distinguished from the “amount of energy” of the system. The old idea of “caloric” which was based on that concept died in the early days of the development of thermodynamics.

Basically, the point is that the internal energy of a system U is a well-defined state function, and the amount of work done is quantifiable in terms of changes to a macroscopic parameter (volume, magnetic field, length, etc.); and heat is what is transferred by other methods, including conduction, convection and radiation.

The other useful point is that the entropy change is:

dS ≥ dQ/T
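The entropy inequality makes the earlier Clausius point quantitative. A sketch of mine with assumed reservoir temperatures: move a quantity of heat between two reservoirs and total up the entropy change in each direction.

```python
# Clausius in numbers (my own illustration): transfer Q between two
# reservoirs and compute the total entropy change dS = -Q/T_src + Q/T_dst.
Q = 100.0                      # J moved
T_hot, T_cold = 400.0, 300.0   # assumed reservoir temperatures, K

dS_hot_to_cold = -Q/T_hot + Q/T_cold   # heat flows hot -> cold
dS_cold_to_hot = -Q/T_cold + Q/T_hot   # heat flows cold -> hot

print(dS_hot_to_cold > 0.0)  # True: allowed by the second law
print(dS_cold_to_hot < 0.0)  # True: forbidden as a sole result
```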

on June 13, 2011 at 3:01 pm |DeWitt Payne

scienceofdoom,

That’s what I used to think too before the Venus threads. You would be correct if the problem were one dimensional. But it’s not. There will be a meridional temperature gradient because high latitudes on a sphere receive less insolation. That meridional gradient leads to a meridional pressure difference gradient as altitude increases causing circulation. The end result will be a non-zero lapse rate, probably near adiabatic. The temperature difference between the poles and the equator provides the free energy for the work needed to establish and maintain a positive lapse rate (decrease in temperature with altitude).

That doesn’t explain why temperature increases with altitude in the stratosphere. The stratosphere wouldn’t exist in anything like its present form if free oxygen weren’t present. The reason the temperature inversion blocking convection exists at the tropopause is that oxygen absorbs incoming UV radiation which produces ozone which absorbs even more UV. Given the chemistry and kinetics of ozone formation leading to the ozone concentration profile in the stratosphere, the temperature in the stratosphere must increase with altitude.

on June 14, 2011 at 10:50 am |scienceofdoom

Trying to keep it simple in this reply.

I don’t think it is simple and I lean towards lack of convection in this fictitious planet. It is simple and easily experimentally verifiable to show convection in an atmosphere heated from beneath. It is less easy to demonstrate convection in an atmosphere heated from above.

But as I said in Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion, perhaps Goody & Walker have made a valid point:

on June 14, 2011 at 12:41 pm |scienceofdoom

It seems to me that Bryan (June 14, 2011 at 10:55 am) is telling everyone that he doesn’t like my conclusion but can’t explain what’s wrong with it.

If any readers can come up with something of scientific substance from Bryan’s poetry, please let me know and we can assess it.

on June 14, 2011 at 1:27 pm |Bryan

SoD says with apparent approval

….” If the circulation is sufficiently rapid, and if the air does not cool too fast by emission of radiation, the temperature will increase at the adiabatic rate. This is precisely what is observed on Venus.”…..

The derivations used by W C Gilbert used the hydrostatic condition to derive the adiabatic lapse rate.

Hydrostatic means stationary or moving at constant speed.

So the convection effect (though hard to isolate) is not part of the derivation.

The convection effect is in addition to the adiabatic lapse rate and will in fact make it depart from ALR.

The only significant part played by radiation seems to be at the TOA.

Without radiative loss there, the atmosphere would eventually be isothermal at the surface temperature

on June 14, 2011 at 2:04 pm |DeWitt Payne

scienceofdoom,

Yes we can. Venera 9 took pictures of the surface of Venus with available visible light. If I remember correctly, the average insolation at the surface is ~18 W/m², or about the level seen on Earth under heavy cloud cover.

on June 14, 2011 at 6:45 pm |scienceofdoom

That was the Goody & Walker quote from before this was known.

on June 14, 2011 at 2:11 pm |DeWitt Payne

Bryan,

Wrong. The surface of a sphere with plane parallel illumination isn’t isothermal. If the surface isn’t isothermal, then the atmosphere won’t be either. The rate of pressure decrease with altitude, atmospheric density, is a function of surface temperature. If you have a surface temperature that decreases with latitude, then the pressure at any given altitude will be higher at low latitudes than at high latitudes. A pressure difference will cause circulation. That circulation will not be restricted to high altitudes. Any vertical circulation will result in a near adiabatic lapse rate.
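DeWitt’s mechanism can be illustrated with the isothermal barometric formula (a simplification I am assuming for the sketch; real columns are not isothermal, and the temperatures chosen are round numbers of mine):

```python
# Two isothermal air columns with the same surface pressure but
# different temperatures (an assumed simplification): aloft, the warm
# column has the higher pressure, so a horizontal pressure gradient
# appears at altitude and drives circulation.
import math

g = 9.81       # m/s^2
R = 287.0      # specific gas constant for dry air, J/(kg K)
p0 = 101325.0  # surface pressure, Pa, same for both columns

def pressure(z, T):
    """Isothermal barometric formula: p(z) = p0 * exp(-g*z / (R*T))."""
    return p0 * math.exp(-g * z / (R * T))

z = 10000.0                  # m
p_warm = pressure(z, 300.0)  # low-latitude column
p_cold = pressure(z, 250.0)  # high-latitude column

print(p_warm > p_cold)  # True: pressure surfaces bulge upward where warm
```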

on June 14, 2011 at 4:31 pm |Bryan

DeWitt Payne

We are exploring the hypothetical.

If we have NO RADIATION leaving the Earth the atmosphere will tend to become increasingly isothermal.

With each new daily supply of solar energy, the Sun-facing surface temperature climbs ever higher than the night-side surface temperature.

However I agree it will never become completely isothermal.

However we are exploring the hypothetical, since radiation to the universe from TOA day and night keep the planet cool.

on June 14, 2011 at 5:11 pm |mkelly

SOD said: “a) the ideal gas law – the atmosphere is not an ideal gas”

I was not clear if you agree with this or disagree with this.

The atmosphere can be considered an ideal gas as the major constituents are far above their critical temperatures and PV = nRT for the atmosphere will yield a less than 1% error. That is a rough quote from my thermo book. Maybe I could scan it and send it.

I pointed this out to you almost two years ago and you said you had not considered it.

Using the ideal gas law, i.e. STP, the temperature is 0 C. That alone accounts for 18 degrees of the 33 degrees of the poorly named GHE.

on June 14, 2011 at 6:34 pm |DeWitt Payne

Bryan,

Could have fooled me. In fact, please cite a direct quote that at least implies that hypothetical rather than the hypothetical of an optically transparent atmosphere where all radiation is absorbed and emitted by the surface. The trivial case of a planet with no solar illumination at 2.7 K is not very interesting or controversial.

on June 14, 2011 at 10:21 pm |BryanDeWitt Payne

You cannot put restrictions on hypothetical situations.

The planet that does not radiate does not exist, except perhaps in a black hole.

To make a hypothetical non radiating planet as you say opens a can of worms.

I said.

……”The only significant part played by radiation seems to be at the TOA.”….

I should have stopped at that, since the next part does not happen;

……”Without radiative loss there, the atmosphere would eventually be isothermal at the surface temperature.”………………..

on June 14, 2011 at 6:37 pm |DeWitt Payne

mkelly,

You forgot the smiley face sarcasm tag. You can’t possibly mean that seriously.

on June 14, 2011 at 6:56 pm |scienceofdoom

mkelly:

I agree that the atmosphere is very close to an ideal gas – that’s why I use the equation. I was anticipating comments from people who don’t know that the atmosphere can be closely approximated to an ideal gas. I see these comments regularly on popular blogs.

The way you write your comment makes it sound like you explained to me the atmosphere was an ideal gas and I had never considered it..

You claimed some strange point about PV=nRT being responsible for the “greenhouse” effect and I correctly said I had not considered it up to that point. That’s true because it was such a strange idea. I have a vague memory of later explaining what was wrong with the idea that you proposed. But I can’t now remember what your strange idea was.

on June 14, 2011 at 7:18 pm |scienceofdoom

mkelly:

I tracked down some old comments, under username “Mike Kelly”. In among lots of other statements you made a claim about the effect of the ideal gas law..

This quote:

And this quote:

And this explanation:

I have now seen many other people with similar mistaken claims.

You are claiming that pressure accounts for the earth’s surface temperature?

You can see an explanation of why that is wrong in Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion – under the heading “Introductory Ideas – Pumping up a Tyre“.

Increased pressure can cause a smaller volume or a higher temperature. Increasing pressure quickly does work on a system and can increase temperature in the short term.

The surface temperature of the earth with no sun and the exact same pressure would be close to absolute zero. Pressure does not cause temperature.

on June 14, 2011 at 8:56 pm |mkelly

SOD says: “Pressure does not cause temperature.”

So how are stars made? Gravity.

Gravity causes pressure. I agree if the sun went out all things go to “absolute” zero.

No I am not claiming and have never claimed that pressure accounts for the surface temperature of the earth. Air cannot heat ground. At least it is very difficult.

I was not explaining to you. I have far more regard for you than you appear to have for a fan of this blog as I have stated several times.

I have said before and say again now that some portion of the poorly named GHE can be explained via the ideal gas law for near surface temperature.

DeWitt if you disagree then please state why some of the supposed 33 degrees cannot be accounted for by pressure.

on June 14, 2011 at 9:18 pm |scienceofdoom

I’m just trying to understand your claim. Sorry if I misunderstood it.

on June 14, 2011 at 9:12 pm |Neal J. King

I have seen elsewhere this very strange idea that “high temperature can be attributed to high pressure”. This is a complete misunderstanding.

There are three intensive variables (aside from composition) for a gas: pressure (p), density (n) and temperature (T). Because the equation of state relates them, there are two degrees of freedom.

Therefore, one cannot say that “p determines T”, because there is the variable n. If p is large, but n is also large, T can be quite small:

T = (1/k)*(p/n)
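The point is easy to see in numbers. An illustration of mine, with assumed round densities: the same pressure is consistent with very different temperatures, depending on density.

```python
# Same pressure, very different temperatures: T = p/(n*k) has two free
# knobs, not one. (Densities below are assumed round numbers.)
k = 1.380649e-23   # Boltzmann constant, J/K
p = 101325.0       # Pa -- identical in both cases

n_air = 2.5e25     # molecules/m^3, roughly sea-level air
n_dense = 2.5e26   # ten times denser, same pressure

T_air = p / (n_air * k)
T_dense = p / (n_dense * k)

print(round(T_air))    # ~294 K
print(round(T_dense))  # ~29 K: same p, but cold because n is large
```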

With respect to stars: Gravity does not cause the heating of stars:

– Gravity forces nuclei together

– Nuclear fusion releases kinetic and radiant energy

– This power release slows down the collapse of the star, producing a near-steady-state; and

– This power release also produces the high temperatures

on June 15, 2011 at 1:45 pm |mkelly

Mr. King, if you know all the variables in an equation but one, you can find the last one. P is 101.3 kPa or 14.7 psi, n is the number of moles in a volume and R is a constant; please solve for T. As far as I know the volume of the atmosphere has not changed in any significant way in a very long time.

As for the star explanation you seem to contradict DeWitt as to the reason stars ignite. Pressure caused by gravity. Without the pressure there is no reason for ignition.

on June 15, 2011 at 2:25 pm |Neal J. King

mkelly:

– “if you know all the variables is a equation but one you can find the last one. P is 101.3 kpa or 14.7 psi. n is number of moles in a volume and R is a constant please solve for T. As far as I know the volume of the atmosphere has not changed in any significant way in a very long time.”

First: it doesn’t make a whole lot of sense to talk about the “volume” of a gas for which the temperature and pressure change from point to point: That’s why I talk about density, which is a locally defined characteristic. Secondly, your assumption that pressure is the “cause” of temperature fails to take into account, among other things, the fact that temperature changes on a minute-by-minute basis: Are you assuming that the weight of the atmosphere becomes lighter at night?

The basic problem is that you’ve gotten cause & effect mixed into a situation in which that doesn’t apply, and you’re neglecting dynamical considerations, like power input/output.

– “As for the star explanation you seem to contradict DeWitt as to the reason stars ignite. Pressure caused by gravity. Without the pressure there is no reason for ignition.”

Not really: What happens is that as the compressed plasma descends, it loses gravitational potential energy, and this loss is converted into kinetic energy (=> increase in temperature). It is this increase in temperature that allows the electrical repulsion between protons to be overcome. In other words: If you could remove a cup-full of plasma at that same temperature/pressure/composition/radiative environment and maintain it at those conditions but at a very different gravitational potential, it would behave the same way. Gravitational potential energy is important insofar as it keeps the situation together and as motion induces changes that ARE reflected in thermodynamic characteristics. Specifically, when a packet of gas ascends or descends in a g-field, its pressure and volume are forced to change: this changes the thermodynamic situation. But it’s not the change in g-potential directly that does this. That’s why we refer to the “internal” energy of the gas: it’s not due to external forces like gravity.

The situation would be different if the most important interaction between the constituent particles were gravitational. But that’s very far from being the case, by a factor of roughly 10^40.

on June 14, 2011 at 9:27 pm |omnologos

Misunderstanding or not, were there a kilometer-deep depression with a sea-level rim, the bottom temperature would be around 9.8 K warmer than the rim’s.

Perhaps one should say that it’s the decrease in pressure that decreases temperature, rather than the increase in pressure increasing temperature. On a practical level though, the two descriptions are one and the same.
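The 9.8 K figure is just the dry adiabatic lapse rate g/Cp applied over one kilometre; a one-line check with standard constants:

```python
# omnologos's number is the dry adiabatic lapse rate g/Cp over 1 km
# (standard dry-air constants assumed):
g = 9.81      # m/s^2
cp = 1005.0   # J/(kg K), dry air at constant pressure
dz = 1000.0   # m

print(g / cp * dz)  # about 9.76 K -- close to the quoted ~9.8 K
```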

on June 14, 2011 at 9:50 pm |scienceofdoom

Perhaps a helpful way to see the role of pressure in temperature is like this:

Something sets the baseline – the surface temperature.

This is the reference point.

Because rising air expands and because sinking air compresses, and because these motions take place relatively quickly, the process of rising or sinking is “adiabatic” = no net transfer of heat into, or out of, the parcel of air.

So that’s why the temperature difference between the surface and any given height above or below is set by the adiabatic lapse rate.

But what sets the reference temperature is something else.

on June 14, 2011 at 10:17 pm |Robert P.

William Gilbert writes the following expression (equation 3 of his paper) for the internal energy of a gas in a gravitational field:

dU = CvdT + Ldq + gdh – PdV

For the case of dry air, we can drop the latent heat term, getting:

dU = CvdT + gdh – PdV

I am convinced that these equations are incorrect, but they are incorrect in a very interesting and instructive way. I agree with SOD that Mr. Gilbert has, in effect, double-counted the gravitational work, but I think I can simplify his argument at the cost of a little bit more mathematics. Let me focus on the second equation.

The internal energy of a gas – ideal or not – in a gravitational field is some function of the number of molecules, temperature, volume, and altitude, U(N,V,T,h). If there are no chemical reactions taking place, N is a constant. We may then write, as a mathematical identity:

dU = (dU/dT) dT + (dU/dV) dV + (dU/dh) dh

where the derivatives should be interpreted as partial derivatives in which all of the other variables are held constant, i.e. (dU/dT) = (∂U/∂T) at constant V and h.

Now, (dU/dT) at constant V and h is, by definition, equal to Cv. The potential energy of a molecule of mass m in a gravitational field is mgh, so the third term is Nmg dh (which for a unit mass is equivalent to Gilbert’s term.)

But what about the second term? For an ideal gas, it vanishes:

(partial U/partial V) at constant T and h = 0 (ideal gas.) In other words, the internal energy of a given mass of an ideal gas is determined solely by its temperature and by its position in the external field.

So: the correct equation is: dU = CvdT + Nmg dh. The -PdV term is wrong.

How can we reconcile this with the first law of thermodynamics, dU = dq + dw ?

The work dw consists of two parts, PV work and work against gravity:

dw = -P dV + Nmg dh

So, dU = dq – PdV + Nmg dh. Equating this with our expression above, we get:

dq – P dV + Nmg dh = Cv dT + Nmg dh

And we see that the gravitational term cancels out of the expression for dq in terms of volume and temperature!

dq = P dV + Cv dT

If you now set dq = 0 (for an adiabatic process), and insert the ideal gas law and the hydrostatic equilibrium condition, you get the standard expression for the adiabatic lapse rate.
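As a quick numerical sketch of that last step (the specific values and variable names here are mine, not from the comment): setting dq = 0 in dq = Cv dT + P dV, and combining the ideal gas law with hydrostatic balance, gives the familiar dry adiabatic lapse rate dT/dz = -g/cp:

```python
# Dry adiabatic lapse rate dT/dz = -g/cp, which follows from
# dq = 0 plus the ideal gas law and hydrostatic equilibrium.
g = 9.81      # gravitational acceleration, m/s^2
R = 287.0     # specific gas constant for dry air, J/(kg K)
cv = 717.5    # specific heat of dry air at constant volume, J/(kg K)
cp = cv + R   # Mayer's relation per unit mass: cp = cv + R

lapse_rate = g / cp           # K per metre
print(lapse_rate * 1000.0)    # roughly 9.8 K per kilometre
```

The ~9.8 K/km result is the standard dry adiabatic value quoted in atmospheric physics texts.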

on June 15, 2011 at 12:33 pm | Neal J. King

I don’t see why anyone is trying to include gravitational potential energy as part of the INTERNAL energy of the gas. It’s not.

For example, the gravitational potential energy of the gas in a small region is not going to affect the chemical interactions going on there, which will be affected by the pressure, the temperature, the density, the chemical potential, the composition, the phases, and other thermodynamic characteristics.

If we consider an extended region, in which there is substantial variation of the gravitational potential energy, this variation will affect chemical interactions ONLY insofar as it changes the thermodynamic variables I mentioned above. For example, in the immediate neighborhood of a strong gravitational field, the temperature will vary from place to place (aka the Pound–Rebka effect). That means that the chemical interactions will be going on at a higher temperature at lower potential energy than at higher: but this is taken into account by looking at the local temperature, not by incorporating the gravitational potential energy within the internal energy function.

on June 15, 2011 at 7:52 pm | Robert P.

(Hmm, I replied earlier but it doesn’t seem to have shown up, which is just as well since it was partly incorrect.)

In my experience, exactly what counts as a part of the “internal energy” is inconsistent from one discipline to another, and even within disciplines . As a physical chemist I am accustomed to defining the internal energy as the ensemble average of the Hamiltonian, excluding the kinetic energy of the center of mass but including potential energy due to external fields, if those fields couple to anything of interest.

But it hardly matters what you choose to call it; my basic point is that if you write the first law of thermodynamics as dU = dq + dw, then if you are going to explicitly account for the work done when an air parcel moves from one altitude to another on the right hand side, it should also be included as a part of the state function “U”, whatever you wish to call it, on the left hand side. The usual procedure in atmospheric physics appears to be to account for that work implicitly through the hydrostatic equilibrium, but you could also do it explicitly as I have outlined in my post above. The results should be the same, and they appear to be if I have done my algebra correctly. Mr. Gilbert’s error amounts to adding the gravitational work explicitly to an expression that already included it implicitly.

on June 16, 2011 at 8:21 am | Bryan

Internal energy is a state function, and treating it as such gives advantages when working out, on a PV diagram, the work done in going through a particularly difficult thermodynamic path.

To solve the problem for a state function you can instead pick an easy path, since any path between the same states gives the same result for a state function.

Therefore it is generally wise to include gravitational potential energy in the internal energy folder.

However if you are not using the properties of the state function you can set out the energy quantities as you like.

Indeed from an educational point of view this approach is justified as you can detect the interchange between the energy types.

Likewise with the direction of the PV work: different textbooks set it out as sometimes + or sometimes –, depending on their initial assumptions.

A bit like convectional current and electron flow current in an electrical circuit.

If some confused person comes along they may think that a grave mistake has been made and twice or no current is flowing.

As long as the definitions are clearly set out there should be no problem.

on June 16, 2011 at 9:39 am | Neal J. King

Bryan,

a) “A bit like convectional current and electron flow current in an electrical circuit.”

What does convectional current have to do with electrical circuits?

b) Nothing you have said suggests to me that there is any value in incorporating altitude as a thermodynamic variable. What makes sense is to understand the thermodynamic variables as functions (fields, really) of the altitude: T, p, n are functions of altitude; and therefore internal-energy density, work density, etc. are also functions of altitude, or of the three spatial coordinates generally.

on June 15, 2011 at 12:34 am | Robert P.

(Note that there are a few typos – M for m, t for T – in my last post. I hope they are not confusing.)

Here is a more intuitive approach to my preceding argument. Let’s think about this on the molecular level. Each molecule in the gas has the following forms of energy:

Kinetic energy of translational motion: 3/2 kT per molecule.

Kinetic energy of rotational motion: 0 for atoms, kT per molecule for diatomic molecules, 3/2 kT per molecule for polyatomic molecules

Intramolecular potential energy – depends on the molecule, but like the kinetic energy depends only on temperature.

Potential energy in the gravitational field: mgh per molecule of mass m.

Potential energy due to interactions of the molecules with each other. This depends on the gas, but is generally small as long as we don’t need to consider condensation, and is zero by definition for an ideal gas.

That is all. The “internal energy” U of thermodynamics is the ensemble average over the distribution of these microscopic, molecular energies. The first three terms are all incorporated into the heat capacity Cv (leading to a heat capacity Cv = 3/2 R per mole for monatomic gases, 5/2 R per mole for diatomic gases, and a more complicated expression for polyatomic gases.) If we are considering an air parcel that is small enough so that we can regard all the molecules in it as being at the same altitude, the fourth term just averages to Nmgh for N molecules of mass m, or Mgh where M is the mass of the parcel. The fifth term is the only one that depends explicitly on the volume of the sample, but it is generally small (for noncondensing gases) and zero by definition for an ideal gas.
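The heat-capacity bookkeeping above can be checked numerically: each quadratic degree of freedom contributes (1/2)R per mole, so 3 translational modes give (3/2)R for a monatomic gas, and 3 translational plus 2 rotational give (5/2)R for a diatomic gas. A minimal sketch (the function name is mine):

```python
# Molar heat capacity Cv from equipartition: (1/2)R per mole
# for each quadratic degree of freedom (vibration ignored).
R = 8.314  # gas constant, J/(mol K)

def cv_molar(translational, rotational):
    """Cv per mole from counts of quadratic degrees of freedom."""
    return 0.5 * R * (translational + rotational)

cv_monatomic = cv_molar(3, 0)   # (3/2) R, about 12.5 J/(mol K)
cv_diatomic = cv_molar(3, 2)    # (5/2) R, about 20.8 J/(mol K)
```

These match the measured room-temperature values for noble gases and for N2/O2 respectively, which is why vibrational modes can be ignored here.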

So for an ideal gas, dU = Cv dT + Mg dh. The first term is the intramolecular energy (kinetic plus potential) and depends only on temperature. The second term is the gravitational potential energy.

on June 15, 2011 at 7:24 am | Bryan

Robert P. says

So for an ideal gas, dU = Cv dT + Mg dh.

Should this not be dU = M Cv dT + Mg dh?

on June 15, 2011 at 8:50 am | Bryan

You can work with a unit mass of 1 kilogram or with one mole, but unless you are very careful, mixing up units on the same line could lead to errors.

on June 15, 2011 at 2:36 pm | Robert P.

In my equation, Cv is the heat capacity (extensive) not the specific heat (intensive). This is the standard usage in my field – an unadorned Cv denotes the extensive quantity, and when you want the intensive quantity you decorate it with an over-bar or a subscript m. I suppose I could have been clearer above, by writing out Cv = (3/2) nR for a monatomic ideal gas instead of saying “Cv = 3/2 R per mole”. It’s very easy to make unit mistakes when typing equations into a comment box.

on June 15, 2011 at 4:16 pm | Bryan

Robert P

By writing out your line in this way, the mass (M) must be put down as 0.029 kg rather than as a variable M.

In physics the more usual way is specific heat capacity, which has units of J kg^-1 K^-1.

on June 15, 2011 at 12:45 am | DeWitt Payne

mkelly,

What causes the proto-star to heat up to ignition level is gravitational potential energy. (see here) Take a volume of gas equivalent to the volume of the solar system at very low pressure and temperature of 2.7K and allow it to collapse. The gravitational potential energy will be converted to kinetic energy resulting in a core temperature sufficiently high, assuming sufficient initial mass, to ignite fusion.

on June 15, 2011 at 12:54 am | DeWitt Payne

mkelly,

Assuming the same input of energy to the surface as the Earth gets now, 240 W/m², and a perfectly transparent atmosphere, the average surface temperature would be slightly less than 254 K, depending on the exact assumptions of heat capacity and thermal conductivity. So precisely none of the ~33 degrees is due to pressure. It’s all energy in and out. If you increase the surface pressure over the whole planet by 10%, the surface temperature will not change (it might become a little more uniform). It’s only when the atmosphere is not transparent that the thickness of the atmosphere makes the surface hotter, or, if the hole area is small compared to the total surface area, that the bottom of the hole would be hotter.
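The quoted figure can be checked with the Stefan–Boltzmann law: a uniform blackbody surface balancing 240 W/m² sits near 255 K, and because emission scales as T⁴, any non-uniformity in temperature pulls the true average a little below that, consistent with "slightly less than 254 K". A minimal sketch (uniform-surface case only):

```python
# Equilibrium temperature of a uniform blackbody surface absorbing
# 240 W/m^2 under a perfectly transparent atmosphere.
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
absorbed = 240.0     # absorbed solar flux, W/m^2

T_eff = (absorbed / sigma) ** 0.25
print(T_eff)         # ~255 K; a non-uniform surface averages lower
```

The uniform-surface value is an upper bound on the average: averaging T over a surface that emits the same total power as a uniform one always gives a lower mean temperature.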

on June 15, 2011 at 12:56 am | DeWitt Payne

Let me be more precise. If you suddenly increased the mass of the atmosphere by 10%, the surface temperature would go up temporarily, but it would cool back to the steady state value over time because radiation out would be higher than radiation in.

on June 15, 2011 at 1:35 am | KingOchaos

If you increased the mass, the pressure would increase vs altitude, raising the tropopause, surely? Effectively permanently raising the temperature, due to its effect on optical depth vs altitude, surely?

So it would permanently increase temperatures, but due to its effect on opacity (assuming a uniform mass increase of all the gases that make up the atmosphere).

KingOchaos previously posted under Mike Ewing; I just haven’t been commenting in a long time, just reading the blog of late.

on June 15, 2011 at 3:30 am | omnologos

This is a scenario I would like to see modeled. We have a planet the size of Venus with a CO2-rich atmosphere of similar mass to Earth’s. Due to a colossal catastrophe, the entire planet is “instantly” resurfaced (in other words, the whole surface melts down). As a consequence the day becomes of similar length to the year, and the atmospheric mass is increased 90 times due to outgassing.

Given that the atmosphere is optically thick to IR and visible light, how is the surface going to cool itself down, and on what timescales? What will be the final steady-state temperature (or tropopause height, it’s the same thing), if any steady state is achieved?

on June 15, 2011 at 3:37 am | scienceofdoom

KingOchaos:

DeWitt Payne is commenting on a transparent atmosphere.

on June 15, 2011 at 4:04 am | KingOchaos

My bad.

on June 15, 2011 at 1:34 pm | mkelly

DeWitt says: “Assuming the same input of energy to the surface as the Earth gets now, 240 W/m², and a perfectly transparent atmosphere, the average surface temperature would be slightly less than 254 K, depending on the exact assumptions of heat capacity and thermal conductivity.”

Again, please: I never said “surface of the Earth.” In fact I said near surface. The air will not heat soil; at least, it would be very difficult. We measure temperature in the air, not at the surface of the earth. Soil etc. is heated by the sun. The air is heated several ways, but I never claimed the air heats the soil.

SOD said “pressure does not cause temperature”. Then why is there pressure welding? Why are there heat bursts?

on June 15, 2011 at 2:39 pm | Neal J. King

– If the temperature of the air exceeds the temperature of the soil, the air will heat the soil. That’s kind of what temperature is about.

– There are two things I find looking up pressure or cold welding:

1) Welding without energy input and without temperature increase: If no heating is needed, why are you assuming that the pressure causes heating? It seems to apply to cases where there is no barrier to joining together (e.g., when two plates of identical metals are placed together).

2) Welding by application of ultrasound: In this case, the heating comes from conversion of the vibrational energy of the ultrasound into heat, locally; the fact that you have two different pieces allows relative motion that steals the vibrational power and diverts it into heating. No flames needed, but energy is being provided.

– Heat bursts: These seem to occur when air descends from an altitude. As it loses altitude, the pressure applied to it increases (pressure always increases downward) and it undergoes adiabatic compression. It is the adiabatic compression that results in increased temperature: work is being done on the air packet.
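For scale, Poisson’s relation T2 = T1 (P2/P1)^(R/cp) gives the warming of a dry parcel forced downward, which is the mechanism Neal describes for heat bursts. The specific pressure levels and temperature below are illustrative values of mine, not from the comment:

```python
# Adiabatic compression warming of a descending dry-air parcel,
# via Poisson's relation: T2 = T1 * (P2/P1)**(R/cp).
R_over_cp = 287.0 / 1004.5   # R/cp for dry air, ~0.286

T_aloft = 280.0              # parcel temperature at 700 hPa, K (illustrative)
T_surface = T_aloft * (1000.0 / 700.0) ** R_over_cp
print(T_surface)             # ~310 K: compression alone warms it ~30 K
```

A 30 K warming from a 700 hPa descent is the right order of magnitude for observed heat-burst events, with no "pressure causes temperature" needed, just work done on the parcel.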

on June 15, 2011 at 2:56 pm | DeWitt Payne

Cold welding ( http://en.wikipedia.org/wiki/Cold_welding ) is an interesting phenomenon. The principle is that if you take two very clean metal surfaces, no oxides or other contaminants, and place them together under high vacuum (no adsorbed gas layer), the atoms at each surface can no longer tell that they are at a surface and the two pieces become one.

on June 15, 2011 at 3:06 pm | DeWitt Payne

mkelly,

STP is an arbitrary reference point picked for convenience. The different international standards organizations even have slightly different definitions of STP ( http://en.wikipedia.org/wiki/Standard_conditions_for_temperature_and_pressure ). There is nothing fundamental about a pressure of 1013 mbar and a temperature of 0 C. It in no way explains any part of the increase in the surface temperature of the planet caused by an atmosphere that absorbs and emits in the thermal IR.

on June 15, 2011 at 4:28 pm | Bryan

Neal J. King

I have asked you twice what your definition of heat is!

But no answer!

How can you say something is transferred when you cannot even define what is transferred?

I strongly suspect that you are using the colloquial meaning of heat rather than the thermodynamic meaning of heat.

This is a very common mistake made by proponents of the IPCC position

So for the third time what is your definition of heat!

on June 15, 2011 at 4:33 pm | Bryan

cynicus says

…….”Oh my, Bryan proves parallel universes really do exist”…

I have never commented on parallel universes; however, most theoretical physicists on the planet believe they exist!

on June 15, 2011 at 4:55 pm | Neal J. King

Bryan,

You have been reading a bit too much science fiction.

A vocal minority of physicists think the many-worlds interpretation of quantum mechanics has some value. Most think it is unnecessary to speculate that far into QM.

on June 15, 2011 at 6:34 pm | Bryan

Neal J. King

So for the fourth time what is your definition of heat!

on June 15, 2011 at 8:50 pm | Neal J. King

Bryan,

I don’t have much interest in getting caught up in word games. But basically, heat is energy transferred by methods not reversible by adjustment of an external parameter. Mathematically:

dQ = dU + dW

where:

dW = pdV + (magnetic terms) + (stretching terms) + etc.; all these work terms can be reversed by changing the sign of the parameter. Systems that include magnetic terms incorporate magnetic moments, rubber bands include a stretching term, etc.

Traditionally, modes of heat transfer are listed as conduction, convection and radiation.

The concept of heat itself is a bit fuzzy, because there is no such thing as “the amount of heat” in a system, that can be meaningfully distinguished from the “amount of energy” of the system. The old idea of “caloric” which was based on that concept died in the early days of the development of thermodynamics.

Basically, the point is that the internal energy of a system U is a well-defined state function, and the amount of work done is quantifiable in terms of changes to a macroscopic parameter (volume, magnetic field, length, etc.); and heat is what is transferred by other methods, including conduction, convection and radiation.

The other useful point is that the entropy change is:

dS ≥ dQ/T

on June 15, 2011 at 9:51 pm | Bryan

Neal J. King

Yours must be the record for a definition of heat stretching to several paragraphs but where temperature is not mentioned once.

Here are some real definitions of heat.

From University Physics by Harris Benson page 382

Modern definition of Heat

Heat is energy transferred between two bodies as a consequence of a difference in temperature between them.

University Physics Young and Freedman

Energy transfer that takes place solely because of a temperature difference is called heat flow or heat transfer, and energy transferred in this way is called heat. page 470

Heat always flows from a hot body to a cooler body never the reverse. page 559

Your old friend Feynman, on finishing the thermodynamics sections in the famous 3-volume lectures, recommended only one book to interested readers who wanted to take the matter further.

The book he recommended was the ultra-orthodox Heat and Thermodynamics by Zemansky.

Zemansky comments on radiation on page 105

….the difference between the thermal radiation which is absorbed and that which is radiated is called heat.

Thus radiation like conduction and convection has spontaneous heat transfer from a higher temperature to a lower temperature but not the other way around.

Now you can see that within the framework of physics it is complete nonsense to say that heat moves spontaneously from a cold object to one at a higher temperature.

on June 15, 2011 at 10:17 pm | Neal J. King

Bryan,

Sorry, but it sounds like you’ve focused on an unduly small section of the text. I don’t have a copy of Z., but I don’t accept your interpretation. Because radiation from a dimmer lightbulb to a brighter lightbulb will be absorbed; and it is heat. And if Zemansky doesn’t agree, so much the worse for Zemansky. This wouldn’t be the first time that I picked a fight with a professor or a textbook.

on June 15, 2011 at 8:01 pm | mkelly

DeWitt Payne

I noted pressure welding, not cold welding. We see cold welding whenever you have highly polished gage blocks. They will stick together, and if you get to them in time you can slide them apart, not pull.

But pressure welding is used on large pieces of metal stacked one on the other; an explosive charge forces the two plates together under pressure and they weld together.

on June 15, 2011 at 8:10 pm | mkelly

Neal says: “It is the adiabatic compression that results in increased temperature:”

Thanks Neal compression, wow, I think that is caused by pressure. So pressure can cause temperature.

Because of the difference in mass and Cp, it would take lots of air and higher temperatures to add heat to the earth. I did not say it could or isn’t done; I said it would be difficult.

on June 15, 2011 at 8:58 pm | Neal J. King

mkelly,

Not really: It is the process of compression (= work done on the packet of air) that causes the adiabatic warming. How to tell the difference? Just wait: when you have pulled the packet down to surface level, the pressure will remain about the same, but the temperature will drop as the heat leaks out.

Think again: If pressure were the “cause” of higher temperature, that should require that air pressure be much higher in the daytime than at night.

on June 16, 2011 at 1:29 pm | mkelly

PV = joules

P= pascals (Pa)=N/m^2

V= m^3 (volume)

joule= Nm

(N/m^2) * m^3 = Nm

Mr. King pressure with constant volume = joule. And as I said as far as I know the volume of the atmosphere has not changed in a very long time.

Again, I said portion, not total. Please read what I wrote, not what you want to hear.

on June 16, 2011 at 1:39 pm | Neal J. King

mkelly,

I don’t get what you’re trying to show by relating units. The fact remains that when the pressure remains the same but the heat has leaked away, the temperature will drop. If you think that “part of the temperature is caused by pressure,” why don’t you calculate this part?

To me, it just looks incoherent.

on June 15, 2011 at 10:22 pm | DeWitt Payne

Bryan,

Word games. No one says that net energy, what you call heat, is transferred spontaneously from a lower temperature body to a higher temperature body. Your continuing efforts to misinterpret statements to imply that such a thing happens are boring beyond belief. So was section 3 of G&T.

on June 15, 2011 at 11:06 pm | DeWitt Payne

Bryan,

You do understand that pyrheliometers and pyrgeometers are sophisticated versions of Wood’s experiment, don’t you? That hardly qualifies as being overlooked. It also calls into question why you think data from pyrgeometers is unacceptably bad if you’re so enamored of the far cruder experiment by Wood.

I can’t find the reference, but I know I’ve seen a comment from Wood himself that his experiment was problematic as far as the atmospheric greenhouse effect.

on June 16, 2011 at 4:16 am | scienceofdoom

DeWitt Payne said of Bryan:

Bryan’s core beliefs prevent him ever understanding radiant heat transfer.

But for those readers wondering if Bryan is correct I just suggest they read Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics.

The textbooks are very clear. And in the comments – which new readers of this blog should check out – Bryan demonstrates his inability to understand the textbooks. This inability will continue and unfortunately bore us senseless on a weekly basis.

on June 16, 2011 at 8:00 am |BryanThe particular post is so rich in comic material it is as if professional comedians were competing to post the most ridiculous entry.

To judge which one is the funniest will not be easy.

To start off, SoD launches a “hatchet job” on a topic that had been extensively covered before in an unusually civil exchange of views.

https://scienceofdoom.com/2010/06/22/venusian-mysteries-part-two/#comment-3472

Contributors like Arthur Smith, Leonard Weinstein, Nick Stokes and DeWitt Payne apparently missed the grave errors of W C Gilbert at the time.

It takes the unusual comedian SoD to boldly go….

Common sense would indicate that W C Gilbert who has spent his professional life working with thermodynamics is more likely to be correct than the comedian SoD.

Then we have Neal J. King giving his several-paragraph definition of heat without mentioning temperature.

Neil goes on to say “This wouldn’t be the first time that I picked a fight with a professor or a textbook.”

Neil do you kinda wonder just why that is?

Then we have DeWitt Payne saying its all just word games.

We can therefore call any physical quantity what we like (which of course is true), but that is very unhelpful for communication purposes.

Having personal definitions for momentum, atomic mass, heat, force, work and so on is exactly opposite to what is required for clear communication.

But the top prize for comedian of the year must go to SoD

He announced the results of his latest research with profound gravitas:

..”the (less well-known) equation which links heat capacity at constant volume with heat capacity at constant pressure (derived from statistical thermodynamics and experimentally verifiable):

Cp = Cv + R ….[5]”……

on June 16, 2011 at 8:18 am | Neal J. King

Bryan,

“Then we have Neal J. King giving a his several paragraph definition of heat without mentioning temperature.

Neil goes on to say “This wouldn’t be the first time that I picked a fight with a professor or a textbook.”

Neil do you kinda wonder just why that is?”

Simple: Because 9 times out of 10, I win. It may take a few days, but virtually every time, I’ve convinced the professor teaching the class that I was right, and he was wrong.

The most satisfying time was when I was presenting something I had noticed about a rapidly rotating rod: That from a different relativistic frame, the rod would no longer be seen as straight, but would appear curved. The professor immediately told me I was wrong, and suggested three possible reasons why I was making such an elementary mistake, and how I shouldn’t be embarrassed about making mistakes, etc., etc. I wasn’t embarrassed, I was pissed off: I had discussed a half-dozen questions with him before, and he frankly should have given my observation more thought before dismissing it.

The very next day I ran into him in the hallway, and he apologized profusely for being both:

a) rude; and

b) wrong.

on June 16, 2011 at 8:47 am | Bryan

Neal J. King

Let’s take your recent quarrel with Zemansky.

Zemansky comments on radiation on page 105 of Heat and Thermodynamics by Zemansky:

….the difference between the thermal radiation which is absorbed and that which is radiated is called heat…..

Now the King version would go something like this;

An object emits heat and an object absorbs heat and the net heat goes from higher to a lower temperature

Now the Zemansky version would go something like this;

An object emits thermal energy and an object absorbs thermal energy and the difference is what we call HEAT and it goes from higher to a lower temperature.

Now why should thermal energy be distinguished from Heat?

Because heat has the thermodynamic capacity to do WORK in the given situation.

That is to be changed into electrical or translational kinetic energy or some other higher quality of energy.

This is true for higher to lower temperature transfer of thermal energy so heat is transferred

It is not true of lower to higher temperature transfer so no heat is transferred

on June 16, 2011 at 9:31 am | Neal J. King

Bryan,

When we talk about heat, we refer to the energy itself, not to the book-keeping that you are doing.

Let’s take it in steps:

a) An incandescent lightbulb is powered by electricity; it attains a high temperature and radiates electromagnetic power, closely approximating the traditional Planck formula for blackbody radiation. Why? Because that’s essentially what it is. It doesn’t match perfectly because there ought to be an enclosure to ensure reflections and attainment of an equilibrium; but it’s very close. The radiated power meets the Stefan-Boltzmann equation for radiated intensity.

b) The power given off by the lightbulb is heat radiation, by definition. Heat radiation is a form of heat; as the famous phrase goes, “heat transfer is by conduction, convection, and RADIATION.” If the power given off by a lightbulb is not heat radiation, then there IS no such thing, and more textbooks than Zemansky need to be trashed.

c) If the power given off by the lightbulb is heat radiation, the energy thus transferred is heat. Just as what is transported by a shipment of eggs are eggs; if it was just the shells, or white oval-shaped objects, it wouldn’t be called “a shipment of eggs.” So the energy radiated out of the lightbulb is a form of heat.

d) If radiant energy is emitted by the lightbulb and falls upon another object, what will happen to it? Will it:

– Pass through the object? No.

– Will it be magically cancelled out some distance from the object? No.

– Will it be absorbed and/or reflected? Yes; and the object’s absorptivity will equal its emissivity at the same frequency (Kirchhoff’s law). So if the object is visible at a certain frequency, it will absorb at that same frequency.

e) So what happens if the second object is also a lightbulb? Will it:

– Become transparent to the incoming radiant energy? No.

– Magically cancel out the incoming radiant energy? No.

– Absorb this energy? Yes, in part; if the second lightbulb is visible at a frequency, it will absorb at this frequency.

f) What happens if the second lightbulb is turned on, and is brighter than the first lightbulb? Will it:

– Repel the radiant energy of the first lightbulb? No, it will not.

– Absorb the radiant energy of the first lightbulb? Yes, in part.

Therefore:

– Two lightbulbs, both powered electrically, each give off radiant heat energy in accordance with their individual temperatures.

– Light from each bulb will fall upon and be absorbed by the other.

– Therefore, heat is transferred from each bulb to the other.

– However, the hotter bulb will radiate more powerfully, in accordance with the Stefan-Boltzmann law; so the NET TRANSFER of heat will be from the hotter to the cooler.

If you think the electrical power source is an issue, think about two stars of different temperature. The argument works the same way. And if you happen to be between the two stars, and then fall into the cooler star, quoting Zemansky and saying “There is no heat coming from the cooler star because the hotter star is producing more power” will not save you from being burned up.
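Neal’s bookkeeping for the two bulbs can be written out directly. Treating each bulb as an ideal blackbody at the stated temperature (a simplification of mine, with illustrative filament temperatures), each emits σT⁴, both fluxes are positive, and only the difference, the net transfer, runs from hot to cold:

```python
# Two-way radiant exchange between two blackbody surfaces:
# each emits sigma*T^4; only the NET flow runs hot -> cold.
sigma = 5.670e-8                  # Stefan-Boltzmann constant, W/(m^2 K^4)

T_hot, T_cool = 2800.0, 2000.0    # illustrative filament temperatures, K

emit_hot = sigma * T_hot ** 4     # flux leaving the hotter bulb, W/m^2
emit_cool = sigma * T_cool ** 4   # flux leaving the cooler bulb (nonzero!)
net = emit_hot - emit_cool        # net transfer, hot to cold

print(emit_cool, net)
```

The cooler body’s emission is never zero and never “repelled”; the second law is satisfied because the net term is always positive from hot to cold.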

on June 16, 2011 at 9:57 am | Bryan

Neal J. King

You seem to have overlooked completely the thermodynamic properties of heat.

Have you ever studied the Carnot Cycle?

This is the usual route to discussion of the second law in thermodynamics.

The thermal radiation from a higher temperature object will not only have a greater intensity; its average frequency will also be higher.

The quality of the radiation is as important as the intensity.

Why do you think that no educated physicist would choose to describe heat in the way you choose to?

on June 16, 2011 at 10:11 am | Neal J. King

Bryan,

How would YOU know what educated physicists would, or would not, do?

I have dealt one on one with Nobel Laureates in physics, on physics. Don’t try to teach your grandmother to suck eggs. Instead, why don’t you do some urgently needed repair to your own scientific education – which is very weak in physics.

You could start by attempting to actually understand the very simple and straightforward analysis I’ve posted above; instead of trying to divert the issue.

But I think the point should be clear enough to other readers, who are, after all, my target audience.

on June 16, 2011 at 10:24 am | Bryan

Neal J. King says

…”think about two stars of different temperatures”……

Now of course the resultant intensity will vary as you move along the line joining the two stars (inverse square law).

Very close to the lower temperature star its intensity will be higher than the other.

Let’s say you position yourself at the point where you have equal intensity from both stars.

A thermometer pointing at each would read the same.

According to your interpretation you could use the “heat” from both to turn into useful work like electricity or translational KE.

I think you will find that this is impossible.

Where is your heat sink?

on June 16, 2011 at 10:49 am | Neal J. King

Bryan,

The point is that they are both producing heat energy.

You could run a solar cell from either, or boil an egg with a solar oven.

So they are each receiving heat from each other.

And that, by the way, is enough to shoot down the premise of the G&T paper.

on June 16, 2011 at 11:46 am | Bryan

Neal J. King says

….”You could run a solar cell from either, or boil an egg with a solar oven.”…

Why don’t you try the experiment.

Go to the point of equal intensity between the Earth and Proxima Centauri and boil an egg.

Your modest hopes for the dual heat point will be disappointed.

The solar oven will be transmitting as much radiant energy as it is receiving

on June 16, 2011 at 11:57 am | Neal J. King

Bryan,

You are perhaps unacquainted with how solar ovens work:

– Radiant energy is focused to a point

– The point is where you put the egg

– The egg absorbs the power and cooks

on June 16, 2011 at 10:29 am | Edim

HEAT is not thermal radiation. HEAT is NET thermal radiation between bodies.

That’s basics. It is very important not to confuse the two terms.

When I studied heat transfer, I was not allowed to confuse the two.

Again:

Heat is defined in physics as the transfer of thermal energy across a well-defined boundary around a thermodynamic system.

Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero.

on June 16, 2011 at 11:33 am | Edim

The most important thing is to properly define the thermodynamic system you study. The boundaries must be well-defined; otherwise your flawless equations and math are pointless.

on June 16, 2011 at 12:34 pm | DeWitt Payne

Bryan,

I went back and looked at that thread. In no way did I ever agree with W C Gilbert’s conclusions. I paid zero attention to his math. Why bother when his conclusions were so obviously incorrect?

on June 16, 2011 at 12:43 pm | DeWitt Payne

Bryan,

….”According to your interpretation you could use the “heat” from both to turn into useful work like electricity or translational KE.

I think you will find that this is impossible.

Where is your heat sink?”….

Um, deep space at 2.7 K perhaps. Also, have you never heard of lenses and mirrors? Admittedly, you’d need a very large mirror to focus sufficient energy, but in principle it could be done. Even if the effective temperature of the radiation light years away from a star is very low, the brightness temperature of the radiation, what you call energy quality, is still thousands of degrees K.

on June 16, 2011 at 1:53 pm | Bryan

DeWitt Payne

“I went back and looked at that thread. In no way did I ever agree with W C Gilbert’s conclusions. I paid zero attention to his math. Why bother when his conclusions were so obviously incorrect.”

Is this the same DeWitt Payne?

Shy and retiring, only posting when he is almost in agreement with other posters.

on June 16, 2011 at 2:11 pm | Bryan

Neal J. King and DeWitt Payne

“You are perhaps unacquainted with how solar ovens work:

– Radiant energy is focused to a point

– The point is where you put the egg

– The egg absorbs the power and cooks”…..

The egg at the focus is also a transmitter.

The balanced intensity point is in profound thermal equilibrium with both stars.

The parabolic mirror will transmit as much thermal energy as it receives.

Rectilinear propagation of light.

The transmitted rays follow exactly the same path (in the other direction) as the absorbed ones.

on June 16, 2011 at 2:18 pm | Neal J. King

Bryan,

Nice attempt at poetry.

But more effort at MEANING would be good.

At a second pass, we could worry about CORRECTNESS.

But that would be a bit ambitious at this stage.

C+

on June 16, 2011 at 2:58 pm | DeWitt Payne

Bryan,

And the egg has a much smaller surface area than, say, a 10 Mm² mirror, or however large it has to be.

No. It’s in thermal equilibrium with the total incident radiation over the entire sphere, most of which is deep space at 2.7K.

It will reflect and focus as much thermal energy as it receives. The effective temperature of solar radiation at local noon at the equator on a clear day is ~365 K. Yet solar furnaces can produce temperatures on the order of 3,000 K by increasing the effective surface area of the sun at the focus. The theoretical upper limit of a solar furnace is ~6,000 K, the brightness temperature of the surface of the sun as seen at the Earth’s surface.

on June 16, 2011 at 3:20 pm | Neal J. King

DeWitt,

Heroic effort at trying to make sense of Bryan’s mumbles!

But I would still write it off as bad poetry; perhaps an attempt at haiku.

But I regret giving the “+” after the “C”; since no effort was made to count syllables.

on June 16, 2011 at 3:10 pm | Bryan

DeWitt Payne says

…..”The effective temperature of solar radiation at local noon at the equator on a clear day is ~365 K. Yet solar furnaces can produce temperatures on the order of 3,000 K by increasing the effective surface area of the sun at the focus.”…….

Yes you have convinced me.

A higher temperature object can heat an object at a lower temperature.

But hold on, is that not the point I was making?

If I drifted off you must excuse me; I was having to contend with Neal’s problem of trying to boil an egg in deep space.

You say the temperature there is 2.7 K and I have no reason to doubt you.

I think that Neal will have to wait a very long, long time for his egg to boil.

Physicists, however, are interested in such problems.

Engineers would rule it out as completely impractical.

on June 16, 2011 at 3:23 pm | Neal J. King

Bryan,

The import of DeWitt’s remark is that the maximum temperature attainable using light from a star at 6,000 K is 6,000 K.

I don’t know how good a cook you are, but I think most people would be able to boil an egg with an oven able to reach that temperature.
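[Ed.: the 6,000 K ceiling discussed here can be sanity-checked with the Stefan–Boltzmann relation. A minimal sketch; the ~1,000 W/m² clear-day surface flux and the sun’s ~6.3×10⁷ W/m² surface emission are assumed round numbers, not figures from the thread:]

```python
# Effective (blackbody-equivalent) temperature of a radiative flux: T = (F/sigma)^(1/4)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature(flux_w_m2):
    """Temperature of a blackbody surface emitting the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

# ~1000 W/m^2 of sunlight reaches the surface at local noon on a clear day:
print(effective_temperature(1000))    # ~364 K, close to the ~365 K figure above
# The sun's surface emits ~6.3e7 W/m^2:
print(effective_temperature(6.3e7))   # ~5800 K: the ceiling no amount of focusing can beat
```

Focusing optics can raise the flux at the target, but never above the flux at the source surface itself, which is why the limit is the sun’s own brightness temperature.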

on June 16, 2011 at 3:32 pm | Bryan

DeWitt Payne

Back to the main point.

Yesterday if I understood you, you were saying that if there is any form of heat transfer available the adiabatic lapse rate would be set up.

Here on Earth that is -9.8K/km.

This was the point that Leonard Weinstein and W C Gilbert were making.

In fact you adopted a very hard line version of it.

I was something of a “lukewarmer” on the topic suggesting that if no heat could leave the Earth the atmosphere would tend to be isothermal.

But you would have none of it, and argued for a very hard Weinstein/ Gilbert line at the time.

The outcome of this debate gives no comfort whatsoever to SoD.

The radiative effects of CO2 and H2O play no part in the adiabatic lapse rate.

Apart from radiating long-wavelength radiation at the top of the atmosphere.

on June 16, 2011 at 3:40 pm | Bryan

Neal J. King says

…..”The import of DeWitt’s remark is that the maximum temperature attainable using light from a star at 6,000 K is 6,000 K.

I don’t know how good a cook you are, but I think most people would be able to boil an egg with an oven able to reach that temperature.”…..

Not only that, but DeWitt is liable to work out the size of the parabolic dish and the time taken for the egg to boil from 2.7 K in deep space at the intensity midpoint between Earth and Proxima Centauri.

The age of the universe is the order of magnitude, I would guess.

on June 16, 2011 at 3:47 pm | Neal J. King

Bryan,

Your guess is as bad as any I’ve seen, on any topic.

You should give up while you’re behind: Because you’re just getting farther behind.

on June 16, 2011 at 4:06 pm | Bryan

Neal J. King

Although the distant Sun is at 6000 K, and even though heat (as I’ve always maintained) can flow from a higher temperature to a lower temperature of 2.7 K, any engineer would rule out your proposal as completely impractical.

Also you should not be so generous with your marking scheme.

I have you down for a fail and a repeat of thermodynamics 101.

Your notion of heat travelling spontaneously from a lower temperature to a higher temperature would get that treatment from any well-run physics department.

on June 16, 2011 at 6:12 pm | Neal J. King

Bryan,

I’m glad to hear that. I would be more worried if you thought well of my understanding of physics!

I guess I’ll have to be satisfied with the approval of Nobel Laureates in Physics…

on June 16, 2011 at 4:42 pm | DeWitt Payne

Bryan,

That’s an oversimplification. In order to have a lapse rate at all, there must be energy flow because either work has to be done to establish a lapse rate in the case of a perfectly transparent atmosphere or radiative cooling causes the lapse rate to increase until balanced by convection.

I always held the position that if there is an opaque shell around the planet enclosing an atmosphere, then the temperature of the surface and the temperature of the shell will be the same. It gets tricky, though, if the shell isn’t isothermal. If illuminated by plane parallel radiation, it wouldn’t be isothermal unless it was superconductive. In that case there would be internal circulation that would transfer heat from the equator to the poles. Then, like the Earth, more radiation would be absorbed near the equator than emitted and more radiation would be emitted than absorbed near the poles.

For a transparent atmosphere, the lapse rate at the surface would always be zero and the surface would be in radiative equilibrium with the visible sky. At the internal surface of the shell, the lapse rate would be negative near the equator and positive near the poles.

And precisely where did he say that? Your continual denial that there must be energy flow in both directions, which has measurable consequences, is beyond boring.

on June 16, 2011 at 6:36 pm | Bryan

DeWitt Payne says

…. “Your notion of heat travel spontaneously from a lower temperature to a higher temperature “….

“And precisely where did he say that? You continual denial that there must be energy flow in both directions which has measurable consequences is beyond boring.”

You obviously have not been paying attention.

I have said on several posts here that the energy flow is bidirectional but the heat flow is only from the higher temperature to lower temperature object.

Likewise Neal J. King has said on several posts that heat travels spontaneously from a lower temperature to a higher temperature as long as more heat travels from the higher temperature to the lower temperature object.

You will easily get bored if you don’t pay attention to detail!

on June 16, 2011 at 6:09 pm | omnologos

There’s no atmosphere of note on the Moon but the surface is not in radiative equilibrium “with the sky”. I am not sure the lunar surface is ever in radiative equilibrium at all. And what is the average surface temperature on the Moon?

on June 16, 2011 at 9:17 pm | DeWitt Payne

Bryan,

Clearly a distinction without a difference. Or, as I said earlier, word games.

on June 17, 2011 at 4:25 pm | mkelly

Heat, energy. Energy, heat. They are different. Heat (thermal energy) only exists crossing a boundary where a temperature gradient exists. Heat requires matter to exist.

In all heat transfer equations, if T1 minus T2 is zero then W is zero and no heat was transferred. It also says, by way of W = J/s, that no energy was transferred. If no heat or energy is transferred when T1 = T2, it is hard to say that when T1 is not equal to T2 energy was transferred or that heat was transferred.

There is no equation in which you can plug in back conduction, back radiation, nor back convection.

on June 16, 2011 at 9:56 pm | omnologos

SoD – the cacophony is almost too much to bear. Have you considered introducing a limit on the number of comments anybody can add to a blog? Say, five should be enough.

The ongoing “heat” discussion is now a farce among people that don’t know when to stop. And it’s smearing your blog. ENOUGH ALREADY!!

on June 17, 2011 at 4:33 am | KingOchaos

It is getting a bit beyond ridiculous… There are a series of threads dedicated to educating Bryan on this very subject, if my memory serves me correctly.

But there is nothing wrong with a good discussion; if there were a five-comment limit per article, the Venus threads, for example, would not have been as educational and interesting as they were.

But the whole “heat” debate is just playing with words.

on June 16, 2011 at 10:04 pm | scienceofdoom

omnologos:

1. The first important principle to understand is that with a totally static system, reaching equilibrium depends on the heat capacity of the system. (Actually it takes an infinite time to reach, but that is more “splitting hairs” – e.g., if the theoretical equilibrium is 20°C and the system starts at 10°C, it might take 1 hour to reach 19°C, 10 hours to reach 19.9°C and 100 hours to reach 19.99°C – and a million years later it is at 19.999999999999…°C – and sharp-eyed observers can note my ratios are probably not correct.)

2. The second important principle to understand is that with a dynamic system, equilibrium is only an approximation. You can see this with a simple experiment where you heat a tank of water with a varying input of heat. Due to thermal lag the temperature will always be changing, “trying” to reach the new equilibrium.

If these two points aren’t clear then the following section won’t be understood.

3. The lunar surface has a very low heat capacity meaning that its surface temperature can adapt very quickly to whatever solar radiation is absorbed by the surface. But if you understand point 1 and point 2 you can see that any given point might be close to radiative equilibrium at any time, or might never be at any given time. However, over a period of time energy in = energy out (or the overall system will be heating up or cooling down). So the concept of radiative equilibrium usually means “averaged over some time period” and often means “averaged over some time period and over some surface area, and even then is more of a useful approximation”.

4. Average temperature is not a useful average when there are wide swings in temperature.

You can see this explained in much more detail in Lunar Madness and Physics Basics.
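[Ed.: the parenthetical in point 1 above, about asymptotic approach to equilibrium, can be made concrete with a first-order relaxation model. A sketch; the time constant of 1 (hour) is an arbitrary assumption:]

```python
import math

# First-order approach to equilibrium: T(t) = T_eq - (T_eq - T0) * exp(-t / tau)
def temperature(t, t0=10.0, t_eq=20.0, tau=1.0):
    """Temperature after time t (same units as tau)."""
    return t_eq - (t_eq - t0) * math.exp(-t / tau)

def time_to_gap(gap, t0=10.0, t_eq=20.0, tau=1.0):
    """Time until the remaining gap to equilibrium shrinks to `gap` degrees."""
    return tau * math.log((t_eq - t0) / gap)

for gap in (1.0, 0.1, 0.01):          # i.e. reaching 19, 19.9 and 19.99 degrees
    print(gap, round(time_to_gap(gap), 2))
# Each extra decimal place of approach costs the same increment (tau * ln 10),
# so the times run 1 : 2 : 3 rather than 1 : 10 : 100 -- and the exact
# equilibrium value is only reached asymptotically, as the comment says.
```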

on June 17, 2011 at 12:01 am | DeWitt Payne

So how big a mirror would you need to cook an egg with solar radiation at a distance of 2 light years? I’m not going to bother to look up the luminosity of Proxima Centauri to calculate the exact point of balance. I’ll also define cooked as hard boiled. To hard boil an egg takes 8–15 minutes at boiling water temperature, 373.15 K, hardly the age of the Universe. Assume a spherical egg weighing 0.06 kg and having a density of 1 g/cm³. The volume of the egg is then 6E-5 m³ and the surface area is 0.007412 m². At a skin temperature of 373.15 K and ε = 1, that’s 8.15 W. So we’ll spin the egg at the focal point of a parabolic reflector. What’s the diameter of the aperture of the reflector needed?

At 93E6 miles from the sun, the irradiance is 1346 W/m². 2 light years is 1.175E13 miles. Since irradiance declines as r⁻², it will decrease by a factor of 6.26E-11, and the area of the aperture needed to capture 8.4 W would be 1E8 m². That makes the diameter equal to 11.284 km.

I haven’t specified a starting temperature, but unless it’s frozen and I have to deal with the heat of fusion, the heat capacity of water is ~4200 J/kg/K which makes the heating rate for a 0.060 kg egg at 8.4 J/sec ≅ 2 K/minute. Still nothing like the age of the Universe.
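[Ed.: the arithmetic above is easy to re-run. A sketch using the stated inputs (a 0.06 kg spherical egg, 1346 W/m² at 1 AU, 2 light years = 1.175E13 miles):]

```python
import math

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_FLUX_1AU = 1346.0  # W/m^2 at 93E6 miles (the figure used above)
AU_MILES = 93e6
TWO_LY_MILES = 1.175e13

# Spherical 0.06 kg egg at density 1 g/cm^3 -> volume 6e-5 m^3
volume = 6e-5
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
area = 4 * math.pi * radius ** 2          # ~0.0074 m^2

# Power the egg radiates at 373.15 K with emissivity 1
p_out = SIGMA * 373.15 ** 4 * area
print(round(p_out, 2))                    # ~8.15 W

# Inverse-square dilution of the sunlight at 2 light years
flux = SOLAR_FLUX_1AU * (AU_MILES / TWO_LY_MILES) ** 2
aperture_area = p_out / flux              # m^2 of mirror needed to capture p_out
diameter = 2 * math.sqrt(aperture_area / math.pi)
print(round(diameter / 1000, 1))          # ~11.1 km, in line with the 11.284 km above
```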

on June 17, 2011 at 7:42 am | Bryan

DeWitt Payne

Thank you for your calculation.

The starting temperature is 2.7 K and so the egg would be frozen.

Have you factored in the simultaneous cooling of the egg during that time?

on June 17, 2011 at 8:32 am | Bryan

The heat loss from the egg at 373 K is about 8.6 joules per second.

on June 17, 2011 at 4:22 pm | DeWitt Payne

Bryan,

If there’s 8 W in and less than 8 W out, there is no cooling.

I get 8.15 and you have enough of the details of the calculation to see how I did it. Asserting I’m wrong won’t cut it. Either show me your calculations or show me where I’m wrong.

Doing the full calculation for radiation in and out and assuming 2.1 kJ/kg/K for a frozen egg, 4.2 kJ/kg/K for a liquid egg and 334 kJ/kg for the heat of fusion, and neglecting internal thermal conductivity, it would take 4313 seconds for the egg to reach 0 C, an additional 3306 seconds to thaw and 6900 seconds more to reach an internal temperature of 90C where the yolk is solid and crumbly for a total of 15319 (way more significant figures than are justified) seconds or ~4 hours and 15 minutes, less if you only want soft boiled. The freezing would probably ruin the consistency, though, and the over half hour spent between 40 and 140 F might not help either.
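[Ed.: a sketch re-running that staged calculation numerically, with the assumed values stated above (8.4 W captured, 2.1 and 4.2 kJ/kg/K heat capacities, 334 kJ/kg heat of fusion) and a simple one-second time step:]

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
AREA = 0.007412      # m^2, surface area of the spherical egg
MASS = 0.06          # kg
P_IN = 8.4           # W delivered by the mirror (the figure used above)

def seconds_to_heat(t_start, t_end, cp):
    """Euler-step the egg from t_start to t_end (K), radiating as it warms."""
    t, elapsed, dt = t_start, 0.0, 1.0
    while t < t_end:
        net = P_IN - SIGMA * t ** 4 * AREA     # absorbed minus emitted power
        t += net * dt / (MASS * cp)
        elapsed += dt
    return elapsed

warm_ice = seconds_to_heat(2.7, 273.15, 2100)                 # frozen egg up to 0 C
thaw = MASS * 334000 / (P_IN - SIGMA * 273.15 ** 4 * AREA)    # melting at 0 C
cook = seconds_to_heat(273.15, 363.15, 4200)                  # liquid egg up to 90 C
total = warm_ice + thaw + cook
print(round(total / 3600, 1))   # ~4 hours: the same ballpark as the 15319 s above
```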

on June 17, 2011 at 5:17 pm | DeWitt Payne

mkelly,

You demonstrate your lack of understanding of thermodynamics. Try reading this:

http://www.av8n.com/physics/thermo-laws.pdf

Energy transfer between two parallel blackbody plates at different temperatures (emissivity taken as 1):

E = σ(T1^4-T2^4)

From the point of view of plate 2, that is energy in from plate 1 = σT1^4 less energy out from plate 2 = σT2^4. That’s back radiation in a nutshell.
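[Ed.: the bookkeeping in that nutshell is easy to see in numbers. A sketch; the 288 K / 260 K pair is an illustrative choice, not a figure from the thread:]

```python
# Net radiative exchange between two blackbody plates: E = sigma * (T1^4 - T2^4)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def net_flux(t1, t2):
    """Net energy flow from plate 1 to plate 2 in W/m^2 (positive if t1 > t2)."""
    return SIGMA * (t1 ** 4 - t2 ** 4)

t1, t2 = 288.0, 260.0                 # e.g. a warm surface facing a cooler plate
print(round(SIGMA * t1 ** 4))         # ~390 W/m^2 emitted by plate 1
print(round(SIGMA * t2 ** 4))         # ~259 W/m^2 emitted by plate 2 (the "back radiation")
print(round(net_flux(t1, t2)))        # ~131 W/m^2 net, always toward the cooler plate
```

Both plates radiate; the net of the two one-way flows is what thermodynamics books call heat.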

on June 18, 2011 at 12:16 pm | Bryan

DeWitt Payne

At a temperature of 373.15 K the egg will be radiating at 7.5 W, using your area and the SB equation σT⁴.

The egg will not have the same emissivity/absorptivity value for the solar radiation as for the egg’s thermal emission:

Solar e = 0.8

Thermal e = 1

Both these factors will make the time taken considerably longer than the ~4 hours and 15 minutes you calculated.

The 11.284 km diameter reflector seems about right.

I calculated 13.8 km for a 10 W heating effect, scaled up for the emissivity difference.

You could argue about some of the assumptions, but I think you will agree that for a reflector diameter less than 10 km the time taken would not just be longer.

It would not happen at all.

So for these cases, my estimate of the age of the Universe was a considerable underestimate.

on June 19, 2011 at 12:07 am | Ian

Science of Doom

Totally off topic…would it be possible for you to include a ‘latest comments’ box somewhere on your home page?

Best wishes, ian

on June 19, 2011 at 3:01 am | scienceofdoom

Ian,

Brilliant idea, it has been done.

on June 19, 2011 at 2:38 am | jae

Fascinating, indeed! You are FINALLY taking your AGW/CO2/radiation-only blinders off and addressing the issues that Alan Siddons et al. have been touting for years. If and when you finally sift through all of your calculations, you will probably find that there is really no greenhouse effect at all, just storage of heat.

LOL.

ps. I have all the empirical evidence on “my side.”

on June 19, 2011 at 3:17 pm | DeWitt Payne

Bryan,

373.15^4 = 1.94E+10

1.94E+10 * 5.67E-08 = 1.10E+3 W/m²

1.10E+3 * 7.412E-3 = 8.15 W

I’ll paint the egg black.

Obviously, below a certain area, you won’t even thaw the egg, much less cook it.

on June 20, 2011 at 7:27 am | Bryan

DeWitt Payne says:

….”I’ll paint the egg black”….

You will need to make sure that it’s black in the IR as well.

Carbon nanotubes will be your best bet.

Then of course there’s the extra mass of the paint to be budgeted for and so on ….

on June 20, 2011 at 2:59 pm | mkelly

DeWitt Payne says:

mkelly,

Heat, energy. Energy, heat. They are different. Heat (thermal energy) only exists crossing a boundary where a temperature gradient exists. Heat requires matter to exist.

You demonstrate your lack of understanding of thermodynamics. Try reading this:

Sorry Mr. Payne, I was quoting my thermodynamics book. So where does that leave us? Battling books? My PhD in engineering knows more than your PhD in engineering?

Heat only exists when crossing a boundary where a temperature difference exists. No gradient, no heat transfer.

Your back radiation explanation is totally different from what is shown elsewhere. Call it what it is, then: a temperature gradient only.

on June 20, 2011 at 4:04 pm | Mait

Why is the theoretical limit of a solar oven 6,000 K? (I’m not quite sure I understand how you can attribute a temperature limitation to this device at all, to be honest.) There are a lot of practical limits to what it can do, but theoretical limits puzzle me at the moment (at least at this scale).

on June 20, 2011 at 4:14 pm | lgl

dT/dz = g/cp

Does this mean:

1. more GHGs will not increase the temperature of a layer, just ‘speed up’ the convection?

2. the downward radiation will increase only because of higher GHG concentration (and not because of higher temperature)?

If so, is this how it is modeled?
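[Ed.: the formula quoted here is the dry adiabatic lapse rate. A quick sketch with standard values (g = 9.81 m/s², cp = 1004 J/kg/K for dry air; the 288 K surface temperature is an illustrative assumption):]

```python
# Dry adiabatic lapse rate: a parcel lifted adiabatically cools at g/cp.
g = 9.81       # m/s^2
cp = 1004.0    # J/kg/K, specific heat of dry air at constant pressure

lapse = g / cp                  # K per metre
print(round(lapse * 1000, 2))   # ~9.77 K/km: the ~9.8 K/km quoted in the thread

# Temperature of a dry parcel lifted from a 288 K surface:
for z_km in (1, 2, 5):
    print(z_km, round(288 - lapse * 1000 * z_km, 1))
```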

on June 20, 2011 at 6:59 pm | DeWitt Payne

Mait,

Think of a solar oven as expanding the sky coverage of the sun. The limit of coverage is 100%. The temperature of the sky would then be ~6,000 K (the brightness temperature of the surface of the sun) and so would the temperature of the object inside. It can’t go any higher or you violate the Second Law by having net energy flow from a cooler to a warmer object.

Another way of looking at it is that you can’t focus the sun’s light to an arbitrarily small point, and thus an arbitrarily high temperature, because the sun isn’t a point. Even for a perfect lens with perfect transparency, the image of the sun at the focus can’t have a temperature higher than the sun itself.

on June 20, 2011 at 8:23 pm | Mait

The temperature of the sky would depend on the height of the sky from the surface and some other stuff (emissivity and thingies like that). Other solutions would be a violation of the law of conservation of energy.

I’m sure someone can come up with an explanation of why a solar oven that would create higher temperatures than 6,000 K doesn’t violate the second law of thermodynamics as well. I unfortunately can’t, due to my negative feelings towards them laws of thermodynamics (they are mostly unnecessary and tend to create “monsters” like Bryan).

on June 20, 2011 at 7:03 pm | DeWitt Payne

lgl,

That’s the adiabatic lapse rate for dry air. It only means that a parcel of dry air raised or lowered will always have the same temperature as the air around it. The work done lifting the parcel is exactly balanced by the increase in gravitational potential energy and the work done by expansion and cooling. It has precisely nothing to do with the greenhouse effect and greenhouse gases.

on June 20, 2011 at 7:06 pm | DeWitt Payne

lgl,

Well, not precisely nothing. The greenhouse effect only exists if the temperature decreases with altitude. But that will always happen in the lower atmosphere in the real world.

on June 20, 2011 at 7:48 pm | DeWitt Payne

mkelly,

From John Denker linked above:

Section 10.5.5 is about viscous dissipation in a layer of oil between a rotating and a fixed plate. The temperature of the oil increases over time. Fit that into your overly simplistic definition of “heat”. Quoting textbooks doesn’t mean you actually understand concepts, because you clearly don’t.

on June 20, 2011 at 8:36 pm | mkelly

….”Section 10.5.5 is about viscous dissipation in a layer of oil between a rotating and a fixed plate. The temperature of the oil increases over time. Fit that into your overly simplistic definition of “heat”. Quoting textbooks doesn’t mean you actually understand concepts, because you clearly don’t.”….

I find it odd that you and sometimes SoD find it necessary to insult folks that post here. I have never insulted you or SoD.

Having gotten my degree in mechanical engineering, I believe I understand concepts as well as you. I taught basic radar theory and the basic physics of underwater sound in the Navy (many years ago), so I can and do understand concepts. I believe most people should know and discern the different types of energy (KE, PE, thermal, chemical, atomic, etc.), so I prefer to keep heat (thermal energy) and plain energy apart. Disagree as you will, but it does not hurt to have a distinction.

So I assume you think your PhD knows more than my PhD. 🙂

on June 20, 2011 at 8:25 pm | DeWitt Payne

mkelly,

I forgot this one:

Heat transfer by conduction across a temperature gradient in an insulator is proportional to the difference of the two temperatures. Heat transfer by radiation across a vacuum is proportional to the difference of the fourth power of each temperature (not the fourth power of the difference). Not the same at all.
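[Ed.: a sketch of that contrast. The same 10 K difference gives a very different radiative flux depending on the absolute temperatures, whereas a conductive flux k·(T1 − T2) would be identical in both cases; the temperature pairs are illustrative choices:]

```python
# Radiative exchange scales with T1^4 - T2^4, not with T1 - T2.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiative(t1, t2):
    """Net blackbody exchange between two surfaces, W/m^2."""
    return SIGMA * (t1 ** 4 - t2 ** 4)

print(round(radiative(260, 250), 1))   # ~37.6 W/m^2 for a 10 K gap at low temperature
print(round(radiative(360, 350), 1))   # ~101.5 W/m^2 for the same 10 K gap, but hotter
```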

on June 20, 2011 at 8:47 pm | mkelly

DeWitt Payne says:

mkelly,

I forgot this one:

Your back radiation explanation is totally different than shown elsewhere. Call it what it is then a temperature gradient only.

Heat transfer by conduction across a temperature gradient in an insulator is proportional to the difference of the two temperatures. Heat transfer by radiation across a vacuum is proportional to the difference of the fourth power of each temperature (not the fourth power of the difference). Not the same at all.

Mr. Payne, as you know the two minimum requirements for heat transfer are a path and a temperature gradient; no temperature gradient, no heat transfer, whether by conduction or by radiation. If T1 – T2 is ZERO, then no matter what power you raise it to there is no transfer. They are the same in that respect.

on June 20, 2011 at 10:45 pm | Bryan

DeWitt Payne says

…….”From John Denker linked above”………

John Denker is a “whacko” character with some odd ideas.

Nobody quotes John Denker apart from himself and perhaps you.

John Denker likes to bask in the reflected glory of Feynman by referring to him often.

Yet what does Feynman say on thermodynamics?

The only book recommended by Feynman in his famous three-volume lecture notes was Heat and Thermodynamics by the ultra-orthodox Zemansky.

Even Denker admits that the traditional definitions of heat, work, energy are the ones used in physics textbooks.

In fact the only people who seem to have a difficulty with the traditional definitions are those who peddle the “greenhouse theory”.

I suppose that Clausius saying that heat only moves spontaneously from a higher temperature surface to a lower temperature surface gives them a problem.

Their solution is to change the meaning of heat – it’s such an awkward word.

on June 21, 2011 at 3:18 am | DeWitt Payne

mkelly,

In a word, Duh! And this applies to the greenhouse effect and the concept that radiation goes in all directions how? The atmosphere does radiate toward the surface. You can easily check this with an IR thermometer. That is the source of the greenhouse effect. You can call it back radiation or something else, but it certainly exists. The atmosphere is cooler than the surface. The surface is cooler than the sun. There is net energy flow from the sun to the Earth and from the Earth to the atmosphere and then to space.

on June 21, 2011 at 7:27 pm | mkelly

Mr. Payne, so you agree with me then. Very good. I am sure the insult was unintended.

Just because you can measure something does not mean it has importance. Only the temperature difference is important; how much back IR is measured is not a factor in heat transfer. No equations use it.

on June 21, 2011 at 3:54 am | DeWitt Payne

Bryan,

Hardly. It’s only awkward when it’s misused, generally by those who claim that greenhouse theory violates one or more of the laws of thermodynamics. There is nothing in greenhouse theory that requires a spontaneous net flow of energy from low to high temperature. There is no energy balance diagram that shows such a flow. It’s a classic strawman argument that can only be made by willful misinterpretation and tortured logic.

on June 21, 2011 at 4:17 am | cohenite

If radiation were heat then space would be hot; I don’t see that the difference between energy and heat is a semantic one. Two lightbulbs of different strengths will emit radiation at different levels: the stronger one will not be heated by the weaker one, but it will lose heat less quickly because of the radiation coming from the weaker one; the weaker one will be heated by the stronger, and this is shown by the equilibrium between the two being higher than the base temperature of the weaker.

I disagree strongly with suggestions that comments be restricted; things have a way of emerging and serendipity rules any conversation such that insights can appear from the least likely and usually the more unconventional sources.

SoD and indeed others such as Neal King have pronounced that pressure does not cause temperature. I have been looking at Uranus and Neptune; solar input is minimal, and almost non-existent in Neptune’s case, where at its surface, where its clouds touch space, the temperature is −218°C; however, there is a pronounced lapse rate in Neptune’s atmosphere and its core surface is much warmer than its outer surface. Neptune does produce some internal heat, so that might explain some of the downward temperature increase, but Uranus does not have internal heat and it still has a lapse rate:

http://en.wikipedia.org/wiki/File:Tropospheric_profile_Uranus_new.svg

Is that lapse rate due to the methane in the atmosphere or the pressure of the atmosphere itself?

on June 21, 2011 at 8:48 am | Bryan

The big picture discussed here brings no comfort to those who believe in a greenhouse theory.

The Postma article page 14 (easily checked by your own calculations) calculates that in sun facing equatorial regions the ground temperature reaches 50C while the air temperature reaches 35C.

As DeWitt Payne points out almost any heat transfer in the troposphere will quickly set up the dry adiabatic lapse rate =-9.8K/km.

Now since this is determined by the hydrostatic formula it is independent of convection.

However convection currents are almost always present and they modify the actual lapse rate in such a way as to produce a cooling effect.

In any serious article that I have read (e.g. from NASA) the modification seems to be largely determined by the presence of H2O, determining humidity and in particular the latent heat of vaporisation’s contribution, which helps stabilise the temperature of the troposphere.

The radiative properties of CO2 seem to play no significant part in the troposphere.

At the top of the atmosphere CO2 and other radiative gases and cloud surfaces radiate long wavelength EM radiation to the universe.

There is very little discussion on the effects of the increase in CO2 concentration above the troposphere.

on June 21, 2011 at 9:04 am | scienceofdoom

I propose that Bryan puts forward his values for globally averaged quantities of:

1. Upward emission of radiation by the earth’s surface

2. Downward emission of radiation by the atmosphere (as seen at the earth’s surface)

3. Absorbed radiation at the surface from 2 (atmospheric radiation).

4. What happens to the balance of 2 – 3?

5. Net convective transfer from the surface to the atmosphere

6. Upward emission of radiation by the climate system into space

7. Absorbed solar radiation by the total climate system

And his theory that explains these values of radiation & convection.

on June 21, 2011 at 9:45 am | Bryan

scienceofdoom

Since this topic is on water vapour and convection, I prefer to stay on topic.

You have not disputed my post on the major mechanisms of the troposphere temperature distribution.

In what way and to what extent will the radiative properties of CO2 modify the broad outline?

There is no doubt for instance that CO2 will absorb 15um radiation and that by collisions this produces a tiny local heating effect.

But this small heating effect is swamped by the massive convective flows and is almost unmeasurable as an independent heating effect within a cubic metre of the earth’s troposphere.

on June 22, 2011 at 12:36 am | KingOchaos

”In what way and to what extent will the radiative properties of CO2 modify the broad outline?”

By changing the height at which energy can be moved from the boundary layer – at what height convection is replaced by radiation as the means of moving energy – raising the average height of the tropopause.

Take a pot of water (we’ll assume the sides are perfectly insulated to save silliness), put it on an element at a low setting, and heat it. The “heat” will conduct through the bottom of the pot into the lower levels of the water, convection will mix the “heat” through the water, and the rate at which it loses energy from the surface will determine the equilibrium temperature… yes? So the depth of the water versus surface area is going to affect how much energy it loses (through evaporation/conduction/radiation).

Now we put a lid on the pot, further limiting energy losses. It will increase the temperature of the water until enough energy is being moved from the air/water vapour through conduction/radiation into the lid of the pot that the pot lid is radiating out as much energy as is being input into the water through the bottom of the pot… yes? (We can put the sealed pot in a vacuum too if you want 😉)

Now if we increase the thickness of the lid… do you believe this will have no effect on the equilibrium temperature of the pot? (Bearing in mind that a temperature differential is required to move energy via conduction/convection or radiation – the old “heat only flows from higher to lower” etc.) The outer radiating surface of the pot will be the same temperature, but it will require a temperature gradient to move the energy to the radiating surface… the same applies for convective flows.

on June 22, 2011 at 7:30 am | Bryan

KingOchaos

I said above

“The Postma article page 14 (easily checked by your own calculations) calculates that in sun facing equatorial regions the ground temperature reaches 50C while the air temperature reaches 35C.”

Would not be changes by more CO2

“As DeWitt Payne points out almost any heat transfer in the troposphere will quickly set up the dry adiabatic lapse rate =-9.8K/km.”

Would not be changes by more CO2

“Now since this is determined by the hydrostatic formula it is independent of convection.

However convection currents are almost always present and they modify the actual lapse rate in such a way as to produce a cooling effect.”

Would not be changed by more CO2.

“In any serious article that I have read( e.g. from NASA) the modification seems to be largely determined by the presence of H2O, determining humidity and in particular the latent heat of vapourisation’s contribution which helps stabilise the temperature of the troposphere.

The radiative properties of CO2 seem to play no significant part in the troposphere.”

More CO2 might induce a very small rise in evaporation, which:

1. Decreases the lapse rate and increases the height of the troposphere

2. Induces more cloud, which many argue leads to net cooling and an increase in the lapse rate

“At the top of the atmosphere CO2 and other radiative gases and cloud surfaces radiate long wavelength EM radiation to the universe.

There is very little discussion on the effects of the increase in CO2 concentration above the troposphere.”

More CO2 at TOA will produce more cooling.

More cooling at TOA works back to bring tropospheric lapse rate back to dry adiabatic lapse rate.

The interactive mechanisms seem to produce negative feedback which stabilizes our climate, rather than positive feedback which would exaggerate small changes.


on June 21, 2011 at 10:25 am | scienceofdoom

On topic? A sudden desire to stay “on topic”? The world is a beautiful place once again.

Anyway, I’m sure staying “on topic” would be preferable to putting forward values for radiation and convection in the earth’s atmosphere. I totally understand why you don’t want to put numbers out there. Take the 5th, that’s my advice.

For the people reading who are starting out, I will just help by explaining that if you put together 10 people who believe the inappropriately-named “greenhouse” effect doesn’t exist, you will get 10 different sets of numbers and 10 different explanations for these numbers.

That’s if you can get numbers at all.

on June 21, 2011 at 11:03 am | Bryan

scienceofdoom,

Please feel free to put your own values forward.

In the KT averaged atmosphere, what is the heating effect of CO2 on a cubic metre of the earth’s troposphere?

Considering the alarm and despondency evident among greenhouse theory experts surely this value must have been their starting point.

on June 21, 2011 at 11:13 am | scienceofdoom

In the many articles you have read and commented on, I have put numbers forward. You dispute the numbers, you dispute measurements, you claim that everyone with any sense can see that the consensus numbers are wrong.

Now it’s your turn. Your time to shine.

Don’t want to state your case? I understand. The readers understand.

Call your lawyer and take the 5th.

The readers who didn’t understand before may now get the message.

on June 21, 2011 at 11:31 am | Bryan

scienceofdoom

If I had a problem, let’s say an electric kettle with a kilogram of water and a 2000 W heater,

I could tell you the heating effect per cubic centimetre when operating.

In the averaged KT diagram, despite all the flows into and out of the troposphere, there is no such calculation for CO2’s heating effect.

R W Wood said it is so small it can be ignored.

G&T and Postma agree.

You must have this figure somewhere.

Why keep it a secret?

on June 21, 2011 at 4:17 pm | lgl

DeWitt Payne,

“That only means that a package of dry air raised or lowered will always have the same temperature as the air around it”

But doesn’t it also mean that an atmospheric layer can’t heat unless the surface was heated first?

on June 21, 2011 at 5:47 pm | DeWitt Payne

lgl,

Yes. If the flow of thermal energy upward from the surface (and also from the atmosphere) caused by absorption of solar radiation didn’t exist, then there couldn’t be a temperature gradient. At high latitudes in the winter, the energy flow can be downward and the sign of the temperature gradient reverses because the ground radiates energy away more efficiently than the atmosphere. That’s called a temperature inversion. Vertical convection doesn’t happen when there’s a temperature inversion because a raised parcel of air would be colder and denser than the air around it and would sink back down.

A lapse rate greater than the adiabatic rate is unstable to vertical convection because a lifted parcel of air would be warmer and less dense than the air around it and would continue to rise.

See R. Caballero’s Lecture Notes on Physical Meteorology:

http://maths.ucd.ie/met/msc/PhysMet/PhysMetLectNotes.pdf (20+ MB)
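DeWitt’s parcel argument can be sketched in a few lines (my illustration, not from the comment; the 9.8 K/km dry adiabatic rate is from the thread, the surface temperature is an arbitrary choice): lift a dry parcel, cool it at the adiabatic rate, and compare it with the environment.

```python
# Parcel-stability sketch: a lifted dry parcel cools at the dry adiabatic
# rate; if it ends up colder (denser) than its surroundings it sinks back.

GAMMA_DRY = 9.8  # dry adiabatic lapse rate, K/km (from the thread)

def stability(env_lapse_km, t_surface=288.0, dz_km=1.0):
    """Compare a lifted parcel to its environment after rising dz_km."""
    t_parcel = t_surface - GAMMA_DRY * dz_km
    t_env = t_surface - env_lapse_km * dz_km
    if t_parcel < t_env:
        return "stable"      # parcel denser than surroundings, sinks back
    elif t_parcel > t_env:
        return "unstable"    # parcel buoyant, keeps rising
    return "neutral"

print(stability(6.5))    # typical tropospheric lapse rate -> stable
print(stability(-5.0))   # temperature inversion -> strongly stable
print(stability(12.0))   # super-adiabatic -> unstable, convection
```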

on June 22, 2011 at 9:24 am | scienceofdoom

Completely off-topic.

DeWitt –

I started reading a ClimateAudit post about the tropospheric hotspot, or lack thereof. Lots and lots of random comments, a few interesting ones about stats, and your comment from Apr 29, 2008 at 9:41 AM captured the real question that everyone should have been asking.

Did you get enlightenment on the answer, at ClimateAudit or elsewhere?

on June 22, 2011 at 11:35 am | KingOchaos

Bryan (June 22, 2011 at 7:30 am):

Bryan, what exactly are you referring to as TOA? An increase in CO2 in the stratosphere will cause cooling, but that’s because at those pressures the path length is so short that it’s a net emitter. A decrease in stratospheric temperatures would result in a negative forcing on the top of the troposphere… but it would not be overly significant. SoD has figures for these in his stratospheric cooling threads, and he had a few good papers linked there as well.

But the tropopause is the tropopause because it’s at this layer that radiative losses from the troposphere become dominant; this is at around 10 km… this altitude is where energy is lost via radiation from the troposphere to space (OK, the average is 6 km, but 15 micron is at the tropopause). This is because the path length has shortened due to the decreased pressure/number of molecules in a given area… by increasing the number of molecules, you increase the path length at this boundary layer, slightly raising the height at which the path length becomes short enough that energy is moved out via radiation (thickening the pot lid).

So it raises the height of the troposphere. Yes… So this new higher altitude would need to reach 255 K so that it is able to move out as much energy as is coming in… which would mean the layer below would need to warm enough that its differential was enough to move this energy up to the new higher layer, and so on, all the way down to the surface (on average, assuming SW/albedo etc. remain constant).

The energy is moved up from the surface predominantly by convection… convection stops when radiation is moving away more energy than what is being pushed up via convection.

on June 22, 2011 at 3:18 pm | DeWitt Payne

scienceofdoom,

No. I’m still waiting for an answer on that one, not that I’ve actively pursued it. That whole topic of the tropical upper tropospheric hot spot seems to have gone down the memory hole on both sides of the question. There did seem to be some evidence that long term behavior was different than short term. Lapse rates in the short term, seasonally for example, seem to behave as expected. In the long term, not so much. But given the uncertainty in the data, it’s hard to tell. Maybe everyone is just waiting for another decade of satellite data.

There was also the question of whether the hot spot was a signature of greenhouse warming specifically or if it would occur for any warming. I think the answer was that it wasn’t specific to greenhouse, but I’m not absolutely sure. If it isn’t a signature, it becomes somewhat less interesting.

on June 22, 2011 at 3:25 pm | DeWitt Payne

KingOchaos,

The tropopause is where it is not just because the optical density in the thermal IR goes below 1 for CO2. It’s there because that’s where the temperature inversion caused by warming from absorption of UV by oxygen and ozone becomes dominant. In fact, the break point in the lapse rate is used to define the location of the tropopause:

When the lapse rate is much less than the adiabatic rate, convection, for all intents and purposes, doesn’t happen.

on June 22, 2011 at 8:12 pm | KingOchaos

Yes… but there is a temperature inversion because of the optical depth in 15 micron from the tropopause up; if this was not so, the stratosphere would be isothermal and there wouldn’t be an inversion. So the T profile exists above the tropopause because of O2/O3 absorption of UV, and the cooling effects of CO2/H2O at these altitudes…

I was talking 255 K, as in the average height, but point taken on height varying vs latitude.

Now, if radiation wasn’t moving the energy out from this level (if it was optically thick), would it not result in a buildup of energy, which would result in convection once again restoring balance?

on June 22, 2011 at 3:37 pm | DeWitt Payne

KingOchaos,

Another thing that affects the height of the tropopause is air density. In the tropics where the air is warmer and less dense, the height of the tropopause is greater than for high latitudes where the air is colder and denser. The temperature profile used in MODTRAN for tropical atmosphere has the tropopause at 18 km altitude. For sub-arctic winter, it’s 9 km. That makes the temperature difference between the surface and the tropopause, and hence the magnitude of CO2 forcing, greater in the tropics than in the sub-arctic.
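DeWitt’s MODTRAN altitudes can be turned into a rough back-of-envelope check (my sketch; the 6.5 K/km mean lapse rate is an assumed textbook average, not a figure from the comment): a higher tropopause means a larger surface-to-tropopause temperature drop, hence stronger CO2 forcing in the tropics.

```python
# Rough illustration: surface-to-tropopause temperature difference
# for the two MODTRAN tropopause heights mentioned above.
# MEAN_LAPSE is an assumed mean lapse rate, not a MODTRAN value.

MEAN_LAPSE = 6.5  # K/km, assumed mean tropospheric lapse rate

def surface_to_tropopause_drop(tropopause_km):
    """Approximate temperature difference between surface and tropopause, K."""
    return MEAN_LAPSE * tropopause_km

print("tropical (18 km):", surface_to_tropopause_drop(18), "K")
print("sub-arctic winter (9 km):", surface_to_tropopause_drop(9), "K")
```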

on June 23, 2011 at 2:13 am | KingOchaos

I’ve had a think about this… and the reasoning for the variance doesn’t seem correct to me. I would assume the higher altitude of the tropopause in the tropics is not a function of air density as such, but path length… with vastly more water vapor in the atmosphere vs altitude in the mid latitudes than at higher latitudes, as a result of greater evaporation/convection?

With denser air, it should be more opaque, if the composition is the same as less dense air?

on June 24, 2011 at 10:35 pm | suricat

I think both you guys are ignoring Earth rotation. As the ‘boundary layer’ is almost stationary to Earth’s surface, the atmosphere here ~follows Earth’s rotational speed. Thus, we have to follow a ‘weights and measures’ directive for the latitudinal calibration of a ‘spring balance’!

At the equator, Earth’s rotation provokes a centrifugal moment in excess of 7.3 cm/sec^2 (can’t remember the exact number, it’s a long time since I made calculations [and they were in inches]) in counterpoise to the gravity constant. This results in a ‘dead weight’ weighing ~5-10% less (ball park figure) at the equator than it would at a pole when employing a ‘spring balance’ or a ‘load cell’. A ‘mass comparison device’ (beam balance) is excluded due to its own mutual affectation of both the ‘test mass’ and the ‘mass to be weighed’.

However, Earth’s atmospheric envelope is a bit more dynamic than a ‘dead weight’ and calls for a more ‘dynamic’ explanation. I can only do this in ‘engineering terms’. I would respectfully suggest that you consider the atmosphere above the equator to display the properties of a ‘radial turbine’, with the atmosphere above the poles displaying the properties of a ‘planar turbine’. These properties give us ‘climate cells’.

Best regards, Ray Dart.

on June 22, 2011 at 8:15 pm | Mait

Payne,

On the subject of theoretical limits of solar furnaces. Imagine we used some huge mirrors’n’stuff to focus all the light from the sun on a blackbody football. According to my logic the flux to the football has to be the same as the output of the sun (that’s like a gazillion watts or something, I imagine). As the poor football is entirely black it has no choice but to absorb all that energy, and as we’ve deprived it of any other means of losing energy (we put it all alone in a vacuum) it can only lose energy by radiating at sigma times T to the power of 4 times the surface area of the football. I would imagine the T would have to be quite great in order to reradiate all the energy shoved into it from the sun.

At least that’s what my simple logic tells me (help me out if I got lost somewhere).

on June 23, 2011 at 1:22 am | DeWitt Payne

Mait,

The problem is that you simply can’t focus all the energy on an arbitrarily small object, your football e.g. The image has a finite size and the effective temperature of the image can’t be higher than the sun itself. You can’t further focus an image because the light rays at the image aren’t parallel. A compound mirror or lens still only has one focal length.

Suppose you put a mirror in orbit with the apparent diameter of the sun when observed from the Earth’s surface. You now have effectively two suns in the sky. Keep doing that and eventually you cover the whole sky. But the temperature of the sky is now the surface temperature of the sun and can’t go any higher, nor could any object inside.
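DeWitt’s “keep adding suns until you cover the whole sky” argument can be made quantitative with a short sketch (mine, not his; T_SUN = 5772 K is an assumed standard value): a blackbody whose sky is a fraction f sun-image and otherwise empty space equilibrates at f^(1/4) times the solar temperature, which approaches but never exceeds it.

```python
# Equilibrium temperature of a blackbody in vacuum when a fraction f
# of its sky is filled with images of the sun (the rest at ~0 K):
# absorbed = f * sigma * T_SUN**4, emitted = sigma * T**4, so
# T = f**0.25 * T_SUN, which never exceeds T_SUN.

T_SUN = 5772.0  # K, approximate solar effective temperature (assumed)

def equilibrium_temp(sky_fraction):
    """Steady-state temperature for a given sun-filled sky fraction."""
    return sky_fraction ** 0.25 * T_SUN

for f in (1e-5, 0.5, 1.0):
    print(f, round(equilibrium_temp(f)))
```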

on June 23, 2011 at 4:51 am | DeWitt Payne

KingOchaos,

Vastly more water vapor is still only a few percent and that’s only near the surface. In terms of precipitable water it’s about 6 cm compared to a global average of 2.5 cm. Six cm/m² of liquid water is 60 kg of water or about 75 m³ of vapor at STP. That’s a tiny fraction of the 10,000 kg of the atmosphere/m².

When you look at variation in optical density, the best unit of measure is pressure rather than altitude. The optical density of CO2 at 15 µm will be about 1 at a pressure of ~200 mbar.
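DeWitt’s column arithmetic above checks out; here is the back-of-envelope in code form (my sketch, using standard constants):

```python
# Checking the precipitable-water arithmetic: 6 cm of liquid water
# over 1 m^2 -> mass -> volume as vapor at STP, compared with the
# mass of the whole air column per m^2.

RHO_WATER = 1000.0      # kg/m^3
M_WATER = 0.018015      # kg/mol, molar mass of water
V_MOLAR_STP = 0.0224    # m^3/mol, ideal gas at STP
P0, G = 101325.0, 9.81  # surface pressure (Pa), gravity (m/s^2)

water_mass = 0.06 * 1.0 * RHO_WATER             # ~60 kg
vapor_volume = (water_mass / M_WATER) * V_MOLAR_STP  # ~75 m^3
air_column_mass = P0 / G                        # ~10,000 kg per m^2

print(round(water_mass), "kg")
print(round(vapor_volume), "m^3 of vapor at STP")
print(round(air_column_mass), "kg of atmosphere per m^2")
```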

on June 23, 2011 at 10:57 pm | KingOchaos

Thank you, yeah, pressure works for me. Although an increase of approximately 120% in water vapor compared to average still falls under the bill of “vastly more” to me 😉 but relative to the atmosphere I am aware it’s not a huge percentage… it is a trace gas.

on June 23, 2011 at 12:17 pm | Mait

Payne,

What is stopping you from focusing light down to an arbitrarily small object? The wavelength would be a limit, I would imagine, but that is not really an issue compared to a football. And what do light rays not being parallel have to do with it (all you’d need is a more complicated mirror if they are not)?

on June 23, 2011 at 12:51 pm | Neal J. King

Mait,

The issue is a bit complicated, but it goes kind of like this:

– The radiation from a thermal source is characterized by the “brightness”, which is the power intensity per unit steradian per unit area: In other words, the power emitted into a solid angle by an emitting area, divided by that solid angle and by that area. For an object emitting as a blackbody, the Planck and Stefan-Boltzmann formulas apply.

– If you look at an optical system (lenses and mirrors), light transmitted through this system cannot increase in brightness, but only decrease.

– This means that if you take a big lens and focus light from the sun down onto an object, the characteristics of the radiation can never be more “intense” than it was when it left the sun. For example, from the distance of the earth, even a big lens catches only a small solid angle of the radiation of the sun’s area: it scales as the area of the lens over the distance to the sun. When this captured light is focused on the football, the area of the football is much smaller, but the solid angle subtended by the light is much greater, because the football will be catching light from many angles. What ends up happening is that the lens doesn’t increase the temperature of the radiation (= temperature of the blackbody emitter), but does increase the solid angle that is exposed to it. As DeWitt stated, it becomes equivalent to covering over a larger solid angle with the image of the sun, like surrounding the football by chunks of sun: You can never heat the football higher than the temperature of the sun that way.

– You might feel dissatisfied, thinking, “What if I capture the entirety of the power of the sun, rather than just the part captured by a big lens?” Fair enough: Think up a rational geometrical structure that does this, and we can analyze it. But you should be aware that there is a whole history of attempts to break the second law of thermodynamics (which is what this is) by using geometrical optics. They’ve all failed eventually, although sometimes you have to take into account the finite size of “ideal point masses”.

– The only reference I have on this topic is Born & Wolf’s Principles of Optics, which discusses very briefly the “conservation of brightness” result. Unfortunately, I haven’t found anything else on the web so far. But I remember this topic came up in discussion in some optics class many years ago.

on June 23, 2011 at 3:22 pm | Mait

I can’t really describe the geometry of such an awesome mirror-thing, but I would imagine it is theoretically possible (I can’t at least think of any reason why it shouldn’t be). But I still can’t think of a way to avoid having an increase of power per unit area when you focus radiation emitted by a larger surface area onto a smaller one without losing energy (which shouldn’t be allowed, in my opinion).

I’d like a better explanation why this isn’t possible please. 🙂

on June 23, 2011 at 3:52 pm | mkelly

Try this one, Mait. The available energy, call it sunlight, at any point from sun to us is based on the r² law. So as you get farther away there is less available. The best you can do is what is 1 meter from the sun’s surface (1² is 1).

on June 25, 2011 at 12:12 am | Mait

Could you elaborate on that (I don’t quite understand how this is relevant to the topic, to be honest)? And why would it be better to be 1 meter from the sun instead of 42 cm? Are you saying there is more sunlight available 1 meter from the sun than 42 cm from the sun?

on June 23, 2011 at 4:33 pm | DeWitt Payne

Mait,

If you surrounded the sun with a spherical shell that was perfectly reflective, the temperature inside would go up until the shell material failed. But in the process the surface temperature of the sun would increase because it would no longer be losing energy to space. Focusing sunlight on an object other than the sun won’t do that.

Probably the reason why a detailed answer can’t be found is that most people are satisfied with the explanation that it would violate the Second Law because net energy would have to flow from cooler (6,000 K sun) to warmer (greater than 6,000 K football).

on June 24, 2011 at 10:45 am | Mait

Payne,

Are you sure you can view this problem in the context of the second law of thermodynamics? The sun is constantly “producing” energy – it’s not exactly an equilibrium state, I would think.

on June 24, 2011 at 4:34 pm | DeWitt Payne

Mait,

What matters is where you draw the lines to define your system. Instead of the sun, consider an incandescent light bulb. You can’t construct a system that doesn’t include the light bulb to create a temperature higher than the filament without postulating a substance with physically unrealistic properties. If you mount a light bulb in a highly reflective container, the temperature of the filament will go up, but now the system includes the light bulb. If you had a substance that was transparent in one direction and reflective in the other, you could concentrate the energy of the light bulb in a container made from that substance with the light bulb outside the container. You could do the same thing if you had a substance whose emissivity was different from its absorptivity at the same wavelength. But no such substance can exist. Depending on whether absorptivity was higher or lower than emissivity, such an object would spontaneously heat up or cool down below the ambient temperature. That amounts to a perpetual motion machine of the second kind (I think).

The same goes for the sun. Unless you reflect a substantial amount of energy back to the sun, thus increasing the surface temperature, you can’t get a temperature higher than the surface temperature absent the reflection.

on June 24, 2011 at 6:56 pm | DeWitt Payne

Mait,

All the above applies to thermal systems. One can easily achieve higher temperatures with non-thermal systems. For example, use photovoltaic cells to charge batteries and then use the electricity to power a plasma torch or laser to generate local temperatures much higher than 6,000 K. But entropy still increases for the system as a whole so the second law is not violated.

on June 24, 2011 at 10:28 pm | Mait

Payne,

What are you on about (magic materials and perpetual motion machines)? And why would you construct a system that doesn’t involve the lightbulb in the first place? What does this have to do with theoretical sun furnaces?

In addition – doesn’t your example of a reflective sphere around the sun show an example, where heat flows from a colder body to a warmer one? The sun heats up while the light reaches the reflector and reaches the sun again – so the origin of the radiation is a colder body than the destination.

on June 26, 2011 at 2:09 pm | DeWitt Payne

Mait,

Those examples are thought experiments. They’re very useful in evaluating whether something is theoretically possible or not because you can look at the limits if you had some sort of perfect device.

A perfect reflector has no temperature of its own. A perfect reflector is also a perfect insulator, so there is no flow from the reflector at all. All the energy comes from the source. But you can easily calculate the behavior of a constant energy source in a partially reflective and partially transparent shell as the reflectivity goes to 1 and the transmissivity goes to zero. The internal temperature increases without limit. There’s a post here on something very similar using an insulating sphere.

https://scienceofdoom.com/2010/07/26/do-trenberth-and-kiehl-understand-the-first-law-of-thermodynamics/

on June 26, 2011 at 3:58 pm | Mait

Payne, I’m not following you here, to be honest. I don’t think I’ve disputed the logic of the reflective sphere around the sun, but I don’t quite understand how this is connected to solar furnaces – if anything it shows that the theoretical limit of such devices is not the surface temperature of the sun.

on June 27, 2011 at 5:53 pm | Jorge

Mait,

Thanks for arousing my interest in solar furnaces. It certainly is hard to see that one can’t keep on using bigger and more lenses to collect more light/heat energy. However, after looking at several web discussions on the subject of max temperatures I have reached three conclusions.

Firstly, that in practice, temperatures above about 3500 ºC are not achieved. Secondly, it is hard to find a convincing explanation for those that don’t want to be convinced. Finally, the standard story line is always the one given by Neal J. King above.

I found a small pdf article that says what he said but includes pictures and equations. It may help you but, perhaps not!

http://onlinelibrary.wiley.com/doi/10.1002/0471791598.app1/pdf

on June 27, 2011 at 7:00 pm | Neal J. King

Jorge,

Thanks for providing the equations of interest, which demonstrate that the brightness is invariant through the optical system.

What it doesn’t show is why the brightness is the critical quantity of interest. I am still thinking to see if I can generate a clear explanation. My starting point:

– A patch on the Sun’s surface, dA1, emitting at temperature T1

– A gigantic lens, incredibly large in diameter

– A surface upon which the lens is focusing the image of the patch. Based on the relative distances of the patch dA1 and the surface, the image has area dA2 = dA1*(M^2), where M is the magnification of the image.

Now, half of the total emitted power of dA1 is captured by the lens, and focused down on area dA2. If M = sqrt(1/2), half the power emitted by dA1 is absorbed by area dA2 = dA1*(M^2) = dA1/2. If we assume that the relevant thickness of the patches is the same, then it seems that half the emitted power of dA1 is absorbed into a blob which has half the volume, so crudely speaking, you would guess that the temperature T2 would have to equal T1 to come to steady state. (You could easily argue that the thickness is not the same, so the argument is far from airtight.)

Now, if M is significantly smaller than sqrt(1/2), you could increase the ratio of absorbed power to area of emission (for dA2). So on the basis of raw power, it would seem as if you could get as high a temperature T2 as you want.

So it is not obvious why brightness is the relevant characteristic.

One could also argue that the quality of the radiation will not be the same: the blackbody radiation curve for T1 has a lower proportion of higher-frequency radiation than for higher temperatures. But that is still not a clear argument.

So I’m not really clear yet on WHY it’s true. The 2nd law of thermodynamics makes it pretty clear that it must be true; but the details of mechanism are vague. I think we’re still missing an angle somewhere. (This is quite common with 2nd-law situations, by the way: there is generally some innocuous-seeming assumption that turns out to be fatal.)

on June 27, 2011 at 11:46 pm | Neal J. King

OK, this is evidence that there is a proof, but I can’t get the paper for free:

“Nature 339, 198 – 200 (18 May 1989); doi:10.1038/339198a0

Concentration of sunlight to solar-surface levels using non-imaging optics

Philip Gleckman, Joseph O’Gallagher & Roland Winston

Department of Physics, University of Chicago, Chicago, Illinois 60637, USA

THE flux at the surface of the Sun, ~6.3 kW cm-2, falls off with the square of distance to a value of ~137 mW cm-2 above the Earth’s atmosphere, or typically 80–100 mW cm-2 at the ground. In principle, the second law of thermodynamics permits an optical device to concentrate the solar flux to obtain temperatures at the Earth’s surface not exceeding the Sun’s surface temperature. In practice, conventional means for flux concentration fall short of this maximum because imaging optical designs are inefficient at delivering maximum concentration. Non-imaging light-gathering devices can improve on focusing designs by a factor of four or more, and approach the thermodynamic limit. We have used a non-imaging design to concentrate terrestrial sunlight by a factor of 56,000, producing an irradiance that could exceed that of the solar surface. This opens up a variety of new applications for making use of solar energy.”

Some other references have mentioned Liouville’s theorem, applied to photons in phase space, as proving invariance of brightness. I still don’t see the clear-cut connection to temperature.
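As a sanity check on the flux figures quoted in that abstract, the inverse-square scaling can be verified directly (the solar radius and the astronomical unit are standard values I have supplied, not from the abstract):

```python
# Inverse-square check: solar surface flux scaled from the sun's
# radius out to 1 AU should land near the ~137 mW/cm^2 quoted above.

R_SUN = 6.957e8     # m, solar radius (standard value)
AU = 1.496e11       # m, astronomical unit (standard value)
FLUX_SURFACE = 6.3  # kW/cm^2 at the solar surface, from the abstract

flux_earth = FLUX_SURFACE * (R_SUN / AU) ** 2 * 1e6  # kW -> mW per cm^2
print(round(flux_earth), "mW/cm^2")  # close to the quoted ~137 mW/cm^2
```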

on June 28, 2011 at 6:22 am | Mait

I’m not quite sure I understand why radiance is the important factor in our current context instead of irradiance.

on June 29, 2011 at 7:34 pm | Mait

On a further note – I can see why you can’t increase the brightness of an image with an optical system – but that is mostly because of what an image is. In the example I provided, however, the image of the original is not preserved (it’s sort of folded up on itself), so I don’t see why the brightness argument would be valid here.

I’d most appreciate if someone pointed out where I’m going wrong here (in case I am). And a bit more constructive argument than “It would violate the second law of thermodynamics” would be appreciated – mostly because I don’t quite understand how this violates it to be honest.

on June 29, 2011 at 8:55 pm | Neal J. King

Mait,

Radiance is output of heat, irradiance is input of heat. The problem is if your total input from a source at temperature T is less than your total output: This can’t go on. But if your total radiating area is less than that of the source at T, your temperature must be higher than T, because you’re outputting more power per unit area.

Where one runs into 2nd-law problems: As stated before, ad infinitum and ad nauseam, the 2nd law does not prohibit heat transfer from hotter to colder, colder to hotter, or same to same temperatures. However, it does prohibit NET heat transfer from colder to hotter temperatures. If two embers in space were to exchange heat radiation back and forth, and the colder one became colder while the hotter one became hotter as a consequence of that exchange, there would be no violation of conservation of energy, but there would be a violation of the 2nd law, because the amount of entropy would be decreasing.

Various paradoxes have been dreamt up on this point: For example, one could imagine the interior of a perfectly reflecting ellipsoid with two hot embers placed at the two foci. Geometrical optics would lead you to believe that each ray of radiation from one focus would end up at the other, so there would be a perfectly symmetric exchange of energy, equal in both directions. However, what if one of the embers is 1/100-th the radius of the other, so that it has 1/10000-th the surface area? Since it must be radiating exactly as much power as it is absorbing, its radiance must be 10000 times its irradiance; so its temperature must be 10 times the other ember’s temperature. Right?

Of course, this can’t be right; but you have to deconstruct the many idealizations built into the problem to find the issue. I heard somewhere that Feynman gave a hand-waving resolution, having to do with the fact that the finite size of the embers meant that you could not pretend that each beam starts from exactly the focus of the ellipsoid, so it won’t go exactly to the other focus. In brief, each beam from the smaller ember will be absorbed in the larger, but NOT every beam in the larger will be absorbed in the smaller; so in that case it’s OK for the bigger ember to be giving off more total power, even while maintaining the same temperature as the smaller.

The way I heard the story, somebody went off to verify this resolution by doing extensive calculations, and eventually achieved the goal – but it was very ugly and very long.

Unfortunately, I haven’t been able to verify this story; and all of the searches I’ve tried using the terms (thermodynamic limit optics concentration ellipsoid etc.) lead to papers that cost $ to look at. I don’t have academic access to these papers, so I can’t do much until I get to a university. In the meantime, I just note that many of these papers were written in the 1970’s, so it must have been a topic of discussion at that time.
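The arithmetic in the ellipsoid paradox above (surface area down by a factor of 10,000, so temperature up by a factor of 10) follows directly from a Stefan-Boltzmann balance. A tiny sketch of the naive reasoning being deconstructed (my illustration):

```python
# The naive ellipsoid-paradox arithmetic: IF geometrical optics forced
# equal power exchange between the two embers, then
#   sigma * T2**4 * A2 = sigma * T1**4 * A1
# would give T2/T1 = (A1/A2)**0.25. This is the "factor of 10" the
# comment derives -- and then rejects via finite ember size.

def naive_paradox_temp_ratio(radius_ratio):
    """Temperature ratio implied by equal power exchange between embers."""
    area_ratio = radius_ratio ** 2
    return area_ratio ** 0.25

print(naive_paradox_temp_ratio(100))  # radius ratio 1/100 -> factor of 10
```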

on June 30, 2011 at 1:33 am | Neal J. King

Mait,

Here is a reference that is a little bit helpful:

http://www.av8n.com/physics/phase-space-thin-lens.htm

I can’t claim that it is authoritative, and it’s certainly not complete. I will quote a few remarks:

“As for focusing the light of the sun: Based on the discussion so far, you might think you could take all of the solar energy entering the lens and focus it on an arbitrarily small spot, thereby achieving an arbitrarily high temperature. Well, you can’t. Conservation of energy does not forbid it, but conservation of phase space (along with some other basic laws) does forbid it. Long before you achieved a spot that small, you would be violating the paraxial approximation. More to the point, if you do the full analysis, you would find that forming such a small spot would require impossibly large values of dX/dt (greater than c).

“Not coincidentally, such a tight focus would also violate the second law of thermodynamics. If you could focus the sun’s rays more tightly than permitted by Liouville’s theorem, it would be possible to create a focal spot hotter than the surface of the sun. This is perfectly consistent with the first law of thermodynamics (conservation of energy), but would immediately violate the second law of thermodynamics, since you could in principle run a heat engine using the focal spot as the “hot” side of the engine, and the surface of the sun as the “cold” (!) side, thereby producing work using only one heat bath (the sun) rather than the conventional two.

“To say the same thing in more detail: Let’s take a thermally non-conducting object and place it at the focal point of our system (point E in figure 1). A certain amount of energy falls on the image of the sun. The spot will heat up. Eventually it will become hot enough to glow. The temperature will stabilize at some temperature T such that the re-radiated power just matches the incident power. This temperature cannot be hotter than the surface of the sun; otherwise energy would be flowing from a (relatively!) cooler object to a hotter object, in violation of one of the corollaries of the second law of thermodynamics.”

on June 30, 2011 at 6:00 am | Mait: I think I understand the ember part – basically the problem is that it is not possible to use a mirror or some other sort of optical device to refocus all the energy that falls on it to any particular point – because the “light” comes at it from all sorts of different angles (you’d need a mirror that would reflect differently depending on the angle light hits it). I suppose lasers would be a bit of an exception in that context, but they have a bit of a different set of rules I would imagine.

on July 4, 2011 at 2:56 pm | Neal J. King: Well, I have been cogitating on the thermodynamic limit on the maximum temperature that can be produced by an optical system using the radiation source of a blackbody of temperature T, and I’ve come to a resolution. It’s not completely satisfactory, but I believe it hits the major points.

Let’s consider first a spherical iron ball maintained at absolute temperature T, by some heating mechanism. Now imagine that an inner sphere of this ball is separated from the rest, so you have a ball floating in the original ball. It’s easy to come to the conclusion that this inner ball (separated from the rest by a thin shell of space) will be at thermal equilibrium with the rest, because these two parts will be exchanging heat by blackbody radiation, and so both parts will have temperature T.

Likewise, imagine an iron cube maintained at temperature T, and again imagine a smaller cube separated out of its interior, with a small space between them. In the same way, the exchange of heat radiation will keep them at the same temperature T.

What are the characteristics of the thermal radiation?

Radiation emitted from a small patch on the inner sphere or inner cube: The total emitted power will be given by the Stefan-Boltzmann law, σT^4 * area, and it will be distributed at constant power per unit solid angle, because of Lambert’s cosine law, which applies to blackbody radiation.

Radiation absorbed by that same small patch: If you imagine yourself sitting on the little patch and looking out, every direction you look in will appear the same: this is also one of the characteristics of blackbody radiation, it’s perfectly homogeneous and isotropic.

At equilibrium, the total heat exchanged through emission from and absorption by the little patch is 0. So that means that when the observer on the patch looks in all directions outwards (a solid angle of 2π) and sees blackbody radiation at temperature T, the patch must also be absorbing the amount of heat power of

σT^4 * area.
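This balance can be checked numerically with a short sketch (an editorial illustration, not part of the original comment; the patch area and temperature are made-up example values):

```python
# Illustrative sketch: power emitted by a small blackbody patch,
# P = sigma * T^4 * area (Stefan-Boltzmann law).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def patch_power(T, area):
    """Blackbody power (W) emitted by a patch of `area` m^2 at temperature T (K)."""
    return SIGMA * T**4 * area

# A 1 cm^2 patch at the Sun's surface temperature (~5778 K) emits about 6.3 kW:
power = patch_power(5778.0, 1e-4)
```

At equilibrium the same patch must absorb exactly this much from the surrounding 2π solid angle of blackbody radiation.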

So now if we look at an optical system, we can put some limits on how it can work. Let’s consider a lens of diameter D, focusing heat radiation from an object at temperature T1 at distance d1 to the left, onto an image at distance d2 to the right. The first object has diameter a1, the second has diameter a2.

What is the magnification = a2/a1? Geometrical optics tells us it is M = d2/d1.

Point 1: What is the ratio of the intensities of radiation absorbed by the image object to that emitted by the object? Assuming every ray emitted towards the lens is absorbed by the image, it is I2/I1 = 1/M^2: If the image is bigger than the object, the intensity is less.

What is the solid angle subtended by the lens as seen by the source? If we restrict consideration to rays that are not far from the axis (paraxial rays), it should be

(D/d1)^2.

What is the solid angle subtended by the lens as seen by the image object? Under the same conditions, it should be (D/d2)^2.

Therefore, the ratio of the solid angles subtended by the lens, as seen by the image object and by the source, is (d1/d2)^2, which is = 1/M^2.
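The paraxial bookkeeping above can be sketched numerically (an editorial illustration; the lens diameter and object/image distances are made-up values):

```python
# Sketch: in the paraxial approximation, a lens of diameter D at distance d
# subtends a solid angle ~ (D/d)^2, so the image/source ratio is 1/M^2.
def solid_angle(D, d):
    """Approximate solid angle subtended by a lens of diameter D at distance d."""
    return (D / d)**2

D, d1, d2 = 0.1, 1.0, 3.0   # lens diameter, object distance, image distance (m)
M = d2 / d1                  # magnification
ratio = solid_angle(D, d2) / solid_angle(D, d1)
# the ratio equals 1/M^2, independent of the lens diameter D
```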

Point 2: Therefore, what we called the brightness = intensity/solid-angle , is the same for the object 1 and the image object 2.

Now I am going to argue that Point 2 is generally true, but Point 1 is not. How can I do that, having demonstrated Point 2 on the basis of Point 1? How can Point 2 be true if Point 1 is not?

The reason is that I didn’t intend to PROVE Point 2 on the basis of Point 1, but to ILLUSTRATE it. Point 2 is generally true, due to Liouville’s theorem as applied to light. So when Point 1 is true, it can be used to derive Point 2 as well; but Point 2 is still true when Point 1 is not.

Why is Point 1 not generally true? Because it omits the fact that optics takes place in 3-dimensional space, so there are optical rays that lie in planes that don’t include the optical axis: skew and sagittal rays. This means that not all the light emitted from object 1 obediently ends up as part of the image object 2; some of it gets lost.

Another factor that hurts Point 1: When using an optical system with lenses, we have to introduce interfaces with discontinuous changes of index of refraction: this is how lenses work. Whenever we do this, we unavoidably introduce reflections; for a narrow band of frequencies, you can cancel out the reflections by using a quarter-wavelength lens coating, but you cannot do this for the entire thermal radiation band. So there are going to be losses due to reflection.

So why am I going all out to vitiate Point 1? Because Point 1 suggests a paradox:

– If I can capture half of the radiation from object 1, which has area a1^2, and focus it onto image object 2, which has area a2^2, object 2 would have to emit power at the rate of (0.5 * σT^4 * a1^2) over the area (front and back) of 2*a2^2, and would therefore seem to require temperature = [(1/2σ)*(0.5 * σT^4 * a1^2/a2^2)]^(1/4) = [1/(4M^2)]^(1/4) * T .

– So if I choose M < ½, I could get an image-object temperature higher than T.

So my point is that, in actuality, even if you choose M < ½, even fully accurate geometrical optics will prevent you from attaining such a high ratio of intensities as to threaten the 2nd law of thermodynamics.
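The flawed Point-1 balance can be tabulated with a short sketch (an editorial illustration; it reproduces the paradoxical formula, not correct physics):

```python
# The (flawed) Point-1 energy balance: sigma*T2^4 * 2*a2^2 = 0.5*sigma*T^4*a1^2
# gives T2 = T * (1/(4*M^2))**0.25 with M = a2/a1; break-even is at M = 1/2.
def naive_image_temperature(T, M):
    """Image temperature predicted by the flawed Point-1 bookkeeping."""
    return T * (1.0 / (4.0 * M**2))**0.25

T_sun = 5778.0  # K, roughly the Sun's surface temperature
t_half = naive_image_temperature(T_sun, 0.5)    # exactly T: break-even
t_small = naive_image_temperature(T_sun, 0.25)  # would exceed T -- the "paradox"
```

As the surrounding argument explains, real optics (skew rays, reflection losses) prevents the intensity ratio that this naive balance assumes.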

Now let’s go back to Point 2, which I claim is valid on (unfortunately) more complex grounds: Liouville’s theorem.

– If I’m sitting on the patch of image-object 2 and looking towards the lens, I will see radiation of (at most) brightness equal to that of the emitted radiation. (“At most”, because of the reflective losses.)

– If I look away from the lens, I see no hot source of radiation.

– Therefore, I am not getting a full 2π solid angle’s worth of blackbody radiation, as I would have in the interior of the iron cube.

– Therefore, my image patch doesn’t need to radiate as much as did the equivalent patch in the iron cube.

– Therefore, the temperature of the image object will ALWAYS be < T.

You could try to counter this by saying, “What if I gather the light from the other side of the object 1 in a different optical system (maybe using optical fibers) and shine that onto the SAME SOLID ANGLE into the image object? Wouldn’t the intensities from the two different methods of light direction just add up, giving more intensity per solid angle than from blackbody alone?”

That would be a good trick. However, what that depends on is the light from the second pathway being able to be inserted into the same solid angle that is already fed by the lens, without interacting with the lens. Unfortunately, lenses work by interacting with light: You can’t pretend that the lens is invisible to the light sometimes and guides the light the rest of the time, for the same frequencies of light. And likewise for the fiber-optic system. But nice try.

If you’re frustrated with glass lenses, which have complicated ray-tracing geometry and nasty reflection formulas, then you might try to find a solution using mirrors. Probably the best approach is the one I mentioned before: The ellipsoid of revolution, with a big sphere at one focus and a small one at the other. Again, it would seem that:

– If both spheres are at the same temperature, the bigger must emit more power than the smaller; but

– The geometrical optics suggests that all the power emitted by one is absorbed by the other.

– So shouldn’t the smaller sphere be hotter?

The answer is No, for two reasons:

– If the radiation in the ellipsoid is blackbody, it will be homogeneous and isotropic, so someone sitting on either sphere will see it look just the same: The same intensity per solid-angle.

– Some of the radiation from the larger sphere will not hit the smaller sphere. Remember that the starting point of light rays from the larger sphere is NOT the focus, but slightly off of the focus. If the ray heads out in exactly the radial direction (from the bigger sphere), it will aim directly for the center of the small sphere; but if it heads out at any other angle, it will be aiming off-center of the small sphere, and sometimes it must miss. If you follow the ray through enough bounces, sometimes it will end up hitting the little sphere, but lots of times it will end up hitting the big sphere again – thus transferring no heat to the other.

So where are we?

– We have shown that attempts to generate a 2nd-law paradox using geometrical optics rely upon overly simple models.

– We have described the significance of Liouville’s theorem for the issue of radiation brightness. We didn’t attempt to prove it here, however: that requires the study of Hamiltonian dynamics and more.

– We have given an argument based on the invariance of brightness as to why the image-object temperature CANNOT exceed the source-object temperature.

P.S. Lasers are a different matter altogether: They are not produced by heat radiation, but by a population inversion that is equivalent to a “negative temperature”, which is actually “hotter” than infinite temperature! (As it turns out, hotness should really have been defined in terms of negative inverse temperature (β = -1/T), because small negative temperatures are hotter than large negative temperatures, which are hotter than large positive temperatures, which are hotter than small positive temperatures.)
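The β-ordering in that postscript can be made concrete with a toy sort (an editorial illustration with arbitrary values):

```python
# Ranking "hotness" by beta = -1/T: a larger beta means hotter. Small positive
# temperatures come out coldest, small negative temperatures hottest.
temps = [0.1, 1000.0, -1000.0, -0.1]  # arbitrary illustrative values (K)
coldest_to_hottest = sorted(temps, key=lambda T: -1.0 / T)
# -> [0.1, 1000.0, -1000.0, -0.1]
```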

on June 30, 2011 at 8:09 pm | DeWitt Payne: Neal J. King,

OT: John Denker, whose article on phase space and thin lenses you link to above, has some good stuff. In spite of Bryan, I think his take on thermodynamics makes a lot of sense. But don’t believe his article on lead acid battery chemistry. He completely missed the boat on potential, charge and ionic composition of the electrical double layer at the electrode surface. Bisulfate is not repelled from the positive electrode when the battery is being discharged.

on June 30, 2011 at 9:28 pm | Neal J. King: Mait & DeWitt,

I am still thinking about a really clear-cut explanation of why the obvious concepts for how to break the 2nd-law limit on optical concentration don’t work. The fact that no one seems to have recorded Feynman’s explanation suggests that it is somewhat difficult. So I don’t take Denker’s explanation for anything more than suggestive, anyway.

As far as what he says about lead-acid batteries: I flunked freshman chemistry outright, so I have few opinions about chemistry!

on July 1, 2011 at 12:49 am | Bryan: DeWitt Payne and Neal J. King

I think that what you are finding is that it sometimes feels attractive to reinvent the definitions of thermodynamics.

At times they feel awkward and so to simplify them does no great harm perhaps.

Climate Science is a case in point.

Since heat engines seem to have no relevance to the Earth’s climate, why not miss out all that tedious derivation of the Carnot cycle?

Which they obviously seem to do.

Thermal radiation seems like heat, so what’s the big fuss – why not call it heat?

They say:

“everybody can follow roughly what I mean so why get hung up on a few words!”

Well thermodynamics covers the full range of the physical sciences and engineering applications.

A consistent set of definitions is exactly the goal of physics.

It’s not easy, but the confusion caused by multiple vague, partially appropriate descriptions should be obvious.

DeWitt, who is a respected expert in chemistry, finds that Denker seems to have little understanding of some critical aspects.

“He completely missed the boat on potential, charge and ionic composition of the electrical double layer at the electrode surface. Bisulfate is not repelled from the positive electrode when the battery is being discharged.”

Here is a free textbook I have been reading recently.

The author takes great care with the introduction of the definitions of thermodynamics.

http://www2.chem.umd.edu/thermobook/thermobk-v2.pdf

I would be most interested in your comments about the book.

on July 10, 2011 at 12:24 am | gnomish: “Latent heat release immediately increases the parcel temperature.”

Since when does temperature change as a result of phase transition?

wuwt?

on July 10, 2011 at 3:16 am | scienceofdoom: gnomish:

With all other things being equal, if you add heat to a body it will increase in temperature. Otherwise you need a new first law of thermodynamics.

The heat is released from the process of phase transition.

Essentially with adiabatic expansion of moist air due to rising, the temperature reduces by less than with dry air.

on July 20, 2011 at 3:22 pm | Trond A: SoD

In a reply to mkelly you say:

“I have now seen many other people with similar mistaken claims.

You are claiming that pressure accounts for the earth’s surface temperature?

You can see an explanation of why that is wrong in Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion – under the heading “Introductory Ideas – Pumping up a Tyre“.

Increased pressure can cause a smaller volume or a higher temperature. Increasing pressure quickly does work on a system and can increase temperature in the short term.

The surface temperature of the earth with no sun and the exact same pressure would be close to absolute zero. Pressure does not cause temperature.”

As you refer to the Venus discussion, you don’t take Leonard Weinstein’s arguments with the large room into account. His main argument, as I understand it, is that the high temperature of Venus has a lot to do with the pressure. He starts with adding a gas with a certain temperature, 255K, and shows how the temperature will increase in lower parts as the room fills up:

” A dry adiabatic lapse rate forms as the gas is introduced due to the adiabatic compression of the gas at the lower level.”

And what causes this adiabatic lapse rate other than gravity? Take away the gravity and the gas will be equally distributed in the box, and the temperature, the measure of the average internal energy of the molecules that make up the gas, will still be 255K. But the gravitational field will redistribute the molecules so they will end up closer together, with a higher gas density at the bottom than at the top. Even if “each molecule” has the same temperature, the same kinetic energy, the same potential temperature (a bulk of them), the measured temperature will be quite different: higher at the bottom, and lower, or the start temperature, at the top. At the bottom there will be more energy per unit of space than at the top, which means a higher measured temperature. And this will not change within a dry adiabatic lapse rate. Gravity has not created energy, but it has redistributed the mass that carries the energy, with density changing according to altitude, and will hold it there unless it is warmed more.

Isn’t that the same situation with the earth? I don’t doubt that the atmosphere is warmed to a large extent due to radiation from the surface, that is where the IR-energy comes from, and the greenhouse gases are of course necessary to withhold that energy within the troposphere for some time before it is lost to space, the insulating radiative effect. But the adiabatic lapse rate, the atmospheric pressure gradient and the temperature gradient are all as good as linear in the troposphere. I guess it’s like that because the distribution of gas molecules, the density gradient, is the same. Well, within the range from the surface to TOA, where outgoing radiation equals incoming radiation, the average number for the surface temperature is 288K as 255K is the number for incoming energy equals outgoing. If all gradients are linear the energy trapped within this part of the atmosphere should equal a temperature averaged from these two numbers, which is 271.5K (-1.5C), and that would be the temperature of the surface without gravitation.(?)

on July 20, 2011 at 7:14 pm | Neal J. King: Trond A:

You wrote: “Even if ‘each molecule’ has the same temperature, the same kinetic energy, the same potential temperature (a bulk of them), the measured temperature will be quite different. Higher at the bottom, lower, or the start temperature at the top. At the bottom there will be more energy per space unit than at the top, which means higher temperature measured.”

No, this is not true: If each molecule has the same average kinetic energy (KE), and thus participates in the same temperature, the measured temperature at top/middle/bottom will be the same. It’s just that the gas at the top will be less dense and have lower pressure, whereas the gas at the bottom will be more dense and have higher pressure, according to the perfect gas law,

p = n * (kT)

In other words, for the same temperature T, the pressure will be proportional to the density.

Keep in mind that Temperature is NOT equivalent to energy density. Pressure, however, has the units of energy density; so higher pressure contributes one aspect of total energy density.
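Neal’s point can be illustrated with a short sketch (editorial; the sea-level number density is an assumed round value, not from the comment):

```python
# Sketch of the perfect gas law p = n * k * T: at uniform temperature T,
# pressure is proportional to number density n (molecules per m^3).
K_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure(n, T):
    """Pressure (Pa) of an ideal gas with number density n (m^-3) at T (K)."""
    return n * K_B * T

T = 288.0
p_bottom = pressure(2.55e25, T)   # ~ sea-level density -> roughly 101 kPa
p_top = pressure(1.275e25, T)     # half the density at the same T -> half the pressure
```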

on July 21, 2011 at 11:36 am | Trond A: Neal J. King, you say:

“If each molecule has the same average kinetic energy (KE), and thus participates in the same temperature, the measured temperature at top/middle/bottom will be the same. It’s just that the gas at the top will be less dense and have lower pressure, whereas the gas at the bottom will be more dense and have higher pressure, according to the perfect gas law..”

I will not argue against the perfect gas law, but isn’t the situation you describe different from what I try to describe? When you say that the temperature will be the same we can look at this equation which contains both pressure, volume and velocities of the molecules:

pV = 1/3 Nm

With different pressures you will get different results for top/middle/bottom for the same volume. The only way to explain this is that the average of the sum of molecule velocities within this volume, and the total KE’s of the molecules, are different because of different densities (number of molecules). If the temperature should be the same, the increasing number of molecules from top/middle/bottom must have different KE, which is not what you assume from the start. The temperature is also connected with KE, as the amount of kinetic energy transferred to the thermometer. Where the pressure is low the KE must be larger than where the pressure is high to transfer the same amount of energy. I guess.

But still, thanks for the reply. There might be flaws in my arguments, but I try to grasp this at a microscopic mechanical level.

on July 21, 2011 at 11:55 am | Neal J. King: – “pV = 1/3 Nm” : I can’t make out what you mean by “m” here, so I’m very unclear on the entire equation; and thus on the intent of your subsequent argument.

– “the average of the sum of molecule velocities within this volume” : This average is always zero, unless there is a mass motion (a wind), which is not what we are talking about.

– Your thoughts about kinetic energy and temperature are confused. It is a matter of fact, in statistical mechanics, that the average KE of a molecule in a gas of temperature T is:

avg(KE) = (3/2)*k*T,

where k = Boltzmann’s constant. This is entirely compatible with the perfect gas law, which you have agreed to accept.

– In a gas at uniform temperature T, the difference between pressure at top and bottom is entirely comprehensible in terms of the difference in the number density (n) of molecules. This is very elementary.
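As a numerical aside (editorial, not part of the original comment), the average molecular KE at a typical surface temperature:

```python
# Sketch: average translational KE per molecule, avg(KE) = (3/2)*k*T.
# It depends only on temperature, not on pressure or density.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_ke(T):
    """Average translational kinetic energy (J) per molecule at temperature T (K)."""
    return 1.5 * K_B * T

# At T = 288 K this is about 6.0e-21 J, top, middle, or bottom alike:
ke = mean_ke(288.0)
```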

on July 21, 2011 at 11:47 am | Trond A: Sorry, have to post this a second time as the formula automatically reduced itself with the posting.

Neal J. King, you say:

“If each molecule has the same average kinetic energy (KE), and thus participates in the same temperature, the measured temperature at top/middle/bottom will be the same. It’s just that the gas at the top will be less dense and have lower pressure, whereas the gas at the bottom will be more dense and have higher pressure, according to the perfect gas law..”

I will not argue against the perfect gas law, but isn’t the situation you describe different from what I try to describe? When you say that the temperature will be the same we can look at this equation which contains both pressure, volume and velocities of the molecules:

pV = (1/3) Nm * (average of the sum of molecule velocities)^2

With different pressures you will get different results for top/middle/bottom for the same volume. The only way to explain this is that the average of the sum of molecule velocities within this volume, and the total KE’s of the molecules, are different because of different densities (number of molecules). If the temperature should be the same, the increasing number of molecules from top/middle/bottom must have different KE, which is not what you assume from the start. The temperature is also connected with KE, as the amount of kinetic energy transferred to the thermometer. Where the pressure is low the KE must be larger than where the pressure is high to transfer the same amount of energy. I guess.

But still, thanks for the reply. There might be flaws in my arguments, but I try to grasp this at a microscopic mechanical level.

on July 21, 2011 at 11:57 am | Neal J. King: Already responded to, see above.

on July 21, 2011 at 1:53 pm | Trond A: To Neal J. King (Doesn’t seem to be possible to respond to the response a second time)

The N is the number of molecules and m is the mass.

And of course the average velocity is zero; what I meant, and should have written, is the average speed.

And if my thoughts are confused I will welcome you to enlighten me. But when you are saying: “- In a gas at uniform temperature T, the difference between pressure at top and bottom is entirely comprehensible in terms of the difference in the number density (n) of molecules. This is very elementary.” – I totally agree of course. That is among the things I intended to express in the first post.

And when you say: ” It is a matter of fact, in statistical mechanics, that the average KE of a molecule in a gas of temperature T is:

avg(KE) = (3/2)*k*T….” I guess that you don’t mean a molecule, but molecules. Here is where the square of the average of the speeds of molecules enters the situation in terms of KE.

But to cut it short: The temperature as measured with a thermometer is a measure of the transfer of kinetic energy from, in this case, a gas to the thermometer. Various amounts of kinetic energy transferred from time to time will result in different temperatures. The amount of kinetic energy transferred is a result of both the number and the force of each collision of the molecules with the thermometer. Is that correct? I guess fewer collisions will be the result with lower pressure due to lower density. Fewer molecules per unit of space and fewer collisions per unit of area. So if this should add up to the same temperature as energy transferred per unit of time, I guess each collision must be with a greater force. Correct or….?

So, how is the situation then for the molecules in this big sample with top/middle/bottom, as the temperature is the same and the pressure is different between these levels due to gravity/density? Do the molecules (an average molecule) have the same kinetic energy at top/middle/bottom?

A good answer to this will make my day whether I am right or wrong.

on July 21, 2011 at 4:02 pm | DeWitt Payne: Trond A,

At equilibrium, defined as the gas and the thermometer having the same temperature, the net transfer of energy between the gas and the thermometer is zero by definition. The collision rate, therefore, has no effect on the measured temperature. At low pressure, it will take longer to reach equilibrium than at higher pressure.

on July 22, 2011 at 2:01 am | Neal J. King: Trond A:

First: You are still drawing odd conclusions from your equation. For a volume V,

pV = N * k * T = N * (2/3)*avg(m*v^2/2) = (1/3) * (Nm) * avg(v^2). Therefore, the speed of interest is NOT the average speed, but the root-mean-squared (RMS) speed.

Second: The average, over time, for a single molecule will be the average for all the molecules of the same type, under the assumptions of statistical physics. Check the “ergodic” hypothesis.

Third: The temperature is NOT a measure of the amount of kinetic energy transferred, it is an indication of how much energy each degree of freedom has. The average KE of a molecule is (3/2)kT for a molecule embedded in a substance of temperature T. If one substance has higher T than the other, its molecules will on average give more energy to the other’s molecules. But if you take molecules in a gas at temperature T, the average speed of each molecule will be the same independent of pressure: for a lower-pressure region, the density will be lower, so the time between collisions will be longer. But the impact/energy/bump of each collision will be the same.

So with a gas at uniform temperature, the gas molecules will all have the same average KE, top/middle/bottom of the barrel.
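Neal’s derivation can be checked with a short sketch (an editorial illustration using the molecular mass quoted later in the thread):

```python
# Sketch of pV = N*k*T = (1/3)*N*m*avg(v^2): the RMS speed sqrt(3*k*T/m)
# depends only on temperature and molecular mass, not on pressure or density.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_AIR = 29 * 1.66e-27   # kg, mean mass of an "air molecule" as used in the thread

def v_rms(T):
    """Root-mean-squared molecular speed (m/s) at temperature T (K)."""
    return math.sqrt(3.0 * K_B * T / M_AIR)

# About 498 m/s at 288 K -- the same at the top, middle, or bottom of the barrel:
v = v_rms(288.0)
```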

on July 22, 2011 at 11:46 am | Trond A: Neal J. King

Thanks for the response. Actually I know it is RMS; my level of accuracy is not good enough. Your third point was very interesting, I’ll take a closer look at it. And again, thank you for taking your time!

on August 5, 2011 at 12:01 am | willb: SoD

Is it too late to post a question to this thread? I’m a newcomer here and I find your site very interesting and informative. I was especially intrigued by the various discussions and differing opinions on the adiabatic lapse rate. Here’s my question (with some extensive preamble):

I have seen derivations of the adiabatic lapse rate similar to yours (using “parcels” of air, volume, pressure and the ideal gas laws). However, it seems to me that a derivation should be possible starting with the kinetic theory of gases, so I gave it a go. To keep things as simple as possible, I am just considering a single air molecule.

From the kinetic theory of gases, for one air molecule at temperature T:

heat energy = kinetic energy

= (3/2)kBT

potential energy due to gravity = mgz

For an adiabatic process, the loss in potential energy due to an altitude change must equal the gain in heat energy from a temperature change:

(3/2)kB dT + mg dz = 0

dT/dz = -2mg/(3kB)

dz = change in altitude (meters)

dT = change in temperature (Kelvin)

m = mass of air molecule = 29 u

u = 1.66 x 10^-27 kg

g = accel. due to gravity = 9.81 m/s^2

kB = 1.38 x 10^-23 J/K = Boltzmann constant

When I multiply that out, I get a lapse rate of 22.8 K/km. As you can see, this is over twice your calculated value of 9.8 K/km.

So my question is, where did I go wrong? Have I made a bad assumption somewhere?
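The multiplication itself checks out; a short sketch using the constants quoted in the comment (editorial illustration — the replies below address the physics, not the arithmetic):

```python
# Reproducing willb's 3-degrees-of-freedom estimate |dT/dz| = (2/3)*m*g/kB,
# with the constants quoted in the comment above.
m = 29 * 1.66e-27   # kg, mass of an "air molecule"
g = 9.81            # m/s^2, acceleration due to gravity
kB = 1.38e-23       # J/K, Boltzmann constant

lapse_K_per_km = (2.0 / 3.0) * m * g / kB * 1000.0
# roughly 22.8 K/km, over twice the dry adiabatic 9.8 K/km
```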

on August 5, 2011 at 1:14 am | williamcg: Willb,

Your calculation assumes a monatomic molecule where all of the degrees of freedom are translational. But air is primarily made of diatomic molecules (N2 and O2) which also have rotational and vibrational degrees of freedom. That is why the specific heat capacities of monatomic and diatomic gases are so different. Not all of the thermal energy is realized as translational (kinetic) energy. To account for this use (7/2)kB instead of (3/2)kB in your formula. This will give you a lapse rate of 9.77 K/km which is much closer to the actual value. But the kinetic theory of gases does not do a good job of accounting for heat capacity variations in diatomic gases.

Bill Gilbert

on August 5, 2011 at 4:35 am | willb: Bill Gilbert,

Thanks for clearing that up for me. Although it may not be quite as accurate, I find that the kinetic theory is a much more intuitive way to think about the adiabatic lapse rate. I also find it interesting that, from the kinetic theory perspective, the lapse rate does not seem to depend at all on convection.

on August 5, 2011 at 4:50 am | williamcg: Willb,

I agree with both of your statements. The molecular approach is often very useful in conceptualizing the underlying physics. And, yes, the adiabatic lapse rate exists independently of convection.

Bill Gilbert

on August 5, 2011 at 12:34 pm | Neal J. King: Willb & Bill Gilbert,

I’m afraid I must disagree with just about every substantial point you’ve claimed.

OK, I’ll give Bill one point: The appropriate formula IS:

dT/dz = -(mg/kB)(2/7)

not

dT/dz = -(mg/kB)(2/3)

But the reason is NOT due to replacement of energy (3/2)(kB)T by (7/2)(kB)T : that would imply, according to your reasoning, that there would be 7 degrees of freedom in the dynamics of the diatomic molecules, but there are not: Instead, at temperatures of interest, there are 5: 3 translational (x, y, and z) and 2 rotational (both perpendicular to the axis joining the atoms). At higher temperatures, vibrational degrees of freedom show up; see the discussion on the heat capacity of N2 at http://en.wikipedia.org/wiki/Heat_capacity :

“To illustrate the role of various degrees of freedom in storing heat, we may consider nitrogen, a diatomic molecule that has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Although the constant-volume molar heat capacity of nitrogen at this temperature is five-thirds that of monatomic gases, on a per-mole of atoms basis, it is five-sixths that of a monatomic gas. The reason for this is the loss of a degree of freedom due to the bond when it does not allow storage of thermal energy. Two separate nitrogen atoms would have a total of six degrees of freedom—the three translational degrees of freedom of each atom. When the atoms are bonded the molecule will still only have three translational degrees of freedom, as the two atoms in the molecule move as one. However, the molecule cannot be treated as a point object, and the moment of inertia has increased sufficiently about two axes to allow two rotational degrees of freedom to be active at room temperature to give five degrees of freedom. The moment of inertia about the third axis remains small, as this is the axis passing through the centres of the two atoms, and so is similar to the small moment of inertia for atoms of a monatomic gas. Thus, this degree of freedom does not act to store heat, and does not contribute to the heat capacity of nitrogen. The heat capacity per atom for nitrogen is therefore less than for a monatomic gas, so long as the temperature remains low enough that no vibrational degrees of freedom are activated.

At higher temperatures, however, nitrogen gas gains two more degrees of internal freedom, as the molecule is excited into higher vibrational modes which store thermal energy. Now the bond is contributing heat capacity, and is contributing more than if the atoms were not bonded…”

So, as stated therein, the number of degrees of dynamical freedom that contribute to the heat capacity (in other words, that are not “frozen out” by the Planck function) for room-temperature diatomic molecules is 5. How does that give rise to the factor (2/7)? Well, the CORRECT value of the adiabatic lapse rate is:

(mg/kB)(γ – 1)/γ

where γ is the adiabatic gas exponent = (2 + f)/f ; if you put these equations together, you get

(mg/kB)(2/(2 + f))

So when f = 5, you get the desired factor of (2/7).

[Reference also: http://en.wikipedia.org/wiki/Heat_capacity_ratio#Relation_with_degrees_of_freedom ]

So the correct theory of the adiabatic lapse rate leads to the accepted value, with only 5 degrees of freedom, not 7.

The problem, Bill, is NOT that the molecular theory of gases doesn’t account for heat capacity variations: It does a fine job with that. The problem is that you’re trying to employ heat capacity in a simple-minded “energy = constant” argument that is just wrong. When the molecular explanation for heat capacity is plugged into its rightful place, in the standard explanation for the adiabatic lapse rate, it works just fine.

So how did Willb get to within a factor of (3/7) of the correct answer with an incorrect theory? Simple: It’s a matter of dimensional analysis: Given the constants g, m, kB, there are only a few ways of concocting a value with the dimensions of (temperature)/(distance). So any theory that is dimensionally correct will give the same answer – to within a purely numerical factor. It doesn’t mean the theory has even a grain of truth, unfortunately.

How does your theory work? Your original theory amounts to the claim that the total energy of a molecule remains constant:

constant = energy = kinetic energy + potential energy = mv^2/2 + mgz

and mv^2/2 = (3/2)(kB)T , so

constant = (3/2)(kB)T + mgz

Nice try, but the fact that this does NOT lead to the correct coefficient implies that the total energy of the molecule is NOT constant.
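To see how far off it is, a quick sketch (assuming N2 values) comparing the constant-energy prediction with the accepted rate:

```python
kB = 1.380649e-23          # Boltzmann constant, J/K
g = 9.8                    # m/s^2
m = 28.0 * 1.66054e-27     # N2 molecular mass, kg

const_energy_rate = (2.0 / 3.0) * m * g / kB   # from (3/2)*kB*T + m*g*z = constant
accepted_rate = (2.0 / 7.0) * m * g / kB       # the correct coefficient, f = 5

print(const_energy_rate * 1000)   # about 22 K/km -- far steeper than observed
print(accepted_rate * 1000)       # about 9.4 K/km
```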

Bill’s theory attempts to modify it by replacing:

kinetic energy => 3 degrees of translational KE + 4 degrees of rotations and vibrations; but to get this many degrees of freedom he has to go against the known values of the adiabatic gas exponents.

If you further modify it by using the accepted number of degrees of freedom, you would get the factor (2/5) instead of the factor (2/7) from the correct theory.

The problem is that your theory is just wrong: The adiabatic lapse rate does not come from a simple conservation-of-energy argument, neither at the level of gas dynamics, nor at the molecular level. It is simply NOT THE CASE that the energy of a molecule remains constant during a process in which the gas of which it is a part is undergoing adiabatic expansion/compression due to gas flow. This is not the right way to understand adiabatic gas flow at a molecular level, because in gas motions, net energy can be transferred from one set of molecules to another.

The reason you are getting the wrong answer is that you are asking the wrong question. The question is NOT, “What is the rate at which temperature falls with height?” because there is no single answer to that: It depends on a lot of factors, many changing with time. In fact, there can be temperature inversions, in which the temperature rises with height (or at least is roughly constant). The right question, to which the adiabatic lapse rate is the answer, is “What is the maximum rate at which the temperature can fall with height in a stable atmospheric profile?” The answer to this question is the ALR, because if the temperature fell any faster, the atmosphere would be unstable to boiling motions as the warmer air near the ground would rise and rise and rise and rise, bubbling up the atmosphere and mixing it all in – until the rate was decreased to the ALR. That is the physical argument behind the ALR, not a constant-energy/molecule picture.

That being the case, I don’t think there is a simple molecular-based argument that gives the rate, because the ideas motivating the discussion are:

– adiabatic expansion/compression: work done

– buoyancy

– instability of the temperature profile

All of these are ideas that have to do with the molecules acting as a thermodynamic fluid: That is the level at which these arguments operate. What the molecular dynamics can contribute is the value of γ = (2 + f)/f , where f = 5. That’s all. After that, the “baton is passed” to the next level of physical analysis. You don’t try to plan a Moon-shot by solving Schrödinger’s equation.

on August 5, 2011 at 9:15 pm | willb

Neal J. King,

That’s an interesting treatise you’ve written and I’m still trying to digest it all. Somewhere in the middle of it you say:

And you’ve also expressed a similar sentiment about my theories elsewhere in your comment. However, I’m sure you’re aware that the only physics I’ve invoked is Newtonian mechanics and the kinetic theory of gases. So, has one of these theories been debunked recently?

on August 5, 2011 at 10:12 pm | Neal J. King

Willb:

– The point is not that your starting point is wrong, but that you’re not going after the right question. Just using newtonian physics and the kinetic theory of gases is not enough to specify a temperature profile, because lots of temperature profiles are consistent with that.

– The ALR is defined as the temperature profile which has the steepest fall-off that is still mechanically stable (i.e., doesn’t give rise to unstable upwelling of the gas). My point is that if you use ONLY kinetic theory, you can’t formulate that condition: you need to be able to talk about the air as a thermodynamic fluid, in addition to its molecular constitution. You need to be able to discuss its adiabatic compression exponent: not just the number, but WHY adiabatic expansion plays a role in determining the question of mechanical instability. That question cannot be formulated by thinking about a “1-molecule” gas: How do you distinguish between adiabatic expansion and isothermal expansion with a gas consisting of 1 molecule? The essential definition of adiabatic expansion is that the internal energy reduction is equal to the work done against the rest of the gas: these are fluid/thermodynamic concepts, not molecular/kinetic concepts.

– The final example: Your original calculation assumed that:

constant = (3/2)(kB)T + mgz => dT/dz = – (2/3)(mg/(kB))

This rate is wrong (it’s not the ALR), so the assumption is wrong.

Bill’s modification of your theory:

constant = (7/2)(kB)T + mgz => dT/dz = – (2/7)(mg/(kB))

This rate is right (it is the value of the ALR), but the reasoning is wrong: He’s getting the (7/2) from assuming 7 degrees of dynamical freedom. But if there were 7 degrees of dynamical freedom, statistical mechanics tells us that the adiabatic exponent would be (2 + 7)/7 = 9/7. You can check on google for the exponent for diatomic molecules: It’s not 1.286. It’s just not.
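The contradiction can be made explicit in two lines, assuming only the relation γ = (2 + f)/f:

```python
def gamma(f):
    """Adiabatic exponent implied by f active degrees of freedom."""
    return (2 + f) / f

print(gamma(7))   # 9/7 = 1.2857..., which diatomic gases do not show
print(gamma(5))   # 7/5 = 1.4, the measured room-temperature value
```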

– If you want to construct a kinetic-level explanation for the ALR, try this:

a) assume a monatomic gas, so f = 3 (no rotations or vibrations possible);

b) the correct ALR in this case is: dT/dz = -(mg/(kB))(2/5)

Can you construct a coherent derivation for that, using only kinetic theory?

on August 5, 2011 at 11:49 pm | Neal J. King

willb,

I can give you a head start: The general ALR result for f degrees of molecular dynamical freedom is:

dT/dz = – (mg/kB)(2/(2 + f))

(2 + f)*(kB/2)*dT/dz = -mg

(2 + f)*(kB/2)T = – mgz + constant

f*(kB/2)T + mgz = – (kB)T + constant

(Average molecular energy) + mgz = – (kB)T + constant

So we can see, working backwards from the ALR, that the total molecular energy + potential energy is NOT constant (because T is not constant).

So can you create a background story for this?
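The same conclusion can be checked numerically: along the profile dT/dz = -(mg/kB)·2/(2 + f), the quantity (f/2)·kB·T + mgz grows with height instead of staying constant. A sketch with assumed values (N2, surface temperature 288 K):

```python
# Check that (f/2)*kB*T + m*g*z is NOT constant along the lapse-rate profile.
kB = 1.380649e-23         # Boltzmann constant, J/K
g = 9.8                   # m/s^2
m = 28.0 * 1.66054e-27    # N2 molecular mass, kg
f = 5                     # active degrees of freedom
T0 = 288.0                # assumed surface temperature, K

def T(z):
    """Temperature along the adiabatic-lapse-rate profile."""
    return T0 - (m * g / kB) * (2.0 / (2 + f)) * z

def E(z):
    """Average molecular (kinetic) energy plus gravitational PE at height z."""
    return (f / 2.0) * kB * T(z) + m * g * z

dE = E(1000.0) - E(0.0)   # change over 1 km of altitude, J per molecule
print(dE > 0)             # True: energy per molecule grows with height
# consistent with f*(kB/2)*T + m*g*z = -kB*T + constant:
print(abs(dE - kB * (T(0.0) - T(1000.0))) < 1e-30)
```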

on August 6, 2011 at 9:28 pm | willb

Neal J. King,

If you think that your point can be made by working backwards from the adiabatic lapse rate, then I’m game. Although I suppose we will need to be careful not to fall into the logical fallacy trap of begging the question.

To start I think we need to agree on a definition for adiabatic lapse rate. I would like to consider only a dry adiabatic lapse rate and use the following definition from Wikipedia:

Is this agreeable to you? If not, why not and do you have another definition in mind?

on August 6, 2011 at 10:06 pm | Neal J. King

willb,

– We can try it. The DALR for the monatomic case is:

dT/dz = -(mg/(kB))(2/5)

As a strategy to explain this, I don’t particularly recommend starting from:

3*(kB/2)T + mgz = – (kB)T + constant

although it is true. It’s generally easier to work forward from a clear concept than to work backwards from an expected result. The reason I stated this equation was just to point out that it makes explicit the fact that the DALR is NOT compatible with a value of molecular energy that is constant throughout the gas.

– With regards to defining the DALR: What is missing from that quote is the fact that the temperature/density/pressure profile that is characterized by the DALR matches the changes induced in the parcel by the motion of the gas. In other words, as you move the parcel upwards or downwards, at every point the pressure and the temperature of the parcel are the same as those of the surrounding gas: Even as you move a parcel of gas upwards or downwards, and it changes P and T, it remains in thermal equilibrium with its surroundings. (And it goes without saying that the profile must be physically possible: Pressure cannot increase with height, etc.)

on August 7, 2011 at 2:26 am | willb

Neal J. King,

I’m ok with your additions to the definition of the dalr. At this point I am inclined to proceed working directly from the definition, which says that

If you don’t think working directly from the definition is a good idea, please explain why. If it is ok with you, then as a first step, I would like to determine the potential energy of the molecules. I say it is -mgz. Do you agree or disagree?

on August 7, 2011 at 9:16 am | Neal J. King

willb:

– OK, start from the definition in terms of “adjusting balance between PE and KE of the gas molecules”, where the pressure and temperature of the moving air mass matches that of the surrounding gas.

– For simplicity, assume a gas of identical monatomic molecules.

– But the gravitational PE of a molecule is:

U = + mgz

where:

m = mass of molecule

g = 9.8 meters/sec^2

z = height above ground-level

on August 8, 2011 at 1:06 am | willb

Neal J. King,

Ok, so far so good. Thanks for defining the terms and for correcting my error in sign. We therefore agree that the potential energy of the molecule is:

U = + mgz

If the gas molecule rises up in altitude by 1 meter from an altitude of z0 to an altitude of z0+1, then it gains potential energy of ‘mg’. From the definition, to remain adiabatic it must lose ‘mg’ of kinetic energy. From the kinetic theory of gases, kinetic energy is heat energy. So, to remain an adiabatic process, the gas molecule must lose a quantity ‘mg’ of heat energy when it rises 1 meter in altitude.

Is this logic ok with you or not?

on August 8, 2011 at 10:33 am | Neal J. King

The term “adjustment” may need some looking into: Keep in mind that we are not looking at just one parcel of gas, but at the temperature/pressure/density profile of the atmosphere.

So when one parcel of gas (Parcel A) goes up, it does not leave a hole in the atmosphere: some other air from above has to come down and fill in. This has to be taken into account, or we’re not getting the full story.

If we make the assumption that the volume of air (Parcel B) that was displaced by the upward rise of Parcel A goes back to fill the “hole” that would have otherwise been left by Parcel A, then that loss of potential energy can pay for the gain in potential energy by Parcel A.

Indeed, if the two parcels just exchange both altitude and thermodynamic values (T, P, n), there is no net change in the gravitational PE. If however Parcel A arrives at its new height at a higher temperature than its new surroundings, its density must be less (because the number density n = P/((kB)T), and the pressure must be the same); equally, Parcel B will be cooler than its new surroundings and have higher density. So if there is a temperature mismatch between the parcels and their new surroundings, there will be a net LOSS of PE to pay; otherwise not.

Whereas, if the new temperature of Parcel A is cooler than surroundings, that means there will be a net GAIN of PE.

To continue with this point, we could either think about small “differential” increases in altitude, dz; or we could consider the situation after Parcel A has gained substantially in altitude, an increase of L. Which approach do you prefer?

on August 8, 2011 at 9:44 pm | willb

Neal J. King,

Could I get you to clarify one of your points? You say:

Can you explain in more detail what is happening and why? In your reply, please take into account that I’m looking at this from the kinetic theory point of view, not only for the equations but also for visualizing what’s happening.

This is my current view:

The idea of a parcel of gas is an abstract concept that we use for visualization. In reality, there is no physically identifiable object here. There are no inter-molecular bonds in this scenario holding the parcel together as some kind of amorphous, semi-cohesive blob. There is no physical membrane surrounding the parcel. There is no physical boundary separating the parcel of gas from the rest of the atmosphere. It is not possible to push or pull the parcel in any way. Viewed from the kinetic theory, our parcel of gas is simply an arbitrarily defined collection of individual molecules with their individual kinetic energies. So when one molecule from Parcel A rises, another molecule from Parcel B may or may not descend, but that is a completely random, separate and independent process.

on August 9, 2011 at 12:09 am | Neal J. King

willb:

– I would say that the concept of a parcel of gas is not an “abstract” notion, but rather a useful idealization. The mean free path of a molecule at one atmosphere is about 7e-8 (m), so if you think about a parcel of gas as having the dimensions of a few meters, the vast majority of the individual molecules that are defined as “part of the parcel” at one moment will still be part of the parcel a minute or two later; so membership in the parcel is reasonably well-defined. If you are going to think about bulk motion of a gas, you can only take into account fluid aspects if you have an averaged-over model.

– If instead you consider that the gas is only individual molecules, you can’t conceptualize such concepts as convection at all. Indeed, in situations of very low density, when the mean free path of molecules becomes comparable to the dimensions of measurement, the conventional equations for convection fail: You have to use a ballistic model for the gas, and the behavior is different. But this is not applicable to typical atmospheric conditions.

– Another way to consider the issue of exchanged gravitational PE: If you think about a parcel of air moving upward, the pressure at the bottom of the packet is greater than the pressure at the top. So in the course of moving upwards, the “floor” of the parcel has work done on it, and the “ceiling” of the parcel does work on the rest of the atmosphere; since the floor pressure is greater than the ceiling pressure, there is net work done ON the parcel of gas. This work exactly provides the increase in gravitational PE, as can be shown from the equation of hydrostatic equilibrium: the difference between the pressure at the bottom and at the top is exactly due to the weight of the air between top and bottom. So by just considering the energy exchange between Parcel A and the atmosphere, and between Parcel B and the atmosphere, I don’t have to worry about the two parcels “negotiating” an exchange: the atmosphere converts PE into work and vice versa.

– Plus, you can visualize the way the pressure works: The external pressure is just the ongoing impact of molecules from outside the parcel onto the molecules on the inside. The fact that the pressure is greater at the bottom than at the top just means they’re hitting more often and harder at the bottom. So the “inside parcel” molecules at the bottom are picking up KE from the “outside parcel” molecules and converting it into gravitational PE as the boundaries of the parcel move up.
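That bookkeeping can be verified with a toy parcel, assuming hydrostatic equilibrium and a roughly constant density over the parcel’s height (all numbers assumed for illustration):

```python
g = 9.8        # m/s^2
rho = 1.2      # assumed near-surface air density, kg/m^3
A = 1.0        # parcel cross-sectional area, m^2
h = 10.0       # parcel thickness, m
dz = 1.0       # small upward displacement, m

M = rho * A * h                 # parcel mass, kg
dP = rho * g * h                # hydrostatic pressure drop from floor to ceiling, Pa
work_on_parcel = A * dP * dz    # net work done ON the parcel by the pressure difference
dPE = M * g * dz                # gain in gravitational potential energy

print(abs(work_on_parcel - dPE) < 1e-9)   # True: the pressure difference exactly pays for the PE
```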

on August 9, 2011 at 4:18 am | willb

Neal J. King,

Thanks for the more detailed explanation of the parcel of gas concept. I’d like to make just a couple of comments before moving on:

1. In the scenario we are discussing, I would still tend to call the parcel a useful abstraction rather than a useful idealization. The problem with calling it an idealization is that you now have a tendency to believe it has physical boundaries. But there are no physical boundaries. You can’t push on the parcel from below and the parcel can’t push on the air above it, except as an abstract concept. But I’ll grant that it’s still useful to analyze the parcel as if it did have physical boundaries.

2. You say:

With this statement, you are more or less saying that it’s not possible to conceptualize convection via the kinetic theory of gases. I don’t accept this. In fact, I’m currently inclined to think it makes a whole lot of sense to use the kinetic theory for conceptualizing the adiabatic lapse rate.

on August 9, 2011 at 11:43 am | Neal J. King

willb:

– What defines a parcel is not so much boundaries but membership. Since the mean free path of a molecule is about 7e-8 meters, that allows boundaries to be deduced.

– If you don’t talk about parcels, how do you even define such a thing as the volume of a portion of gas? How do you conceptualize convection cells? What does “adiabatic” mean?

But go ahead, see how far you can get. I’m not against trying to understand everything at the lowest level possible.

on August 10, 2011 at 12:42 am | willb

Neal J. King,

Ok, thanks for humoring me. So in my comment on August 8, 2011 at 1:06am I said if the gas molecule rises up in altitude by 1 meter, then it gains potential energy of ‘mg’. You said we had to keep in mind the temperature/pressure/density profile of the atmosphere, so to that I’m going to say let’s assume the atmosphere is in hydrostatic equilibrium. I take this to mean there is zero net force everywhere and no parcels of gas are doing work, and specifically the parcel of gas to which our MUT (Molecule Under Test) belongs.

When the gas molecule rises and gains potential energy ‘mg’, it has to lose an equivalent amount of kinetic energy (= heat energy) because: no work is being done on the molecule, the molecule is not doing work on anything, and energy must be conserved. Besides, we know it’s going to lose kinetic energy because it’s decelerating under the force of gravity.

Ok, I’m going to pause here because I think I’ve more or less laid out my basic scenario and I would like to get your feedback at this point.

on August 10, 2011 at 9:58 am | PaulM

willb, one small point for you. You say that the lapse rate seems to exist independently of convection. But your argument involves a gas particle rising, which is what happens in convection. Any lapse rate argument requires exchange of particles. This exchange could be due to convection, or to another mechanism.

on August 10, 2011 at 11:28 am | Neal J. King

willb:

– It’s not a matter of “humoring” you: I’m quite happy to help you see how far down you can chase understanding of this phenomenon. I have my own guess as to how far it will go; but that’s a “side bet”.

– Hydrostatic equilibrium means that there is no push on any portion of the gas, considered as a fluid, to move. But that also means that any external force (like gravity) must be countered by a pressure gradient: otherwise the gas would be dropping to the ground like Newton’s apple. Since pressure is force/area, the total force acting on a parcel of gas, of vertical thickness dz and horizontal area A, whose bottom is at altitude z, is:

net force = A*(P(z ) – P(z + dz)) – Mg

= A*(P(z) – P(z + dz)) – g*m*(dz*A)*n

= A*(-dP – mgn*dz)

= A*dz*(-dP/dz – mgn)

So in hydrostatic equilibrium:

0 = -dP/dz – mgn ; so the traditional statement is:

dP/dz = – mgn

However, looking upon it as a net-force equation has some value:

net-force/area = dz * (-dP/dz – mgn)

That says that the parcel of gas has TWO forces acting on it:

– the gravitational; plus

– the pressure difference between ceiling and floor, due to the pressure gradient

This applies perfectly to a parcel of gas, considered as a macroscopic object. Now if you make the parcel smaller and smaller, it is still true: just as reducing the mass by reducing the scale doesn’t make gravity go away, reducing the pressure difference by reducing the scale doesn’t make the effect of the pressure difference go away: pressure is still a real phenomenon. But if you want to look at it from a molecular level, you have to understand pressure as the average result of bombardment by an assembly of molecules, each having mass and momentum, distributed with some average number density.

So, if you consider a situation in which the typical molecule is not drifting downward but is roughly stable in altitude (and this has to be true for a non-moving atmosphere), you need to consider not just the -mg force acting on it, but also the “molecularized” pressure gradient that is also acting on it. Because the net force is 0, the net change in KE due to that net force also has to be zero.

What this boils down to: If you want to add an increase of energy/molecule for additional altitude h equal to dE_g = mg*h, then you also need to add an additional term for the reduction of pressure/molecule. Otherwise, you come to the conclusion that your individual molecules would all respond to the pull of gravity by falling and crashing onto the floor.

How can we do that? Consider that the number density n = 1/V, where V is the amount of spatial volume that we can allocate to a molecule. (In a box of gas of N molecules, V = (Volume of box)/N .)

Then since:

net force = A*dz*(-dP/dz – mgn)

= A*dz*(-dP/dz – mg/V)

= (A/V)*dz*(-V*dP/dz – mg)

If we assume a “parcel” = cube of volume = V = a^3 , and consider that

A = a^2

net force = (dz/a) * (-V*dP/dz – mg)

This is getting a little weird, because this actually isn’t a very natural direction of argument, so there are a lot of directions one can take and get lost in.

So I let you decide what direction you want to take. But two warnings:

– Remember that V = 1/n, and n varies with height: n = P/(kT) .

So a = V^(1/3) also varies with height.

– Don’t think about an energy change due to “one meter”, it will screw up the calculation. Think about a definite change in height of h or dz, and leave the quantity in the calculation explicitly: it will keep the units straight, and avoid confusing the issue.
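The warning that n = P/(kT) varies with height can be made concrete by integrating dP/dz = -mgn for an assumed isothermal column and comparing with the analytic exponential (a sketch with assumed surface values, not part of the original exchange):

```python
import math

kB = 1.380649e-23
g = 9.8
m = 4.81e-26    # assumed mean mass of an air molecule, kg
T = 288.0       # assumed uniform temperature, K
H = kB * T / (m * g)    # pressure scale height, roughly 8.4 km

P = 101325.0    # assumed surface pressure, Pa
dz = 1.0
for _ in range(8000):       # Euler-integrate dP/dz = -m*g*n up to 8 km
    n = P / (kB * T)        # number density at the current level
    P -= m * g * n * dz

P_analytic = 101325.0 * math.exp(-8000.0 / H)
print(abs(P / P_analytic - 1) < 0.01)   # True: the integration tracks exp(-z/H)
```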


on August 10, 2011 at 8:48 pm | willb

PaulM,

With the kinetic theory, the molecule is moving randomly and at some future time it will descend. Averaged over a sufficiently long period of time, the mean position of the molecule remains constant – therefore no convection.

on August 10, 2011 at 9:08 pm | willb

Neal J. King,

Based on your feedback, I think that the best direction to take at this point is for us to come to an agreement on what a parcel of gas looks like from the kinetic theory perspective. I’m inclined to think that a good place to start is the graphic shown in the Wikipedia article on Kinetic Theory. Are you ok with that?

on August 10, 2011 at 9:30 pm | Neal J. King

The article at

http://en.wikipedia.org/wiki/Kinetic_theory

looks OK, but the graphic shows a portion of gas that is constrained within a box. A parcel has no rigid walls, it has a shape that is determined by a rough sense of where its member molecules are.

So if you think of the walls of the graphic as being like a balloon made of spider web, it would be OK. The bouncing of the molecules off the boundaries comes from collisions with external (“non-member”) molecules, not from hitting walls.

on August 12, 2011 at 12:03 am | willb

Neal J. King,

Ok, I’d like to create a mental picture with the spider web material. I take the material and form it into a cube around some of the gas in the atmosphere. This creates a cubic-shaped parcel of gas with identifiable boundaries. Since you don’t like 1 meter, let’s say that the cube is ‘z’ meters on a side. The spider web material acts as a boundary which we can use later for calculations.

The atmosphere is in hydrostatic equilibrium, so there is no net force upward or downward on the cube of gas. The pressure against each inside wall of the cube exactly matches its outside pressure. The cube remains stationary, suspended in the atmosphere. Inside the cube is our jumble of monatomic gas molecules constantly moving around, colliding with each other and bouncing off the side walls and floor and ceiling of the cube to create the inside pressure.

Because we are in hydrostatic equilibrium in a gravity field, there is a different pressure on the ceiling than on the floor of the cube, which you have analyzed already. I believe that we should be able to create a relationship between the pressure differential you have calculated:

dP/dz = – mgn

and the formula for pressure used in the Wikipedia article on the kinetic theory:

P = (nmv^2)/3

Before I go any further I’m going to pause here so I don’t get too far ahead of myself and again ask for your feedback.
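One consistency check worth noting here: with v taken as the RMS speed, the kinetic-theory pressure formula reproduces the ideal gas law, since (1/2)·m⟨v²⟩ = (3/2)·kB·T. A sketch with assumed values:

```python
import math

kB = 1.380649e-23
m = 4.65e-26          # assumed molecular mass (about N2), kg
T = 288.0             # assumed temperature, K
n = 2.5e25            # assumed number density, molecules/m^3

v_rms = math.sqrt(3 * kB * T / m)       # RMS speed from (1/2)*m*v^2 = (3/2)*kB*T
P_kinetic = n * m * v_rms**2 / 3        # P = n*m*<v^2>/3
P_ideal = n * kB * T                    # ideal gas law

print(abs(P_kinetic - P_ideal) < 1e-6)  # True: identical by construction
```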

on August 12, 2011 at 12:57 am | Neal J. King

willb:

So far, so good. Just two points:

– The “v” in the pressure formula has to be understood as the root-mean-square (RMS) speed of the velocity distribution of the molecules in the gas.

– Let’s restrict use of “z” to the spatial variable of altitude/height; and denote the actual length of something by another name; for example, the linear dimension of the cube by “h”. We can use “dz” for a differential of altitude.

on August 12, 2011 at 2:55 am | willb

Neal J. King,

Ok, so from your equation the pressure differential between the floor and the ceiling is:

dP = – mgn dz

Let the pressure at the floor be P_floor and the pressure at the ceiling be P_ceiling. Then

P_ceiling – P_floor = – mgn dz

The kinetic theory equation for pressure P is:

P = (nmv^2)/3

Let v_floor be the RMS velocity of the molecules at the floor and v_ceiling be the RMS velocity of the molecules at the ceiling. Then:

P_floor = [nm(v_floor)^2]/3

P_ceiling = [nm(v_ceiling)^2]/3

P_ceiling – P_floor = (nm/3)[(v_ceiling)^2 – (v_floor)^2]

Equating the two forms of pressure differential:

(nm/3)[(v_ceiling)^2 – (v_floor)^2] = – mgn dz

Therefore:

(v_ceiling)^2 = (v_floor)^2 – (3g dz)

So under the condition of hydrostatic equilibrium the kinetic theory shows that for a parcel of gas, the molecules at the top of the parcel are travelling at a slower speed than the molecules at the bottom of the parcel. To be more precise, the mean square of the velocity of the molecules at the cube’s ceiling is less than the mean square of the velocity of the molecules at the cube’s floor by a factor of 3g dz.

Does this look reasonable so far?

on August 12, 2011 at 8:54 am | Neal J. King

willb:

Not quite:

P_ceiling – P_floor = (nm/3)[(v_ceiling)^2 – (v_floor)^2]

is not correct, because the value of n, as well as of v_rms, is continuously varying as a function of altitude. Therefore:

P_ceiling – P_floor = (m/3)[n(ceiling)*(v_ceiling)^2 – n(floor)*(v_floor)^2]

So since:

dP = -mg*n_average * dz = -mg*dz*[n(ceiling) + n(floor)]/2

(m/3)[n(ceiling)*(v_ceiling)^2 – n(floor)*(v_floor)^2]

= -mg*dz*[n(ceiling) + n(floor)]/2

[n(ceiling)*(v_ceiling)^2 – n(floor)*(v_floor)^2]

= – (3/2)*g*dz*[n(ceiling) + n(floor)]

[Also: your final equation is not correct; but even so, the statement below is not a correct interpretation of it:

“To be more precise, the mean square of the velocity of the molecules at the cube’s ceiling is less than the mean square of the velocity of the molecules at the cube’s floor by a factor of 3g dz.”

It should be:

“a factor of 3g dz” => “an amount 3g dz”

However, the equation is not correct anyway.]
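The corrected relation can be sanity-checked in the isothermal special case, where v_rms is the same at floor and ceiling and n falls off exponentially (assumed values; my own sketch, accurate to first order in dz):

```python
import math

kB = 1.380649e-23
g = 9.8
m = 4.65e-26   # assumed molecular mass, kg
T = 288.0      # isothermal case: v_rms is the same at floor and ceiling
n0 = 2.5e25    # assumed number density at the floor, 1/m^3
dz = 0.01      # small floor-to-ceiling separation, m

v2 = 3 * kB * T / m                       # <v^2>, independent of height here
n_floor = n0
n_ceiling = n0 * math.exp(-m * g * dz / (kB * T))   # barometric fall-off

lhs = n_ceiling * v2 - n_floor * v2
rhs = -1.5 * g * dz * (n_ceiling + n_floor)

print(abs(lhs / rhs - 1) < 1e-6)   # True: the corrected relation balances
```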

on August 13, 2011 at 3:39 am | willb

Neal J. King,

Ok, thanks for the feedback and I see your point about ‘n’. Both ‘n’ (number of molecules per unit volume) and ‘v’ (molecular RMS velocity) will have different values at the ceiling compared to the floor. Looking at this scenario in a qualitative kind of way:

– The significance of ‘n’ changing means pressure is reduced because there are fewer collisions occurring at the ceiling.

– The significance of ‘v’ changing means pressure is reduced because there is less kinetic energy in each collision.

Both of these changes appear to play a role in the dalr. Since ‘n’ is a function of volume, I see now that this parameter has some issues associated with it when using the concept of a parcel of gas within the kinetic theory. Please give me a moment to think about this.

on August 13, 2011 at 6:27 pm | willb

Neal J. King,

After thinking about it for a bit, I am coming to the conclusion that I can’t really use the term ‘n’ in conjunction with the kinetic theory for this analysis:

1) The term ‘n’ represents the number of molecules and is a function of both volume and pressure.

2) I am trying to analyze the pressure differential from bottom-to-top in a parcel of gas that contains a fixed number of molecules.

From 1) above I need to vary the number of molecules as the pressure changes, but from 2) if I want to use the kinetic theory on a parcel of gas then I have to analyze any pressure differential while keeping the number of molecules constant.

By using the term ‘n’ I’ve got a contradiction that I am unable to resolve. Therefore I can’t use the term ‘n’, so I’m going to see if I can avoid using it for now.

on August 13, 2011 at 7:22 pm | Neal J. King

willb:

I have been using the terms as follows:

N = the total number of molecules (say, in a parcel)

n = the number density of molecules

So if at some time the parcel has N molecules and volume V, then

n = N/V

Following the parcel around, N should not change (if we don’t wait so long that there is too much molecular miscegenation), but V will change; and so will n.

on August 14, 2011 at 1:31 am | willb

Neal J. King,

I think perhaps the next step is to try to get a mental picture of what is happening within the parcel (from a kinetic theory standpoint). With respect to pressure, we know there is less pressure at the parcel’s ceiling than at its floor and as far as I can see, there are really only two possibilities to explain this pressure difference:

1) The ceiling pressure is reduced because there are fewer collisions between molecules and the ceiling than between molecules and the floor.

2) The ceiling pressure is reduced because there is less kinetic energy in each of the ceiling collisions compared with the collisions occurring at the floor.

I’d suggest that both of these effects are occurring. The molecules are randomly bouncing around within the parcel, constantly moving. When a molecule travels upwards against gravity it will lose kinetic energy. It seems reasonable to assume that a molecule will therefore have less kinetic energy when striking the ceiling than when striking the floor.

Also, all collisions are elastic. When a molecule strikes the floor it will rebound and head towards the ceiling. However, the velocities of the molecules striking the floor are randomly distributed. Some molecules will not have sufficient kinetic energy to overcome the downward force of gravity and make it to the ceiling. If a molecule doesn’t have sufficient kinetic energy after bouncing off the floor, it will only make it part way to the ceiling before turning around and falling back to the floor. So, while all molecules are always bouncing off the floor, not all molecules are able to travel to the ceiling and bounce off it.

Does this seem like a reasonable kinetic theory “picture”?

on August 14, 2011 at 2:17 am | Neal J. King

willb:

– It’s reasonable to assume, in general, that both the v_rms and the number density will be varying as a function of altitude.

– However, to attribute the total change in v_rms to travel against the gravitational attraction is to assume too much. Keep in mind that there are many possible pressure-temperature profiles, of which the DALR is just one. For example, take a case in which the entire gas (not just the parcel) is maintained at constant temperature T_o: In this case, the average KE/molecule is independent of altitude, so v_rms is also independent of altitude.

– Also, the picture of molecules bouncing up from the floor all the way to the ceiling doesn’t take into account that the dimensions of the parcel need to be much greater than the mean free path of the molecules, an estimate of which I quoted somewhere above as being about 7e-8 (m). [It’s easy enough to calculate it, but I’m quite sure that the mfp is many many times less than 1 (cm), for gases within 1 (km) of the earth’s surface.]

Whether these points are important depends on where you are going with the calculation.
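The mean-free-path figure quoted above can be reproduced from the standard hard-sphere estimate λ = kB·T/(√2·π·d²·P), with an assumed effective molecular diameter for air:

```python
import math

kB = 1.380649e-23
T = 288.0            # assumed temperature, K
P = 101325.0         # assumed pressure, Pa (1 atm)
d = 3.7e-10          # assumed effective molecular diameter for air, m

mfp = kB * T / (math.sqrt(2) * math.pi * d**2 * P)
print(mfp)   # about 6.5e-8 m -- consistent with the ~7e-8 m figure, and far below 1 cm
```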

on August 14, 2011 at 11:23 am | Neal J. King

willb:

(to continue from previous response)

As DeWitt has reminded, we need to look also at the implications of the equation of hydrostatic equilibrium. That tells us that the pressure profile is related to the density profile, because the pressure at any altitude has to be able to support the weight of the stack of air above it.

But it also tells us that the pressure gradient (increasing downward) is a force pushing upward, and is opposed to the gravitational pull downward. Using the form of it that I stated above, when I applied it to a tiny volume V = a^3, that contains the gas volume for one molecule:

– mg = V*dP/dz = (a^3)*(dP/dz) = (a^2)*(P(z + a) – P(z))

This can be interpreted to say: “The gravitational force on one molecule is countered by the pressure difference on the tiny parcelet that contains that molecule (and its ration of space).”

Further interpreted in molecular terms: That one molecule is getting hit either more often, or harder, or both, by collisions from the bottom than by collisions from the top. So it will experience a push upwards, as well as the pull downward.

So you cannot conclude that the net result will always be that the molecular KE is lower at the higher altitudes. Indeed, if you create a temperature inversion layer, the temperature (and thus the molecular KE) actually increases with altitude, within the layer.
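The hydrostatic balance Neal describes can be sanity-checked numerically. A minimal sketch for the constant-temperature case he mentions (the molecular mass, temperature and pressure are assumed round values for dry air, not taken from the thread): Euler-stepping dP/dz = −(n·m)·g reproduces the analytic barometric formula.

```python
# Numeric sketch of hydrostatic equilibrium, dP/dz = -(n*m)*g, for an
# isothermal ideal gas. Euler-stepping the balance reproduces the analytic
# barometric formula P(z) = P0 * exp(-m*g*z / (kB*T)).
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
m  = 4.81e-26          # mean mass of an "air" molecule, kg (~29 g/mol)
g  = 9.81              # m/s^2
T  = 288.0             # K (isothermal assumption)
P0 = 101325.0          # Pa (assumed sea-level pressure)

dz, z_top = 1.0, 10000.0
P, z = P0, 0.0
while z < z_top:
    n = P / (kB * T)            # ideal gas: number density from P and T
    P += -(n * m) * g * dz      # weight of the slab above reduces P upward
    z += dz

analytic = P0 * math.exp(-m * g * z_top / (kB * T))
print(P, analytic)              # the two agree to well under 0.1%
```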

on August 14, 2011 at 11:03 pm | willb

Neal J. King,

Regarding your point about other lapse rates: I acknowledge that many other lapse rates are possible, but I am only considering the DALR in an atmosphere of monatomic gas in hydrostatic equilibrium. This assumption means there is no energy entering or leaving the parcel of gas. There is no work done on or by the parcel of gas. From the definition of DALR I believe I am justified in attributing the total change in v_rms to travel against the gravitational attraction.

Regarding your point about the mean free path of the molecule, I tend to agree with you that this might be an issue. I reviewed the Wikipedia article on the kinetic theory to see how the writer handled this issue in deriving pressure. He/she more or less ignored inter-molecular collisions and assumed the molecule travelled unimpeded from one wall to the opposite wall and back. I presume this means that the writer, in analyzing the gas molecule, is considering more the momentum of the molecule and not the molecule itself. I was intending to follow this lead and handle the motion of the gas molecule in my analysis the same way.

Regarding your discussion about the implications of carrying out analysis on a tiny volume that contains the gas volume for one molecule, I was not intending to do this. I believe the kinetic theory requires that you work with enough molecules so that a statistical analysis can be carried out.

on August 15, 2011 at 10:27 am | Neal J. King

willb:

– You are over-interpreting the definition: There is always an exchange of kinetic and gravitational energy; but that doesn’t mean other factors don’t weigh in. You have to take into account what the impact of the pressure gradient will be on the kinetic energy of the gas. “Adiabatic” does NOT mean “no energy entering or leaving the parcel of gas. There is no work done on or by the parcel of gas.” It means “no heat (= dQ = dU + PdV) leaving the parcel”; which is a very different concept.

– I’ve done the arithmetic for mean free path now: for 1 atm of pressure, at 300 K temperature, a gas has density 2.44*10^25 /(m^3). For a monatomic molecule of radius of 1e-10 (m), this gives a mfp of 3.26e-7 (m). If your parcel size is bigger than that, the graphic is unrealistic.

– The main point of my bringing the equation of hydrostatic equilibrium down to a scale of one molecule is NOT to conduct the calculation that way, but to point out that the pressure gradient issue does not go away even when you consider the gravitational force acting on even just a single molecule: More is acting on that molecule than just the earth’s pull.

The bottom line: to find the DALR, you have to specify what is actually going on during an “adiabatic” process. So far you haven’t really done that.
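Neal's mean-free-path arithmetic above can be reproduced in a few lines (assuming, as he does, a molecular radius of 1e-10 m). His figure of 3.26e-7 m corresponds to the simple 1/(n·σ) estimate; the standard formula for a Maxwellian gas carries an extra factor of √2 and gives a somewhat shorter path.

```python
# Sketch reproducing the mean-free-path estimate at 1 atm and 300 K.
import math

kB = 1.380649e-23
P, T = 101325.0, 300.0
n = P / (kB * T)                 # number density, ~2.44e25 per m^3
r = 1e-10                        # assumed molecular radius, m
d = 2 * r                        # collision diameter
sigma = math.pi * d**2           # collision cross-section

mfp_simple  = 1 / (n * sigma)                 # ~3.3e-7 m (the figure quoted)
mfp_maxwell = 1 / (math.sqrt(2) * n * sigma)  # ~2.3e-7 m (Maxwellian gas)
print(n, mfp_simple, mfp_maxwell)
```

Either way, the mfp is several orders of magnitude smaller than any reasonable parcel, which is the point being made.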

on August 15, 2011 at 11:00 pm | willb

Neal J. King,

If I understand you correctly, you are saying that gravity is not the only force acting on the molecules in the parcel and changing their kinetic energy. Some other force is also in play. I believe you are saying this other force is a function of pressure. Am I interpreting you correctly? If so, could you please clarify what this force is?

on August 15, 2011 at 11:57 pm | Neal J. King

willb:

The upward force is the pressure gradient: Its magnitude is exactly the same as the gravitational force, on the average. At the molecular level, it consists of collisions with other molecules, which impart a statistically upward momentum.

on August 16, 2011 at 9:49 pm | willb

Neal J. King,

If I may paraphrase your comment so that I understand its implications at the molecular level, does the following reflect what you are saying?:

– There is a pressure gradient in the parcel of gas.

– The pressure gradient creates a force on the gas molecules that affects their momentum.

– The effect of this force is to impart an upward change in their momentum (in the +ve z direction).

– The upward change in momentum adds kinetic energy to the molecules in the +ve z direction, counteracting to some extent the decelerating effect of gravity.

– This force due to the pressure gradient manifests itself during inter-molecular collisions.

If this is a correct interpretation of what you are saying, then I’m having a bit of difficulty rationalizing it with the concept of conservation of momentum. In an elastic collision between two ideal gas molecules, the total momentum after the collision should be exactly the same as the total momentum before the collision. The implication of your comment is that, under conditions of a pressure gradient, this is not so. There will be an addition to the momentum in the +ve z direction after the collision occurs, attributed somehow to the pressure gradient.

on August 16, 2011 at 10:49 pm | Neal J. King

willb:

The total momentum just after and just before the collision are the same, but the molecule that we are focused on (the upper one) has received an upward impact from the one below; and the one below has received a downward impact from the one above. Since these impacts are equal & opposite, Newton’s 3rd law and conservation of momentum are upheld.

Nonetheless, since the pressure is increasing downward, if you look at any one molecule, it’s going to get hit harder (and/or) more often from below than from above. Let’s focus one factor and assume the other is fixed: assume that the pressure below is twice as high because the number density is twice as much, but the rms molecular speeds are the same. In a given period of time, the molecule of interest will get hit twice as many times from below as from above, so there will definitely be a push upwards. Two points:

– Of course there cannot really be a discontinuous jump in pressure by a factor of 2. But the point is that even a gradual pressure gradient creates a difference in the momentum impact per unit time, which is what force is. The gradient has to be taken over a finite dz to give a finite pressure difference; conceptually speaking, a reasonable dz would be one mean free path (mfp), although the final results don’t depend on the exact value of dz.

– Why isn’t the lower molecule being driven down “out of the gas”? Because it has neighbors below it, further kicking upward – and there are more of them. So it’s also feeling a net kick upward. The pressure gradient keeps going all the way to the ground.

////////////////////////////////////////////////

I’m not sure you’re ready to take a further step, but I had an idea about how to bring the DALR closer to the kinetic model. The basic point you will have to come to terms with is what the term “adiabatic” really means, and why it matters.

In terms of thermodynamics, as DeWitt and I have stated before, it means that there is a change in the volume of gas that does not entail exchange of heat with the outside. This leads to the equation:

dU = – P*dV

For our current model of a monatomic gas, the internal energy

U = N*m*(v_rms)^2/2 = (3/2)*N*(kB)*T

So what this is telling me is that the internal energy of the gas is changed solely by the work exchanged with the outside world. This work is exchanged by the walls of the gas’ chamber moving one way or the other against the internal pressure of the gas; without gaining or losing any energy due to temperature differences between the walls and the gas. So my interpretation is that, if you think of all these balls bouncing around all the walls elastically, and then study what happens as one of the walls slowly moves in, you will see that the average kinetic energy of the molecules will be increased; and you will find that the total KE of the gas is bigger, exactly by the amount P*dV.

Conversely, if you watch what happens when one of the walls is moved out, the average KE of the molecules bouncing off that wall will be decreased.

In short, if you can show that:

N * d(KE/molecule) = -(1/3)P*dV

then you will have a kinetic-gas model for adiabatic expansion/contraction. That would be half the way home, maybe a little more.

on August 18, 2011 at 12:22 pm | Neal J. King

This is intended as a correction to my post of August 16, 2011 at 10:49 pm:

“N * d(KE/molecule) = -(1/3)P*dV”

should be:

“N * d(KE/molecule) = – P*dV”

(I should be more careful with my equations at night.)

Hope this ends up in the right place.
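The corrected relation N·d(KE/molecule) = −P·dV can be checked numerically against the monatomic adiabat P·V^γ = const with γ = 5/3: along that curve, the change in U = (3/2)·P·V matches the accumulated −P·dV work. A sketch with an assumed starting state:

```python
# Numeric check that dU = -P*dV along a monatomic adiabat P*V^(5/3) = const.
gamma = 5.0 / 3.0          # monatomic ideal gas
P0, V0 = 101325.0, 1.0     # assumed starting state (Pa, m^3)
K = P0 * V0**gamma         # adiabatic invariant

def U(P, V):               # internal energy of a monatomic ideal gas
    return 1.5 * P * V

work = 0.0                 # accumulates -P*dV
steps = 100000
dV = -0.5 * V0 / steps     # compress from V0 down to V0/2
V = V0
for _ in range(steps):
    P = K / V**gamma       # pressure on the adiabat at the current volume
    work += -P * dV        # work done ON the gas in this step
    V += dV

dU = U(K / V**gamma, V) - U(P0, V0)
print(dU, work)            # equal to within discretization error
```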

on August 17, 2011 at 2:05 am | willb

Neal J. King,

If momentum is conserved during inter-molecular collisions, then collisions will not impart a statistically upward momentum and the net force due to collisions will be zero.

on August 17, 2011 at 9:39 am | Neal J. King

willb:

No, your statement is not correct: Check my reply to DeWitt’s comment somewhere below.


on August 5, 2011 at 1:32 pm | Bryan

Neal J. King says

……”I’m afraid I must disagree with just about every substantial point you’ve claimed.”…….

I see only two substantial points here made by wllb & Bill Gilbert.

1 The molecular approach is often very useful in conceptualizing the underlying physics.

2. And, yes, the adiabatic lapse rate exists independently of convection.

Neal’s long post seems to concern the number of degrees of freedom of the diatomic molecules “at temperatures of interest.”

What other “substantial points” are you referring to?

Your tone seems to be highly polemical and inappropriate.

on August 5, 2011 at 3:45 pm | DeWitt Payne

Bryan,

The substantial point was that the approach was wrong from the start. pdV work was ignored. The equation of interest was not (or not only) the ideal gas equation. The important equation is that for adiabatic expansion pV^γ = K, where K is a constant.

on August 5, 2011 at 4:11 pm | Bryan

DeWitt Payne

Right at the top Bill Gilbert writes;

dU = dQ – PdV ….[WG1]

He later adds another work function for the gravitational field.

Surely he does not have to repeat the whole article in order to help clarify a question raised.

That would seem a totally unreasonable demand.

on August 5, 2011 at 7:08 pm | DeWitt Payne

Bryan,

Neal was not referring to William Gilbert’s derivation in WG2010 (already savaged by SoD) in his critique, but to willb’s derivation and William Gilbert’s correction to willb. willb does ignore pdV work, and WG’s correction to willb is wrong.

on August 5, 2011 at 7:42 pm | Bryan

DeWitt Payne says

WG’s correction to willb is wrong.”

As far as I could see WG was helping willb one bit at a time.

Neal’s contribution was a pedantic discourse on seven degrees of freedom versus five.

Everyone knows that the Kinetic Theory breaks down for the higher degrees of freedom.

Some molecules follow the theory while others do not.

You say

“WG2010 (already savaged by SoD)”

Empty rhetoric, read the thread again.

If my memory is correct, Neal, midway through the comments, comes round to agreeing with WG that the gravitational PE of a molecule is not part of the molecule’s internal energy.

on August 5, 2011 at 8:47 pm | DeWitt Payne

Bryan,

Failure to adequately explain (hand waving about degrees of freedom isn’t an explanation) exactly why willb should use 2/7 instead of 2/3 is not helping ‘one bit at a time’. It’s not helping at all.

Neal J. King comes round to agreeing? Who said he disagreed in the first place? Here’s what he actually said:

Not that that’s relevant to the point at hand.

I read the thread the first time. Savaged is the appropriate term for such bone headed errors. It reminds me of someone else who I won’t name who can’t do simple algebra either.

on August 6, 2011 at 1:26 am | Bryan

DeWitt Payne

Nit picking about trivia and empty rhetoric is pointless.

The greenhouse hoax is coming apart.

Pseudo-science does not stand up to even cursory examination.

This paper by Joseph Postma complements the G&T critique of the so-called CO2-induced greenhouse effect disaster.

It is also in agreement with Gilbert & Jelbring.

It focuses on the real physical processes that influence climate.

http://www.tech-know.eu/uploads/The_Model_Atmosphere.pdf

on August 6, 2011 at 7:49 pm | williamcg

Neal J. King,

I’ve just finished catching up with the subsequent posts to mine. I wrote 7 sentences and you wrote 3 pages in counterpoint. You could have saved some time if you had just asked me how I arrived at (7/2). You correctly arrived at the degrees of freedom term of (5/2) for a diatomic gas (although this is only an approximation for lighter molecules since the equipartition theorem doesn’t work well for those). But I’ll go along. But you did not take into account that this is only valid for a molar heat capacity at constant volume (read your own quote from Wiki). But the atmospheric system is not a constant volume system; it is a constant pressure system.

Cv = (5/2)R but

Cp – Cv = R thus

Cp = (5/2)R + R = (7/2)R

Doesn’t it ever bother you when theory doesn’t match observation? Using the factor (5/2) gives you a dry adiabatic lapse rate of 13.68 K/km. That should be a warning that something is wrong. The true dry adiabatic lapse rate is 9.81 K/km. The factor of (7/2) yields 9.77 K/km, which is close but still not very accurate (because the equipartition theorem doesn’t handle the vibrational and rotational energies well for light diatomic gases).

The rest of your epistle on the dry adiabatic lapse rate (DALR), while long and muddled, is incorrect. The DALR is a special thermodynamic equilibrium condition when the system is under the influence of both an electromagnetic field and a gravitational field. The Helmholtz free energy (A) of the DALR equals zero.

DeWitt,

I assume after reading this you understand that your statement that the kinetic theory of gases does not handle PdV is incorrect. At least I think that is what you said. Remember from my first post that CpT = CvT + PV (this can be derived from Cp – Cv = R and the ideal gas law). If you were saying that my paper does not take PdV into account, then you need to read it again. The entire paper is focused on the role of PdV work in the dynamic troposphere.

As for my paper being “savaged” by SOD, I just feel sorry for him because he showed his complete lack of understanding of basic physics and thermodynamics outside of radiation heat transfer. I suggest you both take a course or two in “combined heat and mass transfer” at a good engineering school.

I’m done with this thread. But thanks Willb for bringing up the kinetic theory of gases – it is definitely pertinent to the subject at hand.

Bill Gilbert
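The two lapse-rate figures traded in this exchange (13.68 vs 9.77 K/km) both follow from the formula Γ = g/cp, with the two choices of heat capacity. A quick check, using standard dry-air constants (assumed values, not from the comment):

```python
# The quoted lapse rates from g/cp with the two heat-capacity choices.
R = 8.314          # gas constant, J/(mol K)
M = 0.02896        # mean molar mass of dry air, kg/mol
g = 9.81           # m/s^2

cp_5_2 = (5.0 / 2.0) * R / M     # J/(kg K): using Cv in place of Cp
cp_7_2 = (7.0 / 2.0) * R / M     # J/(kg K): Cp = Cv + R for a diatomic gas

lapse_wrong = g / cp_5_2 * 1000  # ~13.7 K/km, the "warning" value
lapse_dalr  = g / cp_7_2 * 1000  # ~9.8 K/km, the standard DALR = g/Cp
print(lapse_wrong, lapse_dalr)
```

Note that this is exactly equivalent to Neal's dT/dz = −(mg/kB)·(2/(2+f)) with f = 5, since cp per molecule is (7/2)·kB.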

on August 6, 2011 at 8:23 pm | Neal J. King

Bill G.,

Two other points:

– “But the atmospheric system is not a constant volume system; it is a constant pressure system.” The pressure of the atmosphere changes continually with height, as does the temperature and density. It is not constant pressure or constant volume.

– “The Helmholtz free energy (A) of the DALR equals zero.” The DALR is a RATE, not a substance or a system. There is no such thing as the Helmholtz free energy of the DALR, any more than there is such a quantity as the Helmholtz free energy of the 55-mile speed limit.

– I don’t know what you’ve been smoking, but it sure must be good.

on August 6, 2011 at 8:06 pm | Neal J. King

Bill G.,

You’re not reading straight.

a) I already said that the correct answer is:

dT/dz = – (mg/(kB))(2/(2 + f))

so for f = 5, as for a gas of diatomic molecules,

the ALR: dT/dz = – (mg/(kB))(2/(2 + 5)) = – (mg/(kB))(2/7)

How can you disagree with that? If you look at the deviation from reality, the generally accepted explanation, supported by numerical calculations, is that you have to take into account water vapor. But the formula I’ve given is the generally accepted formula for the dry ALR.

If you want to fall back to the reliance on Cp, that’s fine: That’s exactly equivalent to the formula I presented, and is based on exactly the same physics. But note that it CANNOT be derived from anything like willb’s argument from a molecular-kinetic theory: How do you even define a constant-pressure situation in terms of a one-molecule gas?

b) “The DALR is a special thermodynamic equilibrium condition when the system is under the influence of both an electromagnetic field and a gravitational field.”

You are mildly insane: The DALR has absolutely nothing to do with the influence of electromagnetic fields. Were you the same guy that was talking about the impact of convection currents on electricity?

on August 6, 2011 at 9:20 pm | Bryan

Neal J. King says

“Were you the same guy that was talking about the impact of convection currents on electricity?”

No, that was me, and it’s a mistake.

I intended to write conventional current (+ to –) as an alternative to electron flow (– to +), as an analogy for why some textbooks differ about the sign given to PdV.

Both work fine as long as you are sure about how you define them.

on August 7, 2011 at 5:00 pm | DeWitt Payne

Bryan,

Yet more evidence of your propensity for confirmation bias. Did you actually read the Postma paper? He favorably references Lyndon LaRouche on page 25! He conducts a rehash of the Wood greenhouse experiment which proves nothing with respect to the atmospheric greenhouse effect. As far as I can tell, he completely ignores the experimentally measured long wave IR emission of the atmosphere. There is no mention of energy balance diagrams. He spends a lot of time trashing the single slab atmosphere model, a pedagogic tool that nobody claims actually represents the real atmosphere. The paper complements G&T only insofar as both should have been printed on that soft paper that comes in rolls found in bathrooms.

on August 7, 2011 at 5:49 pm | Bryan

DeWitt Payne

I must admit I had never heard of Lyndon LaRouche; Wikipedia says he has some kind of synthesis of Marxism and Capitalism.

With two major crises of the world market system within 3 years and rampant speculation by hedge funds, perhaps Lyndon has a point.

…..”He conducts a rehash of the Wood greenhouse experiment”…..

Yes still looking for the phantom greenhouse effect no doubt.

Why do the IPCC adherents not try to do the same thing?

Is that like asking a catholic to prove that there is a God?

..”completely ignores the experimentally measured long wave IR emission”…

He takes that for granted so it doesn’t require comment.

….”There is no mention of energy balance diagrams”…….

Well perhaps he feels that they are completely useless.

Does your favourite energy balance diagram have a solar radiation component of 342 W/m² instead of 1370 W/m²?

The KT (97) diagram explains the OXO cube planet with four Suns.

Whenever we find one perhaps then it might have some use.

However with our day/night planet it is useless.

……”trashing the single slab atmosphere model”………

The idea here is to see if there is any theoretical proof of the greenhouse effect.

If there was some reality there, then a more elaborate model would be justified.

However Postma showed the concept to have no rational basis.

Why would more slabs help?

In actual fact Postma’s two papers will be more effective than the outstanding G&T paper.

G&T’s paper was aimed at a physics audience.

Postma’s paper is more accessible to a more general scientific audience and the public.

on August 7, 2011 at 10:44 pm | DeWitt Payne

Bryan,

It’s interesting that you trash John Denker as a maverick, but accept without question Postma and G&T who are far more outside the mainstream than Denker. Confirmation bias in spades.

If you want to argue from authority, show me some reputable physicists who publicly defend G&T.

Do you really not understand the geometric argument for an average daily insolation at the top of the atmosphere of 342 W/m²? The surface of the Earth is a sphere, not a plane. What’s the ratio of the area of a sphere to a plane disk with the same radius? You only get 1370 W/m² at local noon when the sun is directly overhead, like at the equator for the spring and fall equinoxes. At every other time of the day or other latitudes it’s less.

on August 8, 2011 at 7:42 am | Bryan

DeWitt Payne

We are in danger of talking past each other.

” Do you really not understand the geometric argument for an average daily insolation at the top of the atmosphere of 342 W/m²? ”

The schoolboy type justification for this bit of trivia is easily understood.

But it has no physical reality.

Postma deals quite effectively with the day/night reality that such an approach denies.

Read their paper.

…….”some reputable physicists who publicly defend G&T.”……

Gerhard Kramm, Dr Wolfgang Thune, Paul Bossert, Fred Staples and so on….

Show me some who don’t!

An intelligent G&T critic might say something like

Nothing that they have written is actually incorrect, but they don’t quite prove their point.

The discrepancy they point out is that G&T say little about the TOA.

G&T on the other hand never claim a climate model of their own.

They set out to demolish the CO2-induced global warming crisis theory.

I think they were pretty effective in this modest task.

By the way, what did you think of the chemical thermodynamics textbook that I linked?

on August 8, 2011 at 1:55 am | DeWitt Payne

Bryan,

Picking at random:

It’s called numerical analysis.

How many slabs, you might ask? As many as are needed. The defining criterion being that the increase in precision of the answer by increasing the number of slabs becomes smaller than the precision you need. Assuming, of course, that going to finer scale does increase the precision (the solution converges). That can be a problem with Navier-Stokes. How do you think spacecraft orbits are calculated? There certainly isn’t an analytic solution as more than two bodies are involved.

You can even solve ill-posed (ill-conditioned) problems like converting x-ray diffraction data into a three-dimensional molecular structure using numerical analysis. Many inverse problems are ill-posed; that is, there are an infinite number of solutions.

on August 8, 2011 at 7:53 am | Bryan

DeWitt Payne says

“Why would more slabs help? ……

It’s called numerical analysis.”……

If no rational effect can be detected with one slab why would more help?

The attempt by global warming advocates to hide behind complications won’t work.

I expect SoD will have to feature this paper.

After being badly mauled by this thread he will be on more comfortable ground with Postma’s paper, which is largely about radiative physics.

on August 8, 2011 at 2:33 pm | DeWitt Payne

Bryan,

Any time someone claims to have found something blindingly obvious, like the sun goes below the horizon at night, that climate scientists are supposed to have completely missed, my BS detector starts flashing. SURFRAD, for example, monitors incoming and outgoing radiation 24/7.

Postma completely ignores heat capacity with his flawed day/night analysis. The diurnal temperature variation in the tropical ocean surface temperature, for example, is less than 1 degree. It’s fairly trivial to get a reasonable diurnal temperature cycle for everything from the moon to the tropical ocean with a simple spreadsheet model.

As I’ve stated several times before, G&T are utterly wrong about energy balance diagrams. Their claim that they are somehow scientific fraud is not supported by any evidence in their paper. That alone is enough to put their paper in the trash heap. Their invocation of MHD is simply ludicrous.

on August 8, 2011 at 6:16 pm | Bryan

DeWitt Payne

As Postma points out on page 19, the plane parallel assumption, which is true for stars, is not valid for our planet.

But all is not lost!

Greenhouse enthusiasts can retain the plane parallel assumption if they adopt my proposal for the OXO cube model for our planet.

They will however have to add another two suns making six in all but then the energy diagrams might have some meaning.

I take it that energy diagrams such as KT97 are the most precious part of the greenhouse theory.

……”The diurnal temperature variation in the tropical ocean surface temperature, for example, is less than 1 degree. It’s fairly trivial to get a reasonable diurnal temperature cycle for everything “……..

What about deserts?

On G&T you say…. “Their invocation of MHD is simply ludicrous.”…

The solar wind effects caused by MHD turbulence can therefore be completely ignored: no influence on our climate whatsoever!

on August 8, 2011 at 9:59 pm | DeWitt Payne

Bryan,

No he doesn’t. In fact, he says the opposite.

He then creates his ‘irrational model’ (a complete strawman having nothing whatsoever to do with calculating radiative transfer in a planetary atmosphere or a plane parallel geometry) out of whole cloth and proceeds to demolish it. But we know that a 1 dimensional radiative transfer model works in the Earth’s atmosphere because we can use it to calculate IR emission spectra that are nearly identical to measured spectra within the experimental error of the line strengths and the temperature and partial pressure profiles. But like G&T, Postma completely ignores actual observations in favor of his own imaginings.

Lower heat capacity, higher diurnal temperature variation. How much of the planet is desert anyway? Not much. It’s all olds, not news.

You might want to check the comments at RealClimate starting here.

I particularly like deconvoluter’s comment:

on August 8, 2011 at 11:22 pm | Bryan

DeWitt Payne

I said

As Postma points out on page 19, the plane parallel assumption, which is true for stars, is not valid for our planet.

You replied

No he doesn’t. In fact, he says the opposite.

Postma said

The first condition [plane parallel geometry] is the most important and has direct application to the terrestrial case.

I say

Postma USES the direct terrestrial application of an assumption that is valid for a star, as the IPCC practice does, and shows that this is an absurd assumption.

At equal distances from a star centre equal temperatures will be found on average at the same time.

At equal distances from the Earth centre equal temperatures will not be anything like the same at the same time because of the day/night condition.

Hence the plane parallel assumption is absurd in the case of the Earth’s surface.

on August 8, 2011 at 11:31 pm | Bryan

DeWitt Payne

Your real climate quotes were predictably dull and pointless.

How can any rational person believe in an IPCC-style greenhouse theory after the Postma demolition?

on August 8, 2011 at 10:17 pm | DeWitt Payne

willb,

The concept of parcels is fundamental to the Lagrangian framework of fluid flow. OTOH:

http://www.usna.edu/Oceanography/Barrett/SO335/Ch4_Advection.doc

Both have their uses. It seems to be a little easier, though, to derive the adiabatic lapse rate in a Lagrangian framework.

on August 8, 2011 at 11:54 pm | willb

DeWitt Payne,

Perhaps my comment was somewhat misleading. I have no problem with the “parcel of air” construction. I was just trying to identify some of its limitations in the particular scenario under discussion, mainly to justify my request for further clarification from Neal.

on August 9, 2011 at 7:54 pm | DeWitt Payne

Bryan,

I suggest you read section 3.7.4 of G&T and compare that with Postma. They can’t both be correct.

on August 20, 2011 at 10:50 pm | Bryan

DeWitt Payne

A delayed answer to your question.

The delay was in order to read Postma’s second paper.

I don’t think that they necessarily contradict one another.

G&T maintained that the average global surface temperature of 15C had no scientific merit.

Postma used 15C in his calculations without any real comment.

His main points were

1. There is an effective radiation level of about 5 km for the Earth/atmosphere ensemble.

2. Thermodynamic calculation of the dry adiabatic lapse rate of –g/Cp.

3. Work out the radiation absorption/emission consequences separately for day and night.

However he mentions throughout the text the benefits of treating the local actual surface conditions.

So G&T and Postma would see a multitude of local lapse rates throughout the planet surface.

The lapse rates would differ according to local conditions.

There would be a similar thermodynamic link between the local troposphere and the local planet surface temperature.

on August 12, 2011 at 3:27 pm | DeWitt Payne

Neal J. King,

Isn’t the whole point of parcels to make them small enough that you can ignore any variation within the parcel? Then you let the parcel do work by expanding and reducing the internal pressure and temperature. Then you use energy conservation and hydrostatic equilibrium to convert pressure to altitude. That gives you temperature vs altitude and, if you did your sums correctly, the adiabatic lapse rate. I did that in a spreadsheet a while back and it seemed to work. I even did the moist saturated lapse rate too.
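The parcel procedure DeWitt describes can be sketched in a few lines: step a parcel upward in hydrostatic equilibrium, update its temperature along the dry adiabat via Poisson's relation T ∝ P^(R/cp), and read off dT/dz. Constants are standard dry-air values, assumed rather than taken from the thread.

```python
# Parcel integration: hydrostatic pressure step + adiabatic temperature
# update recovers the dry adiabatic lapse rate g/cp.
R_s = 287.0        # specific gas constant for dry air, J/(kg K)
cp  = 1004.0       # specific heat at constant pressure, J/(kg K)
g   = 9.81         # m/s^2

P, T = 101325.0, 288.0             # assumed surface state
dz, z = 1.0, 0.0
T0, z0 = T, z
while z < 1000.0:                  # integrate over the first kilometre
    rho = P / (R_s * T)            # ideal gas law for the parcel density
    dP = -rho * g * dz             # hydrostatic equilibrium
    T *= ((P + dP) / P) ** (R_s / cp)  # Poisson (adiabatic) relation
    P += dP
    z += dz

lapse = (T0 - T) / (z - z0) * 1000.0   # K/km
print(lapse)                       # close to g/cp ~ 9.77 K/km
```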

on August 12, 2011 at 4:39 pm | Neal J. King

DeWitt:

The problem with ignoring variation within a parcel is that you have to do it self-consistently. willb is using:

– mng*dz = dP

so that inherently assumes that P is changing from bottom to top. That would be OK if you never have to use a value of P in the interior; but what he goes on to do is to divide by n, so that:

– mg*dz = dP/n

So then the problem is that dP is defined only at the boundaries, but n is defined only in the interior (the center points). If this were a dynamical problem, it would be equivalent to defining the velocities and the coordinate values at the same time-step points, instead of interspersing them. You are stuck doing it with the initial values (x(0), v(0)); but later you want to alternate them or the solution will be off.

Where willb is heading is equivalent to making the assumption that:

dP = d(n(kB)T) (OK)

= (kB)*d(n*T) (OK)

= (kB)*n*dT (not OK)

which is missing the term (kB)*T*dn in the last step.
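Neal's point about the dropped (kB)*T*dn term can be made quantitative. On the dry adiabat for a diatomic gas, the retained n·dT piece supplies only 2/7 of the hydrostatic pressure change; the dropped T·dn piece supplies the other 5/7. A sketch with assumed dry-air values:

```python
# Split dP = kB*(n*dT + T*dn) into its two terms over a 1 m step on the DALR.
kB = 1.380649e-23
m  = 4.81e-26                  # mean molecular mass of air, kg
g  = 9.81
T0 = 288.0
P0 = 101325.0
gamma_dalr = (2.0 / 7.0) * m * g / kB   # DALR in K/m for f = 5

dz = 1.0
T1 = T0 - gamma_dalr * dz      # temperature one step up
n0 = P0 / (kB * T0)            # number density at the base
dP = -(n0 * m) * g * dz        # hydrostatic pressure change over dz
n1 = (P0 + dP) / (kB * T1)     # ideal gas at the new level

term_ndT = kB * n0 * (T1 - T0) # the only term the flawed step keeps
term_Tdn = kB * T0 * (n1 - n0) # the term that was dropped
print(term_ndT / dP, term_Tdn / dP)   # ~2/7 and ~5/7
```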

on August 12, 2011 at 6:35 pm | DeWitt Payne

Bryan,

I’m still waiting for your reply to the point I raised above.

G&T seem to use the same ‘schoolboy geometry’ you derided.

As a side note, G&T are wrong about the integration problem for the surface temperature behavior for a rotating sphere illuminated on one side being insoluble even by numeric approximation. There are two trivial cases, a superconducting isothermal sphere and a completely non-conducting surface with zero heat capacity that have simple solutions. Even a conducting rotating sphere with finite heat capacity can be approximated in a finite time with current computers. I’ve done it in a spreadsheet with a desktop computer. AOGCM’s are far more complex than that problem. They have to do Navier-Stokes calculations as well as radiative transfer, not to mention a coupled ocean model. That’s not to say AOGCM’s are perfect, just that a far more complex mathematical problem can be solved at least to some degree.

They’re also wrong about LTE not being applicable in the atmosphere. If that were the case, then the kinetic energy distribution of the molecules at a given point wouldn’t be Maxwell-Boltzmann. Then the emission temperature calculated by measuring the emission intensity of a known molecular transition would be less than the average kinetic energy temperature measured by a thermometer. Satellite temperature sensing wouldn’t work. But of course, LTE does apply and remote sensing does work.

on August 12, 2011 at 6:38 pm | DeWitt Payne

Neal J. King,

Thanks.

I seem to remember a similar problem with Miskolczi.

on August 12, 2011 at 6:49 pm | Neal J. King

DeWitt,

I don’t recall that specific issue with Miskolczi.

That gives me a guilty conscience: I am still in the middle of a clarificatory discussion with him, which I put on hold to do some paid work. We have come to some degree of common ground – but with lots of major issues still to be worked. Maybe I can get to that again, this weekend.

on August 12, 2011 at 7:45 pm | DeWitt Payne

Neal,

It relates to the derivation of equation (20) in M2007. As I remember, you haven’t made it that far.

on August 12, 2011 at 8:03 pm | Neal J. King

DeWitt,

No, and miles to go before I sleep …

on August 13, 2011 at 8:43 pm | DeWitt Payne

Neal,

I was going to suggest looking at the derivation of the adiabatic lapse rate in Caballero’s Physical Meteorology Lecture Notes. But he does it by first defining the potential temperature, Θ, and then setting dΘ/dz equal to zero. Unfortunately, my calculus is so rusty, I haven’t been able to reproduce the intermediate steps in the differentiation.
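For what it's worth, the intermediate steps are short once the standard definitions are written down (Θ = T(p0/p)^(R/cp), the ideal gas law p = ρRT, and hydrostatic balance dp/dz = −ρg):

```latex
\Theta = T\left(\frac{p_0}{p}\right)^{R/c_p}
\;\Rightarrow\;
\ln\Theta = \ln T + \frac{R}{c_p}\left(\ln p_0 - \ln p\right)

\frac{1}{\Theta}\frac{d\Theta}{dz}
  = \frac{1}{T}\frac{dT}{dz} - \frac{R}{c_p}\,\frac{1}{p}\frac{dp}{dz} = 0

\text{Substituting } \frac{dp}{dz} = -\rho g \text{ and } p = \rho R T:
\qquad
\frac{dT}{dz} = \frac{T R}{c_p}\,\frac{1}{p}\frac{dp}{dz}
             = \frac{T R}{c_p}\cdot\frac{-\rho g}{\rho R T}
             = -\frac{g}{c_p}
```

Setting dΘ/dz = 0 thus lands directly on the dry adiabatic lapse rate g/cp.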

on August 13, 2011 at 9:21 pm | Neal J. King

DeWitt,

I do not need hints for deriving the DALR: I can do that. But the challenge here is to see how far we can get with just using the lower-level kinetic theory of gases, while avoiding the higher-level fluid/thermodynamic concepts. Caballero does not address things in that way.

(I am not particularly optimistic about this approach; but as long as willb is interested in pursuing the question, I am just making sure no errors are made. And maybe we will find something out.)

on August 13, 2011 at 11:51 pm | Bryan

Neal J. King, willb, DeWitt,

It’s a fascinating topic.

I worked through the maths and calculations about 6 months ago and we briefly discussed it on another thread DeWitt.

I found these sources most helpful.

G&T approach is thermodynamics based

arxiv.org/pdf/1003.1508

From a kinetic theory of gases viewpoint with Maxwell Boltzmann statistics

Fundamentals Of Physics Extended, 8Th Ed

Halliday, Resnick

I used a calculation of a mole of air at STP being transported to a height of 10km

Putting in gravitational PE and KE loss (Temperature drop) and PV work involved

Both approaches gave the same answer, and nearly (but not quite) matched the initial thermal energy at STP.

The difference between calculations I would put down to the uncertainty about the exact contributions from the rotational and vibrational modes, as discussed earlier by Bill Gilbert, Neal J. King and willb.
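A worked version of the kind of check Bryan describes (my own numbers, using standard molar values for air, not his actual calculation):

```python
M = 0.029     # kg/mol, molar mass of air
g = 9.81      # m/s^2
h = 1.0e4     # m, lift height (10 km)
cp = 29.1     # J/(mol K), molar heat capacity at constant pressure

delta_PE = M * g * h            # gravitational PE gained: ~2845 J
delta_T = g * M / cp * h        # cooling at the DALR g/cp: ~98 K
heat_lost = cp * delta_T        # enthalpy given up: ~2845 J

print(round(delta_PE), round(delta_T, 1), round(heat_lost))
```

The match here is exact by construction (cp·ΔT = M·g·h whenever ΔT follows the DALR); Bryan's small residual presumably reflects the rotational/vibrational mode question he mentions.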

on August 14, 2011 at 1:18 am | DeWitt Payne

Neal,

The Caballero reference was intended for willb, not you. I’m sorry I didn’t make that clearer. It wasn’t a very good idea anyway.

on August 14, 2011 at 2:22 am | willb

DeWitt Payne,

Those are great notes. Thanks for the link.

on August 14, 2011 at 2:20 am | DeWitt Payne

willb,

I think you may be neglecting the hydrostatic part. There’s less pressure at the top of the parcel because there’s less total mass above the top of the parcel than there is at the bottom. The pressure at sea level is ~1E5 Pa because there’s ~1E4 kg of air in each square metre of column. At a pressure of 5E4 Pa, there’s only 5E3 kg of air above that point.

on August 14, 2011 at 3:34 am | DeWitt Payne

willb,

You’re welcome. They’ve been a great help to me as well. It’s practically a textbook and it’s free. I suspect it will be a textbook eventually. It’s been greatly expanded since I first found it a few years ago.

on August 14, 2011 at 8:31 pm | DeWitt Payne

Neal and willb,

Yes you can potentially have any lapse rate less than the adiabatic rate (meteorological convention of positive lapse rate meaning temperature decreases with altitude). But at the adiabatic rate, which is what’s being derived, temperature does decrease with altitude. At some point the subject of buoyancy has to be introduced as the reason for the existence of a maximum stable lapse rate. I think that’s why meteorologists invented the concept of potential temperature.

on August 14, 2011 at 9:14 pm | Neal J. King

DeWitt,

Yes, my point is that one can’t derive the DALR just from kinetic theory and Newton’s laws: You have to specify the thermodynamic/fluid conditions that the temperature profile is to match as well.

However, willb will have to come to that conclusion by exhausting the alternatives.

on August 15, 2011 at 3:51 am | DeWitt Payne

willb,

The adiabatic lapse rate is also isentropic. You might want to consider that.

on August 15, 2011 at 10:33 am | Neal J. King

Yes, this is also more or less the same point I make in my latest note, since:

T*dS = dQ = dU + P*dV

so when dQ = 0, dS = dQ/T also = 0.

The trick is going to be trying to visualize that in kinetic-model terms.

A challenge …

on August 17, 2011 at 4:38 am | DeWitt Payne

willb,

If you actually believe that statement then how does a pressure gradient cause flow in a horizontal pipe?

Suppose you could suddenly turn off gravity. Now you have a pressure gradient with no resisting force. Would you not get flow from high pressure to low pressure? Of course you would. Besides, Neal answered that very point in his last comment.

on August 17, 2011 at 9:43 am | Neal J. King

willb:

As DeWitt said, I already addressed your question.

But here’s another way of thinking about it:

A bale of hay being pounded by machine-gun fire from right and from left. But there are two machine guns firing from the right, and only one from the left.

All other things being equal, which way is the bale going to move?


on August 17, 2011 at 9:03 pm | willb

DeWitt Payne,

I think I believe it, although I’m not married to the idea. Actually, I’m more than a bit surprised that I may have said anything controversial. Perhaps we are dealing with a misunderstanding. I’ll explain why I said what I did in a step-by-step way:

1. Neal said that momentum is conserved during the inter-molecular collisions. This makes sense to me as well.

2. If momentum is conserved, then the combined momentum of two molecules after they collide is exactly the same as before they collide.

3. If the collision doesn’t change the combined momentum of the two molecules, then it doesn’t change the momentum of their center of mass.

4. If the momentum of their center of mass doesn’t change, its velocity doesn’t change either.

5. Velocity is a vector. If the velocity doesn’t change, the z-component of the velocity doesn’t change.

6. If the velocity doesn’t change and the center of mass of the two molecules was drifting up at some rate before the collision, then it will be drifting up at exactly the same rate after the collision. Similarly, if the center of mass was drifting down before the collision, then it will be drifting down at exactly the same rate after the collision.

7. If the two molecules had magically passed through one another without interacting in any way, then their center of mass would have exactly the same velocity as if the molecules had collided.

8. Therefore, no inter-molecular collision is changing the momentum of the gas molecules in a statistical sense and I felt justified in saying that collisions will not impart a statistically upward momentum.

9. Neal said that force is the change in the momentum per unit time and after reviewing Wikipedia I agree.

10. If the collision does not change the momentum of the center of mass, then the force on the center of mass as a result of the collision is zero. I therefore felt justified in saying that the net force due to collisions will be zero.

Regarding your point about fluid flow in a horizontal pipe, I’m not sure I can address it. The kinetic theory only applies to gases, not liquids, so we would have to assume the pipe contained an ideal gas. I’m not an expert on the kinetic theory, but I imagine the logic would go something like this:

– The gas molecules are travelling at high speed with random velocities.

– In a non-uniform pressure environment, the random high-speed velocities cause the molecules to randomly redistribute themselves in whatever container they occupy, giving the appearance of fluid flow.

– The random distribution of both velocity and position results in a uniform pressure.

on August 17, 2011 at 9:38 pm | Neal J. King

willb:

To understand the way it works, you need to look beyond one collision. You also have to consider the correlation between vertical direction and momentum impact:

– If a molecule is heading downward, it is more likely to get a kick upwards than a kick downward.

– Correspondingly, a molecule heading upwards is more likely to get a kick downwards; however

– a molecule heading downwards is generally heading into GREATER density, whereas a molecule heading upwards is generally heading into LESSER density; so the number of kicks per unit time experienced by a downward heading molecule is greater than for an upward heading molecule. (This is true despite the fact that every upward kick happens in conjunction with a downward kick. You need to think about the “career” of the molecule.)

– Therefore, your step 8 is not correct.

– Also, step 10 is not correct: Indeed if it were, you would have the question, “Gravity is constantly pulling down on each molecule, so each one is gaining downward momentum. So why is the average momentum of the gas = 0 ? Why isn’t the gas all rushing down?” And the correct answer is, “Because the pressure gradient is imparting the same average force upwards that the gravitational pull is imparting downward, so the net average force = 0, and the average momentum is also 0.”

Indeed, this is exactly the meaning of the equation of hydrostatic equilibrium: No gravitational force => no pressure gradient.


on August 18, 2011 at 3:36 am | willb

Neal J. King,

My explanation is not restricted to a single collision. It applies to each and every inter-molecular collision within the parcel of gas, irrespective of where that collision happens locally inside the parcel. And I believe it is a valid explanation.

With respect to your question and answer about gravity and pressure in hydrostatic equilibrium, I would modify the answer you gave as follows:

“Because the pressure gradient is imparting the same … force upwards on the parcel of gas that the gravitational pull is imparting downward, so the net … force = 0 on the parcel of gas, and the … momentum of the parcel of gas is also 0.”

Since you are claiming steps 8 and 10 of my explanation are not valid, we could look more closely at my train of logic to get to these steps. If steps 8 and 10 are not correct, then there must be something wrong with steps 2 and 3. Do you agree?

on August 18, 2011 at 9:07 am | Neal J. King

willb:

The problem is not your explanation of the individual collision; the problem is that you fail to understand the “career” of a molecule proceeding upwards vs. downwards. When it is heading downwards, it will be going into more “hostile” (impactful) territory; when it is heading upwards, it will be doing the opposite.

This is why your points 2 & 3 are OK, but point 8 is wrong; and the starting point of point 10 is wrong, so it is inapplicable.

With regards to your proposed change to my explanation: It is not wrong to say that the net force on the parcel of gas is upwards due to the pressure difference; but then you’re missing the understanding of what this means at the level of the individual molecules (which was the point of this exercise). Indeed, you need to look at the logical implications: Since the parcel of gas is just a collection of molecules (there are no solid container walls for the parcel), a force on the parcel is just the sum of the forces on the individual molecules. So if you admit that the pressure gradient is imparting a force on the parcel, it must be somehow imparting a force on the individual molecules. So how is it doing that? How do you visualize what is going on?


on August 18, 2011 at 11:19 pm | willb

Neal J. King,

I fully admit to, agree to and support the idea that the pressure gradient is imparting a force on the parcel. The reason I say this is because outside of the parcel we are looking at the world of gas only in terms of macroscopic properties: pressure, volume, density, temperature, entropy, etc. In this world, pressure is a force per unit area acting on the parcel of gas and in a direction perpendicular to its walls. We can measure the pressure with instruments. We can easily calculate the force of pressure on the parcel because the parcel has a well-defined box-like shape. From our discussion so far I am reasonably sure we are in pretty good agreement here. It’s when we go inside the parcel that our mental pictures seem to diverge.

You asked:”How do you visualize what is going on?” My view of the kinetic theory is that it explains macroscopic properties (pressure, temperature, etc.) in terms of the mass, position and velocity of the individual gas molecules. That is, everything is explainable in terms of the molecules themselves. Everything is derivable using molecular mass, position and velocity as the inputs. In this view pressure is simply a manifestation of the kinetic energy of the individual molecules. Pressure is a function of kinetic energy, not the other way around.

This is perhaps why I appear to keep balking when you try to explain or impose changes to molecular motion based on pressure. To me, this is counter-intuitive when working in the microscopic gas world. To my mind, the kinetic theory should be used to explain or derive pressure. I don’t see pressure as being a good input parameter to the kinetic theory.

As an additional point, I view the walls of the parcel as a bridge between the microscopic world inside the parcel and the macroscopic world outside of it. The outside of the walls are surfaces on which the outside pressure acts to counter gravity and keep the parcel of gas suspended in the atmosphere. The inside of the walls are surfaces which, through elastic collision, contain the molecules inside the parcel.

Regarding the validity of my 10-step explanation, I guess we should probably just “agree to disagree” for now on the relationship of steps 2, 3, 8 and 10. I think we need to understand each other’s mindset better before this issue can be resolved.

on August 19, 2011 at 12:06 am | DeWitt Payne

willb,

There’s your problem. The kinetic theory of gases is all about large ensembles of particles, not individual particles. A strict mechanistic approach will not work.

The Feynmann Lectures on Physics, 39-1, 1963:

You can only predict averages, not individual values.

on August 19, 2011 at 1:15 am | Neal J. King

willb:

It’s good to try to understand the macroscopic concepts (pressure, work, volume, temperature) in terms of the microscopic. Perhaps what’s confusing you a bit is that I’m working with both levels, because I need to be sure that they are consistent with each other; and so far they are. I already got an interesting insight on the nature of adiabatic expansion from thinking about this problem.

However, I don’t believe you’ve thought through the implications of a kinetic model of a gas. If the model is any good, it has to be able to explain what is observed at macroscopic levels. I think I’ve done about as much explaining as it’s possible to do: If you want to understand this issue, YOU have to “own” this problem, concentrate on it, and come to a conclusion – a conclusion that is compatible with the rest of physics.

EXPLAIN THIS: Assume a macroscopic volume of gas at a constant temperature. Every molecule of gas is pulled down by gravity. Why don’t the molecules all rush to the ground?


on August 19, 2011 at 3:25 am | willb

DeWitt Payne,

I certainly don’t want to argue with Feynman. The man’s a giant. Therefore I will alter my thinking and my statement and say:

My view of the kinetic theory is that it explains macroscopic properties (pressure, temperature, etc.) in terms of the statistics of the mass, position and velocity distributions of the gas molecules.

on August 19, 2011 at 3:58 am | willb

Neal J. King,

The short answer to your question is that the molecules have way too much kinetic energy to fall to the ground. Consider helium gas (a monatomic molecule) at room temperature. According to Maxwell–Boltzmann, the molecules have a mean velocity of approximately 1,100 m/s. The mean z-component velocity would then be 635 m/s. If the gas is contained in a 1-meter cube, the molecules will be bouncing rapidly back and forth between the container’s ceiling and floor with a typical transit time of less than 2 milliseconds. In 2 milliseconds, gravity will have virtually no effect on a 635 m/s velocity.
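willb's figures look like the most probable Maxwell-Boltzmann speed at about 293 K, with the z-component taken as v/√3; a quick check (my own sketch):

```python
import math

kB = 1.380649e-23    # J/K, Boltzmann constant
m = 6.646e-27        # kg, mass of one helium atom
T = 293.0            # K, room temperature
g = 9.81             # m/s^2

v_p = math.sqrt(2 * kB * T / m)   # most probable speed: ~1.10e3 m/s
vz = v_p / math.sqrt(3)           # "typical" z-component: ~637 m/s
transit = 1.0 / vz                # time to cross a 1 m box ballistically
dv = g * transit                  # speed gravity adds during one transit

print(round(v_p), round(vz), round(dv, 3))   # gravity adds only ~0.015 m/s
```

The numbers support willb's claim on its own (ballistic) terms; whether molecules actually cross the box ballistically is exactly what Neal challenges next.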

on August 19, 2011 at 9:55 pm | Neal J. King

willb:

There are a couple of major problems with your explanation:

– As I calculated somewhere above, at standard temperature & pressure (STP), the mean free path of molecules in the atmosphere is about 3.26e-7 (m), so a random walk will move you 1 (m) after N = (1/(3.26e-7))^2 = 9.4e12 collisions, traveling a linear distance of 9.4e12 * 3.26e-7 = 3e6 (m). At a speed of 1.1e3 (m/s), that would take t = 3e6/1.1e3 = 2786 seconds = 46.4 minutes.

So the molecule is not going to be zipping from one end of the box to the other, it will be moseying along like a bee in a clover field. 46 minutes is plenty of time for the molecule to notice gravity (or, more precisely, for gravity to affect its trajectory).

[Reference: http://en.wikipedia.org/wiki/Random_walk#Properties_of_random_walks ]
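The arithmetic is easy to reproduce (same inputs Neal quotes):

```python
lam = 3.26e-7    # m, mean free path at STP (Neal's figure)
L = 1.0          # m, net displacement to cover
v = 1.1e3        # m/s, molecular speed

N = (L / lam) ** 2    # random-walk steps needed to drift a net distance L
path = N * lam        # total path length actually travelled
t = path / v          # time taken

print(f"{N:.1e} collisions, {path:.1e} m of path, {t:.0f} s (~{t/60:.0f} min)")
```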

– If a molecule were to be able to get from the top to the bottom of the box without colliding with other molecules, it would be losing the z-component of velocity without losing the x- or y-components. That would mean that, for the population of molecules in the gas,

avg(m*v_x^2/2) = avg(m*v_y^2/2) = (kB)T/2

while:

avg(m*v_z^2/2) => 0

That would amount to a freezing-out of one degree of dynamical freedom through a spontaneous process. I am quite sure that if we could get this to happen reliably, on a macroscopic basis, there would be tons of $$ in it, because it would be possible to consistently violate the 2nd law of thermodynamics. In fact, even in the case of a very thin gas, the interactions with the wall will need to be taken into account (Gedanken experiments need to be analyzed in very picky detail).

This might be an interesting thing to think about in the context of very very thin low-temperature gas; but is not relevant for gases at STP, as discussed above.


on August 19, 2011 at 10:04 pm | Neal J. King

Correction to equation above:

avg(m*v_x^2/2) = avg(m*v_y^2/2) = (kB)T/2

while:

avg(m*v_z^2/2) => 0

as you progress up through the gas.

on August 20, 2011 at 2:58 am | Neal J. King

– Plus one more that I forgot to mention earlier: Even a fast-moving object doesn’t “escape” being affected by gravity, it’s just not as obvious. Even in the case of a model of molecules as balls bouncing around ballistically, the effect of gravity shows up in the fact that the balls are moving faster at lower heights than at higher heights, because the value of |v_z| is higher, and the values of v_x and v_y are unchanged. The result is that when the balls bounce on the floor, they have more impact than they do when they bounce on the ceiling. So when you average the force on the ceiling and the force on the floor, the floor force will average out greater — by just the amount of the gravitational force on the ball!

Gravity never sleeps.
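That last picture can be checked exactly for a single ballistic ball (my own sketch; the mass and speed are illustrative): averaged over one round trip, the floor receives more impulse than the ceiling by precisely the ball's weight.

```python
import math

m, g, H = 6.6e-27, 9.81, 1.0        # ball mass (kg), gravity, box height (m)
v0 = 635.0                           # upward speed at the floor (m/s)

v_c = math.sqrt(v0**2 - 2 * g * H)   # speed on reaching the ceiling
period = 2 * (v0 - v_c) / g          # full round-trip time
F_floor = 2 * m * v0 / period        # mean force on the floor (impulse/time)
F_ceil = 2 * m * v_c / period        # mean force on the ceiling

print(F_floor - F_ceil, m * g)       # the difference equals the weight m*g
```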


on August 20, 2011 at 4:50 am | willb

I’m not going to dispute this last image.

on August 20, 2011 at 11:30 am | Neal J. King

willb:

Well, the point is that your ballistic model of a gas is no good for parcel dimensions larger than 1 mean free path, which is a very microscopic size. For macroscopic distances (and in particular for the earth’s atmosphere), the atmosphere is self-supporting (i.e., higher layers are supported by lower layers, and lower layers by yet lower layers, until you reach the ground).

If you want to understand the atmosphere in a microscopic way, you need a conceptually acceptable understanding of how a pressure gradient supports the atmosphere.

on August 20, 2011 at 3:27 pm | willb

Neal J. King,

I don’t really have a model of anything at this point, and I’m certainly not trying to create a new type of ballistic model of gas. I wish to apply the kinetic theory of gas to the DALR in a mutually acceptable way. All I’ve been doing up to now is to try and create a mental picture of what’s going on inside a kinetic theory-based parcel of gas. Actually, I thought your last few comments were creating some good insight into this mental picture.

If there are aspects to my image of a kinetic theory-based parcel of gas that you don’t like, then I am quite willing to make an effort to modify it. And if we do ever manage to arrive at a mutual understanding of what a kinetic theory-based parcel of gas looks like, what the molecules inside are doing, and what internal forces and energies might be in play, then I would say we would have a concept with which to work from. At that point we might be able to do some analysis. But we are clearly not there yet.

To apply the kinetic theory, we can’t have a single-molecule parcel. We have to have a parcel with a large enough number of molecules so that statistical treatment can be applied. There is no give on this. This is a requirement of the kinetic theory.

You say: “If you want to understand the atmosphere in a microscopic way, you need a conceptually acceptable understanding of how a pressure gradient supports the atmosphere.”

To that I say we already have that understanding through macroscopic analysis. What the microscopic analysis can give us is additional insight into these macroscopic properties. My own view is, if we ultimately want to do a proper kinetic theory analysis, then we can use the macroscopic properties (temperature, pressure, etc.) as boundary conditions, but internal to the parcel we need to do our own derivation of these properties and see what pops out.

So, do you think the discussion is worth pursuing, or am I being too intransigent?

on August 20, 2011 at 4:17 pm | Neal J. King

willb:

– “To apply the kinetic theory, we can’t have a single-molecule parcel.” This has never been a proposal: I was just scaling down the macroscopic theory to point out that there is a term reflective of the pressure gradient that doesn’t go away.

– “To that I say we already have that understanding through macroscopic analysis. What the microscopic analysis can give us is additional insight into these macroscopic properties.” Well, we have to understand how these macroscopic parameters reflect microscopic realities. Specifically, we (or rather, you; as I think I have a clear understanding of this issue) need to come up with an understanding of how pressure gradients support the atmosphere. Models that only work on a length-scale of 1e-7 (m) or less won’t do. No progress is possible unless you can do that.

As Samuel Johnson once said, “Sir, I have found you an explanation, but I am not obliged to find you an understanding.”


on August 20, 2011 at 7:45 pm | willb

Neal J. King,

Let me ask you a question. Do you think it’s possible to derive pressure from the kinetic theory? Let’s say there is no gravity. You have a box of helium at room temperature, 1 meter on a side, anchored to a table. The box contains N molecules of helium. Do you think it’s possible to derive the pressure that the helium exerts on the walls of the box, using the kinetic theory of gases?

on August 20, 2011 at 8:18 pm | Neal J. King

willb:

Yes, of course. You calculate the momentum transferred by the molecules bouncing off a section of wall, per unit time, per unit area. With the Maxwell distribution of velocities, you get

P = N(kB)T/V

You can also do it with the Virial Theorem, but it’s not as intuitive.
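The flux calculation Neal describes can be sketched with a small Monte Carlo (my own illustration; the density and molecular mass are made-up round numbers): the momentum kick 2·m·v_z, weighted by the flux of upward-moving molecules, integrates to n·kB·T.

```python
import math, random

random.seed(1)                       # reproducible sketch
kB = 1.380649e-23                    # J/K
T = 300.0                            # K
m = 4.65e-26                         # kg, roughly an N2 molecule (assumed)
n = 2.5e25                           # molecules per m^3 (assumed)

sigma = math.sqrt(kB * T / m)        # width of the 1-D Maxwell (Gaussian) v_z
vz = [random.gauss(0.0, sigma) for _ in range(200000)]

# Wall pressure: kick 2*m*v_z per bounce, times the flux n*v_z of molecules
# with v_z > 0.  Integrating 2*m*v_z^2 over the upward half of the Gaussian
# equals m*<v_z^2> over the whole distribution, so:
P = n * m * sum(v * v for v in vz) / len(vz)

print(P, n * kB * T)                 # the two agree to sampling error
```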

on August 20, 2011 at 8:48 pm | willb

Neal J. King,

For this calculation, what do you assume for the momentum, and how do you derive the ‘per unit time’ value?

on August 20, 2011 at 9:23 pm | Neal J. King

willb:

momentum = m*(v_x, v_y, v_z)

– Find how many molecules have velocity in cell size (dv_x, dv_y, dv_z) around the velocity (v_x, v_y, v_z): Assume the Maxwell distribution with temperature T, mass m.

– Pick area dx*dy on the floor; calculate how many molecules will hit the area in a specific time dt. Multiply by the momentum kick, 2*m*v_z (z^). This is integrated over d^3v. There is a factor of v_z in the integrand, because the faster v_z is, the more volume of gas is taken into account for the given dt.

This is done in a hand-waving fashion at:

http://en.wikipedia.org/wiki/Kinetic_theory

If you search around with the terms: pressure, maxwell distribution, kinetic theory; you might find something more detailed.


on August 20, 2011 at 9:51 pm | willb

Neal J. King,

How do you deduce how many molecules will hit the area on the floor in time dt? Just a conceptual idea will suffice.

on August 20, 2011 at 10:14 pm | Neal J. King

willb:

If a particle is at position (x, y, z) and has velocity (v_x, v_y, v_z), at time dt later, it will be at position (x + dt*v_x, y + dt*v_y, z + dt*v_z). So the question is, Did it pass through the square dx*dy oriented flat at z = 0 during this time?

If position = (0, 0, hz) and v = (0, 0, -u), at time dt, position′ = (0, 0, hz – u*dt); so if

hz – u*dt < 0

the answer is "Yes."

If position = (hx, 0, hz), you need to find the right range of v_x such that the x-coordinate will be in the range (-dx, +dx) at the time that z = 0. This will take some figuring out of the logistics. The idea is simple in principle, but will take some sorting out.

The maxwell distribution tells you what proportion of molecules will have these velocities. However, you get a break: As I recall, when you sort it all out, you don't have to actually do the integrals, because they are integrated over exactly half their entire range (0 to infinity), so the answers turn out to be either (1/2) or related to RMS(v_z).

on August 21, 2011 at 3:17 pm | willb

Neal J. King,

Since you are not integrating over time, what value do you assign for dt? Its value must be greater than ‘0’, otherwise u*dt would be ‘0’ and you would calculate no collisions against the wall. Again, just a concept will suffice.

on August 21, 2011 at 3:31 pm | Neal J. King

willb:

You have to assume a finite (non-zero) value for dt: small enough that a lot of inter-molecular collisions have not occurred, big enough that a reasonable number of molecular-wall collisions HAVE occurred. So you can assume the maxwell distribution of velocities is statistically valid during the period related to the computation.

Within these constraints, the exact value of dt will cancel out when you calculate the pressure, so it doesn’t matter.


on August 21, 2011 at 4:09 pm | willb

Neal J. King,

As you suggested, I did an internet search on this topic. The only value I found for dt is 2*L/v (L being the distance between opposite walls of the container of gas). This is the same value derived and used at the following links:

http://galileo.phys.virginia.edu/classes/252/kinetic_theory.html

http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/kinthe.html

http://en.wikipedia.org/wiki/Kinetic_theory

http://www.antonine-education.co.uk/Physics_AS/Module_2/Topic_9/topic_9__kinetic_theory.htm

Does this seem like an appropriate value to you?
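For reference, the textbook argument behind that dt = 2L/v, condensed (the standard derivation those pages use, not anyone's comment here):

```python
kB = 1.380649e-23    # J/K
N = 1000             # molecules in the box (illustrative)
L = 1.0              # m, box side
T = 300.0            # K
m = 4.65e-26         # kg, molecular mass (illustrative)

vx2_mean = kB * T / m          # equipartition: <v_x^2> = kB*T/m
force = N * m * vx2_mean / L   # each molecule delivers impulse 2*m*v_x every
                               # dt = 2*L/v_x, i.e. mean force m*v_x^2/L;
                               # summed over all N molecules
P = force / L**2               # pressure on one L-by-L wall

print(P, N * kB * T / L**3)    # reproduces P = N*kB*T/V
```

Note that dt = 2L/v is the round-trip time of one molecule between opposite walls, so it drops out of the final pressure, which is willb's point about dt below.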

on August 21, 2011 at 5:46 pm | Neal J. King

willb:

OK, I’ve done a bit more thinking about how to do the calculation of pressure. I think it works out best if you consider the length-scale of just under 1 mean free path (mfp). Why?

– For distances longer than 1 mfp, you can’t think “ballistically” at all: the molecules get in each other’s way. So you can’t relate the instantaneous velocity to where the molecule is going to be later, because it’s going to be deflected.

– However, when we consider the “patch” of floor on which we’re studying the impinging of the molecules, its dimensions should be many mfp’s, because the fraction of velocity space that contributes towards the pressure on the patch depends on the location of the starting point of the molecules (the dx dy dz above the floor): if it’s located above the interior of the patch, essentially ALL downward molecules will hit the patch; if it’s located well outside the interior (and above), essentially NO molecules will hit the patch; and if it’s located above the edge region of the patch, the fraction depends sensitively on its location relative to the edge. But if the patch is quite large relative to the mfp, the area of the edge region is unimportant relative to that of the interior, so we can do the calculation as if the starting point were either above and in the interior (all downward traveling molecules hit the patch), or if the starting point were above and well away from the patch (NO molecules hit the patch).

– So when you consider the population of molecules that will contribute to the pressure on the patch, you only have to consider the vertical column of air directly above the floor patch, and you only have to be worried about the vertical component of velocity. This approach leads to the correct formula for the perfect-gas law.

– Why are we able to discuss this matter at the sub-mfp scale when we couldn’t discuss the issue of the pressure structure of the atmosphere at that length scale? Because in this case, we are asking about the pressure that is impinging on the floor/wall, not about what is going on in the interior of the gas. For the wall, which is a well-defined hard boundary, we want to be able to use the ballistic visualization of the gas, as it is easy to analyze. In principle, we could work the problem at larger scales, but then we would have to take into account inter-molecular collisions: very messy. The answer has to work out the same anyway, because momentum cannot be absorbed, so neither can force or pressure.

Whereas in the interior of the gas, the pressure is the effect of inter-molecular collisions; so they cannot be ignored.


on August 21, 2011 at 5:53 pm | Neal J. King

I just realized that might be slightly confusing, so I’ll re-summarize:

– The problem should be defined as the calculation of the force exerted upwards by a horizontal floor patch upwards on the gas, due to the bouncing of the molecules off the patch. (Gravity is being ignored.)

– The dimensions of the patch should be X * Y, where both X and Y are lengths much larger than 1 mfp.

– The length scale at which the molecular kinematics should be considered is just below 1 mfp, so you can accurately visualize the momentum-exchange dynamics as due to straight-line trajectories.

on August 21, 2011 at 7:24 pm | willb

Neal J. King,

As you suggested, I did an internet search on this topic. The only value I found for dt is 2*L/v (L being the distance between opposite walls of the container of gas). This is the same value derived and used at 4 different websites, including the University of Virginia and Georgia State University, plus Wikipedia.

Does this seem like an appropriate value to you?

(I tried to make this comment with the actual links included, but I think it was trapped by a spam filter.)

on August 21, 2011 at 7:34 pm | Neal J. King

willb:

It should be OK, as long as it’s only used for the pressure on the wall/floor.

on August 21, 2011 at 7:57 pm | willb

Neal J. King,

The good thing about this number for dt is that the timing dimension now meets the kinetic theory requirement for a large enough number of molecules so that statistical treatment can be applied. The bad thing about it is that now there will be inter-molecular collisions occurring within this time span.

The various derivations I saw all seemed to ignore these inter-molecular collisions.

on August 21, 2011 at 8:29 pm | Neal J. King: willb:

– Whether there are collisions on this timescale depends on the relative sizes of the box dimensions (L) and the mfp. They are implicitly assuming that the mfp > L; otherwise, as I stated before, the analysis would get very ugly.

– Their specific choice of dt = 2L/v is appropriate for the highly simplified 1-dimensional model. But the argument can easily be made without such special assumptions about the geometry, as I sketched out above. This is a hand-waving argument, but since it gives the same answer as the more correct approach, people tend to accept it.

on August 21, 2011 at 11:03 pm | willb: Neal J. King,

If one were to consider a box with dimension L < 1mfp, this would be a box of one molecule, clearly violating one of the fundamental postulates of the kinetic theory. Therefore I think these websites are assuming larger dimensions for the box, large enough to contain many gas molecules.

I think there is another reason that these calculations are able to get away with assuming large dimensions for the container and ignoring inter-molecular collisions. It is because the collisions are elastic and instantaneous and, for each inter-molecular collision, momentum is conserved. Consider the result of stopping time momentarily and then taking a snapshot measurement of all the molecules. At that instant the molecules would be randomly distributed in the box with their velocities distributed according to Maxwell-Boltzmann. At that instant, the molecular momentums would also be distributed according to Maxwell-Boltzmann.

Now consider what would happen when time started again. All of those molecular momentums would have to be conserved regardless of how many inter-molecular collisions subsequently occurred. For each collision, the momentum of one molecule, or some portion of it, would simply be handed off to the other molecule involved in the collision. The original total momentum of each molecule, measured at the moment when time was stopped, would continue to advance in space according to the speed and direction that was measured at the stopped-time moment.

The only collision that could possibly change the momentum would be a collision with one of the container walls. The only effect this collision would have on the molecular momentum would be to reverse the sign of the component direction of the momentum that corresponded to the wall that was struck.

The calculation for pressure starts with molecular momentum, and since inter-molecular collisions don't affect momentum in a statistical sense, the analysis is free to ignore inter-molecular collisions.

on August 22, 2011 at 11:49 am | Neal J. King: willb:

You’re making too much of a model that is just hand-waving. Think about this:

– If they are ignoring inter-molecular collisions, they are inherently thinking of length scales below 1 mfp, because 1 mfp is the typical distance between collisions! It doesn’t matter whether you call it “L” or “Kalamazoo”: if it’s ignoring collisions, the calculation is only self-consistent if the length scale is less than 1 mfp. Otherwise, x(t + dt) = x(t) + dt * v(t) is not valid: for dt greater than 1 mfp/v, a collision will deflect the path of the molecule. Without this equation, you can’t relate the current position of the molecule to the issue of whether or not it will hit the floor patch, and contribute to the pressure.

In the end, the pressure is going to be the pressure at whichever length scale you do the calculation, because it is “communicated” to other molecules (that are farther from the walls) by inter-molecular collisions. But you have to understand the length scale at which it is being evaluated; and that is definitely at the scale of the “ballistic” view, below 1 mfp. Above that scale, the basic equations of the model don’t make sense.

on August 22, 2011 at 11:58 am | Neal J. King: I keep running into trouble with the darn “less than”/“more than” sign.

Let’s try again:

.. if it’s ignoring collisions, the calculation is only self-consistent if the length scale is greater than 1 mfp. Otherwise,

x(t + dt) = x(t) + dt * v(t)

will not be valid for dt greater than 1 mfp/v , because a collision will deflect the path of the molecule.

It’s like trying to project the path of football player (quarter-back?) over a 60-second interval: You can’t do it, because he’s going to get tackled within 20 seconds. (I’m not a football fan, so my specific parameters could be off; but I hope you get the point.)

(bbb11)

on August 22, 2011 at 3:25 pm | Neal J. King: grrrr…

“.. if it’s ignoring collisions, the calculation is only self-consistent if the length scale is greater than 1 mfp.”

=>

“.. if it’s ignoring collisions, the calculation is only self-consistent if the length scale is LESS than 1 mfp.”

SoD, I HATE THE FACT THAT THIS SITE DOESN’T ALLOW POST-POST EDITING!

on August 23, 2011 at 1:20 am | willb: Neal J. King,

I don’t think I’m trying to read too much into the model. Rather, I’m trying to obey the rules of the model. L < 1mfp violates the rules of the model. I don't think the intent of the kinetic theory is to do molecule-by-molecule calculations. I think the intent is to do statistical calculations.

Regarding your football player analogy, as you say the quarterback is going to get tackled. But it will be an elastic tackle and the tackler is going to end up with the ball. And he's going to keep running with it in the same direction that the quarterback was running.

on August 23, 2011 at 1:52 am | Neal J. King: willb:

– The point is that as soon as you consider a length greater than L, the model is conceptually incoherent: a molecule CANNOT be expected to proceed beyond distance L without a collision. So if you want to do the calculation with that length, be my guest: But then show exactly how the collisions affect the calculation, and how they are affected by what time the collisions take place, etc.

– If instead you are honest and just say, “We’re considering timescales shorter than L/v,” you can still do a perfectly valid statistical argument, for two reasons: a) a time average of a typical particle is equivalent to an average over the total population (ergodic hypothesis); and b) if you consider a large floor-patch, even in a time shorter than L/v, many particles, located at different (x,y) positions, hit the patch within that period.

In fact, the model they are using actually IS equivalent to a less-than-1-mfp model, but it doesn’t become visible because, as I said before, the exact value of dt cancels out in the calculation. But they’re cheating; and this is also evident because the size of their box is 2L, which is about 7*10^(-7) meters: a pretty small box, hardly macroscopic!

– Regarding the football analogy: A real gas is 3-dimensional, and a football game is only 2-dimensional. But neither can be properly matched by a 1-dimensional model. Because even in the 2-dimensional case of football, when the quarterback gets tackled, his trajectory IS NOT continued by the tackler: The tackler will either stop him or hit him at an angle and turn the direction. If football were played the way you are suggesting, it would not matter if the quarterback were tackled at all, because the tackler would obligingly complete the touchdown for the downed quarterback! That’s not realistic for football, and it’s not realistic for the perfect gas.

(bbb12)

on August 23, 2011 at 8:57 am | Neal J. King: willb:

Here’s a specific example of how the timescale longer than (L/v) doesn’t work for the ballistic model:

Imagine that the period dt has just started:

– molecule A is headed upward, with speed u;

– molecule B is headed downward, with speed u, and is close to the floor patch;

– they collide: now A is headed LEFT, and B is headed RIGHT; both have speed u.

So the total momentum, before and after the collision, is zero; and the total kinetic energy, before and after, is 2*m*(u^2/2) = mu^2. So this is an elastic collision.

But you see that molecule B’s “expected” collision with the floor patch has been prevented by the collision with A.

A coherent model at the timescale greater than (L/v) needs to take into account such collisions.

(bbb13)

on August 25, 2011 at 12:28 am | willb: Neal J. King,

I don’t really disagree with you that, if you want to do a rigorous molecule-by-molecule analysis, then collisions are happening and they need to be accounted for. However, if you want to opt for this route, I disagree with you if you are saying you can somehow eliminate collisions by assuming a small enough value for L. It doesn’t matter what the mfp is, the free paths are statistically distributed. There will always be a probability that some paths will be less than L, regardless of how small a value you choose for L.

If you don’t accept it, then I’m not going to attempt to defend the analysis on the four websites I linked to. I was looking for a mutually acceptable intuitive understanding of what’s happening. Clearly you think this is not being provided by the analysis on these websites.

on September 15, 2011 at 4:57 pm | DeWitt Payne: Bryan,

What you are measuring with a 20 m length of PVC pipe is the temperature profile of the PVC pipe. If you use thin walled pipe, you’re measuring the temperature profile of the insulation. There is no way that the conductivity of the gas in the tube, absent convection will equilibrate the temperature profile of the wall in three days. The heat capacity of the wall plus insulation is going to be orders of magnitude higher than the gas in the pipe. Any heat leaks will cause convection and invalidate the experiment. Adding something like sand to prevent convection means you’re measuring the properties of the sand not the gas.

on August 25, 2011 at 6:34 am | Neal J. King: willb:

– By choosing a length scale much smaller than 1 mfp, you can easily reduce to insignificance the number of inter-molecular collisions: Just as you can in viewing a football game. It’s not necessary to eliminate them entirely to do the analysis, just to reduce their proportion.

– As I said originally, these arguments are merely hand-waving that happens to give the right answer. I can easily produce a much more valid argument to calculate the pressure, that avoids these problems.

(By the way, this hand-waving argument ends up substituting

avg(|v|)^2 for avg(v^2); so actually the answer is off by a numerical factor. Or another way of looking at it: They’re assuming all molecules travel at exactly the same speed, instead of having a maxwellian distribution; another aspect of the hand-waving.)

(bbb14)

on August 27, 2011 at 10:20 am | Neal J. King: willb:

So have you given up on trying to understand pressure on a microscopic scale?

on August 27, 2011 at 6:55 pm | willb: Neal J. King,

No, I haven’t given up just yet. Sorry for the slow responses, I’ve been multi-tasking.

By doing this, don’t you reduce to insignificance the number of wall collisions? Also, in considering the number of molecules in a space defined by dimensions much less than 1 mfp, don’t you reduce to insignificance the probability that there is even one molecule in the enclosure?

on August 27, 2011 at 7:11 pm | Neal J. King: willb:

No, because, at standard temperature & pressure, the average distance between molecules is:

a = (1/density)^(1/3) = 3.45e-9 (m)

whereas the mfp is:

mfp = 1/(density*pi*(2e-10)^2) = 3.26e-7 (m)

hence:

mfp/a = 94.7

So if I choose a length scale between them, say (mfp/5), I can neglect inter-molecular collisions. I could even go below the scale a, since I’m calculating a rate, and a rate can be below one event per time segment.
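[Editor's note: Neal's figures are easy to verify. A quick sketch (mine, not from the thread; it assumes an ideal gas at 300 K and 1 atm, which is the density his a = 3.45e-9 m implies, and the 2e-10 m molecular radius he uses):]

```python
import math

# Assumptions (not stated in the comment): ideal gas at 300 K and 1 atm,
# molecular radius 2e-10 m as in Neal's mfp formula.
k_B = 1.380649e-23                  # Boltzmann constant, J/K
n = 101325 / (k_B * 300.0)          # number density, ~2.45e25 m^-3

a = (1.0 / n) ** (1.0 / 3.0)        # mean intermolecular spacing
mfp = 1.0 / (n * math.pi * (2e-10) ** 2)  # mean free path, same formula as above

print(f"a     = {a:.3e} m")         # ~3.45e-9 m
print(f"mfp   = {mfp:.3e} m")       # ~3.26e-7 m
print(f"mfp/a = {mfp / a:.1f}")     # ~94
```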

(bbb15)

on August 28, 2011 at 12:31 am | willb: Neal J. King,

Ok, I will accept your argument that, to derive pressure, you can legitimately choose a very small value for L, thus reducing the kinetic theory model to a much simpler ballistic model. Can I assume your previous description of the technique to do the derivation still holds? I am referring to your comments on Aug 20 – 21. If I may paraphrase what you said:

a) The objective is to calculate the momentum transferred to the floor from gas molecules bouncing off it, per unit time (dt), per unit area (dA).

b) The momentum is determined by 2 factors: the number of molecules that hit the unit floor area dA in time dt; and the vertical component of each molecule’s velocity when it hits the floor.

c) The number of molecules that hit dA in time dt is derived from N (the number of molecules in the container), V (the volume of the container), and v (the Maxwell-Boltzmann velocity distribution of the gas molecules).

d) The vertical component of the molecular velocity is derived only from v (the Maxwell-Boltzmann velocity distribution).

Am I paraphrasing correctly and did I capture the essential aspects of your derivation?

on August 28, 2011 at 11:12 am | Neal J. King: willb:

That looks about right.
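[Editor's note: steps (a)–(d) above can be sketched numerically. This is my own minimal Monte Carlo construction, not Neal's; it assumes nitrogen at 300 K and 1 atm. Sampling vertical velocity components from the Maxwell-Boltzmann distribution and forming the momentum flux onto the floor, which reduces to n·m·⟨vz²⟩, recovers the ideal-gas pressure n·k_B·T.]

```python
import math
import random

# Assumed parameters: N2 molecules at 300 K, number density chosen for 1 atm.
k_B = 1.380649e-23
T = 300.0
m = 4.65e-26                    # mass of an N2 molecule (kg)
n = 101325 / (k_B * T)          # number density giving P = 1 atm

random.seed(0)
# In Maxwell-Boltzmann, each velocity component is Gaussian with
# standard deviation sqrt(k_B*T/m).
sigma = math.sqrt(k_B * T / m)
N = 200_000
vz2_mean = sum(random.gauss(0.0, sigma) ** 2 for _ in range(N)) / N

# Momentum flux onto the floor (momentum per area per time) = n*m*<vz^2>.
P = n * m * vz2_mean
print(f"P from sampling ~ {P:.0f} Pa (ideal-gas value: {n * k_B * T:.0f} Pa)")
```

With 200,000 samples the statistical scatter is a fraction of a percent, so the sampled pressure lands within a few hundred pascals of 101325 Pa.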

on August 28, 2011 at 6:06 pm | willb: Neal J. King,

If gravity were suddenly switched on, do you think it would be possible to modify your method for deriving pressure so that it is able to take into account the force of gravity? What in your view would be the main issues to overcome?

on August 28, 2011 at 6:34 pm | Neal J. King: willb:

It doesn’t make much difference, because the additional momentum due to gravity would be:

dp = -mg*dt

Since the typical value of p = sqrt(3(kB)mT),

dp/p = -mg*dt/sqrt(3(kB)mT) = -dt*g*sqrt(m/(3(kB)T))

So for dt less than sqrt(3(kB)T/m)/g , we can ignore gravity in doing the calculation of how pressure depends on local thermodynamical variables (equivalent to maxwell parameters). I would assume without doing the arithmetic that this condition is already well-satisfied when we consider the time-scale limits already discussed.
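[Editor's note: the "without doing the arithmetic" step can be filled in with a few lines (my sketch; it assumes N2 at 300 K and the mfp quoted earlier in the thread):]

```python
import math

# Assumed values: N2 molecules at 300 K; mfp taken from the earlier comment.
k_B = 1.380649e-23
m, T, g = 4.65e-26, 300.0, 9.81
mfp = 3.26e-7                        # mean free path (m)

v_typ = math.sqrt(3 * k_B * T / m)   # typical thermal speed, ~520 m/s
dt_gravity = v_typ / g               # dt at which |dp/p| ~ 1: tens of seconds
dt_ballistic = mfp / v_typ           # dt of the ballistic picture: sub-ns

print(f"dt_gravity   ~ {dt_gravity:.1f} s")
print(f"dt_ballistic ~ {dt_ballistic:.1e} s")
```

The two timescales differ by roughly eleven orders of magnitude, so the condition is indeed well-satisfied.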

(bbb16)

on August 29, 2011 at 3:43 am | willb: Neal J. King,

Maybe I asked the wrong question. Let me ask this one: If gravity were suddenly switched on, do you think there would be a higher pressure from the gas against the floor compared to the pressure against the ceiling of the box?

on August 29, 2011 at 6:24 pm | Neal J. King: willb:

The question has to be considered as a function of time:

– Instantaneously, no: the additional momentum added to molecules impinging on the floor is only dp = – mg*dt per molecule. Over the very short time dt, the effect on the pressure is only due to the gravitational force on the mass of the molecules that are within the distance of about dt*avg(|v|). That’s not very much.

– However, on a longer time scale, the increased momentum from the very bottom layer of the gas will press down on the next “layer” above it; and that increased pressure is communicated upward by the increased internal pressure of the second layer; and so on upward. This increased pressure is a macroscopic view of increased density and increased vertical momentum. There will be a wave of pressure that will travel upward at the speed of sound (because that is what sound is).

– After a few cycles of the sound waves bouncing around the volume, the pressure gradient will be as specified by the hydrostatic equilibrium equation. Then the difference between pressure at the bottom and pressure at the top will differ by the amount required to support the gravitational mass in-between. The pressure difference will be reflected in the difference in density at top and bottom.

So, in summary: There will be no instantaneous change in pressure. However, a pressure gradient, due to pressure from the floor, will rapidly build upward within the gas and will be normalized within the gas at the speed of sound.

A good way to visualize this is to imagine an elevator car in space, with marbles zipping around. Suddenly, the elevator car begins to move “upward” at constant acceleration g: the bottom-most marbles get a kick from the floor, producing a wave of inter-marble collisions that also moves “upward”. Because the car continues to accelerate, the density of marbles is always higher at the bottom; however, the random energy will equalize (for volumes not too big).

(bbb17)

on August 30, 2011 at 12:16 am | willb: Neal J. King,

Let’s say we wait a long enough time so that hydrostatic equilibrium occurs. When this happens, you have already explained in a macroscopic way how the gravitational mass of the gas is supported, that is through a pressure differential.

Do you have an explanation for how the gravitational mass is supported from a microscopic perspective, that is through molecular motion and momentum transfer?

on August 30, 2011 at 11:08 am | Neal J. King: willb:

A pressure differential is a differential in inter-molecular bombardment. For example, if we assume uniform temperature (which would be about right, in this case), the density of molecules will be greater at the bottom than at the top, with a gradient between. The bombardment, and hence momentum exchange, will therefore be greater towards the bottom and lesser towards the top. Since the molecules in a layer are getting hit more from below than from above, they experience a net average force upwards from these collisions. In equilibrium, this net average force exactly balances the gravitational pull downwards.

The equilibrium is reached because if the pressure differential were too high or too low, the collection of molecules would react to produce a feedback that would “correct” the differential: Think again about a collection of ball-bearings sustained by an ongoing blast of machine-gun fire.

(bbb18)

on September 1, 2011 at 1:47 am | willb: Neal J. King,

I gather from your answer that you don’t think it’s possible to come up with a kinetic theory-based expression for the pressure on the floor and ceiling of the box once gravity is added to the scenario. That is, an expression for pressure derived from N (the number of molecules in the container), V (the volume of the container), v (the Maxwell-Boltzmann velocity distribution of the gas molecules), m (the mass of the molecules) and g (acceleration due to gravity). Fair enough. You more or less told me this a while ago, when you said you didn’t think it was possible to conceptualize convection with the kinetic theory.

You do seem to have a mental image of how the gas molecules are interacting in a gravity field and I would like to understand this more clearly. Consider the previous scenario I outlined. You have a box of helium, 1 meter on a side, anchored to a table in a gravity field. The box contains N molecules of helium and the gas is in a state of equilibrium. You have stated that the gravity field will cause a greater molecular density at the floor of the box. Do you believe it’s possible to maintain a uniform temperature inside the box, and at the same time have a slight pressure gradient from floor to ceiling (caused by the density gradient due to the gravity field)? Assume that no energy is entering or leaving the box.

on September 1, 2011 at 12:22 pm | Neal J. King: willb:

– No, it’s quite possible to calculate the pressure on the floor even when g is “turned on”; it will be the same formula as before it was turned on. The difference is that the pressure will be greater than before, because the local density, and possibly temperature (aka average KE), will be greater than before.

What is difficult to calculate without introducing macroscopic concepts is the ADIABATIC behavior, because the term “adiabatic” implies an understanding of what is going on at a bulk level. Even here, I have developed an interpretation that might do the job: an adiabatic process is one in which the energy transfer is directly through the KE exchange due to work done on/by the gas. But then you have to relate that to why we are talking about an adiabatic lapse rate anyway.

– A box of helium gas 1 m^3 can certainly be kept at a temperature uniform throughout. The density and pressure will be exponential functions of the height (z): ~ exp(-mg*z/((kB)T))
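[Editor's note: putting numbers to Neal's exponential for willb's 1 m box of helium (my arithmetic; 300 K assumed) shows just how small the gradient is:]

```python
import math

# Assumed values: helium atoms (m = 6.64e-27 kg) at 300 K.
k_B = 1.380649e-23
m_He = 6.64e-27
T, g = 300.0, 9.81

H = k_B * T / (m_He * g)       # scale height k_B*T/(m*g): tens of km
ratio = math.exp(-1.0 / H)     # density(ceiling)/density(floor) over 1 m

print(f"scale height ~ {H / 1000:.0f} km")
print(f"density ratio over 1 m: {ratio:.7f}")
```

At uniform temperature the ceiling density is lower than the floor density by only about 1.6 parts in 100,000 — the gradient exists but is minute.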

(bbb19)

on September 2, 2011 at 4:05 am | willb: Neal J. King,

With g “turned on”, you say that the density of the gas will be an exponential function of the height (z): ~ exp(-mg*z/((kB)T)). May I ask how you develop this expression for density? I’m just looking for a conceptual idea on how this is done.

on September 2, 2011 at 10:50 am | Neal J. King: willb:

This is the Boltzmann factor, from statistical mechanics. Basically, when I look at how one would “turn on” g, the easiest way to think about it is from the perspective of the equivalence principle, Einstein’s concept that you can understand what’s going on in a (homogeneous) gravitational field by imagining that it is happening in an upwardly accelerating elevator.

When I do that, I see that the degree of random energy/particle in the collection of ball-bearings (my analogy for the molecules) will equalize throughout the box, as the bb’s thrash and churn through the box. Thus, they will have the same temperature. Then when I go back to the gravitational field, statistical mechanics says that you get the exponential dependence on altitude, with that uniform temperature.

(This factor is a very deep result of statistical mechanics. It is rather briefly discussed at: http://en.wikipedia.org/wiki/Boltzmann_factor .)

on September 3, 2011 at 3:12 am | willb: Neal J. King,

The Boltzmann factor does seem to point to a molecular density profile that is exponential with altitude for a constant temperature of the gas. But I am having difficulty with your explanation for why the temperature of the gas is constant. With your accelerating elevator analogy, my intuitive view is that a kinetic energy gradient, and hence a temperature gradient, would be the equilibrium condition.

Consider what is happening at the floor of the elevator. As the floor accelerates, it is constantly banging into the gas molecules close to the floor. This results in the floor molecules getting a continual increase in kinetic energy. The thrash and churn of the molecules will attempt to equalize this energy throughout the height of the elevator. But this will take a finite amount of time. While this equalization is occurring, the floor continues to accelerate, adding more kinetic energy to the molecules close to the floor.

As long as the elevator keeps accelerating, I don’t see the ceiling molecules ever catching up to the floor molecules in terms of their kinetic energy.

on September 3, 2011 at 3:34 am | Neal J. King: willb:

The equalization of the random KE of the molecules proceeds by the small-scale collisions which combine to produce pressure waves, which are also sound waves. The molecules themselves do not need to move very far.

Just as the energy & power of a tsunami gets a lot farther than does any individual water molecule.

If a gas could come to stable equilibrium with non-uniform temperature, it would be a flagrant violation of the 2nd law of thermodynamics; to look on the bright side, all of humanity’s energy problems would be solved. The reason there is the adiabatic lapse rate is that air is mostly NOT sitting absolutely still, but is being pushed around by weather: local heating and cooling, evaporation, etc. But a sealed box of gas is not going to be experiencing all that: it will come to thermal equilibrium.

If you can think of a way that it doesn’t, tell me about it, and believe you me, we can make Bill Gates look like a beggar. Own your own island? We’ll be able to own our own continent.

(bbb20)

on September 3, 2011 at 7:08 am | Bryan: Neal J. King says

…..”The reason there is the adiabatic lapse rate is that air is mostly NOT sitting absolutely still, but is being pushed around by weather”

Neal I think you should look again at the derivation of the dry adiabatic lapse rate formula.

Convection is neither assumed nor used in the derivation of the formula.

The Hydrostatic Formula and the adiabatic condition applied to the Kinetic Theory of Gases in a Gravitational Field are completely sufficient to derive:

DALR = -g/Cp

Conditions where almost still dry air can occur include deserts and ski resorts.

The lapse rate then is very close to -9.8K/km
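[Editor's note: Bryan's -9.8 K/km figure is just g/Cp evaluated for dry air; a two-line check, using the standard c_p ≈ 1004 J/(kg K):]

```python
# Dry adiabatic lapse rate: DALR = g / c_p.
g = 9.81       # gravitational acceleration, m/s^2
c_p = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

dalr = g / c_p                            # K per metre of altitude
print(f"DALR ~ {dalr * 1000:.2f} K/km")   # close to 9.8 K/km
```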

on September 3, 2011 at 5:29 pm | willb: Neal J. King,

Whether the gas is in a gravity field or in an accelerating box, I don’t see why a non-uniform temperature is a violation of the 2nd law of thermodynamics. In an accelerating box, work is constantly being done on the gas by one side (floor) of the box. It seems reasonable to me that this constant asymmetrically applied work would cause a temperature gradient.

In a gravity field, the energy inherent in each gas molecule is a combination of both kinetic and potential energy. If ‘energy per molecule’ evens out over time as you would expect from the 2nd law, then the molecules with more potential energy (the higher molecules) would be expected to have less kinetic energy. It also seems quite reasonable to me that, on a planet with an atmosphere and a warm surface, it would not be unexpected to see a temperature gradient between the warm surface and the absolute zero of deep space.

I’m curious: How do you propose that we become fabulously wealthy with the gas in equilibrium at a non-uniform temperature?

on September 3, 2011 at 6:22 pm | Neal J. King: willb:

– Work is being done by the floor on the gas, but don’t forget that work is also being done by the gas on the ceiling.

– Also, although the net work is on the gas, remember that the gas is accelerating. In fact, if you remember that the difference between the top & bottom pressures is exactly the amount that sustains the gas against gravity in the elevator’s frame, you can see that the net work being done in Einstein’s frame is exactly what it takes to accelerate the gas AS A BULK, at the rate g. So there is nothing left over to contribute to the random KE.

– The Boltzmann factor applies to the occupation of phase space, as defined by the set of spatial and momentum coordinates for the molecules of the gas. What that implies in this case is that you consider the energy of a molecule as a function of height (z) separately from the energy as a function of momentum (p); and as it turns out, the first is the gravitational potential energy while the second is the kinetic energy. They are additive, but neatly separate, so that you expect the distribution as a function of z and p to give two separate Boltzmann factors, which means that the probability distribution over momentum is independent of the height in a situation like this, where you have thermal equilibrium.

– We are here talking about 1 cubic meter of gas in a sealed box. If you are talking about an entire planetary atmosphere, you have to talk about radiation impinging on the planet and what happens to that atmosphere far out into space. That is a vastly different situation. Specifically, in the case of a real planet warmed by a sun, one can talk about a steady-state situation, but not thermal equilibrium. In thermal equilibrium, the temperature is uniform throughout. (Actually, there is a general-relativistic effect on temperature, which is tied in with the gravitational red-shift of light; but it’s too complicated to get into. In the end, when you properly generalize the concepts of temperature and time, everything cancels out anyway. It doesn’t change the big picture.)

– If we could set a cubic meter of gas on a table and allow it to come to thermal equilibrium (that means not feeding it any energy, just letting it be) and have that be compatible with a temperature difference between top & bottom, we could run a Carnot heat engine cycle between the top and the bottom: essentially free power. I guess the energy would have to come from the gas itself (since it’s not being fed from the outside), so we would be extracting heat energy from the gas and turning it into work. The gas would eventually give up all its KE and freeze; so we would have a combination engine and cryogenic refrigerator. We’d be richer than Croesus, enjoying the life of Riley; actually these phrases don’t begin to express how wealthy we would be. If some great engineers and inventors ever get fusion energy to work, they will still never attain this level of super-efficiency.

Oh well, day-dream over.

(bbb21)

on September 5, 2011 at 4:51 am | willb: Neal J. King,

The temperature difference that would develop between the top and bottom of a cubic meter of gas sitting in a sealed box on a table would be governed by the DALR, which in the case of a 1 meter height would be approximately 0.01K. If it were possible to get work out of this temperature difference, why isn’t it being done now? Isn’t the current lapse rate widespread around the world? Doesn’t it extend upwards 10 km or so producing a sizeable temperature difference?
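[Editor's note: for scale, taking willb's hypothetical 0.01 K at face value (my arithmetic, not a claim that the gradient exists): even then, the Carnot limit on converting the difference to work at room temperature would be minuscule.]

```python
# Hypothetical numbers: DALR over a 1 m box, and the Carnot efficiency
# a difference of that size would allow at an assumed ambient 300 K.
g, c_p = 9.81, 1004.0      # m/s^2; J/(kg K) for dry air
T_hot = 300.0              # ambient temperature (K), assumed

dT = (g / c_p) * 1.0       # DALR over 1 m: ~0.01 K, as willb says
eta = dT / T_hot           # Carnot limit: 1 - T_cold/T_hot = dT/T_hot

print(f"dT ~ {dT:.4f} K, Carnot efficiency ~ {eta:.1e}")
```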

on September 5, 2011 at 10:01 am | Neal J. King: willb:

– “The temperature difference that would develop between the top and bottom of a cubic meter of gas sitting in a sealed box on a table would be governed by the DALR, which in the case of a 1 meter height would be approximately 0.01K.” No, it wouldn’t. The DALR is a maximum rate that applies to a system that is in rough steady-state (with heating and convection), not to the case of thermal equilibrium. Mathematically, it depends only on hydrostatic equilibrium and the adiabatic relationship between pressure and density; but to understand what that means, you have to understand physically WHY the adiabatic relationship is applicable. This is the point that has to be understood from the viewpoint of gas dynamics. Because there is no mandate from above that the DALR is to be followed; and in fact when you have a temperature inversion, the temperature increases with altitude. In fact, the DALR is applicable because there is always some possibility of bulk motion, so if the DALR is exceeded by the actual temperature gradient, you would get an unstable “avalanche” of air motion that would quickly reduce the gradient.

– “If it were possible to get work out of this temperature difference, why isn’t it being done now?” Because in thermal equilibrium, this temperature difference doesn’t exist. The DALR condition does not apply in thermal equilibrium. Or, if you’re asking why don’t we mint money off the DALR: Ultimately, what do you think wind power is? It’s work derived from temperature differences in different parts of the atmosphere; and it’s ultimately coming from the sun’s radiation output. Without the sun’s ongoing power input to the earth, the atmosphere would be reduced to 3 degrees K and condensed on the ground, “like a patient etherized upon a table.” We are NOT in a situation of thermal equilibrium, merely steady-state; so the best we can do is to tap off the power that is running from the lower-entropy density source to the higher-entropy density sink.

– “Isn’t the current lapse rate widespread around the world? Doesn’t it extend upwards 10 km or so producing a sizeable temperature difference?” No, it’s a mathematical upper limit, not an imposed condition. The existing temperature difference, as explained above, can be used to drive work, but not without effort: You have to build and run windmills, and when the sun turns off, you have to quit.

If you could consistently generate a 0.01-K difference in a cubic-meter box, without power input, it would be an entirely different ballpark.

(bbb22)

on September 6, 2011 at 1:06 am | willb: Neal J. King,

Sorry, I should have said “If it were possible for a temperature difference to develop between the top and bottom of a cubic meter of gas sitting in a sealed box on a table, then in my opinion I would expect the difference to be of the order of the DALR, approximately 0.01K.” Clearly you don’t think this temperature difference is possible.

When I asked why nobody seemed to be making use of the lapse rate as a power source, I was referring to the actual lapse rate in our existing atmosphere. It doesn’t really matter whether the atmosphere is in thermal equilibrium or in steady state, the lapse rate exists and presumably could be exploited. So why isn’t it being exploited using heat engines?

You say it’s possible to run a Carnot heat engine cycle that will produce power efficiently from my imagined 0.01K temperature differential in a 1 cubic meter box. If that’s true, then there must be all kinds of places in the world where our atmosphere’s existing lapse rate could be put to good use. For instance along the Andes mountains in South America, very large temperature differentials exist between the bases and tops of the mountains. Why aren’t these temperature differentials being exploited? Why is no one designing a heat engine that would take the heat from the warm air at sea level near the equator and pump it up to the cold air at the snow-covered tops of the Andes?

on September 6, 2011 at 6:34 am | Neal J. King

willb:

– With regard to the cubic-meter box, I don’t believe the DALR is applicable, so you won’t get a “free” temperature gradient.

– With regard to exploiting the existing temperature differential that is maintained by the DALR, it is definitely possible. See for example:

http://web.mit.edu/newsoffice/2010/energy-harvesting.html

where they talk about power based on temperature differentials of 1 – 2 degrees K.

However, there is a difference between something that is possible and something that is economically feasible, compared to alternatives. To take advantage of the temperature difference between sea level and the top of the Andes, you need a considerable investment in construction; against that, you have to trade off the zero cost of fuel. Whereas if you buy a gasoline engine, the construction cost is only $100 or so, but the fuel is $3/gallon. It’s a business decision as to which is the better deal.

A similar concept is being explored for the difference between deep and shallow oceanic waters:

http://en.wikipedia.org/wiki/Ocean_thermal_energy_conversion

Again, there are practical construction and maintenance costs.

The basic point is that, right now, fossil fuels are too cheap to make these alternatives the best economical choice. In a 100 years or so, when we’ve run down our stocks of oil, gas & coal, people may very well be using them.


on September 6, 2011 at 7:39 am | Bryan

Neal J. King

An interesting link to the self powered body monitoring device possibilities

…”The principle was discovered in the 19th century, but only in recent years has it been seriously explored as an energy source. In thermoelectric materials, as soon as there is a temperature difference, heat begins to flow from the hotter to the cooler side. In the process, at the atomic scale this heat flow propels charge carriers (known as electrons or electron holes) to migrate in the same direction, producing an electric current — and a voltage difference between the two sides.”…

The authors don’t seem to be aware of advances made in climate science where heat can move spontaneously from cold to hot surfaces.

Why not use both heat flows.

A much more efficient device could be made.

It’s unfortunate that nobody outside climate science is aware of this “other heat flow”.

I’m sure if you were to inform the MIT researchers they would be mightily impressed.

on September 8, 2011 at 12:06 pm | Neal J. King

willb:

Have you given up your microscopic quest?


on September 9, 2011 at 2:22 am | willb

I haven’t given up but I’m not really sure how to proceed. At this point, I am somewhat convinced that, for the monatomic gas in a 1 cubic meter box in a gravity field, a lapse rate is the equilibrium condition. I think there will be both a changing density profile and a changing kinetic energy profile with z. You seem convinced that, given enough time, the gas will ultimately become isothermal. The only gravity effect will be to create a changing density profile with z. The kinetic energy, at a molecular level, remains constant with z.

The idea of a constant kinetic energy profile is extremely counter-intuitive for me. If you consider starting with a vacuum and adding one molecule of gas at a time to the box, then it seems to me there is an obvious changing temperature profile with z as long as the mfp between collisions remains much larger than 1 meter. Under this condition the molecules will be very directly trading kinetic energy for potential energy as a function of z. At some point, as molecules are continually added, the mfp starts to shrink below 1 meter. And at some point after this I believe you are saying that this trade-off between kinetic energy and potential energy somehow disappears. I can’t really see why this happens.

on September 9, 2011 at 4:05 am | DeWitt Payne

willb,

Look at the problem the other way around. Suppose you have your 1 m³ cube of monatomic gas that is initially isothermal. What is the mechanism to establish a temperature gradient?

on September 9, 2011 at 2:06 pm | Neal J. King

willb:

Feynman explains your issue in a clever way: If you look at the upward-traveling molecules at height (z = 0), the ones that have v_z less than sqrt(2gh) won’t make it up to height (z = h). So only the faster-moving molecules make it up to the upper altitude; at which time they will be reduced in average speed. So the combination of:

a) Selection effect: only the faster-moving molecules “graduate” to the higher level h; and

b) The speed-reduction effect: each molecule is moving more slowly (v_z) than it was before,

imply that the statistical distribution over speeds remains the same (for v_x and v_y there is neither any selection effect nor any speed reduction). However, the total number is reduced, which is exactly the reduction in number density due to the Boltzmann factor. It all works out because the velocity distribution goes as exp(-(m*v_z^2)/(2*kB*T)), so a) chops off the low end of the exponential; and b) slides the whole distribution back towards 0 to fill in the gap! The new average KE is the same as the old average KE, just for fewer molecules.

Clever of Nature to do that.

Reference: p. 40-5 of Chapter 40, “The principles of statistical mechanics”, Vol. 1 of the Feynman Lectures: you can locate this by a search on “Feynman Lectures Volume 1 Chapter 40”
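[Feynman’s two effects can be checked with a small Monte Carlo sketch; all parameters below are illustrative choices, not from the comment. For flux-weighted upward molecules, v_z^2/(2kT/m) is exponentially distributed, and the memorylessness of the exponential is exactly why the selection and slow-down effects cancel:]

```python
import math, random

random.seed(42)

# Illustrative parameters (not from the comment): an argon-like gas.
kB = 1.380649e-23   # J/K
m  = 6.63e-26       # kg, roughly one argon atom
T  = 300.0          # K
g  = 9.81           # m/s^2
h  = 1000.0         # m; tall enough that the Boltzmann factor is visible
sigma2 = kB * T / m # kT/m, variance of one velocity component

N = 200_000
# Flux-weighted upward v_z through z=0 has pdf ~ v*exp(-v^2/(2*sigma2)),
# i.e. v^2/(2*sigma2) is exponentially distributed:
v2_bottom = [-2.0 * sigma2 * math.log(1.0 - random.random()) for _ in range(N)]

# Feynman's two effects: only molecules with v_z^2 > 2gh "graduate" to
# height h (selection), and each arrives with v'^2 = v^2 - 2gh (slow-down).
v_min2 = 2.0 * g * h
v2_top = [v - v_min2 for v in v2_bottom if v > v_min2]

frac = len(v2_top) / N
boltzmann = math.exp(-m * g * h / (kB * T))
mean_bottom = sum(v2_bottom) / len(v2_bottom)
mean_top = sum(v2_top) / len(v2_top)

print(f"survivor fraction {frac:.4f} vs Boltzmann factor {boltzmann:.4f}")
print(f"mean v_z^2: bottom {mean_bottom:.0f}, top {mean_top:.0f} (same => same T)")
```

[The survivor fraction reproduces exp(-mgh/kT), while the mean of v_z^2 at the top matches the bottom, i.e. the same temperature for fewer molecules.]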


on September 10, 2011 at 4:23 am | willb

Neal J. King,

Thanks for the pointer to the Feynman lectures. That looks like a really good reference. Give me some time to read it and mull it over.

on September 14, 2011 at 12:27 am | willb

I digested a couple of chapters of the Feynman lectures (39 and 40). Although I found them very interesting, to tell the truth I was disappointed in the way he addressed thermal equilibrium in a column of gas. Before he even begins his analysis he assumes that the gas is isothermal throughout the column. His justification for this assumption is a very weak and brief thought experiment. However the analysis that follows from this assumption is very clear, interesting and informative.

I have more or less concluded that these chapters in the Feynman lectures don’t really say very much about what the equilibrium temperature profile of a column of gas in a gravity field would be. However, Feynman seems convinced that the equilibrium condition is isothermal, and I don’t wish to argue against Feynman. It is an unsatisfactory conclusion for me, though.

I found it curious that, in justifying his assumption that the gas is isothermal, Feynman didn’t reference any experiments that may have attempted to show that a vertical column of gas in a sealed, isolated container would be isothermal. And up to this point I have been unable to find a good reference that might indicate that such an experiment has even been carried out. Are you aware of any?

on September 14, 2011 at 7:25 am | Bryan

willb

Here’s a paper outlining an experiment to test the temperature distribution of a gas constrained in a vertical column in a gravitational field.

The historical background to the question is also discussed.

As you can see, some of the greatest figures in thermodynamics could not agree on the situation.

I would like to see the experiment repeated independently before endorsing the author’s conclusions.

One other point I am thinking about that might have some bearing on the dispute is the Maxwell-Boltzmann distribution.

A fundamental assumption built into the distribution is that the gas is isothermal.

By using the distribution, you are locked into a circular argument.

I agree with Loschmidt that an experiment should decide the matter.

http://www.firstgravitymachine.com/descript%20372_dec6.pdf

on September 14, 2011 at 9:47 am | Neal J. King

willb:

– He assumes that temperature is constant throughout the gas because that is a general result of statistical mechanics. Why? Because temperature (T) is defined as:

dQ = T*dS , or

dS = dQ/T

where dS = change in entropy

and dQ = transfer of heat energy

Therefore, if two objects at different temperature are exchanging heat energy, the change in entropy of the cooler object is greater than the change in entropy of the hotter object. Since, in a spontaneous bulk process, entropy must increase (2nd Law of Thermodynamics), the spontaneous transfer of heat will proceed from the hotter to the cooler:

dS_total = dS_c + dS_h = dQ * (1/T_c – 1/T_h) is greater than 0

But thermal equilibrium is attained when further exchange of heat energy within the system cannot further increase the entropy: That means that T_h and T_c must be the same.
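[The sign of dS_total can be made concrete with arbitrary numbers; this is only a sketch, the temperatures are not from the comment:]

```python
# Numeric illustration of the entropy argument above, with arbitrary values:
# transfer dQ of heat from a hot body at T_h to a cold body at T_c.
dQ, T_h, T_c = 1.0, 300.0, 290.0   # J, K, K (chosen for illustration)
dS_h = -dQ / T_h                   # hot body loses entropy
dS_c = dQ / T_c                    # cold body gains more entropy
dS_total = dS_h + dS_c             # = dQ*(1/T_c - 1/T_h) > 0
print(f"dS_total = {dS_total:.3e} J/K")   # positive, so hot-to-cold is spontaneous
```

[Only when T_h equals T_c does dS_total vanish, which is the equilibrium condition described above.]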

So, within statistical mechanics, this is a theorem based on the 2nd Law; within thermodynamics, it is fundamental to the definition of temperature that two objects are in thermal equilibrium with each other only if they have the same temperature; it is related to the empirical fact that if object A is in thermal equilibrium with object B; and object B is in thermal equilibrium with object C; then object A is also in thermal equilibrium with object C. This is sometimes known as the zeroth Law of Thermodynamics.

This applies to two separate objects, or to two objects that are parts of one object (like the upper and lower halves of the box of gas).

– The reason I referred to Feynman’s argument was that you asked, “How can it be POSSIBLE that the temperature is constant throughout the gas, when it looks to me as though a simple examination of how gravity slows down upward-moving molecules will naturally FORCE the temperature to be reduced with height?” So the answer is, Feynman’s argument shows how it is POSSIBLE for a gas to have a constant temperature throughout, consistent with microscopic understanding of molecules bouncing off each other and off walls, and moving down and up, with/against gravity: the filtering effect and the slow-down together keep the distribution in velocity space the same throughout.

Feynman’s argument does not INTEND to prove that constant-T is the ONLY temperature profile; and in fact I told you a month ago or earlier that MANY temperature profiles are possible. But the constant-T profile is the only one that is IN THERMAL EQUILIBRIUM. And the DALR is the one that is the steepest that is not subject to convective instability.

[Aside for experts: In the context of general relativity, what is constant is not the temperature T, but T * sqrt(-g00) {but maybe the sqrt factor is in the denominator}. This is insignificant if you’re willing to neglect the gravitational red-shifting of light (Pound-Rebka effect).]


on September 15, 2011 at 2:32 am | willb

Neal J. King,

I disagree with your assessment of what Feynman’s analysis shows. It does not show how it is possible for a gas to have a constant temperature throughout. It cannot show this when at the same time he is using a constant temperature profile as an initial condition for his analysis. Rather, his argument shows what the kinetic energy and density profile in the column would look like IF it were possible for the gas to have a constant temperature throughout. The only evidence from Feynman to support a constant temperature profile is the fact that he didn’t encounter a contradiction during his analysis, so he wasn’t able to prove that any of his initializing assumptions were false.

I’ll ask again if you know of any experiment that supports the contention that a vertical column of gas in a sealed, isolated container would be isothermal. If thermal equilibrium means a constant temperature profile, someone somewhere must have proved this experimentally. I mean, really, we are all aware of the existing lapse rate on Earth. We are aware of lapse rates on Mars and Venus and other planets. To make the claim that the lapse rate would disappear without atmospheric turbulence surely deserves some experimental support. Otherwise I would think this assertion would be the most blatant violation of Occam’s razor since the invention of equants, deferents and epicycles.

on September 15, 2011 at 4:21 am | DeWitt Payne

willb,

Why would anyone do the experiment? If you get the expected results, the work is unpublishable. It’s also a very difficult experiment. It’s on the order of measuring the universal gravitational constant, G. But for G, the result is interesting and publishable. You would need a nearly perfectly insulated column and very precise, very sensitive thermometers spaced along the column. But the thermometers themselves create heat leaks. In a column of any length, the thermal conductivity of a gas is so low that it takes a very long time to establish thermal equilibrium by conduction. If I were designing the experiment, I would want to make measurements at different angles with respect to the gravitational field as well.

For your own interest, you should calculate the temperature profile over time of a column of gas 1 m long and 1 cm in diameter, with the initial condition that the first cm of the column has a temperature 0.1 C higher than the rest of the column. Assume the column is horizontal and ignore convection. Also assume the walls are perfect insulators with zero heat capacity. Calculate how long it will take for the temperature gradient to fall below 0.002 C/m.
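[A rough numerical sketch of this exercise, assuming a thermal diffusivity for air of about 2e-5 m²/s (a typical room-temperature value) and the perfectly insulating walls specified above; this is an illustration, not DeWitt Payne’s own calculation:]

```python
# Explicit finite-difference model of a 1 m insulated air column whose
# first cm starts 0.1 C warmer than the rest; convection is ignored.
alpha = 2.0e-5          # m^2/s, thermal diffusivity of air (approximate)
length, nx = 1.0, 100   # 1 m column resolved into 1 cm cells
dx = length / nx
dt = 2.0                # s; stable since alpha*dt/dx^2 = 0.4 < 0.5
r = alpha * dt / dx**2

T = [0.1] + [0.0] * (nx - 1)   # first cm is 0.1 C above the rest

t = 0.0
while True:
    # explicit update with zero-flux (insulated) ends
    Tn = T[:]
    for i in range(nx):
        left = T[i - 1] if i > 0 else T[0]
        right = T[i + 1] if i < nx - 1 else T[-1]
        Tn[i] = T[i] + r * (left - 2.0 * T[i] + right)
    T, t = Tn, t + dt
    max_grad = max(abs(T[i + 1] - T[i]) / dx for i in range(nx - 1))
    if max_grad < 0.002:
        break

print(f"max gradient fell below 0.002 C/m after ~{t/3600:.1f} hours")
```

[Even in this 1 m column the gradient takes on the order of a couple of hours to decay below 0.002 C/m by conduction alone; since diffusion times scale as length squared, multi-metre columns take days to weeks, in line with the numbers quoted later in the thread.]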

on September 15, 2011 at 7:39 am | Neal J. King

willb:

You are not being logical. Feynman’s argument shows that it is self-consistent, from the viewpoint of microscopic dynamics, to assume constant temperature throughout the gas. Proving lack of contradiction proves self-consistency.

What he does NOT prove is that this is the unique solution. Good thing, too, because it’s NOT. Many different temperature profiles are possible (as I have said at least three times above), as can be determined by imposing a temperature profile on a vertical gas-filled pipe. For that reason, it would be difficult to do a “test” of thermal equilibrium, as deWitt mentioned: you would have to be checking very hard to make sure there was no heat leaking into the pipe at any height, and so on.

I think you are uncomfortable with the concept of temperature. Fair enough; in fact, I don’t believe Feynman develops it from the ground up. But for that you need to exert yourself and read a real text book. (Feynman’s lectures, although exceptionally insightful, are not a textbook: Often he does NOT develop things from the ground up. For example, he never describes the derivation of the Lorentz transformation in relativity, even though he derives a lot of other interesting relativistic concepts and formulae.) There are two I can mention:

– Frederick Reif, Fundamentals of Statistical and Thermal Physics

– Charles Kittel, Thermal Physics

Kittel’s book is rather shorter, with a kind of lean & mean development of the concepts; however, for that reason, if you want more clarification on basic concepts, he doesn’t provide them.

Reif’s book spends more time on basic motivations and tricky conceptual points. He shows better “where the bodies have been buried”. I think it is by far the deeper book.

What you may have to understand is that thermal equilibrium is a CONCEPT that is an idealization from laboratory reality and practical operations. The power of thermodynamics lies in being able to come to conclusions based on the fact that thermal equilibrium is POSSIBLE in principle; because once it is possible, all sorts of other things (like violation of 1st Law / 2nd Law of Thermo) become IMPOSSIBLE in principle, as a logical or mathematical consequence.

I have tried to give you a pocket summary of the concept of temperature above; but that is my point of view, based on having studied and played with these concepts for years. If that’s not satisfactory to you, the best thing you can do is to read Reif’s book, and study the topic from the ground up. Physics is not an easy subject to “jump into”, because:

– It has a logical structure, with later topics built onto earlier; but also

– Sometimes earlier topics are understood in terms of assumptions and concepts that are only clarified or justified at a later stage.

So it has the general structure of mathematics, but not always the same degree of clarity of development: This is because even theoretical physics is based on empirical results and empirical concepts; so physics as a whole is logical, but also has some “logical loops”; so that the ultimate justification of physics is:

– agreement with laboratory results, and

– internal consistency.

This “internal consistency” requirement can be relied upon to an extreme (e.g., Some would argue that it has gotten into a pathological situation with the recent string models, that have arcanely sophisticated mathematics, consistency with all previous known physics – but zero contact with experimental results.), but it is the primary tool which theoretical physicists use in convincing themselves that they understand what is going on, before this understanding is put to the test in the laboratory.


on September 15, 2011 at 8:22 am | Bryan

DeWitt Payne says

….”Why would anyone do the experiment? If you get the expected results, the work is unpublishable.”……

This question caused disagreement between Clausius and Loschmidt.

There are a number of published papers on far more obscure and, some might think, pointless investigations.

“also a very difficult experiment.”

“You would need a nearly perfectly insulated column and very precise, very sensitive thermometers spaced along the column. But the thermometers themselves create heat leaks. In a column of any length, the thermal conductivity of a gas is so low that it takes a very long time to establish.”

It’s not a particularly expensive experiment to stage.

Cost of materials < £1000.

A 20-metre length of thick-walled plastic mains water pipe, say 30 cm in diameter, with 4 temperature probes.

Top, bottom, 1/3 and 2/3 of the way up.

The tube would be heavily insulated.

The temperature probes would be chosen for the least possible interference with the thermal state of the vertical column of air.

The arrangement would be set vertical and left for 3 days or so to allow diffusion to establish a steady state.

The temperature measuring system need only be switched on momentarily while readings are being taken.

If Loschmidt is right, a difference of nearly 2 K from top to bottom would be observed.

If Clausius is right, no difference from top to bottom would be observed.

Remember that sometimes experiments have results that are counterintuitive and for that reason they are always worth doing.

A good example is the Mpemba effect:

en.wikipedia.org/wiki/Mpemba_effect

on September 15, 2011 at 8:41 am | Bryan

Correction to my last post

If Loschmidt is right, a temperature difference of 0.02 K from top to bottom would be observed.

on September 15, 2011 at 5:00 pm | DeWitt Payne

Sorry for the double post, but I clicked on the wrong reply button.

Bryan,

What you are measuring with a 20 m length of PVC pipe is the temperature profile of the PVC pipe. If you use thin-walled pipe, you’re measuring the temperature profile of the insulation. There is no way that the conductivity of the gas in the tube, absent convection, will equilibrate the temperature profile of the wall in three days. The heat capacity of the wall plus insulation is going to be orders of magnitude higher than that of the gas in the pipe. Any heat leaks will cause convection and invalidate the experiment. Adding something like sand to prevent convection means you’re measuring the properties of the sand, not the gas.

on September 15, 2011 at 9:22 pm | DeWitt Payne

Bryan,

Take your 20 m tube and ignore wall effects, for air at 300 K initially isothermal with a controlled temperature plate on one end, a change in temperature of 0.02 degrees at the end plate takes a long time to propagate through the tube. After ~42 days, the temperature at 10 m has only dropped 0.01 degrees. And that’s assuming perfect insulation. With any sort of real insulation, the normal fluctuations in room temperature would overwhelm that on that sort of time scale. Diffusion is a really slow process even with air, which has relatively high thermal diffusivity. Any heat flux through the tube because of heat leaks is going to cause a temperature gradient. In fact, the existence of a temperature gradient is diagnostic of heat flux.

A temperature gradient of 0.01 K/m has a diffusive heat flux of 2.57E-04 W/m² in air at 20 C. That’s 1.84E-06 W in your 30 cm pipe. So it only takes a tiny heat leak to create a significant temperature gradient. I seriously doubt you could establish isothermal conditions in a 20m horizontal pipe at that level, much less a vertical pipe.
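[The 2.57E-04 W/m² figure is straightforward Fourier’s law; a one-line check, assuming the tabulated thermal conductivity of air at 20 C of roughly 0.0257 W/(m·K):]

```python
# Check of the diffusive heat flux quoted above via Fourier's law, q = k*dT/dz.
k = 0.0257          # W/(m*K), thermal conductivity of air at ~20 C (tabulated)
dT_dz = 0.01        # K/m, the temperature gradient in question
q = k * dT_dz       # W/m^2, flux per unit area
print(f"q = {q:.3e} W/m^2")   # ~2.57e-04 W/m^2, matching the figure above
```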

The thermal diffusivity of water is two orders of magnitude lower than for air, so for Graeff’s experiment, multiply the equilibration time by 100. The thermal conductivity of water is higher, but the surface area of Graeff’s pipes is smaller too. The heat flux implied by a gradient of 0.05 K/m is 15.7E-06 W. Graeff’s experiment would have been more interesting if he could have changed the orientation. Inverting would have been interesting. That way any systematic error from the temperature measuring instruments could have been detected.

Still, he’s basically trying to invent a perpetual motion machine based on gravity. It won’t work.

on September 16, 2011 at 3:16 am | willb

Neal J. King,

Well, if you think I’m not being logical because I don’t accept proof by circular reasoning, then I guess we’ll just have to agree to disagree on that one.

Speaking only for myself, I think at this point a physical experiment is really the only sure way to settle the question of whether a vertical column of gas in a sealed, isolated container could ultimately become isothermal. Although I can certainly sympathize with DeWitt Payne’s point that it is probably a difficult experiment to conduct. It must be, because apparently no one seems to have actually attempted it except for the one somewhat dubious experiment by Roderich Graeff that Bryan pointed to.

For what it’s worth, if such an experiment were to be conducted in a rigorous way, then as it stands now I’d probably bet that the equilibrium condition would be a non-zero lapse rate. But since Feynman, Boltzmann, Maxwell and Clausius all seem to be taking the other side, I wouldn’t bet a lot.

on September 16, 2011 at 8:03 am | Neal J. King

willb:

Proof by circular reasoning is exactly what a self-consistency test IS, and that is ALL that it is: a proof that the assumptions do not lead to an implication that contradicts the original assumptions themselves.

You might think that is trivial, but lots of attempts to use circular reasoning fail. As an example of a failed proof: I claim that the number of prime numbers is finite. That means that every number greater than the largest prime is divisible by one or more of the list of primes. But oops, Euclid showed that you can easily create a number (by multiplying all the primes together, and then adding 1) that is NOT divisible by any of the primes on your list. Hence, the claim & assumption that the list of primes is finite leads to a contradiction. Hence the number of primes is not finite => infinite.

In the argument at hand, Feynman assumes that the molecules of the gas have constant temperature throughout. By a clever analysis, he shows that the idea you originally had (that gravity would slow down the higher-altitude molecules and therefore make it impossible for the temperature to be constant) did not have to be true, because a constant-T distribution combined with a Boltzmann-factor density profile, exp(-mgz/(kT)), “takes care” of itself. So a constant-T profile is NOT self-inconsistent. QED.
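[As a reference for the density profile exp(-mgz/(kT)) invoked here, a short sketch with air-like numbers; the values are illustrative, not from the comment:]

```python
import math

# Boltzmann density profile n(z) = n0 * exp(-m*g*z/(kB*T)), air-like numbers.
kB = 1.380649e-23     # J/K
m = 4.81e-26          # kg, mean mass of an "air" molecule (~28.97 amu)
T = 288.0             # K
g = 9.81              # m/s^2

H = kB * T / (m * g)            # scale height kT/mg
ratio = math.exp(-1.0 / H)      # density ratio across 1 m

print(f"scale height kT/mg = {H/1000:.1f} km")
print(f"density ratio over 1 m: {ratio:.6f}")   # essentially 1 at lab scale
```

[The ~8 km scale height shows why the density variation across a 1 m box is utterly negligible, while across the troposphere it is dominant.]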

So now that you have painted yourself into a corner, where do you want to go now? It’s your move.


on September 16, 2011 at 12:40 pm | Neal J. King

willb:

Further thoughts:

1) I looked at Graeff’s write-up. Although I would be the last person to describe myself as skilled in experimental physics, I do see a serious problem in that he records changes in the top & bottom temperatures. Although he is only interested in the gradient (he subtracts them and divides by the height separation), the fact that these temperatures are changing is proof that the liquid under test is NOT thermally isolated. That by itself invalidates the measurement in my view, because once you admit that the liquid is not thermally isolated, you have lost the right to claim that the temperature difference between top and bottom is meaningful at all – even if it is constant. It could be constant for some other reason, having to do with the way that heat is leaking into your tubes. So the measurements don’t prove a damned thing.

2) If you see that you’re betting against Clausius, Maxwell, Boltzmann and Feynman, a smarter move than making a smaller bet is to stop and ask yourself, “Am I thinking about this in the wrong way? Maybe I don’t have my head screwed on right.” That would be a lot more sensible place to start.

on September 17, 2011 at 12:22 am | willb

Neal J. King,

I think you have misunderstood what I meant by circular reasoning. Circular reasoning is considered a formal logical fallacy. Your argument as to the meaning of Feynman’s analysis is an example of this. On the other hand the logic of your prime number example is just the opposite of circular reasoning. If you look up “Circular reasoning” and “Begging the question” on Wikipedia you will see what I was trying to say.

There is another critical difference between the prime number example and the Feynman argument. The prime number example is a mathematical analysis. In mathematics, all premises are based on axioms and proven theorems and all required premises are invariably included in the proof. The premises are therefore indisputably true and the premise ‘set’ is complete.

On the other hand the Feynman argument is a scientific analysis. The premises used during the conduct of a purely theoretical scientific analysis are based on models that are only approximations of the real world. They are our current best assessment of how the world works, but there are no guarantees that the models are precisely right. They also represent only a partial picture of the world and it’s always possible that some vital component may be missing, resulting in a skewed conclusion. And because we don’t yet know everything, we are never completely sure that we have included the full set of all necessary premises that are required for whatever conclusion we arrive at.

In science, controlled experiments go a long way in resolving uncertainty inherent in theoretical analysis, by providing direct supportive evidence for (or against) the theoretical conclusion.

on September 17, 2011 at 1:21 am | Neal J. King

willb:

I think I’ve given a pretty good description of Feynman’s analysis of the constant-T gas. If you think you can out-logic Feynman, be my guest.

If you think you understand the relationship between experimental reality and physical reality better than Feynman, ditto.

If you think you need a better understanding of statistical mechanics, get Reif and a pad of paper, and work through the problems. For a typical student, I guess it would take a year.

Good luck.


on September 22, 2011 at 10:58 am | Neal J. King

Well, it looks as though willb has given up on the search for microscopic reality; or else (just possibly) gone on a vision quest to understand the foundations of statistical mechanics via the study of Reif’s book.

I can’t say what willb got out of it. I suspect I got more, because in trying to shed some light on his concerns, I:

– pinned down an easy-to-understand derivation for gas pressure in terms of the microscopic view of a perfect gas;

– clarified the distinction between adiabatic and non-adiabatic compression, in terms of the microscopic view of a perfect gas; and

– streamlined the derivation of the dry adiabatic lapse rate (DALR).

None of it new, of course, just recall of stuff I’d learned before, and was sure of anyway; but where the details had faded. I haven’t had the opportunity to present it, because willb has kept getting stuck further and further back into the fundamental ideas of thermal physics; now to the point that he doesn’t understand how the concept of temperature fits in properly.

I don’t want to make fun of willb, because there is a certain intellectual honesty in admitting to yourself that you don’t understand something, and in not wanting to be satisfied just by the assurance that “This is what the experts say.” But it does mean that you’re kind of on your own, and that it’s up to you to go to the beginning level and learn the subject the way that it’s normally taught; and not to expect people you’re talking with to be able to boot-strap you into that level of understanding when you haven’t done the basics yourself.

I hope that willb eventually understands the topic of temperature better, and we might be able to finish the discussion, as I’ve outlined above.

Until then, I have nothing further to say on this topic.

on September 23, 2011 at 3:23 am | willb

Neal J. King,

I must say I got quite a lot out of my conversation with you. I did find many of the points you brought up to be informative and worth investigating, and I think I have a broader perspective now on the subject matter than I had at the start of this conversation. Perhaps what was most interesting and (for me at least) most thought-provoking was the level of acceptance of the theory that an enclosed, isolated column of (monatomic) gas in a gravity field can be isothermal. At the start of this conversation I hadn’t the slightest idea that this theory was so widely accepted.

I am now curious as to why this is the prevailing theory, despite the fact that there does not appear to be any corresponding experimental support and despite the fact that an isothermal condition does not occur in nature, in the atmosphere. However, I like to think I have an open mind on this and in our conversation you have provided some clues for the reasons for this widespread acceptance (2nd Law, entropy, Feynman Lectures). I plan to continue looking into this but I think the best process for me now is to get some good textbooks and do some reading.

on September 23, 2011 at 10:43 am | Neal J. King

willb:

OK, good luck!

on August 19, 2011 at 7:23 pm | dieta

This article is about Bernoulli’s principle and Bernoulli’s equation in fluid dynamics. Bernoulli’s principle can be applied to various types of fluid flow, resulting in what is loosely denoted as Bernoulli’s equation. In fact there are different forms of the Bernoulli equation for different types of flow.

on August 22, 2011 at 3:20 am | DeWitt Payne

Bryan,

As usual, you have completely missed the point. It has nothing to do with a surface temperature of 15C. It’s how the radiative temperature, physical and effective, of a half illuminated, non-rotating, non-conductive sphere in a vacuum is calculated. G&T’s approach is correct, ignoring the temperature of the cosmic microwave background. Postma’s isn’t. The physical temperature average will always be less than the effective temperature. No hand waving about day and night will produce an average surface temperature higher than ~255 K for a sphere with an albedo of 0.3 at a distance of 1 AU from our sun. In fact, you only get 255 K if the sphere is superconducting and therefore isothermal.

on August 22, 2011 at 2:06 pm | Bryan

DeWitt Payne says

….”Bryan, As usual, you have completely missed the point”..

Well sorry, but I thought that was the main point you were referring to.

G&T’s belief that local temperatures determine the radiative response.

Sections 3.7.5 and 3.7.6 cover your point above, and as usual G&T are correct.

I don’t know if you looked at the Climate Etc thread on Postma.

Judith Curry landed the most telling argument against Postma based on this question.

She pressed the point a second time but Postma has yet to address the issue.

She made the same point as G&T

Calculating the local temperature from the flux density sometimes gives answers that do not correspond to reality.

G&T also make the point that this is an all too frequent method in Climate Science.

I must say I have a lot of sympathy for JC’s approach.

Some pretty shoddy work is produced by lazy reliance on a formula.

The fact that you can calculate a number to three significant figures sometimes means nothing.

On the other hand Postma gave a good account of himself.

He stays pretty close to the radiative transfer orthodoxy for the most part.

However some were not too happy about his main conclusions.

Perhaps he did not want to open up too many side issues.

This would allow Judith’s readers to get over the shock of finding the greenhouse theory is irrelevant.

on August 22, 2011 at 5:20 pm | DeWitt Payne

Bryan,

Nobody disagrees with that. It’s the calculation of the magnitude of the local temperatures where G&T and Postma part company.

“Postma gave a good account of himself.” Only to others as deluded as he is.

on September 16, 2011 at 12:08 am | Bryan

DeWitt Payne

It looks as if I will have to come up with some hard numbers to counter your scepticism about whether an experiment to resolve the Loschmidt / Clausius disagreement is possible without undue expense.

From memory I had a faster rate of diffusion of air than you are quoting but I will need to substantiate this.

I think that willb is right to pursue this rather interesting point and certainly an experiment will settle it one way or another.

I know that you have worked with these numbers in the past and your views have all the weight that this implies.

However I still think that you are being rather pessimistic about the possibilities of using the temperature readings to tease out the gravitational effect on the temperature distribution in still air.

If the four thermometers I suggested had another four on the opposite side of the tube then, by analysing the horizontal diffusion (if any) and vertical diffusion (if any), it might be possible to isolate the effect, if it exists.

There are always other factors to obscure the effect.

For instance the van der Waals (?) correction will affect the higher density base more than the lower density top.

…..”Still, he’s basically trying to invent a perpetual motion machine based on gravity. It won’t work.”…….

I think it’s possible that Loschmidt might be right about the experiment, but any idea that this means the second law is falsified is of course wrong.

If Loschmidt is proved correct then the result will be interpreted in such a way as to be consistent with the second law.

on September 17, 2011 at 11:57 am | Bryan

DeWitt Payne

You were quite right about the insulating properties of air.

I knew it was a good insulator but had guessed wrongly that it would not be as effective as expanded polystyrene.

The smart money must be with Clausius and Boltzmann but Loschmidt was no fool.

It’s interesting to look at the approaches to the derivation of the barometric formulas.

Any of the barometric formulas derived using the Maxwell-Boltzmann distribution have the isothermal assumption built in.

Some, like G&T, approach it via thermodynamics.

Some use the Maxwell-Boltzmann distribution.

A very common approach is to use the isothermal atmosphere to get the density or pressure distribution.

Then to combine this with the ideal gas equation and the hydrostatic equation to arrive at the adiabatic formula and the DALR which they then claim is “near enough.”

It does seem a bit unsatisfactory to use an isothermal condition to arrive at a non-isothermal atmosphere.

The troposphere almost always favours the adiabatic route, even for still air such as is found in mine shafts and other enclosed structures such as wind turbine housings.

The surroundings of course may determine this.

I think that the question is interesting and will keep an open mind as to the outcome.

I would like to see some experimental evidence to settle the matter one way or another.

So back to the vertically fixed 20m pipe.

The pipe when filled with gas will be quite close to steady state anyway as the much more powerful density distribution will be set almost instantly.

It will also be not too far from the adiabatic distribution.

I think after 3 days a steady state will have been reached, but a subsequent reading will confirm this.

Because of the many competing processes going on it would be better to go down the route of the control experiment much favoured in the biological sciences because of the many variables found there.

I will make some further changes.

The 20m pipe encased in a one metre thick concentric jacket, with top and bottom one metre slabs, all made from expanded polystyrene.

The whole structure placed in a room with a controlled temperature environment.

The eight high resolution thermometers placed as before just protruding into the 20m pipe.

Three experiments spring to mind, and for each a set of results taken twice.

1. The 20 metre pipe filled with close-fitting rings of expanded polystyrene.

2. The 20 metre pipe filled with dry air at STP.

3. The 20 metre pipe filled with argon at STP.

Hopefully, after analysis of the data:

If Loschmidt is right, a difference of nearly 0.02 K top to bottom will be observed for dry air, and 0.04 K for argon.

If Clausius is right, no difference top to bottom will be observed, nor any overall pattern that could be detected in support of Loschmidt’s conjecture.
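For scale, here is a minimal numerical sketch of what a Loschmidt-type gradient would predict, assuming (purely for illustration) that the conjectured gradient equals the dry adiabatic value g/cp, with textbook heat-capacity values:

```python
# Sketch: top-to-bottom temperature difference in a 20 m column if the
# conjectured Loschmidt gradient is taken to equal the dry adiabatic
# lapse rate g/cp.  The cp values and the choice of gradient are
# assumptions for illustration only.
g = 9.81                                   # m/s^2, gravitational acceleration
cp = {"dry air": 1005.0, "argon": 520.0}   # J/(kg K), assumed values
height = 20.0                              # m, length of the pipe

for gas, cp_gas in cp.items():
    gradient = g / cp_gas                  # K/m
    print(f"{gas}: {gradient * 1000:.2f} K/km, {gradient * height:.3f} K over 20 m")
```

On these assumptions the argon difference comes out roughly twice the dry-air difference, matching the 2:1 ratio of the figures quoted above, though the absolute magnitudes depend on which gradient is assumed.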

on January 17, 2012 at 10:57 pm | Andrejs Vanags

Could someone enlighten me as to what drives the AMOUNT of water vapor in the atmosphere?

I understand that the maximum limit can be calculated from the saturation pressure, which depends on temperature. But the actual amount is less. How much less? What are the drivers for equilibrium?

I am trying to work out the non-radiative heat rate needed by the atmosphere to maintain a lapse rate. But thinking about it, the rate doesn’t mean anything on its own: it just means more evaporation and resulting condensation and rain, and perhaps higher albedo. But what about the equilibrium amount of water vapor remaining in the atmosphere? (Which determines the optical thickness.)

An answer, suggestion or links on the above would be greatly appreciated.

on January 18, 2012 at 7:42 pm | scienceofdoom

Andrejs Vanags,

This is a complex problem.

There is some detailed explanation in Clouds and Water Vapor – Part Two and followup in Part Three.

Have a read and feel free to post more questions.

on January 24, 2012 at 12:04 am | Frank

SOD: I think I understand what the DALR is when one follows a parcel of rising or sinking air and assesses if it will continue moving in the same direction. When the DALR is derived as you have shown above – with no tangible parcel of air to follow, what does the DALR “mean”? If it were the maximum lapse rate consistent with stability (using buoyancy considerations), wouldn’t you end up with an inequality (the lapse rate is less than or equal to the DALR value)? If this is an equation, then for what conditions (for example, equilibrium) does the equality apply? The equation for hydrostatic equilibrium appears to be the only one of your five equations that might not hold all the time.

In a comment to another post, I abstracted Feynman’s discussion of an isothermal column of atmosphere: https://scienceofdoom.com/2010/08/16/convection-venus-thought-experiments-and-tall-rooms-full-of-gas/#comment-15523 He discusses hydrostatic equilibrium in an isothermal setting.

If I am correct, an isothermal column of air is stable. Neal King above argues that a variety of lapse rates may be consistent with stability.

on January 24, 2012 at 3:30 am | scienceofdoom

Frank:

If there is no tangible parcel of air to follow then it can’t easily be derived, because in that case we have to follow one molecule and the analysis will be completely different. I assume it can be done, but haven’t been interested enough to find out or to try and derive it.

Given that “masses of air” move about, fronts exist and tangible measurements of wind velocity can be taken the parcel concept is sound.

I don’t understand this question. If, for example, the DALR = 10 K/km, and the environmental lapse rate = 6 K/km then we have an atmosphere stable to dry convection. Is this what you are getting at?
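That comparison can be written as a minimal check (the rounded DALR of 10 K/km is the value used in the example):

```python
# Stability to dry convection: compare the environmental lapse rate with
# the dry adiabatic lapse rate (DALR).  Values in K/km.
DALR = 10.0   # K/km, rounded as in the example above

def dry_stability(environmental_lapse_rate):
    """Classify an atmosphere's stability to dry convection."""
    if environmental_lapse_rate < DALR:
        return "stable"
    if environmental_lapse_rate > DALR:
        return "unstable"
    return "neutral"

print(dry_stability(6.0))   # the example above: stable to dry convection
```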

I was having a look at this the other day.

In fact, the equation for vertical motion (unit mass) is:

∂w/∂t + 1/ρ.∂p/∂z + g = Fz

where w = vertical velocity, ρ = density, p = pressure, z = height, g = acceleration due to gravity and Fz = frictional force

Typical vertical velocities (not accelerations) are a few cm/s, in convection of cumulus clouds 1-2 m/s and for cumulonimbus clouds can be up to a few 10s of m/s.

So it is pretty difficult to get ∂w/∂t > 0.5 m/s², or > 5% of gravity, but occasionally it will affect the calculation.

The Coriolis term should be in the equation as well if we want to be picky, but that is very small compared with gravity.
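A rough order-of-magnitude check on the acceleration term, using the velocities quoted above (the 60 s timescale over which the velocity develops is an assumption for illustration):

```python
# Estimate dw/dt ~ w / timescale for the typical vertical velocities quoted
# above, and compare with g.  The timescale is an assumed value.
g = 9.81                 # m/s^2
timescale = 60.0         # s, assumed time over which the velocity develops

cases = {"synoptic": 0.05, "cumulus": 1.5, "cumulonimbus": 20.0}   # m/s

for name, w in cases.items():
    dwdt = w / timescale                  # m/s^2, crude acceleration estimate
    print(f"{name}: dw/dt ~ {dwdt:.4f} m/s^2 = {100 * dwdt / g:.2f}% of g")
```

Even the cumulonimbus case comes out at only a few percent of g on this crude estimate, consistent with the point above.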

How big is the frictional term?

Also there is a little complication with the hydrostatic equilibrium equation as far as density is concerned because in the derivation the density of the parcel is assumed to be the same as the density of the atmosphere at that point, which isn’t really true, but again is pretty close if you work through the derivation and see what conditions you need for the result not to be correct.

And unless you have dry air, this equation is not correct – and given that air does have some water vapor we need that to get the correct result for the real lapse rate.

If you see my point – the DALR is a useful starting point to understand atmospheric stability (in the vertical direction) and get a value for the expected maximum lapse rate. But it doesn’t get you to 1% accuracy because it isn’t the complete solution.

Of course. It’s just mechanics – you move some air and what happens? Is there a restoring force back to where it came from? Or is there an acceleration in the direction of the original impulse? If there is a restoring force the atmosphere is stable, and if there is an acceleration in the direction of the impulse then the atmosphere is unstable.

The buoyancy frequency is calculated from these considerations:

N = [(g/θ).dθ/dz]^{1/2}, where θ = potential temperature

And the motion is an oscillation with period 2π/N – so long as N is real. So the change in potential temperature with height determines the strength of the restoring force, not whether there is one.
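As a numerical illustration (the potential temperature and its gradient are assumed values, typical of the mid-troposphere):

```python
import math

# Buoyancy (Brunt-Vaisala) frequency N = [(g/theta) * dtheta/dz]^(1/2)
# and the corresponding oscillation period 2*pi/N.
g = 9.81            # m/s^2
theta = 290.0       # K, assumed potential temperature
dtheta_dz = 3.0e-3  # K/m, assumed gradient (3 K/km)

N = math.sqrt((g / theta) * dtheta_dz)   # s^-1
period = 2 * math.pi / N                 # s
print(f"N = {N:.4f} s^-1, period = {period / 60:.1f} minutes")
```

A stable gradient of a few K/km gives a period of roughly ten minutes, which is typical of buoyancy oscillations in the troposphere.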

I am working on an article on potential temperature with some material about atmospheric stability.

on January 24, 2012 at 3:54 am | scienceofdoom

Just a note that when I said “I don’t understand this question” I meant “as you seem to understand this quite well, I don’t understand why you are asking this question”.

on January 24, 2012 at 3:12 pm | Frank

SOD: What I think I understand (presumably because I have seen it somewhere before):

I raise a parcel of air from h1, P1, and T1 (which determines a density rho1) to h2, P2, T2, and rho2. The air surrounding my parcel has h3, P3, T3 and rho3, but surrounding means that h2 = h3 and the parcel will expand or contract until P2 = P3. T3 and T2 aren’t required to be equal and neither are the corresponding rhos. If rho2 < rho3, the parcel keeps rising; and if the environmental lapse rate is greater than g/Cp (another inequality), the atmosphere is unstable.

I suspect that you have done this derivation with differentials rather than “deltas”, but I’m still looking for an inequality somewhere. Your derivation is not concerned with the possibility that rho2 ≠ rho3 and T3 ≠ T2. The “conditions” under which your derivation is “true” now seem to be that a parcel will be buoyancy-neutral compared with the surrounding air wherever it goes, but you don’t explicitly discuss buoyancy or density. You’ve derived some properties of air “moving” adiabatically in an atmosphere, but when expressed in differential terms, it is hard to “see” any motion. Does this make the problem any clearer?

on January 24, 2012 at 7:19 pm |scienceofdoomJust a note that (annoyingly), WordPress doesn’t publish > or < signs in comments because it thinks they are html tags. To get these to appear in your comment you need to use the following, but without the spaces & gt ; and & lt ;

on January 24, 2012 at 7:27 pm | scienceofdoom

Frank,

Any clearer? Not sure.

So I’ll plough on..

Hard to see any motion? Yes, because the equation for adiabatic expansion with height just tells us the new temperature of a parcel of air. It is only about forced motion.

However, once the parcel has its new temperature and therefore its new density F=ma steps in.

Perhaps this next point is the key in your comment:

So the next article on potential temperature will also attempt to address buoyancy and density.

on January 25, 2012 at 5:24 am | Frank

SOD: Trying to explain the difficulty helped bridge the gap. The adiabatic lapse rate tells me how the temperature – and the density (which is more directly connected to convection) – of a parcel would change IF it changed altitude, and the environmental lapse rate tells me how the nearby atmosphere does vary with height.

It suddenly dawns on me that the adiabatic lapse rate is a differential eqn: dT/dz = f(z). Normally we’d convert to dT = f(z).dz and integrate from z1 to z2 (the missing motion I’ve been looking for), but the simplicity of the answer and the fact that buoyancy wasn’t directly involved made it a little harder to connect the purely mathematical derivation to real parcels of air.

Inequality sign practice: h2>h3

on January 25, 2012 at 5:26 am | Frank

second try at inequalities h2 & gt ; h3

on January 25, 2012 at 5:27 am | Frank

third try h2 > h3

on October 10, 2015 at 8:44 pm | Frank

SOD: At an untrustworthy source, I found a modification of this derivation of the lapse rate to include turbulent dissipation, a subject I know nothing about.

dU = dQ – pdV = 0 for all adiabatic processes? or just isentropic adiabatic processes?

They have added a D (or dD) to this equation

dQ – pdV + D = 0

D looks like energy, not entropy. If I wanted to convert entropy to energy, in chemistry I’d multiply by temperature: D = TdS, but I can’t find this substitution anywhere in the mess.

An isentropic flow presumably means there is no entropy change during flow. I’d guess that if entropy increases we are talking about turbulent dissipation, but is it dissipation of energy or of entropy?

The final struggle came with the assertion that when the lapse rate is not adiabatic, then there is turbulent dissipation. So, if I let an isolated column of gas equilibrate in a gravitational field and end up with an isothermal gradient (instead of adiabatic), turbulent dissipation is responsible.

dKE/dt + dPE/dt = W – D

Can you or someone else shed some light on this?

on October 10, 2015 at 10:34 pm | Mike M.

Frank,

“dU = dQ – pdV = 0 for all adiabatic processes? or just isentropic adiabatic processes?”

In general, dU = TdS -pdV assuming that there is no change in composition or external fields. So they have assumed dQ = TdS, i.e. a reversible process. An adiabatic process would have dQ = 0. dU = 0 could be a process in an isolated system, or an isothermal process for an ideal gas, or any number of other possibilities.

“They have added a D (or dD) to this equation dQ – pdV + D = 0”

I guess D is some sort of dissipation term? This seems extremely cavalier.

“The final struggle came with the assertion that when the lapse rate is not adiabatic, then there is turbulent dissipation.”

A lapse rate greater than adiabatic would be unstable, creating strong convection that would reduce the lapse rate toward adiabatic. A lapse rate less than adiabatic would be stable, so no convection unless externally induced by something like wind shear.

An isothermal gradient would be the result of thermodynamic equilibrium, which would require no dissipation.

Sounds like your untrustworthy source should be ignored. Sounds like it might be some clown arguing that the adiabatic lapse rate is a consequence of thermodynamic equilibrium in a gravitational field.

on October 12, 2015 at 9:43 pm | Frank

Mike and DeWitt: Thanks for the replies, but I’m probably still missing something. Copying and pasting from Wikipedia on isentropic (flow):

For a closed system, the total change in energy of a system is the sum of the work done and the heat added,

dU = dW + dQ

The reversible work done on a system by changing the volume is,

dW = -pdV

where p is the pressure and V is the volume. The change in enthalpy (H = U + pV) is given by,

dH = dU + pdV + Vdp

Then for a process that is both reversible and adiabatic (i.e. no heat transfer occurs), dQ_{rev} = 0, and so dS = dQ_{rev}/T = 0. All reversible adiabatic processes are isentropic. This leads to two important observations,

dU = dW + dQ = -pdV + 0 and [Equation 1]

dH = dW +dQ + pdV + Vdp = -pdV + 0 + pdV + Vdp = Vdp

Next, a great deal can be computed for isentropic processes of an ideal gas. For any transformation of an ideal gas, it is always true that

dU = nC_vdT, and dH = nC_pdT

Using the general results derived above for dU and dH, then

dU = nC_vdT = -pdV, and [Equation 2]

dH = nC_pdT = Vdp
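Equation 2 can be checked numerically: along a reversible adiabat, nC_vdT should equal the work -∫pdV done on the gas. A minimal sketch, with one mole of a monatomic ideal gas and an arbitrary initial state assumed for illustration:

```python
# Numerical check of Equation 2 above: along a reversible adiabat of an
# ideal gas, the change in internal energy n*Cv*dT equals the work done
# on the gas, -integral(p dV).  One mole of a monatomic gas is assumed.
R = 8.314            # J/(mol K)
n = 1.0              # mol, assumed
Cv = 1.5 * R         # monatomic ideal gas
gamma = 5.0 / 3.0

T1, V1 = 300.0, 0.0224   # K, m^3: assumed initial state
V2 = 0.0112              # m^3: compress to half the volume
T2 = T1 * (V1 / V2) ** (gamma - 1.0)   # adiabatic relation T*V^(gamma-1) = const

dU = n * Cv * (T2 - T1)  # J, change in internal energy

# Work done on the gas: -integral p dV along the adiabat p*V^gamma = const,
# evaluated by the midpoint rule
steps = 10000
work = 0.0
for i in range(steps):
    Va = V1 + (V2 - V1) * i / steps
    Vb = V1 + (V2 - V1) * (i + 1) / steps
    Vm = 0.5 * (Va + Vb)
    p = n * R * T1 * V1 ** (gamma - 1.0) / Vm ** gamma   # Pa
    work += -p * (Vb - Va)

print(f"n*Cv*dT = {dU:.1f} J, -integral(p dV) = {work:.1f} J")
```

The two numbers agree to the accuracy of the numerical integration, which is the content of Equation 2.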

Equation 2 is used by SOD above in deriving the adiabatic lapse rate. So how do we modify this derivation when dS is greater than dQ_{rev}/T – when some part of the process is not reversible. It was my understanding that dQ = 0 for an adiabatic process and therefore that there would be no change in entropy for any adiabatic process – which may be wrong. It looks like the proper rule is dQ = 0 only for a reversible adiabatic process. Mike tells me:

dU = TdS – pdV

which appears to be equivalent to saying that dQ is TdS (or perhaps in other formulations a term, D, for dissipation).

dU = dW + dQ = -pdV + TdS and [Equation 1′]

nC_vdT = TdS – pdV [Equation 2′]

Copying and pasting from SOD’s math section above:

Vdp + pdV = (Cp-Cv)dT ….[8]

Vdp = -Mgdz ….[9]

pdV = TdS – CvdT ….[Frank’s modification]

-Mgdz + TdS – CvdT = (Cp-Cv)dT

-Mgdz + TdS = CpdT

At this point, SOD divides by M to get the specific heat capacity cp and I can do so to get the specific entropy change ds. Rearranging terms:

-gdz + Tds = cpdT

(-g/cp)*dz + (T/cp)*ds = dT

-g/cp + (T/cp)*ds/dz = dT/dz

All of which is useless at first glance. (My intuition doesn’t even tell me the sign of ds/dz.) If the atmosphere is isothermal, dT/dz = 0, and I apparently know something about ds/dz.

on October 12, 2015 at 10:15 pm | DeWitt Payne

Frank,

You don’t know the sign of dS/dz because it depends on dT/dz. For an isothermal atmosphere, dS/dz = g/T. Entropy increases with altitude. This is, by the way, exactly the same as saying that the potential temperature increases with altitude.
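This can be verified numerically from the definitions, using s = cp·ln(θ) (up to a constant) and the isothermal barometric pressure profile; dry-air values are assumed:

```python
import math

# Check dS/dz = g/T for an isothermal ideal-gas column, using the specific
# entropy s = cp * ln(theta) (up to a constant) and the barometric pressure
# profile.  Dry-air values are assumed.
g = 9.81          # m/s^2
T = 288.0         # K, isothermal temperature (assumed)
R_s = 287.0       # J/(kg K), specific gas constant of dry air
cp = 1005.0       # J/(kg K)
p0 = 101325.0     # Pa, surface pressure

def entropy(z):
    """Specific entropy (up to an additive constant) at height z (m)."""
    p = p0 * math.exp(-g * z / (R_s * T))   # isothermal barometric law
    theta = T * (p0 / p) ** (R_s / cp)      # potential temperature
    return cp * math.log(theta)

dz = 1.0
ds_dz = (entropy(1000.0 + dz) - entropy(1000.0)) / dz
print(f"numerical dS/dz = {ds_dz:.6f}, g/T = {g / T:.6f}  J/(kg K) per metre")
```

The finite difference reproduces g/T because, for the isothermal column, s(z) works out to cp·ln(T) + gz/T, which is linear in z.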

on October 13, 2015 at 12:43 am | Mike M.

Frank,

Not sure where the question is. But I think I can clear up one source of confusion. You wrote: ” It looks like the proper rule is dQ = 0 only for a reversible adiabatic process.”

For closed systems (no mass in or out) of constant composition:

dQ = 0 for any adiabatic process, and only for adiabatic processes.

dQ = TdS for reversible processes and only for reversible processes.

dW = -pdV for reversible processes and only for reversible processes.

dU = TdS – pdV always

dU = dQ + dW always

The individual equivalences of the terms on the r.h.s. of the last two equations only apply for reversible processes.

on October 11, 2015 at 1:41 am | DeWitt Payne

Frank,

Entropy doesn’t dissipate. It only increases. An isolated isothermal column of gas in a gravitational field has maximum entropy. No turbulent atmosphere, however, and any planetary atmosphere must be turbulent, can be isothermal. A less than adiabatic lapse rate may not generate convection, but that doesn’t mean it’s stable. I had this discussion with Nick Stokes a long time ago.

Note that by planet, I mean an object of sufficient size orbiting a star closely enough to have a gaseous atmosphere. Isolated bodies in intergalactic space aren’t planets.

on October 13, 2015 at 9:18 am | Frank

Mike: You wrote:

dU = TdS – pdV always

dU = dQ + dW always

Earlier I thought this meant dQ = TdS and dW = -pdV. Now you have clarified that this is accurate only for reversible processes. So it looks like the above math is correct despite my confusion.

DeWitt: Yes I realize that entropy doesn’t “dissipate”. If I understand correctly, entropy always increases when dissipation occurs. I’m not sure whether turbulent flow implies dissipation.

on October 13, 2015 at 2:59 pm | DeWitt Payne

It does. In fact, I suspect that all flows in the atmosphere involve dissipation at some point. Otherwise, wind speeds would increase without limit, as the solar energy input that drives them is continuous.