During a discussion about Venus (Venusian Mysteries), Leonard Weinstein suggested a thought experiment that prompted a second article of mine. Unfortunately, that article failed to address his point.
In fact, it took me a long time to get to grips with Leonard’s point, and 500 comments in (!) I suggested that we write a joint article.
I also invited Arthur Smith who probably agrees mostly with me, but at times he was much clearer than I was. And I’m not sure we are totally in agreement either. I did offer Leonard the opportunity to have another contributor on his side, but he is happy to write alone – or draw on one of the other contributors in forming his article. The format here is quite open.
The plan is for me to write the first section, and then Arthur to write his, followed by Leonard. The idea behind it is to crystallize our respective thoughts so that others can review them, rather than wading through 500+ comments.

What was the Original Discussion About?
It’s worth repeating Leonard Weinstein’s original thought experiment:
Consider Venus with its existing atmosphere, and put a totally opaque enclosure (to incoming Solar radiation) around the entire planet at the average location of present outgoing long wave radiation. Use a surface with the same albedo as present Venus for the enclosure. What would happen to the planetary surface temperature over a reasonably long time? For this case, NO Solar incoming radiation reaches the surface. I contend that the surface temperature will be about the same as present.
Those who are interested in that debate can read the complete idea and the many comments that followed. During the course of our debate we each posed different thought experiments as a means to finding the flaws in the various ideas.
At times we lost track of which experiment was being considered. Many times we didn’t quite understand the ideas that were posed by others.
And therefore, before I start, it’s worth saying that I might still misrepresent one of the other points of view. But not intentionally.
Introductory Ideas – Pumping up a Tire
This first section is uncontroversial. It is simply aimed at helping those unfamiliar with terms like adiabatic expansion. Unfortunately, it will be brief, with more Wikipedia links than usual (if there are many questions on these basics then I might write another article).
So let’s consider pumping up a bicycle tire. When you do pump up a tire you find that around the valve everything can get pretty hot. Why is that? Does high pressure cause high temperature? Let’s review two idealized ways of compressing an ideal gas:
- isothermal compression – which is so slow that the temperature of the gas doesn’t rise
- adiabatic compression – which is so fast that no heat escapes from the gas during the process
Pressure and volume of a gas are inversely related if temperature is kept constant – this is Boyle’s law.
Imagine pumping up a tire very very slowly. Usually this isn’t possible because the valve leaks.
If it were possible you would find that the work done in compressing the gas didn’t increase the gas temperature, because the heat generated in the gas would dissipate to the wheel rim and the surrounding atmosphere.
Now imagine pumping up a tire very quickly. The usual way. In this case, you are adding energy to the system and there is no time for the temperature to equalize with the surroundings, so the temperature increases (because work is done on the gas).
The ideal gas laws can be confusing because three important terms exist in the one equation: pressure, volume and temperature:
PV = nRT or PV = NkT
where P = pressure, V = volume, T = absolute temperature (in K), n = number of moles, R = the gas constant, N = number of molecules and k = Boltzmann’s constant
So the two examples above give the two extremes of compression. One, the isothermal case, has the temperature held constant because the process is very slow, and one, the adiabatic case, has the energy leaving the system being zero because the process is so fast.
In a nutshell, high pressures do not, of themselves, cause high temperatures. But changing the pressure – i.e., compressing a gas – does increase the temperature if it is done quickly.
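To make the two limits concrete, here is a minimal numeric sketch. The values are illustrative assumptions (an ideal diatomic gas halved in volume), not a model of a real bicycle pump:

```python
# Toy comparison of isothermal vs adiabatic compression of an ideal gas.
# Illustrative assumptions: diatomic gas (gamma = 1.4), starting at 300 K,
# compressed to half its volume.
gamma = 1.4          # ratio of specific heats, assumed diatomic
T1 = 300.0           # initial temperature (K), assumed
compression = 2.0    # V1 / V2

# Isothermal: temperature held constant; Boyle's law says pressure doubles.
T_isothermal = T1

# Adiabatic: T * V^(gamma - 1) is constant, so T2 = T1 * (V1/V2)^(gamma - 1).
T_adiabatic = T1 * compression ** (gamma - 1)

print(f"isothermal: {T_isothermal:.0f} K")  # stays at 300 K
print(f"adiabatic:  {T_adiabatic:.0f} K")   # rises to ~396 K
```

The fast (adiabatic) squeeze warms the gas by nearly 100 K in this toy case, while the infinitely slow squeeze warms it not at all.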
Introductory Ideas – The “Environmental Lapse Rate” and Convection
Equally importantly, adiabatic expansion reduces the temperature in a gas.
If you lift air up in the atmosphere quickly then it will expand and cool. In dry air, some simple maths calculates this expansion as a temperature drop of just under 10K per km. In very moist air, this temperature drop can be as low as 4K per km. (The actual value depends on the amount of moisture).
Imagine the (common) situation where due to pressure effects a “parcel of air” is pushed upwards a small way, say 100m. Under adiabatic expansion, the temperature will drop somewhere between 1K (1°C) for dry air and 0.4K for very moist air.
Suppose that the actual atmospheric temperature profile is such that the temperature 100m higher up is 1.5K cooler. (We would say that the environmental lapse rate was 15K/km).
In this case, the parcel of air pushed up is now warmer than the surrounding air and, therefore, less dense – so it keeps rising. This is the major idea behind convection – if the environmental lapse rate is “more than” the adiabatic lapse rate then convection will redistribute heat. And if the environmental lapse rate is “less than” the adiabatic lapse rate then the atmosphere tends to be stable against convection.
Note – the terminology can be confusing for newcomers. Even though temperature decreases as you go up in the atmosphere the adiabatic lapse rate is written as a positive number. Just imagine that the temperature in the atmosphere actually decreases by 1K per km and think what happens if the adiabatic lapse rate is 10K per km – air that is lifted up will be much colder than the surrounding atmosphere and sink back down.
Now imagine that the temperature decreases by 15K per km and think what happens if the adiabatic lapse rate is 10K per km – air that is lifted up will be much warmer than the surrounding atmosphere (so will expand and be less dense) and will keep rising.
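The stability test described above can be sketched in a few lines of code. All the numbers here are illustrative assumptions, not measurements:

```python
# Convective stability test: lift a "parcel of air" a short distance and
# compare its adiabatically cooled temperature with the surrounding air.
# All numbers are illustrative assumptions.
def parcel_rises(environmental_lapse, adiabatic_lapse, dz_km=0.1, t_surface=288.0):
    """True if the lifted parcel is warmer than its surroundings (unstable).
    Lapse rates in K/km, written as positive numbers (cooling with height)."""
    t_parcel = t_surface - adiabatic_lapse * dz_km
    t_environment = t_surface - environmental_lapse * dz_km
    return t_parcel > t_environment

print(parcel_rises(15.0, 10.0))  # True  - environment cools faster, convection
print(parcel_rises(1.0, 10.0))   # False - lifted air is colder, sinks back
```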
All of this so far described is uncontentious.
The Main Contention
Armed with these ideas, the main contentious point from my side was this:
If you heat a gas sufficiently from the bottom, convection will naturally take place to redistribute heat. The environmental “lapse rate” can’t be sustained at more than the adiabatic lapse rate because convection will take over. This is the case with the earth, where most of the solar radiation is absorbed by the earth’s surface.
But if you heat a gas from the top (as in the original proposed thought experiment) then there is no mechanism to create the adiabatic lapse rate. This is because there is no mechanism to create convection. So we can’t have an atmosphere where the environmental lapse rate is greater than the adiabatic lapse rate – but we can have one where it is less.
Convection redistributes heat because of natural buoyancy, but convection can’t be induced to work the other way.
Well, maybe it’s not quite as simple as that.
The Very Tall Room Full of Gas
Leonard suggested – Take an empty room 1km square and 100km high and pour in gas at 250K from the top. The gas doesn’t absorb or emit any radiation. What happens?
The gas is adiabatically compressed (due to higher pressure below) and the gas at the bottom ends up at a much higher temperature.
Another way to think about adiabatic compression is that height (potential energy) is converted to speed (kinetic energy) because of gravity – like dropping a cannon ball.
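As a rough sketch of the magnitude of this compressional warming – assuming, purely for illustration, an Earth-like dry adiabatic lapse rate of g/cp (neither g nor the gas were specified in the thought experiment):

```python
# Rough magnitude of compressional warming in the 100 km tall room, if the
# settled gas follows a dry adiabatic profile. g and cp are assumed
# Earth-like / air-like - neither was specified in the thought experiment.
g = 9.8          # gravitational acceleration (m/s^2), assumed
cp = 1005.0      # specific heat at constant pressure (J/(kg K)), assumed
T_top = 250.0    # temperature of the gas poured in at the top (K)
height = 100e3   # height of the room (m)

lapse = g / cp                      # dry adiabatic lapse rate ~ 0.0098 K/m
T_bottom = T_top + lapse * height   # roughly 1225 K at the floor
print(f"bottom temperature ~ {T_bottom:.0f} K")
```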
We all agree on that – but what happens afterwards? (And I think we were all assuming that a lid is placed over the top of the tall room and the lid effectively stays at a temperature of 250K due to external radiation – however, no precise definition of the temperature of the room’s walls and lid was made).
My view – over a massively long time the temperature at the top and bottom will eventually reach the same value. This seemed to be the most contentious point.
However, in saying that, there was a lot of discussion about the exact state of the gas, so at times I wondered whether fundamental thermodynamics was up for discussion or whether we simply didn’t understand each other’s thought experiments.
In making this claim that the gas will become isothermal (all at the same temperature), I am assuming that the gas will eventually be stationary on a large scale (obviously the gas molecules move as their temperature is defined by their velocity). So all of the bulk movements of air have stopped.
Conduction of heat is left as the only mechanism for movement of heat and as gas molecules collide with each other they will all eventually reach the same temperature – the average temperature of the gas. (Because external radiation to and from the lid and walls wasn’t defined this will affect what final average value is reached). Note that temperature of a gas is a bulk property, so a gas at one temperature has a distribution of velocities (the Maxwell-Boltzmann distribution).
The Tall Room when Gases Absorb and Emit Radiation
We all appeared to agree that in this case (radiatively-absorbing gases), as the atmosphere becomes optically thin, radiation will move heat very effectively and the top part of the atmosphere in this very tall room will become isothermal.
Heating from the Top
The viewpoint expressed by Leonard is that differential heating (night vs day, equatorial vs polar latitudes) will eventually cause large scale circulation, thus causing bulk movement of air down to the surface with the consequent adiabatic heating. This by itself will cause the environmental lapse rate to become very close to the adiabatic lapse rate.
I see it as a possibility that I can’t (today) disprove, but Leonard’s hypothesis itself seems unproven. Is there enough energy to drive this circulation when an atmosphere is heated from the top?
I found two considerations of this idea.
One was the Sandstrom theorem, which considered heating a fluid from the bottom vs heating it from the top. There is more comment in the earlier article. I guess you could say Sandstrom said no, although others have picked some holes in it.
The other was in Atmospheres (1972) by the great Richard M. Goody and James C. Walker. In a time when only a little was known about the Venusian atmosphere, Goody & Walker suggested first that probably enough solar radiation made it to the surface to initiate heating from below (to cut a long story short). And later made this comment:
Descending air is compressed as it moves to lower levels in the atmosphere. The compression causes the temperature to increase. If the circulation is sufficiently rapid, and if the air does not cool too fast by emission of radiation, the temperature will increase at the adiabatic rate. This is precisely what is observed on Venus.
Venera and Mariner Venus spacecraft have all found that the temperature increases adiabatically as altitude decreases in the lower atmosphere. As we explained this observation could also be the result of thermal convection driven by solar radiation deposited at the ground, but we cannot be sure that the radiation actually reaches the ground.
What we are now suggesting as an alternative explanation is that the adiabatic temperature gradient is related to a planetary circulation driven by heat supplied unevenly to the upper levels of the atmosphere. According to this theory, the high ground temperature is caused, at least in part, by compressional heating of the descending air.
In the specific case of the real Venus (rather than our thought experiments), much more has been uncovered since Goody and Walker wrote. Perhaps the question of what happens in the real Venus is clearer – one way or the other.
What do I conclude?
I’m glad I’ve taken the time to think about the subject because I feel like I understand it much better as a result of this discussion. I appreciate Leonard especially for taking the time, but also Arthur Smith and others.
Before we started discussing I knew the answers for certain. Now I’m not so sure.
_____________________________________________________________________
By Arthur Smith
First on the question of convective heat flow from heating above, which scienceofdoom just ended with: I agree some such heat flow is possible, but it is difficult. Goody and Walker were wrong if they felt this could explain high Venusian surface temperatures.
The foundation for my certainty on this lies in the fundamental laws of thermodynamics, which I’ll start by reviewing in the context of the general problem of heat flow in planetary atmospheres (and the “Very Tall Room Full of Gas”). Note that these laws are very general and based in the properties of energy and the statistics of large numbers of particles, and have been found applicable in systems ranging from the interior of stars to chemical solutions and semiconductor devices and the like. External forces like gravitational fields are a routine factor in thermodynamic problems, as are complex intermolecular forces that pose a much thornier challenge. The laws of thermodynamics are among the most fundamental laws in physics – perhaps even more fundamental than gravitation itself.
I’m going to discuss the laws out of order, since they are of various degrees of relevance to the discussion we’ve had. The third law (defining behavior at zero temperature) is not relevant at all and won’t be discussed further.
The First Law
The first law of thermodynamics demands conservation of energy:
Energy can be neither created nor destroyed.
This means that in any isolated system the total energy embodied in the particles, their motion, their interactions, etc. must remain constant. Over time such an isolated system approaches a state of thermodynamic equilibrium where the measurable, statistically averaged properties cease changing.
In our previous discussion I interpreted Leonard’s “Very Tall Room Full of Gas” example as such a completely isolated system, with no energy entering or leaving. Therefore it should, eventually at least, approach such a state of thermodynamic equilibrium. Scienceofdoom above interpreted it as being in a condition where the top of the room was held at a given specific temperature. That condition would allow energy to enter and leave over time, but eventually the statistical properties would also stop changing, and then energy flow through that top surface would also cease, total energy would be constant, and you would again arrive at an equilibrium system (but with a different total energy from the starting point).
That would also be the case in Leonard’s original thought experiment concerning Venus if the temperature of the “totally opaque enclosure” was a uniform constant value. The underlying system would reach some point where its properties ceased changing, and then with no energy flow in or out, it would be effectively isolated from the rest of the universe, and in its own thermodynamic equilibrium. However, Leonard allows the temperature of his opaque enclosure to vary with latitude and time of day which means that strictly such a statistical constancy would not apply and the underlying atmosphere would not be completely in thermodynamic equilibrium. I’ll look at that later in discussing the restrictions imposed by the second law.
In a system like a planetary atmosphere with energy flowing through it from a nearby star (or from internal heat) and escaping into the rest of the universe, you are obviously not isolated and would not reach thermodynamic equilibrium. Rather, if a condition where averaged properties cease changing is reached, this is referred to as a steady state. Under steady state conditions the first law must still be obeyed. Since internal statistical properties are unchanging, that means the system must not be gaining or losing any internal energy. So in steady state you have a balance between incoming and outgoing energy from the system, enforced by the first law of thermodynamics.
If such an atmospheric system is NOT in steady state, if there is, say, more energy coming in than leaving, then the total energy embodied in the particles of the system will increase. That higher average energy per particle can be measured as an increase in temperature – but that gets us to the definition of temperature.
The Zeroth Law
The zeroth law essentially defines temperature:
If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other.
Here thermal equilibrium means that when the systems are brought into physical proximity so that they may exchange heat, no heat is actually exchanged. A typical example of the zeroth law is to make the “third system” a thermometer, something that you can use to read out a measurement of its internal energy level. Any number of systems can act as a thermometer: the volume of mercury liquid in an evacuated bulb, the resistance of a strip of platinum, or the pressure of a fixed volume of helium gas, for example.
If you divide a system “A” in thermodynamic equilibrium into two pieces, “A1” and “A2”, and then bring those two into physical proximity again, no heat should flow between them, because no heat was flowing between them before separating them since neither one’s statistical properties were changing. I.e. Any two subsystems of a system in thermodynamic equilibrium must be in thermal equilibrium with each other. That means that if you place a thermometer to measure the temperature of subsystem “A1”, and find a temperature “T” for thermal equilibrium of the thermometer with “A1”, then subsystem “A2” will also be in thermal equilibrium with “T”, i.e. its temperature will also read out as the same value.
That is, the temperature of a system in thermodynamic equilibrium is the same as the temperature of every (macroscopic) subsystem – temperature is constant throughout. The zeroth law implies temperature must be a uniform property of such equilibrium systems.
This means that in both “Very Large Room” examples and for my version of Leonard’s original thought experiment for Venus (with a uniform enclosing temperature), the thermodynamic equilibrium that the atmosphere must eventually reach must have a constant and uniform temperature throughout the system. Temperature in the room or in the pseudo-Venus’ atmosphere would be independent of altitude – an isothermal, not adiabatic, temperature profile.
The zeroth law can actually be derived from the first and second laws – this is done for example in Statistical Physics, 3rd Edition Part 1, Landau and Lifshitz (Vol. 5) (Pergamon Press, 1980) – Chapter II, Thermodynamic Quantities, Section 9 – “Temperature” – and again the conclusion is the same:
Thus, if a system is in a state of thermodynamic equilibrium, the [absolute temperature] is the same for every part of it, i.e. is constant throughout the system.
The Second Law
The first and zeroth laws tell us what happens in the cases where the atmosphere can be characterized as in thermodynamic equilibrium, i.e. actually or effectively isolated from the rest of the universe after sufficient period of time that quantities cease changing. Under those conditions it must have a uniform temperature. But what about Leonard’s actual Venus thought experiment, where there are constant fluxes of energy in and out due to latitudinal and time-of-day variations in the temperature of the opaque enclosure? What can we say about the temperatures in the atmosphere below given heating from above under those conditions? Here the second law provides the primary constraint, and in particular the Clausius formulation:
Heat cannot of itself pass from a colder to a hotter body.
A planetary atmosphere is not driven by machines that move the air around, there are no giant fans pushing the air from one place to another. There is no incoming chemical or electrical form of non-thermal energy that can force things to happen. The driving force is the flux of incoming energy from the local star that brings heat when it is absorbed. All atmospheric processes are driven by the resulting temperature differences. Thanks to the first law of thermodynamics each incoming chunk of energy can be accounted for as it is successively absorbed, reflected, re-emitted and so forth until it finally leaves again as thermal radiation to the rest of the universe. In each of these steps the energy is spontaneously exchanged from a portion of the atmosphere at one temperature to another portion at another temperature.
What the second law tells us, particularly in the above Clausius form, is that the net spontaneous energy exchange describing the flow of each chunk of incoming energy to the atmosphere MUST ALWAYS BE IN THE DIRECTION OF DECREASING TEMPERATURE. Heat flows “downhill” from high to low temperature regions. The incoming energy starts from the star – very high temperature. Once absorbed somewhere in the atmosphere or at the planetary surface, it must then pass through successively colder and colder portions of the system before it can escape to space (where the temperature is 2.7 K).
There can be no net flow of energy from colder to hotter regions. And that means, if the atmosphere below Leonard’s “opaque enclosure” is at a higher temperature than any point on the enclosure, heat must be flowing out of the atmosphere, not inward. The enclosure, no matter the distribution of temperatures on its surface, cannot drive a temperature below it that is any higher than the highest temperature on the enclosure itself.
So even in the non-equilibrium case represented by Leonard’s original thought experiment, while the atmosphere’s temperature will not be everywhere the same, it will nowhere be any hotter than the highest temperature of the enclosure, after sufficient time has passed for such statistical properties to stop changing.
The thermodynamic laws are the fundamental governing laws regarding temperature, heat, and energy in the universe. It would be extraordinary if they were violated in such simple systems as these gases under gravitation that we have been discussing. Note in particular that any violation of the second law of thermodynamics allows for the creation of a “perpetual motion machine”, a device legions of amateurs have attempted to create with nothing but failure to show for it. Both the first and second laws seem to be very strictly enforced in our universe.
Approach to Equilibrium
The above results on temperatures apply under equilibrium or steady state conditions, i.e. after the “measurable, statistically averaged properties cease changing.” That may perhaps take a long time – how long should we expect?
The heat content of a gas is given by the product of the heat capacity and temperature. For the Venus case we’re starting at 740 K near the surface and, under either of the “thought experiment” cases, dropping to something like 240 K in the end, about 500 degrees. Surface pressure on Venus is 93 Earth atmospheres, so in every square meter we have a mass of close to 1 million kg of atmosphere above it. [Quick calculation: 1 earth atmosphere = 101 kPa, or 10,300 kg of atmosphere per square meter, or 15 pounds per square inch. On Venus it’s 1400 pounds/sq inch.] The atmosphere of Venus is almost entirely carbon dioxide, which has a heat capacity of close to 1 kJ/kgK (see this reference). That means the heat capacity of the column of Venus’ atmosphere over 1 square meter is about one billion (10⁹) J/K.
So a temperature change of 500 K amounts to 500 billion joules = 500 GJ for each square meter of the planetary surface. This is the energy we need to flow out of the system in order for it to move from present conditions to the isothermal conditions that would eventually apply under Leonard’s thought experiment.
Now from scienceofdoom’s previous post we expect at least an initial heat flow rate out of the atmosphere of 158 W/m² (that’s the outgoing flow that balances incoming absorption on Venus – since we’ve lost incoming absorption to the opaque shell, this ought to be roughly the initial net flow rate). Dividing this into 500 GJ/m² gives a first-cut time estimate for the cooling: 3.2 billion seconds, or about 100 years. So the cool-down to isothermal would be hardly immediate, but still pretty short on the scale of planetary change.
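That first-cut estimate can be reproduced directly. All values are as given above; the 158 W/m² initial flux is of course an assumption carried over from the earlier post:

```python
# Reproducing the first-cut cooling estimate for the Venus thought experiment.
# The 158 W/m^2 initial flux is an assumption carried over from the earlier post.
T_hot, T_cold = 740.0, 240.0   # present surface temperature vs final (K)
column_mass = 1e6              # kg of atmosphere above each m^2 (~93 bar)
cp = 1000.0                    # heat capacity of CO2, ~1 kJ/(kg K)
flux = 158.0                   # assumed initial net outgoing flux (W/m^2)

heat_to_remove = column_mass * cp * (T_hot - T_cold)   # 5e11 J/m^2 = 500 GJ/m^2
seconds = heat_to_remove / flux
years = seconds / (365.25 * 24 * 3600)
print(f"{heat_to_remove:.1e} J/m^2, about {years:.0f} years")
```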
Now we shouldn’t expect that 158 W/m² to hold forever. There are four primary mechanisms for heat flow in a planetary atmosphere: conduction (the diffusion of heat through molecular movements), convection (movement of larger parcels of air), latent heat flow (movement of materials within air parcels that change phases – from liquid to gas and back, for example, for water) and thermal radiation. The heat flow rate for conduction is simply proportional to the gradient in temperature. The heat flow rate for radiation is similar except for the region of the atmospheric “window” (some heat leaves directly to space according to the Planck function for that spectral region at that temperature). Latent heat flow is not a factor in Venus’ present atmosphere, though it would come into play if the lower atmosphere cooled below the point where CO2 liquefies at those pressures.
For convection, however, average heat flow rates are a much more complex function of the temperature gradient. Getting parcels of gas to displace one another requires some sort of cycle where some areas go up and some down, a breaking of the planet’s symmetry. On Earth the large-scale convective flows are described by the Hadley cells in the tropics and other large-scale cells at higher latitudes, which circulate air from sea level to many kilometers in altitude. On a smaller scale, where the ground becomes particularly warm then temperature gradients exceeding the adiabatic lapse rate may occur, resulting in “thermals”, local convective cells up to possibly several hundred meters. If the temperature difference between high and low altitudes is too low, the convective instability vanishes and heat flow through convection becomes much weaker.
So as temperatures come closer to isothermal in an atmosphere like Venus’, except for the atmospheric “window” for radiative heat flow, we would expect all the heat flow mechanisms to decrease, and convection in particular to almost cease after the temperature difference gets too small. So we might expect the cool-down to isothermal conditions to slow down and end up much longer than this 100-year estimate. How long?
Another of the thought experiment versions discussed in the previous thread involved removing radiation altogether; with both radiation and convection gone, that leaves only conduction as a mechanism for heat flow through the atmosphere. For an ideal gas the thermal conductivity increases as the 2/3 power of the density (it’s proportional to density times mean free path) and the square root of temperature (mean particle speed). While CO2 is not really ideal at 93 atmospheres at 740 K, using this rule gives us a rough idea of what to expect – at 1 atmosphere and 273 K we have a value of 14.65 mW/(m·K) so at 93 atmospheres and 740 K it should be about 500 mW/(m·K). For a temperature gradient of 10 K/km that gives a heat flux of 0.005 W/m². 500 GJ would then take about 10¹⁴ seconds, or 3 million years.
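The conduction-only estimate, following the same scaling argument (the ideal-gas scaling is itself an approximation at these pressures, so this is strictly a rough sketch):

```python
# Order-of-magnitude conduction-only cooling time, following the scaling
# argument in the text: k proportional to density^(2/3) * sqrt(T), applied
# to CO2 well outside its ideal-gas regime, so strictly a rough sketch.
k0 = 14.65e-3                                 # W/(m K) for CO2 at 1 atm, 273 K
k = k0 * 93 ** (2 / 3) * (740 / 273) ** 0.5   # ~0.5 W/(m K) at 93 atm, 740 K
gradient = 10.0 / 1000.0                      # 10 K/km as K/m
flux = k * gradient                           # ~0.005 W/m^2
seconds = 500e9 / flux                        # 500 GJ/m^2 to remove
years = seconds / 3.156e7
print(f"k ~ {k:.2f} W/(m K), flux ~ {flux:.4f} W/m^2, about {years:.1e} years")
```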
So the approach to an isothermal equilibrium state for these atmospheres would take between a few hundred and a few million years, depending on the conditions you impose on the system. Still, the planets are billions of years old, so if heating from above was really the mechanism at work on Venus we should see the evidence of it in the form of cooler surface temperatures there by now, even if radiative heat flow were not a factor at all.
The View From a Molecule
Leonard in our previous discussion raised the point that an individual molecule sees the gravitational field, causing it to accelerate downwards. So molecular velocities lower down should be higher than velocities higher up, and that means higher temperatures.
Leonard’s picture is true of the behavior of a molecule in between collisions with the other molecules. But if the gas is reasonably dense, the “mean free path” (the average distance between collisions) becomes quite short. At 1 atmosphere and room temperature the mean free path of a typical gas is about 100 nanometers. So there’s very little distance to accelerate before a molecule would collide with another; to consider the full effect you need to look at the effect of collisions due to gas pressure along with the acceleration by gravity.
An individual molecule in a system in thermodynamic equilibrium at temperature T has available a collection of states in phase space (position, momentum and any internal variables) each with some energy E. In the case of our molecule in a gravitational field, that energy consists of the kinetic energy ½mv² (m = mass, v = velocity) plus the gravitational potential energy = gmz (where z = height above ground). The Boltzmann distribution applies in equilibrium, so that the probability of the molecule being in a state with energy E is proportional to:
e^(-E/kT) = e^(-(½mv² + gmz)/kT).
So the Boltzmann distribution in this case specifies both the distribution of velocities (the standard Maxwell-Boltzmann distribution) and also an exponential decrease in gas density (and pressure) with height. It is very unlikely for a molecule to be at a high altitude, just as it is very unlikely for a molecule to have a high velocity. The high energy associated with rare high velocities comes from occasional random collisions building up that high speed. Similarly the high energy associated with high altitude comes from random collisions occasionally pushing a molecule to great heights. These statistically rare occurrences are both equally captured by the Boltzmann distribution. Note also that since the temperature is uniform in equilibrium, the distribution of velocities at any given altitude is that same Maxwell-Boltzmann distribution at that temperature.
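One consequence of this distribution worth sketching is the density scale height kT/(mg) – the e-folding distance of that exponential decrease. Using illustrative Earth-like values (assumptions for scale only, not part of the discussion above):

```python
# Density scale height H = kT/(mg) implied by the Boltzmann factor
# e^(-mgz/kT). Values are illustrative and Earth-like, purely for scale.
k_B = 1.380649e-23   # Boltzmann constant (J/K)
T = 288.0            # assumed uniform temperature (K) - the equilibrium case
m = 4.8e-26          # mean molecular mass, ~29 atomic mass units (kg)
g = 9.8              # gravitational acceleration (m/s^2)

H = k_B * T / (m * g)   # e-folding height of density and pressure
print(f"scale height ~ {H / 1000:.1f} km")
```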
Force Balance
The decrease in pressure with height produces a pressure-gradient force that acts on “parcels of gas” in the same way that the gravitational force does, but in the opposite direction. At equilibrium or steady-state, when statistical properties of the gas cease changing, the two forces must balance.
That leads to the equation of hydrostatic balance equating the pressure gradient force to the gravitational force:
dp/dz = – mng
(here p is pressure and n is the number density of molecules – N/V for N molecules in volume V). In equilibrium n(z) is given by the Boltzmann distribution:
n(z) = c·e^(-mgz/kT);
for the ideal gas pressure is given by p = nkT, so the hydrostatic balance equation becomes:
dp/dz = kT dn/dz = kT·c·(-mg/kT)·e^(-mgz/kT) = -mg·c·e^(-mgz/kT) = -m·n(z)·g
I.e. the Boltzmann distribution for this ideal gas system automatically ensures the system is in hydrostatic equilibrium.
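This consistency is easy to check numerically. A small sketch with assumed illustrative values (n0 is arbitrary):

```python
import math

# Numerical check that the Boltzmann density profile satisfies hydrostatic
# balance dp/dz = -m n g for an isothermal ideal gas. Values are assumed
# for illustration; n0 is arbitrary.
k_B, T, m, g = 1.380649e-23, 288.0, 4.8e-26, 9.8
n0 = 2.5e25   # number density at z = 0 (molecules per m^3), assumed

def n(z):
    """Boltzmann number-density profile."""
    return n0 * math.exp(-m * g * z / (k_B * T))

def p(z):
    """Ideal gas pressure p = n k T."""
    return n(z) * k_B * T

z, dz = 1000.0, 0.01
dpdz = (p(z + dz) - p(z - dz)) / (2 * dz)       # numerical pressure gradient
relative_error = abs(dpdz - (-m * n(z) * g)) / abs(dpdz)
assert relative_error < 1e-6
print("hydrostatic balance holds to within", relative_error)
```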
Another approach to this sort of analysis is to look at the detailed flow of molecules near an imaginary boundary. This is done in textbook calculations of the thermal conductivity of an ideal gas, for example, where a gradient in temperature results in net flow of energy (necessarily from hotter to colder). In our system with gravitational force and pressure gradients both must be taken into account in such a calculation. Such calculations are somewhat complex and depend on assumptions about molecular size and neglecting other interactions that would make the gas non-ideal, but the net effect must always satisfy the same thermodynamic laws as every other such system: in thermodynamic equilibrium temperature is uniform and there is no net energy flow through any imagined boundary.
In conclusion, after sufficient time that statistical properties cease changing, all these examples of a system with a Venus-like atmosphere must reach essentially the same isothermal or near-isothermal state. The gravitational field and adiabatic lapse rate cannot explain the high surface temperature on Venus if incoming solar radiation does not reach (at least very close to) the surface.
_____________________________________________________________________
By Leonard Weinstein
Solar Heating and Outgoing Radiation Balances for Earth and Venus
The basic heating mechanism for any planetary atmosphere depends on the balance and distribution of absorbed solar energy and outgoing radiated thermal energy. For a planet like Earth, the presence of a large amount of surface water and a relatively optically transparent atmosphere to sunlight dominates where and how the input solar energy and outgoing thermal energy create the surface and atmospheric temperatures. The unequal energy flux for day and night, and for different latitudes, combined with the planet rotation result in wind and ocean currents that move the energy around.
Earth is a much more complex system than Venus for these reasons, and also due to the biological processes and to changing albedo due to changes in clouds and surface ice. The average energy balance for the Earth was previously shown by Science of Doom, but is shown in Figure 1 below for quick reference:
The majority of solar energy absorbed by the Earth directly heats the land and water. Evaporation of water carries some of this energy to higher altitudes, where it is released by phase change (condensation). This energy is carried up by atmospheric convection.
In addition, convective heat transfer from the ground and oceans transfers energy to the atmosphere. It is the basic atmospheric temperature differences between day and night and at different latitudes that create the pressure differences that drive the wind patterns that eventually mix and transport the atmosphere, but the buoyancy of heated air from the higher temperature surface areas also aids in the vertical mixing.
This energy is carried by convection up into the higher levels of the atmosphere and eventually radiates to space. The combination of convection of water vapor and surface heated air upwards dominates the total transported energy from the ground level. In addition, some of the ground level thermal energy is radiated up, with a portion of the thermal radiation passing directly from the ground to space. Water vapor, CO2, clouds, aerosols, and other greenhouse gases also absorb some of this radiated energy, and slightly increase the lower atmosphere and ground temperatures with back radiation effectively adding to the initial radiation, resulting in a reduced net radiation flux. This results in a higher temperature atmosphere and ground than without these absorbing materials.
Venus, however, is dominated by direct absorption of solar energy into the atmosphere (including clouds) rather than by the surface, and so has a significantly different path to heating the atmosphere and ground. Venus has a very dense atmosphere (about 93 times the mass of Earth’s atmosphere), which extends to about 90 km altitude before the tropopause is reached. This is much higher than the Earth’s tropopause.
Very dense clouds, composed mostly of sulfuric acid, reach to about 75 km, and cover the planet. The clouds have virga beneath them due to the very high temperatures at lower elevations. The clouds and thick haze occupy over half of the main troposphere height, with a fairly clear layer starting below about 30 km altitude. Due to the very high density of the atmosphere, dust and other aerosol particles (from the surface winds and possibly from volcanoes) also persist in significant quantity.
The atmosphere is 96.5% CO2, and contains significant quantities of SO2 (150 ppm), and even some H2O (20 ppm), and CO (17 ppm). These, along with the sulfuric acid clouds, dust, and other aerosols, absorb most of the incoming sunlight that is not reflected away, and also absorb essentially all of the outgoing long wave radiation and relay it to the upper atmosphere and clouds to eventually dump it into space.
A sketch is shown in Figure 2, similar to the one used for Earth, which shows the approximate energy transfer in and out of the Venus atmosphere system. It is likely that almost all of the radiation out leaves from the top of the clouds and a short distance above, which therefore locks in the level of the atmospheric temperature at that location.
The surface radiation balance shown is my guess of a reasonable level. The top portion of the clouds reflects about 75% of incident sunlight, so that Venus absorbs an average of about 163 W/m², which is significantly less than the amount absorbed by Earth. About 50% of the available sunlight is absorbed in the upper cloud layer, and about 40% of the available sunlight is captured on the way down in the lower clouds, gases, and aerosols.
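These percentages can be checked with a quick calculation. In the sketch below, the solar irradiance at Venus's orbit (~2601 W/m²) is an assumed input, not a figure from the article:

```python
# Sketch checking the absorbed-solar figures quoted above.
S = 2601.0      # W/m^2, assumed solar irradiance at Venus's orbit
albedo = 0.75   # fraction reflected by the cloud tops

absorbed = S * (1 - albedo) / 4   # spherical average absorbed by the planet
surface = absorbed * 0.10         # ~10% of absorbed sunlight reaching the ground
print(round(absorbed), round(surface))
```

This reproduces the ~163 W/m² planetary average, and the ~10% surface share comes out near the ~17 W/m² quoted below.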
Thus the average solar energy that reaches the surface is only about 17 W/m², and the amount absorbed is somewhat less, since some is reflected. The question naturally arises: what is the source of the wind, weather, and temperature distribution on Venus, and why is Venus so hot at lower altitudes?
Venus takes 243 days to rotate. However, continual high winds in the upper atmosphere circle the planet at the equator in only about 4 days, so the day/night temperature variation is even smaller than it would otherwise be. Other circulation cells at different latitudes (Hadley cells) and some unique polar collar circulation patterns complete the main convective wind patterns.
The solar energy absorbed by the surface is a far smaller factor than for Earth, and I am convinced it is not necessary for the basic atmospheric and ground temperature conditions on Venus. The effect of absorbed solar radiation on a planet's atmosphere is to locally change the atmospheric pressures that drive the winds (and ocean currents if applicable), and these flow currents transport energy from one location and altitude to another. There is no specific reason the absorption and release of energy has to be from the ground to the atmosphere unless the vertical mixing from buoyancy is critical. I contend that direct absorption of solar energy into the atmosphere can accomplish the mixing, and that this, along with the fact that the radiation leaves from the top of the clouds and a short distance above, is in fact the cause of heating for Venus.
We observed that unlike Earth, where about 72% of the absorbed solar energy heats the surface, Venus has 10% or less absorbed by the ground. Also, the surface temperature of Venus (about 735 K) would result in a radiation level from the ground of about 16,600 W/m².
Since back radiation can’t exceed radiation up if the ground is as warm or warmer than the atmosphere above it, the only thing that can make the ground any warmer than the atmosphere above it is the ~17 W/m² (average) from solar radiation. The ground absorbed solar radiation plus absorbed back radiation has to equal the radiation out for constant temperature. If the absorbed solar radiation were all used to heat the ground, and the net radiation heat transfer was near zero (the most extreme case of greenhouse gas blocking possible), the average temperature of the ground would only be about 0.19 K warmer than the atmosphere above it, and the excess heat would need to be removed by surface convective driven heat transfer. The buoyancy would be extremely small, and contribute little to atmospheric mixing.
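Both numbers above follow from the Stefan-Boltzmann law. The sketch below reproduces them, assuming blackbody emissivity (an idealization) and linearizing σT⁴ to get the temperature excess needed to radiate away 17 W/m²:

```python
# Sketch reproducing the ~16,600 W/m^2 and ~0.19 K figures quoted above.
# Assumes a blackbody surface (emissivity = 1).
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 735.0               # Venus surface temperature, K

flux_up = sigma * T**4  # upward thermal radiation from the ground

# If 17 W/m^2 of absorbed sunlight must leave as *net* radiation, linearizing
# sigma*T^4 gives the required ground-to-air temperature difference:
dT = 17.0 / (4 * sigma * T**3)
print(round(flux_up), round(dT, 2))
```

The linearization is valid here because dT is tiny compared with T.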
However, the net radiation heat transfer out of the ground is almost surely equal to or larger than the average solar heating at the ground, from some limited transmission windows in the gas, and through a small net radiation flux. The most likely effect is that the ground is equal to or a small amount cooler than the lower atmosphere, and there is probably no buoyancy driven mixing. This condition would actually require some convective driven heat transfer from the atmosphere to the ground to maintain the ground temperature. Since the measured lower atmosphere and ground temperature are on the dry adiabatic lapse rate curve projected down from the temperature at the top of the cloud layer, the net ground radiation flux is probably close to the value of 17 W/m². This indicates that direct solar heating of the ground is almost certainly not a source for producing the winds and temperature found on Venus. The question still remains: what does cause the winds and high temperatures found on Venus?
The main point I am trying to make in this discussion is that the introduction of solar energy into the relatively cool upper atmosphere of Venus, along with the high altitude location of outgoing radiation, are sufficient to heat the lower atmosphere and surface to a much higher temperature even if no solar energy directly reaches the surface. Two simplified models are discussed in the following sections to support the plausibility of that claim. This issue is important because it relates to the mechanism causing greenhouse gas atmospheric warming, and the effect of changing amounts of the greenhouse gases.
The Tall Room
The first model is an enclosed room on Venus that is 1 km x 1 km x 100 km tall. This was selected to point out how adiabatic compression can cause a high temperature at the bottom of the room, with a far lower input temperature at the top. This is the type of effect that dominates the heating on Venus. While the first part of the discussion is centered on that room model, the analysis is also applicable to part of the second model, which examines a special simplified approximation of the full dynamics on Venus.
The conditions for the tall room model are:
1) A gas is introduced at the top of a perfectly thermally insulated fully enclosed room 1 km x 1 km x 100 km tall, located on the surface of Venus. The walls (and bottom and top) are assumed to have negligible heat capacity. The walls and bottom and top are also assumed to be perfect mirrors, so they do not absorb or emit radiation.
2) The supply gas temperature is selected to be 250 K. The gas pours in to fill all of the volume of the room. Sufficient quantity of gas is introduced so that the final pressure at the top of the room is at 0.1 bar at the end of inflow. The entry hole is sealed immediately after introduction of the gas is complete.
3) The gas is a monatomic gas such as Argon, so that it does not absorb or emit thermal radiation. This makes the problem radiation independent. I also put in a qualifier, to more nearly approximate the actual atmosphere, that the gas has a Cp like that of CO2 at the surface temperature of Venus [i.e., Cp = 1.14 kJ/(kg K) for CO2 at 735 K]. Cp is also taken to be temperature independent.
The room height selected would actually result in a hotter ground level than the actual case of Venus. This was due to the choice of a room 100 km tall. The height to 0.1 bar for Venus is only about 65 km, which would give a better temperature match, but the difference is not important to the discussion. A dry adiabatic lapse rate forms as the gas is introduced due to the adiabatic compression of the gas at the lower level. The value of the lapse rate for the present example comes from a NASA derivation at the Planetary Atmospheres Data Node:
http://atmos.nmsu.edu/education_and_outreach/encyclopedia/adiabatic_lapse_rate.htm
The final result for the dry adiabatic lapse rate is:
Γp = -dT/dz|a = g/Cp (1)
In the room at Venus, this results in:
TH = Ttop + H × 8.9/1.14   (2)
where H is the distance down from the top (in km, giving the temperature in K).
Ttop remains at 250 K since it is not compressed (not because it was forced to be at that temperature by heat transfer), and Tbottom=1,031 K due to the adiabatic compression.
Two questions arise:
1) Is this dry adiabatic lapse rate what would actually develop initially after all the gas is introduced?
2) What would happen when the system comes to final equilibrium (however long it takes)?
The gas coming in would initially spread down to the bottom due to a combination of thermal motion and gravity, but the potential energy converted by falling through the room height would add considerable downward velocity, and this added downward velocity would convert to thermal velocity by collisions. Once enough gas filled the room to limit the mean free path (MFP), added gas would tend to stay near the top until additional gas piled on top pushed it downwards, increasingly compressing the gas below with its added mass. The adiabatic compression of the gas below the incoming gas at the top would heat the gas at the bottom to 1,031 K for the selected model. The top temperature would remain at 250 K, and the temperature profile would vary as shown by equation (2). Thus the answer to 1) is yes.
Strong convection currents may or may not be introduced in the room, depending on how fast the gas is introduced. To simplify the model, I assume the gas flows in slowly enough that the currents are not important. It is quite clear that the temperature profile at the end of inflow would be the adiabatic lapse rate, with the top at 250 K and the bottom at 1,031 K. If the final lapse rate went toward zero from thermal conduction, as Arthur postulated, even he admits it would take a very long time. The question now arises: what would cause heat conduction to occur in the presence of an adiabatic lapse rate? That is, why would an initial adiabatic lapse rate tend toward an isothermal profile if there is no radiation forcing? (Note: this lack-of-radiation-forcing assumption is for the room model only.) The cause proposed by Arthur and Science of Doom is based on their understanding of the Zeroth Law of Thermodynamics. They say that if there is a finite lapse rate (i.e., a temperature gradient), there has to be conduction heat transfer. This arises from not considering the difference between temperature and heat. This is discussed in:
http://zonalandeducation.com/mstm/physics/mechanics/energy/heatAndTemperature/heatAndTemperature.html (difference between temperature and heat)
When we consider if there will be heat conduction in the atmosphere, we need to look at potential temperature rather than temperature. This is discussed at: http://en.wikipedia.org/wiki/Potential_temperature
The potential temperature θ is shown to be:
θ = T·(p0/p)^(R/Cp)   (3)
where p0 is a reference pressure (here the surface pressure). The term in (3) is general, and thus valid for Venus, if the appropriate pressures are used.
A good discussion why the potential temperature is appropriate to use rather than the local temperature can be found at:
https://courseware.e-education.psu.edu/simsphere/workbook/ch05.html
This includes the following statements:
- “if we return to the classic conduction laws and our discussion of resistances, we note that heat should be conducted down a temperature gradient. Since we are talking about sensible heat, the appropriate gradient for the conduction of sensible heat is not the temperature but the potential temperature. The potential temperature is simply a temperature normalized for adiabatic compression or expansion”
- “When the environmental lapse rate is exactly dry adiabatic, there is zero variation of potential temperature with height and we say that the atmosphere is in neutral equilibrium. Under these conditions, a parcel of air forced upwards (downwards) will stay where it is once moved, and not tend to sink or rise after released because it will have cooled (warmed) at exactly the same rate as the environment”.
The above material supports the claim that there would be no movement from a dry adiabatic lapse rate toward an isothermal gas in the room model.
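The claim that a dry adiabat has uniform potential temperature can be demonstrated directly from equation (3). The sketch below uses illustrative CO2-like values (specific gas constant and Cp are my assumptions, not measured Venus data):

```python
# Sketch: along a dry adiabat, the potential temperature of eq. (3) is
# constant with height even though the local temperature varies strongly.
R = 8.314 / 0.044   # specific gas constant of CO2, J/(kg K) - assumed
cp = 1140.0         # J/(kg K), as assumed in the text
kappa = R / cp

p0, T0 = 92e5, 735.0  # reference (surface) pressure, Pa, and temperature, K

def theta(T, p):
    """Potential temperature referenced to surface pressure p0."""
    return T * (p0 / p) ** kappa

for p in (92e5, 10e5, 1e5, 0.1e5):
    T = T0 * (p / p0) ** kappa           # local temperature along the adiabat
    print(round(T), round(theta(T, p)))  # theta stays fixed at 735 K
```

The local temperature falls by hundreds of degrees going up the column while θ never moves, which is exactly why no conduction gradient exists in the "sensible heat" sense quoted above.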
If the initial condition were imposed that the lapse rate was below the dry adiabatic lapse rate, it is true that the gas would be very stable against convective mixing due to buoyancy, and the very slow thermal conduction, which would drive the temperature back toward the dry adiabatic lapse rate, could take a very long time (in the actual case, it would be much faster due to the small natural convection currents generally present). However, there is no reason for any lapse rate other than the dry adiabatic lapse rate to initially form, as the problem was posed, so that issue is not even relevant.
The final result of the room model is the fact that a very high ground temperature was produced from a relative cool supply gas due to adiabatic compression of the supply that was introduced at a high altitude. This is actually a consequence of gravitational potential energy being converted to kinetic energy. Once the dry adiabatic lapse rate formed, any small flow up or down stays in temperature equilibrium at all heights, so this is a totally stable situation, and would not tend toward an isothermal situation.
If there were present a sufficient quantity of gas in the present defined room that radiated and absorbed in the temperature range of the model, the temperature would tend toward isothermal, but that was not how the tall room example was defined.
Effect of an Optical Barrier to Sunlight reaching the Ground
The second model I discussed with Science of Doom, Arthur, and others, relates to my suggestion that if an optical barrier prevented any solar energy from transmitting to the ground, but that the energy was absorbed in and heated a thin layer just below the average location of effective outgoing radiation from Venus, in such a way that the heat was transmitted downward through the atmosphere, this could also result in the hot lower atmosphere and surface that is actually observed. The albedo, and the solar heating input due to day and night, and to different latitudes, was selected to match the actual values for Venus, and the radiation out was also selected to match the values for the actual planet.
This problem is much more complicated than the enclosed room case for two reasons.
The first is that it is a dynamic and non-uniform case. The second is due to the fact that the actual atmosphere is used, with radiation absorption and emission, and the presence of clouds. The lower atmosphere and surface of the planet Venus are much hotter than for the Earth, even though the planet does not absorb as much solar energy as the Earth. It is closer to the Sun than Earth, but has a much higher albedo due to a high dense cloud layer composed mostly of sulfuric acid drops. The discussion will not attempt to examine the historical circumstances that led up to the present conditions on Venus, but only look at the effect of the actual present conditions.
In order to examine this model, I had postulated that if all of the solar heating was absorbed near the top of Venus’s atmosphere, but with the day and night and latitude variation, the heat transfer to the atmosphere downward would eventually be mixed through the atmosphere and maintain the adiabatic lapse rate, with the upper atmosphere held to the same temperature as at the present. Since the atmosphere has gases, clouds, and aerosols that absorb and radiate most if not all of the thermal radiation, this is a different problem from the tall room.
However, it appears that the radiation flux levels are relatively small, especially at lower levels, so the issue hinges on the relative amount of forcing by convective flow compared to net radiation up (which does tend to reduce the lapse rate). I use the actual temperature profile as a starting point to see if the model is able to maintain the values. Different starting profiles would complicate the discussion, but if the selected initial profile can be maintained, it is likely all reasonable initial profiles would tend to the same long-term final levels.
The assumption that the solar heating and radiation out all occur in a layer at the top of the atmosphere eliminates positive buoyancy as a mechanism to initially couple the solar energy to the atmosphere. However, the direct thermal heat-transfer with different amounts of heating and cooling at different locations causes some local expansion and contraction of different locations of the top of the atmosphere, and this causes some pressure driven convection to form.
The pressure differences from expansion and contraction set a flow in motion that greatly increases surface heat transfer, and this flow would become turbulent at reasonable induced flow speeds. This increases heat transfer and mixing to larger depths. The portions cooler than the local average atmosphere will be denser than local adiabatic lapse rate values from the average atmosphere, and thus negative buoyancy would cause downward flow at these locations.
As the flow moves downward, it compresses but initially remains slightly cooler than the surroundings, with some mixing and diffusion spreading the cooling to ever-larger volumes. At some level, the downward flow actually passes into a surrounding volume that is slightly cooler, rather than warmer, than the flow, due to the small but finite radiation flux removing energy from the surroundings. At this point the downward flow stream is warming the surroundings. The question arises: how much energy is carried by the convection, and could it easily replace radiated energy, so as to maintain a level near the dry adiabatic lapse rate?
A few numbers need to be shown here to best get an order of magnitude of what is going on. Arthur has already made some of these calculations. The average input energy of 158 W/m² applied to Venus’s atmosphere would take about 100 years to change the average atmospheric temperature by 500 K. This means that to change it even 0.1 K (on average) would take about 7 days. Since the upper atmosphere at low latitudes only takes about 4 days to completely circulate around Venus, the temperature variations from average values would be fairly small if the entire atmosphere were mixed.
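The timescale quoted can be checked to order of magnitude. In the sketch below the atmospheric column mass is estimated from surface pressure divided by gravity (an assumption on my part, not a figure from the article):

```python
# Order-of-magnitude check of the heating timescale quoted above.
p_surf = 92e5   # Pa, Venus surface pressure
g = 8.87        # m/s^2, Venus surface gravity
cp = 1140.0     # J/(kg K), CO2-like specific heat
flux = 158.0    # W/m^2, average absorbed solar flux used in the text

mass_per_area = p_surf / g                 # ~1e6 kg of atmosphere per m^2
energy_500K = mass_per_area * cp * 500.0   # J/m^2 to warm the column by 500 K
years_500K = energy_500K / flux / (365.25 * 86400)
days_01K = years_500K * 365.25 * (0.1 / 500.0)
print(round(years_500K), round(days_01K, 1))
```

This comes out near the ~100 years (for 500 K) and roughly a week (for 0.1 K) quoted above.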
However, for the model proposed, only a very thin layer would be heated and cooled under the absorbing layer. Differences in net radiation flux would also help transfer energy up and down some. This relatively thin layer would thus have much higher temperature variation than the average atmosphere mass would (but still only a few degrees). The pressure variations due to the upper level temperature variations would cause some flow circulation and vertical mixing to occur throughout the atmosphere. The circulating flows may or may not carry enough energy to overcome radiation flux levels to maintain the dry adiabatic lapse rate. Let us look at the near surface flow, and a level of radiation flux of 17 W/m².
What excess temperature and vertical convection speed are needed to carry enough energy to balance that flux level? Assume a temperature excess of only 0.026 K is carried and mixed by convection due to atmospheric circulation. Also assume the local horizontal flow rate near the ground is 1 m/s. If a vertical mixing speed of only 0.01 m/s were available, the heat added would be about 17 W/m², and this would balance the lost energy due to radiation, thus allowing the dry adiabatic lapse rate to be maintained.
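The estimate is a standard convective heat flux, flux = ρ·Cp·w·ΔT. The sketch below fills in the numbers, with the near-surface density taken from the ideal gas law at assumed Venus surface conditions:

```python
# Sketch of the near-surface convective flux estimate, flux = rho*cp*w*dT.
R = 8.314 / 0.044   # specific gas constant of CO2, J/(kg K) - assumed
p, T = 92e5, 735.0  # assumed surface pressure (Pa) and temperature (K)
cp = 1140.0         # J/(kg K)

rho = p / (R * T)   # ideal-gas density near the surface, ~66 kg/m^3
w = 0.01            # m/s, assumed vertical mixing speed
dT = 0.026          # K, assumed excess temperature carried by the flow

flux = rho * cp * w * dT
print(round(rho), round(flux, 1))  # flux lands in the ~17-20 W/m^2 range
```

Even these tiny assumed speeds and temperature excesses deliver a flux of the same order as the 17 W/m² radiative loss.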
This shows how little convective circulation and mixing are needed to carry solar heated atmosphere from high altitudes to lower levels to replace energy lost by radiation flux levels in the atmosphere, and maintain a near dry adiabatic lapse rate. It is the solar radiation that is supplying the input energy, and adiabatic compression that is heating the atmosphere. As long as the lapse rate is held close to the adiabatic level with sufficient convective mixing, it is the temperature at the location in the atmosphere where the effective outgoing radiation is located that sets a temperature on the adiabatic lapse rate curve and adiabatic compression determines the lower atmosphere temperature.
Since the exact details of the heat exchanges are critical to the process, this optically sealed region near the top of the atmosphere is a poorly defined model as it stands, and the question of whether it actually would work as Arthur and Science of Doom question is not resolved, although an argument can be made that there are processes to do the mixing. However, the real atmosphere of Venus absorbs almost all of the solar energy in its upper half, and this much larger initial absorption volume does clearly do the job. I have shown the ground likely has little if any effect on the actual temperature on Venus.
Some Concluding Remarks
The initial cause for this discussion was the question of why the surface of Venus is as hot as it is. A write-up by Steve Goddard implied that the high pressure on Venus was a major factor, and even though some greenhouse gas was needed to trap thermal energy, it was the pressure that was the major contributor. The point was that an adiabatic lapse rate would be present with or without a greenhouse gas, and the major effect of the greenhouse gas was to move the location of outgoing radiation to a high altitude. The outgoing level set a temperature at that altitude, and the ground temperature was just the temperature at the outgoing radiation effective level plus the increase due to adiabatic compression to the ground. The altitude where the effective outgoing radiation occurs is a function of amount and type of greenhouse gases. Steve’s statement is almost valid. If he qualified it to state that enough greenhouse gas is still needed to limit the radiation flux, and keep the outgoing radiation altitude near the top of the atmosphere, he would have been correct. Thus both the amount of atmosphere (and thus pressure and thickness), and amount of greenhouse gases are factors. Any statement that greenhouse gases are not needed if the pressure is high enough is wrong (but this was not what Steve Goddard said).
Another issue that came up was the need for the solar energy to heat the ground in order for the hot surface of Venus to occur. I think I made reasonable arguments that this is not at all true. While there is a small amount of solar heating of the ground, the ground is probably actually slightly cooler than the atmosphere directly above it due to radiation, and so there is no buoyant mixing and no heating of the atmosphere from the ground other than the small radiated contribution. The main part of the solar energy is absorbed directly into the atmosphere and clouds, and is almost certainly the driver for winds and mixing, and the high ground temperature.
The final issue is what would happen if most (say 90%) of the CO2 were replaced by say Argon in Venus’s atmosphere. Three things would happen:
1) The adiabatic lapse rate would greatly increase due to the much lower Cp of Argon.
2) The height of the outgoing radiation would probably decrease, but likely not much due to both the presence of the clouds, and the fact that the density is not linear with altitude, and matching a height with remaining high CO2 level would only drop 10 to 15 km out of the 75 to 80, where outgoing radiation is presently from. If in fact it is the clouds that cause most of the outgoing radiation, there may not be any drop in outgoing level.
3) The radiation flux through the atmosphere would increase, but probably not nearly enough to prevent the atmospheric mixing from maintaining an adiabatic lapse rate. Keep in mind that Venus has 230,000 times the CO2 as Earth. Even 10% of this is 23,000 times the Earth value.
The combination of these factors, especially 1), would probably result in an increase in the ground temperature on Venus.
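Point 1) is easy to quantify with the g/Cp formula from equation (1). The Cp of Argon below is my own input (the monatomic value (5/2)R divided by molar mass), not a figure from the article:

```python
# Sketch of point 1): dry adiabatic lapse rate g/Cp for Argon vs CO2.
g = 8.9          # m/s^2, Venus gravity as used in the text
cp_co2 = 1.14    # kJ/(kg K), value used earlier in the article
cp_ar = 0.520    # kJ/(kg K), Cp of monatomic Argon (assumed: (5/2)R / molar mass)

lapse_co2 = g / cp_co2  # ~7.8 K/km
lapse_ar = g / cp_ar    # roughly double the CO2 value
print(round(lapse_co2, 1), round(lapse_ar, 1))
```

The Argon lapse rate is more than twice the CO2 value, which is why replacing most of the CO2 would, if the outgoing-radiation altitude changed little, steepen the profile and raise the ground temperature.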
LW: “However, it appears that the radiation flux levels are relatively small, especially at lower levels,”
DeWitt and I discussed that here. A characteristic absorption length of about 10 m at the Venus surface seems reasonable, in which case the IR flux induced by the lapse rate is about 8.5 W/m² – not much less than the downward insolation (before we blocked it).
My view continues to be that this is an irreversible flow from hot to cooler, which is a constant loss of free energy (gain of entropy). It is an inevitable consequence of a lapse rate. If there were to be a steady state, this free energy would have to be replaced from an external process, since it is no longer supplied by insolation (as it was). This is equivalent to saying that there must be a heat pump, which would set up a counter flux. Forced convection in the stable lapse rate regime (lapse less than dry adiabat) can do that, but again needs some kind of heat engine to do the forcing. In here I did some rough estimates of power available from diurnal and latitude differences, but they aren’t large enough.
Thank you all for the effort of writing this up and providing some interesting food for thought.
Leonard:
“[…] it is the temperature at the location in the atmosphere where the effective outgoing radiation is located that sets a temperature […]” seems to imply that you assume that the ‘heating layer’ is always at the effective temperature, i.e. the temperature at the height where the effective outgoing radiation is located and further that this is the height as we find it for the actual Venus. Is that correct?
D.
Nick Stokes,
The thin shell at the top is probably not a good model, but look at the actual atmosphere of Venus. Keep in mind that most (>90%) of the solar energy absorbed is absorbed in the upper half of the atmosphere. Look at the actual case and consider what would happen if all of the input was absorbed before it reaches the ground. The day/night and different-latitude variation of direct solar atmospheric heating and cooling is the cause of the high speed upper atmosphere wind, and that is enough by itself to mix the atmosphere all the way to the surface. Since the lapse rate is very close to the dry adiabatic lapse rate, it takes very little energy for flow to be convected downward as well as upward. I showed how little delta T and vertical mixing was needed to compensate for the small radiation flux (your heat flux is even lower than the 17 W/m² I stated). The heat pumping is due to the pressure driven atmospheric circulation caused by temperature differences. I have a pdf (I don’t know how to put it in here) describing so-called tidal circulation, which is what happens on Venus. The tidal circulation does mix vertically as well as horizontally.
dessoli:
Yes to your question. The point of outgoing radiation determining an effective height is in fact approached the following way. The radiation level out needed to match incoming radiation is used to calculate a black body temperature, and the location in the atmosphere where that temperature occurs is then called the effective location of outgoing radiation. The effective location is used as a single point on the environmental lapse rate curve, and the ground temperature is given by the extension of that curve to the ground. If the environmental lapse rate is in fact the dry adiabatic lapse rate, the ground temperature is the temperature at the effective outgoing radiation altitude plus the increase due to adiabatic compression down to the ground, and the ground is hotter. The transport of energy to maintain the energy lost by radiation comes from solar energy heating the upper atmosphere and the excess energy being brought through the entire atmosphere by induced circulation from day/night and latitude variation.
I would like to point out that it is not necessary to compress or expand a gas quickly to obtain effective adiabatic compression or expansion. It is only necessary that the conduction occurring through the edges of the vertically moving gas stream is limited. In large scale convective flows, there is some mixing and thermal conduction around the edges, but if the flow scale is large enough, the edge effects do not necessarily dominate the flow, and adiabatic heating and cooling effects can occur over significant atmospheric distances. I also want to point out that differences in atmospheric density that occur from cooling cause negative buoyancy, which is also a vertical forcing along with positive buoyancy from heating.
This is truly excellent! I’m so pleased! It will take me a little while to digest this properly.
One question.
“There can be no net flow of energy from colder to hotter regions. And that means, if the atmosphere below Leonard’s “opaque enclosure” is at a higher temperature than any point on the enclosure, heat must be flowing out of the atmosphere, not inward. The enclosure, no matter the distribution of temperatures on its surface, cannot drive a temperature below it that is any higher than the highest temperature on the enclosure itself.”
Thought experiment… We have a box containing a heat pump, like a refrigerator in reverse, that pumps heat from the room-temperature exterior of the box to the cavity inside. So long as power is supplied to the pump, the inside will settle at a temperature warmer than the room.
The heat pump is powered by a thermally isolated heat engine, also inside the box, that is itself powered by two small patches on the outside of the box, one maintained at room temperature, and the other at close to absolute zero. (Say, a fast flow of liquid helium pumped past it. We assume very high thermal conductivity/capacity at both patches.)
The only energy flows across the outer boundary of the box are thermal. The outside of the box is at room temperature or absolute zero. The inside of the box is warmer than room temperature.
Clearly, this contradicts “The enclosure, no matter the distribution of temperatures on its surface, cannot drive a temperature below it that is any higher than the highest temperature on the enclosure itself.” So what goes wrong?
Or is that the wrong question?
If I understand your question – the issue is the introduction of mechanical and electrical devices into the situation. Those *are* capable of subverting the thermal constraints – obviously, we’re able to create extremely high temperatures (billions of degrees) in particle accelerators powered by electrical devices that in turn receive their power via steam turbines at no more than a few hundred degrees. So yes, it’s possible – but under very restricted conditions.
The strictest form of the second law is not the thermal one, but the entropy one – the total entropy of the universe always increases. If you create a mechanical or electrical or chemical process that does something of the sort mentioned, it is reducing the entropy of the parcel of energy concerned, and therefore there must be an accompanying *increase* (the same or larger in total) in the entropy of an accompanying parcel of energy.
In the case of a steam turbine, or a heat pump, you are splitting the input thermal energy into two parts – one which has low or zero entropy (the portion that goes to the higher temperature room or the portion that becomes electrical energy) and another with high entropy (the portion that ends up as low temperature “waste heat”).
So, under the conditions of such a mechanical engine, a device that can split input heat into low-entropy and high-entropy components, then yes, the strict “downward” flow of energy can be at least briefly averted. In a sense you are pushing a bunch more energy way down, in order to bump a bit of energy higher up (lower entropy, higher temperature).
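The entropy bookkeeping for the box can be sketched with ideal Carnot devices. The room and cryogenic patch temperatures roughly match the thought experiment; the 320 K cavity temperature is my own illustrative choice:

```python
# Sketch: entropy accounting for the box thought experiment with ideal (Carnot)
# devices. Temperatures are illustrative; the cavity value is an assumption.
T_room, T_cryo, T_inside = 300.0, 4.0, 320.0  # K

Q_h = 100.0                              # J drawn by the engine from the room patch
W = Q_h * (1 - T_cryo / T_room)          # Carnot work output of the engine
Q_c = Q_h - W                            # waste heat rejected at the cryogenic patch

COP = T_inside / (T_inside - T_room)     # Carnot heat-pump coefficient of performance
Q_delivered = W * COP                    # heat delivered into the warm cavity
Q_source = Q_delivered - W               # heat the pump draws from the room

dS = (-Q_h / T_room + Q_c / T_cryo       # engine: hot reservoir loses, cold gains
      - Q_source / T_room + Q_delivered / T_inside)  # pump: room loses, cavity gains
print(f"Total entropy change: {dS:.6f} J/K (zero, up to rounding, for ideal devices)")
```

For reversible devices the total comes out to zero, and any real irreversibility only pushes it positive: the second law is satisfied even though heat ends up above room temperature, because a much larger parcel of energy was dumped down to 4 K.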
Humans have developed many devices to do this sort of thing. Life itself also is in essence a bucking of entropy’s relentless upward trend (the process of photosynthesis is the essential engine there). But there are no non-living natural processes that behave in that fashion, that I’m aware of. At least not within traditional planetary atmospheres…
It is a good question though.
Thanks. That’s clear enough.
What is the fundamental difference between “mechanical and electrical devices” and atmosphere that makes one capable of subverting thermal constraints and the other not?
A heat pump is simply a means by which a gas/fluid cycles round a loop in which it is compressed and expanded, condensed and evaporated, and exchanges heat with warm and cold reservoirs. If this could be achieved by non-mechanical means…
It’s a good question. I think an analogous question would be, what’s the difference between living and non-living matter? It’s not something with a straightforward answer, but I believe what’s required is some degree of internal organization and differentiation and partitioning (the various pipes and insulation of a heat pump etc). The sort of organization and differentiation needed for an efficient engine like that doesn’t seem to arise in the non-living natural world, at least not readily.
Thinking about it, there are actually at least two components of Earth’s system that act a bit as (inefficient) mechanical devices of this sort: wind, and the water cycle. A small fraction of incoming solar energy turns into pressure differences that drive winds, and those in turn drive waves. A considerably larger fraction of incoming solar energy evaporates water and brings that vapor pretty high up; the falling rain, the rivers and high lakes, the waves and the wind itself can be used as mechanical power sources, and we use some of this already of course.
But the second law (strict entropy form) still rules – while some incoming energy is diverted into mechanical power, the bulk of the energy has to go to lower temperature thermal so that total entropy increases. What Leonard is proposing would require an atmosphere to act not as such an inefficient engine that sends most of its energy to low-temperature waste, but as a tremendously super-efficient engine that somehow keeps powering lower (and exponentially denser!) layers of the atmosphere up higher and higher on the temperature and internal energy curve. That’s just not possible.
“If this could be achieved by non-mechanical means…”
I suppose if you had a big moon, expanding the space on its side of a planet, with the atmosphere being compressed on the far side, it could kinda do that… but the gains on the compression would be exactly equal to the losses on expansion.
I don’t know if atmospheric tides could perhaps achieve some form of mixing of higher energy gas at higher altitudes with lower altitudes; I would assume not…
I suppose with the likes of Venus, where you have the direct interaction of the solar wind on its upper atmosphere, this could also cause a planetary atmospheric pressure differential, with the leeward side of the planet at a lower pressure vs altitude than the bow-shocked side. I’m sure this would have been documented if it had a substantial effect. And it would also basically work the same as the moon, with gains equaling losses.
If you’re right that there isn’t enough (negative) entropy to drive the formation of an adiabatic lapse rate, then surely it would mean there wasn’t enough to drive wind and evaporation? The wind moves huge masses of air halfway across the globe. Evaporation lifts trillions of tons of water high into the air each day. They all sound equally impressive.
There is also the thing that convection does not have to recreate the adiabatic lapse rate anew each day. If it adds a little at a time, building the circulating loops up in size, the resulting adiabatic lapse is stable against an immediate collapse. (Especially if you ignore radiation, as in the thought experiment here.) Like rainwater filling a lake in the mountains – the potential energy is built up (and maintained) gradually.
And in many cases there are counterweights. Compressing air in one place requires letting it expand in another, and the entropy released by the latter can supply a large proportion of the entropy reduction inherent in the former. If your crane has a counterweight on the other side of the pivot, you can lift a ton into the air with one hand. Or to put it another way, the high pressure at ground level compared to altitude is, when considered alone, another massive embodiment of lowered entropy – do we need a constantly supplied external source of negative entropy to maintain it? To push air around a cycle (like a Hadley cell), all the internal forces of compression and expansion balance, and you only have to worry about the dissipative ones.
So it isn’t immediately obvious to me that tremendous efficiency is required. For Venus, with its super-rotation, perhaps. (Although in that case I wonder which is cause and which effect. Does super-rotation merely drive the heat pump faster?) But generally speaking, the forces inherent in the weather seem large enough to me to accomplish the task.
“If the initial condition was imposed- that the lapse rate was below the dry adiabatic lapse rate, it is true that the gas would be very stable from convective mixing due to buoyancy, and the very slow thermal conduction, which would drive the temperature back toward the dry adiabatic lapse rate, could take a very long time (in the actual case, it would be much faster due to even small natural convection currents generally present).”–From Leonard Weinstein’s portion of the post, in the discussion of the “tall room” thought experiment.
It’s likely that I’m confused about the adiabatic lapse rate and why it exists, but I don’t understand why, in the example given, the lapse rate would eventually shift towards the adiabatic lapse rate. That would involve an increase in the lapse rate, in seeming defiance of the 2nd law of thermodynamics.
I would have thought that, in the conditions given (and assuming that the gas column is not heated from below), the entire column would eventually become isothermal due to conduction between colliding molecules. There doesn’t seem to be any mechanism that would lead to an increased temperature gradient. Perhaps I’m missing something. If so, though, what? Is there some portion of the system whose entropy is increasing, to counteract the decrease in entropy brought about by the approach to the adiabatic lapse rate?
I agree. If you only have a single heat reservoir, then conduction would eventually render the air column isothermal. Potential temperature only applies when air masses are moving and carrying the heat vertically.
(Although I am not quite sure – conduction in a gas is by diffusion, which is still a mass flow on a microscopic scale. Does this count?)
Where it gets interesting is when you have two very tall rooms stood next to each other, one heated at the top to a different temperature to the other, and the two rooms connected at top and bottom by narrow pipes.
With the pipes closed off, two rooms approach isothermal, but at different temperatures, and hence different pressures. (Even if you specify the same 0.1 atm at the top, the pressures at the bottom are unequal.)
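The unequal bottom pressures follow from the isothermal barometric relation; the room height and the two temperatures here are illustrative assumptions (only the 0.1 atm top pressure is from the comment):

```python
import math

# Sketch: hydrostatic bottom pressure of two isothermal air columns sharing the
# same pressure at the top but sitting at different temperatures. The 10 km
# height and both temperatures are illustrative assumptions.
g = 9.8          # m/s^2
R = 287.0        # J/(kg K), specific gas constant for dry air
H = 10_000.0     # m, assumed height of the "tall rooms"
p_top = 0.1      # atm, as specified in the comment

bottoms = {}
for T in (250.0, 300.0):
    # integrating dp/dz = -rho*g downward through an isothermal column
    bottoms[T] = p_top * math.exp(g * H / (R * T))
    print(f"T = {T:.0f} K -> bottom pressure = {bottoms[T]:.3f} atm")
```

The colder column is denser, so its bottom pressure is distinctly higher; that pressure difference is what drives the circulation once the pipes are opened.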
So when you open the valves on the pipes, air will circulate. Heated from the top in one room, it rises. Cooled from the top in the other, it descends. So long as the temperature difference at the tops is maintained, the air will circulate, rising and descending, compressed and expanded.
Is there any limit to how small a temperature difference is required to maintain circulation? (However slow.) Under the conditions stated, with no radiation and perfectly insulated walls and floor, I can only see one: that the mass rate of flow has to be faster than diffusive conduction. (When heat will diffuse through stationary gas from one heat reservoir to the other.)
Given even such a slow circulation, would the adiabatic lapse rate still apply?
My gut tells me that the cold equilibrium will dominate. I think my gut might be wrong, though.
Similar thought experiment: 1 Pipe with a top which is split in half, one half maintained at a colder temperature than the other half.
At the very top of the pipe, there will be a temperature gradient: warmest at the far end of the warm half, coldest at the far end of the cold half. I think that much is straightforward.
However, at the bottom of the pipe, I feel that the temperature has to be uniform: if there was a colder side, it would inexorably flow/conduct into the warm side.
If the whole pipe was isothermally equal to the coldest air at the cold side of the top, except for a tiny, ramp-shaped warmish layer at the very top with a horizontal gradient, and a fast vertical gradient*, it seems to me that the system could be stable.
I think the only way to have constant convection is to have the hotter plate at a lower altitude than the colder plate. Not only that, but the hot-cold differential would have to be greater than the adiabat; otherwise you just get a constant, non-convecting gradient.
-M
*Okay, when I start thinking about “how fast is fast”, my theory might run into problems: if the vertical layer is more than infinitesimal, then it isn’t obvious that it can’t be equal to the whole height of the pipe. And yet, I feel like it should be greater than negligible, and yet very small. But in that very small vertical space, it seems like there would be some amount of horizontal convection/conduction… hrm… complicated thought experiments.
We should do some computational modeling of all these problems to better explore our intuitions… pull up a text book that talks about the most basic of climate models based on the equations of state, and make a model with a bunch of boxes…
There is a mechanism for restoring the lapse rate, which I’ve talked about here. When the lapse rate is below the adiabat, the air is stable to convection. That means that it actually takes work to move air up or down – an amount proportional to the difference between actual lapse rate and adiabat. That work does something – it pumps heat against the gradient. The stability reduces, and the heat pump weakens, as the adiabat is approached.
However, it takes energy, which in Nature comes from the general heat engine processes of the atmosphere – heat flowing toward the poles, for example. If there’s no motion, there’s no pump.
“The final issue is what would happen if most (say 90%) of the CO2 were replaced by say Argon in Venus’s atmosphere. ”
Leonard: what if you were to do the thought experiment of replacing 90%+ of the CO2 with a magic-gas replacement, with exactly equivalent properties to CO2 except for a total lack of absorption of longwave radiation. What would happen then?
My intuition is that temperatures would have to drop, a lot. I agree that Venus is an odd case in that solar radiation is being absorbed in the mid-atmosphere rather than the surface, but… hrm… I need to think about this.
I remember having an argument on RealClimate ages ago with Fred Staples about what would happen if an atmosphere were replaced with its radiatively transparent doppelganger, but given his lack of sanity it wasn’t really a very fruitful discussion. I ran into many of the same issues that people are struggling with here, with half of me wanting the adiabat to remain dominant and the other half arguing for an isothermal atmosphere, and a third half trying to think about the diurnal cycle and concluding that there would be a weird thing where you’d get daytime surface heating which would set the adiabat at high noon*, and then in the evening the surface would cool due to radiation along with a tiny layer of atmosphere adjacent to the surface that would cool by conduction… leaving the rest of the nighttime atmosphere at its old daytime adiabat.
But any argument about atmospheric adiabats for magic non-radiative atmospheres would run into boundary problems: what happens at high altitudes? In the earth system, we have a tropopause, above which temperatures rise because of O3 absorption of incoming radiation.** But in the magic atmosphere, I can’t come up with a reason for a tropopause to exist. So the atmosphere should keep cooling at the adiabat until… well, until it can’t cool at the adiabat any longer. Depending on the exact profile, perhaps there’s an altitude at which the pressure and temperature of the magic gas would be appropriate for it to liquify, or perhaps it just asymptotes towards absolute zero???
Anyway, those are my, not completely formed thoughts…
-M
*Technically, the adiabat would be set by the highest possible surface temperature, which would be some time after high noon as determined by the heat capacity of the surface.
**Hey… you need to make sure your theory is compatible with a system that can switch directions at the tropopause, thermopause, etc… the O3 layer is warmer at higher altitudes than lower altitudes – of course, part of this behavior may require CO2 to be around to turn hot O3 into cooling radiation… but this kind of suggests that a pure O3 atmosphere would be hottest at some high altitude, and cool both below and above that point…
Leonard,
It is not physically possible for the Venusian surface to be colder than the atmosphere directly above it. That would mean that the surface would absorb not only solar radiation, but also excess radiation from the atmosphere and sensible heat transfer from the atmosphere. Where does all this heat then go? I gave you my argument on this in the other thread. I have seen nothing but hand waving in reply. Show me the numbers that justify your conclusion. They certainly aren’t in your Figure 2 where the surface is clearly warmer than the atmosphere above it.
I did a multilayer atmosphere calculation using absorption data from SpectralCalc. If you replaced half the CO2 with a transparent gas (VMR reduced from 0.96 CO2 to 0.48), the surface temperature would have to drop by about 9 K to restore radiative balance. The net radiative flux from the surface to the atmosphere is about 5 W/m2. The rest of the 17 W/m2 of solar radiation that isn’t reflected must be transferred to the atmosphere by sensible heat transfer. Both these facts require the surface to be warmer than the atmosphere above it.
So yes, the thickness of the atmosphere is important. If you think about this in sensitivity per doubling, even if you reduced the total amount of CO2 in the Venusian atmosphere to the level in the Earth’s atmosphere for a VMR of about 0.000004, the surface temperature of Venus would still be several hundred degrees hotter than the surface of the Earth.
This implies that if one started out with a perfectly transparent atmosphere with a surface pressure of 93 bar, the rate of change of surface temperature with the addition of even a small amount of CO2 would be very large. The behavior would be quite different if the mass of the atmosphere weren’t constant. If one removed half the CO2 from the Venusian atmosphere and didn’t replace it with a transparent gas, the surface temperature would drop a lot more than 9 K, although I haven’t actually done that calculation.
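As a crude way to illustrate the scale of the “thick atmosphere” effect (this is the textbook N-layer grey model, not DeWitt’s line-by-line SpectralCalc calculation, and both temperatures are round Venus-like figures):

```python
# Sketch: the classic grey-atmosphere result T_surface^4 = (N + 1) * T_eff^4,
# inverted to ask how many opaque grey layers a Venus-like surface temperature
# corresponds to. Both temperatures are illustrative round figures.
T_eff = 230.0    # K, effective emission temperature (assumed)
T_surf = 735.0   # K, Venus-like surface temperature (assumed)

N_layers = (T_surf / T_eff) ** 4 - 1
print(f"Equivalent number of opaque grey layers: {N_layers:.0f}")
```

A hundred-odd equivalent opaque layers is why halving the CO2 once only shaves a few kelvin off: the sensitivity per doubling is modest, but Venus sits a great many doublings above Earth.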
I hesitate to comment because (1) I’m a non-scientist and (2) I’ll have to read all comments through several more times before I come to my own conclusion.
But at this stage in my struggle with the material Weinstein’s explanation strikes me as more compelling. The reason is that his opponents make statements such as that the thermodynamic laws may be more fundamental than gravity, whereas to this layman it more plausible that the zeroth and second laws are merely the results of statistics applied to large numbers, so analysis at the single-molecule level should guide our determination of their applicability.
Particularly troubling is his opponents’ basing dismissal of gravitational deceleration on the shortness of a molecule’s mean free path. True, the amount of deceleration sustained in a few nanometers is small. But the collision that thereafter occurs is with a molecule that on average has similarly suffered deceleration to reach the collision height. And there are many, many mean free paths between the bottom and the top of the room. They are small individually, but the minuscule velocity losses add up over a large number indeed.
Or that’s how this layman sees it. Now I’ll shut up and read it all again.
Since I haven’t read anything I’ve been able to recognize as a satisfactory answer to my criticism above of the short-mean-free-path argument for uniform tall-room temperature, I’ll attempt my own.
True, you can’t ignore gravitational acceleration just because it acts over a short mean free path; the molecular travel times are short, but there are a lot of them. Therefore, gravity implies a significant impulse (force times time) collectively on the molecules.
But, since no bulk downward momentum results (once steady state is reached), the molecules must on average be subjected to a contrary upward force–and they are, since the concentration gradient results in their encountering more collisions from below than from above. It can probably be shown that this contrary force is the same per molecule at high altitudes as low–and that it exactly counters gravity. If so, gravity is a wash as far as affecting molecular velocities: it doesn’t make temperatures lower at higher altitudes.
Another way of saying that is that the mean impulse applied by gravity between collisions is the mean (net-upward) impulse applied by a collision and that this is true independently of height.
That’s the plausibility argument. I haven’t tried to put math to it.
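One way to put rough math to it, for the isothermal case (the molecular mass and temperature below are assumed Earth-air-like values):

```python
import math

# Sketch: in an isothermal column, n(z) = n0 * exp(-m*g*z / (k*T)), and the net
# upward force per molecule from the pressure gradient, -(1/n) dp/dz with
# p = n*k*T, works out to exactly m*g. Checked here by finite differences.
k = 1.380649e-23   # J/K, Boltzmann constant
m = 4.8e-26        # kg, roughly the mass of an N2/O2 molecule (assumed)
g = 9.8            # m/s^2
T = 288.0          # K, assumed uniform temperature

H = k * T / (m * g)    # scale height of the column
z, dz = 5000.0, 1.0    # evaluate the gradient at 5 km with a 1 m step

def n(height):         # number density with arbitrary n0 = 1
    return math.exp(-height / H)

dpdz = (n(z + dz) - n(z - dz)) / (2 * dz) * k * T   # finite-difference dp/dz
force_per_molecule = -dpdz / n(z)
print(f"Upward force per molecule: {force_per_molecule:.3e} N vs m*g = {m * g:.3e} N")
```

To finite-difference accuracy the upward collisional force per molecule equals m·g at every height, which is the “wash” argued for above: gravity biases nothing once the concentration gradient is established.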
Sam and Nullius,
Nick is correct. Also please reread my write up, as I addressed these issues. Especially note:
https://courseware.e-education.psu.edu/simsphere/workbook/ch05.html
The only question is whether the induced convection mixing from day/night and latitude differences in heating is large enough to do the heat pumping needed. It is on Venus. The adiabatic lapse rate is the no-conduction heat transfer condition, not constant temperature. Even diffusion causes the lapse rate to move toward the adiabatic lapse rate. The pressure gradient due to gravity causes rising molecules or globs of gas to cool as they rise and heat as they fall (movement due to diffusion or convection), and this is exactly the cause of the adiabatic temperature lapse rate. The only reason any heat pumping is needed at all is to replace energy lost from a small radiation flux upwards, and thus the pumping from convective mixing maintains the adiabatic lapse rate.
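For reference, the dry adiabatic lapse rate itself is just g/cp. A quick sketch with Venus-like numbers (the cp of CO2 varies strongly with temperature, so the value used here is only a representative assumption):

```python
# Sketch: dry adiabatic lapse rate Gamma = g / cp, with Venus gravity and an
# assumed representative cp for CO2 (cp rises considerably at Venus surface
# temperatures, which lowers the real lapse rate).
g_venus = 8.87    # m/s^2, surface gravity of Venus
cp_co2 = 850.0    # J/(kg K), assumed representative value for CO2

gamma = g_venus / cp_co2 * 1000.0   # convert K/m to K/km
print(f"Dry adiabatic lapse rate: {gamma:.1f} K/km")
```

With this round cp the result is about 10 K/km; the observed mean Venus lapse rate is somewhat lower, largely because cp grows with temperature in the hot lower atmosphere.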
I agree with Nick, and I definitely agree more with you over SoD and Arthur, with a few tweaks. (Although with the greatest respect to you all.)
Have you also considered temperature inversions and the lapse rate of the stratosphere? Are they stable?
Yet another real-world temperature gradient (in addition to the temperature gradient above the tropopause) that should be explained by whatever theory is arrived at:
The ocean. Warm at the surface, cold at the depths. A lecture I found (http://earth.geology.yale.edu/~avf5/teaching/ResourcesGG535/Lecture5.PotTemp.Thermodyn.LapseRate.pdf) suggests that below about 4500 m, potential temperature is fairly constant, and the ocean warms at the adiabat. However, above this altitude, temperature warms quickly. Why? I would argue it is because there is a heat source at the top of the ocean, and that the temperature of the top of the ocean does not propagate downwards at the adiabat: rather, what we see at 4500 m is (and I’m wildly hypothesizing here) the altitude where the adiabat determined by the temperature at the ocean floor and the temperature that is convected/conducted from the surface are equal: above this height, ocean water is getting its heat from the surface. Below this height, ocean water is getting its heat from the ocean floor.
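The deep-ocean adiabat mentioned above can be sketched from Γ = α·g·T/cp; the thermal expansion coefficient and heat capacity used here are rough assumed values for cold seawater:

```python
# Sketch: adiabatic temperature gradient in seawater, Gamma = alpha * g * T / cp.
# alpha and cp are rough deep-ocean assumptions, not measured values.
alpha = 1.5e-4   # 1/K, assumed thermal expansion coefficient of cold seawater
g = 9.8          # m/s^2
T = 275.0        # K, assumed deep-water temperature
cp = 3990.0      # J/(kg K), approximate heat capacity of seawater

gamma = alpha * g * T / cp * 1000.0   # K/km
print(f"Seawater adiabatic gradient: {gamma:.2f} K/km")
```

The result is on the order of 0.1 K/km, tiny next to air’s ~10 K/km, which is why the ocean’s adiabatic warming only shows up clearly in the very deepest trenches.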
Now, of course, the ocean is complicated because the Arctic and the Antarctic are busy pumping cold water down into the depths… and here I think we approach Nullius in Verba’s model: what happens when you have a cold and a warm top to your system? And given that Venus has poles, and a nightside, and based on the ocean example, I might suggest that a top warmed system would mostly be isothermal with the coldest part of the opaque enclosure, with a temperature gradient below the warm parts of the enclosure but cooling with depth…
Thoughts?
-M
Water is not particularly compressible, so you don’t get quite the same temperature gradient, and of course water is most dense above freezing point (ice floats).
But even then, as you note, you get the thermohaline circulation driven by the large scale ‘convection’ due to hot tropical and cold polar waters, with all the heat driving it coming from on top.
An interesting example, and one I hadn’t considered before.
My thanks!
DeWitt,
Venus does have some solar radiation reaching the ground and being absorbed. The exact level is not known but the average is almost certainly less than 17 W/m2. I would guess about half of this is absorbed. The hot ground radiates a huge amount of radiation, but the hot atmosphere radiates a nearly equal back radiation, so the net radiation flux is small (a few watts). For this reason, the absorbed solar energy is a very small fraction of the energies involved. If the radiated flux is more than the absorbed solar energy, the ground will be cooler than the atmosphere above it, and the atmosphere will conduct heat down to make up the difference. If the radiated flux is less than solar plus back radiated, the ground would be slightly warm, and would conduct some to the moving atmosphere above it. The ground CAN be cooler than the atmosphere.
If there were no solar energy reaching the ground, the ground would clearly be cooler than the atmosphere directly above it, but again a small convective heat transfer would replace radiated net flux up (again just a few watts).
M,
Please re-read my write-up. If some magic gas replaced CO2 with the exact same other properties, the height of outgoing radiation would change, depending on how much is replaced and whether clouds are an issue. In general, even replacing most but not all of the CO2 would lower the resulting temperature DUE TO THE LOWER HEIGHT OF OUTGOING RADIATION, NOT DUE TO LOWERING THE LAPSE RATE.
DeWitt,
I want to expand on the cold ground issue. The radiation from the ground is distinct from the absorption and re-radiation in the atmosphere. It is from a solid body, probably with a distinct emissivity which may be close to, but not exactly, black body. The atmosphere-to-atmosphere radiation is in specific bands of CO2 and other gases, and likely has some small wavelength regions with greater and lesser absorption. It is easy to see that back radiation would be smaller if the ground is hotter (from absorbed solar radiation), but it is possible and almost certain that even if the ground is slightly cooler than the atmosphere, it could still have slightly more forward radiation due to the added wavelengths. In that case, convective heat transfer is needed to make up the balance. Since the difference is likely very small, the local reversal should be possible. The ground roughness could cause enough turbulence to overcome the slight inversion with down-mixing.
Using Leonard’s numbers, let’s imagine digging a hole 1 km square and 100 km deep, lined with perfect insulation. Instead of digging the hole on Venus, let’s imagine that it is located on Earth.
Assume that the hole starts at sea level at a place where the temperature is a chilly 250 K. Gravity will look after filling the hole with air. Convection with some (minor) help from conduction and radiation will establish an equilibrium state.
The temperature at the bottom of the hole will be 250 K plus (9.8 K/km times 100 km) = 1,230 Kelvin.
You may argue that this estimate is too high because Earth’s atmosphere has water vapor that reduces the adiabatic lapse rate to as low as 5 K/km. However, once the temperature exceeds 300 K the effect of water vapor on the lapse rate fades rapidly so even with very moist air the temperature at the bottom will exceed 1,150 Kelvin.
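The arithmetic above, as a one-line sketch (using the standard dry adiabatic lapse rate for Earth air):

```python
# Sketch of the arithmetic in the comment above: the dry adiabat extended down
# a perfectly insulated 100 km hole whose rim sits at 250 K.
T_top = 250.0   # K, surface temperature at the rim
lapse = 9.8     # K/km, dry adiabatic lapse rate for Earth air
depth = 100.0   # km

T_bottom = T_top + lapse * depth
print(f"Temperature at the bottom of the hole: {T_bottom:.0f} K")
```

Whether convection would actually establish and maintain that profile in a sealed hole is exactly what the replies below dispute.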
gallopingcamel,
That would be correct except that the small amount of water vapor and CO2 are sufficient quantities of greenhouse gas so they would radiate excess heat up and drop the lapse rate. In addition, the ground itself is not a perfect mirror or of small heat capacity, which would also tend to radiate and absorb, and drop the excess temperature more. The initial temperature would be high, but after a while it would drop much lower. There would be no internal large scale vertical circulation to maintain adiabatic lapse rate even if no losses were encountered.
M,
The compressibility of water is extremely low so one can ignore the effect of pressure on density. Temperature on the other hand has a significant effect causing the density to rise as temperature falls.
The effect of gravity is to cause the denser (=cooler) water to descend. This effect causes the observed temperature gradients versus depth in our oceans. Unlike most liquids, water reaches a maximum density at 4 degrees Celsius and then expands until it freezes at 0 degrees Celsius.
As a consequence of water’s anomalous behavior, the temperature at depths below 1,000 meters is ~4 degrees Celsius.
If water behaved like most liquids, ice would sink and our oceans would fill up with ice. I wonder how that would affect global climate! Probably Leonard Weinstein, DeWitt Payne or SOD can tell us.
At great depth, water temperatures do drop below 4 degrees C, and at even greater depths water temperatures do increase adiabatically. (mind you, saline water has different maximum density/temperature than fresh water)
e.g. “Bottom temperature varied between 1.16°C at station 3 in the Kermadec Trench at 6007 m and, with the evidence of slight adiabatic heating with depth, 1.78°C at 9729 m (station 8)” (http://rspb.royalsocietypublishing.org/content/276/1659/1037.long).
Or “If we are studying intermediate layers of the ocean, say at depths near a kilometer, we cannot ignore compressibility.” (http://oceanworld.tamu.edu/resources/ocng_textbook/chapter06/chapter06_05.htm)
-M
Leonard Weinstein,
This is a “Thought Experiment” so it is OK to assume perfectly insulated, perfectly reflecting walls to the pit, like an ideal Dewar flask.
You’d have to assume absolutely zero emissivity walls. Well, OK, you can, but I don’t think there’s any real solid like that.
It’s a balance. Motion pumps heat down; radiation and conduction bring it up. If radiation is zero, conduction very small, then a little motion is all it needs to establish the lapse rate. But even that little motion needs a small energy source to keep it going.
Incidentally, the geothermal gradient is about 3x the lapse rate.
Nick,
Apparently I have not convinced you that conduction is only zero for adiabatic, not constant temperature. Why is that still so given my discussion on the zeroth law?
Oops, you noticed! Yes, I’ve become agnostic on the question of conduction. Your argument is persuasive, but it seems Arthur has consensus science on his side. I haven’t resolved it in my mind because I think in any conceivable real situation radiative transfer will always outweigh molecular conduction – even if it’s argon there will always be dust etc. And radiative transport clearly is conduction-like independent of gravity.
So I think in any realistic situation the lapse rate will be determined by the balance of motion transport (turbulent convection) and radiative transfer (and also phase change, if any). If motion subsides, IR will ensure lapse close to isothermal.
Maximum density of water is at 4 C only for pure water. The salinity (~35) of sea water causes the maximum density to be below the freezing point, which is also depressed by the salinity. Ice will still be less dense than sea water.
http://www.es.flinders.edu.au/~mattom/IntroOc/lecture03.html
The temperature profile of the ocean tends to be stable on average because the heat conducted downward by eddy diffusion is balanced by the cold water forced upward by the cold water that descends at the poles ( http://oceanworld.tamu.edu/resources/ocng_textbook/chapter08/chapter08_05.htm ). That reference may also be applicable to the general problem of heating from the top.
Leonard,
I’ve done the calculations. The surface of Venus sees an almost perfect black body spectrum from the atmosphere. There is no window for surface radiation to escape directly to much higher altitudes. The surface cannot be cooler than the atmosphere above it because there is no mechanism to remove the heat to maintain a lower surface temperature. You only get that sort of temperature inversion on Earth on clear nights with low humidity where the ground can radiate on the order of 100 W/m2 more than it receives from the atmosphere.
Changing the VMR for CO2 at constant surface pressure from 0.96 to 0.48 changes the net radiative flux at the surface by less than 0.5 W/m2. A reduction of surface temperature of 9 K is necessary to restore the same net flux.
From this reference ( http://www.planetary.brown.edu/pdfs/542.pdf Table 1.), the surface albedo of Venus is estimated to be in the range 0.02 to 0.12. That means that at least 88% of the 17 W/m2 average of incident solar radiation at the surface is absorbed by the surface. That heat has to go somewhere.
Arthur Smith,
Wouldn’t an isothermal atmosphere violate equipartition? I haven’t tried to do the calculation, but off the top of my head, it would seem that an isothermal atmosphere would have too much gravitational potential energy compared to kinetic energy.
No – the Boltzmann distribution guarantees that almost all the atmosphere is at low altitude. The “high gravitational potential energy” portion of the atmosphere is a very small (exponentially small – see the equations above) fraction of the total.
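Arthur’s point is easy to check numerically with the barometric form of the Boltzmann distribution. A quick Python sketch, with Earth-like illustrative numbers and an isothermal column assumed purely for simplicity:

```python
import math

# Illustrative Earth-like values (assumptions for the sketch, not measurements)
k = 1.380649e-23      # Boltzmann constant, J/K
m = 4.81e-26          # mean mass of an "air" molecule (~29 amu), kg
g = 9.81              # surface gravity, m/s^2
T = 255.0             # K, isothermal column assumed

H = k * T / (m * g)   # scale height, m

def fraction_above(h):
    """Fraction of an isothermal column lying above altitude h (Boltzmann weight)."""
    return math.exp(-h / H)

print(f"scale height ~ {H/1000:.1f} km")
print(f"fraction of atmosphere above 50 km ~ {fraction_above(50e3):.1e}")
```

The high-potential-energy part of the column really is exponentially small: with a scale height around 7.5 km, only about a tenth of a percent of the gas sits above 50 km.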
There are lots of details here to ponder, but I’d like to go back to the opaque enclosure thought experiment.
Let’s imagine what happens above the enclosure. If the enclosure acts as a black body with the same albedo as the surface of the planet, then, effectively, what happens above the enclosure is the same as if the surface had been raised to that altitude and the atmosphere below that altitude had been removed. The short wave energy reflected remains the same. (Let’s ignore the fact that the diameter of the planet is effectively a little larger than it used to be.) At equilibrium, LW energy from the enclosure would have to equal the SW energy not reflected. Whatever amount of radiative energy had been emitted from below the enclosure is now emitted at the enclosure.
There is less atmosphere above the enclosure to recapture or otherwise delay the outward progress of the energy than there had been with the depth of atmosphere below it included. With less retention of energy (and less atmosphere), the energy within the planet as a whole will move closer to the state it would be in with no atmosphere than it had been before. At this point I am going to jump ahead and say that I think the surface temperature would be less with an enclosure than it would be without.
From my post above: “At equilibrium, LW energy from the enclosure would have to equal SW energy not reflected.”
That’s not correct. See the PVC sphere thought experiment in an earlier post. Nonetheless, the average point of emission of LW has been raised in altitude.
Perhaps this is better.
At equilibrium, LW energy from the enclosure would have to equal SW energy not reflected, plus whatever LW energy was absorbed and re-emitted above it to become DLR.
Chris G,
The thin enclosure was not a good choice for a model. It was used to show that it was not necessary for solar energy to reach the ground to have the actual temperature on Venus. Too many unrealistic restrictions had to be placed on it to come close to getting the energy balance desired. I want to leave that model for a better version. A better model, one that satisfies that energy balance requirement and that is very close to the real Venus, would assume all solar energy is absorbed in the atmosphere before it reaches the ground, but with the absorption spread through most of the atmosphere. This version of the model actually shows what happens (winds, mixing, etc.) and only has a different effect due to solar energy not reaching the ground. This would result in the ground being slightly cooler than the gas immediately above it, and the lapse rate would probably be a bit lower near the ground, but the overall result would probably be very close to present levels.
I would like to note here that, having discussed this at length with Leonard on the previous thread, I don’t intend to try to convince him here on the points he is wrong. I’ve made my main explanations above, and suggest referring to what I wrote here which I entirely stand by.
Leonard does seem to have moderated his views slightly here, and I largely agree with his discussion of real-life Venus above, at least as regards the issues of absorption within the atmosphere; this is certainly an important point. Nevertheless, the direction of heat flow required by the second law of thermodynamics, which is really fundamental here (particularly in its entropic form), has to be from hot to cold, that is, from the lower atmosphere to the upper, so all the net heat flows must be in that direction.
I will just note here a few specific points below of continuing disagreement, and leave it at that.
Leonard quotes several sources on potential temperature – that’s fine as far as it goes, but it is an approximation associated with rapid motion only, the adiabatic approximation, which Scienceofdoom explained right at the start of this whole post. When things slow down, the adiabatic approximation becomes invalid, and real temperature, not potential temperature, is key. In particular, Leonard’s claim that
is false. Conduction drives to constant real temperature, not potential temperature. The rate of heat flow by conduction is proportional to the gradient of temperature, not the gradient of potential temperature. And the zeroth law ensures that the equilibrium state, which is the ultimate state of any closed system with many degrees of freedom of this sort, must have a constant uniform temperature (not uniform potential temperature).
These laws apply regardless of any external fields like gravity or internal interactions between the molecules. Even if some portion of the atmosphere liquefies, so we have a lake condensed at the bottom of our “room” and a gas of molecules above that, the zeroth law ensures that, once this system reaches equilibrium, the temperature must be the same, bottom to top.
Potential temperature is a simplification associated with the adiabatic approximation. Leonard points out the adiabatic approximation is pretty good when you have large parcels of atmosphere moving – the heat exchange is only at the edges of the parcels. But actual motion of air is complex and chaotic; turbulence intervenes, and the large motions devolve into eddies of smaller and smaller dimensions. Motion of air is necessarily resisted by the surrounding air; real atmospheres have viscosity. Convection is dissipative, and the energy of motion must eventually turn into heat. The convection will slow down and eventually cease, unless there is something driving it; energy flux from the sun directly to the planetary surface (or to low levels of the atmosphere at least) is essential.
“it is an approximation associated with rapid motion only”
What you need here is the Rayleigh number. High Ra means convective transport dominates diffusive. Ra goes up very fast with length scale. In atmospheres, it’s generally huge.
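For anyone who wants to see how fast Ra grows with length scale, here is a rough sketch; all the fluid properties are assumed values for ordinary air, not anything specific to Venus:

```python
# Rayleigh number Ra = g * beta * dT * L^3 / (nu * kappa)
# Illustrative air-like values (all assumptions for the sketch):
g     = 9.81      # m/s^2
beta  = 1/288.0   # thermal expansion coeff of an ideal gas ~ 1/T, 1/K
dT    = 10.0      # temperature difference across the layer, K
nu    = 1.5e-5    # kinematic viscosity of air, m^2/s
kappa = 2.1e-5    # thermal diffusivity of air, m^2/s

def rayleigh(L):
    """Rayleigh number for a layer of depth L metres."""
    return g * beta * dT * L**3 / (nu * kappa)

for L in (0.01, 1.0, 1000.0):   # 1 cm, 1 m, 1 km
    print(f"L = {L:8.2f} m  ->  Ra ~ {rayleigh(L):.1e}")
```

The conventional critical value for convective onset is Ra of order 10^3, so a centimetre-scale layer is marginal while a kilometre-scale layer is some fifteen orders of magnitude past critical, which is Nick’s point about atmospheres.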
Leonard,
Fair enough.
However, for the atmosphere to absorb all the solar radiation, it has to take on the characteristics of a black body; otherwise there will be wavelengths that are not absorbed. It will do this at a sufficient depth or density, but in that case you have merely substituted a black body with a fuzzy surface for one with a clearly defined surface. If no radiative energy can get in below the surface boundary, then none can get out either.
My understanding is that without there being more energy below than above, convection will not happen. If there is no external energy that penetrates to the lower levels, there is no external energy to power convection.
If so, then below this level of total absorption of radiation, there is no transfer of energy through radiation and there is almost none through convection. I suppose there will exist lateral movements, but outside of turbulence effects, and convection powered by decay or tidal forces from below, I don’t imagine there is much vertical mixing. That leaves mostly conduction. Within an almost exclusively conductive body, whether solid or fluid, I can’t imagine an interior region cooler than an exterior region which entirely encapsulates it.
I believe that DeWitt has said the same thing in more numeric terms.
I suspect that your model relies on there being a boundary layer where incoming radiation is entirely blocked and outgoing is not.
Arthur Smith,
I went back and read Toth’s note ( http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.2980v2.pdf ) about the Virial theorem and planetary atmospheres. The Virial theorem applies as long as local thermal equilibrium (LTE) applies, whether the temperature varies with altitude or not. Collisions and mean free path are not important in the derivation except as applies to LTE being valid. He also supports your position that thermal equilibrium = isothermal.
Leonard,
For a tall room with opaque real surfaces (emissivity = 1) at the top and the bottom, with only the top surface maintained at a constant temperature and the bottom surface perfectly insulated, start with a temperature gradient increasing from top to bottom and no absorbed solar radiation or other source of energy at the bottom surface. The bottom surface will then emit more energy than it receives, because it is a more perfect black body than the atmosphere above it. That means it will cool, and cool the atmosphere above it. In the fullness of time, the temperature of the top and bottom of the room will become equal, the same as would apply if there were no gas, or a perfectly transparent gas, in the room. Small as it is, the solar flux at the Venusian surface is what maintains the high surface temperature.
You could start from the other direction with the bottom surface at absolute zero and all the gas in a layer on the bottom. The bottom surface temperature would never exceed the temperature at the top whether the gas were transparent or opaque in the IR.
Chris,
The atmosphere of Venus is known to absorb over 90% of the solar radiation. This is probably closer to 95% with reflection from the surface back up. That leaves only a very small fraction of the absorbed solar energy heating the ground. Kicking it up some more in a model, to effectively 100%, is not a big deal. Note that the incoming solar energy is mostly at different wavelengths than the thermal outgoing energy, and the thermal energy is almost fully absorbed and re-radiated in just a few meters (see DeWitt’s comments). The solar radiation is mostly absorbed by the clouds and aerosols (and some by CO2), and this energy is converted to thermal energy, so it is not necessary to invoke a diffuse black body.
DeWitt,
I stated that the walls and top and bottom were perfect mirrors, i.e., not absorbing or emitting. This was to keep radiation out of the point I was making.
FWIW, clouds, aerosols, and the atmospheric gases themselves are what I’m thinking of as a diffuse black body.
Thanks to scienceofdoom for making his site available for this innovative and hugely useful format, and to his colleagues for making it possible. A simple question for Arthur Smith: why do you use the term “net heat” in your discussion of the 2nd law of thermodynamics after introducing it with the Clausius definition, which deals only with “heat”?
People,
Joe Born claims he is not a scientist. However, he seems to be the responder who gets it. Please read his comment of Aug 16, 9:53 pm, as it makes the critical point you seem to be missing. Keep in mind that the adiabatic lapse rate is of order 0.01 K/m. This is a very small gradient. Most problems involve gradients many orders of magnitude larger over small physical distances, so treatments of conduction in gases under normal gradients don’t have to consider gravity effects. Speed of change is NOT the defining feature of adiabatic compression or expansion; the absence of conduction through confining boundaries is. In many cases a slow change does allow conduction through the boundary, but not always.
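Leonard’s order-of-0.01 K/m figure follows directly from the dry adiabatic lapse rate formula Γ = g/c_p. A quick sketch; the heat capacities (particularly for hot CO2) are rough assumed values:

```python
# Dry adiabatic lapse rate Gamma = g / c_p
# (g in m/s^2, c_p in J/(kg K); values are rough assumptions for illustration)
cases = {
    "Earth (air)": (9.81, 1004.0),
    "Venus (CO2)": (8.87,  850.0),   # c_p of hot CO2 is a rough assumption
}

for name, (g, cp) in cases.items():
    gamma = g / cp                    # K per metre
    print(f"{name}: ~{gamma:.4f} K/m  (~{gamma*1000:.1f} K/km)")
```

Both come out close to 0.01 K/m, i.e. around 10 K per kilometre, which is indeed tiny compared with the gradients in most laboratory conduction problems.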
DeWitt Payne
I also went back and read Toth’s note ( http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.2980v2.pdf ) about the Virial theorem and planetary atmospheres. I did not see him demonstrate a position that thermal equilibrium = isothermal. He seemed to imply that with “We assume that the gas is in thermal equilibrium (the real atmosphere isn’t, but that’s another story), so its temperature is constant”, but with no supporting material. He may have only meant that in the absence of convection, that radiation would drive it to isothermal. I agree with that, and since the gas he referred to was diatomic, it would radiate. However, that is not the point I was making, and would not be true for a single atom molecule.
I didn’t say demonstrates, I said supports.
In a perfectly reflecting container starting with an adiabatic lapse rate, if the gas radiates at all (and all real gases at significant pressure will radiate, by collision-induced emission if nothing else), then the gas at the bottom of the room will radiate more energy upward than it receives and will cool. The top, OTOH, will see more radiation than it emits and will warm. The end result will be no temperature gradient. A perfectly transparent gas in a perfectly reflecting container is getting rather far from reality, but somehow I don’t think that in the limiting case of perfect transparency the gas will suddenly be able to maintain a temperature gradient.
A question to SoD, Arthur, Leonard and others who might want to answer:
If we started the room with isothermal gas that is evenly distributed (mass wise – that is density is the same everywhere) would you agree that at first, the temperature on the top of the room would start falling and on the bottom the temperature would start rising.
Disclaimer (just in case): I’m aware that the initial reaction of the system doesn’t say much about the final equilibrium state. Just trying to get some footing on which way things go under certain circumstances.
A temperature gradient would certainly develop if gravity were suddenly switched on. The initial conditions are somewhat similar to a diffuse gas cloud in space that undergoes gravitational collapse. If the cloud is mostly hydrogen and large enough, the temperature at the center will be high enough for a fusion reaction to start and form a new star.
Leonard,
I may have asked this on the other thread but I can’t remember the answer. If your tall room started out with an isothermal atmosphere, do you maintain that a temperature gradient equal to the adiabatic lapse rate would form?
@Arthur Smith
You ignored planet’s gravity variation with altitude.
For the concentration of molecules n at a distance r from the center of a planet the Boltzmann law gives then:
n(r)=no*exp[G*M*m/(k*T*r)]
where:
G – gravitational constant
M – planet’s mass
m – molecule’s mass
k – Boltzmann constant
T – temperature of the atmosphere
At infinity, we would obtain a finite value for the concentration, namely n = no. This is nonsense.
Therefore, the Boltzmann formula in general does not apply to the planetary atmosphere.
@DeWitt Payne
Based on my comment, an isothermal atmosphere is not an equilibrium state of a planet’s atmosphere and therefore a temperature gradient will appear.
Well – yes, of course, under the full gravity field planetary atmospheres are not stable, they decay gradually, so strictly speaking there can be no full thermodynamic equilibrium in such a system (in the long run it all goes away). Which I did point out in a comment lost in the other original thread.
For asteroids and even our Moon, the ratio of n(surface) to your n0 is low enough that most of the atmosphere decays in millions of years or less (actually that decay is accelerated by solar wind ionization effects in the upper atmosphere, so the Boltzmann statistics underestimates the true loss rate).
For Earth that formula is why we have no hydrogen or helium in our atmosphere – the planet didn’t have sufficient gravity to keep that n(surface)/n0 ratio high enough. But for oxygen and nitrogen in our atmosphere the ratio is high enough to retain the atmosphere for many billions of years.
The approximation of a constant g is pretty close to true out to 100 km from the surface (the change in g is about 1.5%), which is all we’ve been talking about in the examples here.
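The retention argument can be made numerical. The dimensionless parameter is GMm/(kTr) at the surface; the Boltzmann profile with varying g then gives n(infinity)/n(surface) = exp of minus that. A sketch for Earth, where the 250 K temperature is a rough assumption for illustration:

```python
# Dimensionless gravitational binding parameter GMm/(kTr) at the surface.
# The Boltzmann profile gives n(infinity)/n(surface) = exp(-GMm/(kTr)), so a
# large value means the finite density "at infinity" is utterly negligible.
G   = 6.674e-11        # gravitational constant, m^3/(kg s^2)
k   = 1.381e-23        # Boltzmann constant, J/K
amu = 1.661e-27        # atomic mass unit, kg

def binding(mass_amu, M=5.972e24, r=6.371e6, T=250.0):
    """GMm/(kTr) for a molecule of the given mass on an Earth-like planet.
    T = 250 K is a rough assumed upper-atmosphere temperature."""
    m = mass_amu * amu
    return G * M * m / (k * T * r)

for gas, mass in [("H2", 2), ("He", 4), ("N2", 28), ("O2", 32)]:
    x = binding(mass)
    print(f"{gas}: GMm/(kTr) ~ {x:.0f}  ->  n(inf)/n(surf) ~ exp(-{x:.0f})")
```

Even for H2 the bulk ratio looks tiny, which is why the actual loss of light gases is dominated by the fast tail of the velocity distribution and the ionization effects Arthur mentions, rather than by the bulk Boltzmann ratio itself; the point here is just how strongly the exponent favors the heavy gases.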
DeWitt,
Let us be clear what we are referring to. A gas in gravity, and not radiating, would always tend toward adiabatic, since that lapse rate is the condition of no heat transfer. However if the gas also radiates, this radiation energy is not gravity dependent, so will decrease the lapse rate. The final result depends on whether convection driven by sources and sinks can mix the atmosphere enough to overcome the radiation flux. The conduction alone would drive any non adiabatic gradient toward adiabatic, but conduction alone is very slow, and even a very small level of radiation would dominate it. Note that I was careful to define the gas in the tall room as non radiating, as single atom molecules such as Argon would be. If the room of non radiating gas were initially forced to be isothermal, it would take millions of years for conduction alone to force it to an adiabatic lapse rate, but it would go that way. Real atmospheres are radiating, so the tall room example was to show the effect of adiabatic compression as a cause of heating, nothing more.
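Leonard’s “millions of years” for conduction alone is consistent with a simple diffusive timescale estimate, t ~ L²/κ; the diffusivity below is an assumed value, roughly that of air at 1 bar:

```python
# Order-of-magnitude diffusive timescale t ~ L^2 / kappa for molecular heat
# conduction across a tall gas column (kappa assumed ~ air at 1 bar).
kappa = 2e-5                   # thermal diffusivity, m^2/s
L = 100e3                      # column height, m
seconds_per_year = 3.156e7

t = L**2 / kappa               # seconds
print(f"t ~ {t:.1e} s ~ {t/seconds_per_year:.1e} years")
```

Ten million years or so for a 100 km column, which is why even a very weak radiative or convective channel swamps conduction in practice.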
Mait,
Yes, and in fact it would go to the adiabatic lapse rate.
I think there are only two basic issues on which those here disagree.
1) The need to have a non-trivial amount of solar radiation reach the surface of Venus in order to have the high temperature, vs. not needing any.
2) The issue of whether the adiabatic lapse rate or isothermal is the condition where there is no conductive heat transfer (in the absence of radiation).
I think we all agree that radiation tends to lower the lapse rate, and convective mixing tends to restore it. Does anyone disagree on these statements? If we agree, is there any argument that supports or refutes the two points that has not been made? I don’t think we can add much to this argument if we can’t resolve those points.
I think we are not understanding what the other considers to be key points. It does not matter to me whether the boundary layer of a black body is a millimeter thick or kilometers thick; what I am saying is that, below that boundary layer, it is not possible for there to be an interior region that is cooler than a more exterior region which entirely encloses it.
That was intended to be a counter to your assertion that the ground, or area near the ground, could be cooler than the atmosphere above it, when the atmosphere is thick enough to absorb all radiation.
There will be convection above this layer, but I don’t see that it matters if we are talking about what happens below the point of complete absorption.
I’m not sure what you mean by “to have the high temperature”. High relative to what?
It takes a lot of atmosphere for the density to reach levels where the gas acts as a black body. In your model, there has to be at least this much atmosphere or there will be solar radiation reaching the surface. (Even if it has to go through some absorb-emit cycles to get there.) That also implies that there is a lot of atmosphere above the region where the atmospheric gases take on black body characteristics. (I suspect it implies more gas above the boundary layer than actually exists on Venus because we agree that some solar radiation gets through, but no matter, this is a thought experiment.) Since a lot of the atmosphere is some variety of GHG, it’s going to be _hot_ at the boundary layer. And it is not going to get any cooler the further in you go. I don’t know if it is going to get any hotter, but it isn’t going to get any cooler. If the atmosphere is thick enough to absorb as a black body, it is thick enough to emit as a black body. No?
And if it emits as a black body, then it will fairly quickly achieve LTE with the surface.
Science of Doom, do you have any comments at this point?
DeWitt,
If the atmosphere absorbs 100% of the ground radiation in a short distance, I agree that even if it received no solar radiation, it would not be significantly cooler than the lower atmosphere, but would be about the same temperature rather than warmer as it is with absorbed solar radiation. This does not change my basic point that no absorbed solar radiation to the ground is needed for the surface and lower atmosphere to be hot. I also agree that greatly reducing the CO2 concentration and replacing it with equivalent other gas would still result in a hot ground, although the level might change some.
Nick,
I have repeatedly said that the issue of conduction being zero with adiabatic or isothermal gradients is not important in real cases due to radiation and convection dominating the issue. It is a fundamental issue, not a practical one. I would like it resolved, but it actually is not important to the specific problems.
Leonard,
The formation of an adiabatic lapse rate from an isothermal gas creates a perpetual motion machine of the second kind. Put a thermopile in the room with the hot junctions at the bottom and the cold junctions at the top (or maybe it’s the other way around). For an isothermal system, there is no voltage generated and no energy can be extracted. The formation of a temperature gradient will create a voltage that can be used to do work. The average temperature in the room will then drop over time. You are then extracting energy from a single thermal reservoir. That’s perpetual motion of the second kind. If you start with a temperature gradient, you can extract energy, but the process will cause the bottom to cool and the top to warm and eventually eliminate the gradient.
Conversely, you can create a temperature gradient from an isothermal system with the same thermopile or some other form of heat pump, but only by doing work on the system. On a planet, that work comes from the temperature difference between the poles and the equator. There is no such temperature difference in your example so no work can be done without violating the Second Law by reducing the temperature of the system. A temperature gradient can’t form. That also implies that any gradient that exists will go away over time.
I suspect that the formation of a temperature gradient from an isothermal gas also reduces the total entropy of the system, which violates the Second Law.
I kinda disagree about the perpetual motion machine… if the sole mechanism was gravity, I’d agree. But take a slowly rotating planet such as Venus. With high atmospheric absorption at altitude on the day side causing the super-rotation, and the differential caused by the losses on the night side, this could conceivably be the engine performing the work… it’s not really creating energy, but using solar energy to run the engine, exhausting to space.
Now if the super-rotation is able to cause mixing of the layers, by, say, simple displacement of the molecules at the lower/boundary levels to the high winds, it could cause mixing of the molecules across the altitudes, which would lead to what Leonard has been saying? Yes? No?
But really speaking, there are mechanisms available to do the work, and plenty of energy available from the sun, without needing to rely on free energy.
Leonard,
It’s rare to see such depth and engagement in blog debates. Congratulations to you all.
I read your post above, and I think I understood most of it. It’s beyond my reach to argue for either side, though.
But I’d like to understand your sentence below. What would be the consequences of your reasoning being the correct one? What would change in the accepted projections about future emissions and warming, for instance?
This issue is important because it relates to the mechanism causing greenhouse gas atmospheric warming, and the effect of changing amounts of the greenhouse gases.
I did not follow the whole discussion. Please feel free to direct me to somewhere else if you explained this already.
DeWitt,
If either radiation or a device such as a thermocouple changed the lapse rate, obviously there would have to be something to drive the lapse rate back. This takes energy (Nick’s heat pump). There is no free lunch or violation of any laws by the lapse rate being the no conduction case. If there were no external drivers, and no radiation, the effect of the thermocouple would be to lower the lapse rate and lower the energy of the entire system. Slow conduction back would eventually reestablish the lapse rate if the thermocouple were removed (and no radiation), but the entire curve would have shifted to a slightly lower temperature corresponding to the lower energy state. It is like having a level of water at elevation going through a generator to a lower level. You have to pump it back up to repeat and there is no free energy, just the initial gravitational potential energy.
Alexandre,
There were initial discussions on why greenhouse gases raise ground temperatures, and specifically why Venus is hot at lower levels. Statements were made that solar input is the source of energy, but that the location of outgoing radiation, coupled to the adiabatic compression in the atmosphere, is the cause of greenhouse heating. The function of the greenhouse gas is to move the location of outgoing radiation by absorbing and re-radiating through several layers. This results in a limited energy flux from layer to layer through the atmosphere. If convective mixing from circulation and turbulence (from day/night and latitude variations) is able to force the atmosphere to maintain the adiabatic lapse rate in the presence of this flux (atmospheric pumping), the only factors that determine ground temperature are the location of the radiation to space, and the adiabatic lapse rate. This is not the commonly accepted model of what happens. One major issue is the need for some solar energy to directly heat the ground. Another is the quantity of greenhouse gas needed. You need to read the details again to better see the arguments. It is difficult to completely answer you in a short space. If you re-read and have specific questions I will try to answer as best I can.
Leonard,
I understand the basic principles of the greenhouse effect (as commonly accepted), with GHG raising the location of OLR emission. This height would be (AFAIK) somewhere near the Earth’s tropopause in the case of the main CO2 absorption band.
I do not understand how these principles conflict with your sentence below:
If convective mixing from circulation and turbulence (from day/night and latitude variations) is able to force the atmosphere to maintain the adiabatic lapse rate in the presence of this flux (atmospheric pumping), the only factors that determine ground temperature are the location of the radiation to space, and the adiabatic lapse rate. This is not the commonly accepted model of what happens.
In the mainstream understanding, adiabatic lapse rate and the height of radiation are relevant to surface temperature, as both change the amount of OLR. Other factors are important too, like the amount of incoming shortwave. But I don’t think this is disputed here.
I assume your discussion is far beyond these basics.
Where does your view conflict with the commonly accepted model?
I understand you don’t reject the relevance of GHG, as suggested by the end of the first paragraph in your Concluding Remarks above. With higher concentration of, say, CO2, IR photons will have a greater likelihood of encountering a CO2 molecule and will raise the OLR emission to a thinner and higher layer of the atmosphere. Where do you disagree, then?
Alexandre,
The discussion was on why Venus is hot. The common accepted view is that solar radiation reaching the surface is trapped by the greenhouse gases, heating the lower atmosphere and ground. The buoyancy from atmosphere heated to slightly above the adiabatic lapse rate by the hot ground is also required to mix the lower atmosphere to maintain the adiabatic lapse rate. My position, and that of several others, is that the radiation does not have to reach the ground. It can be absorbed in the atmosphere well above the ground, and as long as it can induce circulation and mixing, the adiabatic lapse rate assures the lower atmosphere and ground will be just as hot as if the solar radiation reached the ground. There is also no requirement for positive buoyancy since the large circulation currents mix the atmosphere sufficiently. An additional difference comes from discussions on the effect of reducing the amount of greenhouse gas but replacing it with a non greenhouse gas to maintain total mass.
Leonard,
You can’t extract free energy from a gravitational field. You can drop weights, but you have to lift them up first. Filling the room from the top is dropping weights. How much energy did it take to lift all that gas to the top of the room? If we’re talking Venus with g = 8.87 m/s2, a 100 km tall room and a surface pressure of 93 bar, that’s 93*101,300 Pa / 8.87 m/s2, or 1.062E6 kg/m2. Lifting that mass to 100 km against g of 8.87 m/s2 requires 8.87E5 J/kg. No wonder it gets hot when it’s dropped into the room.
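DeWitt’s arithmetic can be reproduced in a couple of lines (constant g assumed, as in the comment):

```python
# Hydrostatic column mass per unit area on Venus, and the specific energy
# to lift that mass to the top of a 100 km "room" (constant g assumed).
g = 8.87                 # m/s^2
p = 93 * 101300.0        # surface pressure, Pa
h = 100e3                # room height, m

mass_per_area = p / g    # kg/m^2, from hydrostatic balance p = (m/A) * g
lift_energy   = g * h    # J/kg to lift the gas to the top

print(f"column mass ~ {mass_per_area:.3e} kg/m^2")
print(f"lift energy ~ {lift_energy:.2e} J/kg")
```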
The entropy of a system with a temperature gradient is always lower than an isothermal system. Creating an adiabatic lapse rate from an isothermal gas creates free energy and lowers entropy. Therefore it can’t happen in a closed system.
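The entropy claim can be illustrated with a toy two-parcel calculation. Note this ignores gravitational potential energy entirely, which is essentially the “static energy” caveat raised in the Gibbs quote later in the thread:

```python
import math

# Let two equal heat capacities at T1 and T2 relax to their common mean.
# dS = C * ln(Tm^2 / (T1*T2)) >= 0 for any T1 != T2: erasing a gradient
# raises entropy, so creating one from an isothermal state with no work
# input would lower it. (Gravitational PE is ignored in this toy model.)
def delta_S(T1, T2, C=1.0):
    Tm = 0.5 * (T1 + T2)
    return C * math.log(Tm**2 / (T1 * T2))

print(delta_S(700.0, 300.0))   # positive: equilibration raises entropy
print(delta_S(500.0, 500.0))   # zero: already isothermal
```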
DeWitt,
The entire point of the tall room model was to point out that you can get a higher ground temperature from a lower (but selected value) temperature gas supplied at the top, just from adiabatic compression, if the gas filled the room enough to retain a useful pressure at the top (which required that it would be massive enough). This was just to demonstrate adiabatic compression could do the heating. I made this model for a limited reason as a prelude to the main discussion. This point feeds into the actual case of Venus as shown next.
Now we look at the atmosphere of Venus. The GHG raises the location of OLR emission from the surface. The effective (average) height of the OLR has the gas at the temperature required to match the OLR to input. We thus have a fixed temperature at a fixed height, as in the room model.

The objective here was to demonstrate that solar energy going into and heating the upper atmosphere, with this upper atmosphere being transported down by convection (and adiabatically heated as it goes down), could result in the high temperature at the bottom. Notice that no solar energy was required to go deep into the atmosphere, or to heat the ground directly. The temperature at the outgoing height is sustained by solar input energy, but this is a much lower temperature than below. This upper-level temperature, plus convection sufficient to mix the atmosphere and maintain the adiabatic lapse rate even in the presence of radiation flux losses, does the lower-level heating.

The only question is whether a limited depth of penetration of solar heating is sufficient to cause the large-scale mixing. Keep in mind that over 90% of the absorbed solar radiation on Venus is absorbed in the top half of the height of the atmosphere. The fact that the atmosphere radiates means that some forcing has to be added to maintain the adiabatic lapse rate in the presence of the energy lost by radiation (unlike the room model). This forcing comes from unequal solar heating of the atmosphere (day/night and latitude variation) causing large-scale circulation currents. The large atmospheric tidal mixing does the rest. Thus it is not the trapping of solar radiation that reached the ground and lowest level that causes the heating, but trapping anywhere in the atmosphere, even all near the top, plus mixing to maintain the adiabatic lapse rate.
I hope this makes clear why I used the room model as a background.
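For what it’s worth, Leonard’s picture reduces to one line of arithmetic: surface temperature equals the emission-level temperature plus lapse-rate warming over the emission altitude. All three inputs below are rough assumed values, not measurements:

```python
# Back-of-envelope: surface T from emission-level T plus lapse-rate warming.
# All three numbers are rough assumptions for the sketch.
T_emit = 230.0    # K, approximate effective emission temperature of Venus
h_emit = 60e3     # m, rough mean altitude of emission to space
gamma  = 8.0e-3   # K/m, a typical (sub-adiabatic) mean Venus lapse rate

T_surface = T_emit + gamma * h_emit
print(f"estimated surface T ~ {T_surface:.0f} K (observed ~735 K)")
```

Landing near the observed ~735 K is of course sensitive to the assumed inputs; the sketch only shows the arithmetic of the argument, not who is right about what maintains the lapse rate.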
I would like to throw this into the mix:
“Gibbs (1928) applied the entropy maximization principle to an isolated, heterogeneous mass under the influence of gravity and proved that an isothermal profile has greater entropy than any other possible profile having the same static energy, and will therefore be the equilibrium temperature profile.”
http://journals.ametsoc.org/doi/abs/10.1175/JAS3906.1#h3
This seems to support Arthur, except for the qualifying statement about static energy, which I am not able to resolve.
D.
diessolli,
I read the paper and can’t find any reason to disprove it, so I am less sure on the no-conduction issue. I also am not able to resolve the static energy point. I do know that if a tall enough gas column is stirred enough it will form the adiabatic profile. Since thermal conductivity is so small in gases compared to any reasonable convective mixing, this would result whether isothermal or adiabatic were the maximum entropy state. One point bothers me. It is clear that if parcels of gas are moved up and down it drives the profile toward adiabatic. The size of the parcels can be reduced until we are talking about molecular motion. Where does the size of the parcel enter the analysis?
I also made the point earlier and repeat it, that it does not matter to my overall analysis which profile results in the conduction-only case, since that conductivity is so small that it is not a player compared to radiation and convection. However, if the conduction-only case is isothermal, I would be wrong on my 1:32 pm point 2), and only point one would be the main issue.
I would say that when you go down to that level thermodynamic concepts like heat and even temperature are no longer meaningful.
You can use the microscopic viewpoint to look at the problem of conduction though. To demonstrate that an isothermal gas column develops an adiabatic temperature profile you would have to show that the vertical exchange of molecules prefers the downward direction.
Whilst I agreed that the answer does not really matter when looking at the real atmosphere, I think it’s an interesting exercise, and I admit that it feels somewhat frustrating to not be able to answer this conclusively and simply write down what profile the column would develop.
D.
As Arthur says, “adiabatic” has a built-in notion of timescale. The timescale is relative to the length scale, as indicated by the Rayleigh Number (which must be large). As parcels shrink, the timescale over which you can speak of adiabatic shrinks faster (Ra goes down as L^3). The adiabatic pump becomes like a bicycle pump with a very leaky washer.
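The Ra ∝ L³ point above can be made concrete with rough sea-level air properties (my assumed values):

```python
# The Rayleigh number scales as L^3, so as a "parcel" shrinks, buoyant
# overturning shuts off (Ra falls below the critical value, roughly 1708
# for a layer heated from below) and diffusion wins.

g = 9.8         # m/s^2
beta = 1 / 290  # 1/K, thermal expansion coefficient of an ideal gas at ~290 K
dT = 1.0        # K, assumed temperature contrast across the parcel
nu = 1.5e-5     # m^2/s, kinematic viscosity of air
alpha = 2.1e-5  # m^2/s, thermal diffusivity of air

def rayleigh(L):
    return g * beta * dT * L**3 / (nu * alpha)

for L in (10.0, 1.0, 0.1, 0.01):
    print(f"L = {L:5} m  ->  Ra = {rayleigh(L):.3g}")
# Ra crosses below ~1708 for parcels a few centimetres across
```

The precise crossover depends on the assumed ΔT, but the cubic scaling is the point: the "adiabatic pump" leaks faster than it pumps once parcels get small.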
I gotta ask, it seems everyone “kinda” agrees that when it comes to a planet, it’s “possible” to get an adiabatic lapse rate from upper-atmosphere solar absorption, through atmospheric mixing driven by the night/day differential?
And everyone has come to the conclusion that the room will settle to an isothermal equilibrium, lacking the differential in energy input/exhaust to power the molecular mixing?
Or is there still an impasse here 😉
The first is in fact the point of main contention and not agreed to by all. The second is a basic physics issue, and not yet fully resolved, but is not critical to the main issue.
Sorry, I read and I argued several times over, and only now do I come to the point of realizing that this whole argument does not really matter.
Leonard argues the point that it is largely pressure that drives surface temperature and that changing the concentration of greenhouse gases only changes the altitude at which incoming energy is balanced with outgoing energy. This is very close to what David Archer says in _Global Warming: Understanding the Forecast_. OK, maybe that is a book targeted at climate change for dummies, but I believe the concept has validity.
However, my point is that humans are changing the radiative properties of the earth a lot more than we are changing the pressure of the atmosphere. Under conditions where pressure does not change in any significant way, changing the radiative properties has a lot more effect than changes in pressure. That’s kind of a no-brainer for anyone.
So, I think I’m going to leave off that I simply disagree with Leonard when he says, “This issue is important…”
I don’t really see what this has to do with Earth…. We know what Earth’s surface absorbs. On this theory of Leonard’s, I would think it should result in uniform temperatures at all altitudes below the “tropopause” irrespective of day or night, whereas there should be a noticeable difference in a pure GHG-driven profile…. But I’m guessing 🙂
Let me rephrase this.
The discussion has been very interesting. However, since some solar radiation penetrates to the surface on both Earth and Venus, the question of what the temperature by altitude would be if it did not is immaterial to what is happening on Earth or Venus.
Maybe we should start talking about Jupiter.
I’m thinking of Hadley cell circulation, or something similar. If all the incoming energy were absorbed at some altitude, whatever that altitude is, and regardless of whether it is absorbed in a thin layer or a thick one, there will be upward convection from that layer to some tropopause layer above it. So, if there is upward movement at some location, there has to be some downward movement elsewhere. But how far down will the movement be?
The sinking mass of air will gain in temperature on the way down. At some point, the energy it lost through radiation at altitude will be balanced by the energy it gains from falling. When it does, it will quit falling. If it fell further, it would get hotter than what would be required to make it buoyant. I suspect the balancing point will not be below the altitude from which the upward movement started. So, it isn’t clear that this convective movement would deliver energy to the surface if the layer of total absorption were above it.
Chris G,
The change in radiative properties of Earth’s atmosphere is the more important issue. However, although we know CO2 is a greenhouse gas, direct calculations show that doubling its concentration, assuming that is the only change, would only raise average temperatures a bit over 1 degree C. The main question in the AGW debate is the effect of feedbacks. The warmers say the water vapor positive feedback would increase that effect to 3 to 6 degrees C. The facts seem to indicate that is wrong, and a negative feedback may be more likely, although this is not conclusive yet. The result is that either there is no problem or there is a big problem. Data so far seems to be heading toward the no-problem conclusion. If we get several years of cooling in the immediate future, as is now expected, the AGW issue is dead.
We are drifting off topic.
However, there are many feedbacks, both positive and negative. The geologic record seems to indicate that the climate system is not strongly self-stabilizing as you suggest, and that under present conditions, positive feedbacks will be stronger than the negative ones until some higher mean temperature is reached.
There is no indication that global temperatures are decreasing (especially not in the oceans), and even if they do decrease for several years, that would not mean that the long-term trend is not still upward. For instance, there are several downturns in the graphs available here:
http://woodfortrees.org/plot/
None of the downturns are strong enough to remove the overall upward trend.
Even though it’s perhaps not so important in a real atmosphere, I’d like to get the fundamental physics right.
In the “tall room” thought experiment with no radiatively-absorbing gas (and no convection) I believe that the temperature will end up isothermal (of equal temperature).
I believe Leonard thinks that it will end up following the adiabatic lapse rate.
Conduction equalizes temperature not potential temperature.
At least, I can’t find an atmospheric text book with a different point of view. And I can’t find a fundamental thermodynamic explanation for temperature not equalizing – whether that takes millions or billions of years isn’t important.
With the radiatively-absorbing gas (in the tall room) we seem all in agreement.
I believe that the adiabatic lapse rate is reached once convection – bulk movements of air – takes place. If no convection takes place, there will be no adiabatic lapse rate.
I also would like to get the physics right, and I have read different sources with different conclusions. However, it is not required for the basic argument I made.
If there is radiation, and there is no convection, there will be no adiabatic lapse rate. This is true whether the conduction-only case is isothermal or adiabatic, since the conduction is so small compared to any significant radiation. We agree on that.
However, there will be convection due to solar heating variation. The only issue on Venus, and for my argument, is whether that solar energy needs to reach the lower atmosphere and ground. If the convection can maintain the adiabatic lapse rate when heated only from the upper level of the atmosphere, that will heat the lower level and ground. In that case, it is only the level where outgoing radiation effectively leaves from, and the adiabatic lapse rate, that determines the ground temperature. This also shows what the effect of changing greenhouse gas concentration would be. It would only change the altitude of outgoing radiation, and that would be the cause of any change in temperature. Having many times the greenhouse concentration is not as big a factor as having a taller (higher-pressure) atmosphere once the greenhouse concentration is already fairly high.
I agree with all of your points, noting the “ifs”.
The area that I still don’t feel I know for sure is the effect in a real Venusian atmosphere of the rotating planet and differential solar heating. This supplies energy – perhaps enough to create the convection needed. Perhaps not.
Note that the fact that the real Venus has a lapse rate matching the theoretical adiabatic lapse rate doesn’t prove the theory that the differential heating and rotating planet causes it – the alternative theory is that enough solar energy reaches low enough in the atmosphere to initiate convection.
The question for the real Venus is what causes the convection?
However, I expect that the question of whether differential heating and planetary rotation can cause sufficient convection is a soluble problem; I just don’t know how to solve it.
So I can’t say that Leonard is wrong on this point – but equally – I’m not sure how he can be so sure that he is right. I think some equations with numbers are needed. And these equations aren’t the derivation of the adiabatic lapse rate. Why?
Because we all agree that moving air vertically generates adiabatic cooling (for lifting), or adiabatic heating (for lowering).
But what causes the huge convective motions on the real Venus?
In place of equations could be the results of some numerical analysis that model convective flows.
I want to make one additional point. If, as on Venus, the greenhouse concentration is so high that the radiation flux in the lower atmosphere is very low (DeWitt concludes about 8 W/m2), the convective energy requirement to replace it is also very low. The end point of a perfectly absorbing gas is no radiative energy flux, so very small convection would maintain the adiabatic lapse rate. If this gas absorbed solar energy very near the top (but deep enough to induce a small circulation), that would be enough to heat the surface, and the surface temperature would be the temperature at the location of OLR plus the adiabatic lapse rate times altitude.
Science of Doom,
I would like to see a calculation also, but I am not able to do one myself. However, I used existing literature to get an idea of what drives atmospheric circulation. It is clear that getting different amounts of energy into different locations in the atmosphere causes the circulation and mixing. On Earth, the ground and water absorb most of the energy, so a method to couple that to the atmosphere was needed. On Venus, the solar energy mainly goes directly into the atmosphere, so that coupling was not needed. The small portion that reached the ground did need to be coupled, but it was a small component. Why do you feel that the known absorption of over 90% of the solar energy in the top half of Venus’s atmosphere is not the main driver of the atmospheric convection, but the less than 10% absorbed by the surface is possibly important?
Leonard Weinstein:
Good point.
To answer that satisfactorily would require solving the radiative transfer equations for the solar flux absorbed through the atmosphere along with the equivalent calculations for longwave. And do we know enough about Venus to really do these calculations?
Still, a strongly absorbing atmosphere can provide a high amplification of the temperature expected from only solar energy received in the lower part of the atmosphere. That part is simple to demonstrate qualitatively, e.g., in Venusian Mysteries – noting that the atmospheric absorption is more complex than that model due to “windows” in the absorption.
There is the study I cited in the original Venus article – The Recent Evolution of Climate on Venus, Mark Bullock and David Grinspoon, Icarus (2001) – where their solution to the RTE using the radiative-convective model gave a close match to the actual.
Well, you do get the prize for making me think..
scienceofdoom,
I really do not think that we need to solve the radiative equations. DeWitt has shown that the typical absorption path of LWR is only a few m at low elevations, and the flux at lower levels is about 8 W/m2. Several sources show that over 90% of the absorbed solar radiation is absorbed by the upper half of the atmosphere. This would then be converted to LWR. It seems to me the only issue is the ability of the unequal solar heating in the upper atmosphere to generate the convection needed to maintain the adiabatic lapse rate.
Leonard Weinstein:
We absolutely would need to solve those equations. Otherwise we can’t make any prediction about the surface temperature and whether high enough temperatures are generated via radiative effects to generate “natural” convection.
The diffusive (Rosseland) model is a very good approximate solution in these circumstances of high opacity. It’s Eq (1) in Ramanathan and Coakley. That’s what the 8 W/m2 is based on.
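As a consistency check on the ~8 W/m2 figure, the diffusion relation can be inverted for the photon mean free path it implies. The deep-atmosphere temperature and lapse rate below are rough assumptions of mine, not values from the discussion:

```python
# Rosseland diffusion relation: F ~ (16/3) * sigma * T^3 * l * dT/dz,
# solved here for the photon mean free path l implied by a given flux.

sigma = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
T = 700.0         # K, assumed deep-atmosphere temperature on Venus
dTdz = 8.0e-3     # K/m, assumed approximate lapse rate
F = 8.0           # W/m^2, net upward radiative flux (DeWitt's estimate)

l = 3 * F / (16 * sigma * T**3 * dTdz)
print(f"Implied photon mean free path: {l:.1f} m")
```

The result is of order 10 m, consistent with DeWitt's statement that the typical LWR absorption path at low elevations is only a few metres.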
scienceofdoom,
It would seem that with all the recent progress in modeling, that there would be someone able to examine the points of issue. I do not have any contact with the modelers. If you or anyone you know has contact, would you encourage them to do a cut at this. I don’t know if a result either way would change any minds, but I think it would be useful.
I think the present format has been very useful. It has not made all come to a single position, but it has allowed detailed back and forth in an open and friendly discussion, and clarified the positions on remaining differences on the Venus issue. I think there are more agreements than differences, and the differences are mainly on a basic physics issue and on details of whether a process can actually occur given certain limitations. I would like to see more discussions of this level, on other selected topics if possible, here or on other sites.
Leonard Weinstein:
I agree. I appreciate your contribution, it has made me think a lot and I feel like I understand the subject better as a result.
I hope we have many more discussions like this one.
DeWitt @ Aug 18, 6:41pm says: “The entropy of a system with a temperature gradient is always lower than an isothermal system”.
My initial reaction was that it would depend on whether the [enclosed] isothermal system had a temperature higher than the maximum of the temperature gradient. However, the true measure would be a comparison of the total energy in the 2 [enclosed] areas. The temperature-gradient area would have (assuming the gradient accounted for the total energy) energy expressed as the sum Et [top of the gradient] + Et-1 + Et-2 + … + Etb [bottom of gradient]; the energy in the isothermal area would be the energy in a unit of area times the total area.
But this assumes that the temperature of the isothermal system is above zero; if the temperature is at or just above zero, so that there is little molecular motion and in effect no conduction or radiation, then DeWitt’s statement holds true.
The interesting thing is what would happen if both the gradient and isothermal areas had gravity introduced. The gradient area would tend to an isothermal condition but the isothermal area would have a gravity-induced gradient and the other bells and whistles of conduction, convection and radiative transfer.
I don’t know if this chicken and the egg scenario holds up but doesn’t it suggest that gravity [and therein pressure] is an [the] initiating factor?
I should have specified that the total energy of both systems was the same. diessoli’s post above referenced Gibbs’ calculation in 1928 that the entropy of an isothermal system was higher than for any system with the same energy with a temperature gradient.
That should be: …an isothermal heterogeneous system under the influence of a gravitational field was….
I apologise if this is going too far off topic, however, I think it’s still relevant considering this post involves convection and Arthur Smith has contributed to it (I assume to be the same Arthur Smith as the author of this paper: http://arxiv.org/abs/0802.4324).
The paper I mentioned claims to be a proof of the atmospheric greenhouse effect, I say claims as I believe it’s fundamentally wrong.
The error, I believe, is with Equation 10 on Page 2, which describes the change in energy at the surface of a planet in purely radiative terms for a planet without an atmosphere, or with an atmosphere that doesn’t absorb radiation to any significant degree. This is only valid in the case with no atmosphere, as an atmosphere (whether it can absorb radiation or not) would most likely be capable of other forms of heat transport (this is not explicitly defined one way or another in the paper). The Equation begins with:
E(dot) = E(absorbed) – E(emitted)
However, I believe it should be:
E(dot) = E(absorbed) – E(emitted) – E(atmosphere)
Where E(atmosphere) is 0 for a planet without an atmosphere, and is rather complicated for one with an atmosphere. From what I can tell, this (possible) error is carried throughout the paper.
I think it raises an interesting thought experiment though, involving two Earth-like planets which only differ in that one has an atmosphere that can absorb radiation, while the other can’t. If they started at an equilibrium temperature without a source of radiation (i.e. no Sun), then when a radiation source is introduced (the Sun):
a) Which would reach a new equilibrium temperature first?
b) Which would reach the higher equilibrium temperature?
c) What would be the difference between equilibrium temperatures?
The answer to the first question, I think, is that the planet with the atmosphere that can absorb radiation would reach a new equilibrium temperature first. However, the answers to the second and third questions I think are more complicated. In one case, the atmosphere can transfer energy via conduction/convection and radiation, but it can also lose energy to space. In the other, it can only transfer energy via conduction/convection, and it can only lose energy back to the surface.
I’m not sure if this has been discussed elsewhere, but I’d be interested to know if my interpretation is correct and if the corresponding equations still show greenhouse gases account for ~33 K warming above radiative equilibrium (without getting in to a long discussion about whether an average temperature is realistic!). I would have also thought a (relatively) simple model with appropriate simplifications could illustrate this.
kamilian:
It is the same Arthur Smith who contributed here and in the paper you cite. I’m sure Arthur would be delighted to answer you himself as well.
You might find these posts relevant: The Hoover Incident and the update (just written), Heat Transfer Basics and Non-Radiative Atmospheres.
I haven’t studied Arthur’s paper but have had a quick look. In simple terms, the energy transfer between the surface of the planet and the atmosphere has no actual impact on the energy balance of the planet.
You could also argue that the equation was incomplete because it didn’t cover the energy transfer between the surface and the core.
As there is no mechanism for the atmosphere or the core to lose energy to space both of these are not necessary in the “total climate” energy balance. (If 10^25J moves from the surface to the atmosphere it is still in “the climate system”. If 10^25J moves back to the surface it is still in “the climate system”).
What you may be thinking is that the dynamic energy transfer between the planet’s surface and atmosphere does affect the transient response of the planet. This is true of course and noted in The Hoover Incident.
I also noted in The Hoover Incident that the interaction with the atmosphere can affect the equilibrium response. This is because there will be more ice – therefore more solar radiation reflected – and also a change to cloud cover which also affects the albedo of the planet.
So any climate response which affects albedo will affect the equilibrium temperature.
Hi SOD,
Thanks for your response, however, I think you’ve missed the point I’m trying to raise.
I read some of the posts you linked to and at first thought they were completely wrong! However, the more I read, the more I realised what you were trying to say, and believe that you have incorporated a similar problem to the one I believe I’ve highlighted in the Arthur Smith paper, from the other direction (which I’ll explain further in a minute).
The issue that I mentioned in the paper is that Equation 10 effectively uses the total planet energy budget (E(absorbed) – E(emitted)) and applies it to the surface of the planet. If a planet has no atmosphere, then it’s valid, as the total energy budget will be equal to the surface energy budget. However, if a planet has an atmosphere then it’s no longer valid. The Trenberth energy budget illustrates this, as you’ve already mentioned in the “Heat Transfer Basics and Non Radiative Atmospheres” post, energy is transferred to the atmosphere by conduction/convection and evaporation. To ignore this and suggest that the energy budget of the surface of a planet with an atmosphere is only based in radiative transfer is, I believe, invalid.
The paper starts from scratch and attempts to prove that the greenhouse effect contributes ~33 K warming above the radiative equilibrium of ~255 K. Your posts effectively assume that this is correct, and if you remove the greenhouse effect from the atmosphere (by “hoovering” the gases that can absorb radiation emitted from the surface and remove the ability from water vapour) there’ll be a cooling effect (which there may or may not be, and the magnitude may or may not be ~33 K).
In the “Hoover Incident” post I don’t find any real reference to conduction/convection/evaporation, you’ve based the surface temperature change purely in radiative terms.
In the “Heat Transfer Basics and Non Radiative Atmospheres” post you suggest that ~102 W/m^2 of sensible heat is added to the ~150 W/m^2 (difference between 240 W/m^2 and 396 W/m^2) to give a surface cooling of ~252 W/m^2 at t = 0. I don’t believe this is correct as the sensible heat transfer didn’t change at t = 0 (i.e. it’s not an additional cooling effect), and again, the ~150 W/m^2 cooling assumes that the greenhouse effect caused the ~33 K above radiative equilibrium.
A big thanks to Leonard Weinstein and Arthur Smith for their contribution in writing this joint article/discussion.
What was most useful was everyone’s willingness to let their ideas be tested. That was why so many thought experiments were posed in the discussion that followed the earlier article.
These thought experiments allowed us to isolate what was causing the differences of opinion.
My thinking about some of the subject matter was quite confused before we started the original discussion. Because we all took the time I think I gained some clarity.
kamilian,
But it is an additional cooling effect. At the current steady state (K&T 1997), the surface sees 168 + 324 = 492 W/m2 of incoming radiation from the sun and the atmosphere but only radiates 390 W/m2. The rest, 102 W/m2, is lost by convection to the atmosphere. Without that heat transfer, the atmosphere would cool because it would be radiating more heat than it received.
The atmosphere balance is 350 W/m2 from the surface and 67 from sunlight for 417 W/m2 absorbed. Radiation up from the atmosphere and cloud tops is 195 W/m2 up and 324 W/m2 down for 519 W/m2 total and a difference of 102 W/m2, the amount supplied from the surface by convection.
If you suddenly removed the absorption and emission of radiation by the atmosphere (but maintained the same albedo), the surface would see 235 W/m2 of incoming radiation but, at t=0, would still be losing 390 W/m2 by radiation and 102 W/m2 by convection for a net difference of 257 W/m2. The surface would then cool and the atmosphere would warm until convection shut down, which would happen quickly.
DeWitt Payne,
I realised I made mistakes after posting it. However, the surface does not absorb 235 W/m^2 as there’s been no mention of changes in surface or cloud albedo. The surface would be expected to still absorb 168 W/m^2, while the system would be expected to absorb 235 W/m^2. As you said though, there will be a net reduction in energy lost from the atmosphere and an increase in energy lost from the surface.
Anyway, this is now going away from what I was trying to illustrate and I shouldn’t have replied to it in the first place!
In K&T97, the incoming solar flux is 342 W/m2, not 235 W/m2. Clouds, aerosols, etc. in the atmosphere reflect 77 W/m2 and the surface reflects 30 W/m2 leaving 235 W/m2 to be absorbed. The atmosphere absorbs 67 W/m2, mostly in the near IR by water vapor and the rest in the UV in the stratosphere, and the surface absorbs 168 W/m2. In the thought experiment, we make the atmosphere transparent but don’t change the total albedo. That means that the atmosphere cannot absorb that 67 W/m2 because it’s transparent. Therefore all that energy reaches the surface and the surface absorbs all 235 W/m2 because we have said in the definition of the experiment that total albedo doesn’t change.
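The K&T 1997 numbers quoted in this exchange can be checked directly; each of the three budgets (top of atmosphere, surface, atmosphere) should close exactly at steady state. The 40 W/m2 term is K&T's surface-to-space atmospheric window:

```python
# Arithmetic check of the K&T 1997 energy budget figures quoted above.

# Top of atmosphere
solar_in = 342
reflected = 77 + 30                 # clouds/aerosols + surface
absorbed = solar_in - reflected     # 235 W/m2
olr = 195 + 40                      # atmosphere/cloud tops + surface window
assert absorbed == olr == 235

# Surface: 168 solar + 324 back radiation in; 390 radiated,
# 102 convection + latent heat out
surface_in = 168 + 324
surface_out = 390 + 102
assert surface_in == surface_out == 492

# Atmosphere: 350 absorbed from surface + 67 solar + 102 convective
atm_in = 350 + 67 + 102
atm_out = 195 + 324
assert atm_in == atm_out == 519

print("All three K&T budgets close.")
```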
kamilian – Yes, that was my article. However, you left out something in your transcription of my eq. 10 – E(dot) is actually subscripted – E_planet(dot) – i.e. the rate of change in time of the total energy of the entire planet.
That means it includes the energy of the atmosphere and the core, oceans, whatever else, not just the surface. The point is the subsequent paragraph that describes why, in the long run, on average, E_planet(dot) should be zero. Do you have some reason to think it shouldn’t be?
In particular, energy flow from surface to atmosphere by convection would not change E_planet at all, it’s just a redistribution of energy among components of the planet. It doesn’t enter into that equation at all.
Hi Arthur,
Thanks for the response, to answer your question, no, I have no reason to think that E(dot)_planet should be non-zero.
[…] As air rises it cools via adiabatic expansion (see the lengthy Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion). […]
Bumping a bit of an old thread, but I came up with a question for the top-heated part of the problem. More specifically – shouldn’t we consider thermal diffusivity instead of conductivity when looking at how fast equilibrium is achieved in this case? (Thermal diffusivity should be quite high for air, and I imagine it should be quite similar for gases with particles of the same weight.)
[…] Update – New article – Convection, Venus, Thought Experiments and Tall Rooms Full of Gas – A Discussion […]
SOD, Leonard and Arthur: This subject has come out elsewhere in the blogosphere recently and has provoked what might be some interesting ideas. I didn’t follow your earlier discussion.
Figure 3.4 in Arthur’s section shows the standard reasoning for how pressure varies with height in a thin layer of atmosphere. We integrate this expression to show how pressure changes with altitude over longer distances. However, applying the molecular theory of gases, the pressure difference between the top and bottom surfaces of a thin layer MUST be the result of different velocities of the molecules moving up and down. The difference in velocity arises, of course, because kinetic energy is converted into potential energy in the molecules that are moving upward (cooling them, when we consider a large enough group). The opposite is true for the molecules moving downward.
In the standard discussion of the relationship between pressure and molecular motion (in a laboratory setting), we assume that molecular motion and pressure are the same in all directions. The molecules rising have the same kinetic energy as those falling. In a column of atmosphere at equilibrium, the same number of molecules move up and down past any particular height and we assume they carry (on the average) the same amount of kinetic energy. However, whenever there is a pressure gradient – a difference between the impulse conveyed upward and downward by molecular collisions – there must be an accompanying difference in the kinetic energy conveyed upward and downward. The standard derivation of the adiabatic lapse rate tells us how potential energy change (and PV work) with height is converted into a temperature change with height.
So the existence of a pressure gradient with height automatically implies the existence of a temperature gradient with height; both phenomena are produced by molecular motion.
Frank,
This isn’t the right thread. What you want is the exchange between Neal J. King and willb and others on the Paradigm Shifts in Convection and Water Vapor starting here:
https://scienceofdoom.com/2011/06/12/paradigm-shifts-in-convection-and-water-vapor/#comment-12688
DeWitt: Unfortunately I remembered this post. The title of the post you linked had little to do with their discussion, which I had forgotten. I’ve been reading the discussion with some difficulty. Did they ever reach a useful conclusion? Should I paste these comments over there?
Some of the ideas above came from considering a thought experiment that might be more illuminating than filling a tall room with gas.
Imagine a perfectly-insulated, opaque, horizontal cylinder with a 1 m2 cross-section, 20 km long, filled with 10^4 kg of ideal gas. These numbers are chosen to model the earth’s surface atmospheric pressure and the height of 99% of the atmosphere. We will place this cylinder 10 km above the surface of the earth and then rotate it into a vertical position on a frictionless pivot placed at the center of mass (so no work is done during rotation). The starting temperature of the gas will be the temperature at 10 km, which will then determine the pressure inside the cylinder. To avoid problems that might develop during rotation, we could postulate an instantaneous rotation or a series of removable barriers of negligible volume every meter. If we want to force what happens in response to rotation to be reversible, we could imagine that these barriers slowly move as the system responds before they are removed.
After rotating the cylinder to a vertical position, most of the gas is going to fall to the bottom of the cylinder and warm because of the kinetic energy it gains. The gas at the top will expand and cool due to reduced pressure (and we could also say that the gas at the bottom has warmed due to compression). The result should be a temperature and pressure gradient much like that present in our atmosphere, but without the involvement of all of the other factors that complicate our atmosphere: surface heating, convection, and radiation. If we fill the cylinder with a mixture of inert gases with the same density as our atmosphere (which don’t have the vibrational or rotational energy levels needed to emit or absorb radiation), we can even eliminate internal energy transfer by radiation. Notice that this temperature gradient will develop spontaneously, so simplistic assumptions that the 2LoT forbids spontaneous development of a temperature gradient shouldn’t apply here. There are equations that show how the entropy of a gas varies with temperature and pressure, but I’m not sure how to apply them to this situation. I don’t know if the gravitational field enters into such calculations.
When the cylinder is vertical, we calculate the pressure in the cylinder from the weight of the overlying gas. When the cylinder is horizontal, the weight of the overlying gas is irrelevant. That can’t be right: the same physical principles must apply in both situations. This led to the realization that any pressure gradient in the vertical position must arise from molecular motion, which is what determines pressure when the cylinder is horizontal.
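The cylinder's numbers can be sketched quickly – surface pressure from the weight of the gas, and the dry adiabatic gradient g/cp over the 20 km. The heat capacity is taken as dry air's, as an assumption; the inert-gas mixture in the thought experiment would have its own value:

```python
# 10^4 kg of gas over a 1 m^2 cross-section gives roughly Earth's surface
# pressure once the cylinder is vertical, and g/cp sets the expected
# adiabatic temperature difference over the full 20 km.

g = 9.8      # m/s^2
m = 1.0e4    # kg of gas per m^2 of column
cp = 1004.0  # J/(kg K), dry air (assumed)

p_bottom = m * g            # Pa; weight of the overlying gas per unit area
lapse = g / cp * 1000       # K/km, dry adiabatic gradient
dT_column = g / cp * 20e3   # K, top-to-bottom difference over 20 km

print(f"Bottom pressure: {p_bottom:.3g} Pa (~1 atm)")
print(f"Adiabatic gradient: {lapse:.1f} K/km, {dT_column:.0f} K over the column")
```

Whether the column actually settles to that gradient or to isothermal is, of course, exactly the point under dispute; the sketch only quantifies the adiabatic alternative.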
Frank,
You may also be interested in these posts: A Matter of Some Gravity and Perpetuum Mobile and the extensive comment threads at WUWT. They’re both about the idea that even with a transparent atmosphere and a planet with an isothermal surface, the atmosphere will have a lapse rate close to the dry adiabatic rate. The author, Willis Eschenbach, disagrees. I agree with Willis. Your column experiment is relevant because Willis argues (correctly, IMO) that if the equilibrium state isn’t isothermal, one could create a perpetual motion machine of some kind by using the temperature difference either between the top and bottom of the column or between the tops of two columns filled with gases with different atomic or molecular weights (helium and xenon, e.g.) that would have different lapse rates.
The problem is that one must rely on Second Law arguments as nobody seems to have come up with a proof based on statistical mechanics, including Boltzmann. Second Law arguments are unsatisfying or unconvincing to many.
The willb–Neal King discussion fizzled out, as I remember.
DeWitt: Thanks for the references. I read the Willb and Neal King discussion and King’s reference to Feynman Vol I, Chapter 40. I comment at SOD to learn (and shouldn’t preach) and I appreciate the help. I now know where I made my mistake. I was correct in recognizing that a pressure gradient must – at a molecular level – be produced by a gradient in the impulse being conveyed upwards and downwards (which was ignored by Willis, who started me thinking). However, there are two ways to produce a gradient in the impulse: 1) By different upward and downward molecular speeds (and therefore temperatures) as described by me above OR 2) By a gradient in the number of molecules colliding. Temperature is the average kinetic energy PER MOLECULE, so temperature won’t decrease if the pressure decrease is caused only by a decrease in the density of molecules. Feynman says:
P = nkT and hydrostatic equilibrium give: dP = d(nkT) = -nmg·dh
If T2 = T1 = T, then: kT·dn = -nmg·dh
and dn/dh = -(mg/kT)·n
and n(h) = n(0)·exp(-mgh/kT) (isothermal)
substituting P = nkT gives: P(h) = P(0)·exp(-mgh/kT) (isothermal)
The pressure and the density of molecules will both decrease at the same rate in an isothermal system, so different upward and downward molecular speeds are not required to create a pressure gradient and different speeds will not perturb an isothermal system. Willb objected that Feynman had assumed an isothermal system. Neal King said (and I agree) that Feynman has shown that the lapse rate isn’t driven away from isothermal by the kinetic theory of gases; the lapse rate is not constrained by the kinetic theory of gases. A sensible next step might be to start with a 1 km column containing 10,000 isothermal layers with an overall adiabatic profile and computationally discover whether such a system spontaneously evolves to isothermal by simple thermal diffusion or whether it gets stuck somewhere.
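The layered-column test proposed above can be sketched numerically. The sketch below models conduction only (with assumed air-like values for g, Cp, and thermal diffusivity), so it shows what pure thermal diffusion does to an initially adiabatic profile; it does not include any hypothetical gravitational counter-term, so it cannot by itself settle the gravito-thermal question:

```python
import numpy as np

# Start a column of isothermal layers on an overall adiabatic profile
# (1000 layers here for speed, rather than the 10,000 suggested) and let
# explicit finite-difference thermal diffusion act between neighbors,
# with insulated ends.

n_layers = 1000
height = 1000.0                      # column height, m
dz = height / n_layers
g, cp = 9.81, 1005.0                 # gravity; specific heat of air (assumed)
T = 300.0 - (g / cp) * np.linspace(0, height, n_layers)  # adiabatic start

alpha = 2e-5                         # thermal diffusivity, m^2/s (roughly air)
dt = 0.4 * dz**2 / alpha             # stable explicit time step
lam = alpha * dt / dz**2
for _ in range(20000):
    # diffusion update; the one-sided end updates keep the ends insulated
    # and conserve the total energy of the column exactly
    interior = lam * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] += lam * (T[1] - T[0])
    T[-1] += lam * (T[-2] - T[-1])
    T[1:-1] += interior

print(T.max() - T.min())             # spread shrinks from ~9.8 K toward 0
```

Under pure diffusion the profile necessarily flattens toward isothermal; the open question in this thread is whether gravity adds a countervailing term, which this sketch deliberately omits.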
As you point out, two insulated columns of gas with different heat capacities (and therefore two different adiabatic lapse rates (g/Cp)) in thermal contact with the ground would have different temperatures on top and this temperature difference could be used to drive perpetual motion. Therefore an adiabatic lapse rate won’t develop spontaneously. Unfortunately, Willis didn’t put this “elevator speech” in his opuses and he seemed to make other mistakes in his discussion.
Dewitt Payne: “The problem is that one must rely on Second Law arguments as nobody seems to have come up with a proof based on statistical mechanics, including Boltzmann. Second Law arguments are unsatisfying or unconvincing to many.”
Actually, statistical mechanics is precisely the basis for the Velasco et al and Román et al. papers I cited in that thread. The former of those critiqued a Coombes & Laue paper, also based on statistical mechanics, that purported to establish that such an isolated gas column in a gravitational field would be isothermal. Velasco et al. demonstrated a flaw in that conclusion, demonstrating the gas would exhibit a nonzero lapse rate, although one that approaches zero as the number of molecules approaches infinity.
One can readily test Velasco et al.’s quantitative result in a thought experiment in which there is but a single gas molecule in the “column.” It would be interesting to see the perpetual-motion-machine argument applied to the single-molecule gas column.
Since the centre of mass has fallen on rotation, PE must change to KE.
This results in an increase in internal energy of the gas as a whole.
The increase in temperature in a fixed volume will increase the pressure.
Restoring the cylinder to the horizontal will require external work equal to the loss of PE on falling.
On repeating the cycle, the temperature and pressure inside the cylinder increase with each cycle.
Since heat cannot enter or leave the cylinder the external work done is continually changed into internal energy of the gas.
Bryan: I placed hypothetical, temporary barriers inside the cylinder so nothing would happen as the cylinder was rotated. However, if you omitted the barriers and tipped the cylinder just slightly, the weight of the falling gas might drive rotation somewhat like a pendulum. Momentum could drive the cylinder past vertical (with a temperature gradient) and restore nearly isothermal conditions when the system was near horizontal. These thoughts didn’t seem to be taking me anywhere useful.
I have been fascinated by the arguments in this post and the previous 2 concerning Venus. Leonard has 90% convinced me that convective mixing by differential solar heating directly in the atmosphere pumps heat down to the surface. I am not yet convinced that an isolated tall room would maintain an adiabatic lapse rate indefinitely. However this comment is not trying to argue one way or the other. Instead, I propose a possible way that all of these hypotheses could actually be tested experimentally in a laboratory.
Gas centrifuges (such as those used for uranium isotope separation) can spin at well over 1000 revs/sec, simulating huge gravitational fields. For a gas chamber with radius R and an inner bore of radius ‘a’ spinning at an angular velocity Omega, the effective gravitational acceleration at a distance z inwards from the outer rim is g(z) = Omega^2·(R - z). This gives a lapse rate of Omega^2·(R - z)/Cp (see diagram).
The outer rim of the centrifuge is held in a heat bath at some temperature T0 and the inner core is evacuated. An IR sensor is placed in the core of the centrifuge, and thermocouples are placed at distances of, say, 1 cm inwards from the rim. The gas inside the centrifuge can be varied as needed: air, CO2, argon. Integrating the adiabatic lapse rate gives a temperature distribution of
T(z) = T0 + (Omega^2/Cp)·(z^2/2 - R·z)
Taking a centrifuge with an outer radius of 10 cm and an inner radius of 1 cm spinning at 100 rev/s, the predicted temperature gradient is shown here, assuming the centrifuge is surrounded by a heat bath held at 15°C and the inner bore is evacuated.
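The predicted profile is easy to evaluate for the example geometry. A minimal sketch, assuming air with Cp of about 1005 J/(kg K):

```python
import math

# Illustrative evaluation of the predicted adiabatic profile
#   T(z) = T0 + (Omega^2 / Cp) * (z^2/2 - R*z)
# for the example centrifuge described above.

omega = 2 * math.pi * 100      # 100 rev/s converted to rad/s
cp = 1005.0                    # specific heat of air, J/(kg K) (assumed)
R = 0.10                       # outer radius, m
T0 = 288.15                    # rim temperature (15 C heat bath), K

def T(z):
    """Temperature at distance z inward from the rim, if adiabatic."""
    return T0 + omega**2 / cp * (z**2 / 2 - R * z)

dT = T(0.09) - T0              # at the inner bore, 9 cm in from the rim
print(dT)                      # about -1.9 K if an adiabatic profile developed
```

If the adiabatic profile developed, the bore would run almost 2 K cooler than the rim; if the equilibrium state is isothermal, the thermocouples would read no gradient, so the experiment would discriminate cleanly between the two hypotheses.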
By varying gas mixtures, surface temperatures, gas pressures, and external gas heating, these scenarios could probably be tested directly. An IR detector in the core could also measure the outgoing IR from greenhouse gases.
I am not aware of any such experiments being done previously – so please correct me if I am mistaken. The cost would not be excessive and likely within the budget of a university physics department.
There has been much speculation about whether a temperature gradient will spontaneously develop in an isolated column of gas. You can experiment with an online molecular dynamics simulation of a 2-D box with and without gravity here:
http://physics.weber.edu/schroeder/md/InteractiveMD.html
The atoms are colored according to their kinetic energy. In the presence of gravity, a density gradient develops and one can see that the frequency of collisions with the wall (i.e. pressure) is greater near the bottom than the top. When the Temperature is about 3 and Gravity is 0.1, there is a nice distribution of colors. 300 Atoms in the largest box (Volume = 10000) is a good place to start. Time step 0.003. 25 Steps per frame. As best I can tell, a temperature gradient doesn’t spontaneously develop, but someone will have to abstract quantitative information (density, kinetic energy, pressure) from the simulation to be convincing.
The temperature is raised by clicking on the faster and slower buttons (or more gradually with the -1% and +1% buttons). When you turn on or increase the gravity, molecules begin falling – which increases their kinetic energy and temperature. Starting with zero gravity and E = 1.2, the system will equilibrate to a useful distribution of speeds when gravity is turned up to 0.1. If you increase the temperature using “Faster”, the temperature goes up immediately and then falls off as some kinetic energy is converted to potential energy. If you start with gravity = 1 and E = 5, the molecules will expand into empty space with a useful spectrum of speeds.
One can watch one or a few atoms. You can also add bonds.
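What the applet displays can also be compared against the canonical-ensemble prediction directly. The sketch below samples the Boltzmann distribution in reduced units loosely chosen to resemble the applet's settings (the values of kT, m, and g here are illustrative, not the applet's internals); in this distribution heights and velocities are independent by construction, so it illustrates, rather than dynamically proves, that density falls off with altitude while mean kinetic energy does not:

```python
import numpy as np

# Monte Carlo sample of a 2-D ideal gas in a gravitational field, in
# contact with a heat bath: heights follow the barometric distribution
# exp(-m*g*h / kT) and each velocity component is Gaussian, independent
# of height. Density falls with altitude; mean KE per molecule does not.

rng = np.random.default_rng(0)
n = 200_000
kT, m, g = 1.0, 1.0, 0.1       # reduced units (illustrative choices)

h = rng.exponential(kT / (m * g), n)          # barometric height distribution
vx = rng.normal(0, np.sqrt(kT / m), n)        # Maxwell-Boltzmann velocities
vy = rng.normal(0, np.sqrt(kT / m), n)
ke = 0.5 * m * (vx**2 + vy**2)                # mean should be kT in 2-D

low, high = h < 2.0, h > 40.0
print(low.sum() / high.sum())                 # far more molecules low than high
print(ke[low].mean(), ke[high].mean())        # but mean KE ~ kT in both bands
```

This is the same qualitative picture Feynman's derivation gives: the pressure gradient comes entirely from the density gradient, not from a gradient in molecular speeds.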
The correct analysis was provided some years ago by three Spanish physics teachers in a paper Velasco et al, which suffered from (1) being introduced into the blogosphere through a site known for speculative theories and (2) being written by non-native speakers of English who assumed that their readers were more versed in statistical mechanics than the typical visitor to sites such as this.
My (ultimately unsuccessful) attempt to get an explanation of that paper posted at Watts Up With That is described here: http://wattsupwiththat.com/2014/08/18/monday-mirthiness-spot-the-troll/#comment-1711550.
In short, however, that paper and the one on which it depends demonstrate analytically, without approximation, that gravity would cause the equilibrium lapse rate to differ from zero, but only immeasurably. That is, the lapse rate would be zero for all practical purposes, but the reasoning that DeWitt Payne, Robert Brown, and others employed to prove that proposition was faulty.
Joe: I happen to agree with the reasoning of the same people you cite, but a post from Clive Best cites two scientific publications suggesting that there could be some real scientific controversy behind the hot air in the blogosphere. Further experiments may be appropriate.
http://clivebest.com/blog/?p=4101
Chemists routinely use molecular dynamics simulations to determine the (thermodynamic) behavior of molecules that is hard to study in the laboratory. In an earlier discussion with Pekka, I noted that molecular dynamics experiments could tell us what the laws of physics actually predict about the behavior of a gas in a gravitational field. I later found the above website, which is relatively new.
Having done some simple molecular dynamics, I personally find it fascinating to “turn on” gravity, watch the molecules “fall” and heat up at the bottom of the container, and then see that kinetic energy distributed upward. Aside from the perpetual motion argument, I find this demonstration far more illuminating than all of the words (including my own) written about this subject.
In this post, Leonard imagines pouring argon into a “perfectly thermally insulated fully enclosed room 1 km x 1 km x 100 km tall, located on the surface of Venus. The walls (and bottom and top) are assumed to have negligible heat capacity.” He can do something similar at this site! By selecting Figure 3d under presets and suddenly turning gravity up to 4, he can drop a chunk of frozen argon into something equivalent to his room and watch what happens!
I’m not a scientist, so the Velasco et al. paper (or, rather, the Roman et al. paper on which it was based) was a tough slog for me, but I found it compelling because it started from the first principles of statistical mechanics. For instance, it didn’t even assume the Boltzmann distribution. Consequently, unless there was some calculus error that I missed (and that’s entirely possible, since they used an integration identity I didn’t verify myself), it’s hard to see how they could have gone wrong.
I’ll mention, though, that Clive Best’s proposed experiment should theoretically (all of these effects are actually too small to measure experimentally) come to a conclusion different from what the experiments proposed above would, since his setup is immersed in a heat bath, so it would exhibit the set of possible microstates that I’m told is called a “canonical ensemble” rather than the “microcanonical ensemble” of microstates that Velasco et al. say a system not so immersed would exhibit. Statistically, the latter arrangement would exhibit an unimaginably small but nonzero lapse rate, whereas the former’s rate actually would be zero.
As I said, I found the statistical mechanics needed to reach the ultimate answer to be daunting–but more convincing than the arguments set forth above or at WUWT, which in my view are based on a faulty sense of thermodynamics’ range of applicability. None of the PhDs to whose attention I brought Velasco et al. betrayed an ability to make anything more than conclusory arguments against Velasco et al.’s reasoning.
I’m under no illusion that this comment will get anyone to pay attention to Velasco et al. and Roman et al. But, if you really have a burning desire to find the real answer–and you have the time–I’m convinced that those papers are where it lies.
I don’t know what the paper of Velasco et al. claims more specifically (I haven’t seen a full reference to it). Thus I cannot be sure of its correctness.
From the second law it follows that a fully isolated volume of gas is exactly isothermal in thermal equilibrium also in gravitational field, but being exactly isothermal does not mean that the average kinetic energy of the molecules must be exactly the same, if the molecules are interacting. Thus the molecular interaction might explain the claimed result of Velasco et al.
The Roman et al. paper on which Velasco et al. is based is here: https://tallbloke.wordpress.com/2012/01/04/the-loschmidt-gravito-thermal-effect-old-controversy-new-relevance/comment-page-1/#comment-13301.
The Velasco et al. paper itself is here: https://tallbloke.files.wordpress.com/2012/01/s-velasco.pdf.
“From the second law it follows that a fully isolated volume of gas is exactly isothermal in thermal equilibrium also in gravitational field.”
This is the kind of statement I find conclusory. The Roman et al. paper shows that it’s correct only in the limit of an infinite number of particles. Of course, if the number of particles is on the order of 10^23, that’s pretty darn close to infinite in this context – but not quite there.
After you gave a second author’s name, I succeeded in finding both of those papers by Román, Velasco and White, from 1995 and 1996. Both papers confirm that they agree fully on the validity of the standard conclusion of isothermality in the thermodynamic limit of an infinite number of particles. Whether there’s a correction due to the finite number of particles is really irrelevant for virtually all practical cases. There might perhaps be some very special cases of microscopic dimensions, where the number of particles is so small that the number makes a difference, but it seems impossible that a case will ever be brought up where at the same time gravity plays a role and the number of particles is so small.
In addition it’s not possible to create microcanonical ensembles in practice. Thus even that issue is of interest only as a discussion of some fundamentals of statistical mechanics. (There are also other issues of that type that have not been fully resolved.)
Both papers agree with Maxwell and Boltzmann and reject the ideas of Loschmidt, which are discussed in that thread.
Nor is there any contradiction between these papers and the argument that you appear not to like.
Mr. Pirilä:
Yes, yes, yes, the difference is indeed too small to be measured, as I’ve said repeatedly for a couple of years now.
If you think about it, though, your comment about not being able to make a perfect microcanonical-ensemble-exhibiting system doesn’t prove much; you can’t make a perfect canonical-ensemble-exhibiting system, either–and that’s the only way you get a value exactly equal to zero.
The real point is that it is only through arguments of Velasco et al.’s type, not the mere plausibility arguments made elsewhere, that one can rigorously arrive at the answer–which is indeed that the lapse rate is essentially zero. The reason I bring it up is that it shows the arguments of the type given here to be bad physics: http://wattsupwiththat.com/2012/01/19/perpetuum-mobile/#comment-873227. Specifically, Velasco et al. show that there is indeed an (again, minuscule) lapse rate but that, contrary to what Dr. Brown contends, perpetual motion does not result. Therefore, although his conclusion is virtually correct, his argument in support is not.
This shows why we laymen need to take what scientists say with a grain of salt.
Joe Born,
You are right.
All thermodynamic ensembles are idealizations; they cannot be realized in practice. They are useful concepts for deriving results that are as accurate as thermodynamics’ description of the real world, since classical thermodynamics is also an idealization, and actually an ad hoc idealization that was originally justified only by its agreement with observations at the level the measurements could determine.
The same is true of most other detailed theories and calculations that physicists make.
In this case the issue is not the behavior of the real system. Román, White, and Velasco studied properties of the idealization, not properties of the real world. It seems clear from the paper that they had no problems with the standard understanding of the properties of the real world. On that they agreed fully with other physicists, who agree with Maxwell and Boltzmann, and reject Loschmidt on this issue.
As a side remark, I can tell that my notes from 2011 (the notes are not dated, but that’s when I wrote them and made them available) contain essentially the same arguments as the 1996 paper of Velasco et al. presents, referring to Coombes and Laue, but formulated mathematically somewhat differently.
I believe that I am in agreement with you as far as all the paragraphs of your last comment–with the possible exception of the last. In particular, yes, Velasco et al. agree that in the limit Maxwell-Boltzmann applies and that Loschmidt et al. have it wrong.
But my point, as it has been for some years, is that it is only through statistical-mechanics reasoning such as that of Velasco et al. that we can creditably arrive at the correct answer. Most explanations that I have seen the scientists give arrive at essentially the right answer, but for the wrong reasons.
Now, since I’m a layman (who is at the moment somewhat pressed for time), I can’t say that I have completely assessed your paper referred to in your last missive’s final paragraph. I will merely mention that I need to look more carefully at the following excerpt:
“According to the kinetic gas theory every component of the velocity is distributed in accordance to the Maxwell-Boltzmann distribution.”
My current suspicion is that such a result follows only in the limit of an infinite number of particles. If you consider the M-B distribution’s derivation, I think you will see it assumes an infinite-heat-capacity heat bath, which in reality is only approached in the limit. My point is that the alternative arguments I’ve seen for isothermality fail in the absence of such a heat bath.
I do not disagree with the proposition that as a practical matter the lapse rate would be zero. My point is, and always has been, that the arguments I’ve seen in the blogosphere for that proposition are flawed.
Dr. Pirilä:
Having returned home and taken the time to read your 2011 notes more carefully, I can’t say it’s clear to me that they “contain essentially same arguments as the 1996 paper of Velasco et al.” To me it appears that instead you do the same as Coombes and Laue, i.e., show that a uniform-temperature Maxwell-Boltzmann distribution is consistent with particles’ losing speed as they rise. What Velasco et al. show instead is that such a distribution is correct only in the limit: they show that a minuscule temperature gradient instead prevails if the number of particles is finite. Your notes don’t seem to say that.
Joe Born,
That’s what I tried to say when I wrote:
I didn’t mean that my argument covers everything they say, but that they did include in their text also discussion similar to mine, and that this discussion was based on Coombes and Laue.
Two very different issues come up in the papers of Velasco, Román, and White:
1) The observable physical phenomena that can be derived both using the microcanonical and the canonical ensemble.
2) How the microcanonical ensemble leads to deviations from the results based on the canonical ensemble, when considered with full mathematical accuracy.
As I already wrote, there are further complex issues that come up, when statistical mechanics is considered with full mathematical rigor. Such questions continue to be discussed in scientific literature, but these issues are of no significance for typical applications of statistical mechanics or thermodynamics.
What they have done may have some pedagogical value, as it allows one to determine how rapidly the results based on the microcanonical ensemble approach the asymptotic behavior as the number of particles increases.
What’s important for physical analysis of the atmosphere is that the concept of local thermodynamic equilibrium can be used while making no significant error. That requires that we can divide the atmosphere into subvolumes that are at the same time large enough to assure that each volume behaves like a member of a grand canonical ensemble following the laws of thermodynamics, and small enough for pressure and temperature to be very close to constant over its dimensions. One more requirement for many considerations is that the dimensions of the volume are large relative to the mean free path of the molecules. In most relevant considerations these requirements are satisfied with safe margins, but problems may arise near the surfaces of liquids and solids, and at extremely high altitudes (far above the stratosphere).
Exactly. The Maxwell-Boltzmann distribution is approached so rapidly that Velasco et al.’s refinement of Coombes and Laue could never make any difference experimentally. The only reason I have harped on Velasco et al. is that it shows that, although the conclusion of isothermality is essentially correct, proofs of the type introduced here http://wattsupwiththat.com/2012/01/19/perpetuum-mobile/#comment-873227 (under “fourth”) and expanded upon here: http://wattsupwiththat.com/2012/01/24/refutation-of-stable-thermal-equilibrium-lapse-rates/ are based on bad physics.
Such “proofs” essentially say that if any mean-molecular-velocity gradient did occur at equilibrium in a thermally isolated gas column, and if such gradients were different for two isolated gas columns, then bringing those gas columns into thermal communication would result in perpetual motion. Hence their conclusion that no gradient could exist at equilibrium. So that “proof” purports to rule out any nonzero gradient at all, no matter how minuscule.
That reasoning is faulty because an inference to be drawn from Velasco et al. is that bringing the two erstwhile-isolated gas columns into communication would instead result in the thus-created composite body’s assuming a common, lower gradient and thus avoiding the perpetual motion that the proponents of that type of “proof” would attribute to it.
I don’t agree that the arguments are bad physics.
The idealization used in the argument is almost perfect, and the result is exact in this idealization. That’s good physics by all standards of physics.
Some mathematicians who study the mathematical foundations of physics, and perhaps some philosophers of physics, worry about this kind of detail, but it does not follow from the persistence of such activities that there is anything wrong with the physics.
The B-E Law contemplates two thermally isolated systems that are initially at equilibrium and for the sake of argument exhibit different lapse rates in the presence of gravity. Consequently, at least one of them has a non-zero thermal-equilibrium lapse rate. If those erstwhile-isolated systems are then thermally coupled at two different altitudes, the B-E Law says that net heat flow between them would last forever (or, apparently, at least until their temperatures decay to absolute zero if they are coupled through heat engines) as the two systems attempt to maintain their pre-coupling lapse rates.
But I know of no law of physics that says this would be the result. What law of physics says the net heat flow would not be zero instead or at least decay to zero as the now-coupled systems approach a common, composite-system equilibrium lapse rate? Unless there is one, Dr. Brown’s “proof” is invalid. This was the point of my initial comment: one would be better advised to rely on Velasco et al. for proof of (near) isothermality than to adopt Dr. Brown’s reasoning.
Well, I guess we’ll have to agree to disagree. To a layman, when a scientist says a particular situation will result in perpetual motion when in fact it won’t, that’s bad physics. Perhaps you scientists are privy to ineffables we layman are not granted to grasp.
But I don’t think so.
The issue is not whether a situation will result in perpetual motion or not.
It is whether certain concepts are well defined, and whether the velocity distribution of molecules follows exactly the Maxwell-Boltzmann distribution. In the case discussed by Velasco et al., defining temperatures at all levels with the precision required by that approach is one problem, and the velocity distribution is (very slightly) different from Maxwell-Boltzmann.
That the velocity distribution differs from Maxwell-Boltzmann can be seen by considering systems that have only two or a few molecules, even without going through their derivations. Two molecules divide their combined kinetic energy differently from the Maxwell-Boltzmann distribution, with or without gravity. What they derive is just an extension of that observation.
It’s analogous to the difference between binomial distribution and normal distribution.
It was probably obscure of me to refer to perpetual motion. But here is what I do think is bad physics:
“Two different columns of gas with different lapse rates. Place them in good thermal contact at the bottom, so that the bottoms remain at the same temperature. They must therefore be at different temperatures at the top. Run a heat engine between the two reservoirs at the top and it will run forever, because as fast as heat is transferred from one column to another, (warming the top) it warms the bottom of that column by an identical amount, causing heat to be transferred at the bottom to both cool the column back to its original temperature profile and re-warm the bottom of the other column. The heat simply circulates indefinitely, doing work as it does, until the gas in both columns approaches absolute zero in temperature, converting all of their mutual heat content into work.”
I think that the two columns would assume a new, lower lapse rate and not go to absolute zero.
The assumption that’s contradicted is that the equilibrium lapse rates differ. One of the columns and the heat engine can also be replaced by a thermoelectric pair that generates electric power when a temperature difference is maintained between the two junctions. Thus the only alternative consistent with the second law is that the equilibrium lapse rate is zero.
Velasco et al. say otherwise. If you check out their Eq’n (8), you’ll see that two different-size, same-average-molecular-kinetic-energy isolated columns will indeed have different lapse rates, because their respective values of E differ. And thereafter bringing the erstwhile-isolated columns into thermal communication will result in their assuming a new, lower, common lapse rate dictated by the sum of the two E’s. This can be seen by carefully studying Roman et al.’s math.
Joe,
You should take into account that what breaks down in their case is the connection between temperature and the average molecular kinetic energy. Temperature can be defined only for a local thermodynamic equilibrium, but their results deviate from the more standard ones only as much as they lead to deviation from local thermodynamic equilibrium.
If we have only one particle in the box, the sum of the kinetic energy and the potential energy must be constant. Thus the velocity depends uniquely on the altitude. With two particles the result for the relation between the average kinetic energy and altitude is somewhat weaker, but still strong, and so on. This has, however, absolutely no relevance for thermodynamics, and almost no relevance for the statistical mechanics of realistic systems.
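The one-particle case discussed above is worth making concrete, since it is the extreme version of the finite-N effect under debate. A minimal sketch, with illustrative values for m, g, and the fixed total energy E:

```python
# A one-particle "gas column": with total energy E fixed (microcanonical),
# kinetic energy at height h is exactly E - m*g*h, so kinetic energy
# strictly decreases with altitude. This "lapse rate" is maximal for N = 1
# and, per Velasco et al., washes out as the number of particles grows.

m, g, E = 1.0, 9.81, 100.0     # illustrative values (SI-like units)

def ke_at(h):
    """Kinetic energy of the single particle when it is at height h."""
    return E - m * g * h

h_max = E / (m * g)            # turning point: all energy is potential
print(ke_at(0.0))              # full energy E is kinetic at the floor
print(ke_at(h_max))            # zero kinetic energy at the top of flight
```

With two or more particles the relation between kinetic energy and height becomes statistical rather than deterministic, which is the sense in which the constraint weakens as N grows.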
I’m sorry, but I have no idea what you mean. I’ve told you why coupling two erstwhile-isolated columns of different lapse rates wouldn’t cause their temperatures to fall to zero, as Dr. Brown contended and you apparently agreed. Your response is the vague, conclusory statement that something (what, I’m not sure) has almost no relevance to statistical mechanics. That statement is too impressionistic to convey any information. But Velasco et al.’s paper really is statistical mechanics, unlike the blog arguments I’ve otherwise encountered. So saying it has almost no relevance is just wrong. And talking about one or two particles also is a non-starter. That’s not enough particles? Tell me how many you’d like. The gradient applies to any number you pick.
Velasco et al. say that (1) a thermally isolated column of monatomic particles will have a gradient in average kinetic energy and (2) the gradient it exhibits will depend on the column’s total energy. Therefore two such columns can have different gradients.
Question 1: Do you agree with Velasco et al. on this?
Question 2: If so, take two semi-infinite columns of identical-molecular-mass monatomic gases that have the same average energy per molecule, but one has only half the particles the other has, so their average-kinetic-energy gradients differ in accordance with Velasco et al.’s Eq’n (8). What do you think happens if those two columns are then brought into thermal communication?
Question 3: If you think they would drive themselves to zero kinetic energy, as Dr. Brown contended, what mechanism would cause that to happen, and how would it differ from the result that would follow if the columns’ numbers of particles were instead the same?
Let me give you a hint. Given that both columns have the same average energy per unit molecule (but, having different numbers of molecules, different total energies), what, over a long time, would the average rate of energy transfer per unit time between the two columns be after they are brought into thermal communication?
The argument of Brown and others requires that the columns contain different molecules. It also implies that the argument being falsified says the equilibrium lapse rate would be different for different molecules. Their argument cannot be made without the above assumptions. Therefore I added the alternative of using thermoelectric pairs, which are obviously not affected significantly by gravitation.
Laws of thermodynamics apply only in the thermodynamic limit. The argument of Brown is a thermodynamic argument. Velasco et al agree on the conclusion in the thermodynamic limit.
The difference in the gradient of the average kinetic energy that Velasco et al. derive cannot, even in principle, be used to drive a heat engine. Thermally coupling two otherwise isolated columns at bottom and top probably leads to the same difference in average kinetic energy between top and bottom as putting all the gas in a single column would, because the combination would again be a microcanonical system. I have not studied this question carefully enough to be sure of that conclusion.
Microcanonical systems have a fixed energy; as a whole they do not have any temperature. Small subvolumes within a large microcanonical volume of gas form ensembles that are very close to grand canonical ensembles, but not exactly so. The minuscule deviations from grand canonical ensembles are of the same order of magnitude as the dependence of the average kinetic energy on height. For this reason temperature is not perfectly defined for any such subvolume; again, the imperfection is of the same order of magnitude.
The results of Velasco et al. do not in the least contradict the argument of Brown. Their results apply to a different problem. Canonical and grand canonical ensembles are better suited to systems that interact with their surroundings. What happens inside the totally isolated “black box” inherent in the definition of the microcanonical ensemble is of little practical interest.
For anyone who happens upon this page, the preceding colloquy is a good example of why many of us laymen feel that science is too important to be left to scientists. A lot of them can parrot statements they’ve learned in school that are true if understood properly, but they use them inappositely. The truth is, they often don’t understand what they think they know.
Dr. Pirilä hides behind the contention that an assembly exhibiting a microstate in a microcanonical ensemble does not have a temperature. Although there is a sense in which that can be argued to be true (it depends on definitional differences I won’t go into here), that sense is irrelevant to Dr. Brown’s thesis, in which the lapse rate he was talking about was clearly a gradient in mean translational kinetic energy, because he had elsewhere explicitly said his definition of temperature was mean molecular kinetic energy (I believe he assumed monatomic molecules). Velasco et al. clearly say that this quantity exists and has a value at equilibrium that is nonzero in a gravitational field, and Dr. Brown clearly says that, if such a nonzero quantity were to prevail (and he explicitly said it didn’t matter how small it was), coupling two columns would so drive a heat engine as to cool the columns to absolute zero. Dr. Pirilä rambles on with a lot of irrelevancies and then says Velasco et al.’s conclusions are not inconsistent with Dr. Brown’s “proof.”
I’m sure there are indeed scientists somewhere who can grasp the concepts well enough to see this isn’t true, but I have so far encountered none in the blogosphere. So I am very cautious about accepting a proposition merely because it’s what scientists say. That’s too bad, because it requires laymen to spend more time investigating this stuff than we’d have to if we could rely on scientists.
Joe,
You’re completely missing the point. If the kinetic energy distribution does not follow the Maxwell-Boltzmann distribution exactly, then the average kinetic energy is NO LONGER EXACTLY EQUIVALENT TO TEMPERATURE. If you could put perfect thermocouples at the top and bottom of the column, they would have exactly the same potential, i.e. you couldn’t extract energy from the system. Nothing in the papers you cite disagrees with this. There may be a minuscule kinetic energy gradient. There is no temperature gradient.
DeWitt,
The more essential point is that you cannot couple anything to a microcanonical system without destroying its microcanonical nature. It is possible, in principle, to couple two microcanonical systems together to form a larger microcanonical system. But coupling the system in any way to the outside world, as any heat engine or thermoelectric couple does by definition, changes what was previously microcanonical to canonical and removes the whole effect Joe Born has been discussing.
An idealized heat engine discussed in terms of statistical mechanics would be disturbed by external fluctuations and would operate alternately as a heat engine and a heat pump on the microscopic scale of the thermal fluctuations.
A totally isolated column is just that: it can never influence its surroundings without losing its nature as totally isolated. Such columns cannot be observed at the level required to see that difference in average kinetic energy without causing much larger changes to the system, and thus without losing the signal.
Pekka,
So one shouldn’t even talk about temperature in a micro-canonical system? One can’t measure temperature without some connection to the outside world. If you can’t measure something, it might as well not exist. That would also imply that the concept of a temperature lapse rate in a micro-canonical system is meaningless.
DeWitt,
That’s right, at least when the required precision is considered.
A microcanonical ensemble as a whole has a fixed energy, not a distribution of energies corresponding to a temperature, so the definition of temperature does not apply to it. Small parts of a large isolated (microcanonical) system may have properties close to those of a (grand) canonical ensemble and thus have temperatures, but that is only approximate and fails totally when great accuracy is required, as in this argument.
The older paper of Román et al. presents various average quantities as functions of height for systems of a few particles. From those curves we can see how the average kinetic energy decreases with height, and how the density of the gas does not fall as fast as the standard barometric formula says. Thus we have more molecules of lower energy at the top than we would have in the case of a canonical ensemble. This explains how the combined effect does not correspond to a lower temperature, to the extent a temperature can be considered at all.
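[An aside for readers: the density effect Pekka describes is dramatic in the extreme one-molecule case, which is easy to work out directly. A single bouncing molecule moves slowest near its ballistic turning point, so its time-averaged density, proportional to 1/v(z), actually rises toward the top instead of falling off barometrically. The sketch below compares the two profiles; the comparison scale kT = E/2 is an arbitrary illustrative choice, not a matched temperature.]

```python
import numpy as np

m, g, E = 1.0, 9.8, 10.0
zmax = E / (m * g)                     # ballistic ceiling of the one-molecule "column"
z = np.linspace(0.0, 0.99 * zmax, 500)

# Time spent per unit height is proportional to 1/v(z); the molecule is
# slowest near the turning point, so this density RISES with height.
rho_micro = 1.0 / np.sqrt(E - m * g * z)

# Barometric (canonical) profile for comparison, with an arbitrary kT scale.
kT = E / 2
rho_baro = np.exp(-m * g * z / kT)

print(rho_micro[0], rho_micro[-1])     # increases with height
print(rho_baro[0], rho_baro[-1])       # decreases with height
```

With many particles sharing the energy, the curves in Román et al. lie between these extremes, which is consistent with Pekka’s reading of them.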
Anyway, all these effects are really, really tiny: comparable to the influence of a single molecule. Any interaction with anything external disturbs the system by that amount or much more. The whole consideration is purely academic, as the walls of the column must also be totally non-interacting for the calculation to be valid. Not a single molecule outside the system under consideration is allowed to exchange energy with it.
In some sense this whole discussion is irrelevant, but it’s actually nice to be forced to think about this kind of basics every now and then.
Pekka,
Precisely my thought as well.
Actually, the point is whether Dr. Brown’s proof is valid. In that context, the thermodynamic definition of temperature makes no sense, and the statistical-mechanical definition is at least problematic. But the gas-law definition, i.e., mean molecular translational kinetic energy, is consistent with the proof’s opening assumption: a non-zero lapse rate at equilibrium. Therefore, since Dr. Payne persists in straddling the thermodynamic and statistical-mechanical definitions, I believe it is he who missed the point.
Unfortunately, this site rejected my attempt to post a comment that explained this in greater detail. So I’ve been forced to leave the discussion maddeningly incomplete.
Joe Born,
The ideal gas law does not apply to a microcanonical system.
The fundamental definition of temperature is that when no heat flows between two connected objects, they are at the same temperature. A microcanonical system, as Pekka pointed out, cannot be connected to anything else and remain a microcanonical system. There is no temperature lapse rate in a microcanonical system because it doesn’t have a defined temperature, only a total energy.
DeWitt Payne: “The fundamental definition of temperature is that when no heat flows between two connected objects, they are at the same temperature.”
True, that is one of the definitions of temperature, namely, the thermodynamic one. If Dr. Brown had been using that definition, though, his proof would have been as follows. The isolated gas column is at equilibrium. By definition, that means no net heat flows. The temperature definition I choose to use is the thermodynamic one, in accordance with which temperature is equal if no net heat flows. Equal temperature means zero temperature gradient, i.e., zero lapse rate. Q.E.D.
That would have been a valid proof (of no thermodynamic-definition lapse rate at equilibrium). But his proof instead relied on what I’ve dubbed the “Brown-Eschenbach Law of Lapse-Rate Conservation” (the “B-E Law”). So his proof necessarily used the gas-law, or kinetic-theory, definition.
DeWitt Payne: “There is no temperature lapse rate in a microcanonical system because it doesn’t have a defined temperature, only a total energy.”
Here Dr. Payne implicitly employs the statistical-mechanical definition of temperature, which equals mean molecular kinetic energy exactly only for the canonical ensemble, not the microcanonical ensemble. (I might note in passing that no real-world system is characterized by the canonical ensemble any more than by the microcanonical ensemble.)
But if Dr. Brown had been using that definition, his proof would have been the following. Take a thermally isolated gas column in a gravitational field. Its microstates constitute a microcanonical ensemble. But, according to the statistical-mechanical definition of temperature, such a system has no defined temperature. Therefore, that column has no lapse rate—zero or non-zero—if you use the statistical-mechanical definition of temperature. Q.E.D: it has no non-zero lapse rate.
That would have been a valid proof of no non-zero statistical-mechanical-temperature-definition lapse rate.
But that’s not what Dr. Brown did. What he did was rely on the B-E Law, of which I am aware of no proof.
My apologies for a duplicate post, but I previously entered this in the wrong place:
The B-E Law contemplates two thermally isolated systems that are initially at equilibrium and, for the sake of argument, exhibit different lapse rates in the presence of gravity. Consequently, at least one of them has a non-zero thermal-equilibrium lapse rate. If those erstwhile-isolated systems are then thermally coupled at two different altitudes, the B-E Law says that net heat flow between them would last forever (or, apparently, at least until their temperatures decay to absolute zero if they are coupled through heat engines) as the two systems attempt to maintain their pre-coupling lapse rates.
But I know of no law of physics that says this would be the result. What law of physics says the net heat flow would not be zero instead or at least decay to zero as the now-coupled systems approach a common, composite-system equilibrium lapse rate? Unless there is one, Dr. Brown’s “proof” is invalid. This was the point of my initial comment: one would be better advised to rely on Velasco et al. for proof of (near) isothermality than to adopt Dr. Brown’s reasoning.
DeWitt Payne: “A microcanonical system, as Pekka pointed out, cannot be connected to anything else and remain a microcanonical system.”
To the extent that’s true, it’s irrelevant to Dr. Brown’s proof. And, to the extent that it’s relevant to Dr. Brown’s proof, it isn’t true.
It is true that an ideal-gas column whose possible microstates constitute a microcanonical ensemble when the column is thermally isolated will, upon thermal communication with another, erstwhile-isolated column, have a vastly expanded ensemble of microstates which is different from that microcanonical ensemble. But how does that fact validate Dr. Brown’s proof or establish the B-E Law, on which his proof is based?
On the other hand, the resultant composite system’s microstates themselves constitute a microcanonical ensemble—with its own non-zero kinetic-energy gradient in the presence of gravity and thus its own non-zero kinetic-theory lapse rate.
It is only if we use the kinetic-theory definition of temperature that Dr. Brown’s initial assumption—a non-zero lapse rate at equilibrium in an isolated gas column—makes any sense. And, under that definition, Velasco et al. show that coupling two different-lapse-rate columns (say, columns having different amounts of the same-molecular-weight ideal gas) would simply result in a new, (also isolated) composite system having a lower but still non-zero kinetic-theory-temperature-definition lapse rate.
Nothing requires the perpetual motion in which the B-E Law says such a coupling would result.
By the way, I paraphrased the B-E Law. Here’s how Dr. Brown introduced it:
“Two different columns of gas with different lapse rates. Place them in good thermal contact at the bottom, so that the bottoms remain at the same temperature. They must therefore be at different temperatures at the top. Run a heat engine between the two reservoirs at the top and it will run forever, because as fast as heat is transferred from one column to another, (warming the top) it warms the bottom of that column by an identical amount, causing heat to be transferred at the bottom to both cool the column back to its original temperature profile and re-warm the bottom of the other column. The heat simply circulates indefinitely, doing work as it does, until the gas in both columns approaches absolute zero in temperature, converting all of their mutual heat content into work.”
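[For concreteness, the quoted scenario can be put into toy bookkeeping form, taking the disputed lapse-rate-conservation assumption at face value. All numbers below (heat capacities, lapse rates, heat drawn per cycle) are arbitrary illustrative choices, not a physical model; the sketch only shows that, under that assumption, the common bottom temperature ratchets steadily downward while the engine extracts work, just as Dr. Brown describes.]

```python
# Toy bookkeeping of Dr. Brown's scenario, GRANTING the disputed assumption
# that each column rigidly keeps its own lapse rate while coupled.
C = 100.0           # heat capacity of each column (J/K), arbitrary
h = 1.0             # column height, arbitrary units
L1, L2 = 5.0, 10.0  # the two columns' assumed fixed lapse rates (K per unit height)
Tb = 300.0          # common bottom temperature (K); columns coupled at the bottom
dQ = 1.0            # heat drawn per cycle from the hotter top (J)
work = 0.0

for _ in range(20000):
    T1, T2 = Tb - L1 * h, Tb - L2 * h   # top temperatures; T1 > T2 with these lapse rates
    if T2 <= 0:
        break
    eff = 1 - T2 / T1                   # Carnot engine running between the two tops
    work += dQ * eff
    # Column 1 loses dQ at its top; column 2 receives the rejected heat.
    # With lapse rates held fixed, each column's whole profile shifts rigidly,
    # and the bottoms then re-equalize on contact.
    Tb1 = Tb - dQ / C
    Tb2 = Tb + dQ * (1 - eff) / C
    Tb = 0.5 * (Tb1 + Tb2)

print(Tb, work)   # Tb has fallen and work was extracted; the flow never stops
```

The loop never reaches a standstill because the tops differ by (L2 − L1)·h forever; that perpetual heat flow is precisely the consequence Joe Born argues does not actually follow from the premise.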
One final note. I see little point in arguing about which definition of temperature is most fundamental. I would only suggest contemplating the following thought experiment.
Take a thermally isolated gas column and a negligible-thermal-mass thermometer that initially is thermally isolated from everything. Then bring the thermometer into thermal communication with the column at a high altitude so that the mean translational kinetic energy of the thermometer’s molecules equals that of the gas’s at that altitude.
Return the thermometer to isolation and, in that isolated state, bring it to a lower altitude, where, as Velasco et al. say, the mean molecular translational kinetic energy in the gas column is higher—but in accordance with the thermodynamic definition the temperature is the same. What happens to the thermometer’s mean molecular kinetic energy when the thermometer is brought into thermal communication with the column at the lower, higher-mean-kinetic-energy altitude?
Although I’ll leave you to draw your own conclusions, I’ll confess that the question is more complicated than it may seem at first. True, Velasco et al. do say there’s a mean-kinetic-energy difference between altitudes. But that difference is an over-time mean that, for any significant number of molecules, is no doubt only a minuscule fraction of that difference’s variance. At any instant, that is, the mean molecular kinetic energy in one altitude range is only slightly more likely to be less than that in a lower altitude range than it is to be greater.
And that doesn’t even take into account quantum mechanics, which for all I know so smears the “time instant” as to make that all meaningless; in this I’m venturing beyond what I understand.
But I do understand logic, and I remain convinced that Dr. Brown’s proof is invalid.
Joe,
Remember that the whole difference between the average total kinetic energies of a subvolume in a canonical and in a microcanonical ensemble is of the order of the variation in the potential energy of a single molecule. That minuscule energy is distributed among all molecules.
Any thermometer has a huge thermal capacity in comparison to a single molecule. Therefore any measurement of the temperature disturbs the system much more than the difference to be measured. Therefore all measurements of the nature you propose are impossible. This is not only a practical limitation, but it’s fundamental and cannot be circumvented even in theory – or in any physically meaningful thought experiment.
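[Pekka’s order-of-magnitude point can be made concrete. Taking his estimate above, that the microcanonical correction to the mean kinetic energy is of order the potential-energy span of one molecule shared among all N molecules, and comparing it with the ordinary statistical fluctuation of a sample’s mean kinetic energy (of order kT/√N per molecule), the correction is swamped by many orders of magnitude even for a tiny gas sample. The particular column size and molecule count are illustrative choices.]

```python
# Order-of-magnitude comparison for a 1 km column of nitrogen at 300 K.
k  = 1.380649e-23   # Boltzmann constant (J/K)
m  = 4.65e-26       # mass of an N2 molecule (kg)
g  = 9.8            # gravitational acceleration (m/s^2)
dz = 1000.0         # height span of the column (m)
T  = 300.0          # temperature (K)
N  = 1e20           # number of molecules (a very small gas sample)

# Pekka's estimate: one molecule's potential-energy span, shared among N molecules
correction_per_molecule = m * g * dz / N

# Ordinary rms fluctuation of the sample's mean kinetic energy, per molecule
fluctuation_per_molecule = k * T / N**0.5

print(correction_per_molecule)    # ~ 5e-42 J
print(fluctuation_per_molecule)   # ~ 4e-31 J
print(fluctuation_per_molecule / correction_per_molecule)  # ~ 1e11
```

So the quantity at issue is roughly eleven orders of magnitude below the thermal noise floor of even this tiny sample, which is the sense in which no thermometer could ever see it.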
Dr. Brown’s proof is for thermodynamics, or for the thermodynamic limit of statistical mechanics. It does not apply to the imagined finite microcanonical ensemble. It is neither right nor wrong about that case, because it is not about that case at all. As far as I know, nobody has claimed that it would apply to that idealized case.
I may have erred when I wrote above that “an inference to be drawn from Velasco et al. is that bringing the two erstwhile-isolated gas columns into communication would instead result in the thus-created composite body’s assuming a common, lower gradient and thus avoiding the perpetual motion that the proponents of that type of ‘proof’ would attribute to it.” Yes, perpetual motion would be avoided, but I’m not sure the two systems’ gradients would necessarily end up being equal.
In a proof by contradiction a premise is refuted by showing that it leads logically to a false conclusion. The proof is faulty if its reasoning is based on something inconsistent with the premise, or if the effective premise actually differs from that which is to be refuted.
My objection to Robert G. Brown’s silver-wire proof was that its actual premise differed from the one he attempted to refute. The one he attempted to refute was that at equilibrium an ideal-gas column disposed in a gravitational field would exhibit a non-zero lapse rate.
But his actual premise, I objected, was a combination of that proposition and the proposition that thermally coupling the gas column to another system would not affect the gas column’s equilibrium lapse rate. So, I implied, he actually proved only that a gas column’s equilibrium lapse rate couldn’t be non-zero IF it were unaffected by thermal coupling. However, I’m no longer sure he proved even that.
Dr. Brown assumed that thermal coupling would—as we infer from our observations of the zero-equilibrium-lapse-rate world—impose a common lapse rate on the coupled systems; he assumed that at equilibrium the two systems’ mean kinetic energies per molecule are required to be equal at each thermal-coupling point. Frankly, I assumed that, too. As a result of a discussion on another site, though, I realized that I wasn’t so sure that under the non-zero-equilibrium-lapse-rate premise—which must prevail throughout the proof—such an assumption is consistent with the proof’s premise.
To extrapolate to conditions that would prevail under the non-zero-equilibrium-lapse-rate premise of Dr. Brown’s proof, let’s drastically reduce the gas column’s number of molecules to two, and let’s give those two molecules different masses. Note that at any given altitude the mean kinetic energy of the more-massive molecule is less than that of the less-massive one; when they approach head-on with equal kinetic energies and collide, the less-massive molecule gains energy at the more-massive one’s expense. If we look at the two molecules as two thermally coupled systems, we see that at equilibrium such systems’ mean kinetic energies per molecule are not necessarily equal.
Yes, yes, I know. A single molecule is not a gas. What it’s undergoing is ballistic motion, not thermal motion. The effect I just pointed out is attenuated as the number of molecules increases. I’ve heard all those arguments. But they describe differences only in degree.
True, the effect is attenuated, but it is not extinguished entirely. So the fact remains that, to the extent that a non-zero equilibrium lapse rate does prevail—as it must according to the premise of Dr. Brown’s proof—thermal coupling may not necessarily imply equal mean kinetic energies per molecule. And Dr. Brown’s proof may therefore include another unfounded assumption, another difference between his proof’s ostensible premise and its actual premise.
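[An aside: the single-collision part of that two-molecule argument is easy to check numerically for the head-on case (a catch-up collision, where the faster light molecule overtakes the heavy one from behind, can go the other way, which is why only the averaged behavior matters). A minimal sketch using the standard one-dimensional elastic-collision formulas; the masses and energies are arbitrary illustrative numbers.]

```python
import math

# Two molecules approach head-on with EQUAL kinetic energies but m2 = 4*m1.
m1, m2 = 1.0, 4.0
ke = 2.0                               # each molecule's kinetic energy (J)
v1 = math.sqrt(2 * ke / m1)            # lighter molecule, moving right
v2 = -math.sqrt(2 * ke / m2)           # heavier molecule, moving left

# Standard 1-D elastic collision formulas (momentum and energy conserved)
v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

ke1p = 0.5 * m1 * v1p**2
ke2p = 0.5 * m2 * v2p**2
print(ke1p, ke2p)   # the lighter molecule ends with more KE, the heavier with less
```

With these numbers the lighter molecule comes away with the lion’s share of the energy while the total stays fixed, which is the head-on mechanism referred to above.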
Again, I don’t disagree with the essence of Dr. Brown’s overall equilibrium-isothermality conclusion; in fact, his conclusion was my reaction to Velasco et al. from the very start. But I remain convinced that proofs such as Dr. Brown’s based on finding perpetual undriven heat flow are invalid.
Pekka Pirilä: “This is not only a practical limitation, but it’s fundamental and cannot be circumvented even in theory – or in any physically meaningful thought experiment.”
So what? The premise of Dr. Brown’s thought experiment is a non-zero equilibrium lapse rate. Is that physically meaningful? The issue is whether Dr. Brown’s logic is valid, and to test whether it is we have to deal with the world that his proof’s premise assumes, physically meaningful or not.
Pekka Pirilä: “Therefore all measurements of the nature you propose are impossible.”
But I didn’t propose any measurements. I took note of the value that theory gives an unmeasurable quantity, and I used it to test an assumption, on which Dr. Brown’s silver-wire proof is based, about what would happen in the perhaps physically unrealizable world of his proof’s ostensible premise.
Pekka Pirilä: “Therefore any measurement of the temperature disturbs the system much more than the difference to be measured.”
I have never contended otherwise. In fact, I said from the beginning that the incredible smallness of that gradient’s magnitude—which in most cases is probably orders of magnitude smaller than even the fluctuations in the gradient—shows that Velasco et al. essentially establish the ultimate conclusion of Dr. Brown’s proof. My problem is not his ultimate conclusion but his logic.
Please try to focus on the issue. The issue is not whether any physically measurable non-zero equilibrium lapse rate exists. The issue is whether Dr. Brown successfully refuted that proposition, i.e., whether he successfully established that non-zero equilibrium lapse rates necessarily imply that net heat could flow undriven perpetually.
I know you have contended that “The issue is not, whether a situation will result in perpetual motion or not.” But you’re wrong. Or, rather, the precise issue is perpetual heat flow; Dr. Brown’s whole proof depends on such a result’s following logically from the non-zero-equilibrium-lapse-rate premise to be refuted. Unless you can see that it does, we have nothing to discuss.
Joe,
In physics values that are not measurable even in principle are usually not considered worth any attention. Therefore the calculated very small difference in the average kinetic energy is not of physical significance. Thus it does not contradict anything Dr. Brown wrote.
Pekka,
Assuming that this is the article to which Joe Born refers, an insulated silver wire thermally connects the top and bottom of the gas column. Any calculations related to a microcanonical ensemble are therefore irrelevant.
DeWitt,
The issues Joe Born has raised are interesting in the way that pondering such questions reveals something about physics. In this case perhaps most about the nature of microcanonical ensembles, but also more generally about statistical physics.
What I don’t like in his texts is the criticism he presents of Brown’s comment. The comment is, unavoidably, simplified, but not in a way that could be considered wrong in any sense.