
In Latent heat and Parameterization I showed a formula for calculating latent heat transfer from the surface into the atmosphere, as well as the “real” formula. The parameterized version has horizontal wind speed x humidity difference (between the surface and some reference height in the atmosphere, typically 10m) x “a coefficient”.

One commenter asked:

Why do we expect that vertical transport of water vapor to vary linearly with horizontal wind speed? Is this standard turbulent mixing?

The simple answer is “almost yes”. But as someone famously said, make everything as simple as possible, but not simpler.

Charting a course between too simple and too hard is a challenge with this subject. By contrast, radiative physics is a cakewalk. I’ll begin with some preamble and eventually get to the destination.

There’s a set of equations describing the motion of fluids – the Navier-Stokes equations – which conserve momentum in 3 directions (x,y,z) and also conserve mass. Then there are also equations to conserve humidity and heat. In principle these equations completely determine the flow, but there is a bit of a problem in practice. The Navier-Stokes equations in a rotating frame can be seen in The Coriolis Effect and Geostrophic Motion under “Some Maths”.

Simple linear equations with simple boundary conditions can be re-arranged to give a nice formula for the answer. Then you can plot this against that and everyone can see how the relationships change with different material properties or boundary conditions. In real life the equations are not linear and the boundary conditions are not simple. So there is no “analytical solution”, where we want to know, say, the velocity of the fluid in the east-west direction as a function of time and get a nice equation for the answer. Instead we have to use numerical methods.

Let’s take a simple problem – if you want to know the heat flow through an odd-shaped metal plate that is heated in one corner and cooled by steady air flow on the rest of its surface, you can use these numerical methods and usually get a very accurate answer.

Turbulence is a lot more difficult due to the range of scales involved. Here’s a nice image of turbulence:

Figure 1

There is a cascade of energy from the largest scales down to the point where viscosity “eats up” the kinetic energy. In the atmosphere this is the sub-1mm scale. So if you want to accurately numerically model atmospheric motion across a 100km scale you need a grid of roughly 100,000,000 x 100,000,000 x 10,000,000 points, solved at sub-second time steps for a few days. Well, that’s a lot of calculation. I’m not sure where turbulence modeling via “direct numerical simulation” has got to, but I’m pretty sure it is still too hard, and in a decade it will still be a long way off. The computing power isn’t there.
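
For anyone who wants to see why the computing power isn’t there, here’s a back-of-envelope sketch in Python – the domain size and the ~1 mm dissipation scale are just the round figures used above:

```python
# Rough size of a DNS grid for a 100 km x 100 km x 10 km atmospheric
# domain resolved down to the ~1 mm scale where viscosity acts.
nx = ny = int(100e3 / 1e-3)   # 100,000,000 points in each horizontal direction
nz = int(10e3 / 1e-3)         # 10,000,000 points in the vertical

points = float(nx) * ny * nz
print(f"grid points: {points:.0e}")   # ~1e23

# Five variables (u, v, w, T, q) at 8 bytes each, for a single snapshot:
print(f"one snapshot: {points * 5 * 8:.0e} bytes")   # ~4e24 bytes
```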

Anyway, for atmospheric modeling you don’t really want to know the velocity in the x, y, z directions (usually denoted u, v, w) at trillions of points every second. Who is going to dig through that data? What you want is a statistical description of the key features.

So if we take the Navier-Stokes equation and average, what do we get? We get a problem.

For the mathematically inclined the following is obvious, but many readers aren’t mathematicians, so here’s a simple example:

Let’s take 3 numbers: 1, 10, 100:   the average = (1+10+100)/3 = 37.

Now let’s look at the square of those numbers: 1, 100, 10000:  the average of the square of those numbers = (1+100+10000)/3 = 3367.

But if we take the average of our original numbers and square it, we get 37² = 1369. It’s strange but the average squared is not the same as the average of the squared numbers. That’s non-linearity for you.
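
For anyone who wants to check the arithmetic, the same example in a few lines of Python:

```python
xs = [1, 10, 100]

mean = sum(xs) / len(xs)                            # 37.0
mean_of_squares = sum(x**2 for x in xs) / len(xs)   # 3367.0
square_of_mean = mean**2                            # 1369.0

print(mean, mean_of_squares, square_of_mean)
```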

In the Navier-Stokes equations we have terms like east velocity x upwards velocity, written as uw. The average of uw, written as \overline{uw}, is not equal to the average of u x the average of w, written as \overline{u}.\overline{w} – for the same reason we just looked at.

When we create the Reynolds averaged Navier-Stokes (RANS) equations we get lots of new terms like \overline{uw}. That is, we started with the original equations which gave us a complete solution – the same number of equations as unknowns. But when we average we end up with more unknowns than equations.

It’s like saying x + y = 1, what are x and y? No one can say. Perhaps 1 & 0. Perhaps 1000 & -999.

Digression on RANS for Slightly Interested People

The Reynolds approach is to take a value like u, v, w (velocity in 3 directions) and decompose it into a mean and a “rapidly varying” turbulent component.

So u = \overline{u} + u', where \overline{u} = mean value;  u’ = the varying component. So \overline{u'} = 0. Likewise for the other directions.

And \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}

So in the original equation where we have a term like u . \frac{\partial u}{\partial x}, it turns into  (\overline{u} + u') . \frac{\partial (\overline{u} + u')}{\partial x}, which, when averaged, becomes:

\overline{u} . \frac{\partial \overline{u}}{\partial x} +\overline{u' . \frac{\partial u'}{\partial x}}

So 2 unknowns instead of 1. The first term is the averaged flow, the second term is the turbulent flow. (Well, it’s an advection term for the change in velocity following the flow)

When we look at the conservation of energy equation we end up with terms for the movement of heat upwards due to average flow (almost zero) and terms for the movement of heat upwards due to turbulent flow (often significant). That is, a term like \overline{\theta'w'} which is “the mean of potential temperature variations x upwards eddy velocity”.

Or, in plainer English, how heat gets moved up by turbulence.
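
We can verify the decomposition numerically. Here’s a minimal sketch with synthetic “turbulence” – the signals and their statistics are invented for illustration; the identity holds for any data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic signals: a mean flow plus correlated fluctuations.
u = 5.0 + rng.normal(0, 1.0, n)              # horizontal velocity, mean ~5 m/s
w = 0.5 * (u - 5.0) + rng.normal(0, 0.3, n)  # vertical velocity, mean ~0

u_p = u - u.mean()   # u', the fluctuating part
w_p = w - w.mean()   # w'

# Check: mean(uw) = mean(u)*mean(w) + mean(u'w')
print(np.mean(u * w))
print(u.mean() * w.mean() + np.mean(u_p * w_p))   # identical, up to float precision
```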

..End of Digression

Closure and the Invention of New Ideas

“Closure” is a maths term. To “close the equations” when we have more unknowns than equations means we have to invent a new idea. Some geniuses like Reynolds, Prandtl and Kolmogoroff did come up with some smart new ideas.

Often the smart ideas are around “dimensionless terms” or “scaling terms”. The first time you encounter these ideas they seem odd or just plain crazy. But like everything, over time strange ideas start to seem normal.

The Reynolds number is probably the simplest to get used to. The Reynolds number seeks to relate fluid flows to other similar fluid flows. You can have fluid flow through a massive pipe that is identical in the way turbulence forms to that in a tiny pipe – so long as the Reynolds number is the same, with the velocity, viscosity and density changed accordingly.

The Reynolds number, Re = \frac{\rho L U}{\mu} = density x length scale x mean velocity of the fluid / viscosity

And regardless of the actual physical size of the system and the actual velocity, turbulence forms for flow over a flat plate when the Reynolds number is about 500,000. By the way, the atmosphere and ocean are above this threshold – that is, turbulent – most of the time.
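
A quick sketch of the calculation – the air properties are rough textbook values, assumed for illustration:

```python
def reynolds_number(rho, U, L, mu):
    """Re = density x length scale x mean velocity / viscosity."""
    return rho * U * L / mu

# Air near the surface: rho ~ 1.2 kg/m^3, dynamic viscosity ~ 1.8e-5 Pa.s,
# a 5 m/s wind with a 10 m length scale:
print(reynolds_number(1.2, 5.0, 10.0, 1.8e-5))   # ~3e6, well past 500,000
```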

Kolmogoroff came up with an idea in 1941 about the turbulent energy cascade using dimensional analysis and came to the conclusion that the energy of eddies increases with their size to the power 2/3 (in the “inertial subrange”). This is usually written vs frequency where it becomes a -5/3 power. Here’s a relatively recent experimental verification of this power law.

From Durbin & Reif 2010

Figure 2

In a less genius-like manner, people measure stuff and use these measured values to “close the equations” for “similar” circumstances. Unfortunately, the measurements are only valid in a small range around the experimental conditions, and with turbulence it is hard to predict where the cutoff is.

A nice simple example, to which I hope to return because it is critical in modeling climate, is vertical eddy diffusivity in the ocean. By way of introduction to this, let’s look at heat transfer by conduction.

If only all heat transfer was as simple as conduction. That’s why it’s always first on the list in heat transfer courses..

If we have a plate of thickness d, and we hold one side at temperature T1 and the other side at temperature T2, the heat conduction per unit area is:

H_z = \frac{k(T_2-T_1)}{d}

where k is a material property called conductivity. We can measure this property and it’s always the same. It might vary with temperature but otherwise if you take a plate of the same material and have widely different temperature differences, widely different thicknesses – the heat conduction always follows the same equation.

Now using these ideas, we can take the actual equation for vertical heat flux via turbulence:

H_z = \rho c_p \overline{w'\theta'}

where ρ = air density, cp = specific heat capacity, w = vertical velocity, θ = potential temperature

And relate that to the heat conduction equation and come up with (aka ‘invent’):

H_z = \rho c_p K . \frac{\partial \theta}{\partial z}

Now we have an equation we can actually use because we can measure how potential temperature changes with depth. The equation has a new “constant”, K. But this one is not really a constant, it’s not really a material property – it’s a property of the turbulent fluid in question. Many people have measured the “implied eddy diffusivity” and come up with a range of values which tells us how heat gets transferred down into the depths of the ocean.

Well, maybe it does. Maybe it doesn’t tell us very much that is useful. Let’s come back to that topic and that “constant” another day.
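
Just to get a feel for the magnitudes, here’s a sketch of that “invented” equation with assumed values – the diffusivity K and the temperature gradient are illustrative numbers of the kind reported, not definitive ones:

```python
rho_w = 1025.0    # seawater density, kg/m^3
cp_w = 4000.0     # seawater specific heat, J/(kg K), approximate

dtheta_dz = 0.05  # K/m - an assumed thermocline temperature gradient

# Measured "implied eddy diffusivities" span a wide range;
# 1e-5 to 1e-4 m^2/s is representative of that spread.
for K in (1e-5, 1e-4):
    H_z = rho_w * cp_w * K * dtheta_dz
    print(f"K = {K:.0e} m^2/s  ->  H_z = {H_z:.1f} W/m^2")
```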

The Main Dish – Vertical Heat Transfer via Horizontal Wind

Back to the original question. If you imagine a sheet of paper as big as your desk, its thickness pretty much gives you an idea of the height of the troposphere (the lower atmosphere, where convection is prominent).

It’s as thin as a sheet of desk-size paper in comparison to the dimensions of the earth. So any large scale motion is horizontal, not vertical. Mean vertical velocities – which don’t include turbulence via strong localized convection – are very low. Mean horizontal velocities can be of the order of 5-10 m/s near the surface of the earth. Mean vertical velocities are of the order of cm/s.

Let’s look at flow over the surface under “neutral conditions”. This means that there is little buoyancy production due to strong surface heating. In this case the energy for turbulence close to the surface comes from the kinetic energy of the mean wind flow – which is horizontal.

There is a surface drag which gets transmitted up through the boundary layer until there is “free flow” at some height. By using dimensional analysis, we can figure out what this velocity profile looks like in the absence of strong convection. It’s logarithmic:

Surface-wind-profile

Figure 3 – for typical ocean surface

Lots of measurements confirm this logarithmic profile.

We can then calculate the surface drag – or how momentum is transferred from the atmosphere to the ocean – from the logarithmic profile, and we come up with a simple expression:

\tau_0 = \rho C_D U_r^2

Where Ur is the velocity at some reference height (usually 10m), and CD is a constant calculated from the ratio of the reference height to the roughness height and the von Karman constant.
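
Here’s a sketch of that calculation in Python, with an assumed roughness height of the size often quoted for the open ocean:

```python
import numpy as np

k = 0.4      # von Karman constant
z_r = 10.0   # reference height, m
z_0 = 1e-4   # roughness height, m - an assumed open-ocean value

# C_D from the log profile: C_D = [k / ln(z_r / z_0)]^2
C_D = (k / np.log(z_r / z_0)) ** 2
print(f"C_D = {C_D:.2e}")   # ~1.2e-3

rho = 1.2    # air density, kg/m^3
U_r = 8.0    # wind speed at the reference height, m/s
print(f"tau_0 = {rho * C_D * U_r**2:.3f} N/m^2")   # surface stress
```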

Using similar arguments we can come up with heat transfer from the surface. The principles are very similar. What we are actually modeling in the surface drag case is the turbulent vertical flux of horizontal momentum \rho \overline{u'w'} with a simple formula that just has mean horizontal velocity. We have “closed the equations” by some dimensional analysis.

Adding the Richardson number for non-neutral conditions we end up with a temperature difference along with a reference velocity to model the turbulent vertical flux of sensible heat \rho c_p . \overline{w'\theta'}. Similar arguments give latent heat flux L\rho . \overline{w'q'} in a simple form.

Now with a bit more maths..

At the surface the horizontal velocity must be zero. The vertical flux of horizontal momentum creates a drag on the boundary layer wind. The vertical gradient of the mean wind, U, can only depend on height z, density ρ and surface drag.

So the “characteristic wind speed” for dimensional analysis is called the friction velocity, u*, and u* = \sqrt\frac{\tau_0}{\rho}

This strange number has the units of velocity: m/s  – ask if you want this explained.

So dimensional analysis suggests that \frac{z}{u*} . \frac{\partial U}{\partial z} should be a constant – “scaled wind shear”. The inverse of that constant is known as the Von Karman constant, k = 0.4.

So a simple re-arrangement and integration gives:

U(z) = \frac{u*}{k} . ln(\frac{z}{z_0})

where z0 is a constant of integration, known as the roughness height – a physical property of the surface; it is the height at which the mean wind reaches zero.

The “real form” of the friction velocity is:

u*^2 = \frac{\tau_0}{\rho} = \left[(\overline{u'w'})^2 + (\overline{v'w'})^2\right]^{1/2},  where these eddy values are at the surface

we can pick a horizontal direction along the line of the mean wind (rotate coordinates) and come up with:

u*^2 = -\overline{u'w'}  (the minus sign because momentum is transported downwards, making \overline{u'w'} negative)

If we consider a simple constant gradient argument:

\tau = - \rho . \overline{u'w'} = \rho K \frac{\partial \overline{u}}{\partial z}

where the first expression is the “real” equation and the second is the “invented” equation, or “our attempt to close the equation” from dimensional analysis.

Of course, this is showing how momentum is transferred, but the approach is pretty similar, just slightly more involved, for sensible and latent heat.

Conclusion

Turbulence is a hard problem. The atmosphere and ocean are turbulent so calculating anything is difficult. Until a new paradigm in computing comes along, the real equations can’t be numerically solved across the full range of scales – from the small scales where viscous dissipation damps out the kinetic energy of the turbulence, up to the scale of the whole earth, or even of a synoptic-scale event. However, numerical analysis has been used a lot to test out ideas that are hard to test in laboratory experiments, and it can give a lot of insight into parts of the problem.

In the meantime, experiments, dimensional analysis and intuition have provided a lot of very useful tools for modeling real climate problems.

In Ensemble Forecasting I wrote a short section on parameterization using the example of latent heat transfer and said:

Are we sure that over Connecticut the parameter CDE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. In so far as it represents a real physical property it will vary depending on the time of day, seasons and other factors. It might even be, “on average”, wrong. Because “on average” over the set of experiments was an imperfect sample. And “on average” over all climate conditions is a different sample.

Interestingly, a new paper has just shown up in JGR (“accepted for publication” and on their website in the pre-publishing format): Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu.

They carried out detailed measurements over a large reservoir (134 km² and 4-8m deep) in Mississippi for the winter and summer months of 2008. What were they trying to do?

Understanding physical processes that control turbulent fluxes of energy, heat, water vapor, and trace gases over inland water surfaces is critical in quantifying their influences on local, regional, and global climate. Since direct measurements of turbulent fluxes of sensible heat (H) and latent heat (LE) over inland waters with eddy covariance systems are still rare, process-based understanding of water-atmosphere interactions remains very limited..

..Many numerical weather prediction and climate models use the bulk transfer relations to estimate H and LE over water surfaces. Given substantial biases in modeling results against observations, process-based analysis and model validations are essential in improving parameterizations of water-atmosphere exchange processes..

Before we get into their paper, here is a relevant quote on parameterization from a different discipline. This is from Turbulent dispersion in the ocean, Garrett (2006):

Including the effects of processes that are unresolved in models is one of the central problems in oceanography.

In particular, for temperature, salinity, or some other scalar, one seeks to parameterize the eddy flux in terms of quantities that are resolved by the models. This has been much discussed, with determinations of the correct parameterization relying on a combination of deductions from the large-scale models, observations of the eddy fluxes or associated quantities, and the development of an understanding of the processes responsible for the fluxes.

The key remark to make is that it is only through process studies that we can reach an understanding leading to formulae that are valid in changing conditions, rather than just having numerical values which may only be valid in present conditions.

[Emphasis added]

Background

Latent heat transfer is the primary mechanism globally for transferring the solar radiation that is absorbed at the surface up into the atmosphere. Sensible heat is a lot smaller by comparison. Both are “convection” in broad terms – the movement of heat by the bulk movement of air. But one is carrying the “extra heat” of evaporated water. When the evaporated water condenses (usually higher up in the atmosphere) it releases this stored heat.

Let’s take a look at the standard parameterization in use (adopting their notation) for latent heat:

LE = \rho_a L C_E U (q_w - q_a)

LE = latent heat transfer, ρ_a = air density, L = latent heat of vaporization (2.5×10⁶ J kg⁻¹), C_E = bulk transfer coefficient for moisture, U = wind speed, q_w & q_a are the respective specific humidities at the water-atmosphere interface and in the over-water atmosphere

The values ρ_a and L are fundamental values. The formula says that the key parameters are:

  • wind speed (horizontal)
  • the difference between the humidity at the water surface (this is the saturated value which varies strongly with temperature) and the humidity in the air above

We would expect the differential of humidity to be important – if the air above is saturated then latent heat transfer will be zero, because there is no way to move any more water vapor into the air above. At the other extreme, if the air above is completely dry then we have maximized the potential for moving water vapor into the atmosphere.

The product of wind speed and humidity difference indicates how much mixing is going on due to air flow. There is a lot of theory and experiment behind these ideas, going back into the 1950s or further, but in the end it is an over-simplification.
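
As a concrete illustration, here’s the bulk formula in a few lines of Python – the transfer coefficient and humidities are assumed, plausible values, not measured ones:

```python
rho_a = 1.2    # air density, kg/m^3
L = 2.5e6      # latent heat of vaporization, J/kg
C_E = 1.2e-3   # bulk transfer coefficient - an assumed typical value

def latent_heat_flux(U, q_w, q_a):
    """LE = rho_a * L * C_E * U * (q_w - q_a), in W/m^2."""
    return rho_a * L * C_E * U * (q_w - q_a)

# 5 m/s wind, water surface saturated at ~25C (~20 g/kg), air at 12 g/kg:
print(latent_heat_flux(5.0, 0.020, 0.012))    # ~144 W/m^2

# Saturated air above: zero flux no matter how hard the wind blows.
print(latent_heat_flux(10.0, 0.020, 0.020))   # 0.0
```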

That’s what all parameterizations are – over-simplifications.

The real formula is much simpler:

LE = \rho_a L <w'q'>, where the brackets denote averages, and w'q' = the turbulent moisture flux

w is the upwards velocity, q is specific humidity; the prime denotes the turbulent (eddy) component

Note to commenters, if you write < or > in the comment it gets dropped because WordPress treats it like a html tag. You need to write &lt; or &gt;

The key part of this equation just says “how much moisture is being carried upwards by turbulent flow”. That’s the real value so why don’t we measure that instead?

Here’s a graph of horizontal wind over a short time period from Stull (1988):

From Stull 1988

Figure 1

At any given location the wind varies across every timescale. Pick another location and the results are different. This is the problem of turbulence.

And to get accurate measurements for the paper we are looking at now, they had quite a setup:

Zhang 2014-instruments

Figure 2

Here’s the description of the instrumentation:

An eddy covariance system at a height of 4 m above the water surface consisted of a three-dimensional sonic anemometer (model CSAT3, Campbell Scientific, Inc.) and an open path CO2/H2O infrared gas analyzer (IRGA; Model LI-7500, LI-COR, Inc.).

A datalogger (model CR5000, Campbell Scientific, Inc.) recorded three-dimensional wind velocity components and sonic virtual temperature from the sonic anemometer and densities of carbon dioxide and water vapor from the IRGA at a frequency of 10 Hz.

Other microclimate variables were also measured, including Rn at 1.2 m (model Q-7.1, Radiation and Energy Balance Systems, Campbell Scientific, Inc.), air temperature (Ta) and relative humidity (RH) (model HMP45C, Vaisala, Inc.) at approximately 1.9, 3.0, 4.0, and 5.5 m, wind speeds (U) and wind direction (WD) (model 03001, RM Young, Inc.) at 5.5 m.

An infrared temperature sensor (model IRR-P, Apogee, Inc.) was deployed to measure water skin temperature (Tw).

Vapor pressure (ew) in the water-air interface was equivalent to saturation vapor pressure at Tw [Buck, 1981].

The same datalogger recorded signals from all the above microclimate sensors at 30-min intervals. Six deep cycling marine batteries charged by two solar panels (model SP65, 65 Watt Solar Panel, Campbell Scientific, Inc.) powered all instruments. A monthly visit to the tower was scheduled to provide maintenance and download the 10-Hz time-series data.
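
From records like these the flux is computed directly. Here’s a sketch of the eddy covariance calculation on a synthetic 30-minute block of 10 Hz data – the signals are invented stand-ins for the sonic anemometer and gas analyzer outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

rho_a, L = 1.2, 2.5e6
fs = 10                  # sampling frequency, Hz
n = fs * 60 * 30         # one 30-minute averaging block

# Stand-in 10 Hz series - real ones come from the instruments above.
w = rng.normal(0.0, 0.3, n)                     # vertical velocity, m/s
q = 0.012 + 2e-4 * w + rng.normal(0, 1e-4, n)   # specific humidity, kg/kg

# LE = rho_a * L * mean(w'q')
w_p, q_p = w - w.mean(), q - q.mean()
print(f"LE = {rho_a * L * np.mean(w_p * q_p):.0f} W/m^2")   # ~54 W/m^2 here
```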

I don’t know the price tag but I don’t think the equipment is cheap. So this kind of setup can be used for research, but we can’t put one every 1 km across a country or an ocean and collect continuous data.

That’s why we need parameterizations if we want to get some climatological data. Of course, these need verifying, and that’s what this paper (and many others) is about.

Results

When we look back at the parameterized equation for latent heat it’s clear that latent heat should vary linearly with the product of wind speed and humidity differential. The top graph is sensible heat, which we won’t focus on; the bottom graph is latent heat. Δe is the humidity difference, expressed as a partial pressure rather than in g/kg. We see that the correlation between LE and wind speed x humidity differential is very different in summer and winter:

From Zhang & Liu 2014

Figure 3

The scatterplots showing the same information:

From Zhang & Liu 2014

Figure 4

The authors looked at the diurnal cycle – averaging the result for the time of day over the period of the results, separated into winter and summer.

Our results also suggest that the influences of U on LE may not be captured simply by the product of U and Δe [humidity differential] on short timescales, especially in summer. This situation became more serious when the ASL (atmospheric surface layer, see note 1) became more unstable, as reflected by our summer cases (i.e., more unstable) versus the winter cases.

They selected one period to review in detail. First the winter results:

From Zhang & Liu 2014

Figure 5

On March 18, Δe was small (i.e., 0 ~ 0.2 kPa) and it experienced little diurnal variations, leading to limited water vapor supply (Fig. 5a).

The ASL (see note 1) during this period was slightly stable (Fig. 5b), which suppressed turbulent exchange of LE. As a result, LE approached zero and even became negative, though strong wind speeds of approximately around 10 m s⁻¹ were present, indicating a strong mechanical turbulent mixing in the ASL.

On March 19, with an increased Δe up to approximately 1.0 kPa, LE closely followed Δe and increased from zero to more than 200 W m⁻². Meanwhile, the ASL experienced a transition from stable to unstable conditions (Fig. 5b), coinciding with an increase in LE.

On March 20, however, the continuous increase of Δe did not lead to an increase in LE. Instead, LE decreased gradually from 200 W m⁻² to about zero, which was closely associated with the steady decrease in U from 10 m s⁻¹ to nearly zero and with the decreased instability.

These results suggest that LE was strongly limited by Δe, instead of U when Δe was low; and LE was jointly regulated by variations in Δe and U once a moderate Δe level was reached and maintained, indicating a nonlinear response of LE to U and Δe induced by ASL stability. The ASL stability largely contributed to variations in LE in winter.

Then the summer results:

From Zhang & Liu 2014

Figure 6

In summer (i.e., July 23 – 25 in Fig. 6), Δe was large with a magnitude of 1.5 ~ 3.0 kPa, providing adequate water vapor supply for evaporation, and had strong diurnal variations (Fig. 6a).

U exhibited diurnal variations from about 0 to 8 m s⁻¹. LE was regulated by both Δe and U, as reflected by the fact that LE variations on the July 24 afternoon did not follow solely either the variations of U or the variations of Δe. When the diurnal variations of Δe and U were small in July 25, LE was also regulated by both U and Δe or largely by U when the change in U was apparent.

Note that during this period, the ASL was strongly unstable in the morning and weakly unstable in the afternoon and evening (Fig. 6b), negatively corresponding to diurnal variations in LE. This result indicates that the ASL stability had minor impacts on diurnal variations in LE during this period.

Another way to see the data is by plotting the results to see how valid the parameterized equation appears. Here we should have a straight line between LE/U and Δe as the caption explains:

From Zhang & Liu 2014

Figure 7

One method to determine the bulk transfer coefficients is to use the mass transfer relations (Eqs. 1, 2) by quantifying the slopes of the linear regression of LE against UΔe. Our results suggest that using this approach to determine the bulk transfer coefficient may cause large bias, given the fact that one UΔe value may correspond to largely different LE values.

They conclude:

Our results suggest that these highly nonlinear responses of LE to environmental variables may not be represented in the bulk transfer relations in an appropriate manner, which requires further studies and discussion.

Conclusion

Parameterizations are inevitable. Understanding their limitations is very difficult. A series of studies might indicate that there is a “linear” relationship with some scatter, but that might just be disguising or ignoring a variable that never appears in the parameterization.

As Garrett commented “..having numerical values which may only be valid in present conditions”. That is, if the mean state of another climate variable shifts the parameterization will be invalid, or less accurate.

Alternatively, given the non-linear nature of climate processes, changes don’t “average out”. So the mean state of another climate variable may not shift, the mean state might be constant, but its variation with time or another variable may introduce a change in the real process that results in an overall shift in climate.

There are other problems with calculating latent heat transfer – even accepting the parameterization as the best version of “the truth” – there are large observational gaps in the parameters we need to measure (wind speed and humidity above the ocean) even at the resolution of current climate models. This is one reason why there is a need for reanalysis products.

I found it interesting to see how complicated latent heat variations were over a water surface.

References

Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu, JGR (2014)

Turbulent dispersion in the ocean, Chris Garrett, Progress in Oceanography (2006)

Notes

Note 1:  The ASL (atmospheric surface layer) stability is described by the Obukhov stability parameter:

ζ = z/L0

where z is the height above ground level and L0 is the Obukhov parameter.

L_0 = \frac{-\theta_v u_*^3}{kg(\overline{w'\theta_v'})_s}

where θ_v is virtual potential temperature (K), u_* is friction velocity from the eddy covariance system (m s⁻¹), k is the von Karman constant (0.4), g is acceleration due to gravity (9.8 m s⁻²), w is vertical velocity (m s⁻¹), and (\overline{w'\theta_v'})_s is the surface flux of virtual potential temperature measured by the eddy covariance system

The atmosphere cools to space by radiation. Well, without getting into all the details, the surface cools to space as well by radiation, but not much of the radiation emitted by the surface escapes directly to space (note 1). Most surface radiation is absorbed by the atmosphere. And of course the surface mostly cools by convection into the troposphere (lower atmosphere).

If there were no radiatively-active gases (aka “GHG”s) in the atmosphere then the atmosphere couldn’t cool to space at all.

Technically, the emissivity of the atmosphere would be zero. Emission is determined by the local temperature of the atmosphere and its emissivity. Wavelength by wavelength emissivity is equal to absorptivity, another technical term, which says what proportion of radiation is absorbed by the atmosphere. If the atmosphere can’t emit, it can’t absorb (note 2).

So as you increase the GHGs in the atmosphere you increase its ability to cool to space. A lot of people work this out at some point during their climate science journey and finally realize how they have been duped by climate science all along! It’s irrefutable – more GHGs mean more cooling to space, more GHGs mean less global warming!

Ok, it’s true. Now the game’s up, I’ll pack up Science of Doom into a crate and start writing about something else. Maybe cognitive dissonance..

Bye everyone!

Halfway through boxing everything up I realized there was a little complication to the simplicity of that paragraph. The atmosphere with more GHGs has a higher emissivity, but also a higher absorptivity.

Let’s draw a little diagram. Here are two “layers” (see note 3) of the atmosphere in two different cases. On the left 400 ppmv CO2, on the right 500 ppmv CO2 (and relative humidity of water vapor was set at 50%, surface temperature at 288K):

Cooling-to-space-2a

Figure 1

It’s clear that the two layers are both emitting more radiation with more CO2. More cooling to space.

For interest, the “total emissivity” of the top layer is 0.190 in the first case and 0.197 in the second case. The layer below has 0.389 and 0.395.

Let’s take a look at all of the numbers and see what is going on. This diagram is a little busier:

Cooling-to-space-3a

Figure 2

The key point is that the OLR (outgoing longwave radiation) is lower in the case with more CO2. Yet each layer is emitting more radiation. How can this be?

Take a look at the radiation entering the top layer on the left = 265.1, and add to that the emitted radiation = 23.0 – the total is 288.1. Now subtract the radiation leaving through the top boundary = 257.0 and we get the radiation absorbed in the layer. This is 31.1 W/m².

Compare that with the same calculation with more CO2 – the absorption is 32.2 W/m².
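
The bookkeeping is simple enough to write down explicitly (numbers from figure 2):

```python
def absorbed(entering, emitted, leaving):
    """Radiation absorbed in a layer = what enters + what it emits - what leaves."""
    return entering + emitted - leaving

print(absorbed(265.1, 23.0, 257.0))   # 31.1 W/m^2 for the 400 ppmv case
```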

This is the case all the way up through the atmosphere – each layer emits more because its emissivity has increased, but it also absorbs more because its absorptivity has increased by the same amount.

So more cooling to space, but unfortunately more absorption of the radiation below – two competing terms.

So why don’t they cancel out?

Emission of radiation is a result of local temperature and emissivity.

Absorption of radiation is the result of the incident radiation and absorptivity. Incident upwards radiation started lower in the atmosphere where it is hotter. So absorption changes always outweigh emission changes (note 4).

Conceptual Problems?

If it’s still not making sense then think about what happens as you reduce the GHGs in the atmosphere. The atmosphere emits less but absorbs even less of the radiation from below. So the outgoing longwave radiation increases. More surface radiation is making it to the top of atmosphere without being absorbed. So there is less cooling to space from the atmosphere, but more cooling to space from the surface and the atmosphere combined.

If you add lagging to a pipe, the temperature of the pipe increases (assuming of course it is “internally” heated with hot water). And yet, the pipe cools to the surrounding room via the lagging! Does that mean more lagging, more cooling? No, it’s just the transfer mechanism for getting the heat out.

That was just an analogy. Analogies don’t prove anything. If well chosen, they can be useful in illustrating problems. End of analogy disclaimer.

If you want to understand more about how radiation travels through the atmosphere and how GHG changes affect this journey, take a look at the series Visualizing Atmospheric Radiation.

 

Notes

Note 1: For more on the details see

Note 2: A very basic point – absolutely essential for understanding anything at all about climate science – is that the absorptivity of the atmosphere can be (and is) totally different from its emissivity when you are considering different wavelengths. The atmosphere is quite transparent to solar radiation, but quite opaque to terrestrial radiation – because they are at different wavelengths. 99% of solar radiation is at wavelengths less than 4 μm, and 99% of terrestrial radiation is at wavelengths greater than 4 μm. That’s because the sun’s surface is around 6000K while the earth’s surface is around 290K. So the atmosphere has low absorptivity of solar radiation (<4 μm) but high emissivity of terrestrial radiation.

Note 3: Any numerical calculation has to create some kind of grid. This is a very coarse grid, with 10 layers of roughly equal pressure in the atmosphere from the surface to 200 mbar. The grid assumes there is just one temperature for each layer. Of course the temperature is decreasing as you go up. We could divide the atmosphere into 30 layers instead. We would get more accurate results. We would find the same effect.

Note 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this effect.

In A Challenge for Bryan I put up a simple heat transfer problem and asked for the equations. Bryan elected not to provide these equations. So I provide the answer, but also attempt some enlightenment for people who don’t think the answer can be correct.

As DeWitt Payne noted, a post with a similar problem posted on Wattsupwiththat managed to gather some (unintentionally) hilarious comments.

Here’s the problem again:

Case 1

Spherical body, A, of radius ra, with an emissivity, εa =1. The sphere is in the vacuum of space.

It is internally heated by a mystery power source (let’s say nuclear, but it doesn’t matter), with power input = P.

The sphere radiates into deep space, let’s say the temperature of deep space = 0K to make the maths simpler.

1. What is the equation for the equilibrium surface temperature of the sphere, Ta?

Case 2

The condition of case A, but now body A is surrounded by a slightly larger spherical shell, B, which of course is itself now surrounded by deep space at 0K.

B has a radius rb, with an emissivity, εb =1. This shell is highly conductive and very thin.

2a. What is the equation for the new equilibrium surface temperature, Ta’?

2b. What is the equation for the equilibrium temperature, Tb, of shell B?

 

Notes:

The reason for the “slightly larger shell” is to avoid “complex” view factor issues. Of course, I’m happy to relax the requirement for “slightly larger” and let Bryan provide the more general answer.

The reason for the “highly conductive” and “thin” outer shell, B, is to avoid any temperature difference between the inside and the outside surfaces of the shell. That is, we can assume the outside surface is at the same temperature as the inside surface – both at temperature, Tb.

This kind of problem is a staple of introductory heat transfer. This is a “find the equilibrium” problem.

How do we solve these kinds of problems? It’s pretty easy once you understand the tools.

The first tool is the first law of thermodynamics. Steady state means temperatures have stabilized and so energy in = energy out. We draw a “boundary” around each body and apply the “boundary condition” of the first law.

The second tool is the set of equations that govern the movement of energy. These are the equations for conduction, convection and radiation. In this case we just have radiation to consider.

For people who see the solution, shake their heads and say, this can’t be, stay on to the end and I will try and shed some light on possible conceptual problems. Of course, if it’s wrong, you should easily be able to provide the correct equations – or even if you can’t write equations you should be able to explain the flaw in the formulation of the equation.

In the original article I put some numbers down – “For anyone who wants to visualize some numbers: ra=1m, P=1000W, rb=1.01m“. I will use these to calculate an answer from the equations. I realize many readers aren’t comfortable with equations and so the answers will help illuminate the meaning of the equations.

I go through the equations in tedious detail, again for people who would like to follow the maths but don’t find maths easy.

Case 1

Energy in, Ein = Energy out, Eout  :  in Watts (Joules per second).

Ein = P

Eout = emission of thermal radiation per unit area x area

The first part is given by the Stefan-Boltzmann equation (σTa⁴, where σ = 5.67×10⁻⁸), and the second part by the equation for the surface area of a sphere (4πra²)

Eout = 4πra² × σTa⁴ ….[eqn 1]

Therefore, P = 4πra²σTa⁴ ….[eqn 2]

We have to rearrange the equation to see how Ta changes with the other factors:

Ta = [P / (4πra²σ)]^(1/4) ….[eqn 3]

If you aren’t comfortable with maths this might seem a little daunting. Let’s put the numbers in:

Ta = 194K (-80ºC)

Now we haven’t said anything about how long it takes to reach this temperature. We don’t have enough information for that. That’s the nice thing about steady state calculations, they are easier than dynamic calculations. We will look at that at the end.

Probably everyone is happy with this equation. Energy is conserved. No surprises and nothing controversial.

Now we will apply the exact same approach to the second case.

Case 2

First we consider “body A”. Given that it is enclosed by another “body” – the shell B – we have to consider any energy being transferred by radiation from B to A. If it turns out to be zero, of course it won’t affect the temperature of body A.

Ein(a) = P + Eb→a ….[eqn 4], where Eb→a is a value we don’t yet know. It is the radiation from B absorbed by A.

Eout(a) = 4πra² × σTa⁴ ….[eqn 5] – this is the same as in case 1. Emission of radiation from a body only depends on its temperature (and emissivity and area but these aren’t changing between the two cases)

- we will look at shell B and come back to the last term in eqn 4.

Now the shell outer surface:

Radiates out to space

We set space at absolute zero so no radiation is received by the outer surface

Shell inner surface:

Radiates in to A (in fact almost all of the radiation emitted from the inner surface is absorbed by A and for now we will treat it as all) – this was the term Eb→a

Absorbs all of the radiation emitted by A, this is Eout(a)

And we made the shell thin and highly conductive so there is no temperature difference between the two surfaces. Let’s collect the heat transfer terms for shell B under steady state:

Ein(b) = Eout(a) + 0 ….[eqn 6] – energy in is all from the sphere A, and nothing from outside

             = 4πra² × σTa⁴ ….[eqn 6a] – we just took the value from eqn 5

Eout(b) = 4πrb² × σTb⁴ + 4πrb² × σTb⁴ ….[eqn 7] – energy out is the emitted radiation from the inner surface + emitted radiation from the outer surface

                = 2 × 4πrb² × σTb⁴ ….[eqn 7a]

 And we know that for shell B, Ein = Eout so we equate 6a and 7a:

4πra² × σTa⁴ = 2 × 4πrb² × σTb⁴ ….[eqn 8]

and now we can cancel a lot of the common terms:

ra² × Ta⁴ = 2 × rb² × Tb⁴ ….[eqn 8a]

and re-arrange to get Ta in terms of Tb:

Ta⁴ = (2rb²/ra²) × Tb⁴ ….[eqn 8b]

Ta = [2rb²/ra²]^(1/4) × Tb ….[eqn 8c]

or we can write it the other way round:

Tb = [ra²/2rb²]^(1/4) × Ta ….[eqn 8d]

Using the numbers given, Ta = 1.2 Tb. So the sphere is 20% warmer than the shell (actually 2 to the power 1/4).

We need to use Ein = Eout for the sphere A to be able to get the full solution. We wrote down: Ein(a) = P + Eb→a ….[eqn 4]. Now we know “Eb→a” – this is one of the terms in eqn 7.

So:

Ein(a) = P + 4πrb² × σTb⁴ ….[eqn 9]

and Ein(a) = Eout(a), so:

P + 4πrb² × σTb⁴ = 4πra² × σTa⁴ ….[eqn 9a]

we can substitute the equation for Tb:

P + (4πra²/2) × σTa⁴ = 4πra² × σTa⁴ ….[eqn 9b]

the 2nd term on the left and the right hand side can be combined:

P = 2πra² × σTa⁴ ….[eqn 9c]

And so, voila:

T’a = [P / (2πra²σ)]^(1/4) ….[eqn 10] – I added a dash to Ta so we can compare it with the original value before the shell arrived.

T’a = 2^(1/4) × Ta ….[eqn 11] – that is, sphere A is about 20% warmer in case 2 compared with case 1.

Using the numbers, T’a = 230 K (-43ºC). And Tb = 193 K (-81ºC)
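
For anyone who wants to check the numbers, here’s the whole calculation in a few lines of Python:

```python
import numpy as np

sigma = 5.67e-8         # Stefan-Boltzmann constant
P = 1000.0              # internal power, W
r_a, r_b = 1.0, 1.01    # radii, m

# Case 1: P = 4*pi*ra^2 * sigma * Ta^4
T_a1 = (P / (4 * np.pi * r_a**2 * sigma)) ** 0.25

# Case 2: P = 2*pi*ra^2 * sigma * Ta'^4, then Tb from the re-arranged eqn 8d
T_a2 = (P / (2 * np.pi * r_a**2 * sigma)) ** 0.25
T_b = (r_a**2 / (2 * r_b**2)) ** 0.25 * T_a2

print(f"case 1: Ta  = {T_a1:.0f} K")                      # ~194 K
print(f"case 2: Ta' = {T_a2:.0f} K, Tb = {T_b:.0f} K")    # ~230 K, ~193 K
```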

Explaining the Results

In case 2, the inner sphere, A, has its temperature increase by 36K even though the same energy production takes place inside. Obviously, this can’t be right because we have created energy??.. let’s come back to that shortly.

Notice something very important – Tb in case 2 is almost identical to Ta in case 1. The difference is actually only due to the slight difference in surface area. Why?

The system has an energy production, P, in both cases.

  • In case 1, the sphere A is the boundary transferring energy to space and so its equilibrium temperature must be determined by P
  • In case 2, the shell B is the boundary transferring energy to space and so its equilibrium temperature must be determined by P

Now let’s confirm the mystery unphysical totally fake invented energy.

Let’s compare the flux emitted from A in case 1 and case 2. I’ll call it R.

  • R(case 1) = 80 W/m²
  • R(case 2) = 159 W/m²

This is obviously rubbish. The same energy source inside the sphere and we doubled the sphere’s energy production!!! Get this idiot to take down this post, he has no idea what he is writing..

Yet if we check the energy balance we find that 80 W/m² is being “created” by our power source, and the “extra mystery” energy of 79 W/m² is coming from our outer shell. In any given second no energy is created.

The Mystery Invented Energy – Revealed

When we snapped the outer shell over the sphere we made it harder for heat to get out of the system. Energy in = energy out, in steady state. When we are not in steady state: energy in – energy out = energy retained. Energy retained is internal energy which is manifested as temperature.

We made it hard for heat to get out, which accumulated energy, which increased temperature.. until finally the inner sphere A was hot enough for all of the internally generated energy, P, to get out of the system.

Let’s add some information about the system: the heat capacity of the sphere = 1000 J/K; the heat capacity of the shell = 100 J/K. It doesn’t much matter what they are, it’s just to calculate the transients. We snap the shell – originally at 0K – around the sphere at time t=100 seconds and see what happens.

The top graph shows temperature, the bottom graph shows change in energy of the two objects and how much energy is leaving the system:

Bryan-sphere

At 100 seconds we see that instead of our steady state 1000W leaving the system, instead 0W leaves the system. This is the important part of the mystery energy puzzle.

We put a 0K shell around the sphere. This absorbs all the energy from the sphere. At time t=100s the shell is still at 0K so it emits 0W/m². It heats up pretty quickly, but remember that emission of radiation is not linear with temperature so you don’t see a linear relationship between the temperature of shell B and the energy leaving to space. For example at 100K, the outward emission is 6 W/m², at 150K it is 29 W/m² and at its final temperature of 193K, it is 79 W/m² (=1000 W in total).

As the shell heats up it emits more and more radiation inwards, heating up the sphere A.

The mystery energy has been revealed. The addition of a radiation barrier stopped energy leaving, which stored heat. The way equilibrium is finally restored is due to the temperature increase of the sphere.
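
Here’s a minimal sketch of that transient calculation – simple Euler time-stepping with the heat capacities given above, under the “slightly larger shell” approximation that all radiation from the sphere reaches the shell and vice versa:

```python
import numpy as np

sigma = 5.67e-8
P = 1000.0
A_a = 4 * np.pi * 1.0**2    # sphere surface area, ra = 1 m
A_b = 4 * np.pi * 1.01**2   # shell surface area, rb = 1.01 m
C_a, C_b = 1000.0, 100.0    # heat capacities, J/K
dt = 0.1                    # time step, s

T_a, T_b = 193.6, 0.0       # sphere at case-1 equilibrium; shell snapped on at 0 K

for _ in range(int(5000 / dt)):
    sphere_out = A_a * sigma * T_a**4   # all absorbed by the shell
    shell_in = A_b * sigma * T_b**4     # inner-surface emission, absorbed by the sphere
    shell_out = A_b * sigma * T_b**4    # outer-surface emission, lost to space

    T_a += dt * (P + shell_in - sphere_out) / C_a
    T_b += dt * (sphere_out - shell_in - shell_out) / C_b

print(f"T_a -> {T_a:.0f} K, T_b -> {T_b:.0f} K, to space -> {shell_out:.0f} W")
```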

Of course, for some strange reason an army of people thinks this is totally false. Well, produce your equations.. (this never happens)

All we have done here is used conservation of energy and the Stefan Boltzmann law of emission of thermal radiation.

Bryan needs no introduction on this blog, but if we were to introduce him it would be as the fearless champion of Gerlich and Tscheuschner.

Bryan has been trying to teach me some basics on heat transfer from the Ladybird Book of Thermodynamics. In hilarious fashion we both already agree on that particular point.

So now here is a problem for Bryan to solve.

Of course, in Game of Thrones fashion, Bryan can nominate his own champion to solve the problem.

Case A

Spherical body, A, of radius ra, with an emissivity, εa =1. The sphere is in the vacuum of space.

It is internally heated by a mystery power source (let’s say nuclear, but it doesn’t matter), with power input = P.

The sphere radiates into deep space, let’s say the temperature of deep space = 0K to make the maths simpler.

1. What is the equation for the equilibrium surface temperature of the sphere, Ta?

Case B

The condition of case A, but now body A is surrounded by a slightly larger spherical shell, B, which of course is itself now surrounded by deep space at 0K.

B has a radius rb, with an emissivity, εb =1. This shell is highly conductive and very thin.

2a. What is the equation for the new equilibrium surface temperature, Ta’?

2b. What is the equation for the equilibrium temperature, Tb, of shell B?

 

Notes:

The reason for the “slightly larger shell” is to avoid “complex” view factor issues. Of course, I’m happy to relax the requirement for “slightly larger” and let Bryan provide the more general answer.

The reason for the “highly conductive” and “thin” outer shell, B, is to avoid any temperature difference between the inside and the outside surfaces of the shell. That is, we can assume the outside surface is at the same temperature as the inside surface – both at temperature, Tb.

For anyone who wants to visualize some numbers: ra=1m, P=1000W, rb=1.01m

This problem takes a couple of minutes to solve on a piece of paper. I suspect we will wait a decade for Bryan’s answer. But I love to be proved wrong!

In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity-induced buoyancy forces tend to raise toward the top the hotter, and thus lighter, fluid that is at the bottom. This tendency is counteracted by viscous and dissipative forces of the fluid so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz (1994) – an excellent introduction to dynamic and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63  for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the start and end 50 seconds – click to expand:

Lorenz63-5ksecs-x-y-vs-time-499px

Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the conditions diverge, and 5000 seconds later the system still exhibits similar “non-periodic” characteristics.

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:

Lorenz63-5ksecs-x-vs-time-zoom-499px

Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:

Lorenz63-5ksecs-x-y-vs-time-average-499px

Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:

Lorenz63-25ksecs-x-time-1000s-average-499px

 Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
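
For reference, a sketch of the sampling procedure – it assumes the simulated series is available as a numpy array; the Lorenz integration itself is sketched in note 1:

```python
import numpy as np

def sample_means(x, dt, sample_seconds, n_samples=10_000, seed=0):
    """Means of randomly chosen contiguous samples from series x (time step dt)."""
    rng = np.random.default_rng(seed)
    length = int(sample_seconds / dt)
    starts = rng.integers(0, x.size - length, n_samples)
    return np.array([x[s:s + length].mean() for s in starts])

# With x the 25,000-second Lorenz 'x' series at dt = 0.01 s:
#   for secs in (25, 100, 500, 3000):
#       histogram the values of sample_means(x, 0.01, secs)
```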

Here is the result:

Lorenz-25000s-histogram-of-means-2-conditions

Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:

Lorenz-25000s-delta-histogram

Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:

Lorenz63-first-50s-x-y-z-499px

Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up on the same “attractor” in phase space. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:

Lorenz63-first-50s-x-y-z-3-different-conditions-499px

Figure 9 – Click to expand

Here’s an example (similar to figure 8) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”

From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s lot of theory around all of this as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.

Conclusion

Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable

References

Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)

Notes

Note 1: The Lorenz equations:

dx/dt = σ (y-x)

dy/dt = rx – y – xz

dz/dt = xy – bz

where

x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number divided by its critical value

b = “another parameter”

And the “classic parameters” are σ=10, b = 8/3, r = 28
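
A minimal sketch of integrating these equations (using scipy; the time span, step size and tolerances are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 8.0 / 3.0   # the "classic parameters"

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t_eval = np.arange(0, 50, 0.01)
# Three nearly identical initial conditions, as in figure 2:
for y0 in (1.0, 1.001, 1.002):
    sol = solve_ivp(lorenz, (0, 50), [0.0, y0, 0.0],
                    t_eval=t_eval, rtol=1e-9, atol=1e-9)
    print(y0, sol.y[0, -1])   # x at t = 50 - already wildly different
```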

Note 2: Lorenz 1963 has over 13,000 citations so I haven’t been able to find out if this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations, more illustrating some important characteristics of chaotic systems

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by 1,000,000 times, the prediction time increases by a massive 2½ times.

There are many classes of systems but in the climate blogosphere world two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead so what chance have we got of forecasting climate 100 years from now.

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this question, and with much conviction. A reminder for new readers: on this blog opinions are not so interesting, although occasionally entertaining. Instead, try to explain what evidence there is for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.

Pendulums

The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, it involves only two variables:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:

Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

– we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I were near a very cold black surface held at 170K (-103ºC) and measured the radiation emitted, it would be 47 W/m². If we then double the temperature of this surface to 340K (67ºC), what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation. But it’s not correct.

The right answer is 758 W/m² – 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this: double a quantity and you don’t get double the result.
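
For anyone who wants to check, the arithmetic comes straight from the Stefan-Boltzmann law for a black surface:

    j = \sigma T^4, \qquad \sigma = 5.67 \times 10^{-8} \ \mathrm{W\,m^{-2}\,K^{-4}}

    j(170\,\mathrm{K}) = 5.67 \times 10^{-8} \times 170^4 \approx 47 \ \mathrm{W/m^2}

    j(340\,\mathrm{K}) = 2^4 \times j(170\,\mathrm{K}) = 16 \times 47 \approx 758 \ \mathrm{W/m^2}

Doubling the temperature multiplies the emission by 2⁴ = 16, because of the fourth power.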

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed, most of the resistance comes from the wind, so we will focus on that. Typically the wind resistance increases as the square of the speed, so if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no headwind the power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off. Now with 150W you only get to 14 km/hr; it takes almost 500W to reach your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday the wind speed is the same, so you ride in the opposite direction (and take the train home). Now with only 6W you get to 25 km/hr, and to get to 50 km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared. Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
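
For readers who like to check numbers, here is a minimal Matlab sketch that reproduces the figures above; the constant k is calibrated from the 150W / 25 km/hr no-wind day, and the speed at a given power comes from numerically inverting the cubic of note 4 (fzero is just one way to do that):

    % Power to hold road speed v (m/s) with wind w (m/s, positive = tailwind): P = k*v*(v-w)^2
    v0 = 25/3.6;                 % 25 km/hr in m/s
    k  = 150 / v0^3;             % calibrated from 150W at 25 km/hr with no wind
    P  = @(v, w) k .* v .* (v - w).^2;
    fprintf('No wind, 50 km/hr:  %.0f W\n', P(50/3.6, 0));        % ~1200 W
    fprintf('Headwind, 25 km/hr: %.0f W\n', P(25/3.6, -20/3.6));  % ~490 W
    fprintf('Headwind, 50 km/hr: %.0f W\n', P(50/3.6, -20/3.6));  % ~2350 W
    fprintf('Tailwind, 50 km/hr: %.0f W\n', P(50/3.6, 20/3.6));   % ~430 W
    v150 = fzero(@(v) P(v, -20/3.6) - 150, v0);                   % speed for 150W into the headwind
    fprintf('Headwind, 150 W:    %.1f km/hr\n', v150*3.6);        % ~14 km/hr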

The real problem with nonlinearity isn’t keeping track of these kinds of numbers. You get used to the fact that real science – real-world relationships – has these kinds of factors, and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday: it’s almost 5x, so you might think the relationship is something like the square of speed. On the Tuesday it’s about 70x, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering, you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “non-linear” equations – aka the real world – which turn out to be mostly insoluble. So non-linear isn’t something special; it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed, and time, to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum needed to create a chaotic system (note 2).

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions that differ from each other by only 0.1%:

Forced damped pendulum, β=0.7: start angular speed 0.1 vs 0.1001

Figure 2

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped into 0–2π (but then we get a discontinuity on the graph that doesn’t match the real world). We can map the graph onto a cylinder plot instead, but it’s a mess of reds and blues.
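
The mapping step used for the histograms below is just the modulo operation; a two-line sketch (the series here is a stand-in, not output from the pendulum run):

    theta = cumsum(0.01*randn(1e5, 1));   % stand-in for an accumulated angle series
    hist(mod(theta, 2*pi), 50);           % map into [0, 2*pi) and histogram the positions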

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:

Histograms for 10,000 seconds

Figure 3

We can see they are similar but not identical (note the different scales on the y-axis).

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Histograms for 100,000 seconds

Figure 4

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and the amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over tens of millions of years we might have trouble predicting exactly the parameters of the earth’s orbit and orientation – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury failure into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, Non-Chaotic Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is qualitatively different, of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future state might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will lie within a given range x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” depend on the initial conditions. He also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where the precise conditions at the start of each new season can move us into a different set of long-term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.

Conclusion?

This is just a brief look at some of the basic ideas.

Other Articles in the Series

Part Two – Lorenz 1963

References

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability?, Edward Lorenz, Tellus (1990) – free paper

Notes

Note 1 – The equation is easiest to “manage” after time is transformed so that ωt -> t. That is, the period of the external driving becomes T0 = 2π in the transformed time base.

Then:

d²θ/dt² + γ’ dθ/dt + (α + β cos t) sin θ = 0

where θ = angle, γ’ = γ/ω, α = g/Lω², β = h0/L;

these parameters are based on: γ = viscous drag coefficient, ω = angular frequency of the driving, g = acceleration due to gravity = 9.8 m/s², L = length of the pendulum, h0 = amplitude of the driving of the pivot point

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables, as the sketch below shows.
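
The classic one-variable example is the logistic map; a minimal sketch (r = 4 is the standard fully chaotic choice, not a value from this article):

    % Logistic map: one variable, discrete time steps, chaotic at r = 4
    r = 4; x = zeros(1, 200); x(1) = 0.3;
    for n = 1:199
        x(n+1) = r * x(n) * (1 - x(n));   % next value depends non-linearly on the current one
    end
    plot(x, '.-');                        % non-periodic, but bounded within [0, 1]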

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.
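
As an illustration of how such a run might be set up – not the exact code used here – the note 1 equation can be written as a first-order system for ode45. Only β = 0.7 comes from the article; the γ’ and α values below are illustrative guesses:

    % Forced damped pendulum: theta'' + gp*theta' + (alpha + beta*cos(t))*sin(theta) = 0
    gp = 0.1; alpha = 1; beta = 0.7;             % gp and alpha assumed; beta as in the text
    f = @(t, s) [s(2);                           % s(1) = theta, s(2) = d(theta)/dt
                 -gp*s(2) - (alpha + beta*cos(t))*sin(s(1))];
    [t1, s1] = ode45(f, [0 10000], [0; 0.1]);    % start angular speed 0.1
    [t2, s2] = ode45(f, [0 10000], [0; 0.1001]); % ...and 0.1001
    plot(t1, s1(:,1), 'b', t2, s2(:,1), 'r');    % the two position traces diverge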

Note 4 – Force = k(v-w)², where k is a constant, v = velocity, w = wind speed. Work done = force x distance moved, so Power, P = Force x velocity.

Therefore:

P = kv(v-w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.
