
Archive for September, 2011

I don’t think this is a simple topic.

The essence of the problem is this:

Can we measure the top of atmosphere (TOA) radiative changes and the surface temperature changes and derive the “climate sensitivity” from the relationship between the two parameters?

First, what do we mean by “climate sensitivity”?

In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature.

Climate Sensitivity Is All About Feedback

Climate sensitivity is all about trying to discover whether the climate system has positive or negative feedback.

If the average surface temperature of the earth increased by 1°C and the radiation to space consequently increased by 3.3 W/m², this would be approximately “zero feedback”.

Why is this zero feedback?

If somehow the average temperature of the surface of the planet increased by 1°C – say due to increased solar radiation – then as a result we would expect a higher flux into space. A hotter planet should radiate more. If the increase in flux = 3.3 W/m² it would indicate that there was no negative or positive feedback from this solar forcing (note 1).

Suppose the flux increased by 0. That is, the planet heated up but there was no increase in energy radiated to space. That would be positive feedback within the climate system – because there would be nothing to “rein in” the increase in temperature.

Suppose the flux increased by 5 W/m². In this case it would indicate negative feedback within the climate system.

The key value is the “benchmark” no feedback value of 3.3 W/m². If the value is above this, it’s negative feedback. If the value is below this, it’s positive feedback.

Essentially, the higher the radiation to space as a result of a temperature increase, the more the planet is able to “damp out” temperature changes that are forced via solar radiation, or via increases in inappropriately-named “greenhouse” gases.

Consider the extreme case where, as the planet warms up, it actually radiates less energy to space – clearly this will lead to runaway temperature increases (less energy radiated means more energy absorbed, which increases temperature, which leads to even less energy radiated…).

As a result we measure sensitivity in W/m².K, which we read as “Watts per square meter per Kelvin” – and a 1 K change is the same as a 1°C change.

Theory and Measurement

In many subjects, researchers’ algebra converges on conventional usage, but in the realm of climate sensitivity everyone has apparently adopted their own notation. As a note for non-mathematicians, there is nothing inherently wrong with this, but it makes each paper confusing – especially for newcomers, and probably for everyone.

I mostly adopt the Spencer & Braswell 2008 terminology in this article (see reference and free link below). I do change their α (climate sensitivity) into λ (which everyone else uses for this value), mainly because I had already produced a number of graphs with λ before starting to write the article.

The model is a very simple 1-dimensional model of temperature deviation into the ocean mixed layer, from the first law of thermodynamics:

C.∂T/∂t = F + S ….[1]

where C = heat capacity of the ocean, T = temperature anomaly, t = time, F = total top of atmosphere (TOA) radiative flux anomaly, S = heat flux anomaly into the deeper ocean

What does this equation say?

Heat capacity times change in temperature equals the net change in energy

- this is a simple statement of energy conservation, the first law of thermodynamics.

The TOA radiative flux anomaly, F, is a value we can measure using satellites. T is average surface temperature, which is measured around the planet on a frequent basis. But S is something we can’t measure.

What is F made up of?

Let’s define:

F = N + f – λT ….[1a]

where N = random fluctuations in radiative flux, f = “forcings”, and λT is the all important climate response or feedback.

The forcing f is, for the purposes of this exercise, defined as something added into the system which we believe we can understand and estimate or measure. This could be solar increases/decreases, it could be the long term increase in the “greenhouse” effect due to CO2, methane and other gases. For the purposes of this exercise it is not feedback. Feedback includes clouds and water vapor and other climate responses like changing lapse rates (atmospheric temperature profiles), all of which combine to produce a change in radiative output at TOA.

And an important point is that for the purposes of this theoretical exercise, we can remove f from the measurements because we believe we know what it is at any given time.

N is an important element. Effectively it describes the variations in TOA radiative flux due to the random climatic variations over many different timescales.

The climate sensitivity is the value λT, where λ is the value we want to find.

Noting the earlier comment about our assumed knowledge of ‘f’ (note 2), we can rewrite eqn 1:

C.∂T/∂t = – λT + N + S ….[2]

remembering that – λT + N = F is the radiative value we measure at TOA

Regression

If we plot F (measured TOA flux) vs T we can estimate λ from the slope of the least squares regression.

However, there is a problem with the estimate:

λ (est) = –Cov[F,T] / Var[T] ….[3]

          = –Cov[– λT + N, T] / Var[T]

          = λ – Cov[N, T] / Var[T]

where Cov[a,b] = covariance of a with b, and Var[a]= variance of a

Forster & Gregory 2006

This oft-cited paper (reference and free link below) calculates the climate sensitivity over 1985–1996 from measured ERBE data as 2.3 ± 1.3 W/m².K.

Their result indicates positive feedback, or at least, a range of values which sit mainly in the positive feedback space.

On the method of calculation they say:

This equation includes a term that allows F to vary independently of surface temperature.. If we regress (- λT+ N) against T, we should be able to obtain a value for λ. The N terms are likely to contaminate the result for short datasets, but provided the N terms are uncorrelated to T, the regression should give the correct value for λ, if the dataset is long enough..

[Terms changed to SB2008 for easier comparison, and emphasis added].

Simulations

Like Spencer & Braswell, I created a simple model to demonstrate why measured results might deviate from the actual climate sensitivity.

The model is extremely simple:

  • a “slab” model of the ocean of a certain depth
  • daily radiative noise (normally distributed with mean=0, and standard deviation σN)
  • daily ocean flux noise (normally distributed with mean=0, and standard deviation σS)
  • radiative feedback calculated from the temperature and the actual climate sensitivity
  • daily temperature change calculated from the daily energy imbalance
  • regression of the whole time series to calculate the “apparent” climate sensitivity

In this model, the climate sensitivity, λ = 3.0 W/m².K.

In some cases the regression is done with the daily values, and in other cases the regression is done with averaged values of temperature and TOA radiation across time periods of 7, 30 & 90 days. I also put a 30-day low pass filter on the daily radiative noise in one case (before “injecting” into the model).

Some results are based on 10,000 days (about 30 years), with 100,000 days (300 years) as a separate comparison.

In each case the estimated value of λ is calculated from the mean of 100 simulation results. The second graph shows the standard deviation, σλ, of these simulation results, which is a useful guide to the likely spread of measured values of λ (if the massive oversimplifications within the model were true). The vertical axis (for the estimate of λ) is the same in each graph for easier comparison, while the vertical axis for the standard deviation changes according to the results, due to the large changes in this value.
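The simulation steps listed above can be sketched in a few lines of Python. This is a minimal sketch under assumed constants (a 50 m slab, unit noise amplitudes, 10,000 days, 20 runs instead of 100) – it is not Spencer & Braswell’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (my assumptions, not Spencer & Braswell's exact setup)
LAM = 3.0                       # true climate sensitivity, W/m^2.K
C = 1000.0 * 4200.0 * 50.0      # heat capacity of a 50 m ocean slab, J/m^2.K
DT = 86400.0                    # time step: one day, in seconds
NDAYS = 10_000                  # about 30 years

def simulate(avg_days=1):
    """One run: return the regression estimate of lambda."""
    T = 0.0
    temps, fluxes = [], []
    for _ in range(NDAYS):
        n = rng.normal()            # daily radiative noise, N (sigma = 1 W/m^2)
        s = rng.normal()            # daily deep-ocean flux noise, S
        F = n - LAM * T             # measurable TOA flux anomaly: F = N - lam*T
        temps.append(T)
        fluxes.append(F)
        T += (F + s) * DT / C       # first law: C.dT/dt = F + S
    T_arr, F_arr = np.array(temps), np.array(fluxes)
    if avg_days > 1:                # block-average before regressing
        m = NDAYS // avg_days * avg_days
        T_arr = T_arr[:m].reshape(-1, avg_days).mean(axis=1)
        F_arr = F_arr[:m].reshape(-1, avg_days).mean(axis=1)
    slope = np.cov(F_arr, T_arr)[0, 1] / np.var(T_arr, ddof=1)
    return -slope                   # lambda estimate is minus the slope

# Mean over many runs, as in the article's mean of 100 simulations
est_daily = np.mean([simulate(1) for _ in range(20)])
est_monthly = np.mean([simulate(30) for _ in range(20)])
print(est_daily, est_monthly)       # daily regression ~3.0; monthly averages biased low
```

The daily regression recovers λ because each day’s noise is independent of that day’s temperature; averaging into months before regressing introduces the correlation and pulls the estimate below 3.0.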

First, the variation as the number of time steps changes and as the averaging period changes from 1 (no averaging) through to 90-days. Remember that the “real” value of λ = 3.0 :

Figure 1

Second, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The daily temperature and radiative flux are averaged monthly before the regression calculation is carried out:

Figure 2

As figure 2, but for 100,000 time steps (instead of 10,000):

Figure 3

Third, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The regression calculation is carried out on the daily values:

Figure 4

As figure 4, but with 100,000 time steps:

Figure 5

Now against averaging period and also against low pass filtering of the “radiative flux noise”:

Figure 6

As figure 6 but with 100,000 time steps:

Figure 7

Now with the radiative “noise” as an AR(1) process (see Statistics and Climate – Part Three – Autocorrelation), vs the autoregressive parameter φ and vs the number of averaging periods: 1 (no averaging), 7, 30, 90 with 10,000 time steps (30 years):

Figure 8

And the same comparison but with 100,000 timesteps:

Figure 9

Discussion of Results

If we consider first the changes in the standard deviation of the estimated value of climate sensitivity we can see that the spread in the results is much higher in each case when we consider 30 years of data vs 300 years of data. This is to be expected. However, given that in the 30-year cases σλ is similar in magnitude to λ we can see that doing one estimate and relying on the result is problematic. This of course is what is actually done with measurements from satellites where we have 30 years of history.

Second, we can see that mostly the estimates of λ tend to be lower than the actual value of 3.0 W/m².K. The reason is quite simple and is explained mathematically in the next section which non-mathematically inclined readers can skip.

In essence, it is related to the idea in the quote from Forster & Gregory. If the radiative flux noise is uncorrelated to temperature then the estimates of λ will be unbiased. By the way, remember that by “noise” we don’t mean instrument noise, although that will certainly be present. We mean the random fluctuations due to the chaotic nature of weather and climate.

If we refer back to Figure 1 we can see that when the averaging period = 1, the estimates of climate sensitivity are equal to 3.0. In this case, the noise is uncorrelated to the temperature because of the model construction. Slightly oversimplifying, today’s temperature is calculated from yesterday’s noise. Today’s noise is a random number unrelated to yesterday’s noise. Therefore, no correlation between today’s temperature and today’s noise.

As soon as we average the daily data into monthly results which we use to calculate the regression then we have introduced the fact that monthly temperature is correlated to monthly radiative flux noise (note 3).

This is also why Figures 8 & 9 show a low bias for λ even with no averaging of daily results. These figures are calculated with autocorrelation for the radiative flux noise. This means that past values of flux are correlated to current values – and so once again, daily temperature will be correlated with daily flux noise. This is also the case where low pass filtering is used to create the radiative noise data (as in Figures 6 & 7).

Maths

x = slope of the line from the linear regression

x = Cov[- λT + N, T] / Var[T] ….[3]

It’s not easy to read equations with complex terms in the numerator and denominator on the same line, so breaking it up:

Cov[- λT + N, T] = E[ (- λT + N)T ] – E[- λT + N]E[T], where E[a] = expected value of a

= E[-λT²] + E[NT] + λ.E[T].E[T] – E[N].E[T]

= -λ { E[T²] – (E[T])² } + E[NT] – E[N].E[T] …. [4]

And

Var[T] = E[T²] – (E[T])² …. [5]

So

x = -λ + { E[NT] – E[N].E[T] } / { E[T²] – (E[T])² } …. [6]

And we see that the slope of the regression line is biased whenever N is correlated with T. If the expected value of N is zero, the E[N].E[T] term drops out, but E[NT] ≠ 0 unless N is uncorrelated with T.

Note of course that we will use the negative of the slope of the line to estimate λ, and so estimates of λ will be biased low.
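Equation [6] is easy to check numerically. A minimal sketch on synthetic data (the 0.5 correlation coefficient between N and T is an arbitrary assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 3.0
T = rng.normal(size=200_000)
N = 0.5 * T + rng.normal(size=200_000)   # noise deliberately correlated with T
F = -lam * T + N                         # measured flux: F = -lam*T + N

slope = np.cov(F, T)[0, 1] / np.var(T, ddof=1)
# eq [6]: slope = -lam + Cov[N,T]/Var[T] = -3.0 + 0.5 = -2.5
print(-slope)   # lambda estimate ~2.5, biased low by the correlated noise
```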

As a note for the interested student, why is it that some of the results show λ > 3.0?

Murphy & Forster 2010

Murphy & Forster picked up the challenge from Spencer & Braswell 2008 (reference below but no free link unfortunately). The essence of their paper is that, using more realistic values for radiative noise and ocean mixed-layer depth, the error in the calculation of λ is very small:

From Murphy & Forster (2010)

Figure 10

The value bₐ on the vertical axis is a normalized error term (rather than the estimate of λ).

Evaluating their arguments requires more work on my part, especially analyzing some CERES data, so I hope to pick that up in a later article. [Update, Spencer has a response to this paper on his blog, thanks to Ken Gregory for highlighting it]

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Stephens (2005), reference and free link below:

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating [the sensitivity] from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to "(1)" is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.

Conclusion

Measuring the relationship between top of atmosphere radiation and temperature is clearly very important if we want to assess the all-important climate sensitivity.

Spencer & Braswell have produced a very useful paper which demonstrates some obvious problems with deriving the value of climate sensitivity from measurements. Although I haven’t attempted to reproduce their actual results, I have done many other model simulations to demonstrate the same problem.

Murphy & Forster have produced a paper which claims that the actual magnitude of the problem demonstrated by Spencer & Braswell is quite small in comparison to the real value being measured (as yet I can’t tell whether their claim is correct).

The value called climate sensitivity might be a variable (i.e., not a constant value) and it might turn out to be even harder to measure than it already seems (and it doesn’t seem easy).

Articles in this Series

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity

References

The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data, Forster & Gregory, Journal of Climate (2006)

Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005)

Notes

Note 1 – The reason why the “no feedback climate response” = 3.3 W/m².K is a little involved but is mostly due to the fact that the overall climate is radiating around 240 W/m² at TOA.
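A quick back-of-envelope check of the order of magnitude (a pure blackbody sketch; the accepted no-feedback value of 3.3 W/m².K additionally accounts for the vertical and spectral structure of the atmosphere, which this ignores):

```python
# Differentiate F = sigma*T^4: dF/dT = 4*sigma*T^3 = 4F/T
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/m^2.K^4
F_TOA = 240.0                      # flux radiated to space at TOA, W/m^2 (note 1)
T_eff = (F_TOA / SIGMA) ** 0.25    # effective emission temperature
dF_dT = 4.0 * F_TOA / T_eff
print(T_eff, dF_dT)                # ~255 K, ~3.8 W/m^2.K
```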

Note 2 – This is effectively the same as saying f=0. If that seems alarming I note in advance that the exercise we are going through is a theoretical exercise to demonstrate that even if f=0, the regression calculation of climate sensitivity includes some error due to random fluctuations.

Note 3 – If the model had one random number for last month’s noise which was used to calculate this month’s temperature then the monthly results would also be free of correlation between the temperature and radiative noise.


In a discussion a little while ago on What’s the Palaver? – Kiehl and Trenberth 1997, one of our commenters asked about the surface forcing and how it could possibly lead to anything like the IPCC-projected temperature change for doubling of CO2.

Following a request for clarification, he added:

..We first look at the RHS. We believe that the atmosphere will also increase in temperature by roughly the same amount, so there will be no change in the conductive term. The increase in the Radiative term is roughly 5.5W/m².

The increase in the evaporative term is much more difficult, but is believed to be in the range 2-7%/DegC. So the increase in the evaporative term is 1.5 to 5.5W/m², for a total change on the RHS of 7 to 11 W/m².

Since balance is an assumption, the LHS changed by the same amount. The surface sensitivity is therefore 0.095 to 0.15 DegC/W/m².

Note that this is the sensitivity to changes in Surface Forcing, whatever the source. It is NOT the response to Radiative Forcing – there is no response of the surface to Radiative Forcing, it can only respond to Sunlight and Back-Radiation.

[See the whole comment and exchange for the complete picture].

These are good questions and no doubt many people have similar ones. The definition of radiative forcing (see CO2 – An Insignificant Trace Gas? Part Seven – The Boring Numbers) is at the tropopause, which is the top of the troposphere (around 12km above the surface).

Why is it at the tropopause and not at the surface? The great Ramanathan explains (in his 1998 review paper):

..Manabe & Wetherald’s [1967] paper, which convincingly demonstrated that the CO2-induced surface warming is not solely determined by the energy balance at the surface but by the energy balance of the coupled surface-troposphere-stratosphere system.

The underlying concept of the Manabe-Wetherald model is that the surface and the troposphere are so strongly coupled by convective heat and moisture transport that the relevant forcing governing surface warming is the net radiative perturbation at the tropopause, simply known as radiative forcing.

In essence, the reason we consider the value at the tropopause is that it is the best value to tell us what will happen at the surface. It is an idea now established for over 40 years, although to some it might sound bizarre. So we will try to make sense of it here.

Here is a schematic originating in Ramanathan’s 1981 paper, but extracted here from his 1998 review paper:

From Ramanathan (1998)


Figure 1

The first thing to pay attention to is the right hand side – 1. CO2 direct surface heating – which is shown as 1.2 W/m².

The surface forcing from a doubling of CO2 is around 1 W/m² compared with around 4 W/m² at the tropopause. The surface forcing is a lot less than at the top of atmosphere!

Before too much joy sets in, let’s consider what these concepts represent. They are essentially idealized quantities, derived from considering the instantaneous change in concentrations of CO2.

As CO2 shows a steady increase year on year, the idea of doubling overnight is clearly not in accord with reality. However, it is a useful comparison point and helps to get many ideas straight. If instead we said, “CO2 increasing by 1% per year”, we would need to define a time period for this 1% annual increase, plus how long after the end before a new balance was restored. It wouldn’t make solving the problem any easier, and it would make the results harder to understand. By contrast, GCMs do consider a steadily rising CO2 level according to whatever scenario they are considering.

So, with the idea of an instantaneous doubling, if the surface increase in radiative forcing is less than the tropopause increase in radiative forcing, this must mean that the balance of energy is absorbed within the atmosphere. And also, we have to consider what happens as a result of the surface energy imbalance.

The numbers I use here are Ramanathan’s, from his 1981 paper. Later, more accurate numbers have been calculated but they don’t affect the main points of this analysis. The reason for reviewing his analysis is that some (but not all) of the inherent responses of the climate system are explicitly calculated – making it easier to understand than the output of a GCM.

Immediate Response

The immediate result of this doubling of CO2 is a reduced emission of radiation (OLR = outgoing longwave radiation) from the climate system into space. See the Atmospheric Radiation and the “Greenhouse” Effect series for detailed explanations of why.

At the tropopause the OLR reduces by 3.1 W/m², and downward emission from the stratosphere into the troposphere increases by 1.2 W/m².

This results in a net forcing at the tropopause of 4.3 W/m². Most of the radiation from the atmosphere to the surface (as a result of more CO2) is absorbed by water vapor. So at the surface the DLR (downward longwave radiation) increases by only 1.2 W/m² – this is the (immediate) surface forcing. Here is a simple graphical explanation of why the OLR decreases and the DLR increases:

Figure 2 – Click for a larger image

Response After a Few Months

The stratosphere cools and reaches a new radiative equilibrium. This reduces the downward emission from the stratosphere by a small amount. The new value of radiative forcing at the tropopause = 4.2 W/m².

Response After Many Decades

The surface-troposphere warms until a new equilibrium is reached – the radiative forcing at the tropopause has returned to zero.

The Surface

So let’s now consider the surface. Take a look at Figure 1 again. The values/ranges we will consider are calculated by a model. This doesn’t mean they are correct. It means that applying well-understood processes in a simplistic way gives us a “first order” result. The reason for assessing this kind of approach is that our mental models are usually less accurate than a calculated result which draws on well-understood physics.

As Ramanathan says in his 1998 paper:

As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.

Process 1 is as already described – the surface forcing increases by just over 1 W/m². But the balance of 3 W/m² goes into heating the troposphere.

Process 2 – The warming of the troposphere results in increased downward radiation to the surface (because the hotter the body, the higher the radiation emitted). The calculated value is an additional 2.3 W/m², so the surface imbalance is now 3.5 W/m² and the surface temperature must increase in response. Upward surface radiation and/or sensible and latent heat will increase to balance.

Process 3 – The surface emission of radiation increases at around 5.5 W/m² for every 1°C of surface temperature increase. But this is almost balanced by increased downward radiation from the atmosphere (“back radiation”). The net effect is only about 10% of the change in upward radiation. So latent heat and sensible heat increase to restore the energy balance, but this also heats the troposphere.

Process 4 – The tropospheric humidity increases. This increases the emissivity of the atmosphere near the surface, which increases the back radiation.
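The ~5.5 W/m² per 1°C figure in process 3 follows directly from differentiating the Stefan-Boltzmann law at a typical surface temperature (288 K is my assumed value, not a number from the paper):

```python
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W/m^2.K^4
T_SURF = 288.0                  # assumed average surface temperature, K
dF_dT = 4 * SIGMA * T_SURF**3   # d(sigma*T^4)/dT, evaluated at T_SURF
print(dF_dT)                    # ~5.4 W/m^2 per K of surface warming
```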

So essentially some cycles are reinforcing each other (=positive feedback). The question is about the value of the new equilibrium point.

From Ramanathan (1981)


Figure 3

In Ramanathan’s 1981 paper he gives some basic calculations before turning to GCM results. The basic calculations are quite interesting because one of the purposes of the paper was to explain why some model results of the day produced very small equilibrium temperature changes.

Sadly for some readers, a little maths is necessary to reproduce the result. It is simple maths because it is based on simple concepts – as already presented. As much as possible I follow the equation numbers and notations from Ramanathan’s 1981 paper.

Calculations

Energy balance at an “average” surface:

Upward flux = Downward flux

→  LH + SH + F↑ = F↓ + S + ΔR  ….[2]

where LH = latent heat, SH = sensible heat, F↑ = surface emitted upward radiation, F↓ = surface downward radiation from the atmosphere, S = solar radiation absorbed, ΔR = instantaneous change in energy absorbed at the surface due to an increase in CO2

And see note 1. We have simple formulas for the left hand side.

F↑ = σT⁴ ….[3a]

Latent heat and sensible heat flux have “bulk aerodynamic formulas” (note 2):

LH = ρ L C_D V (q*_M – q_S)   ….[3b]

SH = ρ c_p C_D V (T_M – T_S)   ….[3c]

Where ρ = density of air = 1.3 kg/m³, L = latent heat of vaporization of water = 2.5 × 10⁶ J/kg, C_D = empirically determined coefficient ≈ 1.3 × 10⁻³, V = average wind speed at some reference height above the surface ≈ 5 m/s, q*_M = specific humidity at saturation at the surface temperature of the ocean, q_S = specific humidity at the reference height, T_M = temperature of the ocean at the surface, T_S = temperature of the air at the reference height (typically 10 m).

To give an idea of typical values, for every 1°C difference between the surface and the air at the reference height, SH = 8.5 W/m²K, and with a relative humidity of 80% at the reference height (and 100% at the ocean surface), LH = 55 W/m²K.

Now we consider changes.

T_M′ is the change in the surface temperature of the ocean as the result of the increased CO2, with similar notation for the other changes in values. Missing out a few steps that you can read in the paper:

T_M′ = [ ΔR(0) + ΔF↓(2) + ΔF↓(3) ] / { [ ∂LH/∂T_M + ∂SH/∂T_M + 4σT_M³ ] + [ ∂LH/∂T_S + ∂SH/∂T_S ]·T_S′/T_M′ }   ….[13]

This probably seems a little daunting to a lot of readers, so let’s explain it:

  • The first term on the top line, ΔR(0), is the surface radiative forcing from the increase in CO2
  • The ΔF↓(2) and ΔF↓(3) terms are the changes in downward radiation as a result of processes 2 and 3 described above
  • The ∂/∂T_M terms are the changes in upward flux due to only the ocean surface temperature changing
  • The ∂/∂T_S terms are the changes in upward flux due to only the atmospheric temperature near the surface changing
  • The ∂/∂T_M bracket ≈ 30 W/m²K @ 15°C; the ∂/∂T_S bracket ≈ -8.5 W/m²K @ 15°C (note 3)

The smaller the denominator, the larger the increase in temperature. Note that there are two competing terms:

  • As the surface temperature of the ocean increases the heat transfer from the ocean to the atmosphere increases
  • As the atmospheric temperature (just above the ocean surface) increases the heat transfer from the ocean to the atmosphere decreases
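The ≈30 W/m²K term can be estimated from the bulk formulas. A sketch using the section’s constants; as note 3 says, these partial derivatives are not shown in the paper, and the Magnus-type saturation vapour pressure formula (and hence the ∂LH/∂T_M value) is my own assumption:

```python
import math

RHO, CP, CD, V = 1.3, 1004.0, 1.3e-3, 5.0  # air density, c_p, drag coeff, wind
L = 2.5e6        # latent heat of vaporization, J/kg
RV = 461.0       # gas constant for water vapour, J/kg.K
P = 101325.0     # surface pressure, Pa
SIGMA = 5.67e-8  # Stefan-Boltzmann constant
T = 288.15       # 15 C, in K

def e_sat(temp):
    # Magnus-type saturation vapour pressure, Pa (an assumed approximation)
    return 611.2 * math.exp(17.67 * (temp - 273.15) / (temp - 29.65))

# dq*/dT from Clausius-Clapeyron: de_s/dT = e_s*L/(Rv*T^2), with q* = 0.622*e_s/p
dq_dT = 0.622 * e_sat(T) * L / (RV * T**2) / P

dSH_dTm = RHO * CP * CD * V         # ~8.5 W/m^2.K
dLH_dTm = RHO * L * CD * V * dq_dT  # ~14 W/m^2.K
planck = 4 * SIGMA * T**3           # 4*sigma*T_M^3 ~ 5.4 W/m^2.K

total = dSH_dTm + dLH_dTm + planck
print(total)                        # ~28 W/m^2.K, close to the quoted ~30
```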

As an interesting comparison, Ramanathan reviewed the methods and results of Newell & Dopplick (1979), who found a changed surface temperature, T_M′ = 0.04°C, as a result of CO2 doubling. Effectively, very little change in surface temperature as a result of doubling of CO2.

Ramanathan states that the calculations of Newell & Dopplick had ignored the ΔF↓ terms and the ∂/∂T_S terms. Ignoring the ΔF↓ terms means that the heating of the atmosphere is ignored. Ignoring the ∂/∂T_S terms means that the effect of the ocean surface heating is inflated – if the ocean surface heats and the atmosphere just above somehow stayed the same, then the heat transferred would be higher than if the atmospheric temperature also increased as a result (because heat transfer depends on the temperature difference).

I expect that many people doing their own estimates will be working from similar assumptions.

Later Work

Here is a graphic from Andrews et al (2009), reference and free link below, which shows the simplified idea:

From Andrews et al (2009)

Figure 4

The paper itself is well worth reading and perhaps will be the subject of another article at a later date.

Conclusion

I haven’t demonstrated that the surface will warm by 3°C for a doubling of CO2. But I hope I have demonstrated the complexity of the processes involved and why a simplistic calculation of how the surface responds immediately to the surface forcing is not the complete answer. It is nowhere near the complete answer.

The surface temperature change as a result of doubling of CO2 is, of course, a massively important question to answer. GCMs are necessarily involved, despite their limitations.

Re-iterating what Ramanathan said in his 1998 paper in case anyone thinks I am making a case for a 3°C surface temperature increase:

As a caveat, the system we considered up to this point to elucidate the principles of warming is a highly simplified linear system. Its use is primarily educational and cannot be used to predict actual changes.

References

Trace Gas Greenhouse Effect and Global Warming, V. Ramanathan, Ambio (1998)

The role of ocean-atmosphere interactions in the CO2 climate problem, V Ramanathan, Journal of Atmospheric Sciences (1981)

Thermal equilibrium of the atmosphere with a given distribution of atmospheric humidity, Manabe & Wetherald, Journal of Atmospheric Sciences (1967)

A Surface Energy Perspective on Climate Change, Andrews, Forster & Gregory, Journal of Climate (2009)

Notes

Note 1: The equation ignores the transfer of heat into the ocean depths

Note 2: The “bulk aerodynamic formulas” – as they have become known – are more usable versions of the fundamental equations of heat and water vapor flux. Upward sensible heat flux, SH = ρc_p<wT>, where w = vertical velocity, T = temperature, so <wT> is the time average of the product of vertical velocity and temperature (the corresponding moisture flux uses <wq>, where q = specific humidity). However, turbulent motions are so rapid, changing on such short time intervals, that measurement of these values is usually impossible (or requires intensive measurement with specialist equipment in one location). We can write,

w = <w> + w’, where <w> = mean vertical velocity and w’ = deviation of vertical velocity from the mean, likewise T = <T> + T’.

So:

<wT> = <w><T> + <w’ T’> or, Total = Mean + Eddy

Near the surface the mean vertical motion is very small compared with the turbulent vertical velocity and so the turbulent component, <w’ T’>, dominates. Therefore,

SH = ρ c_p <w’ T’>

LH = ρ L <w’ q’>

where c_p = specific heat capacity of air, ρ = density of air, L = latent heat of vaporization of water, and q’ = deviation of specific humidity from its mean

By various thermodynamic arguments, and especially by lots of empirical measurements, an estimate of heat transfer can be made via the bulk aerodynamic formulas shown above, which use the average horizontal wind speed at the surface in conjunction with the coefficients of heat transfer, which are related to the friction term for the wind at the ocean surface.
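The Total = Mean + Eddy decomposition above is easy to verify numerically on synthetic data (the series below are arbitrary illustrative numbers, not measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic series: small mean vertical velocity, large turbulent fluctuations
w = 0.01 + 0.5 * rng.normal(size=100_000)
T = 288.0 + 2.0 * rng.normal(size=100_000)

total = (w * T).mean()                                # <wT>
mean_part = w.mean() * T.mean()                       # <w><T>
eddy_part = ((w - w.mean()) * (T - T.mean())).mean()  # <w'T'>

# Total = Mean + Eddy holds exactly (an algebraic identity for sample means)
print(total - (mean_part + eddy_part))   # ~0, up to floating-point roundoff
```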

Note 3: The calculation of each of the partial derivative terms is not shown in the paper, these are my calculations. I believe that ∂LH/∂TS = 0, most of the time – this is because if the atmosphere at the reference height is not saturated then an increase in the atmospheric temperature, TS, does not change the moisture flux, and therefore, does not change the latent heat. I might be wrong about this, and clearly some of the time this assumption I have made is not valid.

