Archive for the ‘Measurement’ Category

Measurements of outgoing longwave radiation (OLR) are essential for understanding many aspects of climate. Many people are confused about the factors that affect OLR. And its rich variability is often not appreciated.

There have been a number of satellite projects since the late 1970s, with the highlight (prior to 2001) being the five-year period of ERBE.

AIRS & CERES were launched on the NASA AQUA satellite in May 2002. Both provide much better quality data, with improved accuracy and resolution.

The CERES instrument measures in three channels:

  • Solar Reflected Radiation (Shortwave): 0.3 – 5.0 μm
  • Window: 8 – 12 μm
  • Total: 0.3 to > 100 μm

AIRS is an infrared spectrometer/radiometer that covers the 3.7–15.4 μm spectral range with 2378 spectral channels. It runs alongside two microwave instruments (better viewing through clouds): AMSU is a 15-channel microwave radiometer operating between 23 and 89 GHz; HSB is a four-channel microwave radiometer that makes measurements between 150 and 190 GHz.

From Aumann et al (2003):

The simultaneous use of the data from the three instruments provides both new and improved measurements of cloud properties, atmospheric temperature and humidity, and land and ocean skin temperatures, with the accuracy, resolution, and coverage required by numerical weather prediction and climate models.

Among the important datasets that AIRS will contribute to climate studies are:

  • atmospheric temperature profiles;
  • sea-surface temperature;
  • land-surface temperature and emissivity;
  • relative humidity profiles and total precipitable water vapor;
  • fractional cloud cover;
  • cloud spectral IR emissivity;
  • cloud-top pressure and temperature;
  • total ozone burden of the atmosphere;
  • column abundances of minor atmospheric gases such as CO, CH4, CO2, and N2O;
  • outgoing longwave radiation and longwave cloud radiative forcing;
  • precipitation rate

More about AIRS = Atmospheric Infrared Sounder, at Wikipedia, plus the AIRS website.

More about CERES = Clouds and the Earth’s Radiant Energy System, at Wikipedia, plus the CERES website – where you can select and view or download your own data.

How do CERES & AIRS compare?

CERES and AIRS have different jobs. CERES directly measures OLR. AIRS measures lots of spectral channels that don’t cover the complete range needed to just “add up” OLR. Instead, OLR can be calculated from AIRS data by deriving surface temperature, water vapor concentration vs height, CO2 concentration, etc. and using a radiative transfer algorithm to determine OLR.

Here is a comparison of the two measurement systems from Susskind et al (2012) over almost a decade:


From Susskind et al (2012)

Figure 1

The first thing to observe is how closely the two datasets track each other. The second is that there is a bias between them. But because we have two high accuracy measurement systems on the same satellite we do have a reasonable opportunity to identify the source of the bias (total OLR as shown in the graph is made up of many components). If we only had one satellite, and then a new satellite took over with a small time overlap, any biases would be much more difficult to identify. Of course, that doesn’t stop many people from trying, but success would be much harder to judge.

In this paper, as we might expect, the error sources between the two datasets get considerable discussion. One important point is that version 6 AIRS data (prototyped at the time the paper was written) is much closer to CERES. The second point, probably more interesting, is that once we look at anomaly data the results are very close. We’ll see a number of comparisons as we review what the paper shows.

The authors comment:

Behavior of OLR over this short time period should not be taken in any way as being indicative of what long-term trends might be. The ability to begin to draw potential conclusions as to whether there are long-term drifts with regard to the Earth’s OLR, beyond the effects of normal interannual variability, would require consistent calibrated global observations for a time period of at least 20 years, if not longer. Nevertheless, a very close agreement of the 8-year, 10-month OLR anomaly time series derived using two different instruments in two very different manners is an encouraging result.

It demonstrates that one can have confidence in the 1° x 1° OLR anomaly time series as observed by each instrument over the same time period. The second objective of the paper is to explain why recent values of global mean, and especially tropical mean, OLR have been strongly correlated with El Niño/La Niña variability and why both have decreased over the time period under study.

Why Has OLR Varied?

The authors define the average rate of change (ARC) of an anomaly time series as “the slope of the linear least squares fit of the anomaly time series”.
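The ARC is nothing more than the slope of an ordinary least-squares fit. Here is a minimal Python sketch — the anomaly series is invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical monthly OLR anomaly series (W/m^2) with a small negative trend
months = np.arange(106)                      # roughly 8 years, 10 months
rng = np.random.default_rng(0)
anomaly = -0.01 * months + rng.normal(0.0, 0.3, months.size)

# ARC = slope of the linear least-squares fit of the anomaly time series
slope_per_month = np.polyfit(months, anomaly, 1)[0]
arc = slope_per_month * 12 * 10              # convert to W/m^2 per decade
```

With these invented numbers the fitted slope comes out close to the built-in trend of -1.2 W/m² per decade.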


We can see excellent correlation between the two datasets and we can see that OLR has, on average, decreased over this time period.

Below is a comparison with the El Nino index.

We define the term El Niño Index as the difference of the NOAA monthly mean oceanic Sea Surface Temperature (SST), averaged over the NOAA Niño-4 spatial area 5°N to 5°S latitude and 150°W westward to 160°E longitude, from an 8-year NOAA Niño-4 SST monthly mean climatology which we generated based on use of the same 8 years that we used in the generation of the OLR climatologies.

From Susskind et al (2012)

Figure 2
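The index construction quoted above — each month's Niño-4 area-mean SST minus a monthly climatology built from the same 8 years — can be sketched as follows. The SST values here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 8 years of monthly-mean Nino-4 SST (deg C): seasonal cycle + noise
sst = 28.0 + 1.5 * np.sin(2 * np.pi * np.arange(96) / 12) + rng.normal(0.0, 0.4, 96)

# 8-year monthly climatology: average all the Januaries, all the Februaries, ...
climatology = sst.reshape(8, 12).mean(axis=0)

# El Nino index: each month's SST minus that calendar month's climatology
nino_index = sst - np.tile(climatology, 8)
```

By construction the anomalies average to zero over the climatology period, which is why the index measures departures from "normal" rather than the seasonal cycle itself.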

It gets interesting when we look at the geographical distribution of the OLR changes over this time period:

From Susskind et al (2012)

Figure 3 – Click to Enlarge

We see that the tropics have the larger changes (also seen clearly in figure 2) but that some regions of the tropics have strong positive values and other regions have strong negative values. The grey square centered on 180° longitude is the Nino-4 region. Values as large as +4 W/m²/decade are found in this region. And values as large as -3 W/m²/decade are found over Indonesia (the WPMC region).

Let’s look at the time series to see how these changes in OLR took place:


Figure 4 – Click to Enlarge

The main parameters which affect changes in OLR month to month and year to year are a) surface temperatures b) humidity c) clouds. As temperature increases, OLR increases. As humidity and clouds increase, OLR decreases.
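These competing effects can be seen in a toy calculation. The function below is my own invention purely for illustration (the fixed 150 W/m² "atmospheric emission" term and the transmittance values are made-up numbers, not a real radiative model): surface emission scales as σT⁴, while higher humidity and cloud reduce the fraction of surface emission that escapes to space:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def toy_olr(t_surface, transmittance):
    """Toy OLR: the fraction of surface emission that escapes to space, plus
    a fixed term for emission from the colder atmosphere. 'transmittance'
    falls as humidity and cloud increase. Illustrative numbers only."""
    return transmittance * SIGMA * t_surface ** 4 + (1.0 - transmittance) * 150.0

warmer = toy_olr(302.0, 0.6) - toy_olr(300.0, 0.6)    # warmer surface raises OLR
cloudier = toy_olr(300.0, 0.5) - toy_olr(300.0, 0.6)  # more cloud/vapour lowers OLR
```

Both directions match the statement above: `warmer` is positive, `cloudier` is negative.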

Here are the changes in surface temperature, specific humidity at 500mbar and cloud fraction:

From Susskind et al (2012)

Figure 5 – Click to Enlarge

So, focusing again on the Nino-4 region, we might expect to find that OLR has decreased because of the surface temperature decrease (lower emission of surface radiation) – or we might expect to find that the OLR has increased because the specific humidity and cloud fraction have decreased (thus allowing more surface and lower atmosphere radiation to make it through to TOA). These are mechanisms pulling in opposite directions.

In fact we see that the reduced specific humidity and cloud fraction have outweighed the effect of the surface temperature decrease. So the physics should be clear (still considering the Nino-4 region) – if surface temperature has decreased and OLR has increased then the explanation is the reduction in “greenhouse” gases (in this case water vapor) and clouds, which contain water.


We can see similar relationships through correlations.

The term ENC in the graphs stands for El Nino Correlation. This is essentially the correlation of the time-series data with time-series temperature change in the Nino-4 region (more specifically the Nino-4 temperature less the global temperature).

As the Nino-4 temperature declined over the period in question, a positive correlation means the value declined, while a negative correlation means the value increased.

The first graph below is the geographical distribution of rate of change of surface temperature. Of course we see that the Nino-4 region has been declining in temperature (as already seen in figure 2). The second graph shows this as well, but also indicates that the regions west and east of the Nino-4 region have a stronger (negative) correlation than other areas of larger temperature change (like the arctic region).

The third graph shows that 500 mb humidity has been decreasing in the Nino-4 region, and increasing to the west and east of this region. Likewise for the cloud fraction. And all of these are strongly correlated to the Nino-4 time-series temperature:

From Susskind et al (2012)

Figure 6 – Click to expand

For OLR correlations with Nino-4 temperature we find a strong negative correlation, meaning the OLR has increased in the Nino-4 region. And the opposite – a strong positive correlation – in the highlighted regions to east and west of Nino-4:

From Susskind et al (2012)

Figure 7 – Click to expand

Note the two highlighted regions:

  • to the west: WPMC, Warm Pool Maritime Continent;
  • and to the east: EEPA, Equatorial Eastern Pacific and Atlantic region

We can see the correlations between the global & tropical OLR and the OLR changes in these regions:


Figure 8 – Click to expand

Both WPMC and EEPA regions together explain the reduction over 10 years in OLR. Without these two regions the change is indistinguishable from zero.


This article is interesting for a number of reasons.

It shows the amazing variability of climate – we can see adjacent regions in the tropics with completely opposite changes over 10 years.

It shows that CERES gets almost identical anomaly results (changes in OLR) to AIRS. CERES directly measures OLR, while AIRS retrieves surface temperature, humidity profiles, cloud fractions and “greenhouse” gas concentrations and uses these to calculate OLR.

AIRS results demonstrate how surface temperature, humidity and cloud fraction affect OLR.

OLR has – over the globe – decreased over 10 years. This is a result of the El Niño phase – at the start of the measurement period we were coming out of a large El Niño event, and at the end of the measurement period we were in a La Niña event.

The reduction in OLR is explained by the change in the two regions identified, which are themselves strongly correlated to the Nino-4 region.


Interannual variability of outgoing longwave radiation as observed by AIRS and CERES, Susskind et al, Journal of Geophysical Research (2012) – paywall paper

AIRS/AMSU/HSB on the Aqua Mission: Design, Science Objectives, Data Products, and Processing Systems, Aumann et al, IEEE Transactions on Geoscience and Remote Sensing (2003) – free paper


In the last article we had a look at the ocean’s “mixed layer depth” (MLD) and its implications for the successful measurement of climate sensitivity (assuming such a parameter exists as a constant).

In Part One I created a Matlab model which reproduced the same problems as Spencer & Braswell (2008) had found. This model had one layer (an “ocean slab” model) to represent the MLD, with a “noise” flux into the deeper ocean (and a radiative noise flux at top of atmosphere). Murphy & Forster claimed that longer time periods require an MLD of increased depth to “model” the extra heat flow into the deeper ocean over time:

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010). For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations.

This seems like it might make sense – if we wanted to keep a “zero dimensional model”. But it’s questionable whether the model retains any value with this “fudge”. Because heat actually moves from the mixed layer into the deeper ocean (rather than the mixed layer increasing in depth), I instead enhanced the model to create a heat flux from the MLD down through a number of ocean layers, with a parameter called the vertical eddy diffusivity determining this heat flux.

So the model is now a 1D model with a parameterized approach to ocean convection.

Eddy Diffusivity

The concept here is an analogy to conductivity, for the case where convection rather than conduction is the primary mover of heat.

Heat flow by conduction is governed by a material property called conductivity and by the temperature difference. Changes in temperature are governed by heat flow and by the heat capacity. The result is this equation for reference and interest – so don’t worry if you don’t understand it:

∂T/∂t = α ∂²T/∂z²  – the 1-d version (see note 1)

where T = temperature, t = time, α = thermal diffusivity and z = depth

What it says in almost plain English is that the change in temperature with respect to time is equal to the thermal diffusivity times the change in gradient of temperature with depth. Don’t worry if that’s not clear (there is an explanation of the simple steps required to calculate this in note 1).

Now the thermal diffusivity, α:

α = k/(cpρ), where k = conductivity, cp = specific heat capacity and ρ = density

So, an important bit to understand..

  • if the conductivity is high and the heat capacity is low then temperature can change quickly
  • if the conductivity is high and the heat capacity is high then it slows down temperature change, and
  • if the conductivity is low and the heat capacity is high then temperature takes much longer to change

Many researchers have attempted to measure an average value for eddy diffusivity in the ocean (and in lakes). The concept here, as explained in Part Two, is that turbulent motions of the ocean move heat much more effectively than conduction. The value can’t be calculated from first principles because that would mean solving the problem of turbulence, which is one of the toughest problems in physics. Instead it has to be estimated from measurements.

There is an inherent problem with eddy diffusivity for vertical heat transfer that we will come back to shortly.

There is also a minor problem of notation that is “solved” here by changing the notation. Usually conductivity is written as “k”. However, most papers on eddy diffusivity write diffusivity as “k”, sometimes “K”, sometimes “κ” (Greek ‘kappa’) – creating potential confusion, so I revert to “α”. And to make it clear that it is the convective value rather than the conductive value, I use αeddy. And for the equivalent parameter to conductivity, keddy.

keddy = αeddycpρ

because cp ≈ 4200 J/kg.K and ρ ≈ 1000 kg/m³:

keddy = 4.2 x 10⁶ αeddy – it’s useful to be able to see what the diffusivity means in terms of an equivalent “conductivity” type parameter
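The conversion is a one-line calculation. A quick sketch, using the seawater values above (both approximate) and the Oeschger et al (1975) range discussed in the next section:

```python
CP = 4200.0    # specific heat of water, J/(kg K) - approximate
RHO = 1000.0   # density of water, kg/m^3 - approximate

def k_eddy(alpha_eddy):
    """Equivalent 'conductivity' (W/(m K)) for an eddy diffusivity in m^2/s."""
    return alpha_eddy * CP * RHO

k_low, k_high = k_eddy(1.0e-4), k_eddy(1.8e-4)  # Oeschger et al's range
ratio = k_low / 0.6                             # vs conductivity of still water
```

The results are about 420–760 W/m.K, i.e. roughly a thousand times the conductivity of still water.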

Measurements of Eddy Diffusivity

Oeschger et al (1975):

α is an apparent global eddy diffusion coefficient which helps to reproduce an average transport phenomenon consisting of a series of distinct and overlapping mechanisms.

Oeschger and his co-workers studied the problem via the diffusion into the ocean of 14C from nuclear weapons testing.

The range they calculated for αeddy = 1.0 x 10⁻⁴ to 1.8 x 10⁻⁴ m²/s.

This equates to keddy = 420 – 760 W/m.K. By comparison, the conductivity of still water is k = 0.6 W/m.K – making convection around 1,000 times more effective than conduction at moving heat vertically through the ocean.

Broecker et al (1980) took a similar approach to estimating this value and commented:

We do not mean to imply that the process of vertical eddy mixing actually occurs within the body of the main oceanic thermocline. Indeed, the values we require are an order of magnitude greater than those permitted by conventional oceanographic wisdom (see Garrett, 1979, for summary).

The vertical eddy coefficients used here should rather be thought of as parameters that take into account all the processes that transfer tracers across density horizons. In addition to vertical mixing by eddies, these include mixing induced by sediment friction at the ocean margins and mixing along the surface in the regions where density horizons outcrop.

Their calculation, like Oeschger’s, used a simple model with the observed values plugged in to estimate the parameter:

Anyone familiar with the water mass structure and ventilation dynamics of the ocean will quickly realize that the box-diffusion model is by no means a realistic representation. No simple modification to the model will substantially improve the situation.

To do so we must take a giant step in complexity to a new generation of models that attempt to account for the actual geometry of ventilation of the sea. We are as yet not in a position to do this in a serious way. At least a decade will pass before a realistic ocean model can be developed.

The values they calculated for eddy diffusivity were broken up into different regions:

  • αeddy(equatorial) = 3.5 x 10⁻⁵ m²/s
  • αeddy(temperate) = 2.0 x 10⁻⁴ m²/s
  • αeddy(polar) = 3.0 x 10⁻⁴ m²/s

We will use these values from Broecker to see what happens to the measurement problems of climate sensitivity when used in my simple model.

These two papers were cited by Hansen et al in their 1985 paper with the values for vertical eddy diffusivity used to develop the value of the “effective mixed depth” of the ocean.

In reviewing these papers and searching for more recent work in the field, I tapped into a rich vein of research that will be the subject of another day.

First, Ledwell et al (1998) who measured eddy diffusivity via SF6 that they injected into the ocean:

The diapycnal eddy diffusivity K estimated for the first 6 months was 0.12 ± 0.02 x 10⁻⁴ m²/s, while for the subsequent 24 months it was 0.17 ± 0.02 x 10⁻⁴ m²/s.

[Note: units changed from cm²/s into m²/s for consistency]

It is worth reading their comment on this aspect of ocean dynamics. (Note that isopycnal = along surfaces of constant density and diapycnal = across such surfaces):

The circulation of the ocean is severely constrained by density stratification. A water parcel cannot move from one surface of constant potential density to another without changing its salinity or its potential temperature. There are virtually no sources of heat outside the sunlit zone and away from the bottom where heat diffuses from the lithosphere, except for the interesting hydrothermal vents in special regions. The sources of salinity changes are similarly confined to the boundaries of the ocean. If water in the interior is to change potential density at all, it must be by mixing across density surfaces (diapycnal mixing) or by stirring and mixing of water of different potential temperature and salinity along isopycnal surfaces (isopycnal mixing).

Most inferences of dispersion parameters have been made from observations of the large-scale fields or from measurements of dissipation rates at very small scales. Unambiguously direct measurements of the mixing have been rare. Because of the stratification of the ocean, isopycnal mixing involves very different processes than diapycnal mixing, extending to much greater length scales. A direct approach to the study of both isopycnal and diapycnal mixing is to release a tracer and measure its subsequent dispersal. Such an experiment, lasting 30 months and involving more than 10⁵ km² of ocean, is the subject of this paper.

From Jayne (2009):

For example, the Community Climate System Model (CCSM) ocean component model uses a form similar to Eq. (1), but with an upper-ocean value of 0.1 x 10⁻⁴ m²/s and a deep-ocean value of 1.0 x 10⁻⁴ m²/s, with the transition depth at 1000 m.

However, there is no observational evidence to suggest that the mixing in the ocean is horizontally uniform, and indeed there is significant evidence that it is heterogeneous with spatial variations of several orders of magnitude in its intensity (Polzin et al. 1997; Ganachaud 2003).

More on eddy diffusivity measurements in another article – the parameter has a significant impact on modeling of the ocean in GCMs and there is a lot of current research into this subject.

Eddy Diffusivity and Buoyancy Gradient

Sarmiento et al (1976) measured isotopes near the ocean floor:

Two naturally occurring isotopes can be applied to the determination of the rate of vertical turbulent mixing in the deep sea: 222Rn (half-life 3.824 days) and 228Ra (half-life 5.75 years). In this paper we discuss the results from fourteen 222Rn and two 228Ra profiles obtained as part of the GEOSECS program.

From these results we conclude that the most important factor influencing the vertical eddy diffusivity is the buoyancy gradient [(g/ρ)(∂ρpot/∂z)]. The vertical diffusivity shows an inverse proportionality to the buoyancy gradient.

Their paper is very much about the measurements and calculations of the deeper ocean, but is relevant for anywhere in the ocean, and helps explain why the different values for different regions were obtained by Broecker that we saw earlier. (Prof. Wallace S. Broecker was a co-author on this paper as well, and has authored/co-authored hundreds of papers on the ocean.)

What is the buoyancy gradient and why does it matter?

Cold fluids sink and hot fluids rise. This is because cold substances contract and so are more dense. So in general, in the ocean, the colder water is below and the warmer water above. Probably everyone knows this.

The buoyancy gradient is a measure of how strong this effect is. The change in density with depth determines how resistant the ocean is to being overturned. If the ocean was totally stable no heat would ever penetrate below the mixed layer. But it does. And if the ocean was totally stable then the measurements of 14C from nuclear testing would be zero below the mixed layer.

But it is not surprising that the more stable the ocean is due to the buoyancy gradient the less heat diffuses down by turbulent motion.

And this is why the estimates by Broecker shown earlier have a much lower value of diffusivity for the tropics than for the poles. In general the poles are where deep convection takes place – lots of cold water sinks, mixing the ocean – and the tropics are where much weaker upwelling takes place – because the ocean surface is strongly heated. This is part of the large scale motion of the ocean, known as the thermohaline circulation. More on this another day.
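The direction of Sarmiento et al's result can be sketched as a simple scaling. Everything numeric below is invented for illustration (the density profiles, the layer thickness, and especially the proportionality constant); only the inverse relationship between stratification and diffusivity is the point:

```python
G = 9.81          # gravitational acceleration, m/s^2
RHO_REF = 1027.0  # reference seawater density, kg/m^3

def buoyancy_gradient(rho_top, rho_bottom, dz):
    """(g/rho) d(rho_pot)/dz across a layer of thickness dz, depth downward."""
    return (G / RHO_REF) * (rho_bottom - rho_top) / dz

def alpha_eddy(buoyancy_grad, scale=1.0e-7):
    """Diffusivity inversely proportional to the buoyancy gradient, after
    Sarmiento et al (1976); 'scale' is a hypothetical constant."""
    return scale / buoyancy_grad

# Strongly stratified tropics vs weakly stratified high latitudes (made-up profiles)
n2_tropics = buoyancy_gradient(1023.0, 1027.0, 200.0)
n2_polar = buoyancy_gradient(1026.8, 1027.0, 200.0)
```

The strongly stratified "tropical" profile yields the smaller diffusivity, consistent with the direction of Broecker's regional values.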

Now water is largely incompressible, which means that the density gradient is determined only by temperature and salinity. This creates the problem that eddy diffusivity is a value which is not only parameterized, but also dependent on the vertical temperature difference in the ocean.

Heat flow also depends on temperature difference, but with the opposite relationship. This is not something to untangle today. Today we will just see what happens to our simple model when we use the best estimates of vertical eddy diffusivity.

Modeling, Non-Linearity and Climate Sensitivity Measurement Problems

Murphy & Forster agreed in part with Spencer & Braswell about the variation in radiative noise from CERES measurements. I quote at length, because the Murphy & Forster paper is not freely available:

For the parameter N, SB08 use a random daily shortwave flux scaled so that the standard deviation of monthly averages of outgoing radiation (N – λT) is 1.3 W/m².

They base this on the standard deviation of CERES shortwave data between March 2000 and December 2005 for the oceans between 20°N and 20°S.

We have analyzed the same dataset and find that, after the seasonal cycle and slow changes in forcing are removed, the standard deviation of monthly means of the shortwave radiation is 1.24 W/m², close to the 1.3 W/m² specified by SB08. However, longwave (infrared) radiation changes the energy budget just as effectively from the earth as shortwave radiation (reflected sunlight). Cloud systems that might induce random fluctuations in reflected sunlight also change outgoing longwave radiation. In addition, the feedback parameter λ is due to both longwave and shortwave radiation.

Modeled total outgoing radiation should therefore be compared with the observed sum of longwave and shortwave outgoing radiation, not just the shortwave component. The standard deviation of the sum of longwave and shortwave radiation in the same CERES dataset is 0.94 W/m². Even this is an upper limit, since imperfect spatial sampling and instrument noise contribute to the standard deviation.

[Note I change their α (climate feedback) to λ for consistency with previous articles].

And they continue:

We therefore use 0.94 W/m² as an upper limit to the standard deviation of outgoing radiation over the tropical oceans. For comparison, the standard deviation of the global CERES outgoing radiation is about 0.55 W/m².

All of these points seem valid (however, I am still in the process of examining CERES data, and can’t comment on their actual values of standard deviation. Apart from the minor challenge of extracting the data from the netCDF format there is a lot to examine. A lot of data and a lot of issues surrounding data quality).

However, it raised an interesting idea about non-linearity. Readers who remember Part One will know that as radiative noise increases and ocean MLD decreases, the measurement problem gets worse. And as the radiative noise decreases and ocean MLD increases, the measurement problem goes away.

If we average global radiative noise and global MLD, plug these values into a zero-dimensional model and get minimal measurement problem what does this mean?

Due to non-linearity, it tells us nothing.

Averaging the inputs, applying them to a global model (i.e., a zero-dimensional model) and calculating λest (from the regression) gets very different results from applying the inputs separately to each region, averaging the results and calculating λest.

I tested this with a simple model – I created two regions, one 10% of the surface area, the other 90%. In the larger region the MLD was 200m and the radiative noise was zero; in the smaller region the MLD was 20m and the (standard deviation of) radiative noise was varied from 0 to 2. The temperature and radiative flux were converted into an area-weighted time series and the regression produced large deviations from the real value of λ.

A similar run on a global model with an MLD of 180m and radiative noise of 0-0.2 shows an accurate assessment of λ.

This is to be expected of course.
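A minimal Python sketch of this kind of experiment follows (my actual model is in Matlab; the noise levels, the "true" λ = 3.0, the monthly regression and the area weights here are illustrative choices, and I include a non-radiative noise flux in both regions as in Part One):

```python
import numpy as np

RHO_CP = 4.2e6     # volumetric heat capacity of seawater, J/(m^3 K)
LAMBDA = 3.0       # "true" feedback parameter, W/(m^2 K) - illustrative
DT = 86400.0       # one-day time step, s
DAYS = 365 * 1000  # a long run, so that regression noise is small

def run_slab(mld, sd_rad, sd_nonrad, seed):
    """Zero-D slab: c dT/dt = S + N - lambda*T, with daily radiative
    noise N and daily non-radiative (ocean) noise S."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sd_rad, DAYS)
    forcing = (n + rng.normal(0.0, sd_nonrad, DAYS)).tolist()
    k = DT / (RHO_CP * mld)
    temps, tv = [0.0], 0.0
    for f in forcing[:-1]:
        tv += k * (f - LAMBDA * tv)
        temps.append(tv)
    return np.array(temps), n

def monthly(x):
    return x[: (x.size // 30) * 30].reshape(-1, 30).mean(axis=1)

def lambda_est(t, n):
    """Regress 'measured' outgoing radiation (lambda*T - N) on T, monthly means."""
    return np.polyfit(monthly(t), monthly(LAMBDA * t - n), 1)[0]

# Two regions: 90% of the area deep and radiatively quiet, 10% shallow and noisy
t_big, n_big = run_slab(200.0, 0.0, 1.0, seed=1)
t_small, n_small = run_slab(20.0, 1.0, 1.0, seed=2)
est_mix = lambda_est(0.9 * t_big + 0.1 * t_small, 0.9 * n_big + 0.1 * n_small)

# One global slab driven by the area-averaged inputs instead
t_glob, n_glob = run_slab(0.9 * 200.0 + 0.1 * 20.0, 0.1, 1.0, seed=3)
est_glob = lambda_est(t_glob, n_glob)
```

With these settings the area-weighted two-region series gives a λest noticeably below the true value, while the single slab driven by averaged inputs stays close to it – the non-linearity point made above.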

So with this in mind I tested the new 1D model with different values of ocean depth, eddy diffusivity, radiative noise, and the AR(1) parameter for the radiative noise. I used values for the tropical region as this is clearly the area most likely to upset the measurement – shallow MLD, higher radiative noise and weaker eddy diffusivity.

As best as I could determine from de Boyer Montegut’s paper, the average MLD for the 20°N – 20°S region is approximately 30m.

Here are the results using Oeschger’s value of eddy diffusivity for the tropics and the tropical value of radiative noise from MF2010 – varying ocean depth around 30m and the value of the AR(1) model for radiative noise:

Figure 1

For reference, as it’s hard to read off the graph, the value at 30m and φ=0.5 is λest = 2.3.

Using the current CCSM value of eddy diffusivity for the upper ocean:

Figure 2

For reference, the value at 30m and φ=0.5 is λest = 0.2 (compared with the real value of 3.0).

Note that these values are only for one region, not for the whole globe.

Another important point is that I have used the radiative noise value as the standard deviation of daily radiative noise. I have started to dig into CERES data to see whether such a value can be calculated, and also what typical value of autoregressive parameter should be used (and what kind of ARMA model), but this might take some time.

Yet smaller values of eddy diffusivity are possible for smaller regions, according to Jochum (2009). This would likely cause the problems of estimating climate sensitivity to become worse.

Simple Models

Murphy & Forster comment:

Although highly simplified, a single box model of the earth has some pedagogic value. One must remember that the heat capacity c and feedback parameter λ are not really constants, since heat penetrates more deeply into the ocean on long time scales and there are fast and slow climate feedbacks (Knutti et al. 2008).

It is tempting to add a few more boxes to account for land, ocean, different latitudes, and so forth. Adding more boxes to an energy balance model can be problematic because one must ensure that the boxes are connected in a physically consistent way. A good option is to instead consider a global climate model that has many boxes connected in a physically consistent manner.

The point being that no one believes a slab model of the ocean to be a model that gives really useful results. Spencer & Braswell likewise don’t believe that the slab model is in any way an accurate model of the climate.

They used such a model just to demonstrate a possible problem. Murphy & Forster’s criticism doesn’t seem to have solved the problem of “can we measure climate sensitivity?”

Or at least, it appears easy to show that slightly different enhancements of the simple model demonstrate continued problems in measuring climate sensitivity – due to the impact of radiative noise in the climate system.


I have produced a simple model and apparently demonstrated continued climate sensitivity measurement problems. This is in contrast to Murphy & Forster who took a different approach and found that the problem went away. However, my model has a more realistic approach to moving heat from the mixed layer into the ocean depths than theirs.

My model does have the drawback that the massive army of Science of Doom model testers and quality control champions are all away on their Xmas break. So the model might be incorrectly coded.

It’s also likely that someone else can come along and take a slightly enhanced version of this model and make the problem vanish.

I have used values for MLD and eddy diffusivity that seem to represent real-world values but I have no idea as to the correct values for standard deviation and auto-correlation of daily radiative noise (or appropriate ARMA model). These values have a big impact on the climate sensitivity measurement problem for reasons explained in Part One.

A useful approach to determining the effect of radiative noise on climate sensitivity measurement might be to use a coupled atmosphere-ocean GCM with a known climate sensitivity and an innovative way of removing radiative noise. These kind of experiments are done all the time to isolate one effect or one parameter.

Perhaps someone has already done this specific test?

I see other potential problems in measuring climate sensitivity. Here is one obvious problem – as the temperature of the mixed layer increases with continued increases in radiative forcing the buoyancy gradient increases and the eddy diffusivity reduces. We can calculate radiative forcing due to “greenhouse” gases quite accurately and therefore remove it from the regression analysis (see Spencer & Braswell 2008 for more on this). But we can’t calculate the change in eddy diffusivity and heat loss to the deeper ocean. This adds another “correlated” term that seems impossible to disentangle from the climate sensitivity calculation.

An alternative way of looking at this is that climate sensitivity might not be a constant – as already noted in Part One.

Articles in this Series

Measuring Climate Sensitivity – Part One

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths


Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008) – FREE

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

A box diffusion model to study the carbon dioxide exchange in nature, Oeschger et al, Tellus (1975)

Modeling the carbon system, Broecker et al, Radiocarbon (1980) – FREE

Climate response times: dependence on climate sensitivity and ocean mixing, Hansen et al, Science (1985)

The study of mixing in the ocean: A brief history, MC Gregg, Oceanography (1991) – FREE

Spatial Variability of Turbulent Mixing in the Abyssal Ocean, Polzin et al, Science (1997) – FREE

The Impact of Abyssal Mixing Parameterizations in an Ocean General Circulation Model, Steven R. Jayne, Journal of Physical Oceanography (2009)

The relationship between vertical eddy diffusion and buoyancy gradient in the deep sea, Sarmiento et al, Earth & Planetary Science Letters (1976)

Mixing of a tracer in the pycnocline, Ledwell et al, JGR (1998)

Impact of latitudinal variations in vertical diffusivity on climate simulations, Jochum, JGR (2009) – FREE

Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology, de Boyer Montegut et al, JGR (2004)


Note 1: The 1D version is really:

∂T / ∂t = ∂/∂z (α.∂T/∂z)

due to the fact that α can be a function of z (and definitely is in the case of the ocean).

Although this looks tricky – and it is tricky to find analytical solutions – solving the 1D version numerically is very straightforward and anyone can do it.

In plain English it looks something like:

– Heat flow into cell X is proportional to the temperature difference between cell X-1 and cell X

– Heat flow out of cell X is proportional to the temperature difference between cell X and cell X+1

– Change in temperature of cell X = (heat flow in – heat flow out) x time step / heat capacity
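As a minimal sketch of this scheme – in Python rather than the Matlab used for this series, with placeholder values for cell thickness and diffusivity – working with temperature directly so the heat capacity is folded into α, matching the ∂T/∂t = ∂/∂z (α.∂T/∂z) form in Note 1:

```python
import numpy as np

# 1D vertical diffusion of temperature with depth-dependent eddy diffusivity.
# All numbers here are illustrative placeholders, not values from the article.
nz, dz, dt = 20, 10.0, 3600.0      # 20 cells of 10 m each, 1-hour time step
alpha = np.full(nz, 1e-4)          # eddy diffusivity in m²/s; could vary with depth
T = np.linspace(20.0, 4.0, nz)     # warm at the surface, cold at depth

for _ in range(240):               # step forward 10 days
    a_int = 0.5 * (alpha[:-1] + alpha[1:])               # diffusivity at cell interfaces
    grad = np.diff(T) / dz                               # temperature gradient at interfaces
    flux = np.concatenate(([0.0], a_int * grad, [0.0]))  # zero flux through the boundaries
    T += dt * np.diff(flux) / dz   # change = (flow in - flow out) x time / (folded) heat capacity
```

The explicit scheme is stable here because α·dt/dz² ≪ 0.5; the zero-flux boundaries mean the mean temperature of the column is conserved while the profile smooths out.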

Note 2: I am in the process of examining CERES data. Apart from the challenge of extracting the data from the netCDF format there is a lot to examine. A lot of data and a lot of issues surrounding data quality.


In Measuring Climate Sensitivity – Part One we saw that there can be potential problems in attempting to measure the parameter called “climate sensitivity”.

Using a simple model Spencer & Braswell (2008) had demonstrated that even when the value of “climate sensitivity” is constant and known, measurement of it can be obscured for a number of reasons.

The simple model was a “slab model” of the ocean with a top of atmosphere imbalance in radiation.

Murphy & Forster (2010) criticized Spencer & Braswell for a few reasons including the value chosen for the depth of this ocean mixed layer. As the mixed layer depth increases the climate sensitivity measurement problems are greatly reduced.

First, we will consider the mixed layer in the context of that simple model. Then we will consider what it means in real life.

The Simple Model of Climate Sensitivity

The simple model used by Spencer & Braswell has a “mixed ocean layer” of depth 50m.

Figure 1

In the model the mixed layer is where all of the imbalance in top of atmosphere radiation gets absorbed.

The idea in the simple model is that the energy absorbed from the top of atmosphere gets mixed into the top layer of the ocean very quickly. In reality, as we will see, there isn't really a single well-defined layer, but it is a handy approximation.

Murphy & Forster commented:

For the heat capacity parameter c, SB08 use the heat capacity of a 50-m ocean mixed layer. This is too shallow to be realistic.

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010).

For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations.

Held et al. (2010) found an initial time constant τ = c/α of about four yr in the Geophysical Fluid Dynamics Laboratory global climate model. Schwartz (2007) used historical data to estimate a globally averaged mixed layer depth of 150 m, or 106 m if the earth were only ocean.

The idea is an attempt to keep the simplicity of one mixed layer for the model, but increase the depth of this mixed layer for longer time periods.

There is always a point where models – simplified versions of the real world – start to break down. This might be the case here.

The initial model was of a mixed layer of ocean, all at the same temperature because the layer is well-mixed – and with some random movement of heat between this mixed layer and the ocean depths. In a more realistic scenario, more heat flows into the deeper ocean as the length of time increases.

What Murphy & Forster are proposing is to keep the simple model and “account” for the ever increasing heat flow into the deeper ocean by using a depth of the mixed layer that is dependent on the time period.

If we do this perhaps the model will work, perhaps it won’t. By “work” we mean provide results that tell us something useful about the real world.

So I thought I would introduce some more realism (complexity) into the model and see what happened. This involves a bit of a journey.

Real Life Ocean Mixed Layer

Water is a very bad conductor of heat – as are plastic and other insulators. Good conductors of heat include metals.

However, in the ocean and the atmosphere conduction is not the primary heat transfer mechanism. It isn’t even significant. Instead, in the ocean it is convection – the bulk movement of fluids – that moves heat. Think of it like this – if you move a “parcel” of water, the heat in that parcel moves with it.

Let’s take a look at the temperature profile at the top of the ocean. Here the first graph shows temperature:

Soloviev & Lukas (1997)


Figure 2

Note that the successive plots are not at higher and higher temperatures – they are just artificially separated to make the results easier to see. During the afternoon the sun heats the top of the ocean. As a result we get a temperature gradient where the surface is hotter than a few meters down. At night and early morning the temperature gradient disappears. (No temperature gradient means that the water is all at the same temperature)

Why is this?

Once the sun sets the ocean surface cools rapidly via radiation and convection to the atmosphere. The result is colder water, which is heavier. Heavier water sinks, so the ocean gets mixed. This same effect takes place on a larger scale for seasonal changes in temperature.

And the top of the ocean is also well mixed due to being stirred by the wind.

A comment from de Boyer Montegut and his coauthors (2004):

A striking and nearly universal feature of the open ocean is the surface mixed layer within which salinity, temperature, and density are almost vertically uniform. This oceanic mixed layer is the manifestation of the vigorous turbulent mixing processes which are active in the upper ocean.

Here is a summary graphic from the excellent Marshall & Plumb:

From Marshall & Plumb (2008)

Figure 3

There’s more on this subject in Does Back-Radiation “Heat” the Ocean? – Part Three.

How Deep is the Ocean Mixed Layer?

This is not a simple question. Partly it is a measurement problem, and partly there isn’t a sharp demarcation between the ocean mixed layer and the deeper ocean. Various researchers have made an effort to map it out.

Here is a global overview, again from Marshall & Plumb:

Figure 4

You can see that the deeper mixed layers occur in the higher latitudes.

Comment from de Boyer Montegut:

The main temporal variabilities of the MLD [mixed layer depth] are directly linked to the many processes occurring in the mixed layer (surface forcing, lateral advection, internal waves, etc), ranging from diurnal [Brainerd and Gregg, 1995] to interannual variability, including seasonal and intraseasonal variability [e.g., Kara et al., 2003a; McCreary et al., 2001]. The spatial variability of the MLD is also very large.

The MLD can be less than 20 m in the summer hemisphere, while reaching more than 500 m in the winter hemisphere in subpolar latitudes [Monterey and Levitus, 1997].

Here is a more complete map by month. Readers probably have many questions about methodology and I recommend reading the free paper:

From de Boyer Montegut et al (2004)

Figure 5 – Click for a larger image

Seeing this map definitely had me wondering about the challenge of measuring climate sensitivity. Spencer & Braswell had used 50m MLD to identify some climate sensitivity measurement problems. Murphy & Forster had reproduced their results with a much deeper MLD to demonstrate that the problems went away.

But what happens if instead we retest the basic model using the actual MLD which varies significantly by month and by latitude?

So instead of one slab of ocean at a single chosen MLD, we break the globe up into regions, give each region a different value each month, and see what happens to the climate sensitivity problems.

By the way, I attempted to estimate the global annual (area weighted) average of MLD from the maps above, by eye. I also emailed the author of the paper to ask for some measurement details, but received no response.

My estimate of the data in this paper was a global annual area weighted average of 62 meters.

Trying Simple Models with Varying MLD

I updated the Matlab program from Measuring Climate Sensitivity – Part One. The globe is now broken up into 30º latitude bands, with the potential for a different value of mixed layer depth for each month of the year.

I created a number of different profiles:

Depth Type 0 – constant with month and latitude, as in the original article

Type 1 – using the values from de Boyer’s paper, as best as can be estimated from looking at the monthly maps.

Type 2 – no change each month, with scaling of 60ºN-90ºN = 100x the value for 0ºN – 30ºN, and 30ºN – 60ºN = 10x the value for 0ºN – 30ºN – similarly for the southern hemisphere.

Type 3 – alternating each month between Type 2 and its inverse, i.e., scaling of 0ºN – 30ºN = 100x the value for 60ºN-90ºN and 30ºN – 60ºN = 10x the value for 60ºN-90ºN.

Type 4 – no variation by latitude, but month 1 = 1000x month 4, month 2 = 100x month 4, month 3 = 10x month 4, repeating 3 times per year.

In each case the global annual (area weighted) average = 62m.

Essentially types 2-4 are aimed at creating extreme situations.
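The area weighting itself is straightforward. As a sketch (with a Type 2 style profile and depths derived purely for illustration, not the values used in the actual runs): the fraction of the Earth's surface in a latitude band between φ1 and φ2 is (sin φ2 − sin φ1)/2, so any profile can be scaled to hit the 62 m target:

```python
import numpy as np

# Area fraction of each 30° latitude band (90S-60S, 60S-30S, ..., 60N-90N).
edges = np.radians([-90, -60, -30, 0, 30, 60, 90])
weights = np.diff(np.sin(edges)) / 2.0          # sums to 1 over the whole sphere

# Type 2 style scaling: polar bands 100x, mid-latitude bands 10x the tropical value.
multipliers = np.array([100.0, 10.0, 1.0, 1.0, 10.0, 100.0])

target = 62.0                                   # global annual average MLD in meters
base = target / np.dot(weights, multipliers)    # implied tropical (0-30°) depth
depths = base * multipliers                     # MLD per band, area-weighted mean = 62 m
```

Note how small the area weights of the polar bands are (about 6.7% each) compared with the tropical bands (25% each) – which is why extreme polar scalings can coexist with the same global average.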

Here are some results (review the original article for some of the notation), recalling that the actual climate sensitivity, λ = 3.0 W/m².K:

Figure 6

Figure 7 – as figure 6 without 30-day averaging

Figure 8

Figure 9

Figure 10

Figure 11

Figure 12

What’s the message from these results?

In essence, type 0 (the original) and type 1 (using actual MLDs vs latitude and month from de Boyer’s paper) are quite similar – but not exactly the same.

However, if we start varying the MLD by latitude and month in a more extreme way the results come out very differently – even though the global average MLD is the same in each case.

This demonstrates that the temporal and area variation of MLD can have a significant effect and modeling the ocean as one slab – for the purposes of this enterprise – may be risky.


We haven’t considered the effect of non-linearity in these simple models. That is, what about interactions between different regions and months? If we created a yet more complex model, where heat flowed between regions dependent on the relative depths of the mixed layers, what would we find?

Losing the Plot?

Now, in case anyone has lost the plot by this stage – and it’s possible that I have – don’t get confused into thinking that we are evaluating GCMs and gosh aren’t they simplistic. No, GCMs contain very sophisticated modeling.

What we have been doing is tracing a path that started with a paper by Spencer & Braswell. This paper used a very simple model to show that with some random daily fluctuations in top of atmosphere radiative flux, perhaps due to clouds, the measurement of climate sensitivity doesn’t match the actual climate sensitivity.

We can do this in a model – prescribe a value and then test whether we can measure it. This is where this simple model came in. It isn’t a GCM.

However, Murphy & Forster came along and said if you use a deeper mixed ocean layer (which they claim is justified) then the measurement of climate sensitivity does more or less match the actual climate sensitivity (they also commented on the values chosen for radiative flux anomalies, a subject for another day).

What struck me was that the test model needs some significant improvement to be able to assess whether or not climate sensitivity can be measured. And this is with the caveat – if climate sensitivity is a constant.

The Next Phase – More Realistic Ocean Model

As Murphy & Forster have pointed out, the longer the time period, the more heat is “injected” into the deeper ocean from the mixed layer.

So a better model would capture this process explicitly, rather than just assigning a deeper mixed layer to a longer time period. Modeling true global ocean convection is an impossible task.

As a recap, conducted heat flow:

q” = k.ΔT/d

where q” = heat flow per unit area, k = conductivity, ΔT = temperature difference, and d = depth of layer

Take a look at Heat Transfer Basics – Part Zero for more on these basics.

For water, k = 0.6 W/m.K. So, as an example, if we have a 10ºC temperature difference across 1 km depth of water, q” = 0.006 W/m². This is tiny. Heat flow via conduction is insignificant. Convection is what moves heat in the ocean.

Many researchers have measured and estimated vertical heat flow in the ocean to come up with a value for vertical eddy diffusivity. This allows us to make some rough estimates of vertical heat flow via convection.
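To put numbers on that comparison: the rough approach is to treat convection like conduction but with an effective conductivity ρ.cp.κ, where κ is the vertical eddy diffusivity. A quick sketch (the κ value here is just a typical order-of-magnitude figure for illustration, not a measured one):

```python
# Conductive vs eddy-diffusive heat flux across a 10°C difference over 1 km of water.
k = 0.6                     # molecular conductivity of water, W/m.K
rho, cp = 1000.0, 4200.0    # density (kg/m³) and specific heat (J/kg.K), rounded values
kappa = 1e-4                # assumed vertical eddy diffusivity, m²/s (illustrative)
dT, d = 10.0, 1000.0        # temperature difference (K) and depth (m)

q_conduction = k * dT / d            # 0.006 W/m², as in the text
q_eddy = rho * cp * kappa * dT / d   # effective conductivity rho*cp*kappa = 420 W/m.K
```

With these numbers the eddy flux comes out around 4.2 W/m² – roughly 700 times the conductive flux for the same gradient, which is why eddy diffusivity, not conduction, is what matters for vertical heat flow.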

In the next version of the Matlab program (“in press”) the ocean is modeled with different eddy diffusivities below the mixed ocean layer to see what happens to the measurement of climate sensitivity. So far, the model comes up with wildly varying results when the eddy diffusivity is low, i.e., heat cannot easily move into the ocean depths. And it comes up with normal results when the eddy diffusivity is high, i.e., heat moves relatively quickly into the ocean depths.

Due to shortness of time, this problem has not yet been resolved. More in due course.

This article is already long enough, so the next part will cover the estimated values for eddy diffusivity, because it’s an interesting subject.


Regular readers of this blog understand that navigating to any kind of conclusion takes some time on my part. And that’s when the subject is well understood. I’m finding that the signposts on the journey to measuring climate sensitivity are confusing and hard to read.

And that said, this article hasn’t shed any more light on the measurement of climate sensitivity. Instead, we have reviewed more ways in which measurements of it might be wrong. But not conclusively.

Next up we will take a detour into eddy diffusivity, hoping in the meantime that the Matlab model problems can be resolved. Finally a more accurate model incorporating eddy diffusivity to model vertical heat flow in the ocean will show us whether or not climate sensitivity can be accurately measured.


Articles in this Series

Measuring Climate Sensitivity – Part One

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity


Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Observation of large diurnal warming events in the near-surface layer of the western equatorial Pacific warm pool, Soloviev & Lukas, Deep Sea Research Part I: Oceanographic Research Papers (1997)

Atmosphere, Ocean and Climate Dynamics: An Introductory Text, Marshall & Plumb, Elsevier Academic Press (2008)

Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology, de Boyer Montegut et al, JGR (2004)


I don’t think this is a simple topic.

The essence of the problem is this:

Can we measure the top of atmosphere (TOA) radiative changes and the surface temperature changes and derive the “climate sensitivity” from the relationship between the two parameters?

First, what do we mean by “climate sensitivity”?

In simple terms this parameter should tell us how much more radiation (“flux”) escapes to space for each 1°C increase in surface temperature.

Climate Sensitivity Is All About Feedback

Climate sensitivity is all about trying to discover whether the climate system has positive or negative feedback.

If the average surface temperature of the earth increased by 1°C and the radiation to space consequently increased by 3.3 W/m², this would be approximately “zero feedback”.

Why is this zero feedback?

If somehow the average temperature of the surface of the planet increased by 1°C – say due to increased solar radiation – then as a result we would expect a higher flux into space. A hotter planet should radiate more. If the increase in flux = 3.3 W/m² it would indicate that there was no negative or positive feedback from this solar forcing (note 1).

Suppose the flux increased by 0. That is, the planet heated up but there was no increase in energy radiated to space. That would be positive feedback within the climate system – because there would be nothing to “rein in” the increase in temperature.

Suppose the flux increased by 5 W/m². In this case it would indicate negative feedback within the climate system.

The key value is the “benchmark” no-feedback value of 3.3 W/m² per 1°C. If the value is above this, it’s negative feedback. If the value is below this, it’s positive feedback.

Essentially, the higher the radiation to space as a result of a temperature increase the more the planet is able to “damp out” temperature changes that are forced via solar radiation, or due to increases in inappropriately-named “greenhouse” gases.

Consider the extreme case where as the planet warms up it actually radiates less energy to space – clearly this will lead to runaway temperature increases (less energy radiated means a net gain of energy, which increases temperatures, which leads to even less energy radiated..).

As a result we measure sensitivity in W/m².K, which we read as “watts per square meter per kelvin” – and a 1 K change is the same as a 1°C change.

Theory and Measurement

In many subjects, researchers’ notation converges on conventional usage, but in the realm of climate sensitivity everyone has apparently adopted their own. As a note for non-mathematicians, there is nothing inherently wrong with this, but it makes each paper confusing, especially for newcomers – and probably for everyone.

I mostly adopt the Spencer & Braswell 2008 terminology in this article (see reference and free link below). I do change their α (climate sensitivity) into λ (which everyone else uses for this value) mainly because I had already produced a number of graphs with λ before starting to write the article..

The model is a very simple 1-dimensional model of temperature deviation into the ocean mixed layer, from the first law of thermodynamics:

C.∂T/∂t = F + S ….[1]

where C = heat capacity of the ocean mixed layer (per unit area), T = temperature anomaly, t = time, F = total top of atmosphere (TOA) radiative flux anomaly, S = heat flux anomaly into the deeper ocean

What does this equation say?

Heat capacity times change in temperature equals the net change in energy

– this is a simple statement of energy conservation, the first law of thermodynamics.

The TOA radiative flux anomaly, F, is a value we can measure using satellites. T is the surface temperature anomaly – surface temperature is measured around the planet on a frequent basis. But S is something we can’t measure.

What is F made up of?

Let’s define:

F = N + f – λT ….[1a]

where N = random fluctuations in radiative flux, f = “forcings”, and λT is the all important climate response or feedback.

The forcing f is, for the purposes of this exercise, defined as something added into the system which we believe we can understand and estimate or measure. This could be solar increases/decreases, it could be the long term increase in the “greenhouse” effect due to CO2, methane and other gases. For the purposes of this exercise it is not feedback. Feedback includes clouds and water vapor and other climate responses like changing lapse rates (atmospheric temperature profiles), all of which combine to produce a change in radiative output at TOA.

And an important point is that for the purposes of this theoretical exercise, we can remove f from the measurements because we believe we know what it is at any given time.

N is an important element. Effectively it describes the variations in TOA radiative flux due to the random climatic variations over many different timescales.

The climate feedback term is λT, where λ – the climate sensitivity – is the value we want to find.

Noting the earlier comment about our assumed knowledge of ‘f’ (note 2), we can rewrite eqn 1:

C.∂T/∂t = – λT + N + S ….[2]

remembering that – λT + N = F is the radiative value we measure at TOA


If we plot F (measured TOA flux) vs T we can estimate λ from the slope of the least squares regression.

However, there is a problem with the estimate:

slope = Cov[F,T] / Var[T] ….[3]

          = Cov[- λT + N, T] / Var[T]

where Cov[a,b] = covariance of a with b, Var[a] = variance of a, and our estimate is λ (est) = -slope

Forster & Gregory 2006

This oft-cited paper (reference and free link below) calculates the climate sensitivity over 1985–1996 from measured ERBE data as 2.3 ± 1.3 W/m².K.

Their result indicates positive feedback, or at least, a range of values which sit mainly in the positive feedback space.

On the method of calculation they say:

This equation includes a term that allows F to vary independently of surface temperature.. If we regress (- λT+ N) against T, we should be able to obtain a value for λ. The N terms are likely to contaminate the result for short datasets, but provided the N terms are uncorrelated to T, the regression should give the correct value for λ, if the dataset is long enough..

[Terms changed to SB2008 for easier comparison, and emphasis added].


Like Spencer & Braswell, I created a simple model to demonstrate why measured results might deviate from the actual climate sensitivity.

The model is extremely simple:

  • a “slab” model of the ocean of a certain depth
  • daily radiative noise (normally distributed with mean=0, and standard deviation σN)
  • daily ocean flux noise (normally distributed with mean=0, and standard deviation σS)
  • radiative feedback calculated from the temperature and the actual climate sensitivity
  • daily temperature change calculated from the daily energy imbalance
  • regression of the whole time series to calculate the “apparent” climate sensitivity

In this model, the climate sensitivity, λ = 3.0 W/m².K.

In some cases the regression is done with the daily values, and in other cases the regression is done with averaged values of temperature and TOA radiation across time periods of 7, 30 & 90 days. I also put a 30-day low pass filter on the daily radiative noise in one case (before “injecting” into the model).
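A stripped-down version of this experiment can be sketched in a few lines – in Python rather than the Matlab I actually used, and with depth, noise levels and run length that are my assumptions here, so it will not reproduce the figures below, only the qualitative behavior:

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 3.0                     # prescribed climate sensitivity, W/m².K
depth = 50.0                  # slab ocean depth, m (assumed)
C = 1000.0 * 4200.0 * depth   # heat capacity per unit area, J/m².K
dt = 86400.0                  # one-day time step
n = 100_000                   # ~300 years of daily steps

N = rng.normal(0.0, 1.0, n)   # daily radiative noise, W/m² (assumed sigma)
S = rng.normal(0.0, 1.0, n)   # daily ocean flux noise, W/m² (assumed sigma)

T = np.zeros(n)
for i in range(n - 1):        # C.dT/dt = -lam*T + N + S
    T[i + 1] = T[i] + (N[i] - lam * T[i] + S[i]) * dt / C

F = N - lam * T               # the TOA flux we would "measure"

est_daily = -np.polyfit(T, F, 1)[0]       # regression on daily values

m = n // 30                               # 30-day block averages
Tm = T[: m * 30].reshape(m, 30).mean(axis=1)
Fm = F[: m * 30].reshape(m, 30).mean(axis=1)
est_monthly = -np.polyfit(Tm, Fm, 1)[0]   # regression on monthly averages
```

With this construction today’s noise is uncorrelated with today’s temperature, so the daily regression recovers something close to λ = 3.0, while the monthly-averaged regression comes out biased low – the effect discussed in the results.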

Some results are based on 10,000 days (about 30 years), with 100,000 days (300 years) as a separate comparison.

In each case the estimated value of λ is calculated from the mean of 100 simulation results. The 2nd graph shows the standard deviation, σλ, of these simulation results, which is a useful guide to the likely spread of measured results of λ (if the massive oversimplifications within the model held true). The vertical axis (for the estimate of λ) is the same in each graph for easier comparison, while the vertical axis for the standard deviation changes according to the results, due to the large changes in this value.

First, the variation as the number of time steps changes and as the averaging period changes from 1 (no averaging) through to 90-days. Remember that the “real” value of λ = 3.0 :

Figure 1

Second, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The daily temperature and radiative flux are averaged by month before the regression calculation is carried out:

Figure 2

As figure 2, but for 100,000 time steps (instead of 10,000):

Figure 3

Third, the estimate as the standard deviation of the radiative flux is increased, and the ocean depth ranges from 20-200m. The regression calculation is carried out on the daily values:

Figure 4

As figure 4, but with 100,000 time steps:

Figure 5

Now against averaging period and also against low pass filtering of the “radiative flux noise”:

Figure 6

As figure 6 but with 100,000 time steps:

Figure 7

Now with the radiative “noise” as an AR(1) process (see Statistics and Climate – Part Three – Autocorrelation), vs the autoregressive parameter φ and vs the number of averaging periods: 1 (no averaging), 7, 30, 90 with 10,000 time steps (30 years):

Figure 8

And the same comparison but with 100,000 timesteps:

Figure 9
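For reference, radiative noise with this AR(1) structure can be generated like so (a sketch – the value of φ and the variance normalization are my choices; the sqrt(1-φ²) factor keeps the marginal standard deviation at σN regardless of φ):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_n, n = 0.7, 1.0, 100_000   # illustrative values

# AR(1): N[t] = phi*N[t-1] + sqrt(1-phi²)*sigma_n*eps[t], with eps ~ N(0,1).
# Stationary standard deviation is sigma_n; lag-1 autocorrelation is phi.
eps = rng.normal(0.0, 1.0, n)
N = np.zeros(n)
for t in range(1, n):
    N[t] = phi * N[t - 1] + np.sqrt(1.0 - phi**2) * sigma_n * eps[t]

lag1 = np.corrcoef(N[:-1], N[1:])[0, 1]   # sample lag-1 autocorrelation, near phi
```

Because each day’s noise now carries memory of previous days, the temperature (which responds to past noise) becomes correlated with the current noise – the source of the bias in Figures 8 & 9.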

Discussion of Results

If we consider first the changes in the standard deviation of the estimated value of climate sensitivity we can see that the spread in the results is much higher in each case when we consider 30 years of data vs 300 years of data. This is to be expected. However, given that in the 30-year cases σλ is similar in magnitude to λ we can see that doing one estimate and relying on the result is problematic. This of course is what is actually done with measurements from satellites where we have 30 years of history.

Second, we can see that mostly the estimates of λ tend to be lower than the actual value of 3.0 W/m².K. The reason is quite simple and is explained mathematically in the next section which non-mathematically inclined readers can skip.

In essence, it is related to the idea in the quote from Forster & Gregory. If the radiative flux noise is uncorrelated to temperature then the estimates of λ will be unbiased. By the way, remember that by “noise” we don’t mean instrument noise, although that will certainly be present. We mean the random fluctuations due to the chaotic nature of weather and climate.

If we refer back to Figure 1 we can see that when the averaging period = 1, the estimates of climate sensitivity are equal to 3.0. In this case, the noise is uncorrelated to the temperature because of the model construction. Slightly oversimplifying, today’s temperature is calculated from yesterday’s noise. Today’s noise is a random number unrelated to yesterday’s noise. Therefore, no correlation between today’s temperature and today’s noise.

As soon as we average the daily data into monthly results which we use to calculate the regression then we have introduced the fact that monthly temperature is correlated to monthly radiative flux noise (note 3).

This is also why Figures 8 & 9 show a low bias for λ even with no averaging of daily results. These figures are calculated with autocorrelation for radiative flux noise. This means that past values of flux are correlated to current values – and so once again, daily temperature will be correlated with daily flux noise. This is also the case where low pass filtering is used to create the radiative noise data (as in Figures 6 & 7).


x = slope of the line from the linear regression

x = Cov[- λT + N, T] / Var[T] ….[3]

It’s not easy to read equations with complex terms in the numerator and denominator on the same line, so breaking it up:

Cov[- λT + N, T] = E[ (- λT + N)T ] – E[- λT + N].E[T] ….[4], where E[a] = expected value of a

= -λ.E[T²] + E[NT] + λ.E[T].E[T] – E[N].E[T]

= -λ { E[T²] – (E[T])² } + E[NT] – E[N].E[T]


Var[T] = E[T²] – (E[T])² …. [5]


x = -λ + { E[NT] – E[N].E[T] } / { E[T²] – (E[T])² } …. [6]

And we see that the slope of the regression line is always biased when N is correlated with T. If the expected value of N = 0, the E[N].E[T] term drops out, but E[NT] ≠ 0 unless N is uncorrelated with T.

Note of course that we will use the negative of the slope of the line to estimate λ, and so estimates of λ will be biased low.
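Since equation [6] is an exact identity for the sample moments used by least squares, it is easy to verify numerically – here with synthetic T and N given a deliberate correlation (the 0.5 coupling is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 3.0, 100_000

T = rng.normal(0.0, 1.0, n)
N = 0.5 * T + rng.normal(0.0, 1.0, n)   # noise deliberately correlated with T
F = -lam * T + N

slope = np.polyfit(T, F, 1)[0]          # least-squares regression slope of F on T

# Eq [6] with sample moments: slope = -lam + (E[NT] - E[N].E[T]) / (E[T²] - (E[T])²)
predicted = -lam + ((N * T).mean() - N.mean() * T.mean()) / ((T**2).mean() - T.mean()**2)
```

Here Cov[N,T] is positive, so the slope comes out near -2.5 instead of -3.0, and the estimate λ (est) = -slope is biased low – exactly as equation [6] says.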

As a note for the interested student, why is it that some of the results show λ > 3.0?

Murphy & Forster 2010

Murphy & Forster picked up the challenge from Spencer & Braswell 2008 (reference below but no free link unfortunately). The essence of their paper is that using more realistic values for radiative noise and mixed ocean depth the error in calculation of λ is very small:

From Murphy & Forster (2010)

Figure 10

The value ba on the vertical axis is a normalized error term (rather than the estimate of λ).

Evaluating their arguments requires more work on my part, especially analyzing some CERES data, so I hope to pick that up in a later article. [Update, Spencer has a response to this paper on his blog, thanks to Ken Gregory for highlighting it]

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Stephens (2005), reference and free link below:

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating  from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.


Measuring the relationship between top of atmosphere radiation and temperature is clearly very important if we want to assess the all-important climate sensitivity.

Spencer & Braswell have produced a very useful paper which demonstrates some obvious problems with deriving the value of climate sensitivity from measurements. Although I haven’t attempted to reproduce their actual results, I have done many other model simulations to demonstrate the same problem.

Murphy & Forster have produced a paper which claims that the actual magnitude of the problem demonstrated by Spencer & Braswell is quite small in comparison to the real value being measured (as yet I can’t tell whether their claim is correct).

The value called climate sensitivity might be a variable (i.e., not a constant value) and it might turn out to be much harder to measure than it seems (and already it doesn’t seem easy).

Articles in this Series

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity


The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data, Forster & Gregory, Journal of Climate (2006)

Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005)


Note 1 – The reason why the “no feedback climate response” = 3.3 W/m².K is a little involved but is mostly due to the fact that the overall climate is radiating around 240 W/m² at TOA.

Note 2 – This is effectively the same as saying f=0. If that seems alarming I note in advance that the exercise we are going through is a theoretical exercise to demonstrate that even if f=0, the regression calculation of climate sensitivity includes some error due to random fluctuations.

Note 3 – If the model had one random number for last month’s noise which was used to calculate this month’s temperature then the monthly results would also be free of correlation between the temperature and radiative noise.

Read Full Post »

In The Amazing Case of “Back-Radiation” series, which included Part Two and Part Three, someone commented that it would have been good to see more than a few days of DLR (downward longwave radiation, aka “back radiation”) data. There were some monthly summaries from a number of locations, but the BSRN (baseline surface radiation network) data that I selected and plotted was quite limited.

At the time I was using Excel to load up the data, and with values recorded every minute it wasn’t easy to plot more than a week of data. Armed with some new tools, here is the 2003 data from Darwin, Australia, from the BSRN network:


Long, Charles (2009): Basic measurements of radiation at station Darwin


Click on the image for a larger view

The mean = 409 W/m² and the standard deviation = 27 W/m². (I don’t know what happened in July; I expect it is more likely to be an instrument or data-collection issue than the DLR taking a vacation for the month.)
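For anyone who wants to reproduce this kind of processing, it is straightforward with pandas. The sketch below uses a synthetic stand-in for the BSRN minute data (the real station-to-archive file format and column names differ), constructed with the same annual mean and standard deviation as the Darwin 2003 record:

```python
import numpy as np
import pandas as pd

# One year of 1-minute DLR values -> monthly means plus annual statistics.
# The real input is a BSRN station-to-archive file; a synthetic series with
# the Darwin 2003 statistics stands in so the example is self-contained.
rng = np.random.default_rng(1)
idx = pd.date_range("2003-01-01", "2003-12-31 23:59", freq="min")
dlr = pd.Series(409 + 27 * rng.standard_normal(len(idx)), index=idx, name="DLR")

monthly = dlr.resample("MS").mean()       # monthly mean DLR, W/m^2
print(f"mean = {dlr.mean():.0f} W/m2, std = {dlr.std():.0f} W/m2")
print(monthly.round(1))
```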

Here is the expanded data on January through to June. The vertical axis is the same for each for easier comparison. Click on any of the graphs below to get a larger view.



Long, Charles (2009): Basic measurements of radiation at station Darwin




Long, Charles (2009): Basic measurements of radiation at station Darwin




Long, Charles (2009): Basic measurements of radiation at station Darwin




Long, Charles (2009): Basic measurements of radiation at station Darwin




Long, Charles (2009): Basic measurements of radiation at station Darwin




Long, Charles (2009): Basic measurements of radiation at station Darwin


The atmosphere cools down much more slowly than the land, which is why the difference between daytime and night-time DLR is generally quite small. The way to think about any “body” heating or cooling is to consider two factors:

  • its specific heat capacity (how much heat is needed to raise 1 kg of that substance by 1 K, i.e. 1°C)
  • its ability to radiate (or conduct) heat

99% of the atmosphere is composed of gases that can’t radiate any significant heat – N2 and O2. As shown in CO2 – An Insignificant Trace Gas?, the absorption and emission ability of these gases is more than a billion times less than that of water vapor and CO2.

So the result is that the atmosphere takes a long time to heat up and to cool down when radiation is involved.
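A back-of-envelope calculation shows why. The heat capacities and temperatures below are my own rough illustrative assumptions, not values from the post: the whole atmospheric column stores about 10⁷ J/m²·K, while only the top ~10 cm of soil participates in the daily cycle, storing around 2×10⁵ J/m²·K:

```python
# Back-of-envelope radiative cooling timescales, tau ~ C / (4*sigma*T^3).
# The heat capacities and temperatures are illustrative assumptions only.
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W/m^2/K^4

def cooling_time_days(heat_capacity, T):
    """heat_capacity in J/m^2/K, T in K -> radiative e-folding time, days."""
    return heat_capacity / (4 * SIGMA * T ** 3) / 86400

atm = 1004 * 1.0e4       # cp (J/kg/K) * column mass (~10^4 kg/m^2)
soil = 2.0e6 * 0.1       # volumetric heat capacity * ~10 cm active layer

print(f"atmosphere: ~{cooling_time_days(atm, 255):.0f} days")
print(f"land skin layer: ~{cooling_time_days(soil, 288):.1f} days")
```

The e-folding times differ by nearly two orders of magnitude, which is why the ground temperature swings strongly from day to night while DLR barely moves.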

What is important to understand is that the DLR value measured at any one time is dependent on two important factors:

  • the temperature profile of the atmosphere above the measurement location
  • the concentration of gases that can radiate longwave

So lateral air movements can cause larger DLR changes: a strong wind blowing in colder, drier air can reduce DLR significantly, and a hotter, moister wind can increase DLR significantly.

Read Full Post »

In Part One we took a look at what data was available for “back radiation”, better known as Downward Longwave Radiation, or DLR.

The fact that the data is expensive to obtain doesn’t mean that there is any doubt that downward longwave radiation exists and is significant. It’s no more in question than the salinity of the ocean.

There appear to be three common categories of mistaken belief about DLR:

  1. It doesn’t exist
  2. It’s not caused by the inappropriately-named “greenhouse” gases
  3. It can’t have any effect on the temperature of the earth’s surface

There appear to be many tens of variants of arguments around these three categories and it’s impossible to cover them all.

What’s better is to try to explain why each category of argument is in error.

Part One covered the fact that DLR exists and is significant. What we will look at in this article is what causes it. Remember that we can measure this DLR at night, and the definition of DLR is that it is radiation > 4μm.

99% of solar radiation is at wavelengths <4μm – see The Sun and Max Planck Agree. Solar and longwave radiation are of similar magnitude (at the top of atmosphere), therefore when we measure radiation with a wavelength > 4μm we know that it was radiated from the surface or from the atmosphere.

Data from the BSRN network, courtesy of the World Radiation Monitoring Center

Notice that the night-time radiation (midnight local time = 6am UTC) is not a lot lower than the peak daytime radiation. The atmosphere cools down slower than the surface of the land (but faster than the ocean).

This by itself should demonstrate that what we are measuring is from the atmosphere, not solar radiation – otherwise the night-time radiation would drop to zero.

More DLR measurements from Alice Springs, Australia. Latitude: -23.798000, Longitude: 133.888000. BSRN station no. 1; Surface type: grass; Topography type: flat, rural.

Summer measurements over 4 days:

Forgan, Bruce (2007): Basic measurements of radiation at station Alice Springs (2000-06)

Winter measurements over 4 days:

Forgan, Bruce (2007): Basic measurements of radiation at station Alice Springs (2000-06)

This radiation is not solar and can only be radiation emitted from the atmosphere.

Properties of Gases – Absorption and Emission

As we can see from the various measurements in Part One, and the measurements here, the amount of radiation from the atmosphere is substantial – generally in the order of 300W/m2 both night and day. What causes it?

If measurements of longwave radiation at the surface are hard to come by, spectral measurements are even sparser, again due to the expense of a piece of equipment like an FTIR (Fourier transform infrared spectrometer).

You can see some more background about absorption and emission in CO2 – An Insignificant Trace Gas? – Part Two.

A quick summary of some basics here – each gas in the atmosphere has properties of absorption and emission of electromagnetic radiation – and each gas is different. These are properties which have been thoroughly studied in the lab, and in the atmosphere. When a photon interacts with a gas molecule it will be absorbed only if the amount of energy in the photon is a specific amount – the right quantum of energy to change the state of that molecule – to make it vibrate or rotate, or a combination of these.

The amount of energy in a photon is dependent on its wavelength.

This post won’t be about quantum mechanics so we’ll leave the explanation of why all this absorption happens in such different ways for N2 vs water vapor (for example) and concentrate on a few simple measurements.

The only other important point to make is that if a gas can absorb at that wavelength, it can also emit at that wavelength – and conversely if a gas can’t absorb at a particular wavelength, it can’t emit at that wavelength.
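The photon-energy point above can be made concrete with E = hc/λ, comparing a photon near the solar peak with one in the CO2 15 μm band:

```python
# Photon energy E = h*c/lambda: a photon near the solar peak vs one in the
# CO2 15 um band.
h = 6.626e-34    # Planck constant, J.s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def photon_energy_eV(wavelength_um):
    return h * c / (wavelength_um * 1e-6) / eV

print(f"0.5 um photon: {photon_energy_eV(0.5):.2f} eV")
print(f"15 um photon:  {photon_energy_eV(15.0):.3f} eV")
```

A factor of 30 in energy – which is why shortwave photons can drive electronic transitions while longwave photons excite molecular vibrations and rotations.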

Here are some absorption properties of different gases in the atmosphere:

From the HITRAN database, via spectralcalc.com

And for those not used to this kind of graph, the vertical axis is on a logarithmic scale. This means that each horizontal gridline represents a factor of 10.

So if we take the example of oxygen (O2) at 6–7μm, its absorption is a factor of 1,000,000,000 (a billion) times lower than that of water vapor at those wavelengths.

Water vapor – as you can see above – absorbs across a very wide range of wavelengths. But if we take a look at CO2 and water vapor in a small region centered around 15μm we can see how different the absorption is:

From the HITRAN database, via spectralcalc.com

We know the absorption properties of each gas at each wavelength and therefore we also know the emission properties of each gas at each wavelength.

So when we measure the spectrum of a radiating body we can calculate the energy in each part of the spectrum and calculate how much energy is coming from each gas. There is nothing at all controversial in this – not in physics anyway.

Measured Spectra of Downward Longwave Radiation

Now that we know how to assess the energy radiated from each gas, we just need some spectral plots of DLR.

Remember in Part One I commented about one of the papers:

Their paper isn’t about establishing whether or not atmospheric radiation exists. No one in the field doubts it, any more than anyone doubts the existence of ocean salinity. This paper is about establishing a better model for calculating DLR – as expensive instruments are not going to cover the globe any time soon.

If we want to know the total DLR and spectral DLR at every point over the globe there is no practical alternative to using models. So what these papers are almost always about is a model to calculate total DLR – or the spectrum of DLR – based on the atmospheric properties at the time. The calculated values are compared with the measurements to find out how good the models are – and that is the substance of most of the papers.

By the way, when we talk about models – this isn’t “predicting the future climate in the next decade using a GCM” model, this is simply doing a calculation – albeit a very computationally expensive calculation – from measured parameters to calculate other related parameters that are more difficult to measure. The same way someone might calculate the amount of stress in a bridge during summer and winter from a computer model. Well, I digress..

What DLR spectral measurements do we have? All from papers assessing models vs measurements..

One place that researchers have tested models is Antarctica. By choosing the driest place on earth, they eliminate the difficulties involved in the absorption spectrum of water vapor and the problem of knowing exactly how much water vapor is in the atmosphere when the spectral measurements are taken. This helps test the models – that is, the solving of the radiative transfer equations. In this first example, from Walden (1998), we can see that the measurements and calculations are very close:

Antarctica - Walden (1998)

Note that in this field we usually see plots against wavenumber in cm-1 rather than a plot against wavelength in μm. I’ve added wavelength to each plot to make it easier to read.

I’ll comment on the units at the end, because unit conversion is very dull – however, some commenters on this blog have been confused about how to convert radiance (W/m²·sr·cm⁻¹) into flux (W/m²). For now, note that the total DLR value measured at the time the spectrum was taken was 76 W/m².
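Since the unit-conversion question comes up often, here is the calculation in full: for (approximately) isotropic radiance, flux = π × radiance, and integrating the spectral radiance over wavenumber gives W/m². As a sanity check, integrating a Planck curve this way must return σT⁴ (the 255 K below is just an example temperature, not a value from the paper):

```python
import math

# Converting radiance, W/(m^2 sr cm^-1), to flux, W/m^2: multiply by pi
# (hemispheric integral of isotropic radiance) and integrate over wavenumber.
# Sanity check: the integrated Planck curve must return sigma*T^4.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
SIGMA = 5.67e-8

def planck_wn(wn_cm, T):
    """Planck radiance at wavenumber wn_cm (cm^-1), in W/(m^2 sr cm^-1)."""
    wn = wn_cm * 100.0                  # cm^-1 -> m^-1
    B = 2 * h * c ** 2 * wn ** 3 / math.expm1(h * c * wn / (k * T))
    return B * 100.0                    # per m^-1 -> per cm^-1

T = 255.0                               # example atmospheric temperature, K
flux = math.pi * sum(planck_wn(w, T) for w in range(1, 3001))  # 1 cm^-1 steps
print(f"integrated flux = {flux:.1f} W/m2 vs sigma*T^4 = {SIGMA * T**4:.1f} W/m2")
```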

We can see that the sources of this DLR were CO2, ozone, methane, water vapor and nitrous oxide. Oxygen and nitrogen emit at around a billion times lower intensity at their peak.

The proportion of DLR from CO2 is much higher than we would see in the tropics, simply because of the lack of water vapor in Antarctica.

Here is a spectrum measured in Wisconsin from Ellingson & Wiscombe (1996):

Wisconsin, Ellingson & Wiscombe (1996)

We see a similar signal to Antarctica with a higher water vapor signal. Notice, as just one point of interest, that the CO2 value is of a higher magnitude than in Antarctica – this is because the atmospheric temperature is higher in Wisconsin than in Antarctica. This paper didn’t record the total flux.

From Evans & Puckrin (2006) in Canada:

Canada in winter, Evans & Puckrin (2006)

By now a familiar spectrum – note that the units are different.

Canada in summer, Evans & Puckrin (2006)

And a comparison with summer, when there is more water vapor.

From Lubin et al (1995) – radiation spectrum from the Pacific:

Pacific, Lubin et al (1995)

Alternative Theories

Some alternative theories have been proposed from outside of the science community:

  • DLR is “reflected surface radiation” by the atmosphere via Rayleigh scattering
  • DLR is just poor measurement technology catching the upward surface radiation

A very quick summary on the two “ideas” above.

Rayleigh scattering is proportional to λ⁻⁴, where λ is the wavelength. That’s not easy to visualize – but in any case Rayleigh scattering is not significant for longwave radiation. To give some perspective, here are the relative effects of Rayleigh scattering vs wavelength:

So if this mechanism were causing DLR we would measure a much higher value at shorter wavelengths (higher wavenumbers). For easy comparison with the FTIR measurements above, the graph is converted to wavenumber so it has the same orientation:

Compare that with the measured spectra above.
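The λ⁻⁴ dependence can be put into numbers (relative values only, which is all that matters for the argument):

```python
# Rayleigh scattering scales as lambda^-4; relative strength compared with
# the 0.5 um solar peak, at wavelengths relevant to DLR.
rel = {lam: (0.5 / lam) ** 4 for lam in (0.5, 4.0, 10.0, 15.0)}
for lam, r in rel.items():
    print(f"{lam:4.1f} um: {r:.2e} of the strength at 0.5 um")
```

At 10 μm, Rayleigh scattering is more than 100,000 times weaker than at the solar peak – a reflected-radiation mechanism would produce a spectrum strongly peaked at short wavelengths, the opposite of what is measured.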

What about upward surface radiation being captured without the measurement people realizing (measurement error)?

If that was the case the measured spectrum would follow the Planck function quite closely, e.g.:

Blackbody radiation curves for −10°C (263 K) and +10°C (283 K)

(Once again you need to mentally reverse the horizontal axis to have the same orientation as the FTIR measurements).
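For reference, the two curves above can be summarized by Wien’s displacement law and the Stefan–Boltzmann law:

```python
# Wien's displacement law and the Stefan-Boltzmann law for the two curves:
SIGMA, WIEN = 5.67e-8, 2898.0          # W/m^2/K^4 and um.K
results = {}
for T in (263.0, 283.0):
    results[T] = (WIEN / T, SIGMA * T ** 4)
    print(f"T = {T:.0f} K: peak at {WIEN / T:.1f} um, "
          f"total emission = {SIGMA * T ** 4:.0f} W/m2")
```

A surface-emission error would therefore look like a smooth curve peaking near 10–11 μm with a total of roughly 270–360 W/m² – nothing like the line structure actually measured.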

As we have seen, the spectra of DLR show the absorption/emission spectra of water vapor, CO2, CH4, O3 and N2O. They don’t match Rayleigh scattering and they don’t match surface emission.


The inescapable conclusion is that DLR is from the atmosphere. And for anyone with a passing acquaintance with radiation theory, this is to be expected.

If the atmosphere did not radiate at the spectral lines of water vapor, CO2, CH4 and O3 then radiation theory would need to be drastically revised. The amount of radiation depends on the temperature of the atmosphere as well as the concentration of radiative gases, so if the radiation was zero – a whole new theory would be needed.

Why does the atmosphere radiate? Because it is heated up via convection from the surface, solar radiation and surface radiation. The atmosphere radiates according to its temperature, in accordance with Planck’s law and at wavelengths where gas molecules are able to radiate.

There isn’t any serious theory that the atmosphere doesn’t emit radiation. If the atmosphere is above absolute zero and contains gases that can absorb and emit longwave radiation (like water vapor and CO2) then it must radiate.

And although the proof is easy to see, no doubt there will be many “alternative” explanations proposed..

Update – Part Three now published

Darwinian Selection – “Back Radiation”


Measurements of the downward longwave radiation spectrum over the Antarctic plateau and comparisons with a line-by-line radiative transfer model for clear skies, Walden et al, Journal of Geophysical Research (1998)

The Spectral Radiance Experiment (SPECTRE): Project Description and Sample Results, Ellingson & Wiscombe, Bulletin of the American Meteorological Society (1996)

Measurements of the radiative surface forcing of climate, Evans & Puckrin, 18th Conference on Climate Variability and Change, (2006)

Spectral Longwave Emission in the Tropics, Lubin et al, Journal of Climate (1995)

Read Full Post »

This could have been included in the Earth’s Energy Budget series, but it deserved a post of its own.

First of all, what is “back-radiation”? It’s the radiation emitted by the atmosphere which is incident on the earth’s surface. It is also more correctly known as downward longwave radiation, or DLR.

What’s amazing about back-radiation is how many different ways people arrive at the conclusion it doesn’t exist or doesn’t have any effect on the temperature at the earth’s surface.

If you want to look at the top of the atmosphere (often abbreviated as “TOA”) the measurements are there in abundance. This is because (since the late 1970’s) satellites have been making continual daily measurements of incoming solar, reflected solar, and outgoing longwave.

However, if you want to look at the surface, the values are much “thinner on the ground” because satellites can’t measure these values (see note 1). There are lots of thermometers around the world taking hourly and daily measurements of temperature but instruments to measure radiation accurately are much more expensive. So this parameter has the least number of measurements.

This doesn’t mean that the fact of “back-radiation” is in any doubt – there are just fewer measurement locations.

For example, if you asked for data on the salinity of the ocean 20km north of Tangiers on 4th July 2004 you might not be able to get the data. But no one doubts that salt was present in the ocean on that day, and probably in the region of 25-35 parts per thousand. That’s because every time you measure the salinity of the ocean you get similar values. But it is always possible that 20km off the coast of Tangiers, every Wednesday after 4pm, that all the salt goes missing for half an hour.. it’s just very unlikely.

What DLR Measurements Exist?

Hundreds, or maybe even thousands, of researchers over the decades have taken measurements of DLR (along with other values) for various projects and written up the results in papers. You can see an example from a text book in Sensible Heat, Latent Heat and Radiation.

What about more consistent ongoing measurements?

The Global Energy Balance Archive (GEBA) contains quality-checked monthly means of surface energy fluxes. The data has been extracted from many sources, including periodicals, data reports and unpublished manuscripts. The table below shows the total amount of data stored for different types of measurements:

From "Radiation and Climate" by Vardavas & Taylor (2007)

You can see that DLR measurements in the GEBA archive are vastly outnumbered by incoming solar radiation measurements. The BSRN (baseline surface radiation network) was established by the World Climate Research Programme (WCRP) as part of GEWEX (Global Energy and Water Cycle Experiment) in the early 1990’s:

The data are of primary importance in supporting the validation and confirmation of satellite and computer model estimates of these quantities. At a small number of stations (currently about 40) in contrasting climatic zones, covering a latitude range from 80°N to 90°S (see station maps ), solar and atmospheric radiation is measured with instruments of the highest available accuracy and with high time resolution (1 to 3 minutes).

Twenty of these stations (according to Vardavas & Taylor) include measurements of downwards longwave radiation (DLR) at the surface. BSRN stations have to follow specific observational and calibration procedures, resulting in standardized data of very high accuracy:

  • Direct SW  – accuracy 1% (2 W/m2)
  • Diffuse radiation – 4% (5 W/m2)
  • Downward longwave radiation, DLR – 5% (10 W/m2)
  • Upward longwave radiation – 5% (10 W/m2)

Radiosonde data exists for 16 of the stations (radiosondes measure the temperature and humidity profile up through the atmosphere).

Click for a larger image

A slightly earlier list of stations from 2007:

From "Radiation and Climate" by Vardavas & Taylor (2007)

Solar Radiation and Atmospheric Radiation

Regular readers of this blog will be clear about the difference between solar and “terrestrial” radiation. Solar radiation has its peak value around 0.5μm, while radiation from the surface of the earth or from the atmosphere has its peak value around 10μm, and there is very little crossover. For more details on this basic topic, see The Sun and Max Planck Agree.


Radiation vs Wavelength - Sun and Earth

What this means is that solar radiation and terrestrial/atmospheric radiation can be easily distinguished. Conventionally, climate science uses “shortwave” to refer to solar radiation – for radiation with a wavelength of less than 4μm – and “longwave” to refer to terrestrial or atmospheric radiation – for wavelengths of greater than 4μm.

This is very handy. We can measure radiation in the wavelengths > 4μm even during the day and know that the source of this radiation is the surface (if we are measuring upward radiation from the surface) or the atmosphere (if we are measuring downward radiation at the surface). Of course, if we measure radiation at night then there’s no possibility of confusion anyway.
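The cleanness of the 4 μm split can be checked by integrating the Planck function for a body at the sun’s effective temperature (~5778 K) and at a typical surface temperature (~288 K) – a numerical sketch:

```python
import math

# Fraction of blackbody emission below 4 um, for the sun's effective
# temperature (~5778 K) and a typical surface temperature (~288 K).
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_lam(lam, T):
    """Planck spectral radiance per metre of wavelength."""
    x = h * c / (lam * k * T)
    if x > 700:                          # avoid overflow; radiance is ~0 here
        return 0.0
    return 2 * h * c ** 2 / lam ** 5 / math.expm1(x)

def frac_below(cut_um, T, n=20000):
    """Midpoint-rule integration over a 0-100 um wavelength grid."""
    lams = [(i + 0.5) * 100e-6 / n for i in range(n)]
    total = sum(planck_lam(l, T) for l in lams)
    below = sum(planck_lam(l, T) for l in lams if l < cut_um * 1e-6)
    return below / total

f_sun, f_earth = frac_below(4, 5778), frac_below(4, 288)
print(f"sun: {f_sun:.1%} below 4 um; earth: {f_earth:.2%} below 4 um")
```

About 99% of solar emission falls below 4 μm, while well under 1% of terrestrial emission does – so a longwave instrument sees essentially no solar contamination.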


Here are a few extracts from papers with some sample data.

Downward longwave radiation estimates for clear and all-sky conditions in the Sertãozinho region of São Paulo, Brazil by Kruk et al (2010):

Atmospheric longwave radiation is the surface radiation budget component most rarely available in climatological stations due to the cost of the longwave measuring instruments, the pyrgeometers, compared with the cost of pyranometers, which measure the shortwave radiation. Consequently, the estimate of longwave radiation for no-pyrgeometer places is often done through the most easily measured atmospheric variables, such as air temperature and air moisture. Several parameterization schemes have been developed to estimate downward longwave radiation for clear-sky and cloudy conditions, but none has been adopted for generalized use.

Their paper isn’t about establishing whether or not atmospheric radiation exists. No one in the field doubts it, any more than anyone doubts the existence of ocean salinity. This paper is about establishing a better model for calculating DLR – as expensive instruments are not going to cover the globe any time soon. However, their results are useful to see.

The data was measured every 10 min from 20 July 2003 to 18 January 2004 at a micrometeorological tower installed in a sugarcane plantation. (The experiment ended when someone stole the equipment). This article isn’t about their longwave radiation model – it’s just about showing some DLR measurements:

In another paper, Wild and co-workers (2001) compiled some long-term measurements from GEBA:

This paper also wasn’t about verifying the existence of “back-radiation” – it was assessing the ability of GCMs to correctly calculate it. So you can note the long term average values of DLR for some European stations and one Japanese station. The authors also showed the average value across the stations under consideration:

And station by station month by month (the solid lines are the measurements):

Wild (2001)

Click on the image for a larger view

In another paper, Morcrette (2002) produced a comparison of observed and modeled values of DLR for April-May 1999 in 24 stations (the columns headed Obs are the measured values):

Morcrette (2002)

Click for a larger view

Once again, the paper wasn’t about the existence of DLR, but about the comparison between observed and modeled data. Here’s the station list with the key:

Click for a larger view

BSRN data

Here is a 2-week extract of DLR for Billings, Oklahoma from the BSRN archives. This is BSRN station no. 28, Latitude: 36.605000, Longitude: -97.516000, Elevation: 317.0 m, Surface type: grass; Topography type: flat, rural.

Data from the BSRN network, courtesy of the World Radiation Monitoring Center

And 3 days shown in more detail:

Data from the BSRN network, courtesy of the World Radiation Monitoring Center

Note that the time is UTC so “midday” in local time will be around 19:00 (someone good at converting time zones in October can tell me exactly).

Notice that DLR does not drop significantly overnight. This is because of the heat capacity of the atmosphere – it cools down, but not as quickly as the ground.

DLR is a function of the temperature of the atmosphere and of the concentration of gases which absorb and emit radiation – like water vapor, CO2, N2O and so on.

We will look at this some more in a followup article, along with the many questions – and questionable ideas – that people have about “back-radiation”.

Update: The Amazing Case of “Back-Radiation” – Part Two

The Amazing Case of “Back Radiation” – Part Three

Darwinian Selection – “Back Radiation”


Note 1 – Satellites can measure some things about the surface. Upward radiation from the surface is mostly absorbed by the atmosphere, but the “atmospheric window” (8-12μm) is “quite transparent” and so satellite measurements can be used to calculate surface temperature – using standard radiation transfer equations for the atmosphere. However, satellites cannot measure the downward radiation at the surface.
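A sketch of the retrieval step mentioned in Note 1: measure the radiance in the window, then invert the Planck function to get a “brightness temperature”. This is only the core of the calculation; real retrievals also correct for surface emissivity and residual atmospheric absorption.

```python
import math

# Invert the Planck function at an atmospheric-window wavelength (~11 um) to
# turn a measured radiance into a "brightness temperature".
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_lam(lam, T):
    """Planck radiance, W/(m^2 sr m), at wavelength lam (m)."""
    return 2 * h * c ** 2 / lam ** 5 / math.expm1(h * c / (lam * k * T))

def brightness_temp(radiance, lam=11e-6):
    """Invert the Planck function at wavelength lam."""
    return h * c / (lam * k) / math.log1p(2 * h * c ** 2 / (lam ** 5 * radiance))

R = planck_lam(11e-6, 288.0)            # radiance a satellite might measure
print(f"retrieved brightness temperature = {brightness_temp(R):.1f} K")
```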


Radiation and Climate, I.M. Vardavas & F.W. Taylor, International Series of Monographs on Physics – 138 by Oxford Science Publications (2007)

Downward longwave radiation estimates for clear and all-sky conditions in the Sertãozinho region of São Paulo, Brazil, Kruk et al, Theoretical Applied Climatology (2010)

Evaluation of Downward Longwave Radiation in General Circulation Models, Wild et al, Journal of Climate (2001)

The Surface Downward Longwave Radiation in the ECMWF Forecast System, Morcrette, Journal of Climate (2002)

Read Full Post »

This article follows:

  • Part One – which explained a few basics in energy received and absorbed, and gave a few useful “numbers” to remember
  • Part Two – which explained energy balance a little more
  • Part Three – which explained how the earth radiated away energy and how more “greenhouse” gases might change that

What is albedo? Albedo, in the context of the earth, is the ratio of reflected solar radiation to incident solar radiation. Generally the approximate value of 30% is given. This means that 0.3 or 30% of solar radiation is reflected and therefore 0.7 or 70% is absorbed.

Until the first satellites started measuring reflected solar radiation in the late 1970’s, albedo could only be estimated. Now we have real measurements, but reflected solar radiation is one of the more challenging measurements that satellites make. The main reason for this is that reflected solar radiation takes place over all angles, making it much harder to measure than, say, the outgoing longwave radiation.

Reflected solar radiation is one of the major elements in the earth’s radiation budget.

Over the 20th century, global temperatures increased by around 0.7°C. Increases in CO2, methane and other “greenhouse” gases have a demonstrable “radiative forcing”, but changes in planetary albedo cannot be ruled out as also having a significant effect on global temperatures. For example, if the albedo had reduced from 31% to 30% this would produce an increase in radiative forcing (prior to any feedbacks) of 3.4W/m2 – of similar magnitude to the calculated (pre-feedback) effects from “greenhouse” gases.
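The 3.4 W/m² figure is just the albedo change applied to the global-mean incident solar radiation of ~342 W/m² (taking the solar constant as ~1368 W/m², divided by 4):

```python
# Forcing from a 1% (absolute) albedo decrease applied to the global-mean
# incident solar radiation (~1368/4 = 342 W/m^2).
S_mean = 1368.0 / 4                 # global-mean incoming solar at TOA, W/m^2
dF = (0.31 - 0.30) * S_mean         # extra absorbed solar radiation
print(f"radiative forcing = {dF:.1f} W/m2")
```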

Average global variation in albedo (top) and reflected solar radiation (bottom)

from Hatzianastassiou (2004)

(click on the image for a larger picture)

The first measurements of albedo were from Nimbus-7 in 1979, and the best quality measurements were from ERBE from November 1984 to February 1990. There is a dataset of measurements from 1979 to 1993 but not from the same instruments, and then significant gaps in the 1990s until more accurate instruments (e.g. CERES) began measurements. Satellite data of reflected solar radiation from latitudes above 70° is often not available. And comparisons between different ERB datasets show differences of comparable magnitude to the radiative forcing from changes in “greenhouse” gases.

Therefore, to obtain averages or time series over more than a decade requires some kind of calculation. Most of the data in this article is from Hatzianastassiou et al (2004) – currently available here.

The mean monthly shortwave (SW) radiation budget at the top of atmosphere (TOA) was computed on a 2.5° longitude-latitude resolution for the 14-year period from 1984 to 1997, using a radiative transfer model with long-term climatological data from the International Satellite Cloud Climatology Project (ISCCP-D2)..

The model was checked against the best data:

The model radiative fluxes at TOA were validated against Earth Radiation Budget Experiment (ERBE) S4 scanner satellite data (1985–1989).

The results were within 1% of ERBE data, which is within the error estimates of the instrument. (See “Model Comparison” at the end of the article).

It is important to understand that using a model doesn’t mean that a GCM produced (predicted) this data. Instead all available data was used to calculate the reflected solar radiation from known properties of clouds, aerosols and so on. However, it also means that the results aren’t perfect, just an improvement on a mixture of incomplete datasets.

Here is the latitudinal variation of incident solar radiation – note that the long-term annual global average is around 342 W/m2 – followed by “outgoing” or reflected solar radiation, then albedo:

Shortwave received and reflected plus albedo, Hatzianastassiou (2004)

The causes of reflected solar radiation are clouds, certain types of aerosols in the atmosphere and different surface types.

The high albedo near the poles is of course due to snow and ice. Lower albedo nearer the equator is in part due to the low reflectivity of the ocean, especially when the sun is high in the sky.

Typical values of albedo for different surfaces (from Linacre & Geerts, 1997)

  • Snow                                     80%
  • Dry sand in the desert        40%
  • Water,  sun at 10°              38%  (sun close to horizon)
  • Grassland                            22%
  • Rainforest                           13%
  • Wet soil                               10%
  • Water, sun at 25°               9%
  • Water, sun at 45°               6%
  • Water, sun at 90°                3.5%  (sun directly overhead)

Here is the data on reflected solar radiation and albedo as a time-series for the whole planet:

Time series changes in solar radiation and albedo, Hatzianastassiou (2004)

(click on the image for a larger picture)

Over the time period in question:

The 14-year (1984–1997) model results, indicate that Earth reflects back to space 101.2Wm-2 out of the received 341.5Wm-2, involving a long-term planetary albedo equal to 29.6%.

The incident solar radiation has a wider range for the southern hemisphere – this is because the earth is closer to the sun (perihelion) in Dec/Jan, which is the southern hemisphere summer.

And notice the fascinating point that the calculations show the albedo reducing over this period:

The decrease of OSR [outgoing solar radiation] by 2.3Wm-2 over the 14-year period 1984–1997, is very important and needs to be further examined in detail. The decreasing trend in global OSR can be also seen in Fig. 5c, where the mean global planetary albedo, Rp, is found to have decreased by 0.6% from January 1984 through December 1997.

The main cause identified was a decrease in cloudiness in tropical and sub-tropical areas.
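A quick consistency check on the quoted figures: 101.2 W/m² reflected out of 341.5 W/m² received does give 29.6%, and the 2.3 W/m² decrease in OSR corresponds to an albedo change of roughly 0.67 of a percentage point, close to the paper’s quoted 0.6%.

```python
# Consistency check on the quoted Hatzianastassiou (2004) figures.
incident = 341.5   # W/m2 received
reflected = 101.2  # W/m2 reflected back to space
osr_drop = 2.3     # W/m2 decrease in OSR over 1984-1997

planetary_albedo = reflected / incident
print(f"planetary albedo = {planetary_albedo:.1%}")  # matches the quoted 29.6%

# Implied (absolute) change in albedo from the OSR decrease:
albedo_change = osr_drop / incident
print(f"albedo decrease ~ {albedo_change:.2%}")  # ~0.67 of a percentage point
```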

Model Comparison

For those interested, some ERBE data vs model:



Long-term global distribution of earth’s shortwave radiation budget at the top of atmosphere, N. Hatzianastassiou et al, Atmos. Chem. Phys. Discuss (2004)

Read Full Post »

Many questions have recently been asked about the relative importance of various mechanisms for moving heat to and from the surface, so this article covers a few basics.

One Fine Day – the Radiation Components


Surface Radiation - clear day and cloudy day, from Robinson (1999)



I added some color to help pick out the different elements, note that temperature variation is also superimposed on the graph (on its own axis). The blue line is net longwave radiation.

Not so easy to see with the size of graphic, here they are expanded:


Clear sky




Cloudy sky



Note that the night-time is not shown, which is why the net radiation is almost always positive. You can see that the downward longwave radiation measured from the sky (in clear violation of the Imaginary Second Law of Thermodynamics) doesn’t change very much – equally so for the upwards longwave radiation from the ground. You can see the terrestrial (upwards longwave) radiation follows the temperature changes – as you would expect.

Sensible and Latent Heat

The energy change at the surface is the sum of:

  • Net radiation
  • “Sensible” heat
  • Latent heat
  • Heat flux into the ground
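The balance can be written as net radiation = sensible heat + latent heat + ground heat flux. A minimal sketch, solving for the ground flux as a residual (the flux values here are made up for illustration, not taken from the figures):

```python
# Surface energy balance: Rn = H + LE + G
# Solve for ground heat flux G as a residual of the other three terms.
# The example fluxes are illustrative only.
def ground_heat_flux(net_radiation, sensible, latent):
    """All fluxes in W/m2; H, LE, G positive when directed away from the surface."""
    return net_radiation - sensible - latent

G = ground_heat_flux(net_radiation=500.0, sensible=150.0, latent=300.0)
print(f"G = {G:.0f} W/m2")  # the remainder conducted into the ground
```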

“Sensible” heat is that caused by conduction and convection. For example, with a warm surface and a cooler atmosphere, at the boundary layer heat will be conducted into the atmosphere and then convection will move the heat higher up into the atmosphere.

Latent heat is the heat moved by water evaporating and condensing higher up in the atmosphere. Heat is absorbed in evaporation and released by condensation – so the result is a movement of heat from the surface to higher levels in the atmosphere.
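To get a feel for the magnitudes: evaporating water takes about 2.45×10⁶ J/kg near typical surface temperatures, so an evaporation rate of 1 mm/day (1 kg of water per m² per day) carries roughly 28 W/m² of latent heat away from the surface. A sketch:

```python
# Latent heat flux implied by an evaporation rate.
L_V = 2.45e6             # J/kg, latent heat of vaporization (approximate)
SECONDS_PER_DAY = 86400.0

def latent_heat_flux(evap_mm_per_day):
    """1 mm/day of evaporation = 1 kg of water per m2 per day."""
    mass_flux = evap_mm_per_day / SECONDS_PER_DAY  # kg/m2/s
    return L_V * mass_flux                         # W/m2

print(f"{latent_heat_flux(1.0):.1f} W/m2 per mm/day of evaporation")
```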

Heat flux into the ground is usually low, except into water.


Surface Heat Components in 3 Locations, Robinson (1999)



All of these observations were made under clear skies in light to moderate wind conditions.

Note the low latent heat for the dry lake – of course.

The negative sensible heat in Arizona (2nd graphic) is because it is being drawn from the surface to evaporate water. It is more usual to see positive sensible heat during the daytime as the surface warms the lower levels of the atmosphere.

The latent heat is higher in Arizona than Wisconsin because of the drier air in Arizona (lower relative humidity).

The ratio of sensible heat to latent heat is called the Bowen ratio, and the physics of the various processes keeps this ratio low while moisture is available – a moist surface will hardly increase in temperature while evaporation is occurring, but once it has dried out there will be a rapid rise in temperature as the sensible heat flux takes over.
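The Bowen ratio is simply B = H / LE. A sketch with illustrative fluxes (not values read from the figures) showing the moist vs dried-out regimes:

```python
# Bowen ratio B = sensible heat flux / latent heat flux.
# Example fluxes are illustrative only.
def bowen_ratio(sensible, latent):
    """Both fluxes in W/m2; latent must be non-zero."""
    return sensible / latent

print(bowen_ratio(sensible=50.0, latent=250.0))   # moist surface: B well below 1
print(bowen_ratio(sensible=250.0, latent=25.0))   # dried-out surface: B >> 1
```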

Heat into the Ground


Temperature at two depths in soil - annual variation, Robinson (1999)



We can see that heat doesn’t get very far into soil – because it is not a good conductor of heat.

Here is a useful table of properties of various substances:

The rate of heat penetration (e.g. into the soil) is dependent on the thermal diffusivity. This is a combination of two factors – the thermal conductivity (how well heat is conducted through the substance) divided by the heat capacity (how much heat it takes to increase the temperature of the substance).

The lower the value of the thermal diffusivity, the lower the temperature rise further into the substance. So heat doesn’t get very far into dry sand, or still water. But it does get 10x further into wet soil (correction thanks to Nullius in Verba – really it gets 3x further into wet soil, because “Thickness penetrated is proportional to the square root of diffusivity times time” – and I didn’t just take his word for it..)
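Since penetration depth scales as the square root of diffusivity × time, a 10× higher diffusivity only buys about √10 ≈ 3.2× the depth. A sketch of that scaling:

```python
import math

# Penetration depth is proportional to sqrt(diffusivity * time), so the
# depth ratio between two substances at the same time is the square root
# of their diffusivity ratio: 10x the diffusivity gives only ~3.2x the depth.
def depth_ratio(diffusivity_ratio):
    return math.sqrt(diffusivity_ratio)

print(f"{depth_ratio(10.0):.2f}x deeper for 10x the diffusivity")
```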

Why is still water so similar to dry sand? Water has 4x the ability to conduct heat, but also it takes almost 4x as much heat to lift the temperature of water by 1°C.

Note that stirred water is a much better conductor of heat – due to convection. The same applies to air, even more so – “stirred” air (= moving air) conducts heat a million times more effectively than still air.

Temperature Profiles Throughout a 24-Hour Period


Temperature profiles throughout the day, Robinson (1999)



I’ll cover more about temperature profiles in a later article about why the troposphere has the temperature profile it does.

During the day the ground is being heated up by the sun and by the longwave radiation from the atmosphere. Once the sun sets, the ground cools faster and starts to take the lower levels of the atmosphere with it.


Just some basic measurements of the various components that affect the surface temperature to help establish their relative importance.

Note: All of the graphics were taken from Contemporary Climatology by Peter Robinson and Ann Henderson-Sellers (1999)

Read Full Post »

This post covers a dull subject. If you are new to Science of Doom, the subject matter here will quite possibly be the least interesting in the entire blog. At least, up until now. It’s possible that new questions will be asked in future which will compel me to write posts that climb to new heights of breath-taking dullness.

So commenters take note – you have a duty as well. And new readers, quickly jump to another post..


In an earlier post – Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored – we looked at the many problems of trying to measure the surface of the earth by measuring the air temperature a few feet off the ground. And also the problems encountered in calculating the average temperature by an arithmetic mean. (An arithmetic mean for those not familiar with the subject is the “usual” and traditional averaging where you add up all the numbers and divide by how many values you had).

We looked at an example where the average temperature increased, but the amount of energy radiated went down. Energy radiated out would seem to be a more useful measure of “real temperature” so clearly arithmetic averages of temperature have issues. This is how GMST is calculated – well not exactly, as the values are area-weighted, but there is no factoring in of how surface temperature affects energy radiated.

But in the discussion someone brought up emissivity and what effect it has on the calculation of energy radiated. So in the interests of completeness we arrive here.

Emissivity of the Earth’s Surface

Our commenter asked:

So what are the non-black body corrections required for the initial calculation 396W/sqm? And what are the corrections for the equivalent temperature calculation? And do they cancel out (I think not due to the non-linearity issue) ?

What’s this about? (Of course, read the earlier post if you haven’t already).

Energy radiated from a body, E = εσT⁴

where T is absolute temperature (in K), σ = 5.67×10⁻⁸ W/m²K⁴ and ε is the emissivity.

ε is a value between 0 and 1, and 1 is the “blackbody”. The value – very important to note – is dependent on wavelength.
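As a quick sketch of the formula: a blackbody (ε = 1) at 289 K radiates about 396 W/m², the same order as the figure our commenter quotes, and lowering the emissivity simply scales that value down.

```python
SIGMA = 5.67e-8  # W/m2K4, Stefan-Boltzmann constant

def radiated_flux(temp_kelvin, emissivity=1.0):
    """E = emissivity * sigma * T^4, in W/m2."""
    return emissivity * SIGMA * temp_kelvin ** 4

print(f"{radiated_flux(289.0):.0f} W/m2")        # blackbody at 289 K
print(f"{radiated_flux(289.0, 0.98):.0f} W/m2")  # same temperature, emissivity 0.98
```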

So the calculations I showed (in the thought experiment) where temperature went up but energy radiated went down need adjustment for this non-blackbody emissivity.

How Emissivity Changes

Here we consult the “page-turner”, Surface Emissivity Maps for use in Satellite Retrievals of Longwave Radiation by Wilber (1999).

Emissivity vs wavelength for various substances, Wilber (1999)


And yet more graphs at the end of the post – spreading out the excitement..

Note the key point, in the wavelengths of interest emissivity is close to 1 – close to a blackbody.

For beginners to the subject, who somehow find this interesting and are therefore still reading, the wavelengths in question: 4-30μm are the wavelengths where most of the longwave radiation takes place from the earth’s surface. Check out CO2 – An Insignificant Trace Gas? for more on this.

I did wonder why the measurements weren’t carried on to 30μm and as far as I can determine it is less interesting for satellite measurements – because satellites can see the surface the best in the “atmospheric window” of 8-14μm.

So with the data we have, we see that generally the value is close to unity – the earth’s surface is very close to a “blackbody”. Energy radiated in 4-16μm wavelengths only accounts for 50-60% of the typical energy radiated from the earth’s surface, so we don’t have the full answer. Still, with my excitement already at fever pitch on this topic, I think others should take on the task of tracking down emissivity of representative earth surface types at >16μm and report back.

So we have some ideas of emissivities, they are not 1, but generally very close. How does this affect the calculation of energy radiated?

Mostly Harmless

Not much effect.

I took the original example with 7 equal areas at particular temperatures for 1999 and show emissivities (these are arbitrarily chosen to see what happens):

  • Equatorial region: 30°C ;  ε = 0.99
  • Sub-tropics: 22°C, 22°C ;  ε = 0.99
  • Mid-latitude regions: 12°C, 12°C ;  ε = 0.80
  • Polar regions: 0°C, 0°C ;  ε = 0.80

The average temperature, or “global mean surface temperature” = 14°C.

And in 2009 (same temperatures as in the previous article):

  • Equatorial region: 26°C ;  ε = 0.99
  • Sub-tropics: 20°C, 20°C ;  ε = 0.99
  • Mid-latitude regions: 12°C, 12°C ;  ε = 0.80
  • Polar regions: 5°C, 5°C ;  ε = 0.80

The average temperature, or “global mean surface temperature” = 14.3°C.

The calculation of the energy radiated is done by simply taking each temperature and applying the equation above – E = εσT⁴

Because we are calculating the total energy we are simply adding up the energy value from each area. All the emissivity does is weight the energy from each location.

  • With the emissivity values as shown, the 1999 energy = 2426 W/ arbitrary area
  • With the emissivity values as shown, the 2009 energy = 2416 W/ same arbitrary area

So once again the energy radiated has gone down, even though the GMST has increased.

If we change around the emissivities, so that ε=0.8 for Equatorial & Sub-Tropics, while ε=0.99 for Mid-Latitude and Polar regions, the GMST values are the same.

  • With the new emissivity values, the 1999 energy = 2434 W/ arbitrary area
  • With the new emissivity values, the 2009 energy = 2442 W/ same arbitrary area

So the temperature has gone up and the energy radiated has also gone up.
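Both scenarios can be reproduced with a short calculation. This sketch matches the totals above when temperatures are converted with T(K) = T(°C) + 273:

```python
SIGMA = 5.67e-8  # W/m2K4, Stefan-Boltzmann constant

def total_energy(areas):
    """Sum of emissivity * sigma * T^4 over equal areas; T given in Celsius."""
    return sum(eps * SIGMA * (t + 273.0) ** 4 for t, eps in areas)

# (temperature in C, emissivity) for the 7 equal areas
e_1999 = total_energy([(30, 0.99), (22, 0.99), (22, 0.99),
                       (12, 0.80), (12, 0.80), (0, 0.80), (0, 0.80)])
e_2009 = total_energy([(26, 0.99), (20, 0.99), (20, 0.99),
                       (12, 0.80), (12, 0.80), (5, 0.80), (5, 0.80)])
print(round(e_1999), round(e_2009))  # 2426 2416: energy down, GMST up

# Swapped emissivities: 0.80 for equatorial/sub-tropics, 0.99 elsewhere
e_1999_sw = total_energy([(30, 0.80), (22, 0.80), (22, 0.80),
                          (12, 0.99), (12, 0.99), (0, 0.99), (0, 0.99)])
e_2009_sw = total_energy([(26, 0.80), (20, 0.80), (20, 0.80),
                          (12, 0.99), (12, 0.99), (5, 0.99), (5, 0.99)])
print(round(e_1999_sw), round(e_2009_sw))  # 2434 2442: energy up, GMST up
```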

Therefore, emissivity does change the situation a little. I chose more extreme values of emissivity than are typically found to see what the effect was.

The result is not complex or non-linear because emissivity simply “weights” the value of energy, making it more or less important as the emissivity is higher or lower.

In the second example above, if the magnitude of temperature changes was slightly greater in the polar and equatorial regions this would be enough to still show a decrease in energy while “GMST” was increasing.

More Emissivity Graphs

Emissivity vs wavelength of various substances, Wilber (1999)



Emissivity in the wavelengths of interest for the earth’s radiation is generally very close to 1. Assuming “blackbody” radiation is a reasonable assumption for most calculations of interest – as other unknowns are typically a higher source of error.

Because the earth’s surface has been mapped out and linked to the emissivities, if a particular calculation does need high level accuracy the emissivities can be used.

In the terms of how emissivity changes the “surprising” result that temperature can increase while energy radiated decreases – the answer is “not much”.

Read Full Post »
