
Archive for the ‘Atmospheric Physics’ Category

In Part Seven – GCM I through Part Ten – GCM IV we looked at GCM simulations of ice ages.

These were mostly attempts at “glacial inception”, that is, starting an ice age. But we also saw a simulation of the last 120 kyrs which attempted to model a complete ice age cycle including the last termination. As we saw, there were lots of limitations..

One condition for glacial inception, “perennial snow cover at high latitudes”, could be produced with a high-resolution coupled atmosphere-ocean GCM (AOGCM), but that model suffered from a cold bias at high latitudes.

The (reasonably accurate) simulation of a whole cycle including inception and termination came by virtue of having the internal feedbacks (ice sheet size & height and CO2 concentration) prescribed.

Just to be clear to new readers, these comments shouldn’t indicate that I’ve uncovered some secret that climate scientists are trying to hide – these points are all out in the open and usually highlighted by the authors of the papers.

In Part Nine – GCM III, one commenter highlighted a 2013 paper by Ayako Abe-Ouchi and co-workers, where the journal in question, Nature, had quite a marketing pitch on the paper. I made a brief comment on it in a later article in response to another question, noting that I had emailed the lead author to ask about the modeling work (how was a 120 kyr cycle actually simulated?).

Most recently, in Eighteen – “Probably Nonlinearity” of Unknown Origin, another commenter highlighted it, which rekindled my enthusiasm, so I went back and read the paper again. It turns out that my understanding of the paper had been wrong. It wasn’t really a GCM paper at all. It was an ice sheet paper.

There is a whole field of papers on ice sheet models deserving attention.

GCM review

Let’s review GCMs first of all to help us understand where ice sheet models fit in the hierarchy of climate simulations.

GCMs consist of a number of different modules coupled together. The first GCMs were mostly “atmospheric GCMs” = AGCMs, and either they had a “swamp ocean” = a mixed layer of fixed depth, or had prescribed ocean boundary conditions set from an ocean model or from an ocean reconstruction.

Less commonly, unless you worked just with oceans, there were ocean GCMs with prescribed atmospheric boundary conditions (prescribed heat and momentum flux from the atmosphere).

Then coupled atmosphere-ocean GCMs came along = AOGCMs. It was a while before these two parts matched up to the point where there was no “flux drift”, that is, no disappearing heat flux from one part of the model.

Why is it so difficult to get these two models working together? One important reason comes down to the time-scales involved, which result from the difference in heat capacity and momentum of the two parts of the climate system. The heat capacity and momentum of the ocean are much, much higher than those of the atmosphere.

And when we add ice sheet models – ISMs – we have yet another time scale to consider.

  • the atmosphere changes in days, weeks and months
  • the ocean changes in years, decades and centuries
  • the ice sheets change in centuries, millennia and tens of millennia

This creates a problem for climate scientists who want to apply the fundamental equations of heat, mass & momentum conservation along with parameterizations for “stuff not well understood” and “stuff quite-well-understood but whose parameters are sub-grid”. Running a high-resolution AOGCM for a 1,000-year simulation might consume a year of supercomputer time, and the ice sheet has barely moved during that period.
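To get a feel for the mismatch, here is a back-of-the-envelope sketch; the atmospheric time step is an illustrative assumption, not a value from any particular model, while the 2-year ice sheet step is the one used in the model discussed below:

```python
# Rough scale argument: time steps needed to cover one glacial cycle.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

atmos_dt = 20 * 60             # ~20 minutes per atmospheric step (assumed)
ice_dt = 2 * SECONDS_PER_YEAR  # 2 years per ice sheet step
sim_years = 120_000            # one full glacial cycle

atmos_steps = sim_years * SECONDS_PER_YEAR / atmos_dt
ice_steps = sim_years * SECONDS_PER_YEAR / ice_dt

print(f"atmospheric steps: {atmos_steps:.1e}")   # ~3x10^9
print(f"ice sheet steps:   {ice_steps:.0f}")     # 60,000
print(f"ratio: {atmos_steps / ice_steps:,.0f}")  # ~50,000:1
```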

Ice Sheet Models

Scientists who study ice sheets have a whole bunch of different questions. They want to understand how the ice sheets developed.

What makes them grow, shrink, move, slide, melt.. What parameters are important? What parameters are well understood? What research questions are most deserving of attention? And:

Does our understanding of ice sheet dynamics allow us to model the last glacial cycle?

To answer that question we need a model for ice sheet dynamics, and to that model we need to apply boundary conditions from some other “less interesting” models, like GCMs. As a result, there are a few approaches to setting the boundary conditions so we can do our interesting work of modeling ice sheets.

Before we look at that, let’s look at the dynamics of ice sheets themselves.

Ice Sheet Dynamics

First, in the theme of the last paper, Eighteen – “Probably Nonlinearity” of Unknown Origin, here is Marshall & Clark 2002:

The origin of the dominant 100-kyr ice-volume cycle in the absence of substantive radiation forcing remains one of the most vexing questions in climate dynamics

We can add that to the 34 papers reviewed in that previous article. This paper by Marshall & Clark is definitely a good quick read for people who want to understand ice sheets a little more.

Ice doesn’t conduct a lot of heat – it is a very good insulator. So the important things with ice sheets happen at the top and the bottom.

At the top, ice melts, and the water refreezes, runs off or evaporates. In combination, the loss is called ablation. Then we have precipitation that adds to the ice sheet. So the net effect determines what happens at the top of the ice sheet.

At the bottom, when the ice sheet is very thin, heat can be conducted through from the atmosphere to the base and make it melt – if the atmosphere is warm enough. As the ice sheet gets thicker, very little heat is conducted through. However, there are two important sources of heat at the base which can result in “basal sliding”. One source is geothermal energy. This is around 0.1 W/m², which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet, which can create a lot of heat via the mechanics of deformation.
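A quick sanity check on that geothermal number – a sketch which assumes all the heat goes into melting basal ice and ignores conduction into the ice above:

```python
# How much basal ice can 0.1 W/m2 melt over ice sheet timescales?
SECONDS_PER_YEAR = 365.25 * 24 * 3600
geothermal_flux = 0.1          # W/m2
years = 1000                   # one millennium

latent_heat_fusion = 3.34e5    # J/kg, ice
ice_density = 917              # kg/m3

energy = geothermal_flux * years * SECONDS_PER_YEAR        # J per m2
melt = energy / (latent_heat_fusion * ice_density)         # metres of ice
print(f"~{melt:.0f} m of basal ice melted per millennium") # ~10 m
```

So a flux that is negligible for the atmosphere or ocean becomes significant at the base of an insulating ice sheet over millennia.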

Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet “cold-pinned” to the rock underneath.

Some comments from Marshall and Clark:

Ice sheet deglaciation involves an amount of energy larger than that provided directly from high-latitude radiation forcing associated with orbital variations. Internal glaciologic, isostatic, and climatic feedbacks are thus essential to explain the deglaciation.

..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming, which may explain why the timing of Termination II occurred earlier than predicted by orbital forcing [Gallup et al., 2002].

Results suggest that basal temperature evolution plays an important role in setting the stage for glacial termination. To confirm this hypothesis, model studies need improved basal process physics to incorporate the glaciological mechanisms associated with ice sheet instability (surging, streaming flow).

..Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation.

Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation.

The timescale of basal temperature evolution is of the same order as the 100-kyr glacial cycle, suggesting that the establishment of warm-based ice over a large enough area of the ice sheet bed may have influenced the timing of deglaciation. Our results thus reinforce the notion that at a mature point in their life cycle, 100-kyr ice sheets become independent of orbital forcing and affect their own demise through internal feedbacks.

[Emphasis added]

In this article we will focus on a 2007 paper by Ayako Abe-Ouchi, T Segawa & Fuyuki Saito. This paper is essentially the same modeling approach used in Abe-Ouchi’s 2013 Nature paper.

The Ice Model

The ice sheet model has a time step of 2 years, on a 1° latitude × 1° longitude grid from 30°N to the North Pole, with 20 vertical levels.

Equations for the ice sheet include sliding velocity, ice sheet deformation, the heat transfer through the lithosphere, the bedrock elevation and the accumulation rate on the ice sheet.

Note, there is a reference that some of the model is based on work described in Sensitivity of Greenland ice sheet simulation to the numerical procedure employed for ice sheet dynamics, F Saito & A Abe-Ouchi, Ann. Glaciol., (2005) – but I don’t have access to this journal. (If anyone does, please email the paper to me at scienceofdoom – you know what goes here – gmail.com).

How did they calculate the accumulation on the ice sheet? There is an equation:

Acc = Aref × (1 + dP)^Ts

Ts is the surface temperature, dP is a measure of aridity and Aref is a reference value for accumulation. This is a highly parameterized method of calculating how much thicker or thinner the ice sheet is growing. The authors reference Marshall et al 2002 for this equation, and that paper is very instructive in how poorly understood ice sheet dynamics actually are.

Here is one part of the relevant section in Marshall et al 2002:

..For completeness here, note that we have also experimented with spatial precipitation patterns that are based on present-day distributions.

Under this treatment, local precipitation rates diminish exponentially with local atmospheric cooling, reflecting the increased aridity that can be expected under glacial conditions (Tarasov and Peltier, 1999).

Paleo-precipitation under this parameterization has the form:

P(λ,θ,t) = Pobs(λ,θ) × (1 + dp)^ΔT(λ,θ,t) × exp[βp × max(hs(λ,θ,t) − ht, 0)]       (18)

The parameter dp in this equation represents the percentage of drying per °C; Tarasov and Peltier (1999) choose a value of 3% per °C, i.e. dp = 0.03.

[Emphasis added, color added to highlight the relevant part of the equation]

So dp is a parameter that attempts to account for increasing aridity in colder glacial conditions, and in their 2002 paper Marshall et al describe it as 1 of 4 “free parameters” that are investigated to see what effect they have on ice sheet development around the LGM.
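To make the parameterization concrete, here is a minimal sketch of the Marshall et al 2002 form in Python, omitting the elevation-desert exponential term; the only number taken from the text is dp = 0.03:

```python
# Paleo-precipitation scaling: P = P_obs * (1 + dp)**dT, where dT is the
# local temperature change from present (negative = colder), so cooling
# exponentially suppresses precipitation.
def paleo_precip(p_obs, delta_t, d_p=0.03):
    """Scale present-day precipitation p_obs by (1 + d_p)**delta_t."""
    return p_obs * (1 + d_p) ** delta_t

# 10 degC of glacial cooling reduces precipitation by about 26%:
print(paleo_precip(1.0, -10.0))  # ~0.74
```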

Abe-Ouchi and co-authors took a slightly different approach that certainly seems like an improvement over Marshall et al 2002:

[Equation 11 from Abe-Ouchi et al 2007]

So their value of aridity is just a linear function of ice sheet area – from zero to a fixed value, rather than a fixed value no matter the ice sheet size.
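A sketch of that refinement might look like the following; the function name, the reference area and the capping behaviour are my illustrative assumptions, not taken from the paper:

```python
# Aridity parameter as a linear ramp with ice sheet area: zero with no
# ice, rising to a fixed maximum d0 at a full-glacial reference area.
def aridity(ice_area_m2, d0=0.03, glacial_area_m2=2.0e13):
    return d0 * min(ice_area_m2 / glacial_area_m2, 1.0)

print(aridity(0.0))     # 0.0   - no ice, no glacial drying
print(aridity(1.0e13))  # 0.015 - halfway to the full glacial value
```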

How is Ts calculated? That comes, in a way, from the atmospheric GCM, but probably not in a way that readers might expect. So let’s have a look at the GCM then come back to this calculation of Ts.

Atmospheric GCM Simulations

There were three groups of atmospheric GCM simulations, with parameters selected to try and tease out which factors have the most impact.

Group One: high resolution GCM – 1.1º latitude and longitude, 20 atmospheric vertical levels, with fixed sea surface temperatures. So there is no ocean model; ocean temperatures are prescribed. Within this group, four experiments:

  • A control experiment – modern day values
  • LGM (last glacial maximum) conditions for CO2 (note 1) and orbital parameters with
    • no ice
    • LGM ice extent but zero thickness
    • LGM ice extent and LGM thickness

So the idea is to compare results with and without the actual ice sheet, to see how much impact orbital and CO2 values have vs the effect of the ice sheet itself – and then, for the ice sheet, to see whether the albedo or the elevation has the most impact. Why the elevation? Well, if an ice sheet is 1 km thick then the surface temperature will be something like 6ºC colder. (Exactly how much colder is an interesting question, because we don’t know what the lapse rate actually was.) There will also be an effect on atmospheric circulation – you’ve stuck a “mountain range” in the path of the wind, so this changes the circulation.

Each of the four simulations was run for 11 or 13 years and the last 10 years’ results used:

From Abe-Ouchi et al 2007

Figure 1

It’s clear from this simulation that the full result (left graphic) is mostly caused by the ice sheet (right graphic) rather than CO2, orbital parameters and the SSTs (middle graphic). And the next figure in the paper shows the breakdown between the albedo effect and the height of the ice sheet:

From Abe-Ouchi et al 2007

Figure 2 – same color legend as figure 1

A lapse rate of 5 K/km was used. What would happen if a lapse rate of 9 K/km was used instead? No simulations were done with different lapse rates, but the authors comment:

..Other lapse rates could be used which vary depending on the altitude or location, while a lapse rate larger than 7 K/km or smaller than 4 K/km is inconsistent with the overall feature. This is consistent with the finding of Krinner and Genthon (1999), who suggest a lapse rate of 5.5 K/km, but is in contrast with other studies which have conventionally used lapse rates of 8 K/km or 6.5 K/km to drive the ice sheet models..

Group Two – medium resolution GCM – 2.8º latitude and longitude and 11 atmospheric vertical levels, with a “slab ocean” – this means the ocean is treated as one temperature through the depth of some fixed layer, like 50 m. So the ocean is present as a heat sink/source responding to climate, but there is no heat transfer through to a deeper ocean.

There were five simulations in this group, one control (modern day everything) and four with CO2 & orbital parameters at the LGM:

  • no ice sheet
  • LGM ice extent, but flat
  • 12 kyrs ago ice extent, but flat
  • 12 kyrs ago ice extent and height

So this group takes a slightly more detailed look at ice sheet impact. Not surprisingly the simulation results give intermediate values for the ice sheet extent at 12 kyrs ago.

Group Three – medium resolution GCM as in group two, and ice sheets either at present day or LGM, with nine simulations covering different orbital values, different CO2 values of present day, 280 or 200 ppm.

There was also some discussion of the impact of different climate models. I found this fascinating because the difference between CCSM and the other models appears to be as great as the difference in figure 2 (above) which identifies the albedo effect as more significant than the lapse rate effect:

From Abe-Ouchi et al 2007

Figure 3

And this naturally has me wondering about how much significance to put on the GCM simulation results shown in the paper. The authors also comment:

Based on these GCM results we conclude there remains considerable uncertainty over the actual size of the albedo effect.

Given there is also uncertainty over the lapse rate that actually occurred, it seems there is considerable uncertainty over everything.

Now let’s return to the ice sheet model, because so far we haven’t seen any output from the ice sheet model.

GCM Inputs into the Ice Sheet Model

The equation which calculates the change in accumulation on the ice sheet used a fairly arbitrary parameter dp, with (1+dp) raised to the power of Ts.

The ice sheet model has a 2-year time step. The GCM results don’t provide Ts across the surface grid every 2 years; they are snapshots for certain conditions. The ice sheet model uses this calculation for Ts:

Ts = Tref + ΔTice + ΔTCO2 + ΔTinsol + ΔTnonlinear

Tref is the reference temperature, which is present-day climatology. The other ΔT (change in temperature) values are basically a linear interpolation from two values of the GCM simulations. Here is the ΔTCO2 term:

[Equation 6 from Abe-Ouchi et al 2007]

So think of it like this – we have found Ts at one value of CO2 higher and one value of CO2 lower from some snapshot GCM simulations. We plot a graph with CO2 on the x-axis and Ts on the y-axis, with just two points on the graph from these two experiments, and we draw a straight line between the two points.

To calculate Ts at, say, 50 kyrs ago, we look up the CO2 value at 50 kyrs from ice core data and read the value of ΔTCO2 from the straight line on the graph.
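In code, the step just described is nothing more than this (the snapshot values are placeholders, not numbers from the paper):

```python
# Two snapshot GCM runs give the temperature response at two CO2 levels;
# any other CO2 value (read from an ice core record) is converted to a
# temperature change by linear interpolation between those two points.
def delta_t_co2(co2, co2_lo=200.0, dt_lo=-5.0, co2_hi=280.0, dt_hi=0.0):
    """Linearly interpolate the GCM temperature response to CO2 (ppm)."""
    return dt_lo + (co2 - co2_lo) * (dt_hi - dt_lo) / (co2_hi - co2_lo)

co2_at_50kyr = 220.0              # would come from ice core data
print(delta_t_co2(co2_at_50kyr))  # -3.75 degC (illustrative)
```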

Likewise for the other parameters. Here is ΔTinsol:

[Equation 7 from Abe-Ouchi et al 2007]

So the method is extremely basic. Of course the model needs something..

Now, given that we have inputs for accumulation on the ice sheet, the ice sheet model can run. Here are the results. The third graph (3) is the sea level from proxy results, so it is our best estimate of reality; (4) provides model outputs for different parameters of d0 (“desertification” or aridity) and lapse rate; and (5) provides outputs for different parameters of albedo and lapse rate:

From Abe-Ouchi et al 2007

Figure 4

There are three main points of interest.

Firstly, small changes in the parameters cause huge changes in the final results. The idea of aridity over ice sheets as just a linear function of ice sheet size is itself very questionable. The idea of a constant lapse rate is extremely questionable. Together, using values that appear realistic, we can model much less ice sheet growth (sea level drop) or many times greater ice sheet growth than actually occurred.

Secondly, notice that the time of maximum ice sheet (lowest sea level) for realistic results shows sea level starting to rise around 12 kyrs ago, rather than the actual 18 kyrs ago. This might be because orbital factors (high-latitude summer insolation) were at quite a low level when the last ice age finished, yet have quite an impact in the model. Of course, we have covered this “problem” in a few previous articles in this series. In the context of this model it might be that the impact of the southern hemisphere leading the globe out of the last ice age is completely missing.

Thirdly – while this might be clear to some people, for many new to this kind of model it won’t be obvious – the inputs to the model are taken from the actual history. The model doesn’t simulate the start and end of the last ice age “by itself”. We feed into the GCM a few CO2 values. We feed into the GCM a few ice sheet extents and heights that (as best as can be reconstructed) actually occurred. The GCM gives us some temperature values for these snapshot conditions.

In the case of this ice sheet model, every 2 years (each time step of the ice sheet model) we “look up” the actual value of ice sheet extent and atmospheric CO2 and we linearly interpolate the GCM output temperatures for the current year. And then we crudely parameterize these values into some accumulation rate on the ice sheet.
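Schematically, the whole driver reduces to something like the following toy loop – every value and function here is a placeholder illustrating the structure just described, not the real model:

```python
# Toy, zero-dimensional caricature of the driver: each 2-year step looks
# up prescribed CO2, interpolates snapshot GCM temperatures, and converts
# the temperature into an accumulation rate on a single "ice thickness".
def toy_run(n_years=10_000, dt=2):
    co2_of = lambda t: 280.0 - 80.0 * t / n_years   # fake prescribed CO2 record
    dt_co2 = lambda c: -5.0 * (280.0 - c) / 80.0    # snapshot interpolation
    thickness = 0.0
    for t in range(0, n_years, dt):
        ts = dt_co2(co2_of(t))                      # Ts = Tref + dT_co2 (Tref = 0)
        acc = 0.10 * 1.03 ** ts                     # (1 + dP)**Ts scaling, m/yr
        abl = 0.08                                  # fixed ablation, m/yr
        thickness = max(thickness + dt * (acc - abl), 0.0)
    return thickness

print(f"toy ice thickness after 10 kyr: {toy_run():.0f} m")
```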

Conclusion

This is our first foray into ice sheet models. It should be clear that the results are interesting but we are at a very early stage in modeling ice sheets.

The problems are:

  • the computational load required to run a GCM coupled with an ice sheet model over 120 kyrs is much too high, so it can’t be done
  • the resulting tradeoff uses a few GCM snapshot values to feed linearly interpolated temperatures into a parameterized accumulation equation
  • the effect of lapse rate on the results is extremely large and the actual value for lapse rate over ice sheets is very unlikely to be a constant and is also not known
  • our understanding of the fundamental equations of ice sheets is still at an early stage, as readers can see by reviewing the first two papers below, especially the second one

Articles in this Series

Part One - An introduction

Part Two – Lorenz - one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton - how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff - how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes - and in a bit more detail

Part Six – “Hypotheses Abound” - lots of different theories that confusingly go by the same name

Part Seven – GCM I - early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap - my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II - more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III - very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV - very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age - a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age - latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination - very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II - looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data - getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers - reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II - remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I - explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin - what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

References

Basal temperature evolution of North American ice sheets and implications for the 100-kyr cycle, SJ Marshall & PU Clark, GRL (2002) – free paper

North American Ice Sheet reconstructions at the Last Glacial Maximum, SJ Marshall, TS James, GKC Clarke, Quaternary Science Reviews (2002) – free paper

Climatic Conditions for modelling the Northern Hemisphere ice sheets throughout the ice age cycle, A Abe-Ouchi, T Segawa, and F Saito, Climate of the Past (2007) – free paper

Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume, Ayako Abe-Ouchi, Fuyuki Saito, Kenji Kawamura, Maureen E. Raymo, Jun’ichi Okuno, Kunio Takahashi & Heinz Blatter, Nature (2013) – paywall paper

Notes

Note 1 – the value of CO2 used in these simulations was 200 ppm, while CO2 at the LGM was actually 180 ppm. Apparently this value of 200 ppm was used in a major inter-comparison project (the PMIP), but I don’t know the reason why. PMIP = Paleoclimate Modelling Intercomparison Project, Joussaume and Taylor, 1995.


In Thirteen – Terminator II we had a cursory look at the different “proxies” for temperature and ice volume/sea level. And we’ve considered some issues around dating of proxies.

There are two main proxies we have used so far to take a look back into the ice ages:

  • δ18O in deep ocean cores in the shells of foraminifera – to measure ice volume
  • δ18O in the ice in ice cores (Greenland and Antarctica) – to measure temperature

Now we want to take a closer look at the proxies themselves. It’s a necessary subject if we want to understand ice ages, because the proxies don’t actually measure what they might be assumed to measure. This is a separate issue from the dating: of ice; of gas trapped in ice; and of sediments in deep ocean cores.

If we take samples of ocean water, H2O, and measure the proportion of the oxygen isotopes, we find (Ferronsky & Polyakov 2012):

  • 16O – 99.757 %
  • 17O –   0.038%
  • 18O –   0.205%

There is another significant water isotope, Deuterium – aka, “heavy hydrogen” – where the water molecule is HDO, also written as 1H2HO – instead of H2O.

The processes that affect ratios of HDO are similar to the processes that affect the ratios of H218O, and consequently either isotope ratio can provide a temperature proxy for ice cores. A value of δD equates, very roughly, to 10x a value of δ18O, so mentally you can use this ratio to convert from δ18O to δD (see note 1).

In Note 2 I’ve included some comments on the Dole effect, which is the relationship between the ocean isotopic composition and the atmospheric oxygen isotopic composition. It isn’t directly relevant to the discussion of proxies here, because the ocean is the massive reservoir of 18O and the amount in the atmosphere is very small in comparison (1/1000). However, it might be of interest to some readers and we will return to the atmospheric value later when looking at dating of Antarctic ice cores.

Terminology and Definitions

The 18O/16O isotope ratio of ocean water is about 2.005 ‰, that is, about 0.2%. This is turned into a reference, known as Vienna Standard Mean Ocean Water (VSMOW). So with respect to VSMOW, δ18O of ocean water = 0. It’s just a definition. The change is shown as δ, the Greek letter delta, very commonly used in maths and physics to mean “change”.

The values of isotopes are usually expressed in terms of changes from the norm, that is, from the absolute standard. And because the changes are quite small they are expressed as parts per thousand = per mil = ‰, instead of percent, %.

So as δ18O changes from 0 (ocean water) to -50‰ (typically the lowest value of ice in Antarctica), the proportion of 18O goes from 0.20% (2.0‰) to 0.19% (1.9‰).

If the terminology is confusing think of the above example as a 5% change. What is 5% of 20? Answer is 1; and 20 – 1 = 19. So the above example just says if we reduce the small amount, 2 parts per thousand of 18O by 5% we end up with 1.9 parts per thousand.
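The same arithmetic as a calculation – a small sketch of the δ notation, using the VSMOW ratio quoted above:

```python
# Delta notation: deviation of a sample's 18O/16O ratio from the VSMOW
# reference, expressed in parts per thousand (per mil).
R_VSMOW = 2.005e-3   # 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta_18O(r_sample, r_ref=R_VSMOW):
    return (r_sample / r_ref - 1.0) * 1000.0

r_antarctic_ice = 0.95 * R_VSMOW    # 5% less 18O than ocean water
print(delta_18O(r_antarctic_ice))   # -50.0 per mil
```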

Here is a graph that links the values together:

From Hoefs 2009

Figure 1

Fractionation, or Why Ice Sheets are So Light

We’ve seen this graph before – the δ18O (of ice) in Greenland (NGRIP) and Antarctica (EDML) ice sheets against time:

From EPICA 2006

Figure 2

Note that the values of δ18O from Antarctica (EDML – top line) through the last 150 kyrs are from about -40 to -52 ‰. And the values from Greenland (NGRIP – black line in middle section) are from about -32 to -44 ‰.

There are some standard explanations around – like this link – but I’m not sure the graphic alone quite explains it, unless you understand the subject already..

If we measure the 18O concentration of a body of water, then we measure the 18O concentration of the water vapor above it, we find that the water vapor value has 18O at about -10 ‰ compared with the body of water. We write this as δ18O = -10 ‰. That is, the water vapor is a little lighter, isotopically speaking, than the ocean water.

The processes (fractionation) that cause this are easy to reproduce in the lab:

  • during evaporation, the lighter isotopes evaporate preferentially
  • during precipitation, the heavier isotopes precipitate preferentially

(See note 3).

So let’s consider the journey of a parcel of water vapor evaporated somewhere near the equator. The water vapor is a little reduced in 18O (compared with the ocean) due to the evaporation process. As the parcel of air travels away from the equator it rises and cools and some of the water vapor condenses. The initial rain takes proportionately more 18O than is in the parcel – so the parcel of air gets depleted in 18O. It keeps moving away from the equator, the air gets progressively colder, it keeps raining out, and the further it goes the less the proportion of 18O remains in the parcel of air. By the time precipitation forms in polar regions the water or ice is very light isotopically, that is, δ18O is the most negative it can get.
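This progressive rain-out is classic Rayleigh distillation, and a minimal sketch shows how quickly the vapor gets depleted. A single fixed fractionation factor is an illustrative simplification – in reality it increases as the parcel cools:

```python
# Rayleigh distillation: isotopic ratio of the remaining vapor follows
# R = R0 * f**(alpha - 1), where f is the fraction of vapor left after
# rain-out, expressed here in delta notation.
alpha = 1.0092    # liquid-vapor 18O fractionation factor at 25 degC (see note 3)
delta_0 = -10.0   # per mil: vapor freshly evaporated from the ocean

for f in (1.0, 0.5, 0.1, 0.02):   # fraction of original vapor remaining
    delta = (1000.0 + delta_0) * f ** (alpha - 1.0) - 1000.0
    print(f"f = {f:4.2f}   vapor delta-18O = {delta:6.1f} per mil")
```

With only 2% of the vapor remaining, the vapor is already down around −45‰, and precipitation formed from it is correspondingly light.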

As a very simplistic idea of water vapor transport, this explains why the ice sheets in Greenland and Antarctica have isotopic values that are very low in 18O. Let’s take a look at some data to see how well such a simplistic idea holds up..

The isotopic composition of precipitation:

From Gat 2010

Figure 3 – Click to Enlarge

We can see the broad result represented quite well – the further we are in the direction of the poles the lower the isotopic composition of precipitation.

In contrast, when we look at local results in some detail we don’t see such a tidy picture. Here are some results from Rindsberger et al (1990) from central and northern Israel:

From Rindsberger et al 1990

Figure 4

From Rindsberger et al 1990

Figure 5

The authors comment:

It is quite surprising that the seasonally averaged isotopic composition of precipitation converges to a rather well-defined value, in spite of the large differences in the δ value of the individual precipitation events which show a range of 12‰ in δ18O.. At Bet-Dagan.. from which we have a long history.. the amount weighted annual average is δ18O = −5.07 ‰ ± 0.62 ‰ for the 19 year period of 1965-86. Indeed the scatter of ± 0.6‰ in the 19-year long series is to a significant degree the result of a 4-year period with lower δ values, namely the years 1971-75 when the averaged values were δ18O = −5.7 ‰ ± 0.2 ‰. That period was one of worldwide climate anomalies. Evidently the synoptic pattern associated with the precipitation events controls both the mean isotopic values of the precipitation and its variability.

The seminal 1964 paper by Willi Dansgaard is well worth a read for a good overview of the subject:

As pointed out.. one cannot use the composition of the individual rain as a direct measure of the condensation temperature. Nevertheless, it has been possible to show a simple linear correlation between the annual mean values of the surface temperature and the δ18O content in high latitude, non-continental precipitation. The main reason is that the scattering of the individual precipitation compositions, caused by the influence of numerous meteorological parameters, is smoothed out when comparing average compositions at various locations over a sufficiently long period of time (a whole number of years).

The somewhat revised and extended correlation is shown in fig. 3..

From Dansgaard 1964

Figure 6

So we appear to have a nice tidy picture when looking at annual means, a little bit like the (article) figure 3 from Gat’s 2010 textbook.

Before “muddying the waters” a little, let’s have a quick look at ocean values.

Ocean δ18O

We can see that the ocean, as we might expect, is much more homogenous, especially the deep ocean. Note that these results are δD (think, about 10x the value of δ18O):

From Ferronsky & Polyakov (2012)

Figure 7 – Click to enlarge

And some surface water values of δD (and also salinity), where we see a lot more variation, again as we might expect:

From Ferronsky & Polyakov 2012

Figure 8

If we do a quick back-of-the-envelope calculation, using the fact that the sea level change between the last glacial maximum (LGM) and the current interglacial was about 120 m and the average ocean depth is 3680 m: roughly 3% of the ocean was locked up in ice sheets strongly depleted in 18O, so we expect a glacial-interglacial change in ocean δ18O of about 1.5 ‰.
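Spelled out as a calculation (the mean composition of the ice sheets is the key assumption – something around −35 to −45‰, in line with the estimates quoted later in this article):

```python
# Mass balance: removing a fraction f of the ocean as isotopically light
# ice enriches the water left behind.
ocean_depth = 3680.0     # m, average
sea_level_fall = 120.0   # m of ocean locked up in ice sheets at the LGM
delta_ice = -40.0        # per mil, assumed mean composition of glacial ice

f = sea_level_fall / ocean_depth           # ~3.3% of the ocean removed
delta_ocean = -f * delta_ice / (1.0 - f)   # enrichment of the remainder
print(f"glacial ocean enrichment: +{delta_ocean:.1f} per mil")  # ~1.3-1.5
```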

This is why the foraminifera near the bottom of the ocean, capturing 18O from the ocean, are recording ice volume, whereas the ice cores are recording atmospheric temperatures.

Note as well that during the glacial, with more ice locked up in ice sheets, the value of ocean δ18O will be higher. So colder atmospheric temperatures relate to lower values of δ18O in precipitation, but – due to the increase in ice, depleted in 18O – higher values of ocean δ18O.

Muddying the Waters

Hoefs 2009 gives a good summary of the different factors in isotopic precipitation:

The first detailed evaluation of the equilibrium and nonequilibrium factors that determine the isotopic composition of precipitation was published by Dansgaard (1964). He demonstrated that the observed geographic distribution in isotope composition is related to a number of environmental parameters that characterize a given sampling site, such as latitude, altitude, distance to the coast, amount of precipitation, and surface air temperature.

Out of these, two factors are of special significance: temperature and the amount of precipitation. The best temperature correlation is observed in continental regions nearer to the poles, whereas the correlation with amount of rainfall is most pronounced in tropical regions as shown in Fig. 3.15.

The apparent link between local surface air temperature and the isotope composition of precipitation is of special interest mainly because of the potential importance of stable isotopes as palaeoclimatic indicators. The amount effect is ascribed to gradual saturation of air below the cloud, which diminishes any shift to higher δ18O-values caused by evaporation during precipitation.

[Emphasis added]

From Hoefs 2009

Figure 9

The points that Hoefs makes indicate some of the problems relating to using δ18O as the temperature proxy. We have competing influences that depend on the source and journey of the air parcel responsible for the precipitation. What if circulation changes?

For readers who have followed the past discussions here on water vapor (e.g., see Clouds & Water Vapor – Part Two) this is a similar kind of story. With water vapor, there is a very clear relationship between ocean temperature and absolute humidity, so long as we consider the boundary layer. But what happens when the air rises high above that – then the amount of water vapor at any location in the atmosphere is dependent on the past journey of air, and as a result the amount of water vapor in the atmosphere depends on large scale circulation and large scale circulation changes.

The same question arises with isotopes and precipitation.

The ubiquitous Jean Jouzel and his colleagues (including Willi Dansgaard) from their 1997 paper:

In Greenland there are significant differences between temperature records from the East coast and the West coast which are still evident in 30 yr smoothed records. The isotopic records from the interior of Greenland do not appear to follow consistently the temperature variations recorded at either the east coast or the west coast..

This behavior may reflect the alternating modes of the North Atlantic Oscillation..

They [simple models] are, however, limited to the study of idealized clouds and cannot account for the complexity of large convective systems, such as those occurring in tropical and equatorial regions. Despite such limitations, simple isotopic models are appropriate to explain the main characteristics of δD and δ18O in precipitation, at least in middle and high latitudes where the precipitation is not predominantly produced by large convective systems.

Indeed, their ability to correctly simulate the present-day temperature-isotope relationships in those regions has been the main justification of the standard practice of using the present day spatial slope to interpret the isotopic data in terms of records of past temperature changes.

Notice that, at least for Antarctica, data and simple models agree only with respect to the temperature of formation of the precipitation, estimated by the temperature just above the inversion layer, and not with respect to the surface temperature, which owing to a strong inversion is much lower..

Thus one can easily see that using the spatial slope as a surrogate of the temporal slope strictly holds true only if the characteristics of the source have remained constant through time.

[Emphases added]

If all the precipitation occurs during warm summer months, for example, the “annual δ18O” will naturally reflect a temperature warmer than Ts [annual mean]..

If major changes in seasonality occur between climates, such as a shift from summer-dominated to winter- dominated precipitation, the impact on the isotope signal could be large..it is the temperature during the precipitation events that is imprinted in the isotopic signal.

Second, the formation of an inversion layer of cold air up to several hundred meters thick over polar ice sheets makes the temperature of formation of precipitation warmer than the temperature at the surface of the ice sheet. Inversion forms under a clear sky.. but even in winter it is destroyed rapidly if thick cloud moves over a site..

As a consequence of precipitation intermittency and of the existence of an inversion layer, the isotope record is only a discrete and biased sampling of the surface temperature and even of the temperature at the atmospheric level where the precipitation forms. Current interpretation of paleodata implicitly assumes that this bias is not affected by climate change itself.

Now onto the oceans, surely much simpler, given the massive well-mixed reservoir of 18O?

Mix & Ruddiman (1984):

The oxygen-isotopic composition of calcite is dependent on both the temperature and the isotopic composition of the water in which it is precipitated

..Because he [Shackleton] analyzed benthonic, instead of planktonic, species he could assume minimal temperature change (limited by the freezing point of deep-ocean water). Using this constraint, he inferred that most of the oxygen-isotope signal in foraminifera must be caused by changes in the isotopic composition of seawater related to changing ice volume, that temperature changes are a secondary effect, and that the isotopic composition of mean glacier ice must have been about -30 ‰.

This estimate has generally been accepted, although other estimates of the isotopic composition have been made by Craig (-17‰); Eriksson (-25‰), Weyl (-40‰) and Dansgaard & Tauber (≤30‰)

..Although Shackleton’s interpretation of the benthonic isotope record as an ice-volume/sea-level proxy is widely quoted, there is considerable disagreement between ice-volume and sea-level estimates based on δ18O and those based on direct indicators of local sea level. A change in δ18O of 1.6‰ at δ(ice) = −35‰ suggests a sea-level change of 165 m.

..In addition, the effect of deep-ocean temperature changes on benthonic isotope records is not well constrained. Benthonic δ18O curves with amplitudes up to 2.2 ‰ exist (Shackleton, 1977; Duplessy et al., 1980; Ruddiman and McIntyre, 1981) which seem to require both large ice-volume and temperature effects for their explanation.

Many other heavyweights in the field have explained similar problems.

We will return to both of these questions in the next article.

Conclusion

Understanding the basics of isotopic changes in water and water vapor is essential to understand the main proxies for past temperatures and past ice volumes. Previously we looked at problems relating to dating of the proxies; in this article we have looked at the proxies themselves.

There is good evidence that current values of isotopes in precipitation and ocean values give us a consistent picture that we can largely understand. The question about the past is more problematic.

I started looking seriously at proxies as a means to perhaps understand the discrepancies for key dates of ice age terminations between radiometric dating and ocean cores (see Thirteen – Terminator II). Sometimes the more you know, the less you understand..

Articles in the Series

Part One - An introduction

Part Two – Lorenz - one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton - how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff - how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes - and in a bit more detail

Part Six – “Hypotheses Abound” - lots of different theories that confusingly go by the same name

Part Seven – GCM I - early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap - my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II - more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III - very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV - very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age - a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age - latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination - very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II - looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data - getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers - reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II - comparing the results if we take the Huybers dataset and tie the last termination to the date implied by various radiometric dating

Eighteen – “Probably Nonlinearity” of Unknown Origin - what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I - looking at the state of ice sheet models

References

Isotopes of the Earth’s Hydrosphere, VI Ferronsky & VA Polyakov, Springer (2012)

Isotope Hydrology – A Study of the Water Cycle, Joel R Gat, Imperial College Press (2010)

Stable Isotope Geochemistry, Jochen Hoefs, Springer (2009)

Patterns of the isotopic composition of precipitation in time and space: data from the Israeli storm water collection program, M Rindsberger, Sh Jaffe, Sh Rahamim and JR Gat, Tellus (1990) – free paper

Stable isotopes in precipitation, Willi Dansgaard, Tellus (1964) – free paper

Validity of the temperature reconstruction from water isotopes in ice cores, J Jouzel, RB Alley, KM Cuffey, W Dansgaard, P Grootes, G Hoffmann, SJ Johnsen, RD Koster, D Peel, CA Shuman, M Stievenard, M Stuiver, J White, Journal of Geophysical Research (1997) – free paper

Oxygen Isotope Analyses and Pleistocene Ice Volumes, Mix & Ruddiman, Quaternary Research (1984)  - free paper

- and on the Dole effect, only covered in Note 2:

The Dole effect and its variations during the last 130,000 years as measured in the Vostok ice core, Michael Bender, Todd Sowers, Laurent Labeyrie, Global Biogeochemical Cycles (1994) – free paper

A model of the Earth’s Dole effect, Georg Hoffmann, Matthias Cuntz, Christine Weber, Philippe Ciais, Pierre Friedlingstein, Martin Heimann, Jean Jouzel, Jörg Kaduk, Ernst Maier-Reimer, Ulrike Seibt & Katharina Six, Global Biogeochemical Cycles (2004) – free paper

The isotopic composition of atmospheric oxygen Boaz Luz & Eugeni Barkan, Global Biogeochemical Cycles (2011) – free paper

Notes

Note 1: There is a relationship between δ18O and δD which is linked to the difference in vapor pressures between H2O and HDO in one case and H216O and H218O in the other case.

δD = 8 δ18O + 10 – known as the Global Meteoric Water Line.

The equation is more of a guide and real values vary sufficiently that I’m not really clear about its value. There are lengthy discussions of it and the variations from it in Ferronsky & Polyakov.

Note 2: The Dole effect

When we measure atmospheric oxygen, we find that the δ18O = 23.5 ‰ with respect to the oceans (VSMOW) – this is the Dole effect

So, oxygen in the atmosphere has a greater proportion of 18O than the ocean

Why?

How do the atmosphere and ocean exchange oxygen? In essence, photosynthesis turns sunlight + water (H2O) + carbon dioxide (CO2) → sugar + oxygen (O2).

Respiration turns sugar + oxygen → water + carbon dioxide + energy.

The isotopic composition of the water in photosynthesis affects the resulting isotopic composition in the atmospheric oxygen.

The reason the Dole effect exists is well understood, but the reason why the value comes out at 23.5‰ is still under investigation. This is because the result is the global aggregate of lots of different processes. So we might understand the individual processes quite well, but that doesn’t mean the global value can be calculated accurately.

It is also the case that δ18O of atmospheric O2 has varied in the past – as revealed first of all in the Vostok ice core from Antarctica.

Michael Bender and his colleagues had a go at calculating the value from first principles in 1994. As they explain (see below), although it might seem as though their result is quite close to the actual number, it is not a very successful result at all. Basically, due to the essential process you start at 20‰ and should get to 23.5‰, but they only got to 20.8‰.

Bender et al 1994:

The δ18O of O2.. reflects the global responses of the land and marine biospheres to climate change, albeit in a complex manner.. The magnitude of the Dole effect mainly reflects the isotopic composition of O2 produced by marine and terrestrial photosynthesis, as well as the extent to which the heavy isotope is discriminated against during respiration..

..Over the time period of interest here, photosynthesis and respiration are the most important reactions producing and consuming O2. The isotopic composition of O2 in air must therefore be understood in terms of isotope fractionation associated with these reactions.

The δ18O of O2 produced by photosynthesis is similar to that of the source water. The δ18O of O2 produced by marine plants is thus 0‰. The δ18O of O2 produced on the continents has been estimated to lie between +4 and +8‰. These elevated δ18O values are the result of elevated leaf water δ18O values resulting from evapotranspiration.

..The calculated value for the Dole effect is then the productivity-weighted values of the terrestrial and marine Dole effects minus the stratospheric diminution: +20.8‰. This value is considerably less than observed (23.5‰). The difference between the expected value and the observed value reflects errors in our estimates and, conceivably, unrecognized processes.

Then they assess the Vostok record, where the main question is less about why the Dole effect apparently varies with precession (period of about 20 kyrs) than about why the variation is so small. After all, if marine and terrestrial biosphere changes are significant from interglacial to glacial then surely those changes would reflect more strongly in the Dole effect:

Why has the Dole effect been so constant? Answering this question is impossible at the present time, but we can probably recognize the key influences..

They conclude:

Our ability to explain the magnitude of the contemporary Dole effect is a measure of our understanding of the global cycles of oxygen and water. A variety of recent studies have improved our understanding of many of the principles governing oxygen isotope fractionation during photosynthesis and respiration.. However, our attempt to quantitatively account for the Dole effect in terms of these principles was not very successful.. The agreement is considerably worse than it might appear given the fact that respiratory isotope fractionation alone must account for ~20‰ of the stationary enrichment of the 18O of O2 compared with seawater..

..[On the Vostok record] Our results show that variations in the Dole effect have been relatively small during most of the last glacial-interglacial cycle. These small changes are not consistent with large glacial increases in global oceanic productivity.

[Emphasis added]

Georg Hoffmann and his colleagues had another bash 10 years later and did a fair bit better:

The Earth’s Dole effect describes the isotopic 18O/16O-enrichment of atmospheric oxygen with respect to ocean water, amounting under today’s conditions to 23.5‰. We have developed a model of the Earth’s Dole effect by combining the results of three- dimensional models of the oceanic and terrestrial carbon and oxygen cycles with results of atmospheric general circulation models (AGCMs) with built-in water isotope diagnostics.

We obtain a range from 22.4‰ to 23.3‰ for the isotopic enrichment of atmospheric oxygen. We estimate a stronger contribution to the global Dole effect by the terrestrial relative to the marine biosphere in contrast to previous studies. This is primarily caused by a modeled high leaf water enrichment of 5–6‰. Leaf water enrichment rises by ~1‰ to 6–7‰ when we use it to fit the observed 23.5‰ of the global Dole effect.

Very recently, Luz & Barkan (2011), backed up by lots of new experimental work, produced a slightly closer estimate with some revisions of the Hoffmann et al results:

Based on the new information on the biogeochemical mechanisms involved in the global oxygen cycle, as well as new and more precise experimental data on oxygen isotopic fractionation in various processes obtained over the last 15 years, we have reevaluated the components of the Dole effect. Our new observations on marine oxygen isotope effects, as well as new findings on photosynthetic fractionation by marine organisms, lead to the important conclusion that the marine, terrestrial and the global Dole effects are of similar magnitudes.

This result allows answering a long‐standing unresolved question on why the magnitude of the Dole effect of the last glacial maximum is so similar to the present value despite enormous environmental differences between the two periods. The answer is simple: if DEmar [marine Dole effect] and DEterr [terrestrial Dole effect] are similar, there is no reason to expect considerable variations in the magnitude of the Dole effect as the result of variations in the ratio terrestrial to marine O2 production.

Finally, the widely accepted view that the magnitude of the Dole effect is controlled by the ratio of land‐to‐sea productivity must be changed. Instead of the land‐sea control, past variations in the Dole effect are more likely the result of changes in low‐latitude hydrology and, perhaps, in structure of marine phytoplankton communities.

[Emphasis added]

Note 3:

Jochen Hoefs (2009):

Under equilibrium conditions at 25ºC, the fractionation factors for evaporating water are 1.0092 for 18O and 1.074 for D. However under natural conditions, the actual isotopic composition of water is more negative than the predicted equilibrium values due to kinetic effects.

The discussion of kinetic effects gets a little involved and I don’t think it is really necessary here – the values of isotopic fractionation during evaporation and condensation are well understood. The confounding factors around what the proxies really measure relate to the journey (i.e. temperature history) and mixing of the various air parcels, as well as the temperature of the air relating to the precipitation event – is it the surface temperature, the inversion temperature, or both?


In Wonderland, Radiative Forcing and the Rate of Inflation we looked at the definition of radiative forcing and a few concepts around it:

  • why the instantaneous forcing is different from the adjusted forcing
  • what adjusted forcing is and why it’s a more useful concept
  • why the definition of the tropopause affects the value
  • GCM results usually don’t use radiative forcing as an input

In this article we will look at some results using the Wonderland model.

Remember the Wonderland model is not the earth. But the same is also true of “real” GCMs with geographical boundaries that match the earth as we know it. They are not the earth either. All models have limitations. This is easy to understand in principle. It is challenging to understand in the specifics of where the limitations are, even for specialists – and especially for non-specialists.

What the Wonderland model provides is a coarse geography with earth-like layout of land and ocean, plus of course, physics that follows the basic equations. And using this model we can get a sense of how radiative forcing is related to temperature changes when the same value of radiative forcing is applied via different mechanisms.

In the 1997 paper I think that Hansen, Sato & Ruedy did a decent job of explaining the limitations of radiative forcing, at least as far as the Wonderland climate model is able to assist us with that understanding. Remember as well that, in general, results we see from GCMs do not use radiative forcing. Instead they calculate from first principles – or parameterized first principles.

Doubling CO2

Now there’s a lot in this first figure, it can be a bit overwhelming. We’ll take it one step at a time. We double CO2 overnight – in Wonderland – and we see various results. The left half of the figure is all about flux while the right half is all about temperature:

From Hansen et al 1997

Figure 1 – Green text added – Click to Expand

On the top line, the first two graphs are the net flux change, as a function of height and latitude. First left – instantaneous; second left – adjusted. These two cases were explained in the last article.

The second left is effectively the “radiative forcing”, and we can see that above the tropopause (at about 200 mbar) the net flux change with height is constant. This is because the stratosphere has come into radiative balance. Refer to the last article for more explanation. On the right hand side, with all feedbacks from this one change in Wonderland, we can see the famous predicted “tropospheric hot spot” and the cooling of the stratosphere.

We see in the bottom two rows on the right the expected temperature change:

  • second row – change in temperature as a function of latitude and season (where temperature is averaged across all longitudes)
  • third row – change in temperature as a function of latitude and longitude (averaged annually)

It’s interesting to see the larger temperature increases predicted near the poles. I’m not sure I really understand the mechanisms driving that. Note that the radiative forcing is generally higher in the tropics and lower at the poles, yet the temperature change is the other way round.

Increasing Solar Radiation by 2%

Now let’s take a look at a comparison exercise, increasing solar radiation by 2%.

The responses to these comparable global forcings, 2xCO2 & +2% S0, are similar in a gross sense, as found by previous investigators. However, as we show in the sections below, the similarity of the responses is partly accidental, a cancellation of two contrary effects. We show in section 5 that the climate model (and presumably the real world) is much more sensitive to a forcing at high latitudes than to a forcing at low latitudes; this tends to cause a greater response for 2xCO2 (compare figures 4c & 4g); but the model is also more sensitive to a forcing that acts at the surface and lower troposphere than to a forcing which acts higher in the troposphere; this favors the solar forcing (compare figures 4a & 4e), partially offsetting the latitudinal sensitivity.

We saw figure 4 in the previous article, repeated again here for reference:

From Hansen et al (1997)

Figure 2

In case the above comment is not clear, absorbed solar radiation is more concentrated in the tropics and a minimum at the poles, whereas CO2 is evenly distributed (a “well-mixed greenhouse gas”). So a similar average radiative change will cause a more tropical effect for solar but a more even effect for CO2.
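As a toy illustration of that contrast – my own sketch, not from the paper – distribute the same 4 W/m² global-mean forcing uniformly (well-mixed CO2) versus in proportion to a crude cosine-of-latitude insolation pattern (ignoring albedo variations and seasonality):

```python
import numpy as np

lats = np.deg2rad(np.linspace(-89, 89, 90))
area_w = np.cos(lats) / np.sum(np.cos(lats))     # area weight of each band

global_mean = 4.0                                # W/m2, like 2xCO2
co2_forcing = np.full_like(lats, global_mean)    # well-mixed: uniform

shape = np.cos(lats)                             # crude annual-mean insolation
solar_forcing = global_mean * shape / np.sum(shape * area_w)

i_eq, i_pole = len(lats) // 2, 0
print(f"CO2:   equator {co2_forcing[i_eq]:.1f}, pole {co2_forcing[i_pole]:.2f} W/m2")
print(f"solar: equator {solar_forcing[i_eq]:.1f}, pole {solar_forcing[i_pole]:.2f} W/m2")
```

Same global mean, very different latitudinal pattern – which is why the two offsetting sensitivities described in the quote above (to latitude and to altitude of the forcing) make the similar global responses “partly accidental”.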

We can see that clearly in the comparable graphic for a solar increase of 2%:

From Hansen et al (1997)

Figure 3 – Green text added

We see that the change in net flux is higher at the surface than in the 2xCO2 case, and is much more concentrated in the tropics.

We also see the predicted tropospheric hot spot looking pretty similar to the 2xCO2 tropospheric hot spot (see note 1).

But unlike the cooler stratosphere of the 2xCO2 case, we see an unchanging stratosphere for this increase in solar irradiation.

These same points can also be seen in figure 2 above (figure 4 from Hansen et al).

Here is the table which compares radiative forcing (instantaneous and adjusted), no feedback temperature change, and full-GCM calculated temperature change for doubling CO2, increasing solar by 2% and reducing solar by 2%:

From Hansen et al 1997

Figure 4 – Green text added

The value R (far right of the table) is the predicted temperature change from a given forcing divided by the predicted temperature change from the 2% increase in solar radiation.

Now the paper also includes some ozone changes which are pretty interesting, but won’t be discussed here (unless we have questions from people who have read the paper of course).

“Ghost” Forcings

The authors then go on to consider what they call ghost forcings:

How does the climate response depend on the time and place at which a forcing is applied? The forcings considered above all have complex spatial and temporal variations. For example, the change of solar irradiance varies with time of day, season, latitude, and even longitude because of zonal variations in ground albedo and cloud cover. We would like a simpler test forcing.

We define a “ghost” forcing as an arbitrary heating added to the radiative source term in the energy equation.. The forcing, in effect, appears magically from outer space at an atmospheric level, latitude range, season and time of day. Usually we choose a ghost forcing with a global and annual mean of 4 W/m², making it comparable to the 2xCO2 and +2% S0 experiments.

In the following table we see the results of various experiments:

From Hansen et al (1997)

Figure 5

We note that the feedback factor for the ghost forcing varies with the altitude of the forcing by about a factor of two. We also note that a substantial surface temperature response is obtained even when the forcing is located entirely within the stratosphere. Analysis of these results requires that we first quantify the effect of cloud changes. However, the results can be understood qualitatively as follows.

Consider ΔTs in the case of fixed clouds. As the forcing is added to successively higher layers, there are two principal competing effects. First, as the heating moves higher, a larger fraction of the energy is radiated directly to space without warming the surface, causing ΔTs to decline as the altitude of the forcing increases. However, second, warming of a given level allows more water vapor to exist there, and at the higher levels water vapor is a particularly effective greenhouse gas. The net result is that ΔTs tends to decline with the altitude of the forcing, but it has a relative maximum near the tropopause.

When clouds are free to change, the surface temperature change depends even more on the altitude of the forcing (figure 8). The principal mechanism is that heating of a given layer tends to decrease large-scale cloud cover within that layer. The dominant effect of decreased low-level clouds is a reduced planetary albedo, thus a warming, while the dominant effect of decreased high clouds is a reduced greenhouse effect, thus a cooling. However, the cloud cover, the cloud cover changes and the surface temperature sensitivity to changes may depend on characteristics of the forcing other than altitude, e.g. latitude, so quantitative evaluation requires detailed examination of the cloud changes (section 6).

Conclusion

Radiative forcing is a useful concept which gives a headline idea about the imbalance in climate equilibrium caused by something like a change in “greenhouse” gas concentration.

GCM calculations of temperature change over a few centuries do vary significantly with the exact nature of the forcing – primarily its vertical and geographical distribution. This means that a calculated radiative forcing of, say, 1 W/m² from two different mechanisms (e.g. ozone and CFCs) would (according to GCMs) not necessarily produce the same surface temperature change.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Notes

Note 1: The reason for the predicted hot spot is that more water vapor causes a lower lapse rate – which increases the temperature higher up in the troposphere relative to the surface. This change is concentrated in the tropics because the tropics are hotter and, therefore, have much more water vapor. The dry polar regions cannot get a lapse rate change from more water vapor because the effect is so small.

Any increase in surface temperature is predicted to cause this same change.

From my limited research, the idealized picture of the hot spot described above is not actually what the real model results show. The top graph is the "just CO2" case, and the bottom graph is "CO2 + aerosols" – the second graph is obviously closer to the real case:

From Santer et al 1996

Many people have asked for my comment on the hot spot, but apart from putting forward an opinion I haven’t spent enough time researching this topic to understand it. From time to time I do dig in, but it seems that there are about 20 papers that need to be read to say something useful on the topic. Unfortunately many of them are heavy in stats and my interest wanes.


Radiative forcing is a "useful" concept in climate science.

But while it informs, it also obscures, and many people are confused about its applicability. Many people are also confused about why stratospheric adjustment takes place and what that means. And why does the definition of the tropopause – a concept without one definite meaning – affect this all-important concept of radiative forcing? Surely there is a definition which is clear and unambiguous?

So there are a few things we will attempt to understand in this article.

The Rate of Inflation and Other Stories

The value of radiative forcing (however it is derived) has the same usefulness as the rate of inflation, or the exchange rate as measured by a basket of currencies (with relevant apologies to all economists reading this article).

The rate of inflation tells you something about how prices are increasing but in the end it is a complex set of relationships reduced to a single KPI.

It's quite possible for the rate of inflation to be the same value in two different years, and yet for one important group in the country in question to see no increase in their costs in the first year but a significant increase in the second. That's the problem with reducing a complex system to one number.

However, the rate of inflation apparently has some value despite being a single KPI. And so it is with radiative forcing.

The good news is, when we get the results from a GCM, we can be sure the value of radiative forcing wasn’t actually used. Radiative forcing is more to inform the public and penniless climate scientists who don’t have access to a GCM.

Wonderland, the Simple Climate Model

The more precision you put into a GCM, the slower it runs. So comparing hundreds of different cases can be impossible. Such is the dilemma of a climate scientist with access to a supercomputer running a GCM but a long queue of funded but finger-tapping climate scientists behind him or her.

Wonderland is a compromise model and is described in Wonderland Climate Model by Hansen et al (1997). This model includes some basic geography that is similar to the earth as we know it. It is used to provide insight into radiative forcing basics.

The authors explain:

A climate model provides a tool which allows us to think about, analyze, and experiment with a facsimile of the climate system in ways which we could not or would not want to experiment with the real world. As such, climate modeling is complementary to basic theory, laboratory experiments and global observations.

Each of these tools has severe limitations, but together, especially in iterative combinations they allow our understanding to advance. Climate models, even though very imperfect, are capable of containing much of the complexity of the real world and the fundamental principles from which that complexity arises.

Thus models can help structure the discussions and define needed observations, experiments and theoretical work. For this purpose it is desirable that the stable of modeling tools include global climate models which are fast enough to allow the user to play games, to make mistakes and rerun the experiments, to run experiments covering hundreds or thousands of simulated years, and to make the many model runs needed to explore results over the full range of key parameters. Thus there is great incentive for development of a highly efficient global climate model, i.e., a model which numerically solves the fundamental equations for atmospheric structure and motion.

Here is Wonderland, from a geographical point of view:

From Hansen et al (1997)

Figure 1

Wonderland is then used in Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997). The authors say:

We examine the sensitivity of a climate model to a wide range of radiative forcings, including change of solar irradiance, atmospheric CO2, O3, CFCs, clouds, aerosols, surface albedo, and “ghost” forcing introduced at arbitrary heights, latitudes, longitudes, season, and times of day.

We show that, in general, the climate response, specifically the global mean temperature change, is sensitive to the altitude, latitude, and nature of the forcing; that is, the response to a given forcing can vary by 50% or more depending on the characteristics of the forcing other than its magnitude measured in watts per square meter.

In other words, radiative forcing has its limitations.

Definition of Radiative Forcing

The authors explain a few different approaches to the definition of radiative forcing. If we can understand the difference between these definitions we will have a much clearer view of atmospheric physics. From here, the quotes and figures will be from Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997) unless otherwise stated.

Readers who have seen the IPCC 2001 (TAR) definition of radiative forcing may understand the intent behind this 1997 paper. Up until that time different researchers used inconsistent definitions.

The authors say:

The simplest useful definition of radiative forcing is the instantaneous flux change at the tropopause. This is easy to compute because it does not require iterations. This forcing is called “mode A” by WMO [1992]. We refer to this forcing as the “instantaneous forcing”, Fi, using the nomenclature of Hansen et al [1993c]. In a less meaningful alternative, Fi is computed at the top of the atmosphere; we include calculations of this alternative for 2xCO2 and +2% S0 for the sake of comparison.

An improved measure of radiative forcing is obtained by allowing the stratospheric temperature to adjust to the presence of the perturber, to a radiative equilibrium profile, with the tropospheric temperature held fixed. This forcing is called “mode B” by WMO [1992]; we refer to it here as the “adjusted forcing”, Fa [Hansen et al 1993c].

The rationale for using the adjusted forcing is that the relaxation time of the stratosphere is only several months [Manabe & Strickler, 1964], compared to several decades for the troposphere [Hansen et al 1985], and thus the adjusted forcing should be a better measure of the expected climate response for forcings which are present at least several months..The adjusted forcing can be calculated at the top of the atmosphere because the net radiative flux is constant throughout the stratosphere in radiative equilibrium. The calculated Fa depends on where the tropopause level is specified. We specify this level as 100 mbar from the equator to 40° latitude, changing to 189 mbar there, and then increasing linearly to 300 mbar at the poles.

[Emphasis added].

This explanation might seem confusing or abstract so I will try and explain.

Let's say we have a sudden increase in a particular GHG (see note 1). Given a temperature profile and a concentration profile of absorbers, we can calculate the change in radiative transfer through the atmosphere with little uncertainty. This means we can see immediately the reduction in outgoing longwave radiation (OLR). And the change in absorption of solar radiation.

Now the question becomes – what happens in the next 1 day, 1 month, 1 year, 10 years, 100 years?

Small changes in net radiation (solar absorbed – OLR) take many decades to have their full effect at the surface because of the thermal inertia of the oceans (their heat capacity is very high).

The issue that everyone found when they reviewed this problem was that the radiative forcing on day 1 was different from the radiative forcing on day 90.

Why?

Because the changes in net absorption above the tropopause (the place where convection stops – we'll review that definition a little later) affect the temperature of the stratosphere very quickly. So the stratosphere quickly adjusts to the new world order, and of course this changes the radiative forcing. It's like (in non-technical terms) the stratosphere responded very quickly and "bounced out" some of the radiative forcing in the first month or two.

So the stratosphere, with little heat capacity, quickly adapts to the radiative changes and moves back into radiative equilibrium. This changes the "radiative forcing". If we want to work out the changes over the next 10–100 years there is little point in considering the radiative forcing on day 1 – instead, if the quick responders sort themselves out within 60 days or so, we can wait for them to settle down and pick the radiative forcing number after 90–120 days.

This is the idea behind the definition.

Let's look at this in pictures. In the figure below the top row is for doubling CO2 (the row below is for increasing solar by 2%), and the top left graph is the flux change through the atmosphere for the instantaneous and adjusted cases. The red line is the "adjusted" value:

From Radiative Forcing & Climate Response, Hansen et al (1997)

Figure 2

This red line is the value of flux change after the stratosphere has adjusted to the radiative forcing. Why is the red line vertical?

The reason is simple.

The stratosphere is now in temperature equilibrium because energy in = energy out at all heights. With no convection in the stratosphere this is the same as radiation absorbed = radiation emitted at all heights. Therefore, the net flux change with height must be zero.

If we plotted separately the up and down flux we would find that they have a slope, but the slope of the up and down would be the same. Net absorption of radiation going up balances net emission of radiation going down – more on this in Visualizing Atmospheric Radiation – Part Eleven – Stratospheric Cooling.

Another important point: we can see in the top left graph that the instantaneous net flux at the tropopause (i.e., the net flux on day one) is different from the net flux at the tropopause after adjustment (i.e., after the stratosphere has come into radiative balance).

But once the stratosphere has come into balance we could use the TOA net flux, or the tropopause net flux – it would not matter because both are the same.
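We can check this equality with a toy calculation – a minimal Python sketch (my construction, not from the paper) treating the stratosphere as a single gray layer in radiative equilibrium above a troposphere held fixed:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W/m²/K⁴)

def adjusted_fluxes(f_up_trop, absorptivity):
    """Single gray stratospheric layer above a fixed troposphere.

    The layer absorbs a fraction `a` of the upwelling flux and, once in
    radiative equilibrium, emits a*sigma*T^4 both up and down:
        a * F = 2 * a * sigma * T^4   =>   sigma * T^4 = F / 2
    """
    a = absorptivity
    sigma_t4 = f_up_trop / 2.0                       # layer in equilibrium
    net_toa = (1 - a) * f_up_trop + a * sigma_t4     # transmitted + emitted up
    net_tropopause = f_up_trop - a * sigma_t4        # up minus downward emission
    return net_toa, net_tropopause

print(adjusted_fluxes(f_up_trop=240.0, absorptivity=0.3))   # (204.0, 204.0)
```

Whatever absorptivity we pick, the net flux above the layer equals the net flux below it once the layer has adjusted – which is why the adjusted forcing can be read off at either level.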

Result of Radiative Forcing

Now let’s look at 4 different ways to think about radiative forcing, using the temperature profile as our guide to what is happening:

From Radiative Forcing & Climate Response, Hansen et al (1997)

Figure 3

On the left, case a, instantaneous forcing. This is the result of the change in net radiation absorbed vs height on day one. Temperature doesn’t change instantaneously so it’s nice and simple.

On the next graph, case b, adjusted forcing. This is the temperature change resulting from net radiation absorbed after the stratosphere has come into equilibrium with the new world order, but the troposphere is held fixed. So by definition the tropospheric temperature is identical in case b to case a.

On the next graph, case c, no feedback response of temperature. Now we allow the tropospheric temperature to change until such time as the net flux at the tropopause has gone back to zero. But during this adjustment we have held water vapor, clouds and the lapse rate in the troposphere at the same values as before the radiative forcing.

On the last graph, case d, all feedback response of temperature. Now we let the GCM take over and calculate how water vapor, clouds and the lapse rate respond. And as with case c, we wait until the temperature has increased sufficiently that net tropopause flux has gone back to zero.

What Definition for the Tropopause and Why does it Matter?

We've seen that if we use adjusted forcing, the radiative forcing is the same at TOA and at the tropopause. And the adjusted forcing is the IPCC 2001 definition. So why use the forcing at the tropopause? And why does the definition of the tropopause matter?

The first question is easy. We could use the forcing at TOA – it wouldn't matter, so long as we have allowed the stratosphere to come into radiative equilibrium (which takes a few months). As far as I can tell – my opinion – it's more about the history of how we arrived at this point. If you want to run a climate model to calculate the radiative forcing without stratospheric equilibrium then, on day one, the radiative forcing at the tropopause is usually pretty close to the value calculated after stratospheric equilibrium is reached.

So:

  1. Calculate the instantaneous forcing at the tropopause and get a value close to the authoritative “radiative forcing” – with the benefit of minimal calculation resources
  2. Calculate the adjusted forcing at the tropopause or TOA to get the authoritative “radiative forcing”

And lastly, why then does the definition of the tropopause matter?

The reason is simple, but not obvious. We are holding the tropospheric temperature constant, and letting the stratospheric temperature vary. The tropopause is the dividing line. So if we move the dividing line up or down we change the point where the temperatures adjust and so, of course, this affects the “adjusted forcing”. This is explained in some detail in Forster et al (1997) in section 4, p.556 (see reference below).

For reference, three definitions of the tropopause are found in Freckleton et al (1998):

  • the level at which the lapse rate falls below 2 K/km
  • the point at which the lapse rate changes sign, i.e., the temperature minimum
  • the top of convection

Conclusion

Understanding what radiative forcing means requires understanding a few basics.

The value of radiative forcing depends upon the somewhat arbitrary definition of the location of the tropopause. Some papers like Freckleton et al (1998) have dived into this subject, to show the dependence of the radiative forcing for doubling CO2 on this definition.

We haven’t covered it in this article, but the Hansen et al (1997) paper showed that radiative forcing is not a perfect guide to how climate responds (even in the idealized world of GCMs). That is, the same radiative forcing applied via different mechanisms can lead to different temperature responses.

Is it a useful parameter? Is the rate of inflation a useful parameter in economics? Usefulness is more a matter of opinion. What is more important at the start is to understand how the parameter is calculated and what it can tell us.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Wonderland Climate Model, Hansen, Ruedy, Lacis, Russell, Sato, Lerner, Rind & Stone, Journal of Geophysical Research, (1997) – paywall paper

Greenhouse gas radiative forcing: Effect of averaging and inhomogeneities in trace gas distribution, Freckleton et al, QJR Meteorological Society (1998) – paywall paper

On aspects of the concept of radiative forcing, Forster, Freckleton & Shine, Climate Dynamics (1997) – free paper

Notes

Note 1: The idea of an instantaneous increase in a GHG is a thought experiment to make it easier to understand the change in atmospheric radiation. If instead we consider the idea of a 1% change per year, then we have a more difficult problem. (Of course, GCMs can quite happily work with a real-world slow change in GHGs. And they can quite happily work with a sudden change).


The earth's surface is not a blackbody. A blackbody has an emissivity and absorptivity of 1.0, which means that it absorbs all incident radiation and emits according to the Planck law.

The oceans, covering over 70% of the earth’s surface, have an emissivity of about 0.96. Other areas have varying emissivity, going down to about 0.7 for deserts. (See note 1).

A lot of climate analyses assume the surface has an emissivity of 1.0.

Let's try and quantify the effect of this assumption.

The most important point to understand is that if the emissivity of the surface, ε, is less than 1.0 it means that the surface also reflects some atmospheric radiation.

Let’s first do a simple calculation with nice round numbers.

Say the surface is at a temperature Ts = 289.8 K, and the atmosphere emits a downward flux of 350 W/m².

  • If ε = 1.0 the surface emits 400. And it reflects 0. So a total upward radiation of 400.
  • If ε = 0.8 the surface emits 320. And it reflects 70 (350 x 0.2). So a total upward radiation of 390.

So even though we are comparing a case where the surface reduces its emission by 20%, the upward radiation from the surface is only reduced by 2.5%.
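Here is that arithmetic as a small Python sketch – just the calculation above wrapped in a function:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W/m²/K⁴)

def surface_upward_flux(ts, emissivity, dlr):
    """Total upward longwave flux = emission + reflected atmospheric flux."""
    emitted = emissivity * SIGMA * ts**4
    reflected = (1 - emissivity) * dlr    # reflectivity = 1 - emissivity
    return emitted + reflected

for eps in (1.0, 0.8):
    print(eps, round(surface_upward_flux(ts=289.8, emissivity=eps, dlr=350.0), 1))
# 1.0 -> ~400 W/m²; 0.8 -> ~390 W/m² - a drop of only ~2.5%
```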

Now the world of atmospheric radiation is very non-linear as we have seen in previous articles in this series. The atmosphere absorbs very strongly in some wavelength regions and is almost transparent in other regions. So I was intrigued to find out what the real change would be for different atmospheres as surface emissivity is changed.

To do this I used the MATLAB model already created and explained – in brief in Part Two and with the code in Part Five – The Code (note 2). The change in surface emissivity is assumed to be wavelength independent (so if ε = 0.8, it is 0.8 across all wavelengths).

I used some standard AFGL (air force geophysics lab) atmospheres. A description of some of them can be seen in Part Twelve – Heating Rates (note 3).

For the tropical atmosphere:

  • ε = 1.0, TOA OLR = 280.9   (top of atmosphere outgoing longwave radiation)
  • ε = 0.8, TOA OLR = 278.6
  • Difference = 0.8%

Here is the tropical atmosphere spectrum:

[Figure: tropical atmosphere, TOA spectrum, emissivity 0.8 vs 1.0]

Figure 1

We can see that the difference occurs in the 800-1200 cm-1 region (8-12 μm), the so-called “atmospheric window” – see Kiehl & Trenberth and the Atmospheric Window. We will come back to the reasons why in a moment.

For reference, an expanded view of the area with the difference:

[Figure: tropical atmosphere, TOA spectrum, emissivity 0.8 vs 1.0 – expanded view]

Figure 2

Now the mid-latitude summer atmosphere:

  • ε = 1.0, TOA OLR = 276.9
  • ε = 0.8, TOA OLR = 272.4
  • Difference = 1.6%

And the mid-latitude winter atmosphere:

  • ε = 1.0, TOA OLR = 227.9
  • ε = 0.8, TOA OLR = 217.4
  • Difference = 4.6%

Here is the spectrum:

[Figure: midlatitude winter atmosphere, TOA spectrum, emissivity 0.8 vs 1.0]

Figure 3

We can see that the same region is responsible and the difference is much greater.

The sub-arctic summer:

  • ε = 1.0, TOA OLR = 259.8
  • ε = 0.8, TOA OLR = 252.7
  • Difference = 2.7%

The sub-arctic winter:

  • ε = 1.0, TOA OLR = 196.8
  • ε = 0.8, TOA OLR = 186.9
  • Difference = 5.0%

[Figure: subarctic winter atmosphere, TOA spectrum, emissivity 0.8 vs 1.0]

Figure 4

We can see that the surface emissivity of the tropics makes a negligible difference to OLR. The higher latitude winters show a 5% change for the same surface emissivity change, and the higher latitude summers around 2-3%.

The reasoning is simple.

For the tropics, the hot humid atmosphere radiates quite close to a blackbody, even in the “window region” due to the water vapor continuum. We can see this explained in detail in Part Ten – “Back Radiation”.

So any “missing” radiation from a non-blackbody surface is made up by reflection of atmospheric radiation (where the radiating atmosphere is almost at the same temperature as the surface).

When we move to higher latitudes the “window region” becomes more transparent, and so the “missing” radiation cannot be made up by reflection of atmospheric radiation in this wavelength region. This is because the atmosphere is not emitting in this “window” region.

And the effect is more pronounced in the winters in high latitudes because the atmosphere is colder and so there is even less water vapor.

Now let's see what happens when we do a "radiative forcing" calculation – we will compare TOA OLR at 360 ppm CO2 and at 720 ppm, at two different emissivities for the tropical atmosphere. That is, we will calculate 4 cases:

  • 360 ppm at ε=1.0
  • 720 ppm at ε=1.0
  • 360 ppm at ε=0.8
  • 720 ppm at ε=0.8

And, at both ε=1.0 & ε=0.8, we subtract the OLR at 720 ppm from the OLR at 360 ppm and plot both differenced emissivity results on the same graph:

[Figure: tropical atmosphere, TOA OLR difference for 2xCO2, emissivity 0.8 vs 1.0]

Figure 5

We see that both comparisons look almost identical – we can't distinguish between them on this graph. So let's subtract one from the other. That is, we plot (360ppm – 720ppm)@ε=1.0 – (360ppm – 720ppm)@ε=0.8:

[Figure: tropical atmosphere, (1xCO2 – 2xCO2) OLR difference, ε=1.0 minus ε=0.8]

Figure 6 – same units as figure 5

So it’s clear that in this specific case of calculating the difference in CO2 from 360ppm to 720ppm it doesn’t matter whether we use surface emissivity = 1.0 or 0.8.
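For readers who want the logic of that comparison spelled out, here is a Python sketch. The function `toa_olr` is a hypothetical stand-in for the MATLAB line-by-line model – not a real API:

```python
def emissivity_sensitivity(toa_olr, atmosphere):
    """Difference of the (1xCO2 - 2xCO2) OLR change at two emissivities.

    `toa_olr(atmosphere, co2_ppm, emissivity)` is a hypothetical stand-in
    for the line-by-line model described in the text.
    """
    change = {}
    for eps in (1.0, 0.8):
        change[eps] = (toa_olr(atmosphere, co2_ppm=360, emissivity=eps)
                       - toa_olr(atmosphere, co2_ppm=720, emissivity=eps))
    # Figure 6 plots this difference: if it is ~0, surface emissivity
    # barely matters for the OLR *change* caused by doubling CO2.
    return change[1.0] - change[0.8]
```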

Conclusion

The earth’s surface is not a blackbody. No one in climate science thinks it is. But for a lot of basic calculations assuming it is a blackbody doesn’t have a big impact on the TOA radiation – for the reasons outlined above. And it has even less impact on the calculations of changes in CO2.

The tropics, from 30°S to 30°N, are about half the surface area of the earth. And with a typical tropical atmosphere, a drop in surface emissivity from 1.0 to 0.8 causes a TOA OLR change of less than 1%.

Of course, it could get more complicated than the calculations we have seen in this article. Over deserts in the tropics, where the surface emissivity actually gets below 0.8, water vapor is also low and therefore the resulting TOA flux change will be higher (as a result of using actual surface emissivity vs black body emissivity).

I haven’t delved into the minutiae of GCMs to find out what they assume about surface emissivity and, if they do use 1.0, what calculations have been done to quantify the impact.

The average surface emissivity of the earth is much higher than 0.8. I just picked that value as a reference.

The results shown in this article should help to clarify that the effect of surface emissivity less than 1.0 is not as large as might be expected.

Notes

Note 1: Emissivity and absorptivity are wavelength dependent phenomena. So these values are relevant for the terrestrial wavelengths of 4-50μm.

Note 2: There was a minor change to the published code to allow for atmospheric radiation being reflected by the non-black surface. This hasn’t been updated to the relevant article because it’s quite minor. Anyone interested in the details, just ask.

In this model, the top of atmosphere is at 10 hPa.

Some outstanding issues remain in my version of the model – whether the diffusivity treatment is correct or needs improvement, and the fact that the Voigt profile (important in the mid-upper stratosphere) is still not used. These issues will have little or no effect on the question addressed in this article.

Note 3: For speed, I only considered water vapor and CO2 as “greenhouse” gases. No ozone was used. To check, I reran the tropical atmosphere with ozone at the values prescribed in that AFGL atmosphere. The difference between ε = 1.0 and ε = 0.8 was 0.7% – less than with no ozone (0.8%). This is because ozone reduces the transparency of the “atmospheric window” region.


In an earlier article on water vapor we saw that changing water vapor in the upper troposphere has a disproportionate effect on outgoing longwave radiation (OLR). Here is one example from Spencer & Braswell 1997:

From Spencer & Braswell (1997)

Figure 1

The upper troposphere is very dry, and so the mass of water vapor we need to change OLR by a given W/m² is small by comparison with the mass of water vapor we need to effect the same change in or near the boundary layer (i.e., near to the earth’s surface). See also Visualizing Atmospheric Radiation – Part Four – Water Vapor.

This means that when we are interested in climate feedback and how water vapor concentration changes with surface temperature changes, we are primarily interested in the changes in upper tropospheric water vapor (UTWV).

Upper Tropospheric Water Vapor

A major problem with analyzing UTWV is that most historic measurements are poor for this region. The upper troposphere is very cold and very dry – two issues that cause significant problems for radiosondes.

The atmospheric infrared sounder (AIRS) was launched in 2002 on the Aqua satellite and this instrument is able to measure temperature and water vapor with vertical resolution similar to that obtained from radiosondes. At the same time, because it is on a satellite we get the global coverage that is not available with radiosondes and the ability to measure the very cold, very dry upper tropospheric atmosphere.

Gettelman & Fu (2008) focused on the tropics and analysed the relationship (covariance) between surface temperature and UTWV from AIRS over 2002-2007, and then compared this with the results of the CAM climate model using prescribed (actual) surface temperature from 2001-2004 (note 1):

This study will build upon previous estimates of the water vapor feedback, by focusing on the observed response of upper-tropospheric temperature and humidity (specific and relative humidity) to changes in surface temperatures, particularly ocean temperatures. Similar efforts have been performed before (see below), but this study will use new high vertical resolution satellite measurements and compare them to an atmospheric general circulation model (GCM) at similar resolution.

The water vapor feedback arises largely from the tropics where there is a nearly moist adiabatic profile. If the profile stays moist adiabatic in response to surface temperature changes, and if the relative humidity (RH) is unchanged because of the supply of moisture from the oceans and deep convection to the upper troposphere, then the upper-tropospheric specific humidity will increase.

[Emphasis added]

They describe the objective:

The goal of this work is a better understanding of specific feedback processes using better statistics and vertical resolution than has been possible before. We will compare satellite data over a short (4.5 yr) time record to a climate model at similar space and time resolution and examine the robustness of results with several model simulations. The hypothesis we seek to test is whether water vapor in the model responds to changes in surface temperatures in a manner similar to the observations. This can be viewed as a necessary but not sufficient condition for the model to reproduce the upper-tropospheric water vapor feedback caused by external forcings such as anthropogenic greenhouse gas emissions.

[Emphasis added].

The results are for relative humidity (RH) on the left and specific humidity on the right:

From Gettelman & Fu (2008)

Figure 2

The graphs show that the change in 250 mbar RH with temperature is statistically indistinguishable from zero. For those not familiar with the basics, if RH stays constant with rising temperature it is the same as increasing "specific humidity" – which means an increased mixing ratio of water vapor in the atmosphere. And we see this in the right hand graph.
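A small Python sketch shows why constant RH means more water vapor as temperature rises, using the Bolton (1980) approximation for saturation vapor pressure (over water rather than ice); the temperatures, pressure and RH are round numbers for illustration only:

```python
import numpy as np

def saturation_vapor_pressure_hpa(t_c):
    """Saturation vapor pressure over water (hPa), Bolton (1980); t_c in Celsius."""
    return 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))

def vapor_mixing_ratio_ppmv(t_c, rh, p_hpa):
    """Volume mixing ratio of water vapor (ppmv) at relative humidity rh."""
    e = rh * saturation_vapor_pressure_hpa(t_c)
    return 1e6 * e / (p_hpa - e)

q_cold = vapor_mixing_ratio_ppmv(-45.0, rh=0.3, p_hpa=250.0)
q_warm = vapor_mixing_ratio_ppmv(-44.0, rh=0.3, p_hpa=250.0)
print(q_cold, q_warm, 100 * (q_warm / q_cold - 1))
# ~134 and ~149 ppmv: one degree of warming at constant RH is a ~11% increase
```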

Figure 1a has considerable scatter, but in general, there is little significant change of 250-hPa relative humidity anomalies with anomalies in the previous month’s surface temperature. The slope is not significantly different than zero in either AIRS observations (1.9 ± 1.9% RH/°C) or CAM (1.4 ± 2.8% RH/°C).

The situation for specific humidity in Fig. 1b indicates less scatter, and is a more fundamental measurement from AIRS (which retrieves specific humidity and temperature separately). In Fig. 1b, it is clear that 250- hPa specific humidity increases with increasing averaged surface temperature in both AIRS observations and CAM simulations. At 250 hPa this slope is 20 ± 8 ppmv/°C for AIRS and 26 ± 11 ppmv/°C for CAM. This is nearly 20% of background specific humidity per degree Celsius at 250 hPa.

The observations and simulations indicate that specific humidity increases with surface temperatures (Fig. 1b). The increase is nearly identical to that required to maintain constant relative humidity (the sloping dashed line in Fig. 1b) for changes in upper-tropospheric temperature. There is some uncertainty in this constant RH line, since it depends on calculations of saturation vapor mixing ratio that are nonlinear, and the temperature used is a layer (200–250 hPa) average.

The graphs below show the change in each variable with surface temperature, as a function of pressure (height). The black line is the measurement (AIRS).

So the right side graph shows that, from AIRS data of 4 years, specific humidity increases with surface temperature in the upper troposphere:

From Gettelman & Fu (2008)

Figure 3

There are a number of model runs using CAM with different constraints. This is a common theme in climate science – researchers attempting to find out what part of the physics (at least as far as the climate model can reproduce it) contributes the most or least to a given effect. The paper has no paywall, so readers are recommended to review the whole paper.

Conclusion

The question of how water vapor responds to increasing surface temperature is a critical one in climate research. The fundamentals are discussed in earlier articles, especially Clouds and Water Vapor – Part Two – and much better explained in the freely available paper Water Vapor Feedback and Global Warming, Held and Soden (2000).

One of the key points is that the response of water vapor in the planetary boundary layer (the bottom layer of the atmosphere) is a lot easier to understand than the response in the “free troposphere”. But how water vapor changes in the free troposphere is the important question. And the water vapor concentration in the free troposphere is dependent on the global circulation, making it dependent on the massive complexity of atmospheric dynamics.

Gettelman and Fu attempt to answer this question for the first half-decade's worth of quality satellite observations, and they find a result that is similar to that produced by GCMs.

Many people outside of climate science believe that GCMs have “positive feedback” or “constant relative humidity” programmed in. Delving into a climate model is a technical task, but the details are freely available – e.g., Description of the NCAR Community Atmosphere Model (CAM 3.0), W.D. Collins (2004). It’s clear to me that relative humidity is not prescribed in climate models – both from the equations used and from the results that are produced in many papers. And people like the great Isaac Held, a veteran of climate modeling and atmospheric dynamics, also state the same. So, readers who believe otherwise – come forward with evidence.

Still, that's a different position from accepting that climate models attempt to calculate humidity from some kind of physics while believing that they get it wrong. That is of course very possible.

At least from this paper we can see that over this short time period, not subject to strong ENSO fluctuations or significant climate change, the satellite data shows upper tropospheric humidity increasing with surface temperature. And the CAM model produces similar results.

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses - answering some questions about Part One

Part Two - some introductory ideas about water vapor including measurements

Part Three - effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four - discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert - focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres - demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

References

Observed and Simulated Upper-Tropospheric Water Vapor Feedback, Gettelman & Fu, Journal of Climate (2008) – free paper

How Dry is the Tropical Free Troposphere? Implications for Global Warming Theory, Spencer & Braswell, Bulletin of the American Meteorological Society (1997) – free paper

Notes

Note 1 - The authors note: “..Model SSTs may be slightly different from the data, but represent a partially overlapping period..”

I asked Andrew Gettelman why the model was run for a different time period than the observations and he said that the data (in the form needed for running CAM) was not available at that time.


Many curiosity values in atmospheric physics take on new life in the blogosphere. One of them is the value in Kiehl & Trenberth 1997 for the “atmospheric window” flux:

From Kiehl & Trenberth (1997)

Figure 1

Here is the update in 2009 by Trenberth, Fasullo & Kiehl:

From Trenberth, Fasullo & Kiehl (2009)

Figure 2

The “atmospheric window” value is probably the value in KT97 which has the least attention paid to it in the paper, and the least by climate science. That’s because it isn’t actually used in any calculations of note.

What is the Atmospheric Window?

The “atmospheric window” itself is a term in common use in climate science. The atmosphere is quite opaque to longwave radiation (terrestrial radiation) but the region from 8-12 μm has relatively few absorption lines by “greenhouse” gases. This means that much of the surface radiation emitted in this wavelength region makes it to the top of atmosphere (TOA).

The story is a little more complex for two reasons:

  1. The 8-12μm region has significant absorption by water vapor due to the water vapor continuum. See Visualizing Atmospheric Radiation – Part Ten – “Back Radiation” for more on both the window and the continuum
  2. Outside of the 8-12 region there is some transparency in the atmosphere at particular wavelengths

The term in KT97 was not clearly defined, but what we are really interested in is how much surface emitted radiation is transmitted through to TOA – at any wavelength, regardless of whether it happens to be in the 8-12 μm region.

Calculating the Value

One blog that I visited recently had many commenters whose expectation was that upward emitted radiation by the surface would be exactly equal to the downward emitted radiation by the atmosphere + the “atmospheric window” value.

To illustrate this expectation let’s use the values from figure 2 (the 2009 paper) – note that all of these figures are globally annually averaged:

  • Upward radiation from the surface = 396 W/m²
  • Downward radiation from the atmosphere (DLR or “back radiation”) = 333 W/m²
  • These commenters appear to think the atmospheric window value is probably really 63 W/m² – and thus the surface and lower atmosphere are in a “radiative balance”

This can’t be the case for fairly elementary reasons – but let’s look at that later.

In Visualizing Atmospheric Radiation – Part Two I describe the basics of a MATLAB line by line calculation of radiative transfer in the atmosphere. And Part Five – The Code gives the specifics, including the code.

Running v0.10.4 I used some "standard atmospheres" (examples in Part Twelve – Heating Rates) and calculated the surface-emitted flux transmitted through to TOA:

  • Tropical – 28 W/m² (52 W/m²)
  • Midlatitude summer – 40 W/m² (58 W/m²)
  • Midlatitude winter – 59 W/m² (62 W/m²)
  • Subarctic summer – 50 W/m² (61 W/m²)
  • Subarctic winter – 55 W/m² (56 W/m²)
  • US Standard 1976 – 65 W/m² (72 W/m²)

These are all clear sky values, and the values in brackets are the values calculated without the continuum absorption to show its effect. Clear skies are, globally annually averaged, about 38% of the sky.

These values are quite a bit lower than the values found in the new paper we discuss in this article, and at this stage I’m not sure why.

This paper is: Outgoing Longwave Radiation due to Directly Transmitted Surface Emission, Costa & Shine (2012):

This short article is intended to be a pedagogical discussion of one component of the KT97 figure [which was not updated in Trenberth et al. (2009)], which is the amount of longwave radiation labeled ‘‘atmospheric window.’’ KT97 estimate this component to be 40 W/m² compared to the total outgoing longwave radiation (OLR) of 235 W/m²; however, KT97 make clear that their estimate is ‘‘somewhat ad hoc’’ rather than the product of detailed calculations. The estimate was based on their calculation of the clear-sky OLR in the 8–12 μm wavelength region of 99 W/m² and an assumption that no such radiation can directly exit the atmosphere from the surface when clouds are present. Taking the observed global-mean cloudiness to be 62%, their value of 40 W/m² follows from rounding 99 x (1 – 0.62).
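That arithmetic is easy to check:

```python
clear_sky_window = 99.0   # W/m², KT97 clear-sky OLR in the 8-12 µm region
cloud_fraction = 0.62     # KT97 global-mean cloudiness
print(clear_sky_window * (1 - cloud_fraction))   # 37.62, rounded to 40 in KT97
```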

They comment:

Presumably the reason why KT97, and others, have not explicitly calculated this term is that the methods of vertical integration of the radiative transfer equation in most radiation codes compute the net effect of surface emission and absorption and emission by the atmosphere, rather than each component separately. In the calculations presented here we explicitly calculate the upward irradiance at the top of the atmosphere due to surface emission: we will call this the surface transmitted irradiance (STI).

In other words, the value in the KT97 paper is not needed for any radiative transfer calculations, but let’s try and work out a more accurate value anyway.

First, how the clear sky values vary with latitude:

[Figure: from Costa & Shine (2012), figure 3]

Figure 3 – Clear sky values

Note that the dashed line is "imaginary physics". The water vapor continuum exists, but it is very interesting to see what effect it contributes – which is seen by calculating the result as if it didn't exist.

We see that in the tropics STI is very low. This is because the effect of the continuum is dependent on the square of the water vapor concentration, which itself is strongly dependent on the temperature of the atmosphere.

The continuum absorption is so strong in the tropics that STIclr in polar regions (which is only modestly influenced by the continuum) is typically 40% higher than the tropical values. Figure 3 shows the zonal and annual mean of the STIclr to emphasize the role of the continuum. The STIclr neglecting the continuum (dash-dotted line) is generally more than 80 W/m² at all latitudes, with maxima in the northern subtropics (mostly associated with the Sahara desert), but with little latitudinal gradient throughout the tropics and subtropics; the tropical values are reduced by more than 50% when the continuum is included (dashed lines). The effect of the continuum clearly diminishes outside of the tropics and is responsible for only around a 10% reduction in STIclr at high latitudes.

Interestingly, these more detailed calculations yield global-mean values of STIclr of 66 and 100 W/m², with and without the continuum, very close to the values (65 and 99 W/m²) computed using the single global-mean profile, in spite of the potential nonlinearities due to the vapor pressure–squared dependence of the self-continuum.

For people unfamiliar with the issue of non-linearity – if we take an “average climate” and do some calculations on it, the result will usually be different from taking lots of location data, doing the calculations on each, and averaging the results of the calculations. Climate is non-linear. However, in this case, the calculated value of STIclr on an “average climate” does turn out to be similar to the average of STIclr when calculated from climate values in each part of the globe.
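The point is easy to demonstrate with made-up numbers – because the continuum goes as the square of the vapor pressure, the mean of the squares is not the square of the mean:

```python
import numpy as np

e = np.array([5.0, 15.0, 25.0])   # made-up vapor pressures (hPa) at 3 locations
print(np.mean(e**2))              # 291.7 - average the squares (location by location)
print(np.mean(e)**2)              # 225.0 - square the average ("average climate")
```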

We can appreciate a little more about the impact of the continuum on this atmospheric window if we look at the details of the calculation vs wavelength:

From Costa & Shine (2012)

Figure 4 – Highlighted orange text added

Here is the regional breakdown:

From Costa & Shine (2012)

Figure 5 – Clear and All-sky values – Orange highlighted text added

Note that conventionally in climate science "clear sky" results are for the climate without clouds (i.e., a subset), whereas "cloudy sky" results include both clear and cloudy skies (i.e., all values).

The authors comment:

When including clouds, the STI is reduced further (Fig. 2c) because clouds absorb strongly throughout the infrared window. In regions of high cloud amount, such as the midlatitude storm tracks, the STI is reduced from a clear-sky value of 70 W/m² to less than 10 W/m². As expected, values are less affected in desert regions. The subtropics are now the main source of the global mean STI. The effect of clouds is to reduce the STI from its clear-sky value of 66 W/m² by two-thirds to a value of about 22 W/m²

Method

They state:

Clear-sky STI (STIclr) is calculated by using the line by line model Reference Forward Model (RFM) version 4.22 (Dudhia 1997) in the wavenumber domain 10–3000 cm-1 (wavelengths 3.33–1000 μm) at a spectral resolution of 0.005 cm-1. The version of RFM used here incorporates the Clough–Kneizys–Davies (CKD) water vapor continuum model (version 2.4); although this has been superseded by the MT-CKD model, the changes in the midinfrared window (see, e.g., Firsov and Chesnokova 2010) are rather small and unlikely to change our estimate by more than 1 W/m²..

..Irradiances are calculated at a spatial resolution of 10° latitude and longitude using a climatology of annual mean profiles of pressure, water vapor, temperature, and cloudiness described in Christidis et al. (1997). Although slightly dated, the global-mean column water amount is within about 1% of more recent climatologies.

Carbon dioxide, methane, and nitrous oxide are assumed to be well mixed with mixing ratios of 365, 1.72, and 0.312 ppmv, respectively. Other greenhouse gases are not considered since their radiative forcing is less than 0.4 W/m² (e.g., Solomon et al. 2007; Schmidt et al. 2010); we have performed an approximate estimate of the effect of 1 ppbv of chlorofluorocarbon 12 (CFC12) (to approximate the sum of all halocarbons in the atmosphere) on the STIclr and the effect is less than 1%.

Likewise, aerosols are not considered. It is the larger mineral dust particles that are more likely to have an impact in this spectral region; estimates of the impact of aerosol on the OLR are typically around 0.5 W/m² (e.g., Schmidt et al. 2010). The impact on the STI will depend on, for example, the height of aerosol layers and the aerosol radiative properties and is likely a larger effect than the CFCs if they are mostly at lower altitudes; this is discussed further in section 4. The surface is assumed to have an emittance of unity.

And later in assumptions:

Our assumption that the surface emits as a blackbody could also be examined, using emerging datasets on the spectral variation of the surface emittance (which can deviate significantly from unity and be as low as 0.75 in the 1000–1200 cm-1 spectral region, in desert regions; e.g., Zhou et al. 2011; Vogel et al. 2011). Some decision would need to be made, then, as to whether or not infrared radiation reflected by surfaces with emittances less than unity should be included in the STI term as this reflection partially compensates for the reduced emission. Although locally important, the effect of nonunity emittances on the global-mean STI is likely to be only a few percent.

The point here is: if we consider the places with emissivity less than 1.0, should we calculate the value of flux reaching TOA without absorption from both surface emission AND surface reflection? Or just surface emission? If we include the reflected atmospheric radiation then the result is not so different. This is something I might try to demonstrate in the Visualizing Atmospheric Radiation series.

As is standard in radiative transfer calculations, spherical geometry is taken into consideration via the diffusivity approximation, as outlined in this comment.

Why The Atmosphere and The Surface cannot be Exchanging Equal Amounts of Radiation

This is quite easy to understand. I’ll invent some numbers which are nice round numbers to make it easier.

Let's say the surface radiates 400 and has an emissivity of 1.0 (implying Ts = 289.8 K). The atmosphere has an overall transmissivity of 0.1 (10%). That means 360 is absorbed by the atmosphere and 40 is transmitted to TOA unimpeded. For the radiative balance required/desired by the commenters mentioned earlier, the atmosphere must be emitting 360.

Thus, under these fictional conditions, the surface is absorbing 360 from the atmosphere. The atmosphere is absorbing 360 from the surface. Some bloggers are happy.

Now, how does the atmosphere, with a transmissivity of 10%, emit 360? We need to know the atmosphere's emissivity. For an atmosphere – a gas – radiation must be transmitted, absorbed or reflected. Longwave radiation experiences almost no reflection from the atmosphere. So we end up with a nice simple formula:

Transmissivity, t = 100% – absorptivity

Absorptivity, a = 90%.

What is emissivity? It turns out – as explained in Planck, Stefan-Boltzmann, Kirchhoff and LTE – that emissivity = absorptivity (at the same wavelength).

Therefore, emissivity of the atmosphere, e = 90%.

So what temperature of the atmosphere, Ta, at an emissivity of 90%, will radiate 360? The answer is simple (from the Stefan–Boltzmann equation, E = eσTa⁴, where σ = 5.67×10⁻⁸):

Ta = 289.8 K

So, if the atmosphere is exactly the same temperature as the surface then they will exchange equal amounts of radiation. And if not, they won't. Now the atmosphere is not at one temperature, which makes it a bit harder to work out what the right temperature is. The full calculation comes from the radiative transfer equations, but the same conclusion is reached with lots of maths – unless the atmosphere is at the same temperature as the surface, they will not exchange equal amounts of radiation.
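Here is the same calculation as a minimal Python sketch – inverting the Stefan–Boltzmann equation for the temperature at which a 90%-emissivity atmosphere radiates 360:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W/m²/K⁴)

transmissivity = 0.1
emissivity = 1.0 - transmissivity    # absorptivity = emissivity (Kirchhoff)

# Invert E = e * sigma * T^4 for the temperature that emits 360 W/m²:
ta = (360.0 / (emissivity * SIGMA)) ** 0.25
print(round(ta, 1))   # 289.8 K - exactly the surface temperature
```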

Conclusion

The authors say:

This study presents what we believe to be the most detailed estimate of the surface contribution to the clear and cloudy-sky OLR. This contribution is called the surface transmitted irradiance (STI). The global- and annual-mean STI is found to be 22 W/m². The purpose of producing the value is mostly pedagogical and is stimulated by the value of 40 W/m² shown on the often-used summary figures produced by KT97 and Trenberth et al. (2009).

As a result of this changed value, the standard energy balance diagram shown in KT97 and TFK09 of course needs some adjustments.

References

Earth’s Annual Global Mean Energy Budget, Kiehl & Trenberth, Bulletin of the American Meteorological Society (1997) – free paper

Earth’s Global Energy Budget, Trenberth, Fasullo & Kiehl, Bulletin of the American Meteorological Society (2009) – free paper

Outgoing Longwave Radiation due to Directly Transmitted Surface Emission, Costa & Shine, Journal of the Atmospheric Sciences (2012) – paywall paper


In Part Two we covered quite a bit of ground. At the end we looked at the first calculation of heating rates. The values calculated were a little different in magnitude from results in a textbook, but the model was still in a rudimentary phase.

After numerous improvements – outlined in Part Five – The Code, I got around to adding some “standard atmospheres” so we can see some comparisons and at least see where this model departs from other more accurate models.

First, what are heating rates? Within the context of this model we are currently thinking about the longwave radiative heating rates, which really means this:

If the only part of climate physics that was actually working was “longwave radiation” (terrestrial radiation) then how fast would different parts of the atmosphere heat up or cool down?

As we will see this mechanism (terrestrial radiation) mostly results in a net cooling for each part of the atmosphere.

The atmosphere also absorbs solar radiation – not shown in these graphs – which acts in the opposite direction and provides a heating.

Lastly, the sun warms the surface and convection transfers heat much more efficiently from the surface to the lower atmosphere – and this makes up the balance.

So, with longwave heating (cooling) curves, we are considering one mechanism by which heat is transferred.
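For reference, a longwave heating rate is just the divergence of the net flux converted into a temperature tendency via the heat capacity of the air. A minimal Python sketch – my construction, with made-up flux values for illustration:

```python
import numpy as np

G, CP = 9.81, 1004.0   # gravity (m/s²), specific heat of air (J/kg/K)

def lw_heating_rate(p_hpa, net_up_flux):
    """Heating rate (K/day) from the divergence of net upward longwave flux.

    Using hydrostatic balance, -(1/(rho*cp)) dF/dz = (g/cp) dF/dp, so a net
    upward flux that grows with height (shrinks with pressure) means the
    layer is losing energy - radiative cooling.
    """
    df_dp = np.gradient(np.asarray(net_up_flux), np.asarray(p_hpa) * 100.0)
    return (G / CP) * df_dp * 86400.0   # K/s -> K/day

# Made-up net upward fluxes (W/m²) at three pressure levels (hPa)
print(lw_heating_rate([1000.0, 500.0, 200.0], [80.0, 120.0, 150.0]))
# all negative -> cooling, as in the figures below
```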

Second, what is “longwave radiation”? This is a conventional description of the radiation emitted by the climate system, specifically the fact that its wavelength is almost all above 4 μm. The other significant radiation component in the climate system is “shortwave radiation”, which by convention means radiation below 4 μm. See The Sun and Max Planck Agree – Part Two for more.
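A quick Planck-law calculation – a sketch, nothing to do with the model itself – shows why 4 μm is a sensible dividing line:

```python
import numpy as np

H, C, KB = 6.626e-34, 3.0e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck(lam, t):
    """Planck spectral radiance B(lambda, T) in SI units."""
    return 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * t))

lam = np.linspace(0.1e-6, 200e-6, 200_000)   # 0.1-200 µm, uniform grid
for t in (288.0, 5778.0):
    b = planck(lam, t)
    frac = b[lam > 4e-6].sum() / b.sum()     # uniform grid: sums stand in for integrals
    print(t, round(frac, 3))
# ~1.0 for a 288 K earth (longwave), ~0.01 for a 5778 K sun (shortwave)
```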

Third, what is a "standard atmosphere"? It's just a kind of average, useful for inter-comparisons, and for evaluation of various climate mechanisms around ideal cases. In this case, I used the AFGL (air force geophysics lab) models which are also used in LBLRTM (the line by line radiative transfer model).

Here is a graph for tropical conditions of heating rate vs height – and with a breakdown between the rates caused by water vapor, CO2 and O3:

[Figure: tropical heating rates vs height, breakdown by H2O, CO2 and O3]

Figure 1

Notice that the heating rate is mostly negative, so the atmosphere is cooling via radiation – which means that for this atmospheric profile, water vapor, CO2 and ozone together emit more terrestrial radiation than they absorb.

Here is a textbook comparison:

From Petty (2006)

Figure 2

And a set of graphs detailing the tropical condition for temperature, pressure, density and GHG concentrations:

[Figure: tropical profile – temperature, pressure, density and GHG concentrations]

Figure 3

Now some comparisons of the overall heating rates for 3 different profiles:

[Figure: heating rates for 3 standard atmospheres]

Figure 4

Here is a textbook comparison:

From Petty (2006)

Figure 5

So we can see that the MATLAB model created here from first principles and using the HITRAN database of absorption and emission lines is quite close to other calculated standards.

In fact, the differences are small except in the mid-stratosphere and we may find that this is due to slight differences in the model atmosphere used, or as a result of not using the Voigt profile (this is an important but technical area of atmospheric radiation – line shapes and how they change with pressure and temperature in the atmosphere – see for example Part Eight – CO2 Under Pressure).

Pekka Pirilä has been running this MATLAB model as well, has helped with numerous improvements and has just implemented the Voigt profile so we will shortly find out if the line shape is a contributor to any differences.

For reference, here are the profiles of the other two conditions shown in figure 4: Midlatitude summer & Subarctic summer:

Atmospheric-radiation-13h-Midlatitude-summer-profile-temperature-gases-density

Figure 6 – Click to enlarge

Atmospheric-radiation-13e-Subarctic-summer-profile-temperature-gases-density

Figure 7 – Click to enlarge

Related Articles

Part One - some background and basics

Part Two - some early results from a model with absorption and emission from basic physics and the HITRAN database

Part Three – Average Height of Emission - the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions

Part Four – Water Vapor - results of surface (downward) radiation and upward radiation at TOA as water vapor is changed

Part Five – The Code - code can be downloaded, includes some notes on each release

Part Six – Technical on Line Shapes - absorption lines get thinner as we move up through the atmosphere

Part Seven – CO2 increases - changes to TOA in flux and spectrum as CO2 concentration is increased

Part Eight – CO2 Under Pressure - how the line width reduces (as we go up through the atmosphere) and what impact that has on CO2 increases

Part Nine – Reaching Equilibrium - when we start from some arbitrary point, how the climate model brings us back to equilibrium (for that case), and how the energy moves through the system

Part Ten – “Back Radiation” - calculations and expectations for surface radiation as CO2 is increased

Part Eleven – Stratospheric Cooling - why the stratosphere is expected to cool as CO2 increases

Part Thirteen – Surface Emissivity - what happens when the earth’s surface is not a black body – useful to understand, seeing as it isn’t

References

AFGL atmospheric constituent profiles (0–120 km), by GP Anderson et al (1986)

A First Course in Atmospheric Radiation, Grant Petty, Sundog Publishing (2006)

The data used to create these graphs comes from the HITRAN database.

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al, Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

The HITRAN 2004 molecular spectroscopic database, by L.S. Rothman et al., Journal of Quantitative Spectroscopy & Radiative Transfer (2005)


Understanding atmospheric radiation is not so simple. But now that we have a line-by-line model of absorption and emission of radiation in the atmosphere, we can do some “experiments”. See Part Two and Part Five – The Code.

Many people think that models are some kind of sham and that climate scientists should be out there doing real experiments. Well, models aren’t a sham and climate scientists are out there doing lots of experiments. Various articles on Science of Doom have outlined some of the very detailed experiments that have been done by atmospheric physicists, aka climate scientists.

When you want to understand why some aspect of a climate mechanism works the way it does, or what happens if something changes then usually you have to resort to a mathematical model of that part of the climate.

You can’t suddenly increase the amount of a major GHG across the planet, or slow down the planetary rotation to ½ its normal speed. Well, not without a sizable investment, a health and safety risk, possible inconvenience to a lot of people and, at some stage, awkward government investigations.

You can’t stop the atmosphere emitting radiation or test a stratosphere that gets cooler with height. But you can attempt to model it.

Mathematical models all have their limitations. We have to understand what the model can tell us and what it can’t tell us. We have to understand what presuppositions are built into the model and what can change in real life that is not being modeled in the maths. It’s all about context.

(Well-designed) models are not correct and are not incorrect. They are informative if we understand their limitations and capabilities.

In contrast to mathematical models built around the physics of climate mechanisms, many people commenting in the blog world (or even writing blogs) have a vague mental model of how climate works. This of course is way way ahead of a climate model built on physics. It has the advantage of not being written down in equations so that no one can challenge it and seemingly plausible hand-waving argument 1 can be traded against hand-waving argument 2. Unfortunately, on this blog we don’t have the luxury of those resources and – where experiments are not available or not possible – we will have to evaluate the results of mathematical models built on physics and observations.

All the above is not an endorsement of what GCMs tell us. And not an indictment. Hopefully no one reading the above paragraphs came to either conclusion.

When I first built the line by line model it had more limitations than today. One early problem was the stratosphere. In real life the temperature of the stratosphere increases with height. In the model the temperature decreased with height.

This was expected. O2 and O3 absorb solar radiation (primarily ultraviolet) and warm mainly the middle layers of the stratosphere. But the model didn’t have this physics. The model, at this stage, primarily modeled the absorption and emission of terrestrial (aka ‘longwave’) radiation by the atmosphere.

So, after a few versions a very crude model of solar absorption was added. Unfortunately, this solar absorption model still did not create a stratospheric temperature that increased with height. This was quite disappointing.

Then commenter Uli pointed out that the model had too much stratospheric water vapor, so I added a new parameter which allowed stratospheric water vapor to be set differently from the free troposphere. (So far I’ve been using a realistic level of 6 ppmv).

The happy result was that the stratospheric temperature, left to its own (model) devices, started increasing with height. The starting point is simply a temperature profile dictated to the model, and the finish point is how the physics ends up calculating the final temperature profile:

Atmospheric-radiation-11a-temp-profile-strat-wv

Figure 1 – A warmer stratosphere and a happier climate model

At the same time, I’ve been updating the model so that it can run to some kind of equilibrium and then various GHGs can be changed.

This was to calculate “radiative forcing” under various scenarios, and specifically I wanted to show how energy moved around in the climate system after a “bump” in something like CO2. This is something that many many people can’t get right in their heads. One of the objectives of the model is to show bit by bit how the increased CO2 causes a reduction in net outgoing radiation, and how that in turn pushes up the atmospheric and surface temperature.

On this journey, once the model stratosphere was behaving a little like its real-life big brother it occurred to me that maybe we could answer the question of why the stratosphere was expected to cool with increased CO2.

See Stratospheric Cooling for some background.

Previously I have worked under the assumption that there are lots of competing “terms” in the energy balance equation for how the stratosphere responds to more CO2 and so simple conceptual models are not going to help.

Now the Science of Doom Climate Model (SoDCM) comes to the rescue.

In fact, while I was waiting for lots of simulations to finish on the PC I was reading again the fascinating Radiative Forcing and Climate Response, by Hansen, Sato & Ruedy, JGR (1997) – free paper – and in a groundhog day experience realized I didn’t understand their flux graphs resulting from various GCM simulations. So the SoDCM allowed me to solve my own conceptual problems.

Maybe.

Let’s take a look at stratospheric cooling.

Understanding Flux Curves

In this simulation:

  • CO2 at 280 ppm
  • no ozone, CH4 or N2O for longwave absorption
  • boundary layer humidity at 80%
  • free tropospheric humidity at 40%
  • stratospheric water vapor at 6 ppmv
  • tropopause at 200 hPa
  • top of atmosphere (TOA) at 1 hPa
  • solar radiation at 242 W/m² with some absorbed in the stratosphere and troposphere as shown in figure 1 of Part Nine – Reaching Equilibrium

The surface temperature reached equilibrium at 281K and the tropopause was at 11 km:

Atmospheric-radiation-12c-temperature-profile

Figure 2

The equilibrium was reached by running the model for 500 (model) days, with timesteps of 2 hours. The ocean depth was only 5 meters simply to allow the model to get to equilibrium quicker (note 1).
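
As a sketch of what that time-stepping looks like – with olr(Ts) standing in for the full line-by-line flux calculation (a hypothetical stand-in, not the actual code):

    rho_w = 1000; cw = 4186; depth = 5;     % water density, specific heat, slab depth (m)
    C = rho_w*cw*depth;                     % heat capacity of the 5 m ocean (J/m^2/K)
    dt = 2*3600; S = 242;                   % 2-hour timestep (s), absorbed solar (W/m^2)
    Ts = 270;                               % arbitrary starting surface temperature (K)
    olr = @(Ts) 0.61*5.67e-8*Ts^4;          % crude stand-in: effective emissivity 0.61
    for n = 1:500*12                        % 500 days of 2-hour steps
        Ts = Ts + dt*(S - olr(Ts))/C;       % warm if absorbing more than emitting
    end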

Then at 500 days the CO2 concentration was doubled to 560 ppm and we capture a number of different values from the timestep before the increase and the timestep after the increase.

Let’s take a look at the up and down fluxes through the atmosphere. See also figure 6 of Part Two. In this case we can see pre- and post-2xCO2, but let’s first just try and understand what these flux vs height graphs actually mean:

Atmospheric-radiation-12a-flux-profile-pre-post-2xCO2

Figure 3 – Understanding the Basics

If flux just stays constant (vertical line) through a section of the atmosphere what does it mean?

It means there is no net absorption. It could mean that the atmosphere is transparent to that radiation. It could mean that the atmosphere emits exactly the same amount that it absorbs. Or some of both. Either way, no change = no net radiation absorbed.
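
One way to see the isothermal case in numbers – a grey slab standing in for the full spectral calculation: if the incident flux equals the slab’s own blackbody flux, absorption and emission cancel exactly, whatever the optical thickness:

    sigma = 5.67e-8; T = 250;               % slab temperature (K)
    tau = 0:0.5:5;                          % a range of optical thicknesses
    F_in = sigma*T^4;                       % incident flux from an equally-warm source below
    F_out = F_in*exp(-tau) + sigma*T^4*(1 - exp(-tau));
    % F_out equals F_in for every tau: no net absorption, so the flux line is vertical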

Take a look in figure 3 at the (pre-CO2 doubling) upward flux above 10km (in the stratosphere). About 237 W/m² enters the bottom of the stratosphere and about 242 W/m² leaves the top of atmosphere. So the stratosphere is 5 W/m² worse off and from the first law of thermodynamics this either cools the stratosphere or something else is supplying this energy.

Now take a look at the (pre-CO2) downward flux in the stratosphere. At the top of atmosphere there is no downward longwave radiation because there is no source of this radiation outside of the atmosphere. So downward flux = 0 at TOA.

At the bottom of the stratosphere, about 27 W/m² is leaving. So zero is entering and 27 W/m² is leaving – this means that the stratosphere is worse off by 27 W/m².

If we add up the upward and downward longwave fluxes through the stratosphere we find that there is a net loss of about 32 W/m². This means that if the stratosphere is in equilibrium some other source must be supplying 32 W/m².

In this case it is the solar absorption of radiation.
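
The bookkeeping, using the values read off figure 3:

    up_in  = 237;    % upward flux entering at the tropopause (W/m^2)
    up_out = 242;    % upward flux leaving at TOA (W/m^2)
    dn_in  = 0;      % downward flux at TOA (no longwave source above)
    dn_out = 27;     % downward flux leaving at the tropopause (W/m^2)
    net_lw = (up_in - up_out) + (dn_in - dn_out)   % = -32 W/m^2, balanced by solar heating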

If we were considering the troposphere it would most likely be convection from the surface or lower atmosphere that would be balancing any net radiation loss from higher up in the troposphere.

So, to recap:

  • think about the direction the radiation is travelling in:
    • if it is reducing in the direction of travel, energy is being absorbed by that section of the atmosphere
    • if it is increasing in the direction of travel, energy is being lost from that section of the atmosphere
  • if a plot of flux against height is constant (a vertical line), there is no change in energy in that region, which means either:
    • the atmosphere is transparent to that radiation, OR
    • the atmosphere is isothermal in that region (emission is balanced by absorption)

Take another look at figure 3 below 10km:

  1. The upward radiation is reducing with height – energy is absorbed by each level of the atmosphere. This is a net heating.
  2. The downward radiation is increasing – energy is lost from each level of the atmosphere. This is a net cooling.
  3. The slopes of the two curves are not equal. This is because energy is also transferred via convection in the troposphere.

Understanding these concepts is essential to understanding radiation in the atmosphere.

Upward Flux from Changes in CO2

Let’s take a closer look at the upward and downward changes due to doubling CO2. So the “pre” curve is the atmosphere in a nice equilibrium condition. And the “post” curve is immediately after CO2 has been doubled, long before any equilibrium has been reached.

Let’s zoom in on the upward fluxes in the stratosphere pre- and immediately post-CO2 doubling:

Atmospheric-radiation-12a-flux-profile-pre-post-2xCO2-highlight-up-stratosphere

Figure 4

Even though the curves are roughly parallel from 10 km through to 30 km, you should be able to see that the post-2xCO2 curve has the larger gradient. So pre-CO2 increase the stratosphere loses a net of about 5 W/m² via the upward flux, and after the CO2 increase it loses about 6 W/m².

This means more CO2 increases the cooling of the stratosphere when we consider the upward flux. So now the question is, WHY?

If we want to understand the answer, the most useful ingredient is the spectral characteristics of the pre and post cases. Here we take the radiation leaving at TOA and subtract the radiation entering at the tropopause. So we are considering the net energy lost (why lost? because this calculation is energy out – energy in) as a function of wavenumber.
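
In model terms this is straightforward. A sketch, assuming the line-by-line code provides spectral upward fluxes Fup_toa and Fup_trop (W/m² per cm⁻¹) on a wavenumber grid v – again, illustrative names:

    net_loss = Fup_toa - Fup_trop;          % energy out minus energy in, per wavenumber
    total_loss = trapz(v, net_loss)         % integrates to ~5 W/m^2 before the doubling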

Here is the spectral graph of energy lost by the stratosphere due to upwards radiation, before the CO2 increase:

Atmospheric-radiation-12g-upward-spectrum-21-13-pre

Figure 5

The post-CO2 doubling looks very similar so here is a comparison graph, with a slight smoothing (moving average window) just to allow us to see a little more clearly the main differences:

Atmospheric-radiation-12f-upward-spectrum-21-13-pre-and-post-smoothed

Figure 6

So we see that in the post-2xCO2 case the energy lost is a little higher, and the increase is in the wavenumber region where CO2 emits strongly. CO2’s peak absorption/emission is at 667 cm-1 (15 μm).

Just to confirm, here is the difference – post-2xCO2 minus pre-2xCO2 and not smoothed:

Atmospheric-radiation-12h-upward-spectrum-21-13-post-less-pre

Figure 7

We can see that the main regions of CO2 absorption and emission are the reason. And we note that the temperature of the stratosphere is increasing with height.

So the reason is clear – due to the principles outlined in Part Two. Because the stratospheric temperature increases with height, the net emission of radiation (i.e., emission less absorption) becomes progressively larger as we go up through the stratosphere. And once we increase the amount of CO2, this net emission increases even further.

This is what we see in the spectral intensity – the net change in stratospheric emission [(out-in)2xCO2 - (out-in)1xCO2] increases due to the emission in the main CO2 bands.

Downward Flux from Changes in CO2

Here is what we see when we zoom in on the downward flux in the stratosphere:

Atmospheric-radiation-12a-flux-profile-pre-post-2xCO2-highlight-down-stratosphere

Figure 8

Of course, as already mentioned, the downward longwave flux at TOA must be zero.

This time it is conceptually easier to understand the change from more CO2. There’s one little fly in the understanding ointment, but let’s come to that later.

So when we think about the cooling of the stratosphere from downward flux it’s quite easy. Coming in at the top is zero. Coming out of the bottom (pre-CO2 increase) is about 27 W/m². Coming out of the bottom (post-2xCO2) is about 30 W/m². So increasing CO2 causes a cooling of about 3 W/m² due to changes in downward flux.

Here is the spectral flux (unsmoothed) downward out of the bottom of the tropopause, pre- and post-2xCO2:

Atmospheric-radiation-12d-downward-spectrum-tropopause-pre-post

Figure 9

And as with figure 7, below is the difference in downward intensity as a result of 2xCO2. This is post less pre, so the positive value overall means a cooling – as we saw in the total flux change in figure 8.

The cause is still the CO2 band, but the specifics are a little different from the upward change. Here the center of the CO2 band has zero effect. Instead, the “wings” of the CO2 band – around 600 cm-1 and 700 cm-1 – are the places causing the effect:

Atmospheric-radiation-12d-downward-spectrum-tropopause-delta-pre-post

Figure 10

The temperature is reducing as we go downwards so the emission from the center of the CO2 band cannot be increasing as we go downward. If we look back at figure 7 for the upward direction, the temperature is increasing upward so the emission from the center of the CO2 band must be increasing.

And now the conceptual fly in the ointment alluded to earlier – this one can be confusing (or simple). If the starting flux at TOA is zero and the temperature decreases downward, surely the downward flux can only get less? Less than zero? Instead, think of the whole stratosphere as a body. It must emit radiation according to its temperature and emissivity. It can’t absorb any radiation from above (because there is none), so it must emit some downward radiation. As its emissivity increases with more GHGs, it must emit more radiation into the troposphere. It’s simple really.
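
The single-slab picture in numbers – a sketch only, with a grey emissivity standing in for the spectral calculation:

    sigma = 5.67e-8; T = 220;               % a representative stratospheric temperature (K)
    tau = [0.1 0.2 0.4];                    % optical thickness growing as CO2 is added
    F_down = (1 - exp(-tau)) * sigma*T^4    % downward emission rises with emissivity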

Let’s now finalize this story by considering the net change in flux with height due to CO2 increases. Here if “net” is increasing with height it means absorption or heating. And if “net” is reducing with height it means emission or cooling. See note 2 where the details are explained.

So the blue line (upward flux) decreasing from the tropopause up to TOA means that the change in flux is cooling the stratosphere. And likewise for the green line (downward flux). This is just the results already shown as spectral changes now shown as flux changes:

Atmospheric-radiation-12b-delta-flux-profile-pre-post-2xCO2

Figure 11

Net Effect

If we combine the two curves in figure 11 we get the total net effect of doubling CO2:

Atmospheric-radiation-12b-delta-flux-profile-pre-post-2xCO2-total

Figure 12

From the tropopause at 11km through to TOA we can see that the combined change in flux due to CO2 doubling causes a cooling of the stratosphere. (And from the surface up to the tropopause we see a heating of the troposphere).

By comparison, here is an extract from Hansen et al (1997):

From Hansen et al (1997)

Figure 13

The highlighted instantaneous graph is the one for comparison with figure 12.

This is the case before the stratosphere has relaxed into equilibrium. Note that the “adjusted” graph – stratospheric equilibrium – has a vertical line for ΔF vs height, which simply means that the stratosphere is, in that case, in radiative equilibrium.

Notice as well that the magnitude of my graph is a lot higher. There may be a lot of reasons for that, including the fact that mine is one specific case rather than some climatic mean, and also that the absorption of solar radiation in my model has been treated very crudely. (Other factors include missing GHGs like CH4, N2O, etc.)

Reasons

So we have seen that the net emission of radiation by CO2 bands is what causes the cooling from upward radiation and the cooling from downward radiation when CO2 is increased.

For further insight, I amended the model so that, on the timesteps just before and just after equilibrium, the stratosphere was:

A) snapped back to an isothermal case, with the temperature set at the tropopause temperature just calculated

B) forced into a cooling of 4 K/km (cf. the troposphere, with a lapse rate of 6.5 K/km)

Case A, temperature profile just before and after equilibrium:

Atmospheric-radiation-12k-isothermal-temperature-profile

Figure 14

And the comparison to figure 11:

Atmospheric-radiation-12b-delta-flux-profile-pre-post-2xCO2-isothermal-stratosphere

Figure 15

We can see that the downward flux change is similar to figure 11, but the upward flux is different. It is fairly constant through the stratosphere. This is not surprising. The flux from below is either transmitted straight through, or is absorbed and re-emitted at the same temperature. So no change to upward flux.

But the downward flux results only from the emission of the stratosphere itself (nothing is transmitted through from above). As CO2 is increased the emissivity of the atmosphere increases, and so the emission of radiation from the stratosphere increases. The fact that the stratosphere is isothermal has only a small effect, as can be seen by comparing the green curves in figures 15 & 11 – it isn’t very significant.

Now let’s consider case B. First the temperature profile:

Atmospheric-radiation-12n-declining-strato-temperature-profile

Figure 16

Now the net flux graph:

Atmospheric-radiation-12p-delta-flux-profile-pre-post-2xCO2-cool-stratosphere

Figure 17

Here we see that the effect of increased CO2 on the upward flux is now a heating in the stratosphere. And the net change in downward flux still has a cooling effect.

Atmospheric-radiation-12o-delta-flux-profile-pre-post-2xCO2-total-cool-stratosphere

Figure 18

Here we see that for a stratosphere where temperature reduces with altitude, doubling CO2 would not have a noticeable effect on the stratospheric temperature. Depending on the temperature profile (and other factors) there might be a slight cooling or a slight heating.

Conclusion

This is a subject where it’s easy to confuse readers – along with the article writer. Possibly no one who was unclear at the start made it the whole way through and said “ok, got it”.

Hopefully, even if you only made it part of the way through, you now have a better grasp of some of the principles.

The reasons behind stratospheric cooling due to increased GHGs have been difficult to explain even for very knowledgeable atmospheric physicists (e.g., one of many).

I think I can explain stratospheric cooling under increasing CO2. I think I can see that other factors like the exact temperature profile of the stratosphere on any given day/month and the water vapor profile (not shown in this article) will also affect the change in stratospheric temperature from increasing CO2.

If the bewildering complexity of up/down, in-out, net of in-out, net of in-out for 2xCO2-original CO2 has left you baffled please feel free to ask questions. This is not an easy topic. I was baffled. I have 4 pages of notes with little graphs and have rewritten the equations in note 2 at least 5 times to try and get the meaning clear – and am still expecting someone to point out a sign error.

Related Articles

Part One - some background and basics

Part Two - some early results from a model with absorption and emission from basic physics and the HITRAN database

Part Three – Average Height of Emission - the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions

Part Four – Water Vapor - results of surface (downward) radiation and upward radiation at TOA as water vapor is changed

Part Five – The Code - code can be downloaded, includes some notes on each release

Part Six – Technical on Line Shapes - absorption lines get thinner as we move up through the atmosphere

Part Seven – CO2 increases - changes to TOA in flux and spectrum as CO2 concentration is increased

Part Eight – CO2 Under Pressure - how the line width reduces (as we go up through the atmosphere) and what impact that has on CO2 increases

Part Nine – Reaching Equilibrium - when we start from some arbitrary point, how the climate model brings us back to equilibrium (for that case), and how the energy moves through the system

Part Ten – “Back Radiation” - calculations and expectations for surface radiation as CO2 is increased

Part Twelve – Heating Rates - heating rate (°C/day) for various levels in the atmosphere – especially useful for comparisons with other models.

References

The data used to create these graphs comes from the HITRAN database.

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al, Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

The HITRAN 2004 molecular spectroscopic database, by L.S. Rothman et al., Journal of Quantitative Spectroscopy & Radiative Transfer (2005)

Radiative Forcing and Climate Response, by Hansen, Sato & Ruedy, JGR (1997) – free paper

Notes

Note 1: The relative heat capacity of the ocean vs the atmosphere has a huge impact on the climate dynamics. But in this simulation we were interested in reaching an equilibrium for a given CO2 concentration & solar absorption – and then seeing what happened to radiative balance immediately after a bump in CO2 concentration.

For this requirement it isn’t so important to have the right ocean depth needed for decent dynamic modeling.

Note 2: The treatment of upward and downward flux can get bewildering. The easiest approach is to consider the change in flux in the direction in which it is travelling. But upward and downward are opposite directions – F↑ travels in the direction of increasing z, while F↓ travels opposite to z – so the sign conventions for heating and cooling come out opposite for the two fluxes.

Due to changing GHGs (the subscript denotes the CO2 concentration, with 1xCO2 the pre-doubling case):

If F↑(z)2xCO2 – F↑(z)1xCO2 < 0 => Heating below height z (less flux escaping)

If F↑(z)2xCO2 – F↑(z)1xCO2 > 0 => Cooling below height z (more flux escaping)

If F↓(z)2xCO2 – F↓(z)1xCO2 < 0 => Cooling below height z (less flux entering)

If F↓(z)2xCO2 – F↓(z)1xCO2 > 0 => Heating below height z (more flux entering)

So, for example, in figure 11: net upward = F↑(z)1xCO2 – F↑(z)2xCO2 and net downward = F↓(z)2xCO2 – F↓(z)1xCO2

Flux “divergence”

dF↑(z)/dz < 0  => Heating of that part of the atmosphere (upward flux is reducing due to being absorbed)

dF↓(z)/dz < 0  => Cooling of that part of the atmosphere (the downward flux increases as we go down because more is emitted along the way; written against increasing z, to match the equation, that same downward flux is decreasing)


We have mostly looked at the upward spectra at the top of atmosphere (TOA) as various conditions are changed. There’s a good reason for this focus – the outgoing longwave radiation (OLR) determines how much the climate system cools to space.

Over a given timescale this either matches absorbed solar radiation or the planet is heating or cooling. So it is changes in OLR (or absorbed solar) that really affect the heat balance in the climate.

By comparison, the trend in downward longwave radiation (DLR) at the surface is more a result of overall planetary heating and cooling. But of course, the climate is a lot more complex than indicated by that last statement.

Let’s take a look. Note that Part Four – Water Vapor already has some graphs of how the DLR or “back radiation” changes with water vapor concentration.

Here is the DLR for 4 different surface temperatures. In each case there is a lapse rate of 6.5 K/km, the boundary layer humidity (BLH) = 100%, the free tropospheric humidity (FTH) = 40% and there were 10 atmospheric layers in the model with a top of atmosphere at 50 hPa. More about the model in Part Two and Part Five – The Code.

The top graph is the real case, the bottom graph is without the effect of the water vapor continuum:

Atmospheric-radiation-10a-DLR-4-temps-with-without-continuum

Figure 1

The continuum operates over the whole range of terrestrial wavelengths of interest, but its main impact is in the “atmospheric window region” between 800-1200 cm-1. This window region doesn’t have many strong absorption lines so absorption from any other cause has a big effect.

As we can see, the “window” is very dependent on temperature – which is mainly a result of the amount of water vapor. It’s clearer when we look at the spectral difference between the two cases for each of the temperatures:

Atmospheric-radiation-10b-DLR-4-temps-delta-continuum

Figure 2

Notice that the 273 K (0 °C) condition is almost unaffected by the continuum. This is because the effect is dependent on the amount of water vapor squared. And the amount of water vapor is strongly dependent on temperature.
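
A sketch of why the 273 K curve is so flat, with illustrative numbers only: saturation vapor pressure (here via Bolton’s formula) roughly doubles every 10 °C, and the self-continuum scales roughly as its square:

    T  = [273 288 303 318];                         % the four surface temperatures (K)
    es = 611.2*exp(17.67*(T-273.15)./(T-29.65));    % saturation vapor pressure (Pa)
    rel = (es/es(2)).^2                             % continuum strength relative to 288 K
    % rel is roughly [0.1 1 6 30] - negligible at 273 K, dominant at 318 K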

Let’s look at the total flux for both cases and compare with a reference of blackbody emission from the bottom layer of the atmosphere (in this case 400 m above the surface, so about 2.6°C cooler than the surface – see note 1):

Atmospheric-radiation-10c-DLR-4-temps-flux-vs-bb

Figure 3

This shows that once we are above a surface temperature of 300 K (27 °C) with high boundary layer humidity the radiation from atmosphere to surface is getting close to blackbody emission. The graph also demonstrates that most of that change is due to the continuum.

Now, good emitters are also good absorbers. So here is another way of looking at the same effect – the % of surface radiation in the 800-1200 cm-1 window region that makes it to the top of atmosphere (without being absorbed anywhere along the way):

Atmospheric-radiation-10d-TOA-percent-through-window-4-temps

Figure 4
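
The quantity in figure 4 can be sketched like this – assuming the model provides a spectral optical depth tau_v for the whole atmospheric path (a vertical-path simplification; the model proper integrates over angles) and the surface Planck flux B_surf on the same wavenumber grid v (cm⁻¹), both hypothetical names:

    w = (v >= 800 & v <= 1200);             % pick out the atmospheric window region
    pct = 100 * trapz(v(w), B_surf(w).*exp(-tau_v(w))) / trapz(v(w), B_surf(w))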

These were all with CO2 at 360 ppm (and N2O at 319 ppbv, CH4 at 1775 ppbv and no ozone).

Let’s look at how changing CO2 concentration affects these results.

Atmospheric-radiation-10e-DLR-TOA-280-560

Figure 5

This is a very important graph – what does it show?

  • while different surface temperatures have quite different TOA radiation to space – the change in CO2 causes a fairly constant change in this radiation
  • changing CO2 has much less effect on the DLR (radiation from the atmosphere to the surface), and as the temperature increases this effect is even more reduced

Let’s look at the “delta”:

Atmospheric-radiation-10f-Delta-DLR-TOA-280-560

Figure 6 – [Corrected Jan 23]

This shows clearly how the change in atmospheric DLR due to doubling CO2 is very much a function of surface temperature. And at the same time, the change in TOA radiation (“OLR”) is almost independent of surface temperature.

From the information presented in this article on how DLR is affected by water vapor at high temperatures, the first point shouldn’t be surprising. And from the explanation in Part Four – Water Vapor, neither point should be surprising.

For interest, here are the two DLR spectra for 280 ppm & 560 ppm at 288 K, and below, the difference:

Atmospheric-radiation-10g-DLR-spectrum-288K-280-560

Figure 7

Conclusion

The surface energy balance is very important for determining the dynamics of surface heat transfer, including initiating convection. As the temperature gets up to 30°C, the ability of the surface to radiate directly to space (through the atmospheric window) is reduced to a very low value.

“Deep convection” which drives the tropical circulation is mostly initiated in these very hot surface conditions.

The effect of changing CO2 on atmospheric radiation to the surface (DLR) is small. With high boundary layer relative humidity, water vapor masks out most of the effect of changing CO2 in hotter surface conditions.

But the effect of increasing CO2 on the TOA radiation balance is completely different. High surface humidities have little or no effect on this TOA balance. And there, doubling CO2 has a significant impact (all other things being equal) as shown in figure 12 of Part Seven – CO2 increases.

Working out radiation balance through the atmosphere in your head is difficult. Most people attempting it don’t have the right “calibration points”.

The fundamental physics is straightforward, at least in terms of the values of absorption and emission of radiation (not the “why”). But calculating the result requires computing effort and an integration (summation) across:

  • multiple layers at different temperatures and concentrations
  • the hundreds of thousands of absorption/emission lines of multiple GHGs
  • a large range of wavenumbers

Related Articles

Part One - some background and basics

Part Two - some early results from a model with absorption and emission from basic physics and the HITRAN database

Part Three – Average Height of Emission - the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions

Part Four – Water Vapor - results of surface (downward) radiation and upward radiation at TOA as water vapor is changed

Part Five – The Code - code can be downloaded, includes some notes on each release

Part Six – Technical on Line Shapes - absorption lines get thinner as we move up through the atmosphere

Part Seven – CO2 increases - changes to TOA in flux and spectrum as CO2 concentration is increased

Part Eight – CO2 Under Pressure - how the line width reduces (as we go up through the atmosphere) and what impact that has on CO2 increases

Part Nine – Reaching Equilibrium - when we start from some arbitrary point, how the climate model brings us back to equilibrium (for that case), and how the energy moves through the system

Part Eleven – Stratospheric Cooling - why the stratosphere is expected to cool as CO2 increases

Part Twelve – Heating Rates - heating rate (°C/day) for various levels in the atmosphere – especially useful for comparisons with other models.

References

The data used to create these graphs comes from the HITRAN database.

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al, Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

The HITRAN 2004 molecular spectroscopic database, by L.S. Rothman et al., Journal of Quantitative Spectroscopy & Radiative Transfer (2005)

Notes

Note 1: This model looks at the range of wavenumbers 200-2,500 cm-1, which equates to 4-50 μm, to ease the calculation effort required. This means that when we sum up the contribution from all calculated wavelengths we are missing some bits. So for example, if we calculate the emission of thermal radiation by a surface at 288 K with an emissivity of 1.0 we calculate 390 W/m² – the “blackbody flux”.

But with our “restricted view” of the spectrum we will instead calculate 376 W/m².

Almost all of the “missing spectrum” is in the far infra-red (longer wavelengths/lower wavenumbers), and is subject to relatively high absorption from water vapor.
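
This is easy to verify numerically – a sketch, with constants in SI and wavenumbers converted from cm⁻¹ to m⁻¹:

    h = 6.626e-34; c = 2.998e8; kB = 1.381e-23; sigma = 5.67e-8;
    B = @(nu,T) 2*h*c^2 .* nu.^3 ./ (exp(h*c*nu/(kB*T)) - 1);  % Planck radiance, nu in m^-1
    F_band  = pi*integral(@(nu) B(nu,288), 200e2, 2500e2)      % ~376 W/m^2 in 200-2500 cm^-1
    F_total = sigma*288^4                                      % ~390 W/m^2 over all wavenumbers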

