In Part Two we looked at one paper by Lorenz from 1968 where he put forward the theory that climate might be “intransitive”. In common parlance we could write this as “climate might be chaotic” (even though there is a slight but important difference between the two definitions).

In this article we will have a bit of a look at the history of the history of climate – that is, a couple of old papers about ice ages.

These papers are quite dated and lots of new information has since come to light, and of course thousands of papers have since been written about the ice ages. So why a couple of old papers? It helps to create some context around the problem. These are “oft-cited”, or seminal, papers, and understanding ice ages is so complex that it is probably easiest to set out an older view as some kind of perspective.

At the very least, it helps get my thinking into order. Whenever I try to understand a climate problem I usually end up trying to understand some of the earlier oft-cited papers because most later papers rely on that context without necessarily repeating it.

Variations in the Earth’s Orbit: Pacemaker of the Ice Ages by JD Hays, J Imbrie, NJ Shackleton (1976) is referenced by many more recent papers that I’ve read – and, according to Google Scholar, cited by 2,656 other papers – that’s a lot in climate science.

For more than a century the cause of fluctuations in the Pleistocene ice sheets has remained an intriguing and unsolved scientific mystery. Interest in this problem has generated a number of possible explanations.

One group of theories invokes factors external to the climate system, including variations in the output of the sun, or the amount of solar energy reaching the earth caused by changing concentrations of interstellar dust; the seasonal and latitudinal distribution of incoming radiation caused by changes in the earth’s orbital geometry; the volcanic dust content of the atmosphere; and the earth’s magnetic field. Other theories are based on internal elements of the system believed to have response times sufficiently long to yield fluctuations in the range 10,000 to 1,000,000 years.

Such features include the growth and decay of ice sheets, the surging of the Antarctic ice sheet; the ice cover of the Arctic Ocean; the distribution of carbon dioxide between atmosphere and ocean; and the deep circulation of the ocean.

Additionally, it has been argued that as an almost intransitive system, climate could alternate between different states on an appropriate time scale without the intervention of any external stimulus or internal time constant.

This last idea is referenced as Lorenz 1968, the paper we reviewed in Part Two.

The authors note that previous work has provided evidence of orbital changes being involved in climate change, and make an interesting comment that we will see has not changed in the intervening 38 years:

The first [problem] is the uncertainty in identifying which aspects of the radiation budget are critical to climate change. Depending on the latitude and season considered most significant, grossly different climate records can be predicted from the same astronomical data..

Milankovitch followed Koppen and Wegener’s view that the distribution of summer insolation at 65°N should be critical to the growth and decay of ice sheets.. Kukla pointed out weaknesses.. and suggested that the critical time may be Sep and Oct in both hemispheres.. As a result, dates estimated for the last interglacial on the basis of these curves have ranged from 80,000 to 180,000 years ago.

The other problem at that time was the lack of quality data on the dating of various glacials and interglacials:

The second and more critical problem in testing the orbital theory has been the uncertainty of geological chronology. Until recently, the inaccuracy of dating methods limited the interval over which a meaningful test could be made to the last 150,000 years.

This paper then draws on some newer, better-quality data for the last few hundred thousand years of temperature history. By the way, Hays was (and is) a Professor of Geology, Imbrie was (and is) a Professor of Oceanography and Shackleton was at the time in Quaternary Research, later a professor in the field.

Brief Introduction to Orbital Parameters that Might Be Important

Now, something we will look at in a later article, probably Part Four, is exactly what changes in solar insolation are caused by changes in the earth’s orbital geometry. But as an introduction to that question, there are three parameters that vary and are linked to climate change:

  1. Eccentricity, e, (how close is the earth’s orbit to a circle) – currently 0.0167
  2. Obliquity, ε, (the tilt of the earth’s axis) – currently 23.439°
  3. Precession, ω, (how close is the earth to the sun in June or December) – currently the earth is closest to the sun on January 3rd

The first, eccentricity, is the only one that changes the total amount of solar insolation received at the top of atmosphere in a given year. Note that a constant solar insolation at the top of atmosphere can still mean varying absorbed solar radiation if more or less of that radiation happens to be reflected off, say, ice sheets whose extent is influenced by, say, obliquity.

The second, obliquity, or tilt, affects the difference between summer and winter TOA insolation. So it affects seasons and, specifically, the strength of seasons.

The third, precession, affects the amount of radiation received at different times of the year (modulated by item 1, eccentricity). So if the earth’s orbit were a perfect circle this parameter would have no effect. When the earth is closest to the sun in June/July the Northern Hemisphere summer is stronger and the Southern Hemisphere summer is weaker, and vice versa for winters.

So eccentricity affects total TOA insolation, while obliquity and precession change its distribution with season and latitude. However, variations in annual solar insolation at TOA depend on e², and so the total variation in TOA radiation has, over very long periods, been only about 0.1%.
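As a minimal Python sketch (not from the paper) of that e² scaling: the annual-mean insolation for a fixed semi-major axis scales as (1 − e²)^(−1/2) relative to a circular orbit. The assumed long-term maximum eccentricity of about 0.058 is an input not taken from the text above.

```python
import numpy as np

def annual_mean_insolation_factor(e):
    # For a fixed semi-major axis, the annual-mean insolation on a sphere
    # scales as (1 - e^2)^(-1/2) relative to a circular orbit.
    return 1.0 / np.sqrt(1.0 - e**2)

for e in (0.0, 0.0167, 0.058):  # circular orbit, present day, assumed long-term maximum
    factor = annual_mean_insolation_factor(e)
    print(f"e = {e:.4f}: relative annual-mean insolation = {factor:.5f}")

# The spread between e ~ 0 and e ~ 0.058 is about 0.17%, i.e. of the same
# order as the ~0.1% figure quoted above.
```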

This variation is very small, and yet the strongest “orbital signal” in the ice age record is that of eccentricity – a problem that, even for the proponents of this theory, has not yet been solved.

Last Interglacial Climates, by a cast of many including George J. Kukla, Wallace S. Broecker, John Imbrie, Nicholas J. Shackleton:

At the end of the last interglacial period, over 100,000 yr ago, the Earth’s environments, similar to those of today, switched into a profoundly colder glacial mode. Glaciers grew, sea level dropped, and deserts expanded. The same transition occurred many times earlier, linked to periodic shifts of the Earth’s orbit around the Sun. The mechanism of this change, the most important puzzle of climatology, remains unsolved.

[Emphasis added].

History Cores

Our geological data comprise measurements of three climatically sensitive parameters in two deep-sea sediment cores. These cores were taken from an area where previous work shows that sediment is accumulating fast enough to preserve information at the frequencies of interest. Measurements of one variable, the per mil enrichment of oxygen 18 (δ18O), make it possible to correlate these records with others throughout the world, and to establish that the sediment studied accumulated without significant hiatuses and at rates which show no major fluctuations..

.. From several hundred cores studied stratigraphically by the CLIMAP project, we selected two whose location and properties make them ideal for testing the orbital hypothesis. Most important, they contain together a climatic record that is continuous, long enough to be statistically useful (450,000 years) and characterized by accumulation rates fast enough (>3 cm per 1,000 years) to resolve climatic fluctuations with periods well below 20,000 years.

The cores were located in the southern Indian Ocean. What is interesting about the cores is that three different climatically sensitive parameters are captured at each location, including δ18O, which should be a measure of global ice volume, and ocean temperature at the location of the cores.

Figure 1 – from Hays, Imbrie & Shackleton (1976)

There is much discussion about the dating of the cores. In essence, other information allows a few transitions to be dated, while the working assumption is that within these transitions the sediment accumulation is at a constant rate.

Although uniform sedimentation is an ideal which is unlikely to prevail precisely anywhere, the fact that the characteristics of the oxygen isotope record are present throughout the cores suggests that there can be no substantial lacunae, while the striking resemblance to records from distant areas shows that there can be no gross distortion of accumulation rate.

Spectral Analysis

The key part of their analysis is a spectral analysis of the data, compared with a spectral analysis of the “astronomical forcing”.

The authors say:

.. we postulate a single, radiation-climate system which transforms orbital inputs into climatic outputs. We can therefore avoid the obligation of identifying the physical mechanism of climatic response and specify the behavior of the system only in general terms. The dynamics of our model are fixed by assuming that the system is a time-invariant, linear system – that is, that its behavior in the time domain can be described by a linear differential equation with constant coefficients. The response of such a system in the frequency domain is well known: frequencies in the output match those of the input, but their amplitudes are modulated at different frequencies according to a gain function. Therefore, whatever frequencies characterize the orbital signals, we will expect to find them emphasized in paleoclimatic spectra (except for frequencies so high they would be greatly attenuated by the time constants of response)..

My translation – let’s compare the orbital spectrum with the spectrum of the climate record, without trying to formulate a physical theory, and see how well the two match up.
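As a concrete illustration of the technique (not the authors’ actual computation), here is a minimal Python sketch that takes a synthetic 450,000-year record containing cycles near 100, 41 and 23 kyr plus noise and recovers those periods with a periodogram; the sampling interval, amplitudes, noise level and random seed are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import periodogram, find_peaks

dt = 3.0                          # assumed sampling interval, kyr
t = np.arange(0.0, 450.0, dt)     # a 450 kyr record, similar in length to the cores
rng = np.random.default_rng(0)

# synthetic "climate" record: three orbital-like cycles plus noise (arbitrary amplitudes)
y = (1.0 * np.sin(2 * np.pi * t / 100.0)
     + 0.5 * np.sin(2 * np.pi * t / 41.0)
     + 0.3 * np.sin(2 * np.pi * t / 23.0)
     + 0.3 * rng.standard_normal(t.size))

freq, power = periodogram(y - y.mean(), fs=1.0 / dt)   # freq in cycles per kyr
peaks, _ = find_peaks(power)
top3 = peaks[np.argsort(power[peaks])[::-1][:3]]
print("dominant periods (kyr):", np.round(1.0 / freq[top3], 1))
# recovers peaks close to 100, 41 and 23 kyr, within the coarse frequency
# resolution that a single 450 kyr record allows
```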

The orbital effects:

Figure 2 – from Hays et al (1976)

The historical data:

Figure 3 – from Hays et al (1976)

We have also calculated spectra for two time series recording variations in insolation [their fig 4 – our fig 2], one for 55°S and the other for 60°N. To the nearest 1,000 years, the three dominant cycles in these spectra (41,000, 23,000 and 19,000 years) correspond to those observed in the spectra for obliquity and precession.

This result, although expected, underscores two important points. First, insolation spectra are characterized by frequencies reflecting obliquity and precession, but not eccentricity.

Second, the relative importance of the insolation components due to obliquity and precession varies with latitude and season.

[Emphasis added]

In commenting on the historical spectra they say:

Nevertheless, five of the six spectra calculated are characterized by three discrete peaks, which occupy the same parts of the frequency range in each spectrum. Those corresponding to periods from 87,000 to 119,000 years are labeled a; 37,000 to 47,000 years, b; and 21,000 to 24,000 years, c. This suggests that the b and c peaks represent a response to obliquity and precession variation, respectively.

Note that the major cycle shown in the frequency spectrum is the 100,000-year peak.

There is a lot of discussion of the data analysis in their paper; have a read of the paper to learn more. The detail probably isn’t essential for current understanding.

The authors conclude:

Over the frequency range 10⁻⁴ to 10⁻⁵ cycle per year, climatic variance of these records is concentrated in three discrete spectral peaks at periods of 23,000, 42,000, and approximately 100,000 years. These peaks correspond to the dominant periods of the earth’s solar orbit and contain respectively about 10, 25 and 50% of the climatic variance.

The 42,000-year climatic component has the same period as variations in the obliquity of the earth’s axis and retains a constant phase relationship with it.

The 23,000-year portion of the variance displays the same periods (about 23,000 and 19,000 years) as the quasi-periodic precession index.

The dominant 100,000 year climatic component has an average period close to, and is in phase with, orbital eccentricity. Unlike the correlations between climate and the higher frequency orbital variations (which can be explained on the assumption that the climate system responds linearly to orbital forcing) an explanation of the correlations between climate and eccentricity probably requires an assumption of non-linearity.

It is concluded that changes in the earth’s orbital geometry are the fundamental cause of the succession of Quaternary ice ages.

Things were looking good for explanations of the ice ages in 1975..

For those who want to understand more recent evaluation of the spectral analysis of temperature history vs orbital forcing, check out papers by Carl Wunsch from 2003, 2004 and 2005, e.g. The spectral description of climate change including the 100 ky energy, Climate Dynamics (2003).

A Few Years Later

Here are a few comments from Imbrie & Imbrie (1980):

Since the work of Croll and Milankovitch, many investigations have been aimed at the central question of the astronomical theory of the ice ages:

Do changes in orbital geometry cause changes in climate that are geologically detectable?

On the one hand, climatologists have attacked the problem theoretically by adjusting the boundary conditions of energy-balance models, and then observing the magnitude of the calculated response. If these numerical experiments are viewed narrowly as a test of the astronomical theory, they are open to question because the models used contain untested parameterizations of important physical processes. Work with early models suggested that the climatic response to orbital changes was too small to account for the succession of Pleistocene ice ages. But experiments with a new generation of models suggest that orbital variations are sufficient to account for major changes in the size of Northern Hemisphere ice sheets..

..In 1968, Broecker et al. (34, 35) pointed out that the curve for summertime irradiation at 45°N was a much better match to the paleoclimatic records of the past 150,000 years than the curve for 65°N chosen by Milankovitch..

Current Status. This is not to say that all important questions have been answered. In fact, one purpose of this article is to contribute to the solution of one of the remaining major problems: the origin and history of the 100,000-year climatic cycle.

At least over the past 600,000 years, almost all climatic records are dominated by variance components in a narrow frequency band centered near a 100,000-year cycle. Yet a climatic response at these frequencies is not predicted by the Milankovitch version of the astronomical theory – or any other version that involves a linear response..

..Another problem is that most published climatic records that are more than 600,000 years old do not exhibit a strong 100,000-year cycle..

The goal of our modeling effort has been to simulate the climatic response to orbital variations over the past 500,000 years. The resulting model fails to simulate four important aspects of this record. It fails to produce sufficient 100k power; it produces too much 23k and 19k power; it produces too much 413k power; and it loses its match with the record around the time of the last 413k eccentricity minimum, when values of e [eccentricity] were low and the amplitude of the 100k eccentricity cycle was much reduced..

..The existence of an unstable fixed point makes tuning an extremely sensitive task. For example, Weertman notes that changing the value of one parameter by less than 1 percent of its physically allowed range made the difference between a glacial regime and an interglacial regime in one portion of an experimental run while leaving the rest virtually unchanged..

This would be a good example of Lorenz’s concept of an almost intransitive system (one whose characteristics over long but finite intervals of time depend strongly on initial conditions).

Once again the spectre of the eminent Lorenz is raised. We will see in later articles that even with much more sophisticated models it is not easy to create an ice age, or to turn an ice age into an interglacial.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

References

Variations in the Earth’s Orbit: Pacemaker of the Ice Ages, JD Hays, J Imbrie & NJ Shackleton, Science (1976)

Modeling the Climatic Response to Orbital Variations, John Imbrie & John Z. Imbrie, Science (1980)

Last Interglacial Climates, Kukla et al, Quaternary Research (2002)

A really long time ago I wrote Ghosts of Climates Past. I’ve read a lot of papers on the ice ages and inter-glacials but never got to the point of being able to write anything coherent.

This post is my attempt to get myself back into gear – after a long time being too busy to write any articles.

Here is what the famous Edward Lorenz said in his 1968 paper, Climatic Determinism – the opening paper at a symposium titled Causes of Climatic Change:

The often-accepted hypothesis that the physical laws governing the behavior of an atmosphere determine a unique climate is examined critically. It is noted that there are some physical systems (transitive systems) whose statistics taken over infinite time intervals are uniquely determined by the governing laws and the environmental conditions, and other systems (intransitive systems) where this is not the case.

There are also certain transitive systems (almost intransitive systems) whose statistics taken over very long but finite intervals differ considerably from one such interval to another. The possibility that long-term climatic changes may result from the almost-intransitivity of the atmosphere rather than from environmental changes is suggested.

The language might be obscure to many readers. But he makes it clear in the paper:

Extract 1 from Lorenz (1968)

Here Lorenz describes transitive systems – systems where the starting conditions do not determine the long-term climate statistics. Instead, the physics and the “outside influences” or forcings (such as the solar radiation incident on the planet) determine the future climate.

Extract 2 from Lorenz (1968)

Here Lorenz introduces the well-known concept of “chaotic systems”, where different initial conditions result in different long-term results. (Note that there can be chaotic systems where different initial conditions produce different time-series results but the same statistical results over a period of time – so intransitive is the more restrictive term; see the paper for more details).

Extracts 3-5 from Lorenz (1968)

Well, interesting stuff from the eminent Lorenz.

A later paper, Kagan, Maslova & Sept (1994), commented on (perhaps inspired by) Lorenz’s 1968 paper and produced some interesting results from quite a simple model:

Extracts from Kagan et al (1994)

That is, a few coupled systems working together can produce profound shifts in the Earth’s climate with periods of around 80,000 years.

In case anyone thinks it’s just obscure foreign journals that comment approvingly on Lorenz’s work, the well-published climate scientist James Hansen had this to say:

The variation of the global-mean annual-mean surface air temperature during the 100-year control run is shown in Figure 1. The global mean temperature at the end of the run is very similar to that at the beginning, but there is substantial unforced variability on all time scales that can be examined, that is, up to decadal time scales. Note that an unforced change in global temperature of about 0.4°C (0.3°C, if the curve is smoothed with a 5-year running mean) occurred in one 20-year period (years 50-70). The standard deviation about the 100-year mean is 0.11°C. This unforced variability of global temperature in the model is only slightly smaller than the observed variability of global surface air temperature in the past century, as discussed in section 5. The conclusion that unforced (and unpredictable) climate variability may account for a large portion of climate change has been stressed by many researchers; for example, Lorenz [1968], Hasselmann [1976] and Robock [1978].

[Emphasis added].

And here is their Figure 1, the control run, from that paper:

From Hansen et al (1988)

In later articles we will look at some of the theories of Milankovitch cycles. Confusingly, many different theories, mostly inconsistent with each other, all go by the same name.

Articles in the Series

Part One – An introduction

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

References

Climatic Determinism, Edward Lorenz (1968)

Discontinuous auto-oscillations of the ocean thermohaline circulation and internal variability of the climate system, Kagan, Maslova & Sept (1994)

Global Climate Changes as Forecast by Goddard Institute for Space Studies Three-Dimensional Model, Hansen et al, Journal of Geophysical Research (1988)

In Wonderland, Radiative Forcing and the Rate of Inflation we looked at the definition of radiative forcing and a few concepts around it:

  • why the instantaneous forcing is different from the adjusted forcing
  • what adjusted forcing is and why it’s a more useful concept
  • why the definition of the tropopause affects the value
  • GCM results usually don’t use radiative forcing as an input

In this article we will look at some results using the Wonderland model.

Remember the Wonderland model is not the earth. But the same is also true of “real” GCMs with geographical boundaries that match the earth as we know it. They are not the earth either. All models have limitations. This is easy to understand in principle. It is challenging to understand in the specifics of where the limitations are, even for specialists – and especially for non-specialists.

What the Wonderland model provides is a coarse geography with earth-like layout of land and ocean, plus of course, physics that follows the basic equations. And using this model we can get a sense of how radiative forcing is related to temperature changes when the same value of radiative forcing is applied via different mechanisms.

In the 1997 paper I think that Hansen, Sato & Ruedy did a decent job of explaining the limitations of radiative forcing, at least as far as the Wonderland climate model is able to assist us with that understanding. Remember as well that, in general, results we see from GCMs do not use radiative forcing. Instead they calculate from first principles – or parameterized first principles.

Doubling CO2

Now there’s a lot in this first figure and it can be a bit overwhelming, so we’ll take it one step at a time. We double CO2 overnight – in Wonderland – and we see various results. The left half of the figure is all about flux while the right half is all about temperature:

Figure 1 – from Hansen et al (1997), green text added

On the top line, the first two graphs are the net flux change, as a function of height and latitude. First left – instantaneous; second left – adjusted. These two cases were explained in the last article.

The second left is effectively the “radiative forcing”, and we can see that above the tropopause (at about 200 mbar) the net flux change with height is constant. This is because the stratosphere has come into radiative balance. Refer to the last article for more explanation. On the right hand side, with all feedbacks from this one change in Wonderland, we can see the famous predicted “tropospheric hot spot” and the cooling of the stratosphere.

We see in the bottom two rows on the right the expected temperature change:

  • second row – change in temperature as a function of latitude and season (where temperature is averaged across all longitudes)
  • third row – change in temperature as a function of latitude and longitude (averaged annually)

It’s interesting to see the larger temperature increases predicted near the poles. I’m not sure I really understand the mechanisms driving that. Note that the radiative forcing is generally higher in the tropics and lower at the poles, yet the temperature change is the other way round.

Increasing Solar Radiation by 2%

Now let’s take a look at a comparison exercise, increasing solar radiation by 2%.

The responses to these comparable global forcings, 2xCO2 & +2% S0, are similar in a gross sense, as found by previous investigators. However, as we show in the sections below, the similarity of the responses is partly accidental, a cancellation of two contrary effects. We show in section 5 that the climate model (and presumably the real world) is much more sensitive to a forcing at high latitudes than to a forcing at low latitudes; this tends to cause a greater response for 2xCO2 (compare figures 4c & 4g); but [the model] is also more sensitive to a forcing that acts at the surface and lower troposphere than to a forcing which acts higher in the troposphere; this favors the solar forcing (compare figures 4a & 4e), partially offsetting the latitudinal sensitivity.

We saw figure 4 in the previous article, repeated again here for reference:

Figure 2 – from Hansen et al (1997)

In case the above comment is not clear, absorbed solar radiation is more concentrated in the tropics and a minimum at the poles, whereas CO2 is evenly distributed (a “well-mixed greenhouse gas”). So a similar average radiative change will cause a more tropical effect for solar but a more even effect for CO2.

We can see that clearly in the comparable graphic for a solar increase of 2%:

Figure 3 – from Hansen et al (1997), green text added

We see that the change in net flux is higher at the surface than the 2xCO2 case, and is much more concentrated in the tropics.

We also see the predicted tropospheric hot spot looking pretty similar to the 2xCO2 tropospheric hot spot (see note 1).

But unlike the cooler stratosphere of the 2xCO2 case, we see an unchanging stratosphere for this increase in solar irradiation.

These same points can also be seen in figure 2 above (figure 4 from Hansen et al).

Here is the table which compares radiative forcing (instantaneous and adjusted), no feedback temperature change, and full-GCM calculated temperature change for doubling CO2, increasing solar by 2% and reducing solar by 2%:

Figure 4 – from Hansen et al (1997), green text added

The value R (far right of table) is the ratio of the predicted temperature change from a given forcing divided by the predicted temperature change from the 2% increase in solar radiation.

Now the paper also includes some ozone changes which are pretty interesting, but won’t be discussed here (unless we have questions from people who have read the paper of course).

“Ghost” Forcings

The authors then go on to consider what they call ghost forcings:

How does the climate response depend on the time and place at which a forcing is applied? The forcings considered above all have complex spatial and temporal variations. For example, the change of solar irradiance varies with time of day, season, latitude, and even longitude because of zonal variations in ground albedo and cloud cover. We would like a simpler test forcing.

We define a “ghost” forcing as an arbitrary heating added to the radiative source term in the energy equation.. The forcing, in effect, appears magically from outer space at an atmospheric level, latitude range, season and time of day. Usually we choose a ghost forcing with a global and annual mean of 4 W/m², making it comparable to the 2xCO2 and +2% S0 experiments.

In the following table we see the results of various experiments:

Figure 5 – from Hansen et al (1997)

We note that the feedback factor for the ghost forcing varies with the altitude of the forcing by about a factor of two. We also note that a substantial surface temperature response is obtained even when the forcing is located entirely within the stratosphere. Analysis of these results requires that we first quantify the effect of cloud changes. However, the results can be understood qualitatively as follows.

Consider ΔTs in the case of fixed clouds. As the forcing is added to successively higher layers, there are two principal competing effects. First, as the heating moves higher, a larger fraction of the energy is radiated directly to space without warming the surface, causing ΔTs to decline as the altitude of the forcing increases. However, second, warming of a given level allows more water vapor to exist there, and at the higher levels water vapor is a particularly effective greenhouse gas. The net result is that ΔTs tends to decline with the altitude of the forcing, but it has a relative maximum near the tropopause.

When clouds are free to change, the surface temperature change depends even more on the altitude of the forcing (figure 8). The principal mechanism is that heating of a given layer tends to decrease large-scale cloud cover within that layer. The dominant effect of decreased low-level clouds is a reduced planetary albedo, thus a warming, while the dominant effect of decreased high clouds is a reduced greenhouse effect, thus a cooling. However, the cloud cover, the cloud cover changes and the surface temperature sensitivity to changes may depend on characteristics of the forcing other than altitude, e.g. latitude, so quantitative evaluation requires detailed examination of the cloud changes (section 6).

Conclusion

Radiative forcing is a useful concept which gives a headline idea about the imbalance in climate equilibrium caused by something like a change in “greenhouse” gas concentration.

GCM calculations of temperature change over a few centuries do vary significantly with the exact nature of the forcing – primarily its vertical and geographical distribution. This means that a calculated radiative forcing of, say, 1 W/m² from two different mechanisms (e.g. ozone and CFCs) would (according to GCMs) not necessarily produce the same surface temperature change.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Notes

Note 1: The reason for the predicted hot spot is that more water vapor causes a lower lapse rate – which increases the temperature higher up in the troposphere relative to the surface. This change is concentrated in the tropics because the tropics are hotter and, therefore, have much more water vapor. The dry polar regions cannot get a lapse rate change from more water vapor because the effect is so small.

Any increase in surface temperature is predicted to cause this same change.
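To put a rough number on the lapse-rate point (not from any of the papers discussed here): the standard textbook expression for the saturated adiabatic lapse rate can be evaluated at a warm, moist temperature and at a cooler one. The chosen temperatures, the fixed 1000 hPa pressure and the Bolton approximation for saturation vapor pressure are assumptions for illustration only.

```python
import numpy as np

# Saturated (pseudo)adiabatic lapse rate, standard textbook form:
#   gamma_m = g * (1 + L*r_s/(R_d*T)) / (c_pd + L**2 * r_s * eps / (R_d * T**2))
g, L, R_d, c_pd, eps = 9.81, 2.5e6, 287.0, 1004.0, 0.622

def saturation_vapor_pressure(T):
    # Bolton (1980) approximation, in hPa, for temperature T in kelvin
    return 6.112 * np.exp(17.67 * (T - 273.15) / (T - 29.65))

def moist_lapse_rate(T, p_hpa):
    e_s = saturation_vapor_pressure(T)
    r_s = eps * e_s / (p_hpa - e_s)          # saturation mixing ratio, kg/kg
    num = 1.0 + L * r_s / (R_d * T)
    den = c_pd + L**2 * r_s * eps / (R_d * T**2)
    return g * num / den * 1000.0            # K/km

for T in (300.0, 280.0):                      # assumed warm-tropical vs cooler case
    print(f"T = {T:.0f} K: moist lapse rate ~ {moist_lapse_rate(T, 1000.0):.1f} K/km")
# roughly 3.7 K/km at 300 K vs about 5.6 K/km at 280 K: warmer, moister air has a
# lower lapse rate, which is why warming is amplified aloft in the moist tropics
```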

From my limited research, the idealized picture of the hot spot as shown above is not actually what the full model results look like. In the figure below, the top graph is the “CO2 only” case and the bottom graph is “CO2 + aerosols” – the second graph is obviously closer to the real case:

From Santer et al (1996)

Many people have asked for my comment on the hot spot, but apart from putting forward an opinion I haven’t spent enough time researching this topic to understand it. From time to time I do dig in, but it seems that there are about 20 papers that need to be read to say something useful on the topic. Unfortunately many of them are heavy in stats and my interest wanes.

Radiative forcing is a “useful” concept in climate science.

But while it informs, it also obscures, and many people are confused about its applicability. Many people are also confused about why stratospheric adjustment takes place and what it means. And why does the definition of the tropopause – a concept that doesn’t have one definite meaning – affect this all-important concept of radiative forcing? Surely there is a definition which is clear and unambiguous?

So there are a few things we will attempt to understand in this article.

The Rate of Inflation and Other Stories

The value of radiative forcing (however it is derived) has the same usefulness as the rate of inflation, or the exchange rate as measured by a basket of currencies (with relevant apologies to all economists reading this article).

The rate of inflation tells you something about how prices are increasing but in the end it is a complex set of relationships reduced to a single KPI.

It’s quite possible for the rate of inflation to be the same value in two different years, and yet one important group in the country in question sees no increase in their costs in the first year but a significant increase in the second year. That’s the problem with reducing a complex problem to one number.

However, the rate of inflation apparently has some value despite being a single KPI. And so it is with radiative forcing.

The good news is, when we get the results from a GCM, we can be sure the value of radiative forcing wasn’t actually used. Radiative forcing is more to inform the public and penniless climate scientists who don’t have access to a GCM.

Wonderland, the Simple Climate Model

The more precision you put into a GCM the slower it runs, so comparing hundreds of different cases can be impossible. Such is the dilemma of a climate scientist with access to a supercomputer running a GCM but a long queue of funded but finger-tapping climate scientists behind him or her.

Wonderland is a compromise model and is described in Wonderland Climate Model by Hansen et al (1997). This model includes some basic geography that is similar to the earth as we know it. It is used to provide insight into radiative forcing basics.

The authors explain:

A climate model provides a tool which allows us to think about, analyze, and experiment with a facsimile of the climate system in ways which we could not or would not want to experiment with the real world. As such, climate modeling is complementary to basic theory, laboratory experiments and global observations.

Each of these tools has severe limitations, but together, especially in iterative combinations they allow our understanding to advance. Climate models, even though very imperfect, are capable of containing much of the complexity of the real world and the fundamental principles from which that complexity arises.

Thus models can help structure the discussions and define needed observations, experiments and theoretical work. For this purpose it is desirable that the stable of modeling tools include global climate models which are fast enough to allow the user to play games, to make mistakes and rerun the experiments, to run experiments covering hundreds or thousands of simulated years, and to make the many model runs needed to explore results over the full range of key parameters. Thus there is great incentive for development of a highly efficient global climate model, i.e., a model which numerically solves the fundamental equations for atmospheric structure and motion.

Here is Wonderland, from a geographical point of view:

Figure 1 – from Hansen et al (1997)

Wonderland is then used in Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997). The authors say:

We examine the sensitivity of a climate model to a wide range of radiative forcings, including change of solar irradiance, atmospheric CO2, O3, CFCs, clouds, aerosols, surface albedo, and “ghost” forcing introduced at arbitrary heights, latitudes, longitudes, season, and times of day.

We show that, in general, the climate response, specifically the global mean temperature change, is sensitive to the altitude, latitude, and nature of the forcing; that is, the response to a given forcing can vary by 50% or more depending on the characteristics of the forcing other than its magnitude measured in watts per square meter.

In other words, radiative forcing has its limitations.

Definition of Radiative Forcing

The authors explain a few different approaches to the definition of radiative forcing. If we can understand the difference between these definitions we will have a much clearer view of atmospheric physics. From here, the quotes and figures will be from Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997) unless otherwise stated.

Readers who have seen the IPCC 2001 (TAR) definition of radiative forcing may understand the intent behind this 1997 paper. Up until that time different researchers used inconsistent definitions.

The authors say:

The simplest useful definition of radiative forcing is the instantaneous flux change at the tropopause. This is easy to compute because it does not require iterations. This forcing is called “mode A” by WMO [1992]. We refer to this forcing as the “instantaneous forcing”, Fi, using the nomenclature of Hansen et al [1993c]. In a less meaningful alternative, Fi is computed at the top of the atmosphere; we include calculations of this alternative for 2xCO2 and +2% S0 for the sake of comparison.

An improved measure of radiative forcing is obtained by allowing the stratospheric temperature to adjust to the presence of the perturber, to a radiative equilibrium profile, with the tropospheric temperature held fixed. This forcing is called “mode B” by WMO [1992]; we refer to it here as the “adjusted forcing”, Fa [Hansen et al 1993c].

The rationale for using the adjusted forcing is that the relaxation time of the stratosphere is only several months [Manabe & Strickler, 1964], compared to several decades for the troposphere [Hansen et al 1985], and thus the adjusted forcing should be a better measure of the expected climate response for forcings which are present at least several months..The adjusted forcing can be calculated at the top of the atmosphere because the net radiative flux is constant throughout the stratosphere in radiative equilibrium. The calculated Fa depends on where the tropopause level is specified. We specify this level as 100 mbar from the equator to 40° latitude, changing to 189 mbar there, and then increasing linearly to 300 mbar at the poles.

[Emphasis added].

This explanation might seem confusing or abstract so I will try and explain.

Let’s say we have a sudden increase in a particular GHG (see note 1). We can calculate the change in radiative transfer through the atmosphere with a given temperature profile and concentration profile of absorbers with little uncertainty. This means we can see immediately the reduction in outgoing longwave radiation (OLR). And the change in absorption of solar radiation.

Now the question becomes – what happens in the next 1 day, 1 month, 1 year, 10 years, 100 years?

Small changes in net radiation (solar absorbed – OLR) will have an equilibrium effect over many decades at the surface because of the thermal inertia of the oceans (the heat capacity is very high).
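As a rough indication of why the surface response plays out over years to decades rather than days (back-of-envelope numbers, not from Hansen et al): a single ocean mixed-layer slab already gives close to a decade, and exchange with the deeper ocean stretches the full approach to equilibrium to many decades. The mixed-layer depth and feedback parameter below are assumed values.

```python
# Single-slab estimate of the surface response time to a radiative imbalance.
rho, c_p = 1025.0, 3990.0          # seawater density (kg/m3) and specific heat (J/kg/K), approximate
depth = 100.0                       # assumed ocean mixed-layer depth, m
lam = 1.5                           # assumed net climate feedback parameter, W/m2/K

C = rho * c_p * depth               # heat capacity per unit area, ~4e8 J/m2/K
tau = C / lam                       # e-folding time of a slab ocean, seconds
print(f"e-folding time ~ {tau / (3600 * 24 * 365.25):.0f} years")  # roughly 8-9 years
# coupling to the deep ocean (not included in this slab) lengthens the full
# approach to equilibrium to many decades
```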

The issue that everyone found when they reviewed this problem was that the radiative forcing on day 1 was different from the radiative forcing on day 90.

Why?

Because the changes in net absorption above the tropopause (the place where convection stops – we will review that definition a little later) affect the temperature of the stratosphere very quickly. So the stratosphere quickly adjusts to the new world order, and of course this changes the radiative forcing. It’s like (in non-technical terms) the stratosphere responded very quickly and “bounced out” some of the radiative forcing in the first month or two.

So the stratosphere, with little heat capacity, quickly adapts to the radiative changes and moves back into radiative equilibrium. This changes the “radiative forcing”, so if we want to work out the changes over the next 10-100 years there is little point in considering the radiative forcing on day 1. If the quick responders sort themselves out in 60 days or so, we can wait for them to settle down and pick the radiative forcing number after 90-120 days.

This is the idea behind the definition.

Let’s look at this in pictures. In the graph below the top line is for doubling CO2 (the line below is for increasing solar by 2%), and the top left is the flux change through the atmosphere for instantaneous and for adjusted. The red line is the “adjusted” value:

Figure 2 – from Radiative Forcing and Climate Response, Hansen et al (1997)

This red line is the value of flux change after the stratosphere has adjusted to the radiative forcing. Why is the red line vertical?

The reason is simple.

The stratosphere is now in temperature equilibrium because energy in = energy out at all heights. With no convection in the stratosphere this is the same as radiation absorbed = radiation emitted at all heights. Therefore, the net flux change with height must be zero.

If we plotted separately the up and down flux we would find that they have a slope, but the slope of the up and down would be the same. Net absorption of radiation going up balances net emission of radiation going down – more on this in Visualizing Atmospheric Radiation – Part Eleven – Stratospheric Cooling.
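In symbols (notation chosen here, not the paper’s): radiative equilibrium means zero net radiative heating at every stratospheric level, so the net flux is independent of height there; since this holds both before and after adjustment, the change in net flux is also constant with height.

```latex
\frac{\mathrm{d}F_{\mathrm{net}}}{\mathrm{d}z} = 0
\quad\Rightarrow\quad
F_{\mathrm{net}}(z) = \mathrm{const}
\quad\Rightarrow\quad
\Delta F_{\mathrm{net}}(z) = \mathrm{const} \;\;\text{(above the tropopause)}
```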

Another important point, we can see in the top left graph that the instantaneous net flux at the tropopause (i.e., the net flux on day one) is different from the net flux at the tropopause after adjustment (i.e., after the stratosphere has come into radiative balance).

But once the stratosphere has come into balance we could use the TOA net flux, or the tropopause net flux – it would not matter because both are the same.

Result of Radiative Forcing

Now let’s look at 4 different ways to think about radiative forcing, using the temperature profile as our guide to what is happening:

Figure 3 – from Radiative Forcing and Climate Response, Hansen et al (1997)

On the left, case a, instantaneous forcing. This is the result of the change in net radiation absorbed vs height on day one. Temperature doesn’t change instantaneously so it’s nice and simple.

On the next graph, case b, adjusted forcing. This is the temperature change resulting from net radiation absorbed after the stratosphere has come into equilibrium with the new world order, but the troposphere is held fixed. So by definition the tropospheric temperature is identical in case b to case a.

On the next graph, case c, no feedback response of temperature. Now we allow the tropospheric temperature to change until such time as the net flux at the tropopause has gone back to zero. But during this adjustment we have held water vapor, clouds and the lapse rate in the troposphere at the same values as before the radiative forcing.

On the last graph, case d, all feedback response of temperature. Now we let the GCM take over and calculate how water vapor, clouds and the lapse rate respond. And as with case c, we wait until the temperature has increased sufficiently that net tropopause flux has gone back to zero.
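As a back-of-envelope illustration of case c (round numbers chosen here, not taken from the paper), the no-feedback temperature change is roughly the forcing divided by the Planck response at the earth’s effective emission temperature:

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W/m2/K4
T_e = 255.0            # assumed effective emission temperature of the earth, K
F = 4.0                # W/m2, roughly the adjusted forcing for doubled CO2

planck_response = 4 * sigma * T_e**3          # ~3.8 W/m2 per K of warming
dT_no_feedback = F / planck_response
print(f"no-feedback warming ~ {dT_no_feedback:.1f} K")   # ~1.1 K with these assumed numbers
```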

What Definition for the Tropopause and Why does it Matter?

We’ve seen that if we use the adjusted forcing, the radiative forcing is the same at TOA and at the tropopause. And the adjusted forcing is the IPCC 2001 definition. So why use the forcing at the tropopause? And why does the definition of the tropopause matter?

The first question is easy. We could use the forcing at TOA; it wouldn’t matter, so long as we have allowed the stratosphere to come into radiative equilibrium (which takes a few months). As far as I can tell – and this is just my opinion – it’s more about the history of how we arrived at this point. If you want to calculate the radiative forcing without waiting for stratospheric equilibrium then, on day one, the radiative forcing at the tropopause is usually pretty close to the value calculated after stratospheric equilibrium is reached.

So:

  1. Calculate the instantaneous forcing at the tropopause and get a value close to the authoritative “radiative forcing” – with the benefit of minimal calculation resources
  2. Calculate the adjusted forcing at the tropopause or TOA to get the authoritative “radiative forcing”

And lastly, why then does the definition of the tropopause matter?

The reason is simple, but not obvious. We are holding the tropospheric temperature constant, and letting the stratospheric temperature vary. The tropopause is the dividing line. So if we move the dividing line up or down we change the point where the temperatures adjust and so, of course, this affects the “adjusted forcing”. This is explained in some detail in Forster et al (1997) in section 4, p.556 (see reference below).

For reference, three definitions of the tropopause are found in Freckleton et al (1998) – a small sketch of the first criterion follows the list:

  • the level at which the lapse rate falls below 2K/km
  • the point at which the lapse rate changes sign, i.e., the temperature minimum
  • the top of convection
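Here is a minimal Python sketch of that lapse-rate criterion, using an idealised temperature profile chosen for illustration and ignoring the WMO requirement that the low lapse rate persist over a further 2 km:

```python
import numpy as np

z = np.arange(0.0, 25001.0, 500.0)               # height, m
T = np.where(z <= 12000.0,
             288.0 - 6.5e-3 * z,                  # idealised 6.5 K/km troposphere
             288.0 - 6.5e-3 * 12000.0)            # isothermal above 12 km
lapse = -np.diff(T) / np.diff(z) * 1000.0         # layer lapse rates, K/km
i = int(np.argmax(lapse < 2.0))                   # first layer where lapse rate < 2 K/km
print(f"diagnosed tropopause near {z[i] / 1000.0:.1f} km")   # ~12 km for this profile
```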

Conclusion

Understanding what radiative forcing means requires understanding a few basics.

The value of radiative forcing depends upon the somewhat arbitrary definition of the location of the tropopause. Some papers like Freckleton et al (1998) have dived into this subject, to show the dependence of the radiative forcing for doubling CO2 on this definition.

We haven’t covered it in this article, but the Hansen et al (1997) paper showed that radiative forcing is not a perfect guide to how climate responds (even in the idealized world of GCMs). That is, the same radiative forcing applied via different mechanisms can lead to different temperature responses.

Is it a useful parameter? Is the rate of inflation a useful parameter in economics? Usefulness is more a matter of opinion. What is more important at the start is to understand how the parameter is calculated and what it can tell us.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Wonderland Climate Model, Hansen, Ruedy, Lacis, Russell, Sato, Lerner, Rind & Stone, Journal of Geophysical Research, (1997) – paywall paper

Greenhouse gas radiative forcing: Effect of averaging and inhomogeneities in trace gas distribution, Freckleton et al, QJR Meteorological Society (1998) – paywall paper

On aspects of the concept of radiative forcing, Forster, Freckleton & Shine, Climate Dynamics (1997) – free paper

Notes

Note 1: The idea of an instantaneous increase in a GHG is a thought experiment to make it easier to understand the change in atmospheric radiation. If instead we consider the idea of a 1% change per year, then we have a more difficult problem. (Of course, GCMs can quite happily work with a real-world slow change in GHGs. And they can quite happily work with a sudden change).

The earth’s surface is not a blackbody. A blackbody has emissivity and absorptivity equal to 1.0, which means that it absorbs all incident radiation and emits according to the Planck law.

The oceans, covering over 70% of the earth’s surface, have an emissivity of about 0.96. Other areas have varying emissivity, going down to about 0.7 for deserts. (See note 1).

A lot of climate analyses assume the surface has an emissivity of 1.0.

Let’s try and quantify the effect of this assumption.

The most important point to understand is that if the emissivity of the surface, ε, is less than 1.0 it means that the surface also reflects some atmospheric radiation.

Let’s first do a simple calculation with nice round numbers.

Say the surface is at a temperature, Ts=289.8 K. And the atmosphere emits downward flux = 350 (W/m²).

  • If ε = 1.0 the surface emits 400. And it reflects 0. So a total upward radiation of 400.
  • If ε = 0.8 the surface emits 320. And it reflects 70 (350 x 0.2). So a total upward radiation of 390.

So even though we are comparing a case where the surface reduces its emission by 20%, the upward radiation from the surface is only reduced by 2.5%.
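The same arithmetic as a minimal Python sketch, using the numbers just above and the standard grey-body relation for upward surface flux, F_up = ε·σ·Ts⁴ + (1 − ε)·F_down:

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W/m2/K4
T_s = 289.8            # surface temperature, K  (sigma * T_s**4 is ~400 W/m2)
F_down = 350.0         # downward atmospheric flux, W/m2, as in the example above

for emissivity in (1.0, 0.8):
    emitted = emissivity * sigma * T_s**4
    reflected = (1.0 - emissivity) * F_down
    print(f"eps = {emissivity}: emitted = {emitted:.0f}, "
          f"reflected = {reflected:.0f}, total up = {emitted + reflected:.0f} W/m2")
# eps = 1.0: 400 + 0 = 400 W/m2;  eps = 0.8: 320 + 70 = 390 W/m2  (a 2.5% drop)
```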

Now the world of atmospheric radiation is very non-linear as we have seen in previous articles in this series. The atmosphere absorbs very strongly in some wavelength regions and is almost transparent in other regions. So I was intrigued to find out what the real change would be for different atmospheres as surface emissivity is changed.

To do this I used the Matlab model already created and explained – in brief in Part Two and with the code in Part Five – The Code (note 2). The change in surface emissivity is assumed to be wavelength independent (so if ε = 0.8, it is the case across all wavelengths).

I used some standard AFGL (air force geophysics lab) atmospheres. A description of some of them can be seen in Part Twelve – Heating Rates (note 3).

For the tropical atmosphere:

  • ε = 1.0, TOA OLR = 280.9   (top of atmosphere outgoing longwave radiation)
  • ε = 0.8, TOA OLR = 278.6
  • Difference = 0.8%

Here is the tropical atmosphere spectrum:

Figure 1 – Tropical atmosphere, TOA spectrum for ε = 1.0 vs ε = 0.8

We can see that the difference occurs in the 800-1200 cm⁻¹ region (8-12 μm), the so-called “atmospheric window” – see Kiehl & Trenberth and the Atmospheric Window. We will come back to the reasons why in a moment.

For reference, an expanded view of the area with the difference:

Figure 2 – Expanded view of the 800-1200 cm⁻¹ region (tropical atmosphere)

Now the mid-latitude summer atmosphere:

  • ε = 1.0, TOA OLR = 276.9
  • ε = 0.8, TOA OLR = 272.4
  • Difference = 1.6%

And the mid-latitude winter atmosphere:

  • ε = 1.0, TOA OLR = 227.9
  • ε = 0.8, TOA OLR = 217.4
  • Difference = 4.6%

Here is the spectrum:

Figure 3 – Mid-latitude winter atmosphere, TOA spectrum for ε = 1.0 vs ε = 0.8

We can see that the same region is responsible and the difference is much greater.

The sub-arctic summer:

  • ε = 1.0, TOA OLR = 259.8
  • ε = 0.8, TOA OLR = 252.7
  • Difference = 2.7%

The sub-arctic winter:

  • ε = 1.0, TOA OLR = 196.8
  • ε = 0.8, TOA OLR = 186.9
  • Difference = 5.0%

Figure 4 – Sub-arctic winter atmosphere, TOA spectrum for ε = 1.0 vs ε = 0.8

We can see that a change in surface emissivity in the tropics makes a negligible difference to OLR. The higher-latitude winters show a 5% change for the same surface emissivity change, and the higher-latitude summers around 2-3%.
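Those percentages follow directly from the OLR values listed above; a quick sketch reproducing them:

```python
cases = {                              # TOA OLR in W/m2 at eps = 1.0 and eps = 0.8 (values listed above)
    "tropical":            (280.9, 278.6),
    "mid-latitude summer": (276.9, 272.4),
    "mid-latitude winter": (227.9, 217.4),
    "sub-arctic summer":   (259.8, 252.7),
    "sub-arctic winter":   (196.8, 186.9),
}
for name, (olr_1, olr_08) in cases.items():
    print(f"{name:22s} {100.0 * (olr_1 - olr_08) / olr_1:.1f}%")
# prints 0.8, 1.6, 4.6, 2.7 and 5.0 - matching the differences quoted for each atmosphere
```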

The reasoning is simple.

For the tropics, the hot humid atmosphere radiates quite close to a blackbody, even in the “window region” due to the water vapor continuum. We can see this explained in detail in Part Ten – “Back Radiation”.

So any “missing” radiation from a non-blackbody surface is made up by reflection of atmospheric radiation (where the radiating atmosphere is almost at the same temperature as the surface).

When we move to higher latitudes the “window region” becomes more transparent, and so the “missing” radiation cannot be made up by reflection of atmospheric radiation in this wavelength region. This is because the atmosphere is not emitting in this “window” region.

And the effect is more pronounced in the winters in high latitudes because the atmosphere is colder and so there is even less water vapor.

Now let’s see what happens when we do a “radiative forcing” style calculation – we will compare TOA OLR at 360 ppm CO2 with TOA OLR at 720 ppm CO2, at two different emissivities for the tropical atmosphere. That is, we will calculate 4 cases:

  • 360 ppm at ε = 1.0
  • 720 ppm at ε = 1.0
  • 360 ppm at ε = 0.8
  • 720 ppm at ε = 0.8

And, at both ε = 1.0 and ε = 0.8, we take the difference between the OLR at 360 ppm and the OLR at 720 ppm, and plot both differenced results on the same graph:

Figure 5 – Tropical atmosphere, change in TOA OLR from 360 ppm to 720 ppm CO2, at ε = 1.0 and ε = 0.8

We see that both comparisons look almost identical – we can’t distinguish between them on this graph. So let’s subtract one from the other. That is, we plot (360 ppm − 720 ppm) @ ε = 1.0 minus (360 ppm − 720 ppm) @ ε = 0.8:

Figure 6 – Difference between the two curves in figure 5 (same units as figure 5)

So it’s clear that in this specific case of calculating the difference in CO2 from 360ppm to 720ppm it doesn’t matter whether we use surface emissivity = 1.0 or 0.8.

Conclusion

The earth’s surface is not a blackbody. No one in climate science thinks it is. But for a lot of basic calculations assuming it is a blackbody doesn’t have a big impact on the TOA radiation – for the reasons outlined above. And it has even less impact on the calculations of changes in CO2.

The tropics, from 30°S to 30°N, are about half the surface area of the earth. And with a typical tropical atmosphere, a drop in surface emissivity from 1.0 to 0.8 causes a TOA OLR change of less than 1%.

Of course, it could get more complicated than the calculations we have seen in this article. Over deserts in the tropics, where the surface emissivity actually gets below 0.8, water vapor is also low and therefore the resulting TOA flux change will be higher (as a result of using actual surface emissivity vs black body emissivity).

I haven’t delved into the minutiae of GCMs to find out what they assume about surface emissivity and, if they do use 1.0, what calculations have been done to quantify the impact.

The average surface emissivity of the earth is much higher than 0.8. I just picked that value as a reference.

The results shown in this article should help to clarify that the effect of surface emissivity less than 1.0 is not as large as might be expected.

Notes

Note 1: Emissivity and absorptivity are wavelength dependent phenomena. So these values are relevant for the terrestrial wavelengths of 4-50μm.

Note 2: There was a minor change to the published code to allow for atmospheric radiation being reflected by the non-black surface. This hasn’t been updated to the relevant article because it’s quite minor. Anyone interested in the details, just ask.

In this model, the top of atmosphere is at 10 hPa.

Some outstanding issues remain in my version of the model, such as whether the treatment of the diffusivity approximation is correct, and the fact that the Voigt profile (important in the mid-upper stratosphere) is still not used. These issues will have little or no effect on the question addressed in this article.

Note 3: For speed, I only considered water vapor and CO2 as “greenhouse” gases. No ozone was used. To check, I reran the tropical atmosphere with ozone at the values prescribed in that AFGL atmosphere. The difference between ε = 1.0 and ε = 0.8 was 0.7% – less than with no ozone (0.8%). This is because ozone reduces the transparency of the “atmospheric window” region.

In an earlier article on water vapor we saw that changing water vapor in the upper troposphere has a disproportionate effect on outgoing longwave radiation (OLR). Here is one example from Spencer & Braswell 1997:

From Spencer & Braswell (1997)

Figure 1

The upper troposphere is very dry, and so the mass of water vapor we need to change OLR by a given W/m² is small by comparison with the mass of water vapor we need to effect the same change in or near the boundary layer (i.e., near to the earth’s surface). See also Visualizing Atmospheric Radiation – Part Four – Water Vapor.

This means that when we are interested in climate feedback and how water vapor concentration changes with surface temperature changes, we are primarily interested in the changes in upper tropospheric water vapor (UTWV).

Upper Tropospheric Water Vapor

A major problem with analyzing UTWV is that most historic measurements are poor for this region. The upper troposphere is very cold and very dry – two issues that cause significant problems for radiosondes.

The atmospheric infrared sounder (AIRS) was launched in 2002 on the Aqua satellite and this instrument is able to measure temperature and water vapor with vertical resolution similar to that obtained from radiosondes. At the same time, because it is on a satellite we get the global coverage that is not available with radiosondes and the ability to measure the very cold, very dry upper tropospheric atmosphere.

Gettelman & Fu (2008) focused on the tropics and analysed the relationship (covariance) between surface temperature and UTWV from AIRS over 2002-2007, and then compared this with the results of the CAM climate model using prescribed (actual) surface temperature from 2001-2004 (note 1):

This study will build upon previous estimates of the water vapor feedback, by focusing on the observed response of upper-tropospheric temperature and humidity (specific and relative humidity) to changes in surface temperatures, particularly ocean temperatures. Similar efforts have been performed before (see below), but this study will use new high vertical resolution satellite measurements and compare them to an atmospheric general circulation model (GCM) at similar resolution.

The water vapor feedback arises largely from the tropics where there is a nearly moist adiabatic profile. If the profile stays moist adiabatic in response to surface temperature changes, and if the relative humidity (RH) is unchanged because of the supply of moisture from the oceans and deep convection to the upper troposphere, then the upper-tropospheric specific humidity will increase.

[Emphasis added]

They describe the objective:

The goal of this work is a better understanding of specific feedback processes using better statistics and vertical resolution than has been possible before. We will compare satellite data over a short (4.5 yr) time record to a climate model at similar space and time resolution and examine the robustness of results with several model simulations. The hypothesis we seek to test is whether water vapor in the model responds to changes in surface temperatures in a manner similar to the observations. This can be viewed as a necessary but not sufficient condition for the model to reproduce the upper-tropospheric water vapor feedback caused by external forcings such as anthropogenic greenhouse gas emissions.

[Emphasis added].

The results are for relative humidity (RH) on the left and specific humidity on the right:

From Gettelman & Fu (2008)

Figure 2

The graphs show that the change in 250 mbar RH with temperature is statistically indistinguishable from zero. For those not familiar with the basics, if RH stays constant with rising temperature then “specific humidity” – the mixing ratio of water vapor in the atmosphere – must be increasing. And we see this in the right-hand graph.

Figure 1a has considerable scatter, but in general, there is little significant change of 250-hPa relative humidity anomalies with anomalies in the previous month’s surface temperature. The slope is not significantly different than zero in either AIRS observations (1.9 ± 1.9% RH/°C) or CAM (1.4 ± 2.8% RH/°C).

The situation for specific humidity in Fig. 1b indicates less scatter, and is a more fundamental measurement from AIRS (which retrieves specific humidity and temperature separately). In Fig. 1b, it is clear that 250-hPa specific humidity increases with increasing averaged surface temperature in both AIRS observations and CAM simulations. At 250 hPa this slope is 20 ± 8 ppmv/°C for AIRS and 26 ± 11 ppmv/°C for CAM. This is nearly 20% of background specific humidity per degree Celsius at 250 hPa.

The observations and simulations indicate that specific humidity increases with surface temperatures (Fig. 1b). The increase is nearly identical to that required to maintain constant relative humidity (the sloping dashed line in Fig. 1b) for changes in upper-tropospheric temperature. There is some uncertainty in this constant RH line, since it depends on calculations of saturation vapor mixing ratio that are nonlinear, and the temperature used is a layer (200–250 hPa) average.
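To see why roughly constant relative humidity implies a large fractional increase in specific humidity, here is a minimal MATLAB sketch using the Magnus approximation for saturation vapor pressure. The temperature, pressure and RH values are illustrative, and the formula is for liquid water even though 250 hPa is well below freezing, so treat the numbers as indicative only.

% Why roughly constant relative humidity implies rising specific humidity, using
% the Magnus approximation for saturation vapor pressure. The formula is for
% liquid water and the values are illustrative, so treat the numbers as indicative.
T   = [-43 -42];                           % 250 hPa temperature, deg C (illustrative)
p   = 250;                                 % pressure, hPa
es  = 6.112*exp(17.62*T./(243.12 + T));    % saturation vapor pressure, hPa (Magnus)
RH  = 0.30;                                % assumed unchanged relative humidity
vmr = 1e6 * RH * es./(p - es);             % water vapor volume mixing ratio, ppmv
fprintf('ppmv at %d C: %.0f, at %d C: %.0f (~%.0f%% per K of local warming)\n', ...
        T(1), vmr(1), T(2), vmr(2), 100*(vmr(2)/vmr(1) - 1));
% Clausius-Clapeyron gives roughly 10-15% per K at these temperatures; the ~20%
% per degC in the paper is per degree of surface warming, with the upper
% troposphere warming by more than the surface.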

The graphs below show the change in each variable with surface temperature, as a function of pressure (height). The black line is the measurement (AIRS).

So the right-hand graph shows that, from 4.5 years of AIRS data, specific humidity increases with surface temperature in the upper troposphere:

From Gettelman & Fu (2008)

Figure 3 – Click to Enlarge

There are a number of model runs using CAM with different constraints. This is a common theme in climate science – researchers attempting to find out which parts of the physics (at least as far as the climate model can reproduce them) contribute most or least to a given effect. The paper is not behind a paywall, so readers are encouraged to review the whole paper.

Conclusion

The question of how water vapor responds to increasing surface temperature is a critical one in climate research. The fundamentals are discussed in earlier articles, especially Clouds and Water Vapor – Part Two – and much better explained in the freely available paper Water Vapor Feedback and Global Warming, Held and Soden (2000).

One of the key points is that the response of water vapor in the planetary boundary layer (the bottom layer of the atmosphere) is a lot easier to understand than the response in the “free troposphere”. But how water vapor changes in the free troposphere is the important question. And the water vapor concentration in the free troposphere is dependent on the global circulation, making it dependent on the massive complexity of atmospheric dynamics.

Gettelman and Fu attempt to answer this question for the first half-decade of quality satellite observations, and they find a result similar to that produced by GCMs.

Many people outside of climate science believe that GCMs have “positive feedback” or “constant relative humidity” programmed in. Delving into a climate model is a technical task, but the details are freely available – e.g., Description of the NCAR Community Atmosphere Model (CAM 3.0), W.D. Collins (2004). It’s clear to me that relative humidity is not prescribed in climate models – both from the equations used and from the results that are produced in many papers. And people like the great Isaac Held, a veteran of climate modeling and atmospheric dynamics, also state the same. So, readers who believe otherwise – come forward with evidence.

Still, that’s a different story from acknowledging that climate models attempt to calculate humidity from some kind of physics, while believing that they get it wrong. That is of course very possible.

At least from this paper we can see that over this short time period – one not subject to strong ENSO fluctuations or significant climate change – the satellite data show upper tropospheric humidity increasing with surface temperature. And the CAM model produces similar results.

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

References

Observed and Simulated Upper-Tropospheric Water Vapor Feedback, Gettelman & Fu, Journal of Climate (2008) – free paper

How Dry is the Tropical Free Troposphere? Implications for Global Warming Theory, Spencer & Braswell, Bulletin of the American Meteorological Society (1997) – free paper

Notes

Note 1 – The authors note: “..Model SSTs may be slightly different from the data, but represent a partially overlapping period..”

I asked Andrew Gettelman why the model was run for a different time period than the observations and he said that the data (in the form needed for running CAM) was not available at that time.

Measurements of outgoing longwave radiation (OLR) are essential for understanding many aspects of climate. Many people are confused about the factors that affect OLR. And its rich variability is often not appreciated.

There have been a number of satellite projects since the late 1970s, with the highlight (prior to 2001) being the five-year period of ERBE.

AIRS & CERES were launched on the NASA AQUA satellite in May 2002. These provide much better quality data, with much better accuracy and resolution.

CERES measures in three channels:

  • Solar Reflected Radiation (Shortwave): 0.3 – 5.0 μm
  • Window: 8 – 12 μm
  • Total: 0.3 to > 100 μm

AIRS is an infrared spectrometer/radiometer that covers the 3.7–15.4 μm spectral range with 2378 spectral channels. It runs alongside two microwave instruments (better viewing through clouds): AMSU is a 15-channel microwave radiometer operating between 23 and 89 GHz; HSB is a four-channel microwave radiometer that makes measurements between 150 and 190 GHz.

From Aumann et al (2003):

The simultaneous use of the data from the three instruments provides both new and improved measurements of cloud properties, atmospheric temperature and humidity, and land and ocean skin temperatures, with the accuracy, resolution, and coverage required by numerical weather prediction and climate models.

Among the important datasets that AIRS will contribute to climate studies are the following:

  • atmospheric temperature profiles;
  • sea-surface temperature;
  • land-surface temperature and emissivity;
  • relative humidity profiles and total precipitable water vapor;
  • fractional cloud cover;
  • cloud spectral IR emissivity;
  • cloud-top pressure and temperature;
  • total ozone burden of the atmosphere;
  • column abundances of minor atmospheric gases such as CO, CH4, CO2, and N2O;
  • outgoing longwave radiation and longwave cloud radiative forcing;
  • precipitation rate

More about AIRS = Atmospheric Infrared Sounder, at Wikipedia, plus the AIRS website.

More about CERES = Clouds and the Earth’s Radiant Energy System, at Wikipedia, plus the CERES website – where you can select and view or download your own data.

How do CERES & AIRS compare?

CERES and AIRS have different jobs. CERES directly measures OLR. AIRS measures many spectral channels that don’t cover the complete range needed to simply “add up” OLR. Instead, OLR can be calculated from AIRS data by deriving surface temperature, water vapor concentration vs height, CO2 concentration, etc., and using a radiative transfer algorithm to determine OLR.

Here is a comparison of the two measurement systems from Susskind et al (2012) over almost a decade:

Susskind-CERES-vs-AIRS-2012

From Susskind et al (2012)

Figure 1

One thing to observe is that there is a bias between the two datasets. But because we have two high-accuracy measurement systems on the same satellite, we have a reasonable opportunity to identify the source of the bias (total OLR as shown in the graph is made up of many components). If we only had one satellite, and then a new satellite took over with a small time overlap, any biases would be much more difficult to identify. Of course, that doesn’t stop many people from trying, but success would be much harder to judge.

In this paper, as we might expect, the error sources between the two datasets get considerable discussion. One important point is that version 6 AIRS data (prototyped at the time the paper was written) is much closer to CERES. The second point, probably more interesting, is that once we look at anomaly data the results are very close. We’ll see a number of comparisons as we review what the paper shows.

The authors comment:

Behavior of OLR over this short time period should not be taken in any way as being indicative of what long-term trends might be. The ability to begin to draw potential conclusions as to whether there are long-term drifts with regard to the Earth’s OLR, beyond the effects of normal interannual variability, would require consistent calibrated global observations for a time period of at least 20 years, if not longer. Nevertheless, a very close agreement of the 8-year, 10-month OLR anomaly time series derived using two different instruments in two very different manners is an encouraging result.

It demonstrates that one can have confidence in the 1° x 1° OLR anomaly time series as observed by each instrument over the same time period. The second objective of the paper is to explain why recent values of global mean, and especially tropical mean, OLR have been strongly correlated with El Niño/La Niña variability and why both have decreased over the time period under study.

Why Has OLR Varied?

The authors define the average rate of change (ARC) of an anomaly time series as “the slope of the linear least squares fit of the anomaly time series”.
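As a sketch of that definition, here is how an ARC could be computed in MATLAB from a monthly time series – remove a monthly climatology, then fit a straight line. The series below is synthetic, purely to show the mechanics; it is not the Susskind et al data.

% Minimal sketch of the "average rate of change" (ARC) of an anomaly time series:
% remove a monthly climatology, then take the slope of a linear least-squares fit.
% The series below is synthetic, purely to show the mechanics.
t    = (0:119)/12;                                        % 10 years of monthly steps, in years
olr  = 240 + 2*sin(2*pi*t) - 0.04*t + 0.3*randn(size(t)); % synthetic OLR, W/m^2
clim = zeros(1,12);
for m = 1:12
    clim(m) = mean(olr(m:12:end));                        % monthly climatology
end
anom = olr - clim(mod(0:119,12) + 1);                     % anomaly time series
P    = polyfit(t, anom, 1);                               % linear least-squares fit
fprintf('ARC = %.2f W/m^2 per year (%.1f per decade)\n', P(1), 10*P(1));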

Susskind-2012-table-1

We can see excellent correlation between the two datasets and we can see that OLR has, on average, decreased over this time period.

Below is a comparison with the El Nino index.

We define the term El Niño Index as the difference of the NOAA monthly mean oceanic Sea Surface Temperature (SST), averaged over the NOAA Niño-4 spatial area 5°N to 5°S latitude and 150°W westward to 160°E longitude, from an 8-year NOAA Niño-4 SST monthly mean climatology which we generated based on use of the same 8 years that we used in the generation of the OLR climatologies.

From Susskind et al (2012)

Figure 2

It gets interesting when we look at the geographical distribution of the OLR changes over this time period:

From Susskind et al (2012)

Figure 3 – Click to Enlarge

We see that the tropics have the larger changes (also seen clearly in figure 2), but that some regions of the tropics have strong positive values and other regions strong negative values. The grey square centered on 180° longitude is the Nino-4 region. Values as large as +4 W/m²/decade are found in this region. And values as large as -3 W/m²/decade are found over Indonesia (the WPMC region).

Let’s look at the time series to see how these changes in OLR took place:

Susskind-Time-Series-2012

Figure 4 – Click to Enlarge

The main parameters which affect changes in OLR from month to month and year to year are (a) surface temperature, (b) humidity and (c) clouds. As temperature increases, OLR increases. As humidity and clouds increase, OLR decreases.

Here are the changes in surface temperature, specific humidity at 500mbar and cloud fraction:

From Susskind et al (2012)

Figure 5 – Click to Enlarge

So, focusing again on the Nino-4 region, we might expect to find that OLR has decreased because of the surface temperature decrease (lower emission of surface radiation) – or we might expect to find that the OLR has increased because the specific humidity and cloud fraction have decreased (thus allowing more surface and lower atmosphere radiation to make it through to TOA). These are mechanisms pulling in opposite directions.

In fact we see that the reduced specific humidity and cloud fraction have outweighed the effect of the surface temperature decrease. So the physics should be clear (still considering the Nino-4 region) – if surface temperature has decreased and OLR has increased then the explanation is the reduction in “greenhouse” gases (in this case water vapor) and clouds, which contain water.

Correlations

We can see similar relationships through correlations.

The term ENC in the graphs stands for El Nino Correlation. This is essentially the correlation of the time-series data with time-series temperature change in the Nino-4 region (more specifically the Nino-4 temperature less the global temperature).

As the Nino-4 temperature declined over the period in question, a positive correlation means the value declined, while a negative correlation means the value increased.
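Here is a minimal sketch of how such a correlation could be computed. All the time series are synthetic placeholders; the index is formed as described above (Nino-4 anomaly minus the global-mean anomaly).

% Sketch of the "El Nino Correlation" (ENC): correlate a local anomaly time series
% with the Nino-4 temperature anomaly (Nino-4 minus the global mean). All series
% here are synthetic placeholders, only to show the calculation.
n       = 106;                            % months in the record (roughly 8 yr 10 months)
nino4   = cumsum(0.2*randn(1,n));         % synthetic Nino-4 SST anomaly, K
glob    = 0.1*randn(1,n);                 % synthetic global-mean anomaly, K
index   = nino4 - glob;                   % Nino-4 relative to the global mean
localTs = -0.8*index + 0.3*randn(1,n);    % synthetic local anomaly, anti-correlated by design
R       = corrcoef(index, localTs);
fprintf('ENC = %.2f\n', R(1,2));          % negative: the local value rises as Nino-4 falls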

The first graph below is the geographical distribution of the rate of change of surface temperature. Of course we see that the Nino-4 region has been declining in temperature (as already seen in figure 2). The second graph shows this as well, but also indicates that the regions west and east of the Nino-4 region have a stronger (negative) correlation than other areas of larger temperature change (like the Arctic region).

The third graph shows that 500 mb humidity has been decreasing in the Nino-4 region, and increasing to the west and east of this region. Likewise for the cloud fraction. And all of these are strongly correlated to the Nino-4 time-series temperature:

From Susskind et al (2012)

Figure 6 – Click to expand

For OLR correlations with Nino-4 temperature we find a strong negative correlation, meaning the OLR has increased in the Nino-4 region. And the opposite – a strong positive correlation – in the highlighted regions to east and west of Nino-4:

From Susskind et al (2012)

Figure 7 – Click to expand

Note the two highlighted regions:

  • to the west: WPMC, the Warm Pool Maritime Continent;
  • and to the east: EEPA, the Equatorial Eastern Pacific and Atlantic region

We can see the correlations between the global & tropical OLR and the OLR changes in these regions:

Susskind-2012-table-2

Figure 8 – Click to expand

Together, the WPMC and EEPA regions explain the reduction in OLR over the 10 years. Without these two regions the change is indistinguishable from zero.

Conclusion

This article is interesting for a number of reasons.

It shows the amazing variability of climate – we can see adjacent regions in the tropics with completely opposite changes over 10 years.

It shows that CERES gets almost identical anomaly results (changes in OLR) to AIRS. CERES directly measures OLR, while AIRS retrieves surface temperature, humidity profiles, cloud fractions and “greenhouse” gas concentrations and uses these to calculate OLR.

AIRS results demonstrate how surface temperature, humidity and cloud fraction affect OLR.

OLR has – over the globe – decreased over 10 years. This is a result of the El Niño phase – at the start of the measurement period we were coming out of a large El Niño event, and at the end of the measurement period we were in a La Niña event.

The reduction in OLR is explained by the change in the two regions identified, which are themselves strongly correlated to the Nino-4 region.

References

Interannual variability of outgoing longwave radiation as observed by AIRS and CERES, Susskind et al, Journal of Geophysical Research (2012) – paywall paper

AIRS/AMSU/HSB on the Aqua Mission: Design, Science Objectives, Data Products, and Processing Systems, Aumann et al, IEEE Transactions on Geoscience and Remote Sensing (2003) – free paper

Some values in atmospheric physics that are little more than curiosities take on a new life in the blogosphere. One of them is the value in Kiehl & Trenberth 1997 for the “atmospheric window” flux:

From Kiehl & Trenberth (1997)

Figure 1

Here is the update in 2009 by Trenberth, Fasullo & Kiehl:

From Trenberth, Fasullo & Kiehl (2009)

Figure 2

The “atmospheric window” value is probably the value in KT97 that has received the least attention, both in the paper itself and from climate science. That’s because it isn’t actually used in any calculations of note.

What is the Atmospheric Window?

The “atmospheric window” itself is a term in common use in climate science. The atmosphere is quite opaque to longwave radiation (terrestrial radiation) but the region from 8-12 μm has relatively few absorption lines by “greenhouse” gases. This means that much of the surface radiation emitted in this wavelength region makes it to the top of atmosphere (TOA).

The story is a little more complex for two reasons:

  1. The 8-12μm region has significant absorption by water vapor due to the water vapor continuum. See Visualizing Atmospheric Radiation – Part Ten – “Back Radiation” for more on both the window and the continuum
  2. Outside of the 8-12 μm region the atmosphere has some transparency at particular wavelengths

The term in KT97 was not clearly defined, but what we are really interested in is what value of surface emitted radiation is transmitted through to TOA – from any wavelength, regardless of whether it happens to be in the 8-12 μm region.
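For a rough sense of scale, we can integrate the Planck function over 8-12 μm for a black surface at 288 K (the usual global-mean round number, not a value from the paper) and compare it with the total emission. This is just a back-of-envelope MATLAB sketch.

% Rough scale of the surface emission in the 8-12 um window: integrate Planck's
% law for a black surface at 288 K (the usual global-mean round number).
h = 6.626e-34; c = 2.998e8; kB = 1.381e-23; sigma = 5.67e-8;
T     = 288;                                                % K
lam   = linspace(8e-6, 12e-6, 2000);                        % wavelength grid, m
B     = (2*h*c^2) ./ (lam.^5 .* (exp(h*c./(lam*kB*T)) - 1));% spectral radiance, W/m^2/sr/m
Fband = pi*trapz(lam, B);                                   % hemispheric flux in the band, W/m^2
fprintf('8-12 um emission: %.0f W/m^2 of %.0f W/m^2 total (about %.0f%%)\n', ...
        Fband, sigma*T^4, 100*Fband/(sigma*T^4));
% Around a quarter of the surface emission falls in this window region, which is
% why its transparency (or otherwise) matters so much for OLR.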

Calculating the Value

One blog that I visited recently had many commenters whose expectation was that upward emitted radiation by the surface would be exactly equal to the downward emitted radiation by the atmosphere + the “atmospheric window” value.

To illustrate this expectation let’s use the values from figure 2 (the 2009 paper) – note that all of these figures are globally annually averaged:

  • Upward radiation from the surface = 396 W/m²
  • Downward radiation from the atmosphere (DLR or “back radiation”) = 333 W/m²
  • These commenters appear to think the atmospheric window value is probably really 63 W/m² – and thus the surface and lower atmosphere are in a “radiative balance”

This can’t be the case for fairly elementary reasons – but let’s look at that later.

In Visualizing Atmospheric Radiation – Part Two I describe the basics of a MATLAB line by line calculation of radiative transfer in the atmosphere. And Part Five – The Code gives the specifics, including the code.

Running v0.10.4 I used some “standard atmospheres” (examples in Part Twelve – Heating Rates) and calculated the flux from the surface to TOA:

  • Tropical – 28 W/m² (52 W/m²)
  • Midlatitude summer – 40 W/m² (58 W/m²)
  • Midlatitude winter – 59 W/m² (62 W/m²)
  • Subarctic summer – 50 W/m² (61 W/m²)
  • Subarctic winter – 55 W/m² (56 W/m²)
  • US Standard 1976 – 65 W/m² (72 W/m²)

These are all clear sky values, and the values in brackets are the values calculated without the continuum absorption to show its effect. Clear skies are, globally annually averaged, about 38% of the sky.

These values are quite a bit lower than the values found in the new paper we discuss in this article, and at this stage I’m not sure why.

This paper is: Outgoing Longwave Radiation due to Directly Transmitted Surface Emission, Costa & Shine (2012):

This short article is intended to be a pedagogical discussion of one component of the KT97 figure [which was not updated in Trenberth et al. (2009)], which is the amount of longwave radiation labeled ‘‘atmospheric window.’’ KT97 estimate this component to be 40 W/m² compared to the total outgoing longwave radiation (OLR) of 235 W/m²; however, KT97 make clear that their estimate is ‘‘somewhat ad hoc’’ rather than the product of detailed calculations. The estimate was based on their calculation of the clear-sky OLR in the 8–12 μm wavelength region of 99 W/m² and an assumption that no such radiation can directly exit the atmosphere from the surface when clouds are present. Taking the observed global-mean cloudiness to be 62%, their value of 40 W/m² follows from rounding 99 x (1 – 0.62).

They comment:

Presumably the reason why KT97, and others, have not explicitly calculated this term is that the methods of vertical integration of the radiative transfer equation in most radiation codes compute the net effect of surface emission and absorption and emission by the atmosphere, rather than each component separately. In the calculations presented here we explicitly calculate the upward irradiance at the top of the atmosphere due to surface emission: we will call this the surface transmitted irradiance (STI).

In other words, the value in the KT97 paper is not needed for any radiative transfer calculations, but let’s try and work out a more accurate value anyway.
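For reference, the KT97 arithmetic quoted above is easy to reproduce; the sketch below also notes how Costa & Shine’s numbers change both ingredients.

% KT97's "somewhat ad hoc" window number, reconstructed from the quote above.
STI_clr   = 99;                          % W/m^2, KT97 clear-sky 8-12 um OLR estimate
cloudfrac = 0.62;                        % observed global-mean cloudiness used by KT97
window    = STI_clr*(1 - cloudfrac);     % = 37.6 W/m^2, rounded to 40 in their figure
fprintf('KT97-style window estimate: %.1f W/m^2\n', window);
% Costa & Shine revise the clear-sky number down to ~66 W/m^2 (the continuum
% matters a lot) and do a detailed cloud calculation, ending up with ~22 W/m^2.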

First, how the clear sky values vary with latitude:

Costa-Shine-fig3-2012

Figure 3 – Clear sky values

Note that the dashed line is “imaginary physics”. The water vapor continuum exists, but it is very interesting to see what effect it contributes – and this is seen by calculating STI as if the continuum didn’t exist.

We see that in the tropics STI is very low. This is because the effect of the continuum is dependent on the square of the water vapor concentration, which itself is strongly dependent on the temperature of the atmosphere.

The continuum absorption is so strong in the tropics that STIclr in polar regions (which is only modestly influenced by the continuum) is typically 40% higher than the tropical values. Figure 3 shows the zonal and annual mean of the STIclr to emphasize the role of the continuum. The STIclr neglecting the continuum (dash-dotted line) is generally more than 80 W/m² at all latitudes, with maxima in the northern subtropics (mostly associated with the Sahara desert), but with little latitudinal gradient throughout the tropics and subtropics; the tropical values are reduced by more than 50% when the continuum is included (dashed lines). The effect of the continuum clearly diminishes outside of the tropics and is responsible for only around a 10% reduction in STIclr at high latitudes.

Interestingly, these more detailed calculations yield global-mean values of STIclr of 66 and 100 W/m², with and without the continuum, very close to the values (65 and 99 W/m²) computed using the single global-mean profile, in spite of the potential nonlinearities due to the vapor pressure–squared dependence of the self-continuum.

For people unfamiliar with the issue of non-linearity – if we take an “average climate” and do some calculations on it, the result will usually be different from taking lots of location data, doing the calculations on each, and averaging the results of the calculations. Climate is non-linear. However, in this case, the calculated value of STIclr on an “average climate” does turn out to be similar to the average of STIclr when calculated from climate values in each part of the globe.
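A minimal sketch of that nonlinearity point, using the squared dependence of the self-continuum on water vapor pressure as the example (the two vapor pressures are arbitrary illustrations):

% The nonlinearity point, using the squared dependence of the self-continuum on
% water vapor pressure as the example. The two vapor pressures are arbitrary.
e_vap  = [30 10];                 % water vapor partial pressure in two regions, hPa
f_mean = mean(e_vap.^2);          % average of the per-region calculation = 500
mean_f = mean(e_vap)^2;           % calculation done on the "average climate" = 400
fprintf('mean of e^2 = %.0f, (mean e)^2 = %.0f\n', f_mean, mean_f);
% Doing the calculation on an average profile can differ noticeably from averaging
% the per-region results - though for STIclr the two happened to come out close.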

We can appreciate a little more about the impact of the continuum on this atmospheric window if we look at the details of the calculation vs wavelength:

From Costa & Shine (2012)

Figure 4 – Highlighted orange text added

Here is the regional breakdown:

From Costa & Shine (2012)

Figure 5 – Clear and All-sky values – Orange highlighted text added

Note that, conventionally in climate science, “clear sky” results are for the cloud-free part of the sky (i.e., a subset), whereas “cloudy sky” results – as used in this paper – include both clear and cloudy skies (i.e., all values).

The authors comment:

When including clouds, the STI is reduced further (Fig. 2c) because clouds absorb strongly throughout the infrared window. In regions of high cloud amount, such as the midlatitude storm tracks, the STI is reduced from a clear-sky value of 70 W/m² to less than 10 W/m². As expected, values are less affected in desert regions. The subtropics are now the main source of the global mean STI. The effect of clouds is to reduce the STI from its clear-sky value of 66 W/m² by two-thirds to a value of about 22 W/m²

Method

They state:

Clear-sky STI (STIclr) is calculated by using the line by line model Reference Forward Model (RFM) version 4.22 (Dudhia 1997) in the wavenumber domain 10–3000 cm-1 (wavelengths 3.33–1000 μm) at a spectral resolution of 0.005 cm-1. The version of RFM used here incorporates the Clough–Kneizys–Davies (CKD) water vapor continuum model (version 2.4); although this has been superseded by the MT-CKD model, the changes in the midinfrared window (see, e.g., Firsov and Chesnokova 2010) are rather small and unlikely to change our estimate by more than 1 W/m²..

..Irradiances are calculated at a spatial resolution of 10° latitude and longitude using a climatology of annual mean profiles of pressure, water vapor, temperature, and cloudiness described in Christidis et al. (1997). Although slightly dated, the global-mean column water amount is within about 1% of more recent climatologies.

Carbon dioxide, methane, and nitrous oxide are assumed to be well mixed with mixing ratios of 365, 1.72, and 0.312 ppmv, respectively. Other greenhouse gases are not considered since their radiative forcing is less than 0.4 W/m² (e.g., Solomon et al. 2007; Schmidt et al. 2010); we have performed an approximate estimate of the effect of 1 ppbv of chlorofluorocarbon 12 (CFC12) (to approximate the sum of all halocarbons in the atmosphere) on the STIclr and the effect is less than 1%.

Likewise, aerosols are not considered. It is the larger mineral dust particles that are more likely to have an impact in this spectral region; estimates of the impact of aerosol on the OLR are typically around 0.5 W/m² (e.g., Schmidt et al. 2010). The impact on the STI will depend on, for example, the height of aerosol layers and the aerosol radiative properties and is likely a larger effect than the CFCs if they are mostly at lower altitudes; this is discussed further in section 4. The surface is assumed to have an emittance of unity.

And later in assumptions:

Our assumption that the surface emits as a blackbody could also be examined, using emerging datasets on the spectral variation of the surface emittance (which can deviate significantly from unity and be as low as 0.75 in the 1000–1200 cm-1 spectral region, in desert regions; e.g., Zhou et al. 2011; Vogel et al. 2011). Some decision would need to be made, then, as to whether or not infrared radiation reflected by surfaces with emittances less than unity should be included in the STI term as this reflection partially compensates for the reduced emission. Although locally important, the effect of nonunity emittances on the global-mean STI is likely to be only a few percent.

The point here is that, for places with emissivity less than 1.0, should we calculate the value of flux reaching TOA without absorption from both surface emission AND surface reflection, or just from surface emission? If we include the reflected atmospheric radiation then the result is not so different. This is something I might try to demonstrate in the Visualizing Atmospheric Radiation series.

As is standard in radiative transfer calculations, spherical geometry is taken into consideration via the diffusivity approximation, as outlined in this comment.

Why The Atmosphere and The Surface cannot be Exchanging Equal Amounts of Radiation

This is quite easy to understand. I’ll invent some numbers which are nice round numbers to make it easier.

Let’s say the surface radiates 400 and has an emissivity of 1.0 (implying Ts=289.8 K). The atmosphere has an overall transmissivity of 0.1 (10%). That means 360 is absorbed by the atmosphere and 40 is transmitted to TOA unimpeded. For the radiative balance required/desired by the earlier mentioned commenters the atmosphere must be emitting 360.

Thus, under these fictional conditions, the surface is absorbing 360 from the atmosphere. The atmosphere is absorbing 360 from the surface. Some bloggers are happy.

Now, how does the atmosphere, with a transmissivity of 10%, emit 360? We need to know the atmosphere’s emissivity. For an atmosphere – a gas – radiation must be transmitted, absorbed or reflected. Longwave radiation experiences almost no reflection from the atmosphere. So we end up with a nice simple formula:

Transmissivity, t = 100% – absorptivity

 Absorptivity, a = 90%.

What is emissivity? It turns out, explained in Planck, Stefan-Boltzmann, Kirchhoff and LTE, that emissivity = absorptivity (for the same wavelength).

Therefore, emissivity of the atmosphere, e = 90%.

So what temperature of the atmosphere, Ta, at an emissivity of 90%, will radiate 360? The answer is simple (from the Stefan-Boltzmann equation, E = εσTa⁴, where σ = 5.67×10⁻⁸ W/m²K⁴):

Ta = 289.8 K

So, if the atmosphere is exactly the same temperature as the surface then they will exchange equal amounts of radiation. And if not, they won’t. Now the atmosphere is not at one temperature so it makes it a bit harder to work out what the right temperature is. And the full calculation comes from the radiative transfer equations, but the same conclusion is reached with lots of maths – unless the atmosphere is at the same temperature as the surface then they will not exchange equal amounts of radiation.
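Here is the same round-number example worked through in a few lines of MATLAB, so the arithmetic can be checked:

% The round-number example above, worked through with the Stefan-Boltzmann law.
sigma = 5.67e-8;                 % W/m^2K^4
Fs    = 400;                     % upward surface flux, W/m^2 (emissivity 1.0)
Ts    = (Fs/sigma)^0.25;         % = 289.8 K
t     = 0.1;                     % overall atmospheric transmissivity (the invented value)
a     = 1 - t;                   % absorptivity = 0.9 (no reflection of longwave)
e     = a;                       % emissivity = absorptivity (Kirchhoff)
Fatm  = 360;                     % downward emission needed for the supposed "balance"
Ta    = (Fatm/(e*sigma))^0.25;   % temperature required to emit 360 W/m^2
fprintf('Ts = %.1f K, Ta = %.1f K\n', Ts, Ta);
% Equal exchange would require the atmosphere to be at the surface temperature,
% which it is not - so the surface and atmosphere do not exchange equal amounts.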

Conclusion

The authors say:

This study presents what we believe to be the most detailed estimate of the surface contribution to the clear and cloudy-sky OLR. This contribution is called the surface transmitted irradiance (STI). The global- and annual-mean STI is found to be 22 W/m². The purpose of producing the value is mostly pedagogical and is stimulated by the value of 40 W/m² shown on the often-used summary figures produced by KT97 and Trenberth et al. (2009).

As a result of this changed value, the standard energy balance diagram shown in KT97 and TFK09 of course needs some adjustments.

References

Earth’s Annual Global Mean Energy Budget, Kiehl & Trenberth, Bulletin of the American Meteorological Society (1997) – free paper

Earth’s Global Energy Budget, Trenberth, Fasullo & Kiehl, Bulletin of the American Meteorological Society (2009) – free paper

Outgoing Longwave Radiation due to Directly Transmitted Surface Emission, Costa & Shine, Journal of the Atmospheric Sciences (2012) – paywall paper

In Part Two we covered quite a bit of ground. At the end we looked at the first calculation of heating rates. The values calculated were a little different in magnitude from results in a textbook, but the model was still in a rudimentary phase.

After numerous improvements – outlined in Part Five – The Code, I got around to adding some “standard atmospheres” so we can see some comparisons and at least see where this model departs from other more accurate models.

First, what are heating rates? Within the context of this model we are currently thinking about the longwave radiative heating rates, which really means this:

If the only part of climate physics that was actually working was “longwave radiation” (terrestrial radiation) then how fast would different parts of the atmosphere heat up or cool down?

As we will see this mechanism (terrestrial radiation) mostly results in a net cooling for each part of the atmosphere.

The atmosphere also absorbs solar radiation – not shown in these graphs – which acts in the opposite direction and provides a heating.

Lastly, the sun warms the surface and convection transfers heat much more efficiently from the surface to the lower atmosphere – and this makes up the balance.

So, with longwave heating (cooling) curves, we are considering one mechanism by which heat is transferred.

Second, what is “longwave radiation”? This is a conventional description of the radiation emitted by the climate system, specifically the fact that its wavelength is almost all above 4 μm. The other significant radiation component in the climate system is “shortwave radiation”, which by convention means radiation below 4 μm. See The Sun and Max Planck Agree – Part Two for more.

Third, what is a “standard atmosphere”? It’s just a kind of average, useful for inter-comparisons and for evaluating various climate mechanisms around ideal cases. In this case, I used the AFGL (Air Force Geophysics Lab) models, which are also used in LBLRTM (the Line-By-Line Radiative Transfer Model).

Here is a graph for tropical conditions of heating rate vs height – and with a breakdown between the rates caused by water vapor, CO2 and O3:

Atmospheric-radiation-13c-Heating-rates-tropical-each-H2O-CO2-O3

Figure 1

Notice that the heating rate is mostly negative, so the atmosphere is cooling via radiation – which means that for this atmospheric profile, water vapor, CO2 and ozone together emit more terrestrial radiation than they absorb.
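For anyone wanting to reproduce this kind of curve, the conversion from a net flux divergence to a heating rate is simple. The sketch below uses illustrative flux values across a single layer; it is not how the line-by-line model itself is organised.

% Converting a net flux divergence into a longwave heating rate (K/day). If the
% net upward flux increases with height across a layer, the layer is losing
% energy and cools. In pressure coordinates: dT/dt = (g/cp) * dF_net/dp.
g  = 9.81; cp = 1004;                  % m/s^2, J/kg/K
p1 = 500e2;  p2 = 400e2;               % layer boundaries, Pa (bottom, top)
F1 = 250;    F2 = 270;                 % net upward LW flux at each boundary, W/m^2 (illustrative)
dTdt = (g/cp) * (F2 - F1)/(p2 - p1);   % K/s
fprintf('Heating rate = %.1f K/day\n', dTdt*86400);   % about -1.7 K/day, i.e. radiative cooling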

Here is a textbook comparison:

From Petty (2006)

Figure 2

And a set of graphs detailing the tropical condition for temperature, pressure, density and GHG concentrations:

Atmospheric-radiation-13a-Tropical-profile-temperature-gases-density

Figure 3 – Click to enlarge

Now some comparisons of the overall heating rates for 3 different profiles:

Atmospheric-radiation-13d-Heating-rates-3-atmospheres

Figure 4

Here is a textbook comparison:

From Petty (2006)

Figure 5

So we can see that the MATLAB model created here from first principles and using the HITRAN database of absorption and emission lines is quite close to other calculated standards.

In fact, the differences are small except in the mid-stratosphere, and we may find that this is due to slight differences in the model atmosphere used, or to not using the Voigt profile (an important but technical area of atmospheric radiation – line shapes and how they change with pressure and temperature in the atmosphere – see for example Part Eight – CO2 Under Pressure).

Pekka Pirilä has been running this MATLAB model as well, has helped with numerous improvements and has just implemented the Voigt profile so we will shortly find out if the line shape is a contributor to any differences.

For reference, here are the profiles of the other two conditions shown in figure 4: Midlatitude summer & Subarctic summer:

Atmospheric-radiation-13h-Midlatitude-summer-profile-temperature-gases-density

Figure 6 – Click to enlarge

Atmospheric-radiation-13e-Subarctic-summer-profile-temperature-gases-density

Figure 7 – Click to enlarge

Related Articles

Part One – some background and basics

Part Two – some early results from a model with absorption and emission from basic physics and the HITRAN database

Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions

Part Four – Water Vapor – results of surface (downward) radiation and upward radiation at TOA as water vapor is changed

Part Five – The Code – code can be downloaded, includes some notes on each release

Part Six – Technical on Line Shapes – absorption lines get thinner as we move up through the atmosphere..

Part Seven – CO2 increases – changes to TOA in flux and spectrum as CO2 concentration is increased

Part Eight – CO2 Under Pressure – how the line width reduces (as we go up through the atmosphere) and what impact that has on CO2 increases

Part Nine – Reaching Equilibrium – when we start from some arbitrary point, how the climate model brings us back to equilibrium (for that case), and how the energy moves through the system

Part Ten – “Back Radiation” – calculations and expectations for surface radiation as CO2 is increased

Part Eleven – Stratospheric Cooling – why the stratosphere is expected to cool as CO2 increases

Part Thirteen – Surface Emissivity – what happens when the earth’s surface is not a black body – useful to understand seeing as it isn’t..

References

AFGL atmospheric constituent profiles (0-120 km), by GP Anderson et al (1986)

A First Course in Atmospheric Radiation, Grant Petty, Sundog Publishing (2006)

The data used to create these graphs comes from the HITRAN database.

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al, Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

The HITRAN 2004 molecular spectroscopic database, by L.S. Rothman et al., Journal of Quantitative Spectroscopy & Radiative Transfer (2005)

Understanding atmospheric radiation is not so simple. But now we have a line by line model of absorption and emission of radiation in the atmosphere we can do some “experiments”. See Part Two and Part Five – The Code.

Many people think that models are some kind of sham and that climate scientists should be out there doing real experiments. Well, models aren’t a sham, and climate scientists are out there doing lots of experiments. Various articles on Science of Doom have outlined some of the very detailed experiments that have been done by atmospheric physicists, aka climate scientists.

When you want to understand why some aspect of a climate mechanism works the way it does, or what happens if something changes, then usually you have to resort to a mathematical model of that part of the climate.

You can’t suddenly increase the amount of a major GHG across the planet, or slow down the planetary rotation to ½ its normal speed. Well, not without a sizable investment, a health and safety risk, possible inconvenience to a lot of people and, at some stage, awkward government investigations.

You can’t stop the atmosphere emitting radiation or test a stratosphere that gets cooler with height. But you can attempt to model it.

Mathematical models all have their limitations. We have to understand what the model can tell us and what it can’t tell us. We have to understand what presuppositions are built into the model and what can change in real life that is not being modeled in the maths. It’s all about context.

(Well-designed) models are not correct and are not incorrect. They are informative if we understand their limitations and capabilities.

In contrast to mathematical models built around the physics of climate mechanisms, many people commenting in the blog world (or even writing blogs) have a vague mental model of how climate works. This of course is way way ahead of a climate model built on physics. It has the advantage of not being written down in equations so that no one can challenge it and seemingly plausible hand-waving argument 1 can be traded against hand-waving argument 2. Unfortunately, on this blog we don’t have the luxury of those resources and – where experiments are not available or not possible – we will have to evaluate the results of mathematical models built on physics and observations.

All the above is not an endorsement of what GCMs tell us. And not an indictment. Hopefully no one reading the above paragraphs came to either conclusion.

When I first built the line by line model it had more limitations than today. One early problem was the stratosphere. In real life the temperature of the stratosphere increases with height. In the model the temperature decreased with height.

This was expected. O2 and O3 absorb solar radiation (primarily ultraviolet) and warm mainly the middle layers of the stratosphere. But the model didn’t have this physics. The model, at this stage, primarily modeled the absorption and emission of terrestrial (aka ‘longwave’) radiation by the atmosphere.

So, after a few versions a very crude model of solar absorption was added. Unfortunately, this solar absorption model still did not create a stratosphere that increased with temperature. This was quite disappointing.

Then commenter Uli pointed out that the model had too much stratospheric water vapor, and I added a new parameter to the model which allows stratospheric water vapor to be set differently from the free troposphere. (So far I’ve been using a realistic level of 6 ppmv).

The result, happily, was that the stratospheric temperature, left to its own (model) devices, started increasing with height. The starting point is simply a temperature profile dictated to the model, and the finish point is how the physics ends up calculating the final temperature profile:

Atmospheric-radiation-11a-temp-profile-strat-wv

Figure 1 – A warmer stratosphere and a happier climate model

At the same time, I’ve been updating the model so that it can run to some kind of equilibrium and then various GHGs can be changed.

This was to calculate “radiative forcing” under various scenarios, and specifically I wanted to show how energy moved around in the climate system after a “bump” in something like CO2. This is something that many many people can’t get right in their heads. One of the objectives of the model is to show bit by bit how the increased CO2 causes a reduction in net outgoing radiation, and how that in turn pushes up the atmospheric and surface temperature.

On this journey, once the model stratosphere was behaving a little like its real-life big brother it occurred to me that maybe we could answer the question of why the stratosphere was expected to cool with increased CO2.

See Stratospheric Cooling for some background.

Previously I have worked under the assumption that there are lots of competing “terms” in the energy balance equation for how the stratosphere responds to more CO2 and so simple conceptual models are not going to help.

Now the Science of Doom Climate Model (SoDCM) comes to the rescue.

In fact, while I was waiting for lots of simulations to finish on the PC I was reading again the fascinating Radiative Forcing and Climate Response, by Hansen, Sato & Ruedy, JGR (1997) – free paper – and in a groundhog day experience realized I didn’t understand their flux graphs resulting from various GCM simulations. So the SoDCM allowed me to solve my own conceptual problems.

Maybe.

Let’s take a look at stratospheric cooling.

Understanding Flux Curves

In this simulation:

  • CO2 at 280 ppm
  • no ozone, CH4 or N2O for longwave absorption
  • boundary layer humidity at 80%
  • free tropospheric humidity at 40%
  • stratospheric water vapor at 6 ppmv
  • tropopause at 200 hPa
  • top of atmosphere (TOA) at 1 hPa
  • solar radiation at 242 W/m² with some absorbed in the stratosphere and troposphere as shown in figure 1 of Part Nine – Reaching Equilibrium

The surface temperature reached equilibrium at 281K and the tropopause was at 11 km:

Atmospheric-radiation-12c-temperature-profile

Figure 2

The equilibrium was reached by running the model for 500 (model) days, with timesteps of 2 hours. The ocean depth was only 5 meters, simply to allow the model to get to equilibrium more quickly (note 1).
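As a rough check on why a shallow ocean reaches equilibrium quickly: the e-folding time is the slab heat capacity divided by the radiative restoring strength. The restoring strength below is an assumption (a blackbody surface response, with no feedbacks), so treat the result as an order of magnitude only.

% Rough check on why a 5 m ocean reaches equilibrium quickly: e-folding time =
% heat capacity / radiative restoring strength. The restoring strength is assumed
% here (blackbody surface response, no feedbacks), so this is order of magnitude only.
rho = 1000; cw = 4180; depth = 5;          % water density, specific heat, slab depth
C   = rho*cw*depth;                        % heat capacity, J/m^2/K
lam = 4*5.67e-8*281^3;                     % ~5 W/m^2/K at the model's 281 K surface
tau = C/(lam*86400);                       % e-folding time, days
fprintf('tau = %.0f days\n', tau);         % a few tens of days, so 500 days is ample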

Then at 500 days the CO2 concentration was doubled to 560 ppm and we capture a number of different values from the timestep before the increase and the timestep after the increase.

Let’s take a look at the up and down fluxes through the atmosphere. See also figure 6 of Part Two. In this case we can see pre- and post-2xCO2, but let’s first just try and understand what these flux vs height graphs actually mean:

Atmospheric-radiation-12a-flux-profile-pre-post-2xCO2

Figure 3 – Understanding the Basics

If flux just stays constant (vertical line) through a section of the atmosphere what does it mean?

It means there is no net absorption. It could mean that the atmosphere is transparent to that radiation. It could mean that the atmosphere emits exactly the same amount that it absorbs. Or some of both. Either way, no change = no net radiation absorbed.

Take a look in figure 3 at the (pre-CO2 doubling) upward flux above 10km (in the stratosphere). About 237 W/m² enters the bottom of the stratosphere and about 242 W/m² leaves the top of atmosphere. So the stratosphere is 5 W/m² worse off and from the first law of thermodynamics this either cools the stratosphere or something else is supplying this energy.

Now take a look at the (pre-CO2) downward flux in the stratosphere. At the top of atmosphere there is no downward longwave radiation because there is no source of this radiation outside of the atmosphere. So downward flux = 0 at TOA.

At the bottom of the stratosphere, about 27 W/m² is leaving. So zero is entering and 27 W/m² is leaving – this means that the stratosphere is worse off by 27 W/m².

If we add up the upward and downward longwave fluxes through the stratosphere we find that there is a net loss of about 32 W/m². This means that if the stratosphere is in equilibrium some other source must be supplying 32 W/m².

In this case it is the solar absorption of radiation.

If we were considering the troposphere it would most likely be convection from the surface or lower atmosphere that would be balancing any net radiation loss from higher up in the troposphere.
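The stratospheric bookkeeping just described can be written out explicitly – the flux values below are read roughly off figure 3 (pre-doubling):

% The stratospheric bookkeeping written out explicitly (values read roughly off
% figure 3, pre-doubling).
up_in    = 237;   % upward LW flux entering the bottom of the stratosphere, W/m^2
up_out   = 242;   % upward LW flux leaving at TOA, W/m^2
down_in  = 0;     % downward LW flux at TOA
down_out = 27;    % downward LW flux leaving the bottom of the stratosphere, W/m^2
net_loss = (up_out - up_in) + (down_out - down_in);   % radiative loss of the stratosphere
fprintf('Net longwave loss of the stratosphere = %.0f W/m^2\n', net_loss);
% In equilibrium this ~32 W/m^2 must be supplied by something else - here, the
% absorption of solar radiation.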

So, to recap:

  • think about the direction radiation is travelling in:
    • if it is reducing in the direction it is travelling then energy is absorbed into that section of the atmosphere
    • if it is increasing in the direction it is travelling then energy is being lost from that section of the atmosphere
  • if plots of flux against height are vertical that means there is no change in energy in that region
  • if flux vs height is constant (vertical) then it either means
    • the atmosphere is transparent to that radiation, OR
    • the atmosphere is isothermal in that region (emission is balanced by absorption)

Take another look at figure 3 below 10km:

  1. The upward radiation is reducing with height – energy is absorbed by each level of the atmosphere. This is a net heating.
  2. The downward radiation is increasing – energy is lost from each level of the atmosphere. This is a net cooling.
  3. The slope of the curves is not equal. This is because energy is transferred via convection in the troposphere.

Understanding these concepts is essential to understanding radiation in the atmosphere.

Upward Flux from Changes in CO2

Let’s take a closer look at the upward and downward changes due to doubling CO2. So the “pre” curve is the atmosphere in a nice equilibrium condition. And the “post” curve is immediately after CO2 has been doubled, long before any equilibrium has been reached.

Let’s zoom in on the upward fluxes in the stratosphere pre- and immediately post-CO2 doubling:

Atmospheric-radiation-12a-flux-profile-pre-post-2xCO2-highlight-up-stratosphere

Figure 4

Even though the curves are roughly parallel from 10 km through to 30 km, you should be able to see that there is a larger gradient on the post-2xCO2 curve. So pre-CO2 increase the stratosphere loses a net of about 5 W/m² in the upward direction, and after the CO2 increase it loses a net of about 6 W/m².

This means more CO2 increases the cooling of the stratosphere when we consider the upward flux. So now the question is, WHY?

If we want to understand the answer, the most useful ingredient is the spectral characteristics of the pre and post cases. Here we take the radiation leaving at TOA and subtract the radiation entering at the tropopause. So we are considering the net energy lost (lost, because this calculation is energy out minus energy in), as a function of wavenumber.

Here is the spectral graph of energy lost by the stratosphere due to upwards radiation, before the CO2 increase:

Atmospheric-radiation-12g-upward-spectrum-21-13-pre

Figure 5

The post-CO2 doubling looks very similar so here is a comparison graph, with a slight smoothing (moving average window) just to allow us to see a little more clearly the main differences:

Atmospheric-radiation-12f-upward-spectrum-21-13-pre-and-post-smoothed

Figure 6

So we see that in the case of post-2xCO2, the energy lost is a little higher, and it is in the wavenumber region where CO2 emits strongly. CO2’s peak absorption/emission is at 667 cm-1 (15 μm).

Just to confirm, here is the difference – post-2xCO2 minus pre-2xCO2 and not smoothed:

Atmospheric-radiation-12h-upward-spectrum-21-13-post-less-pre

Figure 7

We can see that the main regions of CO2 absorption and emission are the reason. And we note that the temperature of the stratosphere is increasing with height.

So the reason is clear – due to principles outlined earlier in Part Two. Because the stratospheric temperature increases with height, the net emission (i.e., emission less absorption) of radiation becomes progressively higher as we go up through the stratosphere. And once we increase the amount of CO2, this net emission increases even further.

This is what we see in the spectral intensity – the net change in stratospheric emission [(out – in) at 2xCO2 minus (out – in) at 1xCO2] increases due to the emission in the main CO2 bands.

Downward Flux from Changes in CO2

Here is what we see when we zoom in on the downward flux in the stratosphere:

Atmospheric-radiation-12a-flux-profile-pre-post-2xCO2-highlight-down-stratosphere

Figure 8

Of course, as already mentioned, the downward longwave flux at TOA must be zero.

This time it is conceptually easier to understand the change from more CO2. There’s one little fly in the understanding ointment, but let’s come to that later.

So when we think about the cooling of the stratosphere from downward flux it’s quite easy. Coming in at the top is zero. Coming out of the bottom (pre-CO2 increase) is about 27 W/m². Coming out of the bottom (post-2xCO2) is about 30 W/m². So increasing CO2 causes a cooling of about 3 W/m² due to changes in downward flux.

Here is the spectral flux (unsmoothed) downward out of the bottom of the tropopause, pre- and post-2xCO2:

Atmospheric-radiation-12d-downward-spectrum-tropopause-pre-post

Figure 9

And as with figure 7, below is the difference in downward intensity as a result of 2xCO2. This is post less pre, so the positive value overall means a cooling – as we saw in the total flux change in figure 8.

The cause is still the CO2 band, but the specifics are a little different from the upward change. Here the center of the CO2 band has zero effect. But the “wings” of the CO2 band – around 600 cm-1 and 700 cm-1 – are the places causing the effect:

Atmospheric-radiation-12d-downward-spectrum-tropopause-delta-pre-post

Figure 10

The temperature is reducing as we go downwards so the emission from the center of the CO2 band cannot be increasing as we go downward. If we look back at figure 7 for the upward direction, the temperature is increasing upward so the emission from the center of the CO2 band must be increasing.

And the conceptual fly in the ointment alluded to earlier – this one can be confusing (or simple): if the downward flux starts at zero at TOA and the temperature decreases downward, surely the downward flux can only get less? Less than zero? Instead, think of the whole stratosphere as a body. It must emit radiation according to its temperature and emissivity. It can’t absorb any radiation from above (because there is none), so it must emit some downward radiation. As its emissivity increases with more GHGs, it must emit more radiation into the troposphere. It’s simple really.
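As an illustration with made-up numbers (the real emissivity is strongly wavenumber-dependent, so this is only a sketch): if the stratosphere had an effective emitting temperature of 220 K and a band-averaged emissivity of 0.2, it would emit roughly 0.2 × σ × 220⁴ ≈ 27 W/m² downward, and there is no flux arriving from above to offset it – so any increase in emissivity from extra GHGs can only push the downward flux at the tropopause up.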

Let’s now finalize this story by considering the net change in flux with height due to CO2 increases. Here if “net” is increasing with height it means absorption or heating. And if “net” is reducing with height it means emission or cooling. See note 2 where the details are explained.

So the blue line (upward flux) decreasing from the tropopause up to TOA means that the change in flux is cooling the stratosphere. And likewise for the green line (downward flux). These are just the results already shown as spectral changes, now shown as flux changes:

Figure 11 – Change in upward and downward flux with height due to 2xCO2

Net Effect

If we combine the upward and downward changes in figure 11 to get the total net effect of doubling CO2:

Figure 12 – Total change in flux with height due to 2xCO2

From the tropopause at 11km through to TOA we can see that the combined change in flux due to CO2 doubling causes a cooling of the stratosphere. (And from the surface up to the tropopause we see a heating of the troposphere).

By comparison, here is an extract from Hansen et al (1997):

Figure 13 – From Hansen et al (1997)

The highlighted instantaneous graph is the one for comparison with figure 12.

This is the case before the stratosphere has relaxed into equilibrium. Note that the “adjusted” graph – stratospheric equilibrium – has a vertical line for ΔF vs height, which simply means that the stratosphere is, in that case, in radiative equilibrium.

Notice as well that the magnitude of my graph is a lot higher. There may be several reasons for that, including the fact that mine is one specific case rather than some climatic mean, and also that the absorption of solar radiation in my model has been treated very crudely. (Other factors include missing GHGs like CH4, N2O, etc.)

Reasons

So we have seen that the net emission of radiation by CO2 bands is what causes the cooling from upward radiation and the cooling from downward radiation when CO2 is increased.

For further insight, I amended the model so that on the timesteps just before and just after equilibrium the stratosphere was (a rough code sketch of both cases follows below):

A) snapped back to an isothermal case, with the temperature set at the tropopause temperature just calculated

B) forced to cool at 4 K/km with height (cf. the troposphere, with a lapse rate of 6.5 K/km)
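Here is a rough sketch of those two modifications (Python/NumPy, hypothetical function and array names – the model’s own code is in Part Five):

import numpy as np

def isothermal_stratosphere(T, z, z_trop):
    # Case A: set every level above the tropopause to the tropopause temperature
    T = np.array(T, dtype=float); z = np.asarray(z, dtype=float)  # T in K, z in km
    T[z > z_trop] = T[np.argmin(np.abs(z - z_trop))]
    return T

def cooling_stratosphere(T, z, z_trop, lapse=4.0):
    # Case B: temperature falls at 'lapse' K/km above the tropopause
    T = np.array(T, dtype=float); z = np.asarray(z, dtype=float)
    i = np.argmin(np.abs(z - z_trop))
    above = z > z_trop
    T[above] = T[i] - lapse * (z[above] - z_trop)
    return T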

Case A, temperature profile just before and after equilibrium:

Figure 14 – Temperature profile with an isothermal stratosphere (case A)

And the comparison to figure 11:

Figure 15 – Change in flux with height due to 2xCO2, isothermal stratosphere (case A)

We can see that the downward flux change is similar to figure 11, but the upward flux is different. It is fairly constant through the stratosphere. This is not surprising. The flux from below is either transmitted straight through, or is absorbed and re-emitted at the same temperature. So no change to upward flux.

But the downward flux only results from the emission from the stratosphere (nothing is transmitted through from above). As CO2 is increased, the emissivity of the atmosphere increases and so the emission of radiation from the stratosphere increases. The fact that the stratosphere is isothermal has a small effect, as can be seen by comparing the green curves in figures 15 and 11, but it isn’t very significant.

Now let’s consider case B. First the temperature profile:

Figure 16 – Temperature profile with the stratosphere cooling at 4 K/km (case B)

Now the net flux graph:

Figure 17 – Change in upward and downward flux due to 2xCO2, stratosphere cooling with height (case B)

Here we see that the effect of increased CO2 on the upward flux is now a heating in the stratosphere. And the net change in downward flux still has a cooling effect.

Figure 18 – Total change in flux due to 2xCO2, stratosphere cooling with height (case B)

Here we see that for a stratosphere where temperature reduces with altitude, doubling CO2 would not have a noticeable effect on the stratospheric temperature. Depending on the temperature profile (and other factors) there might be a slight cooling or a slight heating.

Conclusion

This is a subject where it’s easy to confuse readers – along with the article writer. Possibly no one who was unclear before made it the whole way through and said “ok, got it”.

Hopefully, even if you only made it part of the way through, you now have a better grasp of some of the principles.

The reasons behind stratospheric cooling due to increased GHGs have been difficult to explain even for very knowledgeable atmospheric physicists (e.g., one of many).

I think I can explain stratospheric cooling under increasing CO2. I think I can see that other factors like the exact temperature profile of the stratosphere on any given day/month and the water vapor profile (not shown in this article) will also affect the change in stratospheric temperature from increasing CO2.

If the bewildering complexity of up/down, in-out, net of in-out, net of in-out for 2xCO2-original CO2 has left you baffled please feel free to ask questions. This is not an easy topic. I was baffled. I have 4 pages of notes with little graphs and have rewritten the equations in note 2 at least 5 times to try and get the meaning clear – and am still expecting someone to point out a sign error.

Related Articles

Part One – some background and basics

Part Two – some early results from a model with absorption and emission from basic physics and the HITRAN database

Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions

Part Four – Water Vapor – results of surface (downward) radiation and upward radiation at TOA as water vapor is changed

Part Five – The Code – code can be downloaded, includes some notes on each release

Part Six – Technical on Line Shapes – absorption lines get thinner as we move up through the atmosphere..

Part Seven – CO2 increases – changes to TOA in flux and spectrum as CO2 concentration is increased

Part Eight – CO2 Under Pressure – how the line width reduces (as we go up through the atmosphere) and what impact that has on CO2 increases

Part Nine – Reaching Equilibrium – when we start from some arbitrary point, how the climate model brings us back to equilibrium (for that case), and how the energy moves through the system

Part Ten – “Back Radiation” – calculations and expectations for surface radiation as CO2 is increased

Part Twelve – Heating Rates – heating rate (°C/day) for various levels in the atmosphere – especially useful for comparisons with other models.

References

The data used to create these graphs comes from the HITRAN database.

The HITRAN 2008 molecular spectroscopic database, by L.S. Rothman et al., Journal of Quantitative Spectroscopy & Radiative Transfer (2009)

The HITRAN 2004 molecular spectroscopic database, by L.S. Rothman et al., Journal of Quantitative Spectroscopy & Radiative Transfer (2005)

Radiative Forcing and Climate Response, by Hansen, Sato & Ruedy, JGR (1997) – free paper

Notes

Note 1: The relative heat capacity of the ocean vs the atmosphere has a huge impact on the climate dynamics. But in this simulation we were interested in reaching an equilibrium for a given CO2 concentration & solar absorption – and then seeing what happened to radiative balance immediately after a bump in CO2 concentration.

For this requirement it isn’t so important to have the right ocean depth needed for decent dynamic modeling.

Note 2: The treatment of upward and downward flux can get bewildering. The easiest approach is to consider the change in flux in the direction in which it is travelling. But because the two fluxes travel in opposite directions – F↑ in the direction of increasing z, F↓ in the opposite direction – the sign conventions for heating and cooling are opposite for the two.

Due to changing GHGs:

If F↑(z)_2xCO2 – F↑(z)_1xCO2 < 0 => Heating below height z (less flux escaping);

If F↑(z)_2xCO2 – F↑(z)_1xCO2 > 0 => Cooling below height z;

If F↓(z)_2xCO2 – F↓(z)_1xCO2 < 0 => Cooling below height z (less flux entering);

If F↓(z)_2xCO2 – F↓(z)_1xCO2 > 0 => Heating below height z.

So, for example, for figure 11: net upward = F↑(z)_1xCO2 – F↑(z)_2xCO2 and net downward = F↓(z)_2xCO2 – F↓(z)_1xCO2

Flux “divergence”

dF↑(z)/dz < 0  => Heating of that part of the atmosphere (upward flux is reducing due to being absorbed)

dF↓(z)/dz < 0  => Cooling of that part of the atmosphere (the downward flux increases as we go down because more is being emitted – or, rewritten in rather strange English to match the equation: the downward flux decreases in the upward direction)
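For anyone who prefers the bookkeeping in code, here is a minimal sketch (Python/NumPy, hypothetical array names) of how the heating or cooling of each layer follows from the flux profiles. Pass in the change in flux due to 2xCO2, on levels ordered bottom to top, and the sign tells you whether each layer is heated or cooled:

import numpy as np

def layer_gain(delta_F_up, delta_F_dn):
    # Change in energy gained by the layer between each pair of adjacent levels (W/m²);
    # positive = the CO2 increase heats that layer, negative = it cools it
    dFu = np.asarray(delta_F_up, dtype=float)
    dFd = np.asarray(delta_F_dn, dtype=float)
    up_gain = dFu[:-1] - dFu[1:]   # extra upward flux entering the bottom minus extra leaving the top
    dn_gain = dFd[1:] - dFd[:-1]   # extra downward flux entering the top minus extra leaving the bottom
    return up_gain + dn_gain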