
Archive for the ‘Climate Models’ Category

In Part Seven we looked at a couple of papers from 1989 and 1994 which attempted to use GCMs to “start an ice age”. The evolution of the “climate science in progress” has been:

  1. Finding indications that the timing of ice age inception was linked to redistribution of solar insolation via orbital changes – possibly reduced summer insolation in high latitudes (Hays et al 1976 – discussed in Part Three)
  2. Using simple energy balance models to demonstrate there was some physics behind the plausible ideas (we saw a subset of the plausible ideas in Part Six – Hypotheses Abound)
  3. Using a GCM with the starting conditions of around 115,000 years ago to see if “perennial snow cover” could be achieved at high latitudes that weren’t ice covered in the last inter-glacial – i.e., can we start a new ice age?

Why, if an energy balance model can “work”, i.e., produce perennial snow cover to start a new ice age, do we need to use a more complex model? As Rind and his colleagues said in their 1989 paper:

Various energy balance climate models have been used to assess how much cooling would be associated with changed orbital parameters.. With the proper tuning of parameters, some of which is justified on observational grounds, the models can be made to simulate the gross glacial/interglacial climate changes. However, these models do not calculate from first principles all the various influences on surface air temperature noted above, nor do they contain a hydrologic cycle which would allow snow cover to be generated or increase. The actual processes associated with allowing snow cover to remain through the summer will involve complex hydrologic and thermal influences, for which simple models can only provide gross approximations.

[Emphases added – and likewise in all following quotations, bold is emphasis added]. So interestingly, moving to a more complex model with better physics showed that there was a problem with (climate models) starting an ice age. Still, those were early GCMs with much more limited computing power. In this article we will look at results from a decade or so later.

Reviews

We’ll start with a couple of papers that include excellent reviews of “the problem so far”, one in 2002 by Yoshimori and his colleagues and one in 2004 by Vettoretti & Peltier. Yoshimori et al 2002:

One of the fundamental and challenging issues in paleoclimate modelling is the failure to capture the last glacial inception (Rind et al. 1989)..

..Between 118 and 110 kaBP, the sea level records show a rapid drop of 50 – 80 m from the last interglacial, which itself had a sea level only 3 – 5 m higher than today. This sea level lowering, as a reference, is about half of the last glacial maximum. ..As the last glacial inception offers one of few valuable test fields for the validation of climate models, particularly atmospheric general circulation models (AGCMs), many studies regarding this event have been conducted.

Phillipps & Held (1994) and Gallimore & Kutzbach (1995).. conducted a series of sensitivity experiments with respect to orbital parameters by specifying several extreme orbital configurations. These included a case with less obliquity and perihelion during the NH winter, which produces a cooler summer in the NH. Both studies came to a similar conclusion that although a cool summer orbital configuration brings the most favorable conditions for the development of permanent snow and expansion of glaciers, orbital forcing alone cannot account for the permanent snow cover in North America and Europe.

This conclusion was confirmed by Mitchell (1993), Schlesinger & Verbitsky (1996), and Vavrus (1999).. ..Schlesinger & Verbitsky (1996), integrating an ice sheet-asthenosphere model with AGCM output, found that a combination of orbital forcing and greenhouse forcing by reduced CO2 and CH4 was enough to nucleate ice sheets in Europe and North America. However, the simulated global ice volume was only 31% of the estimate derived from proxy records.

..By using a higher resolution model, Dong & Valdes (1995) simulated the growth of perennial snow under combined orbital and CO2 forcing. As well as the resolution of the model, an important difference between their model and others was the use of “envelope orography” [playing around with the height of land].. found that the changes in sea surface temperature due to orbital perturbations played a very important role in initiating the Laurentide and Fennoscandian ice sheets.

And as a note on the last quote, it’s important to understand that these studies used an Atmospheric GCM, not an Atmosphere-Ocean GCM – i.e., a model of the atmosphere with prescribed sea surface temperatures (these might come from a separate run of a simpler model, or from values determined from proxies). The authors then comment on the potential impact of vegetation:

..The role of the biosphere in glacial inception has been studied by Gallimore & Kutzbach (1996), de Noblet et al. (1996), and Pollard and Thompson (1997).

..Gallimore & Kutzbach integrated an AGCM with a mixed layer ocean model under five different forcings: 1) control; 2) orbital; 3) #2 plus CO2; 4) #3 plus 25% expansion of tundra based on the study of Harrison et al. (1995); and 5) #4 plus a further 25% expansion of tundra. The effect of the expansion of tundra through a vegetation-snow masking feedback was approximated by increasing the snow cover fraction. In only the last case was perennial snow cover seen..

..Pollard and Thompson (1997) also conducted an interactive vegetation and AGCM experiment under both orbital and CO2 forcing. They further integrated a dynamic ice-sheet model for 10 ka under the surface mass balance calculated from AGCM output, using a multi-layer snow/ice-sheet surface column model on the grid of the dynamical ice-sheet model, including the effect of refreezing of rain and meltwater. Although their model predicted the growth of an ice sheet over Baffin Island and the Canadian Archipelago, it also predicted a much faster growth rate in northwestern Canada and southern Alaska, and no nucleation was seen on Keewatin or Labrador [i.e. the wrong places]. Furthermore, the rate of increase of ice volume over North America was an order of magnitude less than that estimated from proxy records.

They conclude:

It is difficult to synthesise the results of these earlier studies since each model used different parameterisations of unresolved physical processes, resolution, and had different control climates as well as experimental design.

They summarize that the results to date indicate that neither orbital forcing alone nor CO2 alone can explain glacial inception, and that the combined effects are not consistent across studies. The remaining difficulty appears to relate to the resolution of the model or to feedback from the biosphere (vegetation).

A couple of years later, Vettoretti & Peltier (2004) provided a good review at the start of their paper:

Initial attempts to gain deeper understanding of the nature of the glacial–interglacial cycles involved studies based upon the use of simple energy balance models (EBMs), which have been directed towards the simulation of perennial snow cover under the influence of appropriately modified orbital forcing (e.g. Suarez and Held, 1979).

Analyses have since evolved such that the models of the climate system currently employed include explicit coupling of ice sheets to the EBM or to more complete AGCM models of the atmosphere.

The most recently developed models of the complete 100 kyr ice age cycle have evolved to the point where three model components have been interlinked, respectively, an EBM of the atmosphere that includes the influence of ice-albedo feedback including both land ice and sea ice, a model of global glaciology in which ice sheets are forced to grow and decay in response to meteorologically mediated changes in mass balance, and a model of glacial isostatic adjustment, through which process the surface elevation of the ice sheet may be depressed or elevated depending upon whether accumulation or ablation is dominant..

..Such models have also been employed to investigate the key role that variations in atmospheric carbon dioxide play in the 100 kyr cycle, especially in the transition out of the glacial state (Tarasov and Peltier, 1997; Shackleton, 2000). Since such models are rather efficient in terms of the computer resources required to integrate them, they are able to simulate the large number of glacial– interglacial cycles required to understand model sensitivities.

There has also been a movement within the modelling community towards the use of models that are currently referred to as earth models of intermediate complexity (EMICs) which incorporate sub-components that are of reduced levels of sophistication compared to the same components in modern Global Climate Models (GCMs). These EMICs attempt to include representations of most of the components of the real Earth system including the atmosphere, the oceans, the cryosphere and the biosphere/carbon cycle (e.g. Claussen, 2002). Such models have provided, and will continue to provide, useful insight into long-term climate variability by making it possible to perform a large number of sensitivity studies designed to investigate the role of various feedback mechanisms that result from the interaction between the components that make up the climate system (e.g. Khodri et al., 2003).

Then the authors comment on the same studies and issues covered by Yoshimori et al, and additionally on their own 2003 paper and another study. On their own research:

Vettoretti and Peltier (2003a), more recently, have demonstrated that perennial snow cover is achieved in a recalibrated version of the CCCma AGCM2 solely as a consequence of orbital forcing when the atmospheric CO2 concentration is fixed to the pre-industrial level as constrained by measurements on air bubbles contained in the Vostok ice core (Petit et al., 1999).

This AGCM simulation demonstrated that perennial snow cover develops at high northern latitudes without the necessity of including any feedbacks due to vegetation or other effects. In this work, the process of glacial inception was analysed using three models having three different control climates that were, respectively, the original CCCma cold biased model, a reconfigured model modified so as to be unbiased, and a model that was warm biased with respect to the modern set of observed AMIP2 SSTs.. ..Vettoretti and Peltier (2003b) suggested a number of novel feedback mechanisms to be important for the enhancement of perennial snow cover.

In particular, this work demonstrated that successively colder climates increased moisture transport into glacial inception sensitive regions through increased baroclinic eddy activity at mid- to high latitudes. In order to assess this phenomenon quantitatively, a detailed investigation was conducted of changes in the moisture balance equation under 116 ka BP orbital forcing for the Arctic polar cap. As well as illustrating the action of a “cryospheric moisture pump”, the authors also proposed that the zonal asymmetry of the inception process at high latitudes, which has been inferred on the basis of geological observations, is a consequence of zonally heterogeneous increases and decreases of the northwards transport of heat and moisture.

And they go on to discuss other papers with an emphasis on moisture transport poleward. Now we’ll take a look at some work from that period.

Newer GCM work

Yoshimori et al 2002

Their models: an AGCM (atmospheric GCM) with 116 kyr BP orbital conditions and (a) present-day SSTs, (b) 116 kyr BP SSTs. Then another run with the latter conditions plus vegetation changed according to temperature (if the summer temperature is below -5ºC, the vegetation type is changed to tundra). Because running a “fully coupled” GCM (atmosphere and ocean) over a long time period required too much computing resource, a compromise approach was used.

The SSTs were calculated using an intermediate complexity model, with a simple atmospheric model and a full ocean model (including sea ice), run for 2000 years (oceans have a lot of thermal inertia). The details are described in section 2.1 of their paper. The idea is to get SSTs that are consistent between ocean and atmosphere.
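As a rough sense of why such a long spin-up is needed, here is a back-of-envelope sketch of my own – not from the paper – with assumed textbook values for the column depth and feedback parameter:

```python
# E-folding time for an ocean column to adjust: tau = rho * cp * H / lambda.
# All values below are illustrative assumptions, not from Yoshimori et al.
rho, cp = 1025.0, 3990.0     # seawater density (kg/m^3), specific heat (J/kg/K)
H = 4000.0                   # full-depth ocean column (m)
lam = 1.5                    # climate feedback parameter (W/m^2/K)

tau_seconds = rho * cp * H / lam
print(tau_seconds / 3.15e7)  # roughly 350 years for a well-mixed column
```

Even this crude slab estimate gives centuries, and because the real deep ocean exchanges heat with the surface only slowly, a multi-millennial run is needed before the SSTs settle down.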

The SSTs are then used as boundary conditions for a “proper” atmospheric GCM run over 10 years – this is described in section 2.2 of their paper. The insolation anomaly, with respect to present day, is shown in Figure 1:

Figure 1

They use 240 ppm CO2 for the 116 kyr condition, as “the lowest probable equivalent CO2 level” (combining the radiative forcing of CO2 and CH4). This equates to a reduction of 2.2 W/m² in radiative forcing. The SSTs calculated from the preliminary model are globally 1.1ºC colder for the 116 kyr condition than for the present-day run. This is not due to the insolation anomaly, which just “redistributes” solar energy; it is due to the lower atmospheric CO2 concentration. The 116 kyr SST in the northern North Atlantic is about 6ºC colder, due to the lower summer insolation plus a reduction in the MOC (note 1).
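As a sanity check on that 2.2 W/m² figure, here is a minimal sketch using the well-known logarithmic approximation for CO2 forcing from Myhre et al (1998). The paper doesn’t state the reference concentration; an assumed present-day equivalent CO2 level of about 360 ppm reproduces the quoted number:

```python
import numpy as np

def co2_forcing(c_ppm, c0_ppm):
    """Radiative forcing (W/m^2) from the logarithmic approximation of
    Myhre et al (1998): dF = 5.35 * ln(C / C0)."""
    return 5.35 * np.log(c_ppm / c0_ppm)

# 240 ppm against an assumed ~360 ppm present-day equivalent concentration:
print(co2_forcing(240.0, 360.0))  # about -2.2 W/m^2
```

The results of their work: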

  • with modern SSTs, orbital and CO2 values from 116 kyrs – small extension of perennial snow cover
  • with calculated 116 kyr SST, orbital and CO2 values – a large extension in perennial snow cover into Northern Alaska, eastern Canada and some other areas
  • with vegetation changes (tundra) added – further extension of snow cover north of 60º

They comment (and provide graphs) that increased snow cover is partly from reduced snow melt but also from additional snowfall. This is the case even though colder temperatures generally favor less precipitation.

Contrary to the earlier ice age hypothesis, our results suggest that the capturing of glacial inception at 116kaBP requires the use of “cooler” sea surface conditions than those of the present climate. Also, the large impact of vegetation change on climate suggests that the inclusion of vegetation feedback is important for model validation, at least, in this particular period of Earth history.

What we don’t find out is why their model produces perennial snow cover (even without vegetation changes) where earlier attempts failed. What appears unstated is that although the “orbital hypothesis” is “supported” by the paper, the necessary conditions are colder sea surface temperatures induced by much lower atmospheric CO2. Without the lower CO2 this model cannot start an ice age. And an additional point to note, Vettoretti & Peltier 2004, say this about the above paper:

The meaningfulness of these results, however, remain to be seen as the original CCCma AGCM2 model is cold biased in summer surface temperature at high latitudes and sensitive to the low value of CO2 specified in the simulations.

Vettoretti & Peltier 2003

This is the paper referred to by their 2004 paper.

This simulation demonstrates that entry into glacial conditions at 116 kyr BP requires only the introduction of post-Eemian orbital insolation and standard preindustrial CO2 concentrations

Here are the seasonal and latitudinal variations in top-of-atmosphere (TOA) solar radiation 116 kyrs ago vs today:

From Vettoretti & Peltier 2003

The essence of their model testing was this: they took an atmospheric GCM forced with prescribed SSTs – for three different sets of SSTs – with orbital and GHG conditions from 116 kyrs BP, and looked to see whether (and where) perennial snow cover occurred:

The three 116 kyr BP experiments demonstrated that glacial inception was successfully achieved in two of the three simulations performed with this model.

The warm-biased experiment delivered no perennial snow cover in the Arctic region except over central Greenland.

The cold-biased 116 kyr BP experiment had large portions of the Arctic north of 60°N latitude covered in perennial snowfall. Strong regions of accumulation occurred over the Canadian Arctic archipelago and eastern and central Siberia. The accumulation over eastern Siberia appears to be excessive since there is little evidence that eastern Siberia ever entered into a glacial state. The accumulation pattern in this region is likely a result of the excessive precipitation in the modern simulation.

They also comment:

All three simulations are characterized by excessive summer precipitation over the majority of the polar land areas. Likewise, a plot of the annual mean precipitation in this region of the globe (not shown) indicates that the CCCma model is in general wet biased in the Arctic region. It has previously been demonstrated that the CCCma GCMII model also has a hydrological cycle that is more vigorous than is observed (Vettoretti et al. 2000b).

I’m not clear how much the model bias of excessive precipitation also affects their result of snow accumulation in the “right” areas.

In Part II of their paper they dig into the details of the changes in evaporation, precipitation and transport of moisture into the arctic region.

Crucifix & Loutre 2002

This paper (and the following paper) used an EMIC – an intermediate complexity model – a trade-off model with coarser resolution and simpler parameterizations but consequently much faster run times, allowing for many different simulations over much longer time periods than is possible with a GCM. EMICs are also able to have coupled biosphere, ocean, ice sheets and atmosphere – whereas the GCM runs we saw above used only an atmospheric GCM with some method of prescribing sea surface temperatures.

This study addresses the mechanisms of climatic change in the northern high latitudes during the last interglacial (126–115 kyr BP) using the earth system model of intermediate complexity ‘‘MoBidiC’’.

Two series of sensitivity experiments have been performed to assess (a) the respective roles played by different feedbacks represented in the model and (b) the respective impacts of obliquity and precession..

..MoBidiC includes representations for atmosphere dynamics, ocean dynamics, sea ice and terrestrial vegetation. A total of ten transient experiments are presented here..

..The model simulates important environmental changes at northern high latitudes prior to the last glacial inception, i.e.: (a) an annual mean cooling of 5 °C, mainly taking place between 122 and 120 kyr BP; (b) a southward shift of the northern treeline by 14° in latitude; (c) accumulation of perennial snow starting at about 122 kyr BP and (d) gradual appearance of perennial sea ice in the Arctic.

..The response of the boreal vegetation is a serious candidate to amplify significantly the orbital forcing and to trigger a glacial inception. The basic concept is that at a large scale, a snow field presents a much higher albedo over grass or tundra (about 0.8) than in forest (about 0.4).

..It must be noted that planetary albedo is also determined by the reflectance of the atmosphere and, in particular, cloud cover. However, clouds being prescribed in MoBidiC, surface albedo is definitely the main driver of planetary albedo changes.
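To make the vegetation-snow masking idea from the quote above concrete, here is a toy grid-cell albedo calculation. The snow-over-tundra (0.8) and snow-in-forest (0.4) values come from the quote; the snow-free albedo is my own illustrative assumption:

```python
def surface_albedo(tree_fraction, snow_fraction, snow_free_albedo=0.15):
    """Toy vegetation-snow masking: snow lying on grass/tundra is bright
    (~0.8) while snow under a forest canopy stays dark (~0.4) because the
    trees mask the snow from above. snow_free_albedo is an assumed value."""
    snow_albedo = tree_fraction * 0.4 + (1.0 - tree_fraction) * 0.8
    return snow_fraction * snow_albedo + (1.0 - snow_fraction) * snow_free_albedo

# A southward treeline shift (forest -> tundra) brightens a snow-covered cell:
print(surface_albedo(tree_fraction=0.8, snow_fraction=1.0))  # forested: 0.48
print(surface_albedo(tree_fraction=0.1, snow_fraction=1.0))  # tundra:   0.76
```

The same snowfall therefore reflects far more sunlight once the treeline has retreated – which is why the treeline shift matters so much in their results.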

In their summary:

At high latitudes, MoBidiC simulates an annual mean cooling of 5 °C over the continents and a decrease of 0.3 °C in SSTs.

This cooling is mainly related to a decrease in the shortwave balance at the top-of-the atmosphere by 18 W/m², partly compensated for by an increase by 15 W/m² in the atmospheric meridional heat transport divergence.

These changes are primarily induced by the astronomical forcing but are almost quadrupled by sea ice, snow and vegetation albedo feedbacks. The efficiency of these feedbacks is enhanced by the synergies that take place between them. The most critical synergy involves snow and vegetation and leads to settling of perennial snow north of 60°N starting 122 kyr BP. The temperature-albedo feedback is also responsible for an acceleration of the cooling trend between 122 and 120 kyr BP. This acceleration is only simulated north of 60° and is absent at lower latitudes.

See note 2 for details on the model. This model has a cold bias of up to 5°C in the winter high latitudes.

Calov et al 2005

We study the mechanisms of glacial inception by using the Earth system model of intermediate complexity, CLIMBER-2, which encompasses dynamic modules of the atmosphere, ocean, biosphere and ice sheets. Ice-sheet dynamics are described by the three-dimensional polythermal ice-sheet model SICOPOLIS. We have performed transient experiments starting at the Eemian interglacial, at 126 kyr BP (126,000 years before present). The model runs for 26 kyr with time-dependent orbital and CO2 forcings.

The model simulates a rapid expansion of the area covered by inland ice in the Northern Hemisphere, predominantly over Northern America, starting at about 117 kyr BP. During the next 7 kyr, the ice volume grows gradually in the model at a rate which corresponds to a change in sea level of 10 m per millennium.

We have shown that the simulated glacial inception represents a bifurcation transition in the climate system from an interglacial to a glacial state caused by the strong snow-albedo feedback. This transition occurs when summer insolation at high latitudes of the Northern Hemisphere drops below a threshold value, which is only slightly lower than modern summer insolation.

By performing long-term equilibrium runs, we find that for the present-day orbital parameters at least two different equilibrium states of the climate system exist—the glacial and the interglacial; however, for the low summer insolation corresponding to 115 kyr BP we find only one, glacial, equilibrium state, while for the high summer insolation corresponding to 126 kyr BP only an interglacial state exists in the model.

We can get some sense of the simplification of the EMIC from the resolution:

The atmosphere, land-surface and terrestrial vegetation models employ the same grid with latitudinal resolution of 10° and longitudinal resolution of approximately 51°

Their ice sheet model has much more detail, with about 500 “cells” of the ice sheet fitting into 1 cell of the land surface model.

They also comment on the general problems (so far) with climate models trying to produce ice ages:

We speculate that the failure of some climate models to successfully simulate a glacial inception is due to their coarse spatial resolution or climate biases, that could shift their threshold values for the summer insolation, corresponding to the transition from interglacial to glacial climate state, beyond the realistic range of orbital parameters.

Another important factor determining the threshold value of the bifurcation transition is the albedo of snow.

In our model, a reduction of averaged snow albedo by only 10% prevents the rapid onset of glaciation on the Northern Hemisphere under any orbital configuration that occurred during the Quaternary. It is worth noting that the albedo of snow is parameterised in a rather crude way in many climate models, and might be underestimated. Moreover, as the albedo of snow strongly depends on temperature, the under-representation of high elevation areas in a coarse-scale climate model may additionally weaken the snow–albedo feedback.

Conclusion

So in this article we have reviewed a few papers from a decade or so ago that have turned the earlier problems (see Part Seven) into apparent (preliminary) successes.

We have seen two papers using models of “intermediate complexity” and coarse spatial resolution that simulated the beginnings of the last ice age. And we have seen two papers which used atmospheric GCMs linked to prescribed ocean conditions that simulated perennial snow cover in critical regions 116 kyrs ago.

Definitely some progress.

But remember the note that the early energy balance models had concluded that perennial snow cover could occur due to the reduction in high latitude summer insolation – support for the “Milankovitch” hypothesis. But then the much improved – but still rudimentary – models of Rind et al 1989 and Phillipps & Held 1994 found that with the better physics and better resolution they were unable to reproduce this case. And many later models likewise.

We’ve yet to review a fully coupled GCM (atmosphere and ocean) attempting to produce the start of an ice age. In the next article we will take a look at a number of very recent papers, including Jochum et al (2012):

So far, however, fully coupled, nonflux-corrected primitive equation general circulation models (GCMs) have failed to reproduce glacial inception, the cooling and increase in snow and ice cover that leads from the warm interglacials to the cold glacial periods..

..The GCMs’ failure to recreate glacial inception [see Otieno and Bromwich (2009) for a summary] indicates a failure of either the GCMs or of Milankovitch’s hypothesis. Of course, if the hypothesis would be the culprit, one would have to wonder if climate is sufficiently understood to assemble a GCM in the first place.

We will also see that the strength of feedback mechanisms that contribute to perennial snow cover varies significantly for different papers.

And one of the biggest remaining problems is the computing power required. From Jochum et al (2012) again:

This experimental setup is not optimal, of course. Ideally one would like to integrate the model from the last interglacial, approximately 126 kya, for 10,000 years into the glacial with slowly changing orbital forcing. However, this is not affordable; a 100-yr integration of CCSM on the NCAR supercomputers takes approximately 1 month and a substantial fraction of the climate group’s computing allocation.

More on this fascinating topic very soon.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

On the causes of glacial inception at 116 kaBP, Yoshimori, Reader, Weaver & McFarlane, Climate Dynamics (2002) – paywall paper – free paper

Sensitivity of glacial inception to orbital and greenhouse gas climate forcing, Vettoretti & Peltier, Quaternary Science Reviews (2004) – paywall paper

Post-Eemian glacial inception. Part I: the impact of summer seasonal temperature bias, Vettoretti & Peltier, Journal of Climate (2003) – free paper

Post-Eemian Glacial Inception. Part II: Elements of a Cryospheric Moisture Pump, Vettoretti & Peltier, Journal of Climate (2003)

Transient simulations over the last interglacial period (126–115 kyr BP): feedback and forcing analysis, Crucifix & Loutre, Climate Dynamics (2002) – paywall paper with first 2 pages viewable for free

Transient simulation of the last glacial inception. Part I: glacial inception as a bifurcation in the climate system, Calov, Ganopolski, Claussen, Petoukhov & Greve, Climate Dynamics (2005) – paywall paper with first 2 pages viewable for free

True to Milankovitch: Glacial Inception in the New Community Climate System Model, Jochum et al, Journal of Climate (2012) – free paper

Notes

1. MOC = meridional overturning circulation. The MOC is the “Atlantic heat conveyor belt”: cold salty water in the polar region of the Atlantic sinks, forming a circulation which pulls (warmer) surface equatorial waters towards the poles.

2. Some specifics on MoBidiC from the paper to give some idea of the compromises:

MoBidiC links a zonally averaged atmosphere to a sectorial representation of the surface, i.e. each zonal band (5° in latitude) is divided into different sectors representing the main continents (Eurasia–Africa and America) and oceans (Atlantic, Pacific and Indian). Each continental sector can be partly covered by snow and similarly, each oceanic sector can be partly covered by sea ice (with possibly a covering snow layer). The atmospheric component has been described by Gallée et al. (1991), with some improvements given in Crucifix et al. (2001). It is based on a zonally averaged quasi-geostrophic formalism with two layers in the vertical and 5° resolution in latitude. The radiative transfer is computed by dividing the atmosphere into up to 15 layers.

The ocean component is based on the sectorially averaged form of the multi-level, primitive equation ocean model of Bryan (1969). This model is extensively described in Hovine and Fichefet (1994) except for some minor modifications detailed in Crucifix et al. (2001). A simple thermodynamic–dynamic sea-ice component is coupled to the ocean model. It is based on the 0-layer thermodynamic model of Semtner (1976), with modifications introduced by Harvey (1988a, 1992). A one-dimensional meridional advection scheme is used with ice velocities prescribed as in Harvey (1988a). Finally, MoBidiC includes the dynamical vegetation model VECODE developed by Brovkin et al. (1997). It is based on a continuous bioclimatic classification which describes vegetation as a composition of simple plant functional types (trees and grass). Equilibrium tree and grass fractions are parameterised as a function of climate expressed as the GDD0 index and annual precipitation. The GDD0 (growing degree days above 0) index is defined as the cumulated sum of the continental temperature for all days during which the mean temperature, expressed in degrees, is positive.
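The GDD0 index quoted above is simple enough to write down directly – a sketch of the stated definition, nothing more (the synthetic annual cycle in the example is purely illustrative):

```python
import numpy as np

def gdd0(daily_mean_temps_c):
    """Growing degree days above 0 degC: the sum of the daily mean
    continental temperature over all days on which that mean is positive."""
    t = np.asarray(daily_mean_temps_c, dtype=float)
    return t[t > 0.0].sum()

# Illustrative synthetic annual cycle: mean -5 degC, amplitude 15 degC.
days = np.arange(365)
temps = -5.0 + 15.0 * np.sin(2.0 * np.pi * (days - 80) / 365.0)
print(gdd0(temps))  # the climate index VECODE uses for tree/grass fractions
```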

MoBidiC’s simulation of the present-day climate has been discussed at length in Crucifix et al. (2002). We recall its main features. The seasonal cycle of sea ice is reasonably reproduced, with an Arctic sea-ice area ranging from 5×10⁶ km² (summer) to 15×10⁶ km² (winter), which compares favourably with present-day observations (6.2×10⁶ to 13.9×10⁶ km², respectively; Gloersen et al. 1992). Nevertheless, sea ice tends to persist too long in spring, and most of its melting occurs between June and August, which is faster than in the observations. In the Atlantic Ocean, North Atlantic Deep Water forms mainly between 45 and 60°N and is exported at a rate of 12.4 Sv to the Southern Ocean. This export rate is compatible with most estimates (e.g. Schmitz 1995). Furthermore, the main water masses of the ocean are well reproduced, with recirculation of Antarctic Bottom Water below the North Atlantic Deep Water and formation of Antarctic Intermediate Water. However, no convection occurs in the Atlantic north of 60°N, contrary to the real world. As a consequence, continental high latitudes suffer from a cold bias, up to 5 °C in winter. Finally, the treeline is at around 65°N, which is roughly comparable to zonally averaged observations (e.g. MacDonald et al. 2000), but experiments made with this model to study the Holocene climate revealed its tendency to overestimate the amplitude of the treeline shift in response to the astronomical forcing (Crucifix et al. 2002).


In Part Six we looked at some of the different theories that confusingly go by the same name. The “Milankovitch” theories.

The essence of these many theories: even though the changes in “tilt” of the earth’s axis and the timing of closest approach to the sun don’t change the total annual solar energy incident on the climate, the changing distribution of that energy causes massive climate change over thousands of years.

One of the “classic” hypotheses is increases in July insolation at 65ºN cause the ice sheets to melt. Or conversely, reductions in July insolation at 65ºN cause the ice sheets to grow.
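To put numbers on that hypothesis, here is a minimal sketch of the standard daily-mean insolation formula (textbook spherical astronomy, not taken from any of the papers reviewed; for simplicity the Earth–Sun distance is held at 1 AU, so eccentricity and precession effects are ignored):

```python
import numpy as np

S0 = 1361.0  # solar "constant" in W/m^2 (modern satellite-era value)

def daily_mean_insolation(lat_deg, decl_deg, dist_au=1.0):
    """Daily-mean top-of-atmosphere insolation (W/m^2) at a given latitude
    for a given solar declination."""
    lat, decl = np.radians(lat_deg), np.radians(decl_deg)
    x = -np.tan(lat) * np.tan(decl)
    if x >= 1.0:    # polar night: the sun never rises
        return 0.0
    h0 = np.pi if x <= -1.0 else np.arccos(x)  # hour angle at sunset
    return (S0 / (np.pi * dist_au**2)) * (
        h0 * np.sin(lat) * np.sin(decl)
        + np.cos(lat) * np.cos(decl) * np.sin(h0)
    )

# Summer solstice at 65N: modern obliquity (~23.4 deg) vs a low value (22 deg).
# The ~20 W/m^2 difference is the flavor of a "cool summer" orbital forcing.
print(daily_mean_insolation(65.0, 23.44))  # about 490 W/m^2
print(daily_mean_insolation(65.0, 22.0))   # about 470 W/m^2
```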

The hypotheses described can sound quite convincing. Well, one at a time they can sound quite convincing – when all of the “Milankovitch theories” are lined up alongside each other they start to sound more like hopeful ideas.

In this article we will start to consider what GCMs can do in falsifying these theories. For some basics on GCMs, take a look at Models On – and Off – the Catwalk.

Many readers of this blog have varying degrees of suspicion about GCMs. But as regular commenter DeWitt Payne often says, “all models are wrong, but some are useful“, that is, none are perfect, but some can shed light on the climate mechanisms we want to understand.

In fact, GCMs are essential to understand many climate mechanisms and essential to understand the interaction between different parts of the climate system.

Digression – Ice Sheets and Positive Feedback

For beginners, a quick digression into ice sheets and positive feedback. Melting and forming of ice & snow is indisputably a positive feedback within the climate system.

Snow reflects around 60-90% of incident solar radiation. Water reflects less than 10% and most ground surfaces reflect less than 25%.  If a region heats up sufficiently, ice and snow melt. Which means less solar radiation gets reflected, which means more radiation is absorbed, which means the region heats up some more. The effect “feeds itself”. It’s a positive feedback.

In the annual cycle it doesn’t lead to any kind of thermal runaway or a snowball earth because the solar radiation goes through a much bigger cycle.

Over much longer time periods it’s conceivable that (regional) melting of ice sheets leads to more (regional) solar radiation absorbed, causing more melting of ice sheets, which leads to yet more melting. And the converse for growth of ice sheets. The reason it’s conceivable is that it’s just the same mechanism.
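For readers who like to see the mechanism as equations: here is a minimal zero-dimensional energy balance sketch with a temperature-dependent albedo – my own construction with illustrative textbook parameters, not a model from any of the papers discussed – showing how ice-albedo feedback alone can produce two stable climate states:

```python
import numpy as np

S0 = 1361.0          # solar constant, W/m^2
A, B = 203.3, 2.09   # linearized outgoing longwave: A + B*T (T in deg C)

def albedo(T):
    """Smooth ramp from icy (0.6) to ice-free (0.3) around T = -5 deg C."""
    return 0.6 - 0.3 / (1.0 + np.exp(-(T + 5.0)))

def net_flux(T):
    """Absorbed solar minus outgoing longwave (W/m^2); zero at equilibrium."""
    return 0.25 * S0 * (1.0 - albedo(T)) - (A + B * T)

# Bracket the equilibria by scanning for sign changes of the net flux:
T = np.linspace(-60.0, 40.0, 2001)
f = net_flux(T)
print(T[:-1][np.sign(f[:-1]) != np.sign(f[1:])])
# Three roots: a cold stable state, an unstable middle state and a warm
# stable state - the same flavor of bistability discussed in this series.
```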

Digression over.

Why GCMs?

The only alternative is to do the calculation in your head or on paper. Take a piece of paper, plot a graph of the incident radiation at all latitudes vs the time period we are interested in – say 150 kyrs ago through to 100 kyrs – now work out by year, decade or century, how much ice melts. Work out the new albedo for each region. Calculate the change in absorbed radiation. Calculate the regional temperature changes. Calculate the new heat transfer from low to high latitudes (lots of heat is exported from the equator to the poles via the atmosphere and the ocean) due to the latitudinal temperature gradient, the water vapor transported, and the rainfall and snowfall. Don’t forget to track ice melt at high latitudes and its impact on the Meridional Overturning Circulation (MOC), which drives a significant part of the heat transfer from the equator to poles. Step to the next year, decade or century and repeat.

How are those calculations coming along?

A GCM uses some fundamental physics equations like energy balance and mass balance. It uses a lot of parameterized equations to calculate things like heat transfer from the surface to the atmosphere dependent on the wind speed, cloud formation, momentum transfer from wind to ocean, etc. Whatever we have in a GCM is better than trying to do it on a sheet of paper (and in the end you will be using the same equations with much less spatial and time granularity).

If we are interested in the “classic” Milankovitch theory mentioned above we need to find out the impact of an increase of 50 W/m² (over 10,000 years) in summer at 65ºN – see figure 1 in Ghosts of Climates Past – Part Five – Obliquity & Precession Changes. What effect does the simultaneous spring reduction at 65ºN have? Do these two effects cancel each other out? Is the summer increase more significant than the spring reduction?

How quickly does the circulation lessen the impact? The equator-pole export of heat is driven by the temperature difference – as with all heat transfer. So if the northern polar region is heating up due to ice melting, the ocean and atmospheric circulation will change and less heat will be driven to the poles. What effect does this have?

How quickly does an ice sheet melt and form? Can the increases and reductions in solar radiation absorbed explain the massive ice sheet growth and shrinking?

If the positive feedback is so strong how does an ice age terminate and how does it restart 10,000 years later?

We can only assess all of these with a general circulation model.

There is a problem though. A typical GCM run is a few decades or a century. We need a 10,000 – 50,000 year run with a GCM. So we need 500x the computing power – or we have to reduce the complexity of the model.

Alternatively we can run a model to equilibrium at a particular time in history to see what effect the historical parameters had on the changes we are interested in.

Early Work

Many readers of this blog are frequently mystified by my choosing “old work” to illuminate a topic. Why not pick the most up to date research?

Because the older papers usually explain the problem more clearly and give more detail on the approach to the problem.

The latest papers are written for researchers in the field and assume most of the preceding knowledge – knowledge that everyone in that field already has. A good example is the Myhre et al (1998) paper on the “logarithmic formula” for radiative forcing with increasing CO2, cited by the IPCC TAR in 2001. This paper has mystified so many bloggers. I have read many blog articles where the blog authors and commenters throw up their metaphorical hands at the lack of justification for the contents of this paper. However, it is not mystifying if you are familiar with the physics of radiative transfer and the papers from the 1970s through the 1990s calculating the radiative imbalance resulting from more “greenhouse” gases.
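For reference, the headline result of that paper is the simplified expression ΔF = 5.35 ln(C/C₀) W/m², where C is the CO2 concentration and C₀ a reference concentration – a curve fit to detailed radiative transfer calculations, not a first-principles law, which is exactly why it looks unjustified when read in isolation.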

It’s all about the context.

We’ll take a walk through a few decades of GCMs..

We’ll start with Rind, Peteet & Kukla (1989). They review the classic thinking on the problem:

Kukla et al. [1981] described how the orbital configurations seemed to match up with gross climate variations for the last 150 millennia or so. As a result of these and other geological studies, the consensus exists that orbital variations are responsible for initiating glacial and interglacial climatic regimes. The most obvious difference between these two regimes, the existence of subpolar continental ice sheets, appears related to solar insolation at northern hemisphere high latitudes in summer. For example, solar insolation at these latitudes in August and September was reduced, compared with today’s values, around 116,000 years before the present (116 kyr B.P.), during the time when ice growth apparently began, and it was increased around 10 kyr B.P. during a time of rapid ice sheet retreat [e.g., Berger, 1978] (Figure 1).

And the question of whether basic physics can link the supposed cause and effect:

Are the solar radiation variations themselves sufficient to produce or destroy the continental ice sheets?

The July solar radiation incident at 50ºN and 60ºN over the past 170 kyr is shown in Figure 1, along with August and September values at 50ºN (as shown by the example for July, values at the various latitudes of concern for ice age initiation all have similar insolation fluctuations). The peak variations are of the order of 10%, which if translated with an equal percentage into surface air temperature changes would be of the order of 30ºC. This would certainly be sufficient to allow snow to remain throughout the summer in extreme northern portions of North America, where July surface temperatures today are only about 10ºC above freezing.

However, the direct translation ignores all of the other features which influence surface air temperature during summer, such as cloud cover and albedo variations, long wave radiation, surface flux effects, and advection.

[Emphasis added].
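Presumably the arithmetic behind that 30ºC is just the 10% applied to an absolute temperature: 0.10 × 300 K ≈ 30 K, taking a summer surface temperature of around 300 K (27ºC) – which is precisely the naive translation the authors go on to qualify.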

Various energy balance climate models have been used to assess how much cooling would be associated with changed orbital parameters. As the initiation of ice growth will alter the surface albedo and provide feedback to the climate change, the models also have to include crude estimates of how ice cover will change with climate. With the proper tuning of parameters, some of which is justified on observational grounds, the models can be made to simulate the gross glacial/interglacial climate changes.

However, these models do not calculate from first principles all the various influences on surface air temperature noted above, nor do they contain a hydrologic cycle which would allow snow cover to be generated or increase. The actual processes associated with allowing snow cover to remain through the summer will involve complex hydrologic and thermal influences, for which simple models can only provide gross approximations.

They comment then on the practical problems of using GCMs for 10 kyr runs that we noted above. The problem is worked around by using prescribed values for certain parameters and by using a coarse grid – 8° x 10° and 9 vertical layers.

The various GCMs runs are typical of the approach to using GCMs to “figure stuff out” – try different runs with different things changed to see what variations have the most impact and what variations, if any, result in the most realistic answers:

From Rind et al 1989

We have thus used the Goddard Institute for Space Studies (GISS) GCM for a series of experiments in which orbital parameters, atmospheric composition, and sea surface temperatures are changed. We examine how the various influences affect snow cover and low-elevation ice sheets in regions of the northern hemisphere where ice existed at the Last Glacial Maximum (LGM). As we show, the GCM is generally incapable of simulating the beginnings of ice sheet growth, or of maintaining low-elevation ice sheets, regardless of the orbital parameters or sea surface temperatures used.

[Emphasis added].

And the result:

The experiments indicate there is a wide discrepancy between the model’s response to Milankovitch perturbations and the geophysical evidence of ice sheet initiation. As the model failed to grow or sustain low-altitude ice during the time of high-latitude maximum solar radiation reduction (120-110 kyrB.P.), it is unlikely it could have done so at any other time within the last several hundred thousand years.

If the model results are correct, it indicates that the growth of ice occurred in an extremely ablative environment, and thus demanded some complicated strategy, or else some other climate forcing occurred in addition to the orbital variation influence (and CO2 reduction), which would imply we do not really understand the cause of the ice ages and the Milankovitch connection. If the model is not nearly sensitive enough to climate forcing, it could have implications for projections of future climate change.

[Emphasis added].

The basic model experiment on the ability of Milankovitch variations by themselves to generate ice sheets in a GCM, experiment 2, shows that in the GISS GCM even exaggerated summer radiation deficits are not sufficient. If widespread ice sheets at 10-m elevation are inserted, CO2 reduced by 70 ppm, sea ice increased to full ice age conditions, and sea surface temperatures reduced to CLIMAP 18 kyr BP estimates or below, the model is just barely able to keep these ice sheets from melting in restricted regions. How likely are these results to represent the actual state of affairs?

That was the state of GCMs in 1989.

Phillipps & Held (1994) ran into basically the same problem. This is the famous Isaac Held, who has written extensively on climate dynamics, water vapor feedback and GCMs, and runs an excellent blog that is well worth reading.

While paleoclimatic records provide considerable evidence in support of the astronomical, or Milankovitch, theory of the ice ages (Hays et al. 1976), the mechanisms by which the orbital changes influence the climate are still poorly understood..

..For this study we utilize the atmosphere-mixed layer ocean model.. In examining this model’s sensitivity to different orbital parameter combinations, we have compared three numerical experiments.

They describe the comparison models:

Our starting point was to choose the two experiments that are likely to generate the largest differences in climate, given the range of the parameter variations computed to have occurred over the past few hundred thousand years. The eccentricity is set equal to 0.04 in both cases. This is considerably larger than the present value of 0.016 but comparable to that which existed from ~90 to 150k BP.

In the first experiment, the perihelion is located at NH summer solstice and the obliquity is set at the high value of 24°.

In the second case, perihelion is at NH winter solstice and the obliquity equals 22°.

The perihelion and obliquity are both favorable for warm northern summers in the first case, and for cool northern summers in the second. These experiments are referred to as WS and CS respectively.

We then performed another calculation to determine how much of the difference between these two integrations is due to the perihelion shift and how much to the change in obliquity. This third model has perihelion at summer solstice, but a low value (22°) of the obliquity. The eccentricity is still set at 0.04. This experiment is referred to as WS22.

Sadly:

We find that the favorable orbital configuration is far from being able to maintain snow cover throughout the summer anywhere in North America..

..Despite the large temperature changes on land the CS experiment does not generate any new regions of permanent snow cover over the NH. All snow cover melts away completely in the summer. Thus, the model as presently constituted is unable to initiate the growth of ice sheets from orbital perturbations alone. This is consistent with the results of Rind with a GCM (Rind et al. 1989)..

In the next article we will look at more favorable results in the 2000’s.

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factor of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – remapping a deep ocean core dataset and updating the previous article

Seventeen – Proxies under Water I – explaining the isotopic proxies and what they actually measure

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

Can Milankovitch Orbital Variations Initiate the Growth of Ice Sheets in a General Circulation Model?, Rind, Peteet & Kukla, JGR (1989) – behind a paywall, email me if you want to read it, scienceofdoom – you know what goes here – gmail.com

Response to Orbital Perturbations in an Atmospheric Model Coupled to a Slab Ocean, Phillipps & Held, Journal of Climate (1994) – free paper

New estimates of radiative forcing due to well-mixed greenhouse gases, Myhre et al, GRL (1998)


In Wonderland, Radiative Forcing and the Rate of Inflation we looked at the definition of radiative forcing and a few concepts around it:

  • why the instantaneous forcing is different from the adjusted forcing
  • what adjusted forcing is and why it’s a more useful concept
  • why the definition of the tropopause affects the value
  • GCM results usually don’t use radiative forcing as an input

In this article we will look at some results using the Wonderland model.

Remember the Wonderland model is not the earth. But the same is also true of “real” GCMs with geographical boundaries that match the earth as we know it. They are not the earth either. All models have limitations. This is easy to understand in principle. It is challenging to understand in the specifics of where the limitations are, even for specialists – and especially for non-specialists.

What the Wonderland model provides is a coarse geography with earth-like layout of land and ocean, plus of course, physics that follows the basic equations. And using this model we can get a sense of how radiative forcing is related to temperature changes when the same value of radiative forcing is applied via different mechanisms.

In the 1997 paper I think that Hansen, Sato & Ruedy did a decent job of explaining the limitations of radiative forcing, at least as far as the Wonderland climate model is able to assist us with that understanding. Remember as well that, in general, results we see from GCMs do not use radiative forcing. Instead they calculate from first principles – or parameterized first principles.

Doubling CO2

Now there’s a lot in this first figure, it can be a bit overwhelming. We’ll take it one step at a time. We double CO2 overnight – in Wonderland – and we see various results. The left half of the figure is all about flux while the right half is all about temperature:

From Hansen et al 1997

Figure 1 – Green text added

On the top line, the first two graphs are the net flux change, as a function of height and latitude. First left – instantaneous; second left – adjusted. These two cases were explained in the last article.

The second left is effectively the “radiative forcing”, and we can see that above the tropopause (at about 200 mbar) the net flux change with height is constant. This is because the stratosphere has come into radiative balance – refer to the last article for more explanation. On the right hand side, with all feedbacks from this one change in Wonderland, we can see the famous predicted “tropospheric hot spot” and the cooling of the stratosphere.

In the bottom two rows on the right we see the expected temperature changes:

  • second row – change in temperature as a function of latitude and season (where temperature is averaged across all longitudes)
  • third row – change in temperature as a function of latitude and longitude (averaged annually)

It’s interesting to see the larger temperature increases predicted near the poles. I’m not sure I really understand the mechanisms driving that. Note that the radiative forcing is generally higher in the tropics and lower at the poles, yet the temperature change is the other way round.

Increasing Solar Radiation by 2%

Now let’s take a look at a comparison exercise, increasing solar radiation by 2%.

The responses to these comparable global forcings, 2xCO2 & +2% S0, are similar in a gross sense, as found by previous investigators. However, as we show in the sections below, the similarity of the responses is partly accidental, a cancellation of two contrary effects. We show in section 5 that the climate model (and presumably the real world) is much more sensitive to a forcing at high latitudes than to a forcing at low latitudes; this tends to cause a greater response for 2xCO2 (compare figures 4c & 4g); but the climate is also more sensitive to a forcing that acts at the surface and lower troposphere than to a forcing which acts higher in the troposphere; this favors the solar forcing (compare figures 4a & 4e), partially offsetting the latitudinal sensitivity.

We saw figure 4 in the previous article, repeated again here for reference:

From Hansen et al (1997)

Figure 2

In case the above comment is not clear, absorbed solar radiation is more concentrated in the tropics and a minimum at the poles, whereas CO2 is evenly distributed (a “well-mixed greenhouse gas”). So a similar average radiative change will cause a more tropical effect for solar but a more even effect for CO2.

We can see that clearly in the comparable graphic for a solar increase of 2%:

From Hansen et al (1997)

Figure 3 – Green text added

We see that the change in net flux is higher at the surface than the 2xCO2 case, and is much more concentrated in the tropics.

We also see the predicted tropospheric hot spot looking pretty similar to the 2xCO2 tropospheric hot spot (see note 1).

But unlike the cooler stratosphere of the 2xCO2 case, we see an unchanging stratosphere for this increase in solar irradiation.

These same points can also be seen in figure 2 above (figure 4 from Hansen et al).

Here is the table which compares radiative forcing (instantaneous and adjusted), no feedback temperature change, and full-GCM calculated temperature change for doubling CO2, increasing solar by 2% and reducing solar by 2%:

From Hansen et al 1997

Figure 4 – Green text added

The value R (far right of table) is the ratio of the predicted temperature change from a given forcing divided by the predicted temperature change from the 2% increase in solar radiation.

Now the paper also includes some ozone changes which are pretty interesting, but won’t be discussed here (unless we have questions from people who have read the paper of course).

“Ghost” Forcings

The authors then go on to consider what they call ghost forcings:

How does the climate response depend on the time and place at which a forcing is applied? The forcings considered above all have complex spatial and temporal variations. For example, the change of solar irradiance varies with time of day, season, latitude, and even longitude because of zonal variations in ground albedo and cloud cover. We would like a simpler test forcing.

We define a “ghost” forcing as an arbitrary heating added to the radiative source term in the energy equation.. The forcing, in effect, appears magically from outer space at an atmospheric level, latitude range, season and time of day. Usually we choose a ghost forcing with a global and annual mean of 4 W/m², making it comparable to the 2xCO2 and +2% S0 experiments.
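To make this concrete, here is a minimal sketch of a ghost forcing in a toy model – a surface plus a single atmospheric layer of emissivity ε, nothing like Hansen et al's GCM. The 4 W/m² "appears magically" either at the surface or within the layer; the values S = 240 W/m² and ε = 0.8 are illustrative round numbers of my own:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2 K^4

def surface_temp(ghost_surface=0.0, ghost_layer=0.0, S=240.0, eps=0.8):
    """Equilibrium Ts for a toy surface + one-layer atmosphere.

    Surface balance:  S + ghost_surface + eps*SIGMA*Ta^4 = SIGMA*Ts^4
    Layer balance:    eps*SIGMA*Ts^4 + ghost_layer = 2*eps*SIGMA*Ta^4
    Eliminating Ta^4: SIGMA*Ts^4*(1 - eps/2) = S + ghost_surface + ghost_layer/2
    """
    return ((S + ghost_surface + ghost_layer / 2) / (SIGMA * (1 - eps / 2))) ** 0.25

T0 = surface_temp()
print(surface_temp(ghost_surface=4.0) - T0)   # ~1.2 K for forcing at the surface
print(surface_temp(ghost_layer=4.0) - T0)     # ~0.6 K for the same forcing aloft
```

Even this cartoon reproduces the qualitative result: the same 4 W/m² gives a different surface response depending on where it is deposited, which is the whole point of the ghost forcing experiments.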

In the following table we see the results of various experiments:

From Hansen et al (1997)

Figure 5 – Click to Expand

We note that the feedback factor for the ghost forcing varies with the altitude of the forcing by about a factor of two. We also note that a substantial surface temperature response is obtained even when the forcing is located entirely within the stratosphere. Analysis of these results requires that we first quantify the effect of cloud changes. However, the results can be understood qualitatively as follows.

Consider ΔTs in the case of fixed clouds. As the forcing is added to successively higher layers, there are two principal competing effects. First, as the heating moves higher, a larger fraction of the energy is radiated directly to space without warming the surface, causing ΔTs to decline as the altitude of the forcing increases. However, second, warming of a given level allows more water vapor to exist there, and at the higher levels water vapor is a particularly effective greenhouse gas. The net result is that ΔTs tends to decline with the altitude of the forcing, but it has a relative maximum near the tropopause.

When clouds are free to change the surface temperature change depends even more on the altitude of the forcing (figure 8). The principal mechanism is that heating of a given layer tends to decrease large-scale cloud cover within that layer. The dominant effect of decreased low-level clouds is a reduced planetary albedo, thus a warming, while the dominant effect of decreased high clouds is a reduced greenhouse effect, thus a cooling. However, the cloud cover, the cloud cover changes and the surface temperature sensitivity to changes may depend on characteristics of the forcing other than altitude, e.g. latitude, so quantitative evaluation requires detailed examination of the cloud changes (section 6).

Conclusion

Radiative forcing is a useful concept which gives a headline indication of the imbalance in the climate's energy balance caused by something like a change in "greenhouse" gas concentration.

GCM calculations of temperature change over a few centuries do vary significantly with the exact nature of the forcing – primarily its vertical and geographical distribution. This means that a calculated radiative forcing of, say, 1 W/m² from two different mechanisms (e.g. ozone and CFCs) would (according to GCMs) not necessarily produce the same surface temperature change.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Notes

Note 1: The reason for the predicted hot spot is that more water vapor causes a lower (moist adiabatic) lapse rate – which increases the temperature higher up in the troposphere relative to the surface. This change is concentrated in the tropics because the tropics are hotter and, therefore, have much more water vapor. The dry polar regions cannot get a lapse rate change from more water vapor because the effect is so small.

Any increase in surface temperature is predicted to cause this same change.
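For readers who like to see the mechanism in numbers, here is a rough sketch (my own, not from any of the papers) that integrates the standard moist adiabatic lapse rate upward from two surface temperatures; Bolton's formula for saturation vapor pressure and textbook constants are assumed:

```python
import numpy as np

G, R_D, C_P = 9.81, 287.0, 1004.0   # gravity, dry air gas constant, heat capacity
L_V, EPS = 2.5e6, 0.622             # latent heat (J/kg), Rd/Rv

def e_sat(T):
    """Saturation vapor pressure (Pa), Bolton's formula."""
    Tc = T - 273.15
    return 611.2 * np.exp(17.67 * Tc / (Tc + 243.5))

def moist_lapse(T, p):
    """Moist adiabatic lapse rate (K/m), standard approximate formula."""
    rs = EPS * e_sat(T) / (p - e_sat(T))          # saturation mixing ratio
    return G * (1 + L_V * rs / (R_D * T)) / (C_P + L_V**2 * rs * EPS / (R_D * T**2))

def temp_at(p_top, T_surf, p=1.0e5, dp=-50.0):
    """Integrate T along the moist adiabat from the surface up to p_top."""
    T = T_surf
    while p > p_top:
        T += moist_lapse(T, p) * R_D * T / (G * p) * dp   # hydrostatic conversion
        p += dp
    return T

dT_surface = 1.0
dT_250hPa = temp_at(25000.0, 300.0 + dT_surface) - temp_at(25000.0, 300.0)
print(dT_250hPa / dT_surface)   # comes out well above 1: the "hot spot"
```

For these tropical inputs the warming at 250 hPa is roughly double the surface warming; repeat the calculation with a cold surface temperature and the amplification largely disappears, consistent with the change being concentrated in the tropics.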

From my limited research, the idealized picture of the hot spot shown above is not actually what the models produce. The top graph is the "CO2 only" case, and the bottom graph is "CO2 + aerosols" – the second graph is obviously closer to the real case:

From Santer et al (1996)

Many people have asked for my comment on the hot spot but, apart from offering an opinion, I haven't spent enough time researching the topic to say anything useful. From time to time I do dig in, but it seems that there are about 20 papers that need to be read first. Unfortunately many of them are heavy on statistics and my interest wanes.


Radiative forcing is a “useful” concept in climate science.

But while it informs, it also obscures, and many people are confused about its applicability. Many people are also confused about why stratospheric adjustment takes place and what that means. And why does the definition of the tropopause – a concept without one definite meaning – affect this all-important concept of radiative forcing? Surely there is a definition which is clear and unambiguous?

So there are a few things we will attempt to understand in this article.

The Rate of Inflation and Other Stories

The value of radiative forcing (however it is derived) has the same usefulness as the rate of inflation, or the exchange rate as measured by a basket of currencies (with relevant apologies to all economists reading this article).

The rate of inflation tells you something about how prices are increasing but in the end it is a complex set of relationships reduced to a single KPI.

It's quite possible for the rate of inflation to have the same value in two different years, and yet one important group within the country in question sees no increase in their costs in the first year but a significant increase in the second. That's the problem with reducing a complex set of relationships to one number.

However, the rate of inflation apparently has some value despite being a single KPI. And so it is with radiative forcing.

The good news is, when we get the results from a GCM, we can be sure the value of radiative forcing wasn’t actually used. Radiative forcing is more to inform the public and penniless climate scientists who don’t have access to a GCM.

Wonderland, the Simple Climate Model

The more precision you put into a GCM, the slower it runs, so comparing hundreds of different cases can be impossible. Such is the dilemma of a climate scientist with access to a supercomputer running a GCM – and a long queue of funded but finger-tapping colleagues behind him or her.

Wonderland is a compromise model and is described in Wonderland Climate Model by Hansen et al (1997). This model includes some basic geography that is similar to the earth as we know it. It is used to provide insight into radiative forcing basics.

The authors explain:

A climate model provides a tool which allows us to think about, analyze, and experiment with a facsimile of the climate system in ways which we could not or would not want to experiment with the real world. As such, climate modeling is complementary to basic theory, laboratory experiments and global observations.

Each of these tools has severe limitations, but together, especially in iterative combinations they allow our understanding to advance. Climate models, even though very imperfect, are capable of containing much of the complexity of the real world and the fundamental principles from which that complexity arises.

Thus models can help structure the discussions and define needed observations, experiments and theoretical work. For this purpose it is desirable that the stable of modeling tools include global climate models which are fast enough to allow the user to play games, to make mistakes and rerun the experiments, to run experiments covering hundreds or thousands of simulated years, and to make the many model runs needed to explore results over the full range of key parameters. Thus there is great incentive for development of a highly efficient global climate model, i.e., a model which numerically solves the fundamental equations for atmospheric structure and motion.

Here is Wonderland, from a geographical point of view:

From Hansen et al (1997)

Figure 1

Wonderland is then used in Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997). The authors say:

We examine the sensitivity of a climate model to a wide range of radiative forcings, including change of solar irradiance, atmospheric CO2, O3, CFCs, clouds, aerosols, surface albedo, and “ghost” forcing introduced at arbitrary heights, latitudes, longitudes, season, and times of day.

We show that, in general, the climate response, specifically the global mean temperature change, is sensitive to the altitude, latitude, and nature of the forcing; that is, the response to a given forcing can vary by 50% or more depending on the characteristics of the forcing other than its magnitude measured in watts per square meter.

In other words, radiative forcing has its limitations.

Definition of Radiative Forcing

The authors explain a few different approaches to the definition of radiative forcing. If we can understand the difference between these definitions we will have a much clearer view of atmospheric physics. From here, the quotes and figures will be from Radiative Forcing and Climate Response, Hansen, Sato & Ruedy (1997) unless otherwise stated.

Readers who have seen the IPCC 2001 (TAR) definition of radiative forcing may understand the intent behind this 1997 paper. Up until that time different researchers used inconsistent definitions.

The authors say:

The simplest useful definition of radiative forcing is the instantaneous flux change at the tropopause. This is easy to compute because it does not require iterations. This forcing is called “mode A” by WMO [1992]. We refer to this forcing as the “instantaneous forcing”, Fi, using the nomenclature of Hansen et al [1993c]. In a less meaningful alternative, Fi is computed at the top of the atmosphere; we include calculations of this alternative for 2xCO2 and +2% S0 for the sake of comparison.

An improved measure of radiative forcing is obtained by allowing the stratospheric temperature to adjust to the presence of the perturber, to a radiative equilibrium profile, with the tropospheric temperature held fixed. This forcing is called “mode B” by WMO [1992]; we refer to it here as the “adjusted forcing”, Fa [Hansen et al 1993c].

The rationale for using the adjusted forcing is that the relaxation time of the stratosphere is only several months [Manabe & Strickler, 1964], compared to several decades for the troposphere [Hansen et al 1985], and thus the adjusted forcing should be a better measure of the expected climate response for forcings which are present at least several months..The adjusted forcing can be calculated at the top of the atmosphere because the net radiative flux is constant throughout the stratosphere in radiative equilibrium. The calculated Fa depends on where the tropopause level is specified. We specify this level as 100 mbar from the equator to 40° latitude, changing to 189 mbar there, and then increasing linearly to 300 mbar at the poles.

[Emphasis added].

This explanation might seem confusing or abstract, so I will try to explain.

Let's say we have a sudden increase in a particular GHG (see note 1). With a given temperature profile and concentration profile of absorbers, we can calculate the change in radiative transfer through the atmosphere with little uncertainty. This means we can see immediately the reduction in outgoing longwave radiation (OLR), and the change in absorption of solar radiation.

Now the question becomes – what happens in the next 1 day, 1 month, 1 year, 10 years, 100 years?

Small changes in net radiation (solar absorbed – OLR) take many decades to produce their full equilibrium effect at the surface because of the thermal inertia of the oceans (the heat capacity is very high).

The issue everyone found when they reviewed this problem was that the radiative forcing on day 1 differed from the radiative forcing on day 90.

Why?

Because the changes in net absorption above the tropopause (the place where convection stops – we will review that definition a little later) affect the temperature of the stratosphere very quickly. So the stratosphere quickly adjusts to the new world order, and of course this changes the radiative forcing. It's as if (in non-technical terms) the stratosphere responded very quickly and "bounced out" some of the radiative forcing in the first month or two.

So the stratosphere, with little heat capacity, quickly adapts to the radiative changes and moves back into radiative equilibrium. This changes the "radiative forcing", so if we want to work out the changes over the next 10-100 years there is little point in considering the radiative forcing on day 1. If the quick responders sort themselves out within 60 days or so, we can wait for them to settle down and pick the radiative forcing number after 90-120 days.

This is the idea behind the definition.
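Here is a minimal numerical sketch of that idea – a toy grey "stratosphere" sitting above a fixed troposphere, not anything resembling Hansen et al's calculation. The layer absorbs a little upwelling longwave (emissivity ε), absorbs an assumed 10 W/m² of solar (standing in for ozone heating), and we increase ε to mimic adding CO2:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2 K^4
F_UP = 240.0      # upwelling LW at the tropopause, fixed (troposphere held fixed)
A_SOLAR = 10.0    # solar absorbed by the stratosphere ("ozone heating"), assumed

def strat_temp(eps):
    # layer in radiative equilibrium: eps*F_UP + A_SOLAR = 2*eps*SIGMA*T^4
    return ((eps * F_UP + A_SOLAR) / (2 * eps * SIGMA)) ** 0.25

eps0, eps1 = 0.10, 0.12                    # absorber amount, before and after
T0, T1 = strat_temp(eps0), strat_temp(eps1)

# forcing = change in downward LW at the tropopause (solar terms unchanged)
Fi = (eps1 - eps0) * SIGMA * T0**4                  # instantaneous: T held at T0
Fa = eps1 * SIGMA * T1**4 - eps0 * SIGMA * T0**4    # adjusted: layer re-equilibrated

olr0 = (1 - eps0) * F_UP + eps0 * SIGMA * T0**4     # TOA check, before
olr1 = (1 - eps1) * F_UP + eps1 * SIGMA * T1**4     # TOA check, after adjustment

print(T0, T1)          # ~234 K -> ~231 K: the stratosphere cools on adjustment
print(Fi, Fa)          # ~3.4 -> ~2.4 W/m^2: adjustment "bounces out" some forcing
print(olr0 - olr1)     # equals Fa: the adjusted forcing is the same at the TOA
```

In this toy the stratosphere cools by about 3 K and the day-one forcing relaxes from 3.4 to 2.4 W/m² – the same pattern described above.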

Let's look at this in pictures. In the figure below the top row is for doubling CO2 (the row below is for increasing solar by 2%), and the top left panel shows the flux change through the atmosphere, both instantaneous and adjusted. The red line is the "adjusted" value:

From Radiative Forcing & Climate Response, Hansen et al (1997)

Figure 2 – Click to expand

This red line is the value of flux change after the stratosphere has adjusted to the radiative forcing. Why is the red line vertical?

The reason is simple.

The stratosphere is now in temperature equilibrium because energy in = energy out at all heights. With no convection in the stratosphere this is the same as radiation absorbed = radiation emitted at all heights. Therefore, the net flux change with height must be zero.
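In symbols (my shorthand, not the paper's): the radiative heating rate at any level is the net flux convergence, so radiative equilibrium at every stratospheric level means

$$\frac{dF_{net}(z)}{dz} = 0 \quad\Rightarrow\quad F_{net}(z) = \text{constant throughout the stratosphere}$$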

If we plotted separately the up and down flux we would find that they have a slope, but the slope of the up and down would be the same. Net absorption of radiation going up balances net emission of radiation going down – more on this in Visualizing Atmospheric Radiation – Part Eleven – Stratospheric Cooling.

Another important point: we can see in the top left graph that the instantaneous net flux at the tropopause (i.e., the net flux on day one) is different from the net flux at the tropopause after adjustment (i.e., after the stratosphere has come into radiative balance).

But once the stratosphere has come into balance we could use the TOA net flux, or the tropopause net flux – it would not matter because both are the same.

Result of Radiative Forcing

Now let’s look at 4 different ways to think about radiative forcing, using the temperature profile as our guide to what is happening:

From Radiative Forcing & Climate Response, Hansen et al (1997)

Figure 3 – Click to expand

On the left, case a, instantaneous forcing. This is the result of the change in net radiation absorbed vs height on day one. Temperature doesn’t change instantaneously so it’s nice and simple.

On the next graph, case b, adjusted forcing. This is the temperature change resulting from net radiation absorbed after the stratosphere has come into equilibrium with the new world order, but the troposphere is held fixed. So by definition the tropospheric temperature is identical in case b to case a.

On the next graph, case c, no feedback response of temperature. Now we allow the tropospheric temperature to change until such time as the net flux at the tropopause has gone back to zero. But during this adjustment we have held water vapor, clouds and the lapse rate in the troposphere at the same values as before the radiative forcing.

On the last graph, case d, all feedback response of temperature. Now we let the GCM take over and calculate how water vapor, clouds and the lapse rate respond. And as with case c, we wait until the temperature has increased sufficiently that net tropopause flux has gone back to zero.

What Definition for the Tropopause and Why does it Matter?

We’ve seen that if we use adjusted forcing that the radiative forcing is the same at TOA and at the tropopause. And the adjusted forcing is the IPCC 2001 definition. So why use the forcing at the tropopause? And why does the definition of the tropopause matter?

The first question is easy. We could use the forcing at TOA; it wouldn't matter, so long as we have allowed the stratosphere to come into radiative equilibrium (which takes a few months). As far as I can tell – my opinion – it's more about the history of how we arrived at this point. If you want to run a climate model to calculate the radiative forcing without waiting for stratospheric equilibrium then, on day one, the instantaneous forcing at the tropopause is usually pretty close to the value calculated after stratospheric equilibrium is reached.

So:

  1. Calculate the instantaneous forcing at the tropopause and get a value close to the authoritative “radiative forcing” – with the benefit of minimal calculation resources
  2. Calculate the adjusted forcing at the tropopause or TOA to get the authoritative “radiative forcing”

And lastly, why then does the definition of the tropopause matter?

The reason is simple, but not obvious. We are holding the tropospheric temperature constant, and letting the stratospheric temperature vary. The tropopause is the dividing line. So if we move the dividing line up or down we change the point where the temperatures adjust and so, of course, this affects the “adjusted forcing”. This is explained in some detail in Forster et al (1997) in section 4, p.556 (see reference below).

For reference, three definitions of the tropopause are found in Freckleton et al (1998):

  • the level at which the lapse rate falls below 2K/km
  • the point at which the lapse rate changes sign, i.e., the temperature minimum
  • the top of convection
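As a toy illustration of the first definition, here is a sketch that locates the tropopause on an idealized, ISA-like profile using a WMO-style test: the lowest level where the lapse rate drops to 2 K/km and stays that low, on average, for the next 2 km. The profile and thresholds are illustrative only:

```python
import numpy as np

# Idealized profile: 6.5 K/km in the troposphere, isothermal above 11 km
z = np.arange(0.0, 20.01, 0.1)                 # height, km
T = np.where(z < 11.0, 288.15 - 6.5 * z, 288.15 - 6.5 * 11.0)

lapse = -np.gradient(T, z)                     # lapse rate, K/km

for i, zi in enumerate(z):
    if lapse[i] <= 2.0:
        above = (z > zi) & (z <= zi + 2.0)     # the 2 km persistence test
        if above.any() and lapse[above].mean() <= 2.0:
            print(f"Tropopause at {zi:.1f} km, T = {T[i]:.1f} K")
            break
```

Move the threshold or the persistence depth and the diagnosed tropopause moves too – which is precisely why the calculated adjusted forcing depends on this choice.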

Conclusion

Understanding what radiative forcing means requires understanding a few basics.

The value of radiative forcing depends upon the somewhat arbitrary definition of the location of the tropopause. Some papers like Freckleton et al (1998) have dived into this subject, to show the dependence of the radiative forcing for doubling CO2 on this definition.

We haven’t covered it in this article, but the Hansen et al (1997) paper showed that radiative forcing is not a perfect guide to how climate responds (even in the idealized world of GCMs). That is, the same radiative forcing applied via different mechanisms can lead to different temperature responses.

Is it a useful parameter? Is the rate of inflation a useful parameter in economics? Usefulness is more a matter of opinion. What is more important at the start is to understand how the parameter is calculated and what it can tell us.

References

Radiative forcing and climate response, Hansen, Sato & Ruedy, Journal of Geophysical Research (1997) – free paper

Wonderland Climate Model, Hansen, Ruedy, Lacis, Russell, Sato, Lerner, Rind & Stone, Journal of Geophysical Research, (1997) – paywall paper

Greenhouse gas radiative forcing: Effect of averaging and inhomogeneities in trace gas distribution, Freckleton et al, QJR Meteorological Society (1998) – paywall paper

On aspects of the concept of radiative forcing, Forster, Freckleton & Shine, Climate Dynamics (1997) – free paper

Notes

Note 1: The idea of an instantaneous increase in a GHG is a thought experiment to make it easier to understand the change in atmospheric radiation. If instead we consider the idea of a 1% change per year, then we have a more difficult problem. (Of course, GCMs can quite happily work with a real-world slow change in GHGs. And they can quite happily work with a sudden change).


The earth's surface is not a blackbody. A blackbody has an emissivity and absorptivity of 1.0, which means that it absorbs all incident radiation and emits according to the Planck law.

The oceans, covering over 70% of the earth’s surface, have an emissivity of about 0.96. Other areas have varying emissivity, going down to about 0.7 for deserts. (See note 1).

A lot of climate analyses assume the surface has an emissivity of 1.0.

Let's try and quantify the effect of this assumption.

The most important point to understand is that if the emissivity of the surface, ε, is less than 1.0 it means that the surface also reflects some atmospheric radiation.

Let’s first do a simple calculation with nice round numbers.

Say the surface is at temperature Ts = 289.8 K, and the atmosphere emits a downward flux of 350 W/m².

  • If ε = 1.0 the surface emits 400. And it reflects 0. So a total upward radiation of 400.
  • If ε = 0.8 the surface emits 320. And it reflects 70 (350 x 0.2). So a total upward radiation of 390.

So even though we are comparing a case where the surface reduces its emission by 20%, the upward radiation from the surface is only reduced by 2.5%.
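The same arithmetic as a general sketch, with the 350 W/m² of back radiation and the 289.8 K surface from the round-number example above:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2 K^4

def upward_flux(eps, Ts=289.8, dlr=350.0):
    """Total upward LW at the surface: emission plus reflected back radiation."""
    return eps * SIGMA * Ts**4 + (1 - eps) * dlr

for eps in (1.0, 0.96, 0.8):
    print(eps, round(upward_flux(eps), 1))   # ~400.0, ~398.0, ~390.0 W/m^2
```

So for the ocean's ε ≈ 0.96, the blackbody assumption overstates the upward surface flux by only about 0.5%.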

Now the world of atmospheric radiation is very non-linear as we have seen in previous articles in this series. The atmosphere absorbs very strongly in some wavelength regions and is almost transparent in other regions. So I was intrigued to find out what the real change would be for different atmospheres as surface emissivity is changed.

To do this I used the Matlab model already created and explained – in brief in Part Two and with the code in Part Five – The Code (note 2). The change in surface emissivity is assumed to be wavelength independent (so if ε = 0.8, it is the case across all wavelengths).

I used some standard AFGL (Air Force Geophysics Lab) atmospheres. A description of some of them can be seen in Part Twelve – Heating Rates (note 3).

For the tropical atmosphere:

  • ε = 1.0, TOA OLR = 280.9   (top of atmosphere outgoing longwave radiation)
  • ε = 0.8, TOA OLR = 278.6
  • Difference = 0.8%

Here is the tropical atmosphere spectrum:

[Figure: tropical atmosphere – TOA spectrum, surface ε = 0.8 vs 1.0]

Figure 1

We can see that the difference occurs in the 800-1200 cm⁻¹ region (8-12 μm), the so-called "atmospheric window" – see Kiehl & Trenberth and the Atmospheric Window. We will come back to the reasons why in a moment.

For reference, an expanded view of the area with the difference:

[Figure: tropical atmosphere – TOA spectrum, ε = 0.8 vs 1.0, expanded view of the window region]

Figure 2

Now the mid-latitude summer atmosphere:

  • ε = 1.0, TOA OLR = 276.9
  • ε = 0.8, TOA OLR = 272.4
  • Difference = 1.6%

And the mid-latitude winter atmosphere:

  • ε = 1.0, TOA OLR = 227.9
  • ε = 0.8, TOA OLR = 217.4
  • Difference = 4.6%

Here is the spectrum:

[Figure: mid-latitude winter atmosphere – TOA spectrum, ε = 0.8 vs 1.0]

Figure 3

We can see that the same region is responsible and the difference is much greater.

The sub-arctic summer:

  • ε = 1.0, TOA OLR = 259.8
  • ε = 0.8, TOA OLR = 252.7
  • Difference = 2.7%

The sub-arctic winter:

  • ε = 1.0, TOA OLR = 196.8
  • ε = 0.8, TOA OLR = 186.9
  • Difference = 5.0%

[Figure: sub-arctic winter atmosphere – TOA spectrum, ε = 0.8 vs 1.0]

Figure 4

We can see that changing the surface emissivity in the tropics makes a negligible difference to OLR. The higher-latitude winters show a 5% change for the same surface emissivity change, and the higher-latitude summers around 2-3%.

The reasoning is simple.

For the tropics, the hot humid atmosphere radiates quite close to a blackbody, even in the “window region” due to the water vapor continuum. We can see this explained in detail in Part Ten – “Back Radiation”.

So any “missing” radiation from a non-blackbody surface is made up by reflection of atmospheric radiation (where the radiating atmosphere is almost at the same temperature as the surface).

When we move to higher latitudes the “window region” becomes more transparent, and so the “missing” radiation cannot be made up by reflection of atmospheric radiation in this wavelength region. This is because the atmosphere is not emitting in this “window” region.

And the effect is more pronounced in the winters in high latitudes because the atmosphere is colder and so there is even less water vapor.

Now let's see what happens when we do a "radiative forcing" calculation – we will compare TOA OLR at 360 ppm CO2 and at 720 ppm, at two different emissivities, for the tropical atmosphere. That is, we will calculate 4 cases:

  • 360 ppm at ε=1.0
  • 720  ppm at ε=1.0
  • 360 ppm at ε=0.8
  • 720  ppm at ε=0.8

And, at both ε=1.0 & ε=0.8, we subtract the OLR at 720 ppm from the OLR at 360 ppm and plot both differenced emissivity results on the same graph:

[Figure: tropical atmosphere – ΔOLR for 2xCO2, ε = 0.8 vs 1.0]

Figure 5

We see that both comparisons look almost identical – we can’t distinguish between them on this graph. So let’s subtract one from the other. That is, we plot (360ppm-720ppm)@ε=1.0 – (360ppm – 720ppm)@ε=0.8:

[Figure: tropical atmosphere – difference between the two ΔOLR curves in figure 5]

Figure 6 – same units as figure 5

So it’s clear that in this specific case of calculating the difference in CO2 from 360ppm to 720ppm it doesn’t matter whether we use surface emissivity = 1.0 or 0.8.

Conclusion

The earth’s surface is not a blackbody. No one in climate science thinks it is. But for a lot of basic calculations assuming it is a blackbody doesn’t have a big impact on the TOA radiation – for the reasons outlined above. And it has even less impact on the calculations of changes in CO2.

The tropics, from 30°S to 30°N, are about half the surface area of the earth. And with a typical tropical atmosphere, a drop in surface emissivity from 1.0 to 0.8 causes a TOA OLR change of less than 1%.

Of course, it could get more complicated than the calculations we have seen in this article. Over deserts in the tropics, where the surface emissivity actually gets below 0.8, water vapor is also low and therefore the resulting TOA flux change will be higher (as a result of using actual surface emissivity vs black body emissivity).

I haven’t delved into the minutiae of GCMs to find out what they assume about surface emissivity and, if they do use 1.0, what calculations have been done to quantify the impact.

The average surface emissivity of the earth is much higher than 0.8. I just picked that value as a reference.

The results shown in this article should help to clarify that the effect of surface emissivity less than 1.0 is not as large as might be expected.

Notes

Note 1: Emissivity and absorptivity are wavelength dependent phenomena. So these values are relevant for the terrestrial wavelengths of 4-50μm.

Note 2: There was a minor change to the published code to allow for atmospheric radiation being reflected by the non-black surface. This hasn’t been updated to the relevant article because it’s quite minor. Anyone interested in the details, just ask.

In this model, the top of atmosphere is at 10 hPa.

Some outstanding issues remain in my version of the model, like whether the diffusivity improvement is correct or needs improvement, and the Voigt profile (important in the mid-upper stratosphere) is still not used. These issues will have little or no effect on the question addressed in this article.

Note 3: For speed, I only considered water vapor and CO2 as “greenhouse” gases. No ozone was used. To check, I reran the tropical atmosphere with ozone at the values prescribed in that AFGL atmosphere. The difference between ε = 1.0 and ε = 0.8 was 0.7% – less than with no ozone (0.8%). This is because ozone reduces the transparency of the “atmospheric window” region.


In an earlier article on water vapor we saw that changing water vapor in the upper troposphere has a disproportionate effect on outgoing longwave radiation (OLR). Here is one example from Spencer & Braswell 1997:

From Spencer & Braswell (1997)

Figure 1

The upper troposphere is very dry, and so the mass of water vapor we need to change OLR by a given W/m² is small by comparison with the mass of water vapor we need to effect the same change in or near the boundary layer (i.e., near to the earth’s surface). See also Visualizing Atmospheric Radiation – Part Four – Water Vapor.

This means that when we are interested in climate feedback and how water vapor concentration changes with surface temperature changes, we are primarily interested in the changes in upper tropospheric water vapor (UTWV).

Upper Tropospheric Water Vapor

A major problem with analyzing UTWV is that most historic measurements are poor for this region. The upper troposphere is very cold and very dry – two issues that cause significant problems for radiosondes.

The atmospheric infrared sounder (AIRS) was launched in 2002 on the Aqua satellite and this instrument is able to measure temperature and water vapor with vertical resolution similar to that obtained from radiosondes. At the same time, because it is on a satellite we get the global coverage that is not available with radiosondes and the ability to measure the very cold, very dry upper tropospheric atmosphere.

Gettelman & Fu (2008) focused on the tropics and analysed the relationship (covariance) between surface temperature and UTWV from AIRS over 2002-2007, and then compared this with the results of the CAM climate model using prescribed (actual) surface temperature from 2001-2004 (note 1):

This study will build upon previous estimates of the water vapor feedback, by focusing on the observed response of upper-tropospheric temperature and humidity (specific and relative humidity) to changes in surface temperatures, particularly ocean temperatures. Similar efforts have been performed before (see below), but this study will use new high vertical resolution satellite measurements and compare them to an atmospheric general circulation model (GCM) at similar resolution.

The water vapor feedback arises largely from the tropics where there is a nearly moist adiabatic profile. If the profile stays moist adiabatic in response to surface temperature changes, and if the relative humidity (RH) is unchanged because of the supply of moisture from the oceans and deep convection to the upper troposphere, then the upper-tropospheric specific humidity will increase.

[Emphasis added]

They describe the objective:

The goal of this work is a better understanding of specific feedback processes using better statistics and vertical resolution than has been possible before. We will compare satellite data over a short (4.5 yr) time record to a climate model at similar space and time resolution and examine the robustness of results with several model simulations. The hypothesis we seek to test is whether water vapor in the model responds to changes in surface temperatures in a manner similar to the observations. This can be viewed as a necessary but not sufficient condition for the model to reproduce the upper-tropospheric water vapor feedback caused by external forcings such as anthropogenic greenhouse gas emissions.

[Emphasis added].

The results are for relative humidity (RH) on the left and absolute humidity on the right:

From Gettelman & Fu (2008)

Figure 2

The graphs show that the change in 250 mbar RH with temperature is statistically indistinguishable from zero. For those not familiar with the basics: if RH stays constant with rising temperature, "specific humidity" increases – that is, an increased mixing ratio of water vapor in the atmosphere. And we see this in the right-hand graph.
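A quick sketch of that equivalence, using a simple Clausius-Clapeyron expression with textbook constants (the 50% RH, 250 hPa and 220 K values are just illustrative):

```python
import numpy as np

R_V, L_V = 461.0, 2.5e6      # gas constant for vapor (J/kg K), latent heat (J/kg)

def e_sat(T):
    """Saturation vapor pressure (Pa), simple Clausius-Clapeyron integration."""
    return 611.0 * np.exp(L_V / R_V * (1 / 273.15 - 1 / T))

p, rh = 25000.0, 0.5         # 250 hPa, relative humidity held constant at 50%
for T in (220.0, 221.0):
    e = rh * e_sat(T)
    print(f"T = {T} K: {e / (p - e) * 1e6:.0f} ppmv")   # volume mixing ratio
```

Holding RH fixed while warming from 220 to 221 K raises the mixing ratio by about 11-12%; combine that with the upper troposphere warming faster than the surface and you are in the same ballpark as the ~20% per °C of surface warming quoted below.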

Figure 1a has considerable scatter, but in general, there is little significant change of 250-hPa relative humidity anomalies with anomalies in the previous month’s surface temperature. The slope is not significantly different than zero in either AIRS observations (1.9 ± 1.9% RH/°C) or CAM (1.4 ± 2.8% RH/°C).

The situation for specific humidity in Fig. 1b indicates less scatter, and is a more fundamental measurement from AIRS (which retrieves specific humidity and temperature separately). In Fig. 1b, it is clear that 250- hPa specific humidity increases with increasing averaged surface temperature in both AIRS observations and CAM simulations. At 250 hPa this slope is 20 ± 8 ppmv/°C for AIRS and 26 ± 11 ppmv/°C for CAM. This is nearly 20% of background specific humidity per degree Celsius at 250 hPa.

The observations and simulations indicate that specific humidity increases with surface temperatures (Fig. 1b). The increase is nearly identical to that required to maintain constant relative humidity (the sloping dashed line in Fig. 1b) for changes in upper-tropospheric temperature. There is some uncertainty in this constant RH line, since it depends on calculations of saturation vapor mixing ratio that are nonlinear, and the temperature used is a layer (200–250 hPa) average.

The graphs below show the change in each variable as surface temperature is altered as a function of pressure (height). The black line is the measurement (AIRS).

So the right side graph shows that, from AIRS data of 4 years, specific humidity increases with surface temperature in the upper troposphere:

From Gettelman & Fu (2008)

Figure 3 – Click to Enlarge

There are a number of model runs using CAM with different constraints. This is a common theme in climate science – researchers attempting to find out what part of the physics (at least as far as the climate model can reproduce it) contributes the most or least to a given effect. The paper has no paywall, so readers are recommended to review the whole paper.

Conclusion

The question of how water vapor responds to increasing surface temperature is a critical one in climate research. The fundamentals are discussed in earlier articles, especially Clouds and Water Vapor – Part Two – and much better explained in the freely available paper Water Vapor Feedback and Global Warming, Held and Soden (2000).

One of the key points is that the response of water vapor in the planetary boundary layer (the bottom layer of the atmosphere) is a lot easier to understand than the response in the “free troposphere”. But how water vapor changes in the free troposphere is the important question. And the water vapor concentration in the free troposphere is dependent on the global circulation, making it dependent on the massive complexity of atmospheric dynamics.

Gettelman and Fu attempt to answer this question for the first half-decade's worth of quality satellite observations, and they find a result similar to that produced by GCMs.

Many people outside of climate science believe that GCMs have “positive feedback” or “constant relative humidity” programmed in. Delving into a climate model is a technical task, but the details are freely available – e.g., Description of the NCAR Community Atmosphere Model (CAM 3.0), W.D. Collins (2004). It’s clear to me that relative humidity is not prescribed in climate models – both from the equations used and from the results that are produced in many papers. And people like the great Isaac Held, a veteran of climate modeling and atmospheric dynamics, also state the same. So, readers who believe otherwise – come forward with evidence.

Still, that’s a different story from acknowledging that climate models attempt to calculate humidity from some kind of physics but believing that these climate models get it wrong. That is of course very possible.

At least from this paper we can see that over this short time period, not subject to strong ENSO fluctuations or significant climate change, the satellite data shows upper tropospheric humidity increasing with surface temperature. And the CAM model produces similar results.

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

References

Observed and Simulated Upper-Tropospheric Water Vapor Feedback, Gettelman & Fu, Journal of Climate (2008) – free paper

How Dry is the Tropical Free Troposphere? Implications for Global Warming Theory, Spencer & Braswell, Bulletin of the American Meteorological Society (1997) – free paper

Notes

Note 1 – The authors note: “..Model SSTs may be slightly different from the data, but represent a partially overlapping period..”

I asked Andrew Gettelman why the model was run for a different time period than the observations and he said that the data (in the form needed for running CAM) was not available at that time.


In Frontiers of Climate Modeling, Jeffrey Kiehl says:

The study of the Earth’s climate system is motivated by the desire to understand the processes that determine the state of the climate and the possible ways in which this state may have changed in the past or may change in the future..

Earth’s climate system is composed of a number of components (e.g., atmosphere, hydrosphere, cryosphere and biosphere). These components are non-linear systems in themselves, with various processes, which are spatially non-local.

Each component has a characteristic time scale associated with it. The entire Earth system is composed of the coupled interaction of these non-local, non-linear components.

Given this level of complexity, it is no wonder that the system displays a rich spectrum of climate variability on time scales ranging from the diurnal to millions of years.. This level of complexity also implies the system is chaotic (Lorenz, 1996, Hansen et al., 1997), which means the representation of the Earth system is not deterministic.

However, this does not imply that the system is not predictable. If it were not predictable at some level, climate modeling would not be possible. Why is it predictable? First, the climate system is forced externally through solar radiation from the Sun. This forcing is quasi-regular on a wide range of time scales. The seasonal cycle is the largest forcing Earth experiences, and is very regular. Second, certain modes of variability, e.g., the El Nino southern oscillation (ENSO), North Atlantic oscillation, etc., are quasi-periodic unforced internal modes of variability. Because they are quasi-periodic, they are predictable to some degree of accuracy.

The representation of the Earth system requires a statistical approach, rather than a deterministic one.

Modeling the climate system is not concerned with predicting the exact time and location of a specific small-scale event. Rather, modeling the climate system is concerned with understanding and predicting the statistical behavior of the system; in simplest terms, the mean and variance of the climate system.

He goes on to comment on climate history – warm periods such as the Cretaceous & Eocene, and very cold states such as the ice ages (e.g., 18,000 years ago), as well as climate fluctuations on very fast time scales.

The complexity of the mathematical relations and their solutions requires the use of large supercomputers. The chaotic nature of the climate system implies that ensembles are required to best understand the properties of the system. This requires numerous simulations of the state of the climate. The length of the climate simulations depends on the problem of interest..

And later comments:

There is some degree of skepticism concerning the predictive capabilities of climate models. These concerns center on the ability to represent all of the diverse processes of nature realistically. Since many of these processes (e.g., clouds, sea ice, water vapor) strongly affect the sensitivity of climate models, there is concern that model response to increased greenhouse-gas concentrations may be in error.

For this reason alone, it is imperative that climate models be compared to a diverse set of observations in terms of the time mean, the spatio-temporal variability and the response to external forcing. To the extent that models can reproduce observed features for all of these features, belief in the model’s ability to predict future climate change is better justified.

Interesting stuff.

Jeffrey Kiehl has 110 peer-reviewed papers to his name, including papers co-authored with the great Ramanathan and Petr Chylek, to name just a couple.

Probably the biggest question for myself and the readers of this blog is the measure of predictability of the climate.

I’m a beginner with non-linear dynamics but have been playing around with some basics. I would have preferred to know a lot more before writing this article, but I thought many people would find Kiehl’s comments interesting.

In various blogs I have read that climate is predictable because summer will be warmer than winter and the equator warmer than the poles. This is clearly true. However, there is a big gap between knowing that and knowing the state of the climate 50 years from now.

Or, to put it another way – if it is true that summer will be warmer than winter, and it is true that climate models forecast that summer will be warmer than winter, does it follow that climate models are reliable about the mean climate state 50 years from now? Of course, it doesn’t – and I don’t think many people would make this claim in such simplistic terms. How about – if it is true that a climate model can reproduce the mean annual climatology over the next few years (whatever precisely that entails) does it follow that climate models are reliable about the mean climate state 50 years from now?

I haven’t found many papers that really address this subject (which doesn’t mean there aren’t any). From my very limited understanding of chaotic systems I believe that the question is not easily resolvable. With a precise knowledge of the equations governing the system, and a detailed study of the behavior of the system described by these equations, it is possible to determine the boundary conditions which lead to various types of results. And without a precise knowledge it appears impossible. Is this correct?

However, with a little knowledge of the stochastic behavior of non-linear systems, I did find Jeffrey Kiehl’s comments very illuminating as to why ensembles of climate models are used.

Climatology is more about statistics than one day in one place. Which helps explain why, just as an example, the measure of a climate model is not how the actual average temperature in Moscow in January 2012 compares with what the model "predicts" for Moscow in January 2012. You can easily create systems that have unpredictable time-varying behavior, yet very predictable statistical behavior. (The predictable statistical behavior can be seen in frequency-based plots, for example.)
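The standard toy example of this (nothing to do with GCMs, just the principle) is a chaotic map. A sketch:

```python
import numpy as np

def trajectory(x0, n=100000):
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = 4 * x[i] * (1 - x[i])
    return x

a = trajectory(0.2)
b = trajectory(0.2 + 1e-12)          # almost identical starting point

print(abs(a[50] - b[50]))            # O(1): point prediction fails in ~50 steps
print(a.mean(), b.mean())            # ~0.5 and ~0.5: the statistics agree
```

Two trajectories starting 10⁻¹² apart disagree completely within about 50 iterations, yet their long-run means (and histograms) are effectively identical.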

So the fact that climate is a non-linear system does not mean as a necessary consequence that it is statistically unpredictable.

But it might in practical terms – that is, in terms of the certainty we would like to ascribe to future climatology.

I would be interested to know how the subject could be resolved.

Reference

Frontiers of Climate Modeling, edited by J.T. Kiehl & V. Ramanathan, Cambridge University Press (2006)


In Part One we saw how the ocean absorbed different wavelengths of radiation:

  • 50% of solar radiation is absorbed in the first meter, and 80% within 10 meters
  • 50% of “back radiation” (atmospheric radiation) is absorbed  in the first few microns (μm).

This is because absorption is a strong function of wavelength and atmospheric radiation is centered around 10μm, while solar radiation is centered around 0.5μm.

In Part Two we considered what would happen if back radiation only caused evaporation and removal of energy from the ocean surface via the latent heat. The ocean surface would become much colder than it obviously is. That is a very simple “first law of thermodynamics” problem. Then we looked at another model with only conductive heat transfer between different “layers” in the ocean. This caused various levels below the surface to heat to unphysical values. It is clear that turbulent heat transport takes place from lower in the ocean. Solar energy reaches down many meters heating the ocean from within – hotter water expands and so rises – moving heat by convection.

In Part Three we reviewed various experimental results showing how the temperature profile (vs depth) changes during the diurnal cycle (day-night-day) and with wind speed. This demonstrates very clearly how much mixing goes on in the ocean.

The Different Theories

This series of articles was inspired by the many people who think that increases in back radiation from the atmosphere will have no effect (or an unnoticeable effect) on the temperature of the ocean depths.

So far, no evidence has been brought forward for the idea that back radiation can't "heat" the ocean (see note 1 at the end), other than the "it's obvious" kind. At least, I am unaware of any stronger arguments. Hopefully, as a result of this article, advocates can put forward their ideas in more detail in response.

I’ll summarize the different theories as I’ve understood them. Apologies to anyone who feels misrepresented – it’s quite possible I just haven’t heard your particular theory or your excellent way of explaining it.

Hypothesis A – Because the atmospheric radiation is completely absorbed in the first few microns it will cause evaporation of the surface layer, which takes away the energy from the back radiation as latent heat into the atmosphere. Therefore, more back-radiation will have zero effect on the ocean temperature.

Hypothesis B – Because the atmospheric radiation is completely absorbed in the first few microns it will be immediately radiated or convected back out to the atmosphere. Heat can’t flow downwards due to the buoyancy of hotter water. Therefore, if an increase in back radiation occurs (perhaps due to increases in inappropriately-named “greenhouse” gases) it will not “heat” the ocean = increase the temperature of the ocean below the surface.

For other, more basic objections about back radiation, see Note 2 (at the end).

I believe that Part Two showed that Hypothesis A was flawed.

I would like to propose a different hypothesis:

Hypothesis C – Heat transfer is driven by temperature differences. For example, conduction of heat is proportional to the temperature difference across the body that the heat is conducted through.

Solar radiation is absorbed from the surface through many meters of the ocean. This heats the ocean below the surface which causes “natural convection” – heated bodies expand and therefore rise. So solar energy has a tendency to be moved back to the surface (this was demonstrated in Part Two).

The more the surface temperature increases, the less temperature difference there will be to drive this natural convection. And, therefore, increases in surface temperature can affect the amount of heat stored in the ocean.

Clarification from St. Google: Hypothesis – a supposition or proposed explanation made on the basis of limited evidence, as a starting point for further investigation.

An Excellent Question

In Part Three, one commenter asked an excellent question:

Some questions from an interested amateur.
Back radiation causes more immediate evaporation and quicker reemission of LWR than does a similar amount of solar radiation.

Does that mean that the earth’s temperature should be more sensitive to a given solar forcing than it would be to an equal CO2 forcing?

What percentage CO2 forcing transfers energy to the oceans compared to space and the atmosphere?

How does this compare with solar forcing?

Is there a difference between the effect of the sun and the back radiation when they are of equal magnitude? This, of course, pre-supposes that Hypothesis C is correct and that back radiation has any effect at all on the temperature of the ocean below the surface.

So the point is this – even if Hypothesis C is correct, there may still be a difference between the response of the ocean temperatures below the surface – for back radiation compared with solar radiation.

So I set out to try and evaluate these two questions:

  1. Can increases in back radiation affect the temperature of the ocean below the surface? I.e., is Hypothesis C supported against B?
  2. For a given amount of energy, is there a difference between solar forcing and back radiation forcing?

And my approach was to use a model:

Oh no, a model! Clearly wrong then, and a result that can’t fool anyone..

For a bit of background generally on models, take a look at the Introduction in Models On – and Off – the Catwalk.

Here is one way to think about a model

The idea of a model is to carry out some calculations when doing them in your head is too difficult

A model helps us see the world a bit more clearly. At this point I'm not claiming anything other than that they help us see the effect of the well-known and undisputed laws of heat transfer on the ocean a little more clearly.

Ocean Model

The ocean model under consideration is about a billion times less complex than a GCM. It is a 1-d model with heat flows by radiation, conduction and, in a very limited form, convection.

Here is a schematic of the model. I thought it would be good to show the layers to scale but that means the thicker layers can’t be shown (not without taking up a ridiculous amount of blank screen space) – so the full model, to scale, is 100x deeper than this:

Figure 1

To clarify – the top layer is at temperature T1, the second layer at T2, and so on, even though these values aren't shown.

The red arrows show conducted or convected heat. They could be in either direction, but the upwards is positive (just as a convention). Obviously, only a few of these are shown in the schematic – there is a heat flux between each layer.

1. Solar and back radiation are modeled as sine waves with the peak at midday. See the graph “Solar and Back Radiation” in Part Two for an example.

2. Convected heat is modeled with a simple formula:

H = h(T1 − Tair), where Tair = air temperature, T1 = "surface" temperature, h = convection coefficient = 25 W/m².K.

Convected heat can be in either direction, depending on the surface and air temperature. The air temperature is assumed constant at 300K, something we will return to.

3. Radiation from the surface:

E = εσT⁴ – the well-known Stefan-Boltzmann equation, where ε = emissivity

For the purposes of this simple model ε = 1, and the same value is used for the absorptivity of back radiation and of solar radiation. More on these assumptions later.

4. Heat flux between layers (e.g. H54 in the schematic) is calculated using the temperature values from the previous time step for the two adjacent layers, with the conducted heat formula: q″ = k(T5 − T4)/d54, where k = conductivity, and d54 = distance from the center of layer 5 to the center of layer 4.

For still water, k = 0.6 W/m.K – a very low value as water is a poor conductor of heat.

In this model, at the end of each time step the program checks the temperature of each layer. If T5 > T4, for example, then the conductivity between these layers for the next time step is set to a much higher value to simulate convection. I used a value for stirred water that I found in a textbook: kt = 2 x 10⁵ W/m.K. What actually happens in practice is that the hotter water rises, taking the heat with it (convection). Using a high value of conductivity produces a similar result without any actual water motion.

For interest I did try lower values like 2 x 10³ W/m.K and the 1m layer, for example, ended up at a higher temperature than the layers above it. See the more detailed explanation in Part Two.

5. In Part Three I showed results from a number of field experiments which demonstrated that the ocean experiences mixing due to surface cooling at night, and due to high winds. The mixing due to surface cooling is automatically taken account of in this model (and we can see it in the results), but the mixing due to the winds “stirring” the ocean is not included. So we can consider the model as being “under light winds”. If we had a model which evaluated stronger winds it would only make any specific effects of back radiation less noticeable. So this is the “worst case” – or the “highlighting back radiation’s special nature” model.
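For readers who want to experiment, here is a much-simplified Python sketch of the scheme just described (the original is Matlab). The layer depths, forcing values, conductivities and convection coefficient are taken from the points above; the per-layer solar absorption fractions and the per-step energy cap (similar in spirit to the cap mentioned in the next section) are my own illustrative choices:

```python
import numpy as np

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2 K^4
RHO_CP = 4.18e6     # volumetric heat capacity of seawater, J/m^3 K
K_STILL = 0.6       # conductivity of still water, W/m K
K_STIRRED = 2e5     # effective "stirred" conductivity, W/m K (as above)
H_CONV = 25.0       # air-sea convection coefficient, W/m^2 K
T_AIR = 300.0       # fixed air temperature, K

edges = np.array([0.0, 0.005, 0.05, 1.0, 10.0, 100.0])   # 5-layer depths, m
thick = np.diff(edges)                    # layer thicknesses
mid = 0.5 * (edges[:-1] + edges[1:])      # layer mid-points
dist = np.diff(mid)                       # centre-to-centre distances

# Fraction of solar flux absorbed in each layer -- illustrative numbers only,
# chosen to mimic ~50% absorbed in the first metre and ~80% by 10 m
solar_frac = np.array([0.05, 0.15, 0.30, 0.30, 0.20])

def run(days=1, dt=0.1, s_peak=600.0, dlr_mean=340.0, dlr_amp=50.0):
    T = np.full(5, 300.0)                 # initial layer temperatures, K
    for n in range(int(days * 86400 / dt)):
        hour = (n * dt / 3600.0) % 24.0
        phase = np.sin(np.pi * (hour - 6.0) / 12.0)   # peaks at midday
        solar = s_peak * max(0.0, phase)
        dlr = dlr_mean + dlr_amp * phase

        dE = solar * solar_frac * dt      # energy gained per layer, J/m^2
        # surface layer: absorbed DLR - emission (eps = 1) - air convection
        dE[0] += (dlr - SIGMA * T[0]**4 - H_CONV * (T[0] - T_AIR)) * dt

        for i in range(4):                # heat flow between adjacent layers
            k = K_STIRRED if T[i + 1] > T[i] else K_STILL
            e = k * (T[i + 1] - T[i]) / dist[i] * dt   # upward positive
            # cap (my addition): never move more energy than would equalise
            # the two layers, which keeps the explicit scheme from blowing up
            e_max = RHO_CP * (T[i + 1] - T[i]) * thick[i] * thick[i + 1] \
                    / (thick[i] + thick[i + 1])
            if abs(e) > abs(e_max):
                e = e_max
            dE[i] += e
            dE[i + 1] -= e

        T += dE / (RHO_CP * thick)        # bottom edge treated as insulated
    return T

print(run())    # pure-Python loop: slow, but fine for a one-day demo
```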

Problems of Modeling

Some people will already know about the many issues with numerical models. A very common one is resolving small distances and short timescales.

If we want to know the result over many years we don't really want to iterate the model through time steps of fractions of a second. In this model I do have to use very small time steps because the distance scales being considered range from extremely small to quite large – the ocean is divided into thin slabs of 5mm, 15mm.. through to a 70m slab.

If I use a time step which is too long then too much heat gets transferred from the layers below the surface to the 5mm surface layer in the one time step, the model starts oscillating – and finally “loses the plot”. This is easy to see, but painful to deal with.
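For what it's worth, this oscillation looks like the classic stability limit of an explicit diffusion scheme. The standard back-of-the-envelope criterion is

$$\Delta t \;\lesssim\; \frac{\rho c_p (\Delta z)^2}{2k}$$

With the 5 mm surface layer and still-water conductivity (k = 0.6 W/m.K, ρcp ≈ 4.2 × 10⁶ J/m³K) this allows a time step of roughly 90 s; with the "stirred" conductivity of 2 × 10⁵ W/m.K it collapses to well under a millisecond. That is why the thin surface layers, combined with the convection scheme, force such short steps – and why the model "loses the plot" so abruptly once the step is too long.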

But I thought it might be interesting for people to see the results of the model over five days with different time steps. Instead of having the model totally “lose the plot” (=surface temperature goes to infinity), I put a cap on the amount of heat that could move in each time step for the purposes of this demonstration.

You can see four results with these time steps (tstep = time step, is marked on the top left of each graph):

  • 3 secs
  • 1 sec
  • 0.2 sec
  • 0.05 sec

Figure 2 – Click for a larger image

I played around with many other variables in the model to see what problems they caused..

The Tools

The model is written in Matlab and runs on a normal PC (Dell Vostro 1320 laptop).

To begin with there were 5 layers in the model (values are depth from the surface to the bottom edge of each layer):

  • 5 mm
  • 50 mm
  • 1 m
  • 10 m
  • 100 m

I ran this with a time step of 0.2 secs and ended up doing up to 15-year runs.

In the model runs I wanted to ensure that I had found a steady-state value, and also that the model conserved energy (first law of thermodynamics) once steady state was reached. So the model included a number of “house-keeping” tests so I could satisfy myself that the model didn’t have any obvious errors and that equilibrium temperatures were reached for each layer.

For 15 year runs, 5 layers and 0.2s time step the run would take about two and a half hours on the laptop.

I find that quite amazing – showing how good Matlab is. There are 31 million seconds in a year, so 15 years at 0.2 secs per step = 2.4 billion iterations. And each iteration involves looking up the solar and DLR value, calculating 7 heat flow calculations and 5 new temperatures. All in a couple of hours on a laptop.

Well, as we will see, because of the results I got I thought I would check for any changes if there were more layers in my model. So that’s why the 9-layer model (see the first diagram) was created. For this model I need an even shorter time step – 0.1 secs and so long model runs start to get painfully long..

Results

Case 1: The standard case was a peak solar radiation, S, of 600 W/m² and back radiation, DLR of 340 with a 50 W/m² variation day to night (i.e., max of 390 W/m², min of 290 W/m²).

Case 2a: Add 10 W/m² to the peak solar radiation, keep DLR the same. Case 2b: Add 31.41 W/m² to the peak solar radiation.

Case 3a: Keep solar radiation the same, add 3.14 W/m² to DLR. This is an equivalent amount of energy per day to case 2a (see note 3). Case 3b: Add 10 W/m² to DLR.

Many people are probably asking, “Why isn’t case 3a – Add 10 W/m² to DLR?”

Solar radiation only occurs for 12 out of the 24 hours, while DLR occurs 24 hours of the day. And the solar value is the peak, while the DLR value is the average. It is a mathematical reason explained further in Note 3.

The important point is that for total energy absorbed in a day, case 2a and 3a are the same, and case 2b and 3b are the same.

Let’s compare the average daily temperature in the top layer and in the 1m, 10m and 100m layers for the three cases (note: depths are from the surface to the bottom of each layer; only 4 of the 5 layers were recorded):

Figure 3

The time step (tstep) = 0.2s.

The starting temperatures for each layer were the same in all cases.

Because the 4-year runs recorded almost identical values for solar vs DLR forcing, and because the results had not quite stabilized, I then did the 15-year run and recorded the temperature to the 4 decimal places shown. This isn’t because the results are accurate to 4 decimal places – it is to see what differences, if any, exist between the two scenarios.

The important results are:

  1. DLR increases cause temperature increases at all levels in the ocean
  2. Equivalent amounts of daily energy into the ocean from solar and DLR cause almost exactly the same temperature increase at each level of the ocean – even though the DLR is absorbed in the first few microns and the solar energy in the first few meters
  3. The slight difference in temperature may be a result of “real physics” or may be an artifact of the model

And perhaps 5 layers is not enough?

Therefore, I generated the 9-layer model, as shown in the first diagram in this article. The 15-year model runs on the 9-layer model produced these results:

Figure 4

The general results are similar to the 5-layer model.

The temperature changes have clearly stabilized, as the heat unaccounted for (inputs – outputs) on the last day = 41 J/m². Note that this is Joules, not Watts, and is over a 24 hour period. This small “unaccounted” heat is going into temperature increases of the top 100m of the ocean. (“Inputs – outputs” includes the heat being transferred from the model layers down into the ocean depths below 100m).

If we examine the difference in temperature for the bottom 30-100m level for case 2b vs 3b, we see that the temperature difference after 15 years = 0.011°C. For a 70m thick layer, this equates to an energy difference of 3.2 × 10⁶ J/m², which, spread over 15 years, = 591 J/m² per day = 0.0068 W/m². This is spectacularly tiny. It might be a model issue, or it might be a real “physics difference”.
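As a quick check on that arithmetic (assuming a volumetric heat capacity for seawater of about 4.18 × 10⁶ J/m³K – small differences from this assumption explain the odd joule here or there):

dT    = 0.011;                 % temperature difference after 15 years, K
dz    = 70;                    % layer thickness, m
rho_c = 4.18e6;                % assumed volumetric heat capacity, J/(m^3 K)
E     = rho_c*dz*dT            % = ~3.2e6 J/m^2 accumulated over 15 years
E_day = E/(15*365)             % = ~590 J/m^2 per day
W     = E_day/86400            % = ~0.0068 W/m^2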

In any case, the model has demonstrated that DLR increases vs solar increases cause almost exactly the same temperature changes in each layer being considered.

For interest here are the last 5 days of the model (average hourly temperatures for each level) for case 3b:

Figure 5

and for case 2b:

Figure 6

Pretty similar..

Results – Convection and Air Temperature

In the model results up until now the air temperature has been at 300K (27°C) and the surface temperature of the ocean has been only a few degrees higher.

The model doesn’t attempt to change the air temperature. And in the real world the air temperature just above the ocean surface and the sea surface temperature are usually within a few degrees of each other.

But what happens in our model if real-world effects cool the ocean surface more? For example, higher local temperatures create large convective currents of rising hot air which suck in cooler air from elsewhere.

What would be the result? A higher “instantaneous” surface temperature from higher back radiation might be “swept away” into the atmosphere and “lost” from the model.. This might create a different final answer for back radiation compared with solar radiation.

It seemed to be worth checking out, so I reduced the air temperature to 285K (from 300K) and ran the model for one year from the original starting temperatures (just over 300K). The result was that the ocean temperature dropped significantly, demonstrating how closely the ocean surface and the atmosphere (at the ocean surface) are coupled.

Using the end of the first year as a starting temperature, I ran the model for 5 years for case 1, 2a and 3a (each with the same starting temperature):

Figure 7

Once again we see that back radiation increases do change the temperatures of the ocean depths – and at almost identical values to the solar radiation changes.

Here is a set of graphs for one of the 5-year model runs for this lower air temperature, also demonstrating how the lower air temperature pulls down the ocean surface temperature:

Figure 8 – Click for a larger image

The first graph shows how the average daily temperature changes over the full time period – making it easy to see equilibrium being reached. The second graph shows the hourly average temperature change for the last 5 days. The last graph shows the heat which is either absorbed or released within the ocean in temperature changes. As zero is reached it means the ocean is not heating up or cooling down.

Inaccuracies in the Model

We could write a lot about all the inaccuracies in the model – it’s a very rudimentary model. In the real world the hotter tropical / sub-tropical oceans transfer heat to higher latitudes and to the poles. So does the atmosphere. A 1-d model is very unrealistic.

  • the emissivity and absorptivity of the ocean are set to 1
  • there are no ocean currents
  • the atmosphere doesn’t heat up and cool down with the ocean surface
  • the solar radiation value doesn’t change through the year
  • the top layer was 5mm, not 1μm, and the cooler skin layer was not modeled
  • a set of isothermal layers is unphysical compared with the real ocean of continuously varying temperatures..

However, what a nice simple model does tell us is how energy absorbed only in the top few microns of the ocean can affect the temperature of the ocean much lower down.

“It’s obvious”, I could say.

Conclusion

My model could be wrong – for example, just a mistake which means it doesn’t operate how I have described it. The many simplifications of the model might hide some real world physics effect which means that Hypothesis C is actually less likely than Hypothesis B.

However, if the model doesn’t contain mistakes, at least I have provided more support for Hypothesis C – that the back radiation absorbed in the very surface of the ocean can change the temperature of the ocean below, and demonstrated that Hypothesis B is less likely.

I look forward to advocates of Hypothesis B putting forward their best arguments.

Update – Code files saved here

Notes

Note 1 – To avoid upsetting the purists, when we say “does back-radiation heat the ocean?” what we mean is, “does back-radiation affect the temperature of the ocean?”

Some people get upset if we use the term heat, and object that heat is the net of the two way process of energy exchange. It’s not too important for most of us. I only mention it to make it clear that if the colder atmosphere transfers energy to the ocean then more energy goes in the reverse direction.

It is a dull point.

Note 2 – Some people think that back radiation can’t occur at all, and others think that it can’t affect the temperature of the surface, for reasons that are a confused mangle of the second law of thermodynamics. See Science Roads Less Travelled and especially Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics and The Three Body Problem. And for real measurements of back radiation, see The Amazing Case of “Back Radiation” – Part One.

Note 3 – If we change the peak solar radiation from 600 to 610, this is the peak value and only provides an increase for 12 out of 24 hours. By contrast, back radiation is a 24 hour a day value. How much do we have to change the average DLR value to provide an equivalent amount of energy over 24 hours?

If we integrate the solar radiation for the before and after cases, we find that the ratio between an increase in the peak solar value and the equivalent increase in the average back radiation = π (3.14159). So if the DLR increase = 10, the matching peak solar increase = 10 × π = 31.4159; and if the solar peak increase = 10, the matching DLR increase = 10/π = 3.1831.
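For completeness, here is the integration, modelling the daytime solar flux as a half-sine over 12 hours (which is how I read the setup above):

\[
\bar{S} \;=\; \frac{1}{24}\int_0^{12} S_{peak}\,\sin\!\left(\frac{\pi t}{12}\right)dt
\;=\; \frac{S_{peak}}{24}\cdot\frac{12}{\pi}\left[-\cos\frac{\pi t}{12}\right]_0^{12}
\;=\; \frac{S_{peak}}{24}\cdot\frac{24}{\pi}
\;=\; \frac{S_{peak}}{\pi}
\]

So the 24-hour average of the solar input is the peak value divided by π – which is why a peak solar increase of ΔS is matched by a DLR increase of ΔS/π.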

If anyone would like this demonstrated further please ask and I will update in the comments. I’m sure I could have made this easier to understand than I actually have (haven’t).

Read Full Post »

In Part One we had a look at Ramanathan’s work (actually Raval and Ramanathan) attempting to measure the changes in outgoing longwave radiation vs surface temperature.

In Part Two (Part Zero perhaps) we looked at some basics on water vapor as well as some measurements. The subject of the non-linear effects of water vapor was raised.

Part One Responses attempted a fuller answer to various questions and objections about Part One

Water vapor feedback isn’t a simple subject.

First, a little more background.

Effectiveness of Water Vapor at Different Heights

Here are some model results of change in surface temperature for changes in specific humidity at different heights:

From Shine & Sinha (1991)

For newcomers, 200mbar is the top of the troposphere (lower atmosphere), and 1000mbar is the surface.

You can see that for a given increase in the mixing ratio of water vapor the most significant effect comes at the top of the troposphere.

The three temperatures: cool = 277K (4°C); average = 287K (14°C); and warm = 298K (25°C).

Now a similar calculation using changes in relative humidity:

From Shine & Sinha (1991)

The “average, no continuum” curve shows the effect without the continuum portion of the water vapor absorption. The continuum occupies the frequency range between 800-1200 cm⁻¹ (wavelength range 12-8 μm) – often known as the “atmospheric window”. This portion of the spectral range is important in studies of increasing water vapor, something we will return to in later articles.

Here we can see that in warmer climates the lower troposphere has more effect for changes in relative humidity. And for average and cooler climates, changes in relative humidity are still more important in the lower troposphere, but the upper troposphere does become more significant.

(This paper, by Shine & Sinha, appears to have been inspired by Lindzen’s 1990 paper where he talked about the importance of upper tropospheric water vapor among other subjects).

So clearly the total water vapor in a vertical section through the atmosphere isn’t going to tell us enough (see note 1). We also need to know the vertical distribution of water vapor.

Here is a slightly different perspective from Spencer and Braswell (1997):

Spencer and Braswell (1997)

This paper took a slightly different approach.

  • Shine & Sinha looked at a 10% change in relative humidity – so for example, from 20% to 22% (20% x 110%)
  • Spencer & Braswell said, let’s take a 10% change as 20% to 30% (20% + 10%)

This isn’t an argument about how to evaluate the effect of water vapor – just about how to illustrate a point. Spencer & Braswell are highlighting the solid line in the right-hand graph, and showing Shine & Sinha’s approach as the dashed line.

In the end, both will get the same result if the water vapor changes from 20% to 30% (for example).

Boundary Layers and Deep Convection

Here’s a conceptual schematic from Sun and Lindzen 1993:

The bottom layer is the boundary layer. Over the ocean the source of water vapor in this boundary layer is the ocean itself. Therefore, we would assume that the relative humidity would be high and the specific humidity (the amount of water vapor) would be strongly dependent on temperature (see Part Two).

Higher temperatures drive stronger convection which creates high cloud levels. This is often called “deep convection” in the literature. These convective towers are generally only a small percentage of the surface area. So over most of the tropics, air is subsiding.

Here is a handy visualization from Held & Soden (2000):

Held and Soden (2000)

The concept to be clear about is within the well-mixed boundary layer there is a strong connection between the surface temperature and the water vapor content. But above the boundary layer there is a disconnect. Why?

Because most of the air (by area) is subsiding (see note 2). This air has at one stage been convected high up in the atmosphere, has dried out, and is now returning to the surface.

Subsiding air in some parts of the tropics is extremely dry with a very low relative humidity. Remember the graphs in Part Two – air high up in the atmosphere can only hold 1/1,000th of the water vapor that can be held close to the surface. So air which is saturated when it is at the tropopause is – in relative terms – very dry when it returns to the surface.

Therefore, the theoretical connection between surface temperature and specific humidity becomes a challenging one above the boundary layer.

And the idea that relative humidity is conserved is also challenged.

Relationship between Specific Humidity and Local Temperature

Sun and Oort (1995) analyzed the humidity and temperature in the tropics (30°S to 30°N) at a number of heights over a long time period:

Sun and Oort (1995)

Note that the four graphs represent four different heights (pressures) in the atmosphere. And note as well that the temperatures plotted are the temperatures at that relevant height.

Their approach was to average the complete tropical domain (but not the complete globe) and, therefore, average out the ascending and descending portions of the atmosphere:

Through horizontal averaging, variations of water vapor and temperature that are related to the horizontal transport by the large-scale circulation will be largely removed, and thus the water vapor and temperature relationship obtained is more indicative of the property of moist convection, and is thus more relevant to the issue of water vapor feedback in global warming.

In analyzing the results, they said:

Overall, the variations of specific humidity correlate positively at all levels with the temperature variations at the same level. However, the strength of the correlation between specific humidity variations and the temperature variations at the same level appears to be strongly height dependent.

Sun & Oort (1995)

Early in the paper they explained that pre-1973 values of water vapor were more problematic than post-1973 and therefore much of the analysis would be presented with and without the earlier period. Hence, the two plots in the graph above.

Now they do something even more interesting and plot the results of changes in specific humidity (q) with temperature and compare with the curve for constant relative humidity:

Sun & Oort (1995)

The dashed line to the right is the curve of constant relative humidity. (For those still trying to keep up: if specific humidity were constant, the measured values would lie on a vertical straight line through zero.)
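For anyone who wants to reproduce that constant relative humidity curve, here is a sketch using the Magnus/Bolton approximation for saturation vapour pressure. The pressure level and the 50% RH value are illustrative assumptions, not Sun & Oort’s method:

% Specific humidity vs temperature at constant relative humidity - a sketch
T  = 270:1:300;                            % temperature, K
Tc = T - 273.15;                           % temperature, deg C
es = 6.112*exp(17.67*Tc./(Tc + 243.5));    % saturation vapour pressure, hPa (Bolton 1980)
p  = 850;                                  % assumed pressure level, hPa
qs = 0.622*es./(p - 0.378*es);             % saturation specific humidity, kg/kg
q  = 0.5*qs;                               % specific humidity at a constant 50% RH
plot(1000*q, T); xlabel('q (g/kg)'); ylabel('T (K)');   % cf. the dashed curve above

The exponential dependence of saturation vapour pressure on temperature is what gives the constant-RH curve its slope – roughly 6-7% more water vapor per degree of warming at typical surface temperatures.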

The largest changes of water vapor with temperature occur in the boundary layer and the upper troposphere.

They note:

The water vapor in the region right above the tropical convective boundary layer has the weakest dependence on the local temperature.

And also that the results are consistent with the conceptual picture put forward by Sun and Lindzen (1993). Well, it is the same De-Zheng Sun..

Vertical Structure of Water Vapor Variations

How well can we correlate what happens at the surface with what happens in the “free troposphere” (the atmosphere above the boundary layer)?

If we want to understand temperature vertically through the atmosphere, we find it correlates very well with the surface temperature. Probably not a surprise to anyone.

If we want to understand variations of specific humidity in the upper troposphere, we find (Sun & Oort find) that it doesn’t correlate very well with specific humidity in the boundary layer.

Sun & Oort (1995)

Take a look at (b) – this is the correlation of local temperature at any height with the surface temperature below. There is a strong correlation and no surprise.

Then look at (a) – this is the correlation of specific humidity at any height with the surface specific humidity. We can see that the correlation reduces the higher up we go.

This demonstrates that the vertical movement of water vapor is not an easy subject to understand.

Sun and Oort also comment on Raval and Ramanathan (1989), the source of the bulk of Clouds and Water Vapor – Part One:

Raval and Ramanathan (1989) were probably the first to use observational data to determine the nature of water vapor feedback in global warming. They examined the relationship between sea surface temperature and the infrared flux at the top of the atmosphere for clear sky conditions. They derived the relationship from the geographical variations..

However, whether the tropospheric water vapor content at all levels is positively correlated with the sea surface temperature is not clear. More importantly, the air must be subsiding in clear-sky regions. When there is a large-scale subsidence, the influence from the sea is restricted to a shallow boundary layer and the free tropospheric water vapor content and temperature are physically decoupled from the sea surface temperature underneath.

Thus, it may be questionable to attribute the relationships obtained in such a way to the properties of moist convection.

Conclusion

The subject of water vapor feedback is not a simple one.

In their analysis of long-term data, Sun and Oort found that water vapor variations with temperature in the tropical domain did not match constant relative humidity.

They also, like most papers, caution against drawing too much from their results. They note problems in radiosonde data, and also that statistical relationships observed from inter-annual variability may not be the same as those due to global warming from increased “greenhouse” gases.

Articles in this Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

References

How Dry is the Tropical Free Troposphere? Implications for Global Warming Theory,
Spencer & Braswell, Bulletin of the American Meteorological Society (1997)

Humidity-Temperature Relationships in the Tropical Troposphere, Sun & Oort, Journal of Climate (1995)

Distribution of Tropical Tropospheric Water Vapor, Sun & Lindzen, Journal of Atmospheric Sciences (1993)

Sensitivity of the Earth’s Climate to height-dependent changes in the water vapor mixing ratio, Shine & Sinha, Nature (1991)

Some Coolness concerning Global Warming, Lindzen, Bulletin of the American Meteorological Society (1990)

Notes

Note 1 – The total amount of water vapor, TPW ( total precipitable water), is obviously something we want to know, but we don’t have enough information if we don’t know the distribution of this water vapor with height. It’s a shame, because TPW is the easiest value to measure via satellite.

Note 2 – Obviously the total mass of air is conserved. If small areas have rapidly rising air, larger areas will have slower subsiding air.

Read Full Post »

Without a firm grasp on the basics it can be hard to choose between a good and bad explanation.

In The Hoover Incident I explained what would happen if the atmosphere didn’t absorb or emit radiation – i.e., if the radiatively active gases were “hoovered up”. Have a read of that post for a full explanation, but the essence of it is that with no atmospheric absorption or radiation the surface would be radiating around 390 W/m² into space while receiving only 240 W/m² from the sun. Therefore, the earth would cool down until it was only radiating 240 W/m² (it’s slightly more complicated) – leading to a surface temperature around -18°C (255K).

One of the statements I made was:

And no matter what happens to convection, lapse rates, and rainfall this cooling will continue. That’s because these aspects of the climate only distribute the heat.

Nothing can stop the radiation loss from the surface because the atmosphere is no longer absorbing radiation. Convection, lapse rates and rainfall might enhance or reduce the cooling by changing the surface temperature in some way – because radiation emitted by the surface is a function of temperature (proportional to T⁴). But while energy out > energy in, the climate system would be cooling.

Recently, one commenter said in response to this (but in another article):

Convection etc distributes the heat in ways that affect radiative heat transport from the surface. While the intensity of radiation is a function of temperature, radiative heat transport is a function of a temperature difference. No heat may be exchanged between regions with the same temperature.

In an isothermal atmosphere there would be no temperature difference between it and the surface and therefore no heat loss from the surface. The accumulation of heat in a radiatively-constrained atmosphere by non-radiative means of heat transport from the surface would produce an isothermal atmosphere. Then, energy out = 0 and < energy in and the climate system would be heating.

The comment is confused, so I thought it was worth explaining in some detail.

Radiation and Temperature

Here is the starting point for the Hoover Incident:

This isn’t showing any heat transfer by conduction or convection, to keep the diagram simple.

The blue area – the troposphere, or lower atmosphere – is shown with a gap between it and the earth’s surface. This is just to make heat transfer values clearer – there isn’t really a gap. Notice that no radiation is emitted by the atmosphere (because this is a thought experiment where radiatively-active gases have been “hoovered” up).

Note as well that we are looking at averages in this diagram. The solar radiation absorbed in any one place is very rarely 240 W/m² – at night it is zero, and at midday in the tropics it is closer to 1000 W/m². If you want to understand why the average value of solar radiation absorbed is 240 W/m², take a look at Earth’s Energy Budget – Part One.

Rather than thinking of this as the average, if it helps, simply think of this as the heat transfer for one location where these are the actual values.

The equation for the emission of thermal radiation by the earth’s surface is only dependent on its temperature and emissivity. The equation is the well-known Stefan-Boltzmann law:

j = εσT⁴

where ε = emissivity, σ = 5.67×10⁻⁸ W/(m²K⁴), T is temperature in K, and j is energy per second per unit area (W/m²)

Emissivity is a value between 0 and 1, where 1 is a “blackbody” or perfect radiator. The surface of the earth has an emissivity very close to 1. See The Dull Case of Emissivity and Average Temperatures.

Now regardless of any heat transfer by conduction or convection with the atmosphere, the surface of the earth will continue to radiate in accordance with that equation. With an emissivity of 1, a surface of 15°C (288K) radiates 390 W/m².

Emission of thermal radiation is independent of any other heat transfer mechanisms and only depends on the temperature of the body and its emissivity.

Now the earth also absorbs solar energy by radiation. So for our initial conditions, the net heat transfer by radiation,

Hrad = 240 − 390 = −150 W/m² (i.e., a cooling of 150 W/m²)
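As a quick check of these numbers:

sigma   = 5.67e-8;          % Stefan-Boltzmann constant, W/(m^2 K^4)
emitted = sigma*288^4       % = 390.1 W/m^2 from a 288 K surface with emissivity 1
Hrad    = 240 - emitted     % = -150 W/m^2, i.e., a net radiative cooling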

The only heat transfer mechanism in a vacuum is radiation and therefore heat can only be transferred into and out of the total climate system by radiation. In our thought experiment the atmosphere is unable to absorb or emit radiation.

Therefore the solar energy absorbed at the surface minus the energy radiated from the surface of the earth gives the net heat transfer for the entire climate system.

Radiation, Sensible and Latent Heat

Now with the particular example above let’s add heat transfer between the surface and the atmosphere by conduction and convection. This is often termed sensible heat. Gases have a very low thermal conductivity, so most heat will be transferred by convection (bulk movement of air). This will also include latent heat, which is the heat used in evaporation of water from the surface of the earth.

There is no simple formula for convection because it depends on many factors including the speed of the air movement. The formula for latent heat removal is also complex. So to get started we will use the average value derived by Kiehl and Trenberth in their well-known 1997 paper. Note that their calculation of latent heat was derived from the amount of rainfall (what comes down, must have been evaporated up in the first place).

Here is the updated diagram, still showing the initial conditions, just after the Hoover Incident has taken place:

Note that the conduction and convection from the atmosphere into space = 0 W/m².

And with conduction and convection it is conventional to show the net flow of heat – which is why there is no arrow with heat from the atmosphere to the surface. (With radiation, because heat is exchanged across distances it is more usual to show radiation emitted from each body).

What happens now?

Calculating dynamic processes is more difficult, especially if we want to do it for all points on the earth.

What everyone should be able to see is that the surface of the earth is losing heat.

If we use the value from K&T for sensible and latent heat removal, we can see that net heat transfer from the surface of the earth at time = 0 is now 252 W/m². That is, a cooling of 252 W/m².

Let’s consider the atmosphere. It is gaining heat from the surface of the earth, and not radiating it into space (or back to the surface), because in this post-Hoover world we have an atmosphere with no ability to emit radiation.

Therefore, within a relatively short space of time, the heat transfer (averaged around the globe) between the surface of the earth and the atmosphere will drop to almost zero. If the atmosphere heats up and the earth’s surface cools down – the result has to be that this heat transfer reduces.

But whatever happens to the temperature difference and heat transfer between the atmosphere and the earth’s surface – the radiation from the earth’s surface into space will still follow the Stefan-Boltzmann law and be proportional to T⁴. The only way this can change is if fundamental physics turns out to be wrong..

Dynamic Situation

How fast will the earth’s surface cool down?

This is a more challenging question. It involves calculating the heat flow out from the rocks, soil, sand, vegetation and, most importantly, from the oceans. For each of these materials we would need to know the thermal diffusivity, which is the ratio of the thermal conductivity (how well heat travels through a material) to the volumetric heat capacity (how much heat is stored in a material per K of temperature change). As the earth’s surface cools down, the rate of heat loss from radiation will reduce. This is because, using our earlier equation with the term for radiation stated explicitly:

Hrad = 240 − εσT⁴ W/m² ( = solar radiation absorbed − radiation emitted from the surface)

That is, the solar radiation absorbed stays constant while the radiation emitted reduces as the temperature decreases. It’s not so easy to visualize if you haven’t seen this kind of function before. Here is a very simple model of how the temperature (and net radiation) might change with time:

Click for a larger view

This graph is calculated by assuming that the climate system’s heat is stored in an ocean 4km deep with a very high thermal conductivity – that is, heat can flow from the depths of the ocean to the surface with almost no resistance.
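For anyone who wants to experiment, a few lines of Matlab reproduce this kind of curve. The depth and heat capacity are the assumptions stated above; perfect mixing is represented by treating the whole ocean as a single slab:

% Single well-mixed slab - the "very high conductivity" limit
sigma = 5.67e-8;  rho_c = 4.18e6;    % W/(m^2 K^4); J/(m^3 K)
C = rho_c*4000;                      % heat capacity of a 4 km ocean column, J/(m^2 K)
T = 288;  dt = 86400;                % start at 288 K; one-day time steps
for n = 1:500*365                    % run for 500 years
    Hrad = 240 - sigma*T^4;          % net radiation, W/m^2 (negative = cooling)
    T = T + Hrad*dt/C;               % the whole slab warms or cools together
end
fprintf('T after 500 years = %.1f K\n', T);

With these assumptions the slab takes a few centuries to get close to the 255K equilibrium, with the rate of cooling slowing all the way down as T⁴ falls.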

A more complete treatment takes account of the thermal conductivity of water. This value varies greatly depending on whether the water is still or well-mixed.

Below, the graph on the left shows the surface temperature against time. The graph on the right is more interesting and shows the temperature profile against depth of ocean for a few different times:

Click for a larger view

In this right-hand graph the lower curves are later times. The initial temperature profile against depth is a straight line from 288K at the surface to 273K at 4000m – this is my assumption for the initial conditions.

What you can see from this graph is that the surface is much better at radiating heat away than the ocean is at conducting heat from its depths to the surface. That’s why the temperature stays higher for longer lower down in the ocean.

In this more thorough treatment the surface cools more quickly initially but takes much longer to reach the equilibrium of 255K.
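Here is a sketch of the depth-resolved version, assuming the conductivity of still water and crude 40m layers (a realistic treatment would use an effective diffusivity for mixing and finer resolution near the surface):

% Depth-resolved cooling: conduction only, with a radiating surface
sigma = 5.67e-8;  rho_c = 4.18e6;    % W/(m^2 K^4); J/(m^3 K)
k  = 0.6;                            % thermal conductivity of still water, W/(m K)
N  = 100;  dzl = 40;  dt = 86400;    % 100 layers x 40 m; one-day time steps
T  = linspace(288, 273, N)';         % initial profile: 288 K surface to 273 K at 4 km
for n = 1:100*365                    % 100 years
    F    = -k*diff(T)/dzl;           % downward conductive flux between layers, W/m^2
    Ftop = [240 - sigma*T(1)^4; F];  % net radiation enters and leaves at the surface
    Fbot = [F; 0];                   % insulated bottom boundary
    T    = T + (Ftop - Fbot)*dt/(rho_c*dzl);
end
fprintf('surface %.1f K, bottom %.1f K after 100 years\n', T(1), T(end));

With these assumptions the surface falls to around 255K within a few years, while the bottom layers are still close to their starting temperatures after a century – the behaviour described above.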

And of course, alert readers will have noticed that it all changes when the surface freezes as the heat conductivity through ice will be different, and the albedo of the earth will change..

In fact, the problem can be made more and more complex, but that doesn’t change the essential elements.

Conclusion

Heat transfer by radiation is conceptually simple.

A surface emits radiation with a well-known formula which depends on temperature of that surface (and its emissivity).

The net heat transfer by radiation depends on how much radiation is incident on that surface from other bodies – whether near or far – and what proportion is absorbed.

In the case of a planet with an atmosphere – if the atmosphere cannot absorb or emit radiation then the equilibrium condition for that climate system will be where the radiation emitted by the planetary surface equals the radiation absorbed by the planetary surface.

Read Full Post »
