
Archive for the ‘Basic Science’ Category

[I started writing this some time ago and got side-tracked: initially because aerosol interactions in clouds and rainfall are fascinating, with lots of current research, and then because there are many papers on higher-resolution simulations of convection that also looked interesting. So I decided to post it less than complete, since it will be some time before I can put together a fuller article.]

In Part Four of this series we looked at the paper by Mauritsen et al (2012). Isaac Held has a very interesting post on his blog (people interested in understanding climate science will benefit from reading his blog – he has been in the field writing papers for 40 years). He highlighted this paper: Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz, and Hiram Levy II, GRL (2013).

Their paper has many similarities to Mauritsen et al (2012). Here are some of their comments:

Climate models incorporate a number of adjustable parameters in their cloud formulations. They arise from uncertainties in cloud processes. These parameters are tuned to achieve a desired radiation balance and to best reproduce the observed climate. A given radiation balance can be achieved by multiple combinations of parameters. We investigate the impact of cloud tuning in the CMIP5 GFDL CM3 coupled climate model by constructing two alternate configurations.

They achieve the desired radiation balance using different, but plausible, combinations of parameters. The present-day climate is nearly indistinguishable among all configurations.

However, the magnitude of the aerosol indirect effects differs by as much as 1.2 W/m², resulting in significantly different temperature evolution over the 20th century..

 

..Uncertainties that arise from interactions between aerosols and clouds have received considerable attention due to their potential to offset a portion of the warming from greenhouse gases. These interactions are usually categorized into first indirect effect (“cloud albedo effect”; Twomey [1974]) and second indirect effect (“cloud lifetime effect”; Albrecht [1989]).

Modeling studies have shown large spreads in the magnitudes of these effects [e.g., Quaas et al., 2009]. CM3 [Donner et al., 2011] is the first Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model to represent indirect effects.

As in other models, the representation in CM3 is fraught with uncertainties. In particular, adjustable cloud parameters used for the purpose of tuning the model radiation can also have a significant impact on aerosol effects [Golaz et al., 2011]. We extend this previous study by specifically investigating the impact that cloud tuning choices in CM3 have on the simulated 20th century warming.

What did they do?

They adjusted the “autoconversion threshold radius”, which controls when water droplets turn into rain.

Autoconversion converts cloud water to rain. The conversion occurs once the mean cloud droplet radius exceeds rthresh. Larger rthresh delays the formation of rain and increases cloudiness.

The default in CM3 was 8.2 μm. They selected alternate values from other GFDL models: 6.0 μm (CM3w) and 10.6 μm (CM3c). Of course, they then have to adjust other parameters to achieve radiation balance – the “erosion time” (a lateral mixing effect reducing water in clouds), which they note is poorly constrained (that is, we don’t have external knowledge of the correct value for this parameter), and the “velocity variance”, which affects how quickly water vapor condenses out onto aerosols.
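To make the autoconversion idea concrete, here is a minimal sketch of a Kessler-style threshold rule (not the actual CM3 scheme, just an illustration of how a tunable threshold radius gates the conversion of cloud water to rain). The droplet number, cloud water content and rate coefficient are invented for the example:

```python
import numpy as np

def mean_droplet_radius(q_cloud, n_droplets, rho_air=1.2, rho_water=1000.0):
    """Mean droplet radius (m) for cloud water mixing ratio q_cloud (kg/kg)
    and droplet number concentration n_droplets (per m^3 of air)."""
    water_per_droplet = rho_air * q_cloud / n_droplets   # kg of liquid per droplet
    volume = water_per_droplet / rho_water                # m^3 per droplet
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)

def autoconversion_rate(q_cloud, n_droplets, r_thresh, rate_coeff=1e-3):
    """Illustrative autoconversion rate (kg/kg/s): zero until the mean droplet
    radius exceeds r_thresh, then proportional to the cloud water content."""
    r_mean = mean_droplet_radius(q_cloud, n_droplets)
    return rate_coeff * q_cloud if r_mean > r_thresh else 0.0

# Same cloud, three candidate thresholds (the values tested by Golaz et al)
q_cloud, n_droplets = 2.5e-4, 1.0e8    # 0.25 g/kg of cloud water, 100 droplets/cm^3
for r_thresh in (6.0e-6, 8.2e-6, 10.6e-6):
    rate = autoconversion_rate(q_cloud, n_droplets, r_thresh)
    print(f"r_thresh = {r_thresh * 1e6:4.1f} um -> rain production {rate:.1e} kg/kg/s")
```

With these made-up inputs the mean droplet radius works out to about 9 μm, so the 6.0 and 8.2 μm thresholds produce rain while the 10.6 μm threshold suppresses it, which is the sense in which a larger r_thresh delays rain and increases cloudiness (and hence reflected sunlight), forcing other parameters to be retuned to restore the radiation balance.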

Here is the time evolution in the three models (and also observations):

 

From Golaz et al 2013

Figure 1 – Click to enlarge

In terms of present day climatology, the three variants are very close, but in terms of 20th century warming two variants are very different and only CM3w is close to observations.

Here is their conclusion, well worth studying. I reproduce it in full:

CM3w predicts the most realistic 20th century warming. However, this is achieved with a small and less desirable threshold radius of 6.0 μm for the onset of precipitation.

Conversely, CM3c uses a more desirable value of 10.6 μm but produces a very unrealistic 20th century temperature evolution. This might indicate the presence of compensating model errors. Recent advances in the use of satellite observations to evaluate warm rain processes [Suzuki et al., 2011; Wang et al., 2012] might help understand the nature of these compensating errors.

CM3 was not explicitly tuned to match the 20th century temperature record.

However, our findings indicate that uncertainties in cloud processes permit a large range of solutions for the predicted warming. We do not believe this to be a peculiarity of the CM3 model.

Indeed, numerous previous studies have documented a strong sensitivity of the radiative forcing from aerosol indirect effects to details of warm rain cloud processes [e.g., Rotstayn, 2000; Menon et al., 2002; Posselt and Lohmann, 2009; Wang et al., 2012].

Furthermore, in order to predict a realistic evolution of the 20th century, models must balance radiative forcing and climate sensitivity, resulting in a well-documented inverse correlation between forcing and sensitivity [Schwartz et al., 2007; Kiehl, 2007; Andrews et al., 2012].

This inverse correlation is consistent with an intercomparison-driven model selection process in which “climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication” [Mauritsen et al., 2012].

Very interesting paper, and freely available. Kiehl’s paper, referenced in the conclusion, is also well worth reading. In his paper he shows that models with the highest sensitivity to GHGs have the largest negative forcing from 20th century aerosols, while the models with the lowest sensitivity to GHGs have the smallest negative aerosol forcing. Therefore, both ends of the range can reproduce 20th century temperature anomalies, while suggesting very different 21st century temperature evolution.

A paper on higher-resolution modeling, Seifert et al (2015), ran model experiments using “large eddy simulations” (LES), which are much higher resolution than GCMs. The best-resolution GCMs today typically have a grid size around 100 km x 100 km. Their LES model had a grid size of 25 m x 25 m, with 2048 x 2048 x 200 grid points spanning a simulated volume of 51.2 km x 51.2 km x 5 km, and it ran for a simulated 60-hour time span.

They had this to say about the aerosol indirect effect:

It has also been conjectured that changes in CCN might influence cloud macrostructure. Most prominently, Albrecht [1989] argued that processes which reduce the average size of cloud droplets would retard and reduce the rain formation in clouds, resulting in longer-lived clouds. Longer living clouds would increase cloud cover and reflect even more sunlight, further cooling the atmosphere and surface. This type of aerosol-cloud interaction is often called a lifetime effect. Like the Twomey effect, the idea that smaller particles will form rain less readily is based on sound physical principles.

Given this strong foundation, it is somewhat surprising that empirical evidence for aerosol impacts on cloud macrophysics is so thin.

Twenty-five years after Albrecht’s paper, the observational evidence for a lifetime effect in the marine cloud regimes for which it was postulated is limited and contradictory. Boucher et al. [2013] who assess the current level of our understanding, identify only one study, by Yuan et al. [2011], which provides observational evidence consistent with a lifetime effect. In that study a natural experiment, outgassing of SO2 by the Kilauea volcano is used to study the effect of a changing aerosol environment on cloud macrophysical processes.

But even in this case, the interpretation of the results are not without ambiguity, as precipitation affects both the outgassing aerosol precursors and their lifetime. Observational studies of ship tracks provide another inadvertent experiment within which one could hope to identify lifetime effects [Conover, 1969; Durkee et al., 2000; Hobbs et al., 2000], but in many cases the opposite response of clouds to aerosol perturbations is observed: some observations [Christensen and Stephens, 2011; Chen et al., 2012] are consistent with more efficient mixing of smaller cloud drops leading to more rapid cloud desiccation and shorter lifetimes.

Given the lack of observational evidence for a robust lifetime effect, it seems fair to question the viability of the underlying hypothesis.

In their paper they show a graphic of what their model produced; it’s not the main dish, but it’s interesting for its realism:

From Seifert et al 2015

Figure 2 – Click to expand

It is an involved paper, but here is one of the conclusions, relevant for the topic at hand:

Our simulations suggest that parameterizations of cloud-aerosol effects in large-scale models are almost certain to overstate the impact of aerosols on cloud albedo and rain rate. The process understanding on which the parameterizations are based is applicable to isolated clouds in constant environments, but necessarily neglects interactions between clouds, precipitation, and circulations that, as we have shown, tend to buffer much of the impact of aerosol change.

For people new to parameterizations, a couple of articles that might be useful:

Conclusion

Climate models necessarily contain some massive oversimplifications, as we can see from the large eddy simulation with its 25 m x 25 m grid cells, while GCMs have 100 km x 100 km at best. Even LES models have simplifications – to get to direct numerical simulation (DNS) of the equations for turbulent flow we would need a grid scale closer to a few millimeters rather than meters.

The over-simplifications in GCMs require “parameters” which are not actually intrinsic material properties, but are more an average of some part of a climate process over a large scale. (Note that even if we had the resolution for the actual fundamental physics we wouldn’t necessarily know the material parameters needed, especially in the case of aerosols, which are heterogeneous in time and space.)

As the climate changes will these “parameters” remain constant, or change as the climate changes?

References

Cloud tuning in a coupled climate model: Impact on 20th century warming, Jean-Christophe Golaz, Larry W. Horowitz, and Hiram Levy II, GRL (2013) – free paper

Twentieth century climate model response and climate sensitivity, Jeffrey T. Kiehl, GRL (2007) – free paper

Large-eddy simulation of the transient and near-equilibrium behavior of precipitating shallow convection, Axel Seifert et al, Journal of Advances in Modeling Earth Systems (2015) – free paper


In Part VI we looked at past and projected sea level rise. There is significant uncertainty in future sea level rise, even assuming we know the future global temperature change. The uncertainty results from “how much ice will melt?”

We can be reasonably sure of sea level rise from thermal expansion (so long as we know the temperature). By contrast, we don’t have much confidence in the contribution from melting ice (on land). This is because ice sheet dynamics (glaciers, Greenland & Antarctic ice sheet) are non-linear and not well understood.

Here’s something surprising. Suppose you live in Virginia near the ocean. And suppose all of the Greenland ice sheet melted in a few years (not possible, but just suppose). How much would sea level change in Virginia? Hint: the entire Greenland ice sheet converted into global mean sea level is about 7m.

Zero change in Virginia.

Here are charts of relative sea level change across the globe for Greenland & West Antarctica, based on a 1mm/yr contribution from each location – click to expand:

From Tamisiea 2011

Figure 1 – Click to Expand

We see that the sea level actually drops close to Greenland, stays constant around mid-northern latitudes in the Atlantic and rises in other locations. The reason is simple – the Greenland ice sheet is a local gravitational attractor and is “pulling the ocean up” towards Greenland. Once it is removed, the local sea level drops.

Uncertainty

If we knew for sure that the global mean temperature in 2100 would be +2ºC or +3ºC compared to today we would have a good idea in each case of the sea level rise from thermal expansion. But not much certainty on any rise from melting ice sheets.

Let’s consider someone thinking about the US for planning purposes. If the Greenland ice sheet contributes lots of melting ice, the sea level on the US Atlantic coast won’t be affected at all and the increase on the Pacific coast will be significantly less than the overall sea level rise. In this case, the big uncertainty in the magnitude of sea level rise is not much of a factor for most of the US.

If the West Antarctic ice sheet contributes lots of melting ice, the sea level on the east and west coasts of the US will be affected by more than the global mean sea level rise.

For example, imagine the sea level was expected to rise 0.3 m from thermal expansion by 2100, but there is a fear that melting ice will add 0–0.5 m of global rise. A US policymaker really needs to know which ice sheet will melt. “We expect at most an additional 0.5 m from melting ice” tells her that if Greenland melts she might see, in total, a maximum rise of about 0.3 m on the east coast and a little more than 0.3 m on the west coast; but if West Antarctica melts she might instead see, in total, a maximum of almost 1 m on each coast.
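Here is that back-of-envelope arithmetic as a sketch. The regional “fingerprint” factors are loose illustrative assumptions (roughly zero for the US east coast and well under one for the west coast if Greenland melts; somewhat above one for both coasts if West Antarctica melts), not values taken from Tamisiea & Mitrovica:

```python
# Total coastal rise ~ thermal expansion + (regional fingerprint factor) x (global-mean rise from ice melt)
thermal_expansion = 0.3   # m by 2100, assumed known from the warming scenario
ice_contribution = 0.5    # m global-mean, assumed upper bound from melting ice

# (east coast factor, west coast factor) -- illustrative assumptions only
fingerprints = {
    "Greenland melt":      (0.0, 0.3),
    "West Antarctic melt": (1.2, 1.2),
}

for source, (east, west) in fingerprints.items():
    east_total = thermal_expansion + east * ice_contribution
    west_total = thermal_expansion + west * ice_contribution
    print(f"{source:22s}: east coast ~{east_total:.2f} m, west coast ~{west_total:.2f} m")
```

Same global numbers, very different local outcomes, depending on which ice sheet supplies the water.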

The source of the melting ice just magnifies the uncertainty for policy and economics.

If this 10th-century legend were still with us maybe it would be different (we only have his tweets):

Donaeld the Unready

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

References

The moving boundaries of sea level change: Understanding the origins of geographic variability, ME Tamisiea & JX Mitrovica, Oceanography (2011)


In Part II we looked at various scenarios for emissions. One important determinant is how the world population will change through this century, so I thought it worth digging into that topic a little.

Here is Lutz, Sanderson & Scherbov, Nature (2001):

The median value of our projections reaches a peak around 2070 at 9.0 billion people and then slowly decreases. In 2100, the median value of our projections is 8.4 billion people with the 80 per cent prediction interval bounded by 5.6 and 12.1 billion.

From Lutz 2001

Figure 1 – Click to enlarge

This paper is behind a paywall but Lutz references the 1996 book he edited for assumptions, which is freely available (link below).

In it the authors comment, p. 22:

Some users clearly want population figures for the year 2100 and beyond. Should the demographer disappoint such expectations and leave it to others with less expertise to produce them? The answer given in this study is no. But as discussed below, we make a clear distinction between what we call projections up to 2030-2050 and everything beyond that time, which we term extensions for illustrative purposes.

[Emphasis added]

And then p.32:

Sanderson (1995) shows that it is impossible to produce “objective” confidence ranges for future population projections. Subjective confidence intervals are the best we can ever attain because assumptions are always involved.

Here are some more recent views.

Gerland et al 2014 – Gerland is from the Population Division of the UN:

The United Nations recently released population projections based on data until 2012 and a Bayesian probabilistic methodology. Analysis of these data reveals that, contrary to previous literature, world population is unlikely to stop growing this century. There is an 80% probability that world population, now 7.2 billion, will increase to between 9.6 and 12.3 billion in 2100. This uncertainty is much smaller than the range from the traditional UN high and low variants. Much of the increase is expected to happen in Africa, in part due to higher fertility and a recent slowdown in the pace of fertility decline..

..Among the most robust empirical findings in the literature on fertility transitions are that higher contraceptive use and higher female education are associated with faster fertility decline. These suggest that the projected rapid population growth could be moderated by greater investments in family planning programs to satisfy the unmet need for contraception, and in girls’ education. It should be noted, however, that the UN projections are based on an implicit assumption of a continuation of existing policies, but an intensification of current investments would be required for faster changes to occur

Wolfgang Lutz & Samir KC (2010). Lutz seems popular in this field:

The total size of the world population is likely to increase from its current 7 billion to 8–10 billion by 2050. This uncertainty is because of unknown future fertility and mortality trends in different parts of the world. But the young age structure of the population and the fact that in much of Africa and Western Asia, fertility is still very high makes an increase by at least one more billion almost certain. Virtually, all the increase will happen in the developing world. For the second half of the century, population stabilization and the onset of a decline are likely..

Although the paper doesn’t focus on 2100, only up to 2050, it does include a graph of probabilistic expectations to 2100, and it has some interesting commentary on how different forecasting groups deal with uncertainty, how women’s education plays a huge role in reducing fertility, and many other stories, for example:

The Demographic and Health Survey for Ethiopia, for instance, shows that women without any formal education have on average six children, whereas those with secondary education have only two (see http://www.measuredhs.com). Significant differentials can be found in most populations of all cultural traditions. Only in a few modern societies does the strongly negative association give way to a U-shaped pattern in which the most educated women have a somewhat higher fertility than those with intermediate education. But globally, the education differentials are so pervasive that education may well be called the single most important observable source of population heterogeneity after age and sex (Lutz et al. 1999). There are good reasons to assume that during the course of a demographic transition the fact that higher education leads to lower fertility is a true causal mechanism, where education facilitates better access to and information about family planning and most importantly leads to a change in attitude in which ‘quantity’ of children is replaced by ‘quality’, i.e. couples want to have fewer children with better life chances..

Lee 2011, another very interesting paper, makes this comment:

The U.N. projections assume that fertility will slowly converge toward replacement level (2.1 births per woman) by 2100

Lutz’s book had a similar hint that many demographers assume societies en masse will somehow converge towards a steady state. Lee also comments that probability treatments for “low”, “medium” and “high” scenarios are not very realistic, because the methods used assume a correlation between different countries that doesn’t hold in practice. Lutz makes similar points. Here is Lee:

Special issues arise in constructing consistent probability intervals for individual countries, for regions, and for the world, because account must be taken of the positive or negative correlations among the country forecast errors within regions and across regions. Since error correlation is typically positive but less than 1.0, country errors tend to cancel under aggregation, and the proportional error bounds for the world population are far narrower than for individual countries. The NRC study (20) found that the average absolute country error was 21% while the average global error was only 3%. When the High, Medium and Low scenario approach is used, there is no cancellation of error under aggregation, so the probability coverage at different levels of aggregation cannot be handled consistently. An ongoing research collaboration between the U.N. Population Division and a team led by Raftery is developing new and very promising statistical methods for handling uncertainty in future forecasts.
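The 21% and 3% figures come from the NRC study Lee cites. Just to illustrate qualitatively why partially correlated errors shrink under aggregation, here is a toy Monte Carlo sketch; the error size, correlation and number of countries are invented and are not meant to reproduce the NRC numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_sims = 150, 20000
sigma = 0.21    # per-country proportional forecast error (standard deviation), invented
rho = 0.05      # assumed pairwise error correlation between countries, invented

# Correlated proportional errors: a shared component plus country-specific noise
common = rng.normal(size=(n_sims, 1))
specific = rng.normal(size=(n_sims, n_countries))
errors = sigma * (np.sqrt(rho) * common + np.sqrt(1.0 - rho) * specific)

avg_country_error = np.mean(np.abs(errors))              # typical error for one country
avg_global_error = np.mean(np.abs(errors.mean(axis=1)))  # error of the aggregate "world"

print(f"average country error ~{avg_country_error:.0%}, aggregated global error ~{avg_global_error:.0%}")
```

Because the correlation is well below 1, much of the country-level error cancels in the world total; the high/medium/low scenario approach, by contrast, implicitly treats all countries as moving together.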

And then on UN projections:

One might quibble with this or that assumption, but the UN projections have had an impressive record of success in the past, particularly at the global level, and I expect that to continue in the future. To a remarkable degree, the UN has sought out expert advice and experimented with cutting edge forecasting techniques, while maintaining consistency in projections. But in forecasting, errors are inevitable, and sound decision making requires that the likelihood of errors be taken into account. In this respect, there is much room for improvement in the UN projections and indeed in all projections by government statistical offices.

This comment looks like a gentle, oblique academic slap (disguised as praise), but it’s hard to tell.

Conclusion

I don’t have a conclusion. I thought it would be interesting to find some demographic experts and show their views on future population trends. The future is always hard to predict – although in demography the next 20 years are usually easy to predict, short of global plagues and famines.

It does seem hard to have much idea about the population in 2100, but the difference between a population of 8bn and 11bn will have a large impact on CO2 emissions (without very significant CO2 mitigation policies).

References

The end of world population growth, Wolfgang Lutz, Warren Sanderson & Sergei Scherbov, Nature (2001) – paywall paper

The future population of the world – what can we assume?, edited Wolfgang Lutz, Earthscan Publications (1996) – freely available book

World Population Stabilization Unlikely This Century, Patrick Gerland et al, Science (2014) – free paper

Dimensions of global population projections: what do we know about future population trends and structures? Wolfgang Lutz & Samir KC, Phil. Trans. R. Soc. B (2010)

The Outlook for Population Growth, Ronald Lee, Science (2011) – free paper


In Planck, Stefan-Boltzmann, Kirchhoff and LTE one of our commenters asked a question about emissivity. The first part of that article is worth reading as a primer in the basics for this article. I don’t want to repeat all the basics, except to say that if a body is a “black body” it emits radiation according to a simple formula. This is the maximum that any body can emit. In practice, a body will emit less.

The ratio between actual and the black body is the emissivity. It has a value between 0 and 1.

The question that this article tries to help readers understand is the origin and use of the emissivity term in the Stefan-Boltzmann equation:

E = ε’σT⁴

where E = total flux, ε’ = “effective emissivity” (a value between 0 and 1), σ is the Stefan-Boltzmann constant and T = temperature in Kelvin (i.e., absolute temperature).

The term ε’ in the Stefan-Boltzmann equation is not really a constant. But it is often treated as a constant in articles related to climate. Is this valid? Not valid? Why is it not a constant?

There is a constant material property called emissivity, but it is a function of wavelength. For example, if we found that the emissivity of a body at 10.15 μm was 0.55 then this would be the same regardless of whether the body was in Antarctica (around 233K = -40ºC), the tropics (around 303K = 30ºC) or at the temperature of the sun’s surface (5800K). How do we know this? From experimental work over more than a century.

Hopefully some graphs will illuminate the difference between emissivity the material property (that doesn’t change), and the “effective emissivity” (that does change) we find in the Stefan-Boltzmann equation. In each graph you can see:

  • (top) the blackbody curve
  • (middle) the emissivity of this fictional material as a function of wavelength
  • (bottom) the actual emitted radiation due to the emissivity – and a calculation of the “effective emissivity”.

The calculation of “effective emissivity” = total actual emitted radiation / total blackbody emitted radiation (note 1).
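As a rough numerical sketch of that calculation, here is Planck’s law integrated over wavelength and weighted by an invented emissivity curve (near zero below 4 μm, 0.55 above). It is not the emissivity curve used for the graphs below, so the printed numbers won’t match the figures, but it shows the mechanics: the effective emissivity is a weighted average of the emissivity curve, with the weighting (the Planck curve) shifting to shorter wavelengths as the temperature rises:

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck(wl, T):
    """Blackbody spectral radiance per unit wavelength at temperature T (K)."""
    x = np.minimum(h * c / (wl * kB * T), 700.0)   # cap the exponent to avoid overflow
    return (2.0 * h * c**2 / wl**5) / np.expm1(x)

def emissivity(wl):
    """Invented material: emissivity ~0 below 4 um, 0.55 above."""
    return np.where(wl < 4e-6, 0.0, 0.55)

wl = np.linspace(0.01e-6, 50e-6, 20000)    # 0.01-50 um, the range used in note 1
dwl = wl[1] - wl[0]

for T in (288, 300, 400, 500, 5800):
    blackbody = np.sum(planck(wl, T)) * dwl                 # total blackbody emission
    actual = np.sum(emissivity(wl) * planck(wl, T)) * dwl   # actual emission of the material
    print(f"T = {T:4d} K -> effective emissivity = {actual / blackbody:.2f}")
```

At terrestrial temperatures almost all of the Planck curve sits above 4 μm, so the effective emissivity hugs the longwave value; at 5800 K almost none of it does, and the effective emissivity collapses towards zero.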

At 288K – effective emissivity = 0.49:

emissivity-288k

At 300K – effective emissivity = 0.49:

emissivity-300k

At 400K – effective emissivity = 0.44:

emissivity-400k

At 500K – effective emissivity = 0.35:

emissivity-500k

At 5800K, that is solar surface temperature — effective emissivity = 0.00 (note the scale on the bottom graph is completely different from the scale of the top graph):

emissivity-5800k

Hopefully this helps people trying to understand what emissivity really relates to in the Stefan-Boltzmann equation. It is not a constant except in rare cases. Treating it as a constant over a modest range of temperatures is a reasonable approximation (depending on the accuracy you want), but change the temperature “too much” and the “effective emissivity” can change massively.

As always with approximations and useful formulas, you need to understand the basis behind them to know when you can and can’t use them.

Any questions, just ask in the comments.

Note 1 – The flux was calculated for the wavelength range 0.01 μm to 50 μm. If you use the Stefan-Boltzmann equation for 288 K you will get E = 5.67×10⁻⁸ × 288⁴ ≈ 390 W/m². The reason my graph has 376 W/m² is that I don’t include the wavelength range from 50 μm to infinity. It doesn’t change the practical results you see.


Long before the printing of money, golden eggs were the only currency.

In a deep cave, goose Day-Lewis, the last of the gold-laying geese, was still at work.

Day-Lewis lived in the country known affectionately as Utopia. Every day, Day-Lewis laid 10 perfect golden eggs, and was loved and revered for her service. Luckily, everyone had read Aesop’s fables, and no one tried to kill Day-Lewis to get all those extra eggs out. Still Utopia did pay a few armed guards to keep watch for the illiterates, just in case.

Utopia wasn’t into storing wealth because it wanted to run some important social programs to improve the education and health of the country. Thankfully they didn’t run a deficit and issue bonds so we don’t need to get into any political arguments about libertarianism.

This article is about golden eggs.

Utopia employed the service of bunny Fred to take the golden eggs to the nearby country of Capitalism in return for services of education and health. Every day, bunny Fred took 10 eggs out of the country. Every day, goose Day-Lewis produced 10 eggs. It was a perfect balance. The law of conservation of golden eggs was intact.

Thankfully, history does not record any comment on the value of the services received for these eggs, or on the benefit to society of those services, so we can focus on the eggs story.

Due to external circumstances outside of Utopia’s control, on January 1st, the year of Our Goose 150, a new international boundary was created between Utopia and Capitalism. History does not record the complex machinations behind the reasons for this new border control.

However, as always with government organizations, things never go according to plan. On the first day, January 1st, there were paperwork issues.

Bunny Fred showed up with 10 golden eggs, and, what he thought was the correct paperwork. Nothing got through. Luckily, unlike some government organizations with wafer-thin protections for citizens’ rights, they didn’t practice asset forfeiture for “possible criminal activity we might dream up and until you can prove you earned this honestly we are going to take it and never give it back”. Instead they told Fred to come back tomorrow.

On January 2nd, Bunny Fred had another run at the problem and brought another 10 eggs. The export paperwork for the supply of education and health only allowed 10 golden eggs to be exported to Capitalism, so border control sent on the 10 eggs from January 1st and insisted Bunny Fred take the other 10 eggs back to Utopia.

On January 3rd, Bunny Fred, desperate to remedy the deficit of services in Utopia, took 20 eggs – 10 from Day-Lewis and 10 he had brought back from border control the day before.

Insistent on following their new ad hoc processes, border control could only send on 10 eggs to Capitalism. As they had no approved paperwork for “storing” extra eggs, they insisted that Fred take back the excess eggs.

Every day, the same result:

  • Day-Lewis produced 10 eggs, Bunny Fred took 20 eggs to border control
  • Border control sent 10 eggs to Capitalism, Bunny Fred brought 10 eggs back

One day some people who thought they understood the law of conservation of golden eggs took a look at the current situation and declared:

Heretics! This is impossible. Day-Lewis, last of the gold-laying geese, only produces 10 eggs per day. How can Bunny Fred be taking 20 eggs to border control?

You can’t create golden eggs! The law of conservation of golden eggs has been violated.

You can’t get more than 100% efficiency. This is impossible.

And in other completely unrelated stories:

A Challenge for Bryan & 

Do Trenberth and Kiehl understand the First Law of Thermodynamics? & Part Two & Part Three – The Creation of Energy?

and recent comments in CO2- An Insignificant Trace Gas? – Part One

Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics

The Three Body Problem


In Ensemble Forecasting I wrote a short section on parameterization using the example of latent heat transfer and said:

Are we sure that over Connecticut the parameter CDE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. In so far as it represents a real physical property it will vary depending on the time of day, seasons and other factors. It might even be, “on average”, wrong. Because “on average” over the set of experiments was an imperfect sample. And “on average” over all climate conditions is a different sample.

Interestingly, a new paper has just shown up in JGR (“accepted for publication” and on their website in the pre-publishing format): Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu.

They carried out detailed measurements over a large reservoir (134 km² and 4-8m deep) in Mississippi for the winter and summer months of 2008. What were they trying to do?

Understanding physical processes that control turbulent fluxes of energy, heat, water vapor, and trace gases over inland water surfaces is critical in quantifying their influences on local, regional, and global climate. Since direct measurements of turbulent fluxes of sensible heat (H) and latent heat (LE) over inland waters with eddy covariance systems are still rare, process-based understanding of water-atmosphere interactions remains very limited..

..Many numerical weather prediction and climate models use the bulk transfer relations to estimate H and LE over water surfaces. Given substantial biases in modeling results against observations, process-based analysis and model validations are essential in improving parameterizations of water-atmosphere exchange processes..

Before we get into their paper, here is a relevant quote on parameterization from a different discipline. This is from Turbulent dispersion in the ocean, Garrett (2006):

Including the effects of processes that are unresolved in models is one of the central problems in oceanography.

In particular, for temperature, salinity, or some other scalar, one seeks to parameterize the eddy flux in terms of quantities that are resolved by the models. This has been much discussed, with determinations of the correct parameterization relying on a combination of deductions from the large-scale models, observations of the eddy fluxes or associated quantities, and the development of an understanding of the processes responsible for the fluxes.

The key remark to make is that it is only through process studies that we can reach an understanding leading to formulae that are valid in changing conditions, rather than just having numerical values which may only be valid in present conditions.

[Emphasis added]

Background

Latent heat transfer is the primary mechanism globally for transferring the solar radiation that is absorbed at the surface up into the atmosphere. Sensible heat is a lot smaller by comparison with latent heat. Both are “convection” in a broad sense – the movement of heat by the bulk movement of air. But one carries the “extra” heat of evaporated water. When the evaporated water condenses (usually higher up in the atmosphere) it releases this stored heat.

Let’s take a look at the standard parameterization in use (adopting their notation) for latent heat:

LE = ρ_a L C_E U (q_w − q_a)

LE = latent heat transfer, ρ_a = air density, L = latent heat of vaporization (2.5×10⁶ J kg⁻¹), C_E = bulk transfer coefficient for moisture, U = wind speed, q_w & q_a are the respective specific humidities at the water-atmosphere interface and in the over-water atmosphere

The values ρ_a and L are fundamental values. The formula says that the key parameters are:

  • wind speed (horizontal)
  • the difference between the humidity at the water surface (this is the saturated value which varies strongly with temperature) and the humidity in the air above

We would expect the differential of humidity to be important – if the air above is saturated then latent heat transfer will be zero, because there is no way to move any more water vapor into the air above. At the other extreme, if the air above is completely dry then we have maximized the potential for moving water vapor into the atmosphere.

The product of wind speed and humidity difference indicate how much mixing is going on due to air flow. There is a lot of theory and experiment behind the ideas, going back into the 1950s or further, but in the end it is an over-simplification.
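To see how the bulk formula behaves, here is a minimal sketch with invented values for the transfer coefficient and the meteorological inputs; the saturation humidity uses a standard Magnus-type approximation:

```python
import numpy as np

def q_saturation(T_celsius, pressure_pa=101325.0):
    """Approximate saturation specific humidity (kg/kg) over water,
    from a Magnus-type saturation vapor pressure formula."""
    e_sat = 611.2 * np.exp(17.62 * T_celsius / (243.12 + T_celsius))   # Pa
    return 0.622 * e_sat / (pressure_pa - 0.378 * e_sat)

def latent_heat_flux(U, T_water_c, T_air_c, rh_air, C_E=1.2e-3, rho_a=1.2, L=2.5e6):
    """Bulk formula LE = rho_a * L * C_E * U * (q_w - q_a), in W/m^2.
    C_E is an arbitrary illustrative value, not a tuned model parameter."""
    q_w = q_saturation(T_water_c)              # saturated right at the water surface
    q_a = rh_air * q_saturation(T_air_c)       # air above the water
    return rho_a * L * C_E * U * (q_w - q_a)

# Same wind speed, different humidity deficits in the air above the water
for rh in (0.95, 0.7, 0.4):
    le = latent_heat_flux(U=5.0, T_water_c=25.0, T_air_c=24.0, rh_air=rh)
    print(f"RH = {rh:.2f} -> LE ~ {le:5.1f} W/m^2")
```

Everything interesting is buried in C_E: the formula can only respond linearly to U(q_w − q_a), which is exactly the assumption the measurements below put to the test.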

That’s what all parameterizations are – over-simplifications.

The real formula is much simpler:

LE = ρ_a L <w’q’>, where the angle brackets denote an average and w’q’ is the turbulent moisture flux

w is the upward velocity, q is the specific humidity, and the primes denote the turbulent (eddy) fluctuations

Note to commenters, if you write < or > in the comment it gets dropped because WordPress treats it like a html tag. You need to write &lt; or &gt;

The key part of this equation just says “how much moisture is being carried upwards by turbulent flow”. That’s the real value so why don’t we measure that instead?
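We can, if we have the instruments. Given fast (say 10 Hz) measurements of vertical velocity and humidity, the eddy-covariance calculation itself is just the covariance of the fluctuations. Here is a sketch with synthetic data standing in for real sonic anemometer and gas analyzer output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 30-minute record at 10 Hz: vertical wind w (m/s) and specific humidity
# q (kg/kg), constructed so that updrafts are slightly moister than downdrafts.
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)                        # turbulent vertical velocity
q = 0.012 + 5e-4 * w + rng.normal(0.0, 3e-4, n)    # moist updrafts, drier downdrafts

rho_a, L = 1.2, 2.5e6

# Eddy covariance: remove the means, then average the product of the fluctuations
w_prime = w - w.mean()
q_prime = q - q.mean()
LE = rho_a * L * np.mean(w_prime * q_prime)        # = rho_a * L * <w'q'>

print(f"LE from the <w'q'> covariance: {LE:.0f} W/m^2")
```

The catch is getting those fast, well-calibrated measurements in the first place, at every location you care about.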

Here’s a graph of horizontal wind over a short time period from Stull (1988):

From Stull 1988

Figure 1

At any given location the wind varies on every timescale. Pick another location and the results are different. This is the problem of turbulence.

And to get accurate measurements for the paper we are looking at now, they had quite a setup:

Zhang 2014-instruments

Figure 2

Here’s the description of the instrumentation:

An eddy covariance system at a height of 4 m above the water surface consisted of a three-dimensional sonic anemometer (model CSAT3, Campbell Scientific, Inc.) and an open path CO2/H2O infrared gas analyzer (IRGA; Model LI-7500, LI-COR, Inc.).

A datalogger (model CR5000, Campbell Scientific, Inc.) recorded three-dimensional wind velocity components and sonic virtual temperature from the sonic anemometer and densities of carbon dioxide and water vapor from the IRGA at a frequency of 10 Hz.

Other microclimate variables were also measured, including Rn at 1.2 m (model Q-7.1, Radiation and Energy Balance Systems, Campbell Scientific, Inc.), air temperature (Ta) and relative humidity (RH) (model HMP45C, Vaisala, Inc.) at approximately 1.9, 3.0, 4.0, and 5.5 m, wind speeds (U) and wind direction (WD) (model 03001, RM Young, Inc.) at 5.5 m.

An infrared temperature sensor (model IRR-P, Apogee, Inc.) was deployed to measure water skin temperature (Tw).

Vapor pressure (ew) in the water-air interface was equivalent to saturation vapor pressure at Tw [Buck, 1981].

The same datalogger recorded signals from all the above microclimate sensors at 30-min intervals. Six deep cycling marine batteries charged by two solar panels (model SP65, 65 Watt Solar Panel, Campbell Scientific, Inc.) powered all instruments. A monthly visit to the tower was scheduled to provide maintenance and download the 10-Hz time-series data.

I don’t know the price tag but I don’t think the equipment is cheap. So this kind of setup can be used for research, but we can’t put one every 1 km across a country or an ocean and collect continuous data.

That’s why we need parameterizations if we want to get some climatological data. Of course, these need verifying, and that’s what this paper (and many others) are about.

Results

When we look back at the parameterized equation for latent heat, it’s clear that latent heat should vary linearly with the product of wind speed and humidity differential. The top graph is sensible heat, which we won’t focus on; the bottom graph is latent heat. Δe is the humidity differential, expressed as a vapor pressure difference rather than g/kg. We see that the correlation between LE and wind speed x humidity differential is very different in summer and winter:

From Zhang & Liu 2014

Figure 3

The scatterplots showing the same information:

From Zhang & Liu 2014

Figure 4

The authors looked at the diurnal cycle – averaging the result for the time of day over the period of the results, separated into winter and summer.

Our results also suggest that the influences of U on LE may not be captured simply by the product of U and Δe [humidity differential] on short timescales, especially in summer. This situation became more serious when the ASL (atmospheric surface layer, see note 1) became more unstable, as reflected by our summer cases (i.e., more unstable) versus the winter cases.

They selected one period to review in detail. First the winter results:

From Zhang & Liu 2014

Figure 5

On March 18, Δe was small (i.e., 0 ~ 0.2 kPa) and it experienced little diurnal variations, leading to limited water vapor supply (Fig. 5a).

The ASL (see note 1) during this period was slightly stable (Fig. 5b), which suppressed turbulent exchange of LE. As a result, LE approached zero and even became negative, though strong wind speeds of approximately around 10 m s⁻¹ were present, indicating a strong mechanical turbulent mixing in the ASL.

On March 19, with an increased Δe up to approximately 1.0 kPa, LE closely followed Δe and increased from zero to more than 200 W m⁻². Meanwhile, the ASL experienced a transition from stable to unstable conditions (Fig. 5b), coinciding with an increase in LE.

On March 20, however, the continuous increase of Δe did not lead to an increase in LE. Instead, LE decreased gradually from 200 W m⁻² to about zero, which was closely associated with the steady decrease in U from 10 m s⁻¹ to nearly zero and with the decreased instability.

These results suggest that LE was strongly limited by Δe, instead of U when Δe was low; and LE was jointly regulated by variations in Δe and U once a moderate Δe level was reached and maintained, indicating a nonlinear response of LE to U and Δe induced by ASL stability. The ASL stability largely contributed to variations in LE in winter.

Then the summer results:

From Zhang & Liu 2014

Figure 6

In summer (i.e., July 23 – 25 in Fig. 6), Δe was large with a magnitude of 1.5 ~ 3.0 kPa, providing adequate water vapor supply for evaporation, and had strong diurnal variations (Fig. 6a).

U exhibited diurnal variations from about 0 to 8 m s⁻¹. LE was regulated by both Δe and U, as reflected by the fact that LE variations on the July 24 afternoon did not follow solely either the variations of U or the variations of Δe. When the diurnal variations of Δe and U were small in July 25, LE was also regulated by both U and Δe or largely by U when the change in U was apparent.

Note that during this period, the ASL was strongly unstable in the morning and weakly unstable in the afternoon and evening (Fig. 6b), negatively corresponding to diurnal variations in LE. This result indicates that the ASL stability had minor impacts on diurnal variations in LE during this period.

Another way to see the data is by plotting the results to see how valid the parameterized equation appears. Here we should have a straight line between LE/U and Δe as the caption explains:

From Zhang & Liu 2014

Figure 7

One method to determine the bulk transfer coefficients is to use the mass transfer relations (Eqs. 1, 2) by quantifying the slopes of the linear regression of LE against UΔe. Our results suggest that using this approach to determine the bulk transfer coefficient may cause large bias, given the fact that one UΔe value may correspond to largely different LE values.

They conclude:

Our results suggest that these highly nonlinear responses of LE to environmental variables may not be represented in the bulk transfer relations in an appropriate manner, which requires further studies and discussion.

Conclusion

Parameterizations are inevitable. Understanding their limitations is very difficult. A series of studies might indicate that there is a “linear” relationship with some scatter, but that might just be disguising or ignoring a variable that never appears in the parameterization.

As Garrett commented “..having numerical values which may only be valid in present conditions”. That is, if the mean state of another climate variable shifts the parameterization will be invalid, or less accurate.

Alternatively, given the non-linear nature of climate processes, changes don’t “average out”. The mean state of another climate variable may not shift at all, but its variation with time or with another variable may change the real process in a way that produces an overall shift in climate.

There are other problems with calculating latent heat transfer – even accepting the parameterization as the best version of “the truth” – there are large observational gaps in the parameters we need to measure (wind speed and humidity above the ocean) even at the resolution of current climate models. This is one reason why there is a need for reanalysis products.

I found it interesting to see how complicated latent heat variations were over a water surface.

References

Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu, JGR (2014)

Turbulent dispersion in the ocean, Chris Garrett, Progress in Oceanography (2006)

Notes

Note 1:  The ASL (atmospheric surface layer) stability is described by the Obukhov stability parameter:

ζ = z/L_0

where z is the height above ground level and L_0 is the Obukhov length.

L_0 = −θ_v u*³ / [k g (w’θ_v’)_s]

where θ_v is the virtual potential temperature (K), u* is the friction velocity from the eddy covariance system (m s⁻¹), k is the von Kármán constant (0.4), g is the acceleration due to gravity (9.8 m s⁻²), w is the vertical velocity (m s⁻¹), and (w’θ_v’)_s is the surface flux of virtual potential temperature measured by the eddy covariance system
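A direct transcription of those two formulas, with invented example values for the inputs (the 4 m height matches the eddy covariance measurement height in the paper):

```python
def obukhov_length(theta_v, u_star, w_theta_v_flux, k=0.4, g=9.8):
    """L_0 = -theta_v * u*^3 / [k g (w'theta_v')_s]"""
    return -theta_v * u_star**3 / (k * g * w_theta_v_flux)

def stability_parameter(z, theta_v, u_star, w_theta_v_flux):
    """zeta = z / L_0: negative = unstable ASL, positive = stable, near zero = neutral."""
    return z / obukhov_length(theta_v, u_star, w_theta_v_flux)

# Example values (invented): an upward buoyancy flux gives an unstable surface layer,
# a downward buoyancy flux gives a stable one.
zeta_unstable = stability_parameter(z=4.0, theta_v=300.0, u_star=0.3, w_theta_v_flux=0.05)
zeta_stable = stability_parameter(z=4.0, theta_v=285.0, u_star=0.3, w_theta_v_flux=-0.01)
print(f"unstable case: zeta = {zeta_unstable:+.2f}")
print(f"stable case:   zeta = {zeta_stable:+.2f}")
```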


The atmosphere cools to space by radiation. Well, without getting into all the details, the surface also cools to space by radiation, but not much of the radiation emitted by the surface escapes directly to space (note 1). Most surface radiation is absorbed by the atmosphere. And of course the surface mostly cools by convection into the troposphere (lower atmosphere).

If there were no radiatively-active gases (aka “GHG”s) in the atmosphere then the atmosphere couldn’t cool to space at all.

Technically, the emissivity of the atmosphere would be zero. Emission is determined by the local temperature of the atmosphere and its emissivity. Wavelength by wavelength emissivity is equal to absorptivity, another technical term, which says what proportion of radiation is absorbed by the atmosphere. If the atmosphere can’t emit, it can’t absorb (note 2).

So as you increase the GHGs in the atmosphere you increase its ability to cool to space. A lot of people realize this at some point during their climate science journey and conclude that they have been duped by climate science all along! It’s irrefutable – more GHGs mean more cooling to space, and more GHGs mean less global warming!

Ok, it’s true. Now the game’s up, I’ll pack up Science of Doom into a crate and start writing about something else. Maybe cognitive dissonance..

Bye everyone!

Halfway through boxing everything up I realized there was a little complication to the simplicity of that paragraph. The atmosphere with more GHGs has a higher emissivity, but also a higher absorptivity.

Let’s draw a little diagram. Here are two “layers” (see note 3) of the atmosphere in two different cases: on the left 400 ppmv CO2, on the right 500 ppmv CO2 (with the relative humidity of water vapor set at 50% and the surface temperature at 288 K):

Cooling-to-space-2a

Figure 1

It’s clear that the two layers are both emitting more radiation with more CO2. More cooling to space.

For interest, the “total emissivity” of the top layer is 0.190 in the first case and 0.197 in the second case. The layer below has 0.389 and 0.395.

Let’s take a look at all of the numbers and see what is going on. This diagram is a little busier:

Cooling-to-space-3a

Figure 2

The key point is that the OLR (outgoing longwave radiation) is lower in the case with more CO2. Yet each layer is emitting more radiation. How can this be?

Take a look at the radiation entering the top layer on the left = 265.1, and add to that the emitted radiation = 23.0 – the total is 288.1. Now subtract the radiation leaving through the top boundary = 257.0 and we get the radiation absorbed in the layer. This is 31.1 W/m².

Compare that with the same calculation with more CO2 – the absorption is 32.2 W/m².

This is the case all the way up through the atmosphere – each layer emits more because its emissivity has increased, but it also absorbs more because its absorptivity has increased by the same amount.

So more cooling to space, but unfortunately more absorption of the radiation below – two competing terms.

So why don’t they cancel out?

Emission of radiation is a result of local temperature and emissivity.

Absorption of radiation is the result of the incident radiation and absorptivity. Incident upwards radiation started lower in the atmosphere where it is hotter. So absorption changes always outweigh emission changes (note 4).
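A toy grey-atmosphere sketch makes the competition explicit: two isothermal layers above a warmer surface, each with the same emissivity and absorptivity ε, tracking only the upward stream of radiation. The temperatures and emissivities are arbitrary illustrative values, not the ones behind the figures above:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def upwelling(T_surface, layer_temps, eps):
    """Pass the upward longwave flux through isothermal grey layers (bottom first).
    Each layer absorbs a fraction eps of the flux entering from below and adds its
    own emission eps * sigma * T^4. Returns (OLR, emissions, absorptions)."""
    flux = SIGMA * T_surface**4            # surface emits as a blackbody
    emitted, absorbed = [], []
    for T in layer_temps:
        emission = eps * SIGMA * T**4
        absorption = eps * flux
        flux = flux * (1.0 - eps) + emission   # what continues upward
        emitted.append(emission)
        absorbed.append(absorption)
    return flux, emitted, absorbed

T_surface, layer_temps = 288.0, [270.0, 240.0]   # layers are cooler than the surface

for eps in (0.3, 0.4):                           # more GHGs -> higher emissivity/absorptivity
    olr, emitted, absorbed = upwelling(T_surface, layer_temps, eps)
    print(f"eps = {eps:.1f}: layer emissions = {emitted[0]:.1f} / {emitted[1]:.1f} W/m^2, "
          f"layer absorptions = {absorbed[0]:.1f} / {absorbed[1]:.1f} W/m^2, OLR = {olr:.1f} W/m^2")
```

Going from ε = 0.3 to ε = 0.4, each layer emits more and absorbs more, but the absorption wins because the flux arriving from below was emitted at warmer levels, so the OLR falls.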

Conceptual Problems?

If it’s still not making sense, then think about what happens as you reduce the GHGs in the atmosphere. The atmosphere emits less but absorbs even less of the radiation from below. So the outgoing longwave radiation increases. More surface radiation is making it to the top of the atmosphere without being absorbed. So there is less cooling to space from the atmosphere, but more cooling to space from the surface and atmosphere combined.

If you add lagging to a pipe, the temperature of the pipe increases (assuming of course it is “internally” heated with hot water). And yet, the pipe cools to the surrounding room via the lagging! Does that mean more lagging, more cooling? No, it’s just the transfer mechanism for getting the heat out.

That was just an analogy. Analogies don’t prove anything. If well chosen, they can be useful in illustrating problems. End of analogy disclaimer.

If you want to understand more about how radiation travels through the atmosphere and how GHG changes affect this journey, take a look at the series Visualizing Atmospheric Radiation.

 

Notes

Note 1: For more on the details see

Note 2: A very basic point – absolutely essential for understanding anything at all about climate science – is that the absorptivity of the atmosphere can be (and is) totally different from its emissivity when you are considering different wavelengths. The atmosphere is quite transparent to solar radiation, but quite opaque to terrestrial radiation – because they are at different wavelengths. 99% of solar radiation is at wavelengths less than 4 μm, and 99% of terrestrial radiation is at wavelengths greater than 4 μm. That’s because the sun’s surface is around 6000 K while the earth’s surface is around 290 K. So the atmosphere has low absorptivity (and emissivity) at solar wavelengths (<4 μm) but high absorptivity (and emissivity) at terrestrial wavelengths (>4 μm).

Note 3: Any numerical calculation has to create some kind of grid. This is a very coarse grid, with 10 layers of roughly equal pressure thickness from the surface to 200 mbar. The grid assumes there is just one temperature for each layer. Of course the temperature decreases as you go up. We could divide the atmosphere into 30 layers instead; we would get more accurate results, and we would find the same effect.

Note 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this effect.

