
Archive for the ‘Atmospheric Physics’ Category

A couple of recent articles covered ground related to clouds, under Models, On – and Off – the Catwalk: Part Seven – Resolution & Convection and Part Five – More on Tuning & the Magic Behind the Scenes. In the first article Andrew Dessler, day job climate scientist, made a few comments and in one comment provided some great recent references. One of these was by Paulo Ceppi and colleagues, published this year and freely accessible. Another paper with some complementary explanations is from Mark Zelinka and colleagues, also published this year (but behind a paywall).

In this article we will take a look at the breakdown these papers provide. There is a lot to the Ceppi paper, so we won't review it all here – hopefully we'll get to the rest in a followup article.

Globally and annually averaged, clouds cool the planet by around 18W/m² – that’s large compared with the radiative effect of doubling CO2, a value of 3.7W/m². The net effect is made up of two larger opposite effects:

  • cooling from reflecting sunlight (albedo effect) of about 46W/m²
  • warming from the radiative effect of about 28W/m² – clouds absorb terrestrial radiation and reemit from near the top of the cloud where it is colder, this is like the “greenhouse” effect

In this graphic, Zelinka and colleagues show the geographical breakdown of cloud radiative effect averaged over 15 years from CERES measurements:

From Zelinka et al 2017

Figure 1 – Click to enlarge

Note that the cloud radiative effect shown above isn’t feedbacks from warming, it is simply the current effect of clouds. The big question is how this will change with warming.

In the next graphic, the inset at the top shows cloud feedback (note 1) vs ECS from 28 GCMs. ECS is the steady-state temperature increase resulting from doubling CO2. Two models are picked out – red and blue – and in the main graph we see simulated warming under RCP8.5 (an unlikely future world confusingly described by many as the “business as usual” scenario).

In the bottom graphic, cloud feedbacks from models are decomposed into the effect of low cloud amount, of changing high cloud altitude, and of low cloud opacity. We see that the amount of low cloud is the biggest feedback, with the widest spread, followed by the changing altitude of high clouds; both are positive feedbacks. The gray lines extending out cover the range of model responses.

From Zelinka et al 2017

Figure 2 – Click to enlarge

In the next figure – click to enlarge – they show the progression in each IPCC report, helpfully color coded around the breakdown above:

From Zelinka et al 2017

Figure 3 – Click to enlarge

On AR5:

Notably, the high cloud altitude feedback was deemed positive with high confidence due to supporting evidence from theory, observations, and high-resolution models. On the other hand, continuing low confidence was expressed in the sign of low cloud feedback because of a lack of strong observational constraints. However, the AR5 authors noted that high-resolution process models also tended to produce positive low cloud cover feedbacks. The cloud opacity feedback was deemed highly uncertain due to the poor representation of cloud phase and microphysics in models, limited observations with which to evaluate models, and lack of physical understanding. The authors noted that no robust mechanisms contribute a negative cloud feedback.

And on work since:

In the four years since AR5, evidence has increased that the overall cloud feedback is positive. This includes a number of high-resolution modelling studies of low cloud cover that have illuminated the competing processes that govern changes in low cloud coverage and thickness, and studies that constrain long-term cloud responses using observed short-term sensitivities of clouds to changes in their local environment. Both types of analyses point toward positive low cloud feedbacks. There is currently no evidence for strong negative cloud feedbacks..

Onto Ceppi et al 2017. In the graph below we see climate feedback from models broken out into a few parameters:

  • WV+LR – the combination of water vapor and lapse rate changes (lapse rate is the temperature profile with altitude)
  • Albedo – e.g. melting sea ice
  • Cloud total
  • LW cloud – this is longwave effects, i.e., how clouds change terrestrial radiation emitted to space
  • SW cloud – this is shortwave effects, i.e., how clouds reflect solar radiation back to space

From Ceppi et al 2017

Figure 4 – Click to enlarge

Then they break down the cloud feedback further. This graph is well worth understanding. For example, in the second graph (b) we are looking at higher altitude clouds. We see that the increasing altitude of high clouds causes a positive feedback. The red dots are LW (longwave = terrestrial radiation). If high clouds increase in altitude the radiation from these clouds to space is lower because the cloud tops are colder. This is a positive feedback (more warming retained in the climate system). The blue dots are SW (shortwave = solar radiation). If high clouds increase in altitude it has no effect on the reflection of solar radiation – and so the blue dots are on zero.
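As a quick sanity check on this altitude mechanism, here is a minimal sketch – the cloud-top temperatures are illustrative values I've chosen, and clouds are assumed to emit as blackbodies:

```python
# Sketch: why colder (higher) cloud tops emit less to space.
# Assumes clouds emit approximately as blackbodies; temperatures are illustrative.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def emission(t_kelvin):
    """Blackbody emission in W/m^2 at temperature t_kelvin."""
    return SIGMA * t_kelvin**4

t_low, t_high = 235.0, 220.0  # cloud-top temperatures before/after rising
print(f"Cloud top at {t_low:.0f}K emits {emission(t_low):.0f} W/m^2")   # ~173
print(f"Cloud top at {t_high:.0f}K emits {emission(t_high):.0f} W/m^2") # ~133
# The higher (colder) cloud emits less to space -> more energy retained,
# i.e., a positive LW feedback, while reflection of sunlight is unchanged.
```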

Looking at the low clouds – bottom graph (c) – we see that the feedback is almost all from increasing reflection of solar radiation from increasing amounts of low clouds.

From Ceppi et al 2017

Figure 5 

Now a couple more graphs from Ceppi et al – the spatial distribution of cloud feedback from models (note this is different from our figure 1 which showed current cloud radiative effect):

From Ceppi et al 2017

Figure 6

And the cloud feedback by latitude broken down into: altitude effects; amount of cloud; and optical depth (higher optical depth primarily increases the reflection to space of solar radiation but also has an effect on terrestrial radiation).

From Ceppi et al 2017

Figure 7

They state:

The patterns of cloud amount and optical depth changes suggest the existence of distinct physical processes in different latitude ranges and climate regimes, as discussed in the next section. The results in Figure 4 allow us to further refine the conclusions drawn from Figure 2. In the multi-model mean, the cloud feedback in current GCMs mainly results from:

  • globally rising free-tropospheric clouds
  • decreasing low cloud amount at low to middle latitudes, and
  • increasing low cloud optical depth at middle to high latitudes

Cloud feedback is the main contributor to intermodel spread in climate sensitivity, ranging from near zero to strongly positive (−0.13 to 1.24 W/m²K) in current climate models.

It is a combination of three effects present in nearly all GCMs: rising free-tropospheric clouds (a LW heating effect); decreasing low cloud amount in tropics to midlatitudes (a SW heating effect); and increasing low cloud optical depth at high latitudes (a SW cooling effect). Low cloud amount in tropical subsidence regions dominates the intermodel spread in cloud feedback.

Happy Christmas to all Science of Doom readers.

Note – if anyone wants to debate the existence of the “greenhouse” effect, please add your comments to Two Basic Foundations or The “Greenhouse” Effect Explained in Simple Terms or any of the other tens of articles on that subject. Comments here on the existence of the “greenhouse” effect will be deleted.

References

Cloud feedback mechanisms and their representation in global climate models, Paulo Ceppi, Florent Brient, Mark D Zelinka & Dennis Hartmann, WIREs Clim Change 2017 – free paper

Clearing clouds of uncertainty, Mark D Zelinka, David A Randall, Mark J Webb & Stephen A Klein, Nature 2017 – paywall paper

Notes

Note 1: From Ceppi et al 2017: CLOUD-RADIATIVE EFFECT AND CLOUD FEEDBACK:

The radiative impact of clouds is measured as the cloud-radiative effect (CRE), the difference between clear-sky and all-sky radiative flux at the top of atmosphere. Clouds reflect solar radiation (negative SW CRE, global-mean effect of −45W/m²) and reduce outgoing terrestrial radiation (positive LW CRE, 27W/m²), with an overall cooling effect estimated at −18W/m² (numbers from Henderson et al.).

CRE is proportional to cloud amount, but is also determined by cloud altitude and optical depth.

The magnitude of SW CRE increases with cloud optical depth, and to a much lesser extent with cloud altitude.

By contrast, the LW CRE depends primarily on cloud altitude, which determines the difference in emission temperature between clear and cloudy skies, but also increases with optical depth. As the cloud properties change with warming, so does their radiative effect. The resulting radiative flux response at the top of atmosphere, normalized by the global-mean surface temperature increase, is known as cloud feedback.

This is not strictly equal to the change in CRE with warming, because the CRE also responds to changes in clear-sky radiation—for example, due to changes in surface albedo or water vapor. The CRE response thus underestimates cloud feedback by about 0.3W/m² per K on average. Cloud feedback is therefore the component of CRE change that is due to changing cloud properties only. Various methods exist to diagnose cloud feedback from standard GCM output. The values presented in this paper are either based on CRE changes corrected for noncloud effects, or estimated directly from changes in cloud properties, for those GCMs providing appropriate cloud output. The most accurate procedure involves running the GCM radiation code offline—replacing instantaneous cloud fields from a control climatology with those from a perturbed climatology, while keeping other fields unchanged—to obtain the radiative perturbation due to changes in clouds. This method is computationally expensive and technically challenging, however.
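To make the CRE definition concrete, here is a minimal sketch – the flux values are illustrative global means I've chosen to roughly match the numbers quoted above, not data from the paper:

```python
# Sketch: cloud-radiative effect (CRE) = clear-sky minus all-sky outgoing flux
# at the top of atmosphere. All values are illustrative global means.
sw_out_clear, sw_out_all = 53.0, 98.0    # reflected solar radiation, W/m^2
lw_out_clear, lw_out_all = 267.0, 240.0  # outgoing terrestrial radiation, W/m^2

sw_cre = sw_out_clear - sw_out_all   # clouds reflect more sunlight -> negative
lw_cre = lw_out_clear - lw_out_all   # clouds reduce outgoing LW -> positive
net_cre = sw_cre + lw_cre

print(f"SW CRE  = {sw_cre:+.0f} W/m^2")   # ~ -45 (cooling)
print(f"LW CRE  = {lw_cre:+.0f} W/m^2")   # ~ +27 (warming)
print(f"Net CRE = {net_cre:+.0f} W/m^2")  # ~ -18: clouds cool the planet
```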


In Part Five – More on Tuning & the Magic Behind the Scenes and also in the earlier Part Four we looked at the challenge of selecting parameters in climate models. A recent 2017 paper on this topic by Frédéric Hourdin and colleagues is very illuminating. One of the co-authors is Thorsten Mauritsen, the principal author of the 2012 paper we reviewed in Part Four. Another co-author is Jean-Christophe Golaz, the principal author of the 2013 paper we reviewed in Part Five.

The topics are similar but there is some interesting additional detail and commentary. The paper is open and, as always, I recommend reading the whole paper.

One of the key points is that modeling groups need to be specific about their “target” – were they trying to get the model to match recent climatology? Top of atmosphere radiation balance? The last 100 years of temperature trends? If we know that a model was developed with an eye on a particular target, then matching that target doesn't demonstrate model skill.

Because of the uncertainties in observations and in the model formulation, the possible parameter choices are numerous and will differ from one modeling group to another. These choices should be more often considered in model intercomparison studies. The diversity of tuning choices reflects the state of our current climate understanding, observation, and modeling. It is vital that this diversity be maintained. It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.

They comment on the paper by Jeffrey Kiehl from 2007 (referenced in The Debate is Over – 99% of Scientists believe Gravity and the Heliocentric Solar System so therefore..) which showed how models with higher sensitivity to CO2 have higher counter-balancing negative forcing from aerosols.

And later in the paper:

The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development.

Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming.

The question of developing toward the twentieth- century warming therefore is an area of vigorous debate within the community..

..The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals.

It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.

And so, onto another recent paper by Sumant Nigam and colleagues. They examine the temperature trends by season over the last 100 years and compare them against models. They look only at the northern hemisphere over land, due to the better temperature dataset available there (compared with the southern hemisphere).

Here are the observations of the trends for each of the four seasons; I find it fascinating to see the difference between the seasonal trends:

From Nigam et al 2017

Figure 1 – Click to enlarge

Then they compare the observations to some of the models used in IPCC AR5 (from the model intercomparison project, CMIP5) – top line is observations, each line below is a different model. When we compare the geographical distribution of winter-summer trend (right column) we can see that the models don’t do very well:

From Nigam et al 2017

Figure 2 – Click to enlarge
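As an aside, here is a minimal sketch of how a seasonal trend map like the right-hand (winter-summer) column above can be computed – the gridded array is synthetic and the shapes are my assumptions, not the actual dataset:

```python
# Sketch: least-squares trend per season from gridded monthly temperatures.
# temps has shape (years, 12, nlat, nlon); the data here is synthetic noise.
import numpy as np

n_years, nlat, nlon = 100, 36, 72
rng = np.random.default_rng(0)
temps = rng.normal(size=(n_years, 12, nlat, nlon))  # stand-in for observations

def seasonal_trend(temps, months):
    """Trend (degrees per decade) of the mean over `months` at each grid cell."""
    seasonal = temps[:, months, :, :].mean(axis=1)  # (years, nlat, nlon)
    t = np.arange(temps.shape[0])
    flat = seasonal.reshape(temps.shape[0], -1)     # one column per grid cell
    slopes = np.polyfit(t, flat, 1)[0]              # degrees per year
    return (10 * slopes).reshape(seasonal.shape[1:])

# Same-calendar-year DJF for simplicity (a real analysis spans Dec -> Feb)
winter = seasonal_trend(temps, [11, 0, 1])
summer = seasonal_trend(temps, [5, 6, 7])
winter_minus_summer = winter - summer  # the kind of field in the right column
```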

From their conclusion:

The urgent need for shifting the evaluative and diagnostic focus away from the customary annual mean toward the seasonal cycle of secular warming is manifest in the inability of the leading climate models (whose simulations inform the IPCC’s Fifth Assessment Report) to generate realistic and robust (large signal-to-noise ratio) twentieth-century winter and summer SAT trends over the northern continents. The large intra-ensemble SD of century-long SAT trends in some IPCC AR5 models (e.g., GFDL-CM3) moreover raises interesting questions: If this subset of climate models is realistic, especially in generation of ultra-low-frequency variability, is the century-long (1902–2014) linear trend in observed SAT—a one-member ensemble of the climate record—a reliable indicator of the secular warming signal?

I’ve commented a number of times in various articles that people who don’t read climate science papers often have some idea that climate scientists are monolithically opposed to questioning model results or questioning “the orthodoxy”. This is contrary to what you find if you read lots of papers. It might be that the press releases that show up in The New York Times, CNN or the BBC (or pick another ideological bellwether) have some kind of monolithic sameness, but this just demonstrates that no one interested in finding out anything important (apart from the weather and celebrity news) should ever watch/read media outlets.

They continue:

The relative contribution of both mechanisms to the observed seasonality in century-long SAT trends needs further assessment because of uncertainties in the diagnosis of evapotranspiration and sea level pressure from the century-long observational records. Climate system models—ideal tools for investigation of mechanisms through controlled experimentation—are unfortunately not yet ready given their inability to simulate the seasonality of trends in historical simulations.

Subversive indeed.

Their investigation digs into evapotranspiration – the additional water vapor, available from plants, to be evaporated and therefore to remove heat from the surface during the summer months.

Conclusion

“All models are wrong but some are useful” – a statement attributed to a modeler from a different profession (statistical process control) and sometimes also quoted by climate modelers.

This is always a good way to think about models. Perhaps the inability of climate models to reproduce seasonal trends is inconsequential – or perhaps it is important. Models fail on many levels. The question is why, and the answers lead to better models.

Climate science is a real science, contrary to the claims of many people who don’t read many climate science papers, because many published papers ask important and difficult questions and critique the current state of the science. That is, falsifiability is being addressed. These questions might not become media headlines, or even make it into the Summary for Policymakers in IPCC reports, but papers asking these questions are not outliers.

I found both of these papers very interesting. Hourdin et al because they ask valuable questions about how models are tuned, and Nigam et al because they point out that climate models do a poor job of reproducing an important climate trend (seasonal temperature) which provides an extra level of testing for climate models.

References

Striking Seasonality in the Secular Warming of the Northern Continents: Structure and Mechanisms, Sumant Nigam et al, Journal of Climate (2017)

The Art and Science of Climate Model Tuning, Frédéric Hourdin et al, American Meteorological Society (2017) – free paper


In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI.
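For anyone wanting to reproduce this kind of calculation, here is a minimal sketch – the arrays are placeholders standing in for the real SST and PDI time series:

```python
# Sketch: relative SST = local (e.g., tropical Atlantic) SST minus the
# tropical-mean SST, then correlate each against PDI.
# The arrays below are placeholders, not the actual Vecchi et al data.
import numpy as np

atlantic_sst = np.array([27.1, 27.3, 27.0, 27.6, 27.9, 28.2, 28.0, 28.4])
tropical_sst = np.array([26.8, 26.9, 26.8, 27.1, 27.2, 27.4, 27.3, 27.5])
pdi          = np.array([0.8, 1.1, 0.7, 1.6, 2.0, 2.6, 2.1, 2.9])

relative_sst = atlantic_sst - tropical_sst

r_abs = np.corrcoef(atlantic_sst, pdi)[0, 1]
r_rel = np.corrcoef(relative_sst, pdi)[0, 1]
print(f"correlation(absolute SST, PDI) = {r_abs:.2f}")
print(f"correlation(relative SST, PDI) = {r_rel:.2f}")
# Both fit the historical record about equally well; they only diverge
# when applied to future projections.
```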

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity depends on local SST we expect more cyclones, or more powerful cyclones. If cyclone intensity depends on relative SST we expect no increase in cyclones – because, as noted, climate models predict warmer future SSTs but not a tropical Atlantic warming faster than the tropics as a whole. The paper also shows a few high resolution models – green symbols – sitting close to the zero change line.

Now predicting tropical cyclones with GCMs faces a fundamental issue – the grid scale of a modern high resolution GCM is around 100km, but cyclone simulation requires a higher resolution because of the storms’ relatively small size.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (of course, the idea is not at all new). They input actual meteorological data (i.e. real history from NCEP reanalysis) into a high resolution model which covered just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities to the model – the output is constantly “nudged” back towards the actual climatology, and we can’t expect good simulation results near the model boundaries. The model resolution is 18km.

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the recent (at that time) upswing in hurricane activity “remains unresolved”. (Of course, fast forward to 2016, prior to the two recent large landfall hurricanes, and overall activity was at a low not seen since 1970. In early 2018, this may be revised again..)

Two interesting graphs emerge. First an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing this is a huge blow to the idea that climate scientists only ever introduce more certainty and alarm on the harm from future CO2 emissions. They don’t. However, it is also true that some climate scientists believe that recent events have been accentuated due to the last century of fossil fuel burning and these perspectives might be reported in the media. I try to ignore the media and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model starting a few days before each storm to predict the outcome. If you understand the idea behind Knutson 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high resolution model and see how well the high res model predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

First, in graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that observations and the GFDL model are pretty close in the maximum wind speed distribution. Second, the climate change results in E show an overall reduction in the frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – a common theme in papers from this genre. Third, in graph F, the results (from the weather prediction model) fed by different GCMs for future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

In this graph (S3 from the Supplementary data) we see graphs of the difference between future projected climatologies and current climatologies for three relevant parameters for each of the four different models shown in graph F in the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone data in some aspects, but less so in other aspects. As we increase the model resolution we can get even better results – but this is dependent on inputting the correct climate data. As we look towards 2100 the questions are – How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society, (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.


In XII – Rainfall 2 we saw the results of many models on rainfall as GHGs increase. They project wetter tropics, drier subtropics and wetter higher latitude regions. We also saw an expectation that rainfall will increase globally, with something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall under temperature changes from GHG increases. The dashed line marked “C-C” is the famous (in climate physics) Clausius–Clapeyron relation which, at current temperatures, shows a 7% change in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations showing about half of that.

From Allen & Ingram (2002)

Figure 1

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust ‘emergent constraint’ on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (like a decade), and for the whole globe. If it hasn’t heated up or cooled down we know that the energy in must equal energy out (or if it has only done so only marginally then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature, and ΔRT, which depends only on the temperature

L = latent heat of vaporization of water (a constant), ΔP = change in rainfall (= change in evaporation, as evaporation is balanced by rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So doubling CO2, ΔRc = -2 to -3W/m²; prior to a temperature change ΔRT = 0; and so ΔP reduces.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and the atmosphere then emits more radiation. This is why the model results of rainfall in our figure 2 above show no trend in rainfall over 50 years, and also match the observations – the constraint on rainfall is the changing radiative balance in the troposphere.
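Putting the numbers from the article into the conceptual equation reproduces the often-quoted 2-3% per K. A minimal sketch, assuming ΔRc ≈ −2.5W/m² for doubled CO2, k = 3W/(m²K) and LΔP ≈ 1W/m² per 1% of global precipitation, all taken from the text above:

```python
# Sketch: Allen & Ingram's conceptual budget  dRc + dRT = L*dP.
# Values from the article: k = 3 W/m^2 per K, dRc ~ -2.5 W/m^2 for 2xCO2,
# and L*dP of ~1 W/m^2 corresponds to a 1% change in global precipitation.
k = 3.0              # W/m^2 per K of tropospheric warming
dRc = -2.5           # W/m^2, temperature-independent term for doubled CO2
w_per_percent = 1.0  # W/m^2 per 1% precipitation change

for dT in [0.0, 1.0, 3.0]:  # surface warming in K (0 = thought experiment)
    L_dP = k * dT + dRc                # W/m^2 available to balance latent heating
    dP_percent = L_dP / w_per_percent  # % change in global precipitation
    print(f"dT = {dT:.0f}K -> precipitation change ~ {dP_percent:+.1f}%")

# dT = 0K: precipitation falls (~-2.5%) before any warming occurs;
# dT = 3K: ~+6.5% overall, i.e., roughly 2%/K - matching the 2-3%/K from
# models, and well below the ~7%/K Clausius-Clapeyron rise in water vapor.
```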

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

Note 1: Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can’t hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases, the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
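The approximate values in this note can be reproduced with a standard Magnus-type formula for saturation vapor pressure – a minimal sketch (the coefficients are one common choice; other references differ slightly):

```python
# Sketch: saturation water content of air vs temperature (Magnus formula).
import math

def saturation_mixing_ratio(t_celsius, pressure_hpa=1013.25):
    """Approximate saturation mixing ratio in g of water vapor per kg of air."""
    e_s = 6.112 * math.exp(17.62 * t_celsius / (t_celsius + 243.12))  # hPa
    return 1000 * 0.622 * e_s / (pressure_hpa - e_s)

for t in [0, 10, 20]:
    print(f"{t:2d} degC: ~{saturation_mixing_ratio(t):.1f} g/kg")
# ~3.8, ~7.6, ~14.7 g/kg - close to the 4g, 8g, 15g quoted above,
# roughly doubling per 10 degC, i.e., about 7% more water vapor per degC.
```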


I probably should have started a separate series on rainfall and then woven the results back into the Impacts series. It might take a few articles working through the underlying physics and how models and observations of current and past climate compare before being able to consider impacts.

There are a number of different ways to look at rainfall models and reality:

  • What underlying physics provides definite constraints regardless of individual models, groups of models or parameterizations?
  • How well do models represent the geographical distribution of rain over a climatological period like 30 years? (e.g. figure 8 in Impacts XI – Rainfall 1)
  • How well do models represent the time series changes of rainfall?
  • How well do models represent the land vs ocean? (when we think about impacts, rainfall over land is what we care about)
  • How well do models represent the distribution of rainfall and the changing distribution of rainfall, from lightest to heaviest?

In this article I thought I would highlight a set of conclusions from one paper among many. It’s a good starting point. The paper is A canonical response of precipitation characteristics to global warming from CMIP5 models by Lau and his colleagues; it is freely available and, as always, I recommend reading the whole paper, along with the supporting information that is also available via the link.

As an introduction, the underlying physics perhaps provides some constraints. This is strongly believed in the modeling community. The constraint is a simple one – if we warm the ocean by 1K (= 1ºC) then the amount of water vapor above the ocean surface increases by about 7%. So we expect a warmer world to have more water vapor – at least in the boundary layer (typically 1km) and over the ocean. If we have more water vapor then we expect more rainfall. But GCMs and also simple models suggest a lower value, like 2-3% per K, not 7%/K. We will come back to why in another article.

It also seems from models that with global warming, rainfall increases more in regions and times of already high rainfall and reduces in regions and times of low rainfall – “the wet get wetter and the dry get drier” (also a marketing mantra: introducing a catchy slogan ensures better progress for an idea). So we also expect changes in the distribution of rainfall. One reason for this is a change in the tropical circulation. All to be covered later, so onto the paper..

We analyze the outputs of 14 CMIP5 models based on a 140 year experiment with a prescribed 1% per year increase in CO2 emission. This rate of CO2 increase is comparable to that prescribed for the RCP8.5, a relatively conservative business-as-usual scenario, except the latter includes also changes in other GHG and aerosols, besides CO2.

A 27-year period at the beginning of the integration is used as the control to compute rainfall and temperature statistics, and to compare with climatology (1979–2005) of rainfall data from the Global Precipitation Climatology Project (GPCP). Two similar 27-year periods in the experiment that correspond approximately to a doubling of CO2 emissions (DCO2) and a tripling of CO2 emissions (TCO2) compared to the control are chosen respectively to compute the same statistics..

Just a note that I disagree with the claim that RCP8.5 is a “relatively conservative business as usual scenario” (see Impacts – II – GHG Emissions Projections: SRES and RCP), but that’s just an opinion, as are all views about where the world will be in population, GDP and cumulative emissions 100-150 years from now. It doesn’t detract from the rainfall analysis in the paper.

For people wondering “what is CMIP5?” – this is the model inter-comparison project for the most recent IPCC report (AR5) where many models have to address the same questions so they can be compared.

Here we see (and along with other graphs you can click to enlarge) what the models show in temperature (top left), mean global rainfall (top right), zonal rainfall anomaly by latitude (bottom left) and the control vs the tripled CO2 comparison (bottom right). The many different colors in the first three graphs are each model, while the black line is the mean of the models (“ensemble mean”). The bottom right graph helps put the changes shown in the bottom left into perspective – with the difference between the red and the blue being the difference between tripling CO2 and today:

From Lau et al 2013

Figure 1 – Click to enlarge

In the figure above, the bottom left graph shows anomalies. We see one of the characteristics of models as a result of more GHGs – wetter tropics and drier sub-tropics, along with wetter conditions at higher latitudes.

From the supplementary material, below we see a better regional breakdown of fig 1d (bottom right in the figure above). I’ll highlight the bottom left graph (c) for the African region. Over the continent, the differences between present day and tripling CO2 seem minor as far as model predictions go for mean rainfall:

From Lau et al 2013

Figure 2 – Click to enlarge

The supplementary material also has a comparison between models and observations. The first graph below is what we are looking at (the second graph we will consider afterwards). TRMM (Tropical Rainfall Measuring Mission) is satellite data and GPCP is the rainfall climatology that we met in the last article – so they are both observational datasets. We see that the models over-estimate tropical rainfall, especially south of the equator:

From Lau et al 2013

Figure 3 – Click to enlarge

Rainfall Distribution from Light through to Heavy Rain

Lau and his colleagues then look at rainfall distribution in terms of light rainfall through to heavier rainfall. So, take global rainfall and divide it into frequency of occurrence, with light rainfall to the left and heavy rainfall to the right. Take a look back at the bottom graph in the figure above (figure 3, their figure S1). Note that the horizontal axis is logarithmic, with a ratio of over 1000 from left to right.

It isn’t an immediately intuitive graph. Basically there are two sets of graphs. The left “cluster” is how often that rainfall amount occurred, and the black line is GPCP observations. The “right cluster” is how much rainfall fell (as a percentage of total rainfall) for that rainfall amount and again black is observations.

So light rainfall, 1mm/day and below, accounts for about 50% of the time but, being light, accounts for less than 10% of total rainfall.

To facilitate discussion regarding rainfall characteristics in this work, we define, based on the ensemble model PDF, three major rain types: light rain (LR), moderate rain (MR), and heavy rain (HR) respectively as those with monthly mean rain rate below the 20th percentile (<0.3 mm/day), between the 40th–70th percentile (0.9–2.4mm/day), and above the 98.5th percentile (>9mm/day). An extremely heavy rain (EHR) type defined at the 99.9th percentile (>24 mm/day) will also be referred to, as appropriate.
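Here is a minimal sketch of this percentile-based classification – the synthetic lognormal sample just stands in for the model ensemble PDF:

```python
# Sketch: classifying monthly-mean rain rates by percentile, as in Lau et al.
# The synthetic lognormal sample is only a stand-in for the model ensemble PDF.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # mm/day, synthetic

light_max   = np.percentile(rain, 20)    # paper: ~0.3 mm/day
moderate_lo = np.percentile(rain, 40)    # paper: ~0.9 mm/day
moderate_hi = np.percentile(rain, 70)    # paper: ~2.4 mm/day
heavy_min   = np.percentile(rain, 98.5)  # paper: ~9 mm/day

heavy = rain[rain > heavy_min]
print(f"heavy rain occurs {100 * heavy.size / rain.size:.1f}% of the time")
print(f"but contributes {100 * heavy.sum() / rain.sum():.1f}% of total rainfall")
# Frequency of occurrence and contribution to total rainfall are very
# different views of the same distribution - exactly the two "clusters"
# in the PDF figure above.
```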

Here is a geographical breakdown of the total and then the rainfall in these three categories, model mean on the left and observations on the right:

From Lau et al 2013

Figure 4 – Click to enlarge

We can see that the models tend to overestimate the heavy rain and underestimate the light rain. These graphics are excellent because they help us to see the geographical distribution.

Now in the graphs below we see at the top the changes in frequency of mean precipitation (60S-60N) as a function of rain rate; and at the bottom we see the % change in rainfall per K of temperature change, again as a function of rain rate. Note that the bottom graph also has a logarithmic scale for the % change, so as you move up each grid square the value is doubled.

The different models are also helpfully indicated so the spread can be seen:

From Lau et al 2013

Figure 5 – Click to enlarge

Notice that the models are all predicting quite a high % change in rainfall per K for the heaviest rain – something around 50%. In contrast the light rainfall is expected to be up a few % per K and the medium rainfall is expected to be down a few % per K.

Globally, rainfall increases by 4.5%, with a sensitivity (dP/P/dT) of 1.4% per K

Here is a table from their supplementary material with a zonal breakdown of changes in mean rainfall (so not divided into heavy, light etc). For the non-maths people: the first row, dP/P, is just the % change in precipitation (“d” in front of a variable means “change in that variable”), the second row is the change in temperature, and the third row is the % change in rainfall per K (or ºC) of warming from GHGs:

From Lau et al 2013

Figure 6 – Click to enlarge

Here are the projected geographical distributions of the changes in mean (top left), heavy (top right), medium (bottom left) and light rain (bottom right) – using their earlier definitions – under tripling CO2:

From Lau et al 2013

Figure 7 – Click to enlarge

And as a result of these projections, the authors also show the number of dry months and the projected changes in number of dry months:

From Lau et al 2013

Figure 8 – Click to enlarge

The authors conclude:

The IPCC CMIP5 models project a robust, canonical global response of rainfall characteristics to CO2 warming, featuring an increase in heavy rain, a reduction in moderate rain, and an increase in light rain occurrence and amount globally.

For a scenario of 1% CO2 increase per year, the model ensemble mean projects at the time of approximately tripling of the CO2 emissions, the probability of occurring of extremely heavy rain (monthly mean >24mm/day) will increase globally by 100%–250%, moderate rain will decrease by 5%–10% and light rain will increase by 10%–15%.

The increase in heavy rain is most pronounced in the equatorial central Pacific and the Asian monsoon regions. Moderate rain is reduced over extensive oceanic regions in the subtropics and extratropics, but increased over the extratropical land regions of North America, and Eurasia, and extratropical Southern Oceans. Light rain is mostly found to be inversely related to moderate rain locally, and with heavy rain in the central Pacific.

The model ensemble also projects a significant global increase up to 16% more frequent in the occurrences of dry months (drought conditions), mostly over the subtropics as well as marginal convective zone in equatorial land regions, reflecting an expansion of the desert and arid zones..

 

..Hence, the canonical global rainfall response to CO2 warming captured in the CMIP5 model projection suggests a global scale readjustment involving changes in circulation and rainfall characteristics, including possible teleconnection of extremely heavy rain and droughts separated by far distances. This adjustment is strongly constrained geographically by climatological rainfall pattern, and most likely by the GHG warming induced sea surface temperature anomalies with unstable moister and warmer regions in the deep tropics getting more heavy rain, at the expense of nearby marginal convective zones in the tropics and stable dry zones in the subtropics.

Our results are generally consistent with so-called “the rich-getting-richer, poor-getting-poorer” paradigm for precipitation response under global warming..

Conclusion

This article has basically presented the results of one paper, which demonstrates consistency in model response of rainfall to doubling and tripling of CO2 in the atmosphere. In subsequent articles we will look at the underlying physics constraints, at time-series over recent decades and try to make some kind of assessment.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

References

A canonical response of precipitation characteristics to global warming from CMIP5 models, William K.-M. Lau, H.-T. Wu, & K.-M. Kim, GRL (2013) – free paper

Further Reading

Here are a bunch of papers that I found useful for readers who want to dig into the subject. Most of them are available for free via Google Scholar, but one of the most helpful to me (first in the list) was Allen & Ingram 2002 and the only way I could access it was to pay $4 to rent it for a couple of days.

Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419:224–232

Allan RP (2006) Variability in clear-sky longwave radiative cooling of the atmosphere. J Geophys Res 111:D22, 105

Allan, R. P., B. J. Soden, V. O. John, W. Ingram, and P. Good (2010), Current changes in tropical precipitation, Environ. Res. Lett., doi:10.1088/1748-9326/5/2/025205

Physically Consistent Responses of the Global Atmospheric Hydrological Cycle in Models and Observations, Richard P. Allan et al, Surv Geophys (2014)

Held IM, Soden BJ (2006) Robust responses of the hydrological cycle to global warming. J Clim 19:5686–5699

Changes in temperature and precipitation extremes in the CMIP5 ensemble, VV Kharin et al, Climatic Change (2013)

Energetic Constraints on Precipitation Under Climate Change, Paul A. O’Gorman et al, Surv Geophys (2012) 33:585–608

Trenberth, K. E. (2011), Changes in precipitation with climate change, Clim. Res., 47, 123–138, doi:10.3354/cr00953

Zahn M, Allan RP (2011) Changes in water vapor transports of the ascending branch of the tropical circulation. J Geophys Res 116:D18111


If we want to assess forecasts of floods, droughts and crop yields then we will need to know rainfall. We will also need to know temperature of course.

The forte of climate models is temperature. Rainfall is more problematic.

Before we get to model predictions about the future we need to review observations and the ability of models to reproduce them. Observations are also problematic – rainfall varies locally and over short durations. And historically we lacked effective observation systems in many locations and regions of the world, so data has to be pieced together and estimated from reanalysis.

Smith and his colleagues created a new rainfall dataset. Here is a comment from their 2012 paper:

Although many land regions have long precipitation records from gauges, there are spatial gaps in the sampling for undeveloped regions, areas with low populations, and over oceans. Since 1979 satellite data have been used to fill in those sampling gaps. Over longer periods gaps can only be filled using reconstructions or reanalyses..

Here are two views of the global precipitation data from a dataset which starts with the satellite era, that is, 1979 onwards – GPCP (Global Precipitation Climatology Project):

From Adler et al 2003

Figure 1

From Adler et al 2003

Figure 2

For historical data before satellites we only have rain gauge data. The GPCC dataset, explained in Becker et al 2013, shows the number of stations over time by region:

From Becker et al 2013

Figure 3- Click to expand

And the geographical distribution of rain gauge stations at different times:

From Becker et al 2013

Figure 4 – Click to expand

The IPCC compared the global trends over land from four different datasets over the last century and the last half-century:

From IPCC AR5 Ch. 2

Figure 5 – Click to expand

And the regional trends:

From IPCC AR5 Ch. 2

Figure 6 – Click to expand

Here are the graphs for the annual change in rainfall; note the different scales for each region (as we would expect, given the difference in average rainfall between regions):

From IPCC AR5 ch 2

Figure 7

We see that the decadal or half-decadal variation is much greater than any apparent long term trend. The trend data (as reviewed by the IPCC in figs 5 & 6) shows significant differences in the datasets but when we compare the time series it appears that the datasets match up better than indicated by the trend comparisons.

The data with the best historical coverage is 30ºN – 60ºN and the trend values for 1951-2000 (from different reconstructions) range from an annual increase of 1 to 1.5 mm/yr per decade (fig 6 / table 2.10 of IPCC report). This is against an absolute value of about 1000 mm/yr in this region (reading off the climatology in figure 2).

This is just me trying to put the trend data in perspective.

Models

Here is the IPCC AR5 chapter 9 comparison of models against satellite-era rainfall observations. Top left is observations (basically the same dataset as figure 1 in this article, over a slightly longer period and with different colors) and bottom right is the percentage error of the model average with respect to observations:

From IPCC AR5 ch 9

Figure 8 – Click to expand

We can see that the average of all models has substantial errors on mean rainfall.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

References

IPCC AR5 Chapter 2

Improved Reconstruction of Global Precipitation since 1900, Smith, Arkin, Ren & Shen, Journal of Atmospheric and Oceanic Technology (2012)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, Journal of Hydrometeorology (2003)

A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901–present, A Becker, Earth Syst. Sci. Data (2013)


In Part Nine – Data I – Ts vs OLR we looked at the monthly surface temperature (“skin temperature”) from NCAR vs OLR measured by CERES. The slope of the data was about 2 W/m² per 1K surface temperature change. Commentators pointed out that this was really the seasonal relationship – it probably didn’t indicate anything further.

In Part Ten we looked at anomaly data: first where monthly means were removed; and then where daily means were removed. Mostly the data appeared to be a big noisy scatter plot with no slope. The main reason that I could see for this lack of relationship was that anomaly data didn’t “keep going” in one direction for more than a few days. So it’s perhaps unreasonable to expect that we would find any relationship, given that most circulation changes take time.

We haven’t yet looked at regional versions of Ts vs OLR; the main reason is that I can’t yet see what we could usefully plot. A large amount of heat is exported from the tropics to the poles, and so without being able to itemize the amount of heat lost from a tropical region or the amount of heat gained by a mid-latitude or polar region, what could we deduce? One solution is to look at the whole globe in totality – which is what we have done.

In this article we’ll look at the mean global annual data. We only have CERES data for complete years from 2001 to 2013 (data wasn’t available to the end of 2014 when I downloaded it).

Here are the time-series plots for surface temperature and OLR:

Global annual Ts vs year & OLR vs year, 2001-2013

Figure 1

Here is the scatter plot of the above data, along with the best-fit linear interpolation:

Global annual Ts vs OLR 2001-2013

Figure 2

The calculated slope is similar to the results we obtained from the monthly data (which probably showed the seasonal relationship). This is definitely the year-to-year data, and it gives us a slope that indicates positive feedback. The correlation is not strong, as indicated by the R² value of 0.37, but it exists.

As explained in previous posts, a change of 3.6 W/m² per 1K is a “no feedback” relationship, where a uniform 1K change in surface & atmospheric temperature causes an OLR increase of 3.6 W/m² due to increased surface and atmospheric radiation – a greater increase in OLR would be negative feedback and a smaller increase would be positive feedback (e.g. see Part Eight with the plot of OLR changes vs latitude and height, which integrated globally gives 3.6 W/m²).
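For readers who want to reproduce the fit, here is a minimal sketch of the regression – the annual values are placeholders, not the actual NCAR/CERES data:

```python
# Sketch: regressing annual-mean OLR against annual-mean surface temperature.
# ts and olr below are placeholders for the 13 annual means (2001-2013).
import numpy as np

ts = np.array([287.6, 287.8, 287.9, 287.7, 288.0, 287.8, 287.9,
               287.7, 287.9, 288.1, 287.8, 288.0, 288.1])   # K, illustrative
olr = np.array([238.6, 239.0, 239.3, 238.9, 239.5, 239.0, 239.3,
                238.8, 239.2, 239.7, 239.0, 239.4, 239.6])  # W/m^2, illustrative

slope, intercept = np.polyfit(ts, olr, 1)
r = np.corrcoef(ts, olr)[0, 1]
print(f"slope = {slope:.2f} W/m^2 per K, R^2 = {r**2:.2f}")

# Interpretation used in the article: a slope of ~3.6 W/m^2 per K would mean
# no feedback; a smaller slope -> positive feedback; larger -> negative.
```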

The “no feedback” calculation is perhaps a bit more complicated than it appears, and I want to dig into it at some stage.

I haven’t looked at whether the result is sensitive to the date of the start of year. Next, I want to look at the changes in humidity, especially upper tropospheric water vapor, which is a key area for radiative changes. This will be a bit of work, because AIRS data comes in big files (there is a lot of data).

Read Full Post »

[I was going to post this new article not long after the last article in the series, but felt I was missing something important and needed to think about it. Instead I’ve not had any new insights and am posting for comment.]

In Part Nine – Data I, we looked at the relationship between Ts (surface temperature) and OLR (outgoing longwave radiation), for reasons all explained there.

The relationship shown there appears to be primarily the seasonal relationship, which looks like a positive feedback due to the 2 W/m² per 1K temperature increase. What about the feedback on a timescale different from the seasonal relationship?

From the 2001-2013 data, here is the monthly mean and the daily mean for both Ts and OLR:

Monthly mean & daily mean for Ts & OLR

Figure 1

If we remove the monthly mean from the data, here are those same relationships (shown in the last article as anomalies from the overall 2001-2013 mean):

OLR vs Ts – NCAR & CERES – monthly means removed

Figure 2 – Click to Expand

On a lag of 1 day there is a possible relationship with a low correlation – and the rest of the lags show no relationship at all.

Of course, we have created a problem with this new dataset – as the lag increases we are “jumping boundaries”. For example, on the 7-day lag all of the Ts data in the last week of April is being compared with the OLR data in the first week of May. With slowly rising temperatures, the last week of April will be “positive temperature data”, but the first week of May will be “negative OLR data”. So we expect 1/4 of our data to show the opposite relationship.

So we can show the data with the “monthly boundary jumps removed” – which means we can only show lags of say 1-14 days (with 3% – 50% of the data cut out); and we can also show the data as anomalies from the daily mean. Both have the potential to demonstrate the feedback on shorter timescales than the seasonal cycle.
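As a sketch of how these variants can be constructed – assuming the daily global-mean Ts and OLR series are already in hand as pandas Series with a daily DatetimeIndex; the function name and approach are mine, not necessarily how the original analysis was coded:

```python
import pandas as pd

def lagged_pairs_no_boundary(ts, olr, lag_days):
    """Remove each calendar month's mean from the daily series ts and olr,
    then pair Ts(t) with OLR(t + lag), dropping any pair that crosses a
    month boundary."""
    months = pd.Series(ts.index.to_period("M"), index=ts.index)
    ts_anom = ts - ts.groupby(months).transform("mean")
    olr_anom = olr - olr.groupby(months).transform("mean")
    olr_lagged = olr_anom.shift(-lag_days)           # OLR lag_days later
    same_month = months.eq(months.shift(-lag_days))  # no boundary jump
    keep = same_month & olr_lagged.notna()
    return ts_anom[keep], olr_lagged[keep]
```

For the daily-means-removed version, the grouping key would be the day of year (ts.index.dayofyear) rather than the month.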

First, here is the data with daily means removed:

OLR vs Ts – NCAR & CERES – daily means removed

Figure 3 – Click to Expand

Second, here is the data with the monthly means removed as in figure 2, but this time ensuring that no monthly boundaries are crossed (so some of the data is removed to ensure this):

OLR vs Ts – NCAR & CERES – monthly means removed, no boundary crossing

Figure 4 – Click to Expand

So basically this demonstrates no correlation between change in daily global OLR and change in daily global temperature on less than seasonal timescales. (Or “operator error” with the creation of my anomaly data). This is excluding (because we haven’t tested it here) the very short timescale of day to night change.

This was surprising at first sight.

That is, we see global Ts increasing on a given day but we can’t distinguish any corresponding change in global OLR from random changes, at least until we get to seasonal time periods? (See graph in last article).

Then a likely reason came into view. Remember that this is anomaly data (daily global temperature with the monthly mean subtracted). This bar graph demonstrates that, when we are looking at anomaly data, most of the changes in global Ts are reversed the next day, or usually within a few days:

Days temperature goes in same direction

Figure 5

This means that we are unlikely to see changes in Ts causing noticeable changes in OLR unless the climate response we are looking for (humidity and cloud changes) occurs within a day or two.

That’s my preliminary thinking, looking at the data – i.e., we can’t expect to see much of a relationship, and we don’t see any relationship.

One further point – explained in much more detail in the (short) series Measuring Climate Sensitivity – is that of course changes in temperature are not caused by some mechanism that is independent of radiative forcing.

That is, our measurement problem is compounded by changes in temperature being first caused by fluctuations in radiative forcing (the radiation balance) and ocean heat changes and then we are measuring the “resulting” change in the radiation balance resulting from this temperature change:

Radiation balance & ocean heat balance => Temperature change => Radiation balance & ocean heat balance

So we can’t easily distinguish the net radiation change caused by temperature changes from the radiative contribution to the original temperature changes.

I look forward to readers’ comments.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

Read Full Post »

In the last article we looked at a paper which tried to unravel – for clear sky only – how the OLR (outgoing longwave radiation) changed with surface temperature. It did the comparison by region, by season and from year to year.

The key point for new readers to understand – why are we interested in how OLR changes with surface temperature? The concept is not so difficult. The practical analysis presents more problems.

Let’s review the concept – and for more background please read at least the start of the last article: if we increase the surface temperature, perhaps due to increases in GHGs (but it could be for any reason), what happens to outgoing longwave radiation? Obviously, we expect OLR to increase. The real question is: by how much?

If there is no feedback then OLR should increase by about 3.6 W/m² for every 1K in surface temperature (these values are global averages):

  • If there is positive feedback, perhaps due to more humidity, then we expect OLR to increase by less than 3.6 W/m² – think “not enough heat got out to get things back to normal”
  • If there is negative feedback, then we expect OLR to increase by more than 3.6 W/m²

In the paper we reviewed in the last article the authors found about 2 W/m² per 1K increase – a positive feedback – but they were only considering clear sky areas.

One reader asked about an outlier point on the regression slope and whether it affected the result. This motivated me to do something that has been on my list for a while now – get “all of the data” and analyse it. This way, we can review it and answer questions ourselves – like in the Visualizing Atmospheric Radiation series, where we created an atmospheric radiation model (first-principles physics) and used the detailed line-by-line absorption data from the HITRAN database to calculate how this change and that change affected the surface downward radiation (“back radiation”) and the top-of-atmosphere OLR.

With the raw surface temperature, OLR and humidity data “in hand” we can ask whatever questions we like and answer these questions ourselves..

NCAR reanalysis, CERES and AIRS

CERES and AIRS – satellite instruments – are explained in CERES, AIRS, Outgoing Longwave Radiation & El Nino.

CERES measures total OLR in a 1ºx 1º grid on a daily basis.

AIRS has a “hyper-spectral” instrument, which means it looks at lots of frequency channels. The intensity of radiation at these many wavelengths can be converted, via calculation, into measurements of atmospheric temperature at different heights, water vapor concentration at different heights, CO2 concentration, and concentration of various other GHGs. Additionally, AIRS calculates total OLR (it doesn’t measure it – i.e. it doesn’t have a measurement device from 4μm – 100μm). It also measures parameters like “skin temperature” in some locations and calculates the same in other locations.

For the purposes of this article, I haven’t yet dug into the “how” and the reliability of surface AIRS measurements. The main point to note about satellites is they sit at the “top of atmosphere” and their ability to measure stuff near the surface depends on clever ideas and is often subverted by factors including clouds and surface emissivity. (AIRS has microwave instruments specifically to independently measure surface temperature even in cloudy conditions, because of this problem).

NCAR is a “reanalysis product”. It is not measurement, but it is “informed by measurement”. It is part measurement, part model. Where there is reliable data measurement over a good portion of the globe the reanalysis is usually pretty reliable – only being suspect at the times when new measurement systems come on line (so trends/comparisons over long time periods are problematic). Where there is little reliable measurement the reanalysis depends on the model (using other parameters to allow calculation of the missing parameters).

Some more explanation in Water Vapor Trends under the sub-heading Reanalysis – or Filling in the Blanks.

For surface temperature measurements reanalysis is not subverted by models too much. However, the mainstream surface temperature series are surely better than NCAR – I know that there is an army of “climate interested people” who follow this subject very closely. (I am not in that group).

I used NCAR because it is simple to download and extract. And I expect – but haven’t yet verified – that it will be quite close to the various mainstream surface temperature series. If someone is interested and can provide daily global temperature from another surface temperature series as an Excel, csv, .nc – or pretty much any data format – we can run the same analysis.

For those interested, see note 1 on accessing the data.

Results – Global Averages

For our starting point in this article I decided to look at global averages from 2001 to 2013 inclusive (data from CERES not yet available for the whole of 2014). This was after:

  • looking at daily AIRS data
  • creating and comparing NCAR over 8 days with AIRS 8-day averages for surface skin temperature and surface air temperature
  • creating and comparing AIRS over 8-days with CERES for TOA OLR

More on those points in later articles.

The global relationship between surface temperature and OLR is our primary interest – for the purpose of determining feedbacks. Then we want to figure out some detail about why it occurs. I am especially interested in the AIRS data because it is the only global measurement of upper tropospheric water vapor (UTWV) – and UTWV, along with clouds, is the key factor in the question of feedback – how OLR changes with surface temperature. For now, we will look at the simple relationship between surface temperature (“skin temperature”) and OLR.

Here is the data, shown as an anomaly from the global mean values over the period Jan 1st, 2001 to Dec 31st, 2013. Each graph represents a different lag – how does global OLR (CERES) change with global surface temperature (NCAR) on a lag of 1 day, 7 days, 14 days and so on:

OLR vs Ts – NCAR & CERES

Figure 1 – Click to Expand

The slope gives the “apparent feedback” and the R² simply reflects how much of the graph is explained by the linear trend. This last value is easily estimated just by looking at each graph.

For reference, here is the timeseries data, as anomalies, with the temperature anomaly multiplied by a factor of 3 so its magnitude is similar to the OLR anomaly:

OLR from CERES vs Ts from NCAR as timeseries

Figure 2 – Click to Expand

Note on the calculation – I used the daily data to calculate a global mean value (area-weighted) and calculated one mean value over the whole time period then subtracted it from every daily data value to obtain an anomaly for each day. Obviously we would get the same slope and R² without using anomaly data (just a different intercept on the axes).

For reference, mean OLR = 238.9 W/m², mean Ts = 288.0 K.
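As a minimal sketch of that calculation (the names are mine; each day’s field is assumed already loaded from the 1º x 1º files as a lat x lon array):

```python
import numpy as np

def global_mean(field, lats):
    """Area-weighted global mean of a (lat, lon) field:
    each latitude band is weighted by cos(latitude)."""
    weights = np.cos(np.deg2rad(lats))
    return np.average(field.mean(axis=1), weights=weights)

# Synthetic example on a 1-degree grid; the real fields are the daily
# CERES OLR and NCAR skin temperature grids.
lats = np.arange(-89.5, 90.0, 1.0)
daily_fields = [288.0 + np.random.randn(180, 360) for _ in range(365)]

daily = np.array([global_mean(f, lats) for f in daily_fields])
anomalies = daily - daily.mean()  # anomaly from the whole-period mean
```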

My first question – before even producing the graphs – was whether a lag graph shows the change in OLR due to a change in Ts or due to a mixture of many effects. That is, what is the interpretation of the graphs?

The second question – what is the “right lag” to use? We don’t expect an instant response when we are looking for feedbacks:

  • The OLR through the window region will of course respond instantly to surface temperature change
  • The OLR as a result of changing humidity will depend upon how long it takes for more evaporated surface water to move into the mid- to upper-troposphere
  • The OLR as a result of changing atmospheric temperature, in turn caused by changing surface temperature, will depend upon the mixture of convection and radiative cooling

To say we know the right answer in advance pre-supposes that we fully understand atmospheric dynamics. This is the question we are asking, so we can’t pre-suppose anything. But at least we can suggest that something in the realm of a few days to a few months is the most likely candidate for a reasonable lag.

But the idea that there is one constant feedback and one constant lag might well be fatally flawed, despite being seductively simple. (A little more on that in note 3).

And that is one of the problems of this topic. Non-linear dynamics means non-linear results – a subject I find hard to describe in simple words. But let’s say – changes in OLR from changes in surface temperature might be “spread over” multiple time scales and be different at different times. (I have half-written an article trying to explain this idea in words, hopefully more on that sometime soon).

But for the purpose of this article I only wanted to present the simple results – for discussion and for more analysis to follow in subsequent articles.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data

References

Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System Experiment, Bull. Amer. Meteor. Soc., 77, 853-868   – free paper

Kalnay et al.,The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996  – free paper

NCEP Reanalysis data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/

Notes

Note 1: Boring Detail about Extracting Data

On the plus side, unlike many science journals, the data is freely available. Credit to the organizations that manage this data for their efforts in this regard, which includes visualization software and various ways of extracting data from their sites. However, you can still expect to spend a lot of time figuring out what files you want, where they are, downloading them, and then extracting the data from them. (Many traps for the unwary).

NCAR – data in .nc files, each parameter as a daily value (or 4x daily) in a separate annual .nc file on an (approx) 2.5º x 2.5º grid (actually T62 gaussian grid).

Data via ftp – ftp.cdc.noaa.gov. See http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html.

You get lat, long, and time in the file as well as the parameter. Care needed to navigate to the right folder because the filenames are the same for the 4x daily and the daily data.

NCAR uses the latest version of .nc files (which Matlab circa 2010 would not open – I had to update to the latest Matlab version, with many hours wasted trying to work out the reason for the failure).

CERES – data in .nc files; you select the data you want and the time period, but it has to be less than a 2GB file, and you get a file to download. I downloaded daily OLR data for each annual period. Data on a 1º x 1º grid. CERES uses an older .nc version, so there should be no problem opening the files.

Data from http://ceres-tool.larc.nasa.gov/ord-tool/srbavg

AIRS – data in .hdf files, in daily, 8-day average, or monthly average. The data is “ascending” = daytime, “descending” = nighttime plus some other products. Daily data doesn’t give global coverage (some gaps). 8-day average does but there are some missing values due to quality issues. Data in a 1ºx 1º grid. I used v6 data.

Data access page – http://disc.sci.gsfc.nasa.gov/datacollection/AIRX3STD_V006.html?AIRX3STD&#tabs-1.

Data via ftp.

HDF is not trivial to open up. The AIRS team have helpfully provided a Matlab tool to extract data which helped me. I think I still spent many hours figuring out how to extract what I needed.

Files Sizes – it’s a lot of data:

NCAR files that I downloaded (skin temperature) are only 12MB per annual file.

CERES files with only 2 parameters are 190MB per annual file.

AIRS files as 8-day averages (or daily data) are 400MB per file.

Also the grid for each is different. Lat from S-pole to N-pole in CERES, the reverse for AIRS and NCAR. Long from 0.5º to 359.5º in CERES but -179.5 to 179.5 in AIRS. (Note for any Matlab people, it won’t regrid, say using interp2, unless the grid runs from lowest number to highest number).
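The same trap exists outside Matlab. Here is a minimal sketch, in Python for illustration (the function name is mine), of the re-ordering needed before handing grids to an interpolator:

```python
import numpy as np

def make_ascending(lat, lon, field):
    """Flip a (lat, lon) gridded field so both coordinate vectors run
    low -> high, as interpolators such as Matlab's interp2 require."""
    if lat[0] > lat[-1]:   # e.g. a grid stored north pole first
        lat, field = lat[::-1], field[::-1, :]
    if lon[0] > lon[-1]:
        lon, field = lon[::-1], field[:, ::-1]
    return lat, lon, field
```

Converting between the 0º-360º and -180º-180º longitude conventions additionally needs the columns rolled around (e.g. with np.roll) and the longitude values adjusted to match.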

Note 2: Checking data – because I plan on using the daily 1ºx1º grid data from CERES and NCAR, I used it to create the daily global averages. As a check I downloaded the global monthly averages from CERES and compared. There is a discrepancy, which averages at 0.1 W/m².

Here is the difference by month:

CERES monthly discrepancy by month

Figure 3 – Click to expand

And a scatter plot by month of year, showing some systematic bias:

CERES monthly discrepancy – scatter plot

Figure 4

As yet, I haven’t dug any deeper to find if this is documented – for example, is there a correction applied to the daily data product in monthly means? is there an issue with the daily data? or, more likely, have I %&^ed up somewhere?

Note 3: Extract from Measuring Climate Sensitivity – Part One:

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005):

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.

Read Full Post »

In Latent heat and Parameterization I showed a formula for calculating latent heat transfer from the surface into the atmosphere, as well as the “real” formula. The parameterized version has horizontal wind speed x humidity difference (between the surface and some reference height in the atmosphere, typically 10m) x “a coefficient”.
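Written out – using standard symbols, which may differ slightly from the earlier article – the parameterized version is:

LH = L\rho C_E U_r (q_s - q_r)

where L = latent heat of vaporization, ρ = air density, C_E = “a coefficient” (the bulk transfer coefficient for moisture), U_r = horizontal wind speed at the reference height, q_s = specific humidity at the surface and q_r = specific humidity at the reference height.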

One commenter asked:

Why do we expect that vertical transport of water vapor to vary linearly with horizontal wind speed? Is this standard turbulent mixing?

The simple answer is “almost yes”. But as someone famously said, make it simple, but not too simple.

Charting a course between too simple and too hard is a challenge with this subject. By contrast, radiative physics is a cakewalk. I’ll begin with some preamble and eventually get to the destination.

There’s a set of equations describing the motion of fluids – the Navier-Stokes equations – which conserve momentum in 3 directions (x, y, z) and also conserve mass. Then there are also equations to conserve humidity and heat. The equation set is complete – in principle it can be solved – but there is a bit of a problem in practice. The Navier-Stokes equations in a rotating frame can be seen in The Coriolis Effect and Geostrophic Motion under “Some Maths”.

Simple linear equations with simple boundary conditions can be re-arranged to give a nice formula for the answer. Then you can plot this against that, and everyone can see how the relationships change with different material properties or boundary conditions. In real life the equations are not linear and the boundary conditions are not simple. So there is no “analytical solution” – where we want to know, say, the velocity of the fluid in the east-west direction as a function of time and get a nice equation for the answer. Instead we have to use numerical methods.

Let’s take a simple problem – if you want to know heat flow through an odd-shaped metal plate that is heated in one corner and cooled by steady air flow on the rest of its surface you can use these numerical methods and usually get a very accurate answer.

Turbulence is a lot more difficult due to the range of scales involved. Here’s a nice image of turbulence:

Figure 1

There is a cascade of energy from the largest scales down to the point where viscosity “eats up” the kinetic energy. In the atmosphere this is the sub-1mm scale. So if you want to model atmospheric motion numerically and accurately across a 100km scale, you need a grid of probably 100,000,000 x 100,000,000 x 10,000,000 points, solved at sub-second timesteps for a few days. That’s a lot of calculation. I’m not sure where turbulence modeling via “direct numerical simulation” has got to, but I’m pretty sure it is still far too hard, and in a decade it will still be a long way off. The computing power isn’t there.

Anyway, for atmospheric modeling you don’t really want to know the velocity in the x, y, z directions (usually denoted u, v, w) at trillions of points every second. Who is going to dig through that data? What you want is a statistical description of the key features.

So if we take the Navier-Stokes equation and average, what do we get? We get a problem.

For the mathematically inclined the following is obvious, but of course many readers aren’t, so here’s a simple example:

Let’s take 3 numbers: 1, 10, 100:   the average = (1+10+100)/3 = 37.

Now let’s look at the square of those numbers: 1, 100, 10000:  the average of the square of those numbers = (1+100+10000)/3 = 3367.

But if we take the average of our original numbers and square it, we get 37² = 1369. It’s strange but the average squared is not the same as the average of the squared numbers. That’s non-linearity for you.

In the Navier-Stokes equations we have values like east velocity x upwards velocity, written as uw. The average of uw, written as \overline{uw}, is not equal to the average of u times the average of w, written as \overline{u}.\overline{w} – for the same reason we just looked at.

When we create the Reynolds-averaged Navier-Stokes (RANS) equations we get lots of new terms like \overline{u'w'}. That is, we started with the original equations, which gave us a complete solution – the same number of equations as unknowns. But when we average we end up with more unknowns than equations.

It’s like saying x + y = 1, what is x and y? No one can say. Perhaps 1 & 0. Perhaps 1000 & -999.

Digression on RANS for Slightly Interested People

The Reynolds approach is to take a value like u,v,w (velocity in 3 directions) and decompose into a mean and a “rapidly varying” turbulent component.

So u = \overline{u} + u', where \overline{u} = mean value;  u’ = the varying component. So \overline{u'} = 0. Likewise for the other directions.

And \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}

So in the original equation where we have a term like u . \frac{\partial u}{\partial x}, it turns into  (\overline{u} + u') . \frac{\partial (\overline{u} + u')}{\partial x}, which, when averaged, becomes:

\overline{u} . \frac{\partial \overline{u}}{\partial x} + \overline{u' . \frac{\partial u'}{\partial x}}

So 2 unknowns instead of 1. The first term is the averaged flow, the second term is the turbulent flow. (Well, it’s an advection term for the change in velocity following the flow)

When we look at the conservation of energy equation we end up with terms for the movement of heat upwards due to average flow (almost zero) and terms for the movement of heat upwards due to turbulent flow (often significant). That is, a term like \overline{\theta'w'} which is “the mean of potential temperature variations x upwards eddy velocity”.

Or, in plainer English, how heat gets moved up by turbulence.

..End of Digression

Closure and the Invention of New Ideas

“Closure” is a maths term. To “close the equations” when we have more unknowns than equations means we have to invent a new idea. Some geniuses like Reynolds, Prandtl and Kolmogoroff did come up with some smart new ideas.

Often the smart ideas are around “dimensionless terms” or “scaling terms”. The first time you encounter these ideas they seem odd or just plain crazy. But like everything, over time strange ideas start to seem normal.

The Reynolds number is probably the simplest to get used to. The Reynolds number seeks to relate fluid flows to other similar fluid flows. You can have fluid flow through a massive pipe that is identical in the way turbulence forms to that in a tiny pipe – so long as the viscosity and density change accordingly.

The Reynolds number, Re = \frac{\rho L U}{\mu} – that is, density x length scale x mean velocity of the fluid / viscosity

And regardless of the actual physical size of the system and the actual velocity, turbulence forms for flow over a flat plate when the Reynolds number is about 500,000. By the way, for the atmosphere and ocean this is true most of the time.
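As an illustrative order-of-magnitude check (my numbers): for air near the surface, with density ρ ≈ 1.2 kg/m³ and viscosity μ ≈ 1.8×10⁻⁵ kg/(m.s), a mean wind of 5 m/s over a length scale of only 10 m gives Re = 1.2 x 10 x 5 / 1.8×10⁻⁵ ≈ 3×10⁶ – already far past the transition value, which is why atmospheric and oceanic flows are almost always turbulent.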

Kolmogoroff came up with an idea in 1941 about the turbulent energy cascade using dimensional analysis, and came to the conclusion that the energy of eddies increases with their size to the power 2/3 (in the “inertial subrange”). This is usually plotted vs frequency or wavenumber, where it becomes a -5/3 power.
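In the standard wavenumber form (a textbook statement of the result, added here for reference):

E(k) = C\epsilon^{\frac{2}{3}}k^{-\frac{5}{3}}

where ε is the rate of dissipation of turbulent kinetic energy, k is the wavenumber and C is a universal constant of order 1, valid within the inertial subrange. Here’s a relatively recent experimental verification of this power law: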

From Durbin & Reif 2010

Figure 2

In less genius-like manner, people measure stuff and use these measured values to “close the equations” for “similar” circumstances. Unfortunately, the measurements are only valid in a small range around the experimental conditions, and with turbulence it is hard to predict where the cutoff is.

A nice simple example, to which I hope to return because it is critical in modeling climate, is vertical eddy diffusivity in the ocean. By way of introduction to this, let’s look at heat transfer by conduction.

If only all heat transfer was as simple as conduction. That’s why it’s always first on the list in heat transfer courses..

If we have a plate of thickness d, and we hold one side at temperature T1 and the other side at temperature T2, the heat conduction per unit area is:

H_z = \frac{k(T_2-T_1)}{d}

where k is a material property called conductivity. We can measure this property and it’s always the same. It might vary with temperature but otherwise if you take a plate of the same material and have widely different temperature differences, widely different thicknesses – the heat conduction always follows the same equation.

Now using these ideas, we can take the actual equation for vertical heat flux via turbulence:

H_z =\rho c_p\overline{w'\theta'}

where w = vertical velocity, θ = potential temperature

And relate that to the heat conduction equation and come up with (aka ‘invent’):

H_z = \rho c_p K . \frac{\partial \theta}{\partial z}

Now we have an equation we can actually use because we can measure how potential temperature changes with depth. The equation has a new “constant”, K. But this one is not really a constant, it’s not really a material property – it’s a property of the turbulent fluid in question. Many people have measured the “implied eddy diffusivity” and come up with a range of values which tells us how heat gets transferred down into the depths of the ocean.
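As a toy version of the “implied eddy diffusivity” calculation – the gradient and flux here are assumed, illustrative values, not measurements:

```python
rho, cp = 1025.0, 3990.0  # seawater density (kg/m³) & specific heat (J/kg/K)
dtheta_dz = 0.01          # K/m - an assumed thermocline temperature gradient
H_z = 4.0                 # W/m² - an assumed downward heat flux

K = H_z / (rho * cp * dtheta_dz)
print(f"implied K = {K:.1e} m²/s")  # ~1e-4 m²/s for these numbers
```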

Well, maybe it does. Maybe it doesn’t tell us very much that is useful. Let’s come back to that topic and that “constant” another day.

The Main Dish – Vertical Heat Transfer via Horizontal Wind

Back to the original question. If you imagine a sheet of paper as big as your desk then that pretty much gives you an idea of the height of the troposphere (lower atmosphere where convection is prominent).

It’s as thin as a sheet of desk-sized paper in comparison with the dimensions of the earth. So any large-scale motion is horizontal, not vertical. Mean vertical velocities – which don’t include turbulence via strong localized convection – are very low. Mean horizontal velocities near the surface can be of the order of 5-10 m/s; mean vertical velocities are of the order of cm/s.

Let’s look at flow over the surface under “neutral conditions”. This means that there is little buoyancy production due to strong surface heating. In this case the energy for turbulence close to the surface comes from the kinetic energy of the mean wind flow – which is horizontal.

There is a surface drag which gets transmitted up through the boundary layer until there is “free flow” at some height. By using dimensional analysis, we can figure out what this velocity profile looks like in the absence of strong convection. It’s logarithmic:

Surface wind profile

Figure 3 – for typical ocean surface

Lots of measurements confirm this logarithmic profile.

We can then calculate the surface drag – or how momentum is transferred from the atmosphere to the ocean – using the simple formula derived and we come up with a simple expression:

\tau_0 = \rho C_D U_r^2

Where Ur is the velocity at some reference height (usually 10m), and CD is a constant calculated from the ratio of the reference height to the roughness height and the von Karman constant.

Using similar arguments we can come up with heat transfer from the surface. The principles are very similar. What we are actually modeling in the surface drag case is the turbulent vertical flux of horizontal momentum \rho \overline{u'w'} with a simple formula that just has mean horizontal velocity. We have “closed the equations” by some dimensional analysis.

Adding the Richardson number for non-neutral conditions we end up with a temperature difference along with a reference velocity to model the turbulent vertical flux of sensible heat \rho c_p . \overline{w'\theta'}. Similar arguments give latent heat flux L\rho . \overline{w'q'} in a simple form.
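Pulling the pieces together, a minimal numerical sketch for neutral conditions – the roughness height, wind speed, temperatures, humidities and the crude shortcut C_E ≈ C_H ≈ C_D are my assumptions, chosen as typical textbook magnitudes:

```python
import numpy as np

rho, cp, L = 1.2, 1004.0, 2.5e6  # air density, specific heat, latent heat (SI)
k, z_r, z0 = 0.4, 10.0, 1e-4     # von Karman constant, reference height, roughness (m)

C_D = (k / np.log(z_r / z0))**2  # drag coefficient from the log profile (~1.2e-3)
U_r = 8.0                        # mean wind at 10 m (m/s)
tau = rho * C_D * U_r**2         # surface stress (N/m²)

# Crude neutral-conditions shortcut: take the heat and moisture transfer
# coefficients equal to the drag coefficient.
C_H = C_E = C_D
theta_s, theta_r = 300.0, 299.0  # potential temperature at surface & 10 m (K)
q_s, q_r = 0.018, 0.012          # specific humidity at surface & 10 m (kg/kg)

SH = rho * cp * C_H * U_r * (theta_s - theta_r)  # sensible heat flux (W/m²)
LH = L * rho * C_E * U_r * (q_s - q_r)           # latent heat flux (W/m²)
print(f"C_D = {C_D:.2e}, tau = {tau:.3f} N/m2, SH = {SH:.0f} W/m2, LH = {LH:.0f} W/m2")
```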

Now with a bit more maths..

At the surface the horizontal velocity must be zero. The vertical flux of horizontal momentum creates a drag on the boundary layer wind. The vertical gradient of the mean wind, U, can only depend on height z, density ρ and surface drag.

So the “characteristic wind speed” for dimensional analysis is called the friction velocity, u*, and u* = \sqrt{\frac{\tau_0}{\rho}}

This strange number has the units of velocity: m/s  – ask if you want this explained.

So dimensional analysis suggests that \frac{z}{u*} . \frac{\partial U}{\partial z} should be a constant – “scaled wind shear”. The inverse of that constant is known as the Von Karman constant, k = 0.4.

So a simple re-arrangement gives \frac{\partial U}{\partial z} = \frac{u*}{kz}, and integrating with respect to height gives:

U(z) = \frac{u*}{k} . ln(\frac{z}{z_0})

where z0 is a constant from the integration, which is roughness height – a physical property of the surface where the mean wind reaches zero.
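This also shows where the C_D in the drag formula above comes from (a standard step, spelled out here): evaluating the profile at the reference height z_r gives u* = \frac{kU_r}{ln(z_r/z_0)}, so that:

\tau_0 = \rho u*^2 = \rho \left(\frac{k}{ln(z_r/z_0)}\right)^2 U_r^2,  i.e.  C_D = \left(\frac{k}{ln(z_r/z_0)}\right)^2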

The “real form” of the friction velocity is:

u*^2 = \frac{\tau_0}{\rho} = (\overline{u'w'}^2 + \overline{v'w'}^2)^\frac{1}{2},  where these eddy values are at the surface

we can pick a horizontal direction along the line of the mean wind (rotate coordinates) and come up with:

u*^2 = -\overline{u'w'},  the minus sign appearing because momentum is transported downward, so \overline{u'w'} is negative near the surface

If we consider a simple constant gradient argument:

\tau = - \rho . \overline{u'w'} = \rho K \frac{\partial \overline{u}}{\partial z}

where the first expression is the “real” equation and the second is the “invented” equation, or “our attempt to close the equation” from dimensional analysis.

Of course, this is showing how momentum is transferred, but the approach is pretty similar, just slightly more involved, for sensible and latent heat.

Conclusion

Turbulence is a hard problem. The atmosphere and ocean are turbulent, so calculating anything is difficult. Until a new paradigm in computing comes along, the real equations can’t be numerically solved across the whole range of scales – from the small scales where viscous dissipation damps out the turbulent kinetic energy, up to the scale of a synoptic event or the whole earth. However, numerical analysis has been used a lot to test out ideas that are hard to test in laboratory experiments, and it can give a lot of insight into parts of the problem.

In the meantime, experiments, dimensional analysis and intuition have provided a lot of very useful tools for modeling real climate problems.

Read Full Post »
