
Archive for the ‘Atmospheric Physics’ Category

In XII – Rainfall 2 we saw how many models project rainfall to change as GHGs increase: wetter tropics, drier subtropics and wetter high-latitude regions. We also saw an expectation that global mean rainfall will increase by something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall under temperature changes from GHG increases. The dashed line marked “C-C” is the famous (in climate physics) Clausius–Clapeyron relation which, at current temperatures, shows a 7% change in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations showing about half of that.

From Allen & Ingram (2002)

Figure 1

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust ‘emergent constraint’ on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (say a decade), and over the whole globe. If it hasn’t heated up or cooled down, we know that energy in must equal energy out (or if it has changed only marginally, then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression
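As a rough illustration of the digression, here is the budget with round global-mean numbers in the spirit of published energy-budget diagrams (the specific values are illustrative, not taken from this article):

```python
# Round global-mean numbers (W/m^2), illustrative only, in the spirit of
# published global energy-budget diagrams -- not values from this article
energy_in = {
    "solar absorbed by atmosphere": 78.0,
    "terrestrial radiation absorbed": 356.0,   # surface emission minus the window to space
    "convected warm air (thermals)": 17.0,
    "latent heat (evaporation -> condensation)": 80.0,
}
energy_out = {
    "radiated down to surface": 333.0,
    "radiated out to space": 199.0,
}

total_in = sum(energy_in.values())    # ~531 W/m^2
total_out = sum(energy_out.values())  # ~532 W/m^2
# The first law demands these balance, to within the rounding of the inputs
```

The point is only that the four input terms and the two output terms must sum to (almost) the same number over a long enough averaging period.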

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature; and ΔRT, which depends only on the temperature

L = latent heat of vaporization of water (a constant), ΔP = change in rainfall (= change in evaporation, since global evaporation is balanced by rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So doubling CO2, ΔRc = -2 to -3W/m²; prior to a temperature change ΔRT = 0; and so ΔP reduces.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and a warmer atmosphere emits more radiation. Over the past decades the direct effect of increasing GHGs (ΔRc, which reduces precipitation) has roughly offset the warming effect (kΔT, which increases it) – this is why the model results of rainfall in our figure 2 above show no trend over 50 years, and also match the observations. The constraint on rainfall is the changing radiative balance in the troposphere.
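A back-of-envelope sketch of this balance, using the numbers above (ΔRc ≈ −2.5 W/m² for doubled CO2, k ≈ 3 W/m²/K, and ≈1 W/m² of latent heating per 1% of global precipitation; the 2.5 K of equilibrium warming is an assumed round number, not from the paper):

```python
# Conceptual balance from Allen & Ingram: dRc + dRT = L * dP, with
# ~1 W/m^2 of latent heating per 1% change in global precipitation,
# so in percentage terms: dP_percent ~= dRc + k * dT
K = 3.0           # W/m^2 per K: extra tropospheric radiative cooling per K of warming
DRC_2XCO2 = -2.5  # W/m^2: reduced tropospheric cooling from doubled CO2 (mid-range of -2 to -3)

def precip_change_percent(dT, dRc=DRC_2XCO2, k=K):
    """% change in global precipitation for warming dT (K) and fast CO2 effect dRc."""
    return dRc + k * dT

fast_response = precip_change_percent(0.0)  # before any warming: rainfall decreases (~-2.5%)
equilibrium = precip_change_percent(2.5)    # with an assumed ~2.5 K of warming: ~+5%
sensitivity = equilibrium / 2.5             # ~2% per K, in the 2-3 %/K range of the models
```

So the same simple equation reproduces both the initial reduction in the hydrological cycle and the eventual 2-3% per K increase.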

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

1 Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can’t hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
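The numbers in this note can be reproduced with a standard saturation vapor pressure approximation (the Tetens formula – my choice here, not the article's):

```python
import math  # not strictly needed below, but handy for variations

def saturation_mixing_ratio(T_celsius, pressure_hpa=1013.25):
    """Approximate grams of water vapor per kg of air at saturation (Tetens formula)."""
    es = 6.1078 * 10 ** (7.5 * T_celsius / (T_celsius + 237.3))  # sat. vapor pressure, hPa
    w = 0.622 * es / (pressure_hpa - es)                          # mixing ratio, kg/kg
    return 1000 * w                                               # g/kg

w0 = saturation_mixing_ratio(0)    # ~3.8 g/kg, the "around 4g" above
w10 = saturation_mixing_ratio(10)  # ~7.6 g/kg, roughly doubled
w20 = saturation_mixing_ratio(20)  # ~14.7 g/kg, roughly doubled again
condensed = w20 - w0               # ~11 g/kg condensed out on cooling from 20C to 0C
```

The doubling per 10ºC and the ~11g condensed out in the rising-parcel example both drop straight out of the non-linear saturation curve.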

Read Full Post »

I probably should have started a separate series on rainfall and then woven the results back into the Impacts series. It might take a few articles working through the underlying physics and how models and observations of current and past climate compare before being able to consider impacts.

There are a number of different ways to look at rainfall models and reality:

  • What underlying physics provides definite constraints regardless of individual models, groups of models or parameterizations?
  • How well do models represent the geographical distribution of rain over a climatological period like 30 years? (e.g. figure 8 in Impacts XI – Rainfall 1)
  • How well do models represent the time series changes of rainfall?
  • How well do models represent the land vs ocean? (when we think about impacts, rainfall over land is what we care about)
  • How well do models represent the distribution of rainfall and the changing distribution of rainfall, from lightest to heaviest?

In this article I thought I would highlight a set of conclusions from one paper among many. It’s a good starting point. The paper is A canonical response of precipitation characteristics to global warming from CMIP5 models by Lau and his colleagues, and is freely available, and as always I recommend people read the whole paper, along with the supporting information that is also available via the link.

As an introduction, the underlying physics perhaps provides some constraints. This is strongly believed in the modeling community. The constraint is a simple one – if we warm the ocean by 1K (= 1ºC) then the amount of water vapor above the ocean surface increases by about 7%. So we expect a warmer world to have more water vapor – at least in the boundary layer (typically 1km) and over the ocean. If we have more water vapor then we expect more rainfall. But GCMs and also simple models suggest a lower value, like 2-3% per K, not 7%/K. We will come back to why in another article.
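The ~7% per K figure follows from the Clausius-Clapeyron relation. A minimal sketch of the fractional rate, (1/es)·des/dT = L/(Rv·T²):

```python
# Fractional Clausius-Clapeyron rate: (1/es) * des/dT = L / (Rv * T^2)
L_VAP = 2.5e6   # J/kg, latent heat of vaporization of water
R_V = 461.5     # J/(kg K), specific gas constant for water vapor

def cc_rate(T_kelvin):
    """Fractional increase in saturation vapor pressure per K of warming."""
    return L_VAP / (R_V * T_kelvin ** 2)

rate_near_surface = cc_rate(288.0)  # ~0.065, i.e. roughly 6-7% per K at typical
                                    # surface temperatures, commonly quoted as ~7%
```

The rate is slightly temperature-dependent (a little higher for colder air), which is why "about 7%" is the usual shorthand.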

It also seems from models that with global warming, rainfall increases more in regions and times of already high rainfall and reduces in regions and times of low rainfall – the “wet get wetter and the dry get drier”. (A catchy slogan is also good marketing – it helps an idea make progress.) So we also expect changes in the distribution of rainfall. One reason for this is a change in the tropical circulation. All to be covered later, so on to the paper..

We analyze the outputs of 14 CMIP5 models based on a 140 year experiment with a prescribed 1% per year increase in CO2 emission. This rate of CO2 increase is comparable to that prescribed for the RCP8.5, a relatively conservative business-as-usual scenario, except the latter includes also changes in other GHG and aerosols, besides CO2.

A 27-year period at the beginning of the integration is used as the control to compute rainfall and temperature statistics, and to compare with climatology (1979–2005) of rainfall data from the Global Precipitation Climatology Project (GPCP). Two similar 27-year periods in the experiment that correspond approximately to a doubling of CO2 emissions (DCO2) and a tripling of CO2 emissions (TCO2) compared to the control are chosen respectively to compute the same statistics..

Just a note that I disagree with the claim that RCP8.5 is a “relatively conservative business as usual scenario” (see Impacts – II – GHG Emissions Projections: SRES and RCP), but that’s just an opinion, as are all views about where the world will be in population, GDP and cumulative emissions 100-150 years from now. It doesn’t detract from the rainfall analysis in the paper.

For people wondering “what is CMIP5?” – this is the model inter-comparison project for the most recent IPCC report (AR5) where many models have to address the same questions so they can be compared.

Here we see (along with other graphs, you can click to enlarge) what the models show in temperature (top left), mean global rainfall (top right), zonal rainfall anomaly by latitude (bottom left) and the control vs the tripled-CO2 comparison (bottom right). In the first three graphs each color is one model, while the black line is the mean of the models (“ensemble mean”). The bottom right graph helps put the changes shown in the bottom left into perspective – the difference between the red and the blue curves is the difference between tripling CO2 and today:

From Lau et al 2013

Figure 1 – Click to enlarge

In the figure above, the bottom left graph shows anomalies. We see one of the characteristics of models as a result of more GHGs – wetter tropics and drier sub-tropics, along with wetter conditions at higher latitudes.

From the supplementary material, below we see a better regional breakdown of fig 1d (bottom right in the figure above). I’ll highlight the bottom left graph (c) for the African region. Over the continent, the differences between present day and tripling CO2 seem minor as far as model predictions go for mean rainfall:

From Lau et al 2013

Figure 2 – Click to enlarge

The supplementary material also has a comparison between models and observations. The first graph below is what we are looking at (the second graph we will consider afterwards). TRMM (Tropical Rainfall Measuring Mission) is satellite data and GPCP is the rainfall climatology we met in the last article – both are observational datasets. We see that the models over-estimate tropical rainfall, especially south of the equator:

From Lau et al 2013

Figure 3 – Click to enlarge

Rainfall Distribution from Light through to Heavy Rain

Lau and his colleagues then look at rainfall distribution in terms of light rainfall through to heavier rainfall. So, take global rainfall and divide it into frequency of occurrence, with light rainfall to the left and heavy rainfall to the right. Take a look back at the bottom graph in the figure above (figure 3, their figure S1). Note that the horizontal axis is logarithmic, with a ratio of over 1000 from left to right.

It isn’t an immediately intuitive graph. There are two clusters of curves. The left cluster shows how often each rainfall amount occurred, with the black line being the GPCP observations. The right cluster shows how much rain fell (as a percentage of total rainfall) at each rainfall rate; again black is observations.

So light rainfall, of around 1mm/day and below, accounts for 50% of the time, but, being light, accounts for less than 10% of total rainfall.

To facilitate discussion regarding rainfall characteristics in this work, we define, based on the ensemble model PDF, three major rain types: light rain (LR), moderate rain (MR), and heavy rain (HR) respectively as those with monthly mean rain rate below the 20th percentile (<0.3 mm/day), between the 40th–70th percentile (0.9–2.4 mm/day), and above the 98.5th percentile (>9 mm/day). An extremely heavy rain (EHR) type defined at the 99.9th percentile (>24 mm/day) will also be referred to, as appropriate.
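A sketch of how such a percentile-band classification might be applied (the thresholds are the paper's; the code and the function name are mine):

```python
import numpy as np

def classify_rain(rates):
    """Label monthly-mean rain rates (mm/day) as LR / MR / HR / EHR per the paper's bands."""
    rates = np.asarray(rates, dtype=float)
    return np.select(
        [rates > 24.0,                      # EHR: above the 99.9th percentile
         rates > 9.0,                       # HR: above the 98.5th percentile
         (rates >= 0.9) & (rates <= 2.4),   # MR: 40th-70th percentile
         rates < 0.3],                      # LR: below the 20th percentile
        ["EHR", "HR", "MR", "LR"],
        default="-",                        # gaps between the defined percentile bands
    )

labels = classify_rain([0.1, 1.5, 10.0, 30.0])  # -> LR, MR, HR, EHR
```

Note the bands deliberately leave gaps (e.g. 0.3–0.9 mm/day is unclassified), so not every grid point falls into one of the four types.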

Here is a geographical breakdown of the total and then the rainfall in these three categories, model mean on the left and observations on the right:

From Lau et al 2013

Figure 4 – Click to enlarge

We can see that the models tend to overestimate the heavy rain and underestimate the light rain. These graphics are excellent because they help us to see the geographical distribution.

Now in the graphs below we see at the top the changes in frequency of mean precipitation (60S-60N) as a function of rain rate; and at the bottom we see the % change in rainfall per K of temperature change, again as a function of rain rate. Note that the bottom graph also has a logarithmic scale for the % change, so as you move up each grid square the value is doubled.

The different models are also helpfully indicated so the spread can be seen:

From Lau et al 2013

Figure 5 – Click to enlarge

Notice that the models are all predicting quite a high % change in rainfall per K for the heaviest rain – something around 50%. In contrast the light rainfall is expected to be up a few % per K and the medium rainfall is expected to be down a few % per K.

Globally, rainfall increases by 4.5%, with a sensitivity (dP/P/dT) of 1.4% per K

Here is a table from their supplementary material with a zonal breakdown of changes in mean rainfall (so not divided into heavy, light etc). For the non-maths people the first row, dP/P is just the % change in precipitation (“d” in front of a variable means “change in that variable”), the second row is change in temperature and the third row is the % change in rainfall per K (or ºC) of warming from GHGs:

From Lau et al 2013

Figure 6 – Click to enlarge
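As a rough consistency check on the table's third row (the global warming value here is an assumed round number for illustration, not taken from the paper):

```python
# Hypothetical global-mean values consistent with the quoted result
dP_over_P = 4.5  # % global rainfall change at ~tripled CO2 (quoted above)
dT = 3.2         # K global-mean warming at that point (assumed, for illustration)

sensitivity = dP_over_P / dT  # % per K -- compare the quoted ~1.4% per K
```

The same division reproduces each zonal entry from the first two rows.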

Here are the projected geographical distributions of the changes in mean (top left), heavy (top right), medium (bottom left) and light rain (bottom right) – using their earlier definitions – under tripling CO2:

From Lau et al 2013

Figure 7 – Click to enlarge

And as a result of these projections, the authors also show the number of dry months and the projected changes in number of dry months:

From Lau et al 2013

Figure 8 – Click to enlarge

The authors conclude:

The IPCC CMIP5 models project a robust, canonical global response of rainfall characteristics to CO2 warming, featuring an increase in heavy rain, a reduction in moderate rain, and an increase in light rain occurrence and amount globally.

For a scenario of 1% CO2 increase per year, the model ensemble mean projects at the time of approximately tripling of the CO2 emissions, the probability of occurring of extremely heavy rain (monthly mean >24mm/day) will increase globally by 100%–250%, moderate rain will decrease by 5%–10% and light rain will increase by 10%–15%.

The increase in heavy rain is most pronounced in the equatorial central Pacific and the Asian monsoon regions. Moderate rain is reduced over extensive oceanic regions in the subtropics and extratropics, but increased over the extratropical land regions of North America, and Eurasia, and extratropical Southern Oceans. Light rain is mostly found to be inversely related to moderate rain locally, and with heavy rain in the central Pacific.

The model ensemble also projects a significant global increase up to 16% more frequent in the occurrences of dry months (drought conditions), mostly over the subtropics as well as marginal convective zone in equatorial land regions, reflecting an expansion of the desert and arid zones..

 

..Hence, the canonical global rainfall response to CO2 warming captured in the CMIP5 model projection suggests a global scale readjustment involving changes in circulation and rainfall characteristics, including possible teleconnection of extremely heavy rain and droughts separated by far distances. This adjustment is strongly constrained geographically by climatological rainfall pattern, and most likely by the GHG warming induced sea surface temperature anomalies with unstable moister and warmer regions in the deep tropics getting more heavy rain, at the expense of nearby marginal convective zones in the tropics and stable dry zones in the subtropics.

Our results are generally consistent with so-called “the rich-getting-richer, poor-getting-poorer” paradigm for precipitation response under global warming..

Conclusion

This article has basically presented the results of one paper, which demonstrates consistency in model response of rainfall to doubling and tripling of CO2 in the atmosphere. In subsequent articles we will look at the underlying physics constraints, at time-series over recent decades and try to make some kind of assessment.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

References

A canonical response of precipitation characteristics to global warming from CMIP5 models, William K.-M. Lau, H.-T. Wu, & K.-M. Kim, GRL (2013) – free paper

Further Reading

Here are a bunch of papers that I found useful for readers who want to dig into the subject. Most of them are available for free via Google Scholar, but one of the most helpful to me (first in the list) was Allen & Ingram 2002 and the only way I could access it was to pay $4 to rent it for a couple of days.

Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419:224–232

Allan RP (2006) Variability in clear-sky longwave radiative cooling of the atmosphere. J Geophys Res 111:D22, 105

Allan, R. P., B. J. Soden, V. O. John, W. Ingram, and P. Good (2010), Current changes in tropical precipitation, Environ. Res. Lett., doi:10.1088/1748-9326/5/2/025205

Physically Consistent Responses of the Global Atmospheric Hydrological Cycle in Models and Observations, Richard P. Allan et al, Surv Geophys (2014)

Held IM, Soden BJ (2006) Robust responses of the hydrological cycle to global warming. J Clim 19:5686–5699

Changes in temperature and precipitation extremes in the CMIP5 ensemble, VV Kharin et al, Climatic Change (2013)

Energetic Constraints on Precipitation Under Climate Change, Paul A. O’Gorman et al, Surv Geophys (2012) 33:585–608

Trenberth, K. E. (2011), Changes in precipitation with climate change, Clim. Res., 47, 123–138, doi:10.3354/cr00953

Zahn M, Allan RP (2011) Changes in water vapor transports of the ascending branch of the tropical circulation. J Geophys Res 116:D18111

Read Full Post »

If we want to assess forecasts of floods, droughts and crop yields then we will need to know rainfall. We will also need to know temperature of course.

The forte of climate models is temperature. Rainfall is more problematic.

Before we get to model predictions about the future we need to review observations and the ability of models to reproduce them. Observations are also problematic – rainfall varies locally and over short durations. And historically we lacked effective observation systems in many locations and regions of the world, so data has to be pieced together and estimated from reanalysis.

Smith and his colleagues created a new rainfall dataset. Here is a comment from their 2012 paper:

Although many land regions have long precipitation records from gauges, there are spatial gaps in the sampling for undeveloped regions, areas with low populations, and over oceans. Since 1979 satellite data have been used to fill in those sampling gaps. Over longer periods gaps can only be filled using reconstructions or reanalyses..

Here are two views of the global precipitation data from a dataset which starts with the satellite era, that is, 1979 onwards – GPCP (Global Precipitation Climatology Project):

From Adler et al 2003

Figure 1

From Adler et al 2003

Figure 2

For historical data before satellites we only have rain gauge data. The GPCC dataset, explained in Becker et al 2013, shows the number of stations over time by region:

From Becker et al 2013

Figure 3- Click to expand

And the geographical distribution of rain gauge stations at different times:

From Becker et al 2013

Figure 4 – Click to expand

The IPCC compared the global trends over land from four different datasets over the last century and the last half-century:

From IPCC AR5 Ch. 2

Figure 5 – Click to expand

And the regional trends:

From IPCC AR5 Ch. 2

Figure 6 – Click to expand

The graphs show the annual change in rainfall; note the different scales for each region (as we would expect, given the difference in average rainfall between regions):

From IPCC AR5 ch 2

Figure 7

We see that the decadal or half-decadal variation is much greater than any apparent long term trend. The trend data (as reviewed by the IPCC in figs 5 & 6) shows significant differences in the datasets but when we compare the time series it appears that the datasets match up better than indicated by the trend comparisons.

The data with the best historical coverage is 30ºN – 60ºN and the trend values for 1951-2000 (from different reconstructions) range from an increase of 1 to 1.5 mm/yr per decade (fig 6 / table 2.10 of IPCC report). This is against an absolute value of about 1000 mm/yr in this region (reading off the climatology in figure 2).

This is just me trying to put the trend data in perspective.
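Making that perspective explicit with the numbers just quoted:

```python
# Trend vs climatology, 30N-60N land, 1951-2000 (values read off the IPCC table/figures)
trend_mm_per_yr_per_decade = (1.0, 1.5)  # range across reconstructions
climatology_mm_per_yr = 1000.0           # approximate absolute rainfall in this band

pct_per_decade = [100 * t / climatology_mm_per_yr for t in trend_mm_per_yr_per_decade]
pct_over_50yr = [5 * p for p in pct_per_decade]  # cumulative change over the half-century
```

That is roughly 0.1-0.15% per decade, or about 0.5-0.75% over the whole fifty years – tiny compared with the decadal variability visible in figure 7.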

Models

Here is the IPCC AR5 chapter 9 comparison of models with satellite-era rainfall observations. Top left is observations (essentially the same dataset as figure 1 in this article, over a slightly longer period and with different colors) and bottom right is the percentage error of the model average with respect to observations:

From IPCC AR5 ch 9

Figure 8 – Click to expand

We can see that the average of all models has substantial errors on mean rainfall.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

References

IPCC AR5 Chapter 2

Improved Reconstruction of Global Precipitation since 1900, Smith, Arkin, Ren & Shen, Journal of Atmospheric and Oceanic Technology (2012)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, Journal of Hydrometeorology (2003)

A description of the global land-surface precipitation data products of the Global Precipitation Climatology Centre with sample applications including centennial (trend) analysis from 1901–present, A Becker, Earth Syst. Sci. Data (2013)

Read Full Post »

In Part Nine – Data I – Ts vs OLR we looked at the monthly surface temperature (“skin temperature”) from NCAR vs OLR measured by CERES. The slope of the data was about 2 W/m² per 1K surface temperature change. Commentators pointed out that this was really the seasonal relationship – it probably didn’t indicate anything further.

In Part Ten we looked at anomaly data: first where monthly means were removed; and then where daily means were removed. Mostly the data appeared to be a big noisy scatter plot with no slope. The main reason that I could see for this lack of relationship was that anomaly data didn’t “keep going” in one direction for more than a few days. So it’s perhaps unreasonable to expect that we would find any relationship, given that most circulation changes take time.

We haven’t yet looked at regional versions of Ts vs OLR, mainly because I can’t yet see what we could usefully plot. A large amount of heat is exported from the tropics to the poles, so without being able to itemize the amount of heat lost from a tropical region or the amount gained by a mid-latitude or polar region, what could we deduce? One solution is to look at the whole globe in totality – which is what we have done.

In this article we’ll look at the mean global annual data. We only have complete years of CERES data from 2001 to 2013 (data wasn’t available to the end of 2014 when I downloaded it).

Here are the time-series plots for surface temperature and OLR:

Global annual Ts vs year & OLR vs year 2001-2013

Figure 1

Here is the scatter plot of the above data, along with the best-fit linear interpolation:

Global annual Ts vs OLR 2001-2013

Figure 2

The calculated slope is similar to the results we obtained from the monthly data (which probably showed the seasonal relationship). This, though, is definitely year-to-year data, and it also gives us a slope that indicates positive feedback. The correlation is not strong – the R² value is 0.37 – but it exists.

As explained in previous posts, a change of 3.6 W/m² per 1K is a “no feedback” relationship, where a uniform 1K change in surface & atmospheric temperature causes an OLR increase of 3.6 W/m² due to increased surface and atmospheric radiation – a greater increase in OLR would be negative feedback and a smaller increase would be positive feedback (e.g. see Part Eight with the plot of OLR changes vs latitude and height, which integrated globally gives 3.6 W/m²).
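A sketch of the slope-fitting step, using synthetic stand-in data (not the real NCAR/CERES annual series) with a slope of ~2 W/m² per K built in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 13 annual means (2001-2013) -- illustrative only
ts_anomaly = np.linspace(-0.2, 0.2, 13)                    # surface temperature anomaly, K
olr_anomaly = 2.0 * ts_anomaly + rng.normal(0, 0.05, 13)   # OLR anomaly, W/m^2

slope, intercept = np.polyfit(ts_anomaly, olr_anomaly, 1)  # best-fit linear slope
r_squared = np.corrcoef(ts_anomaly, olr_anomaly)[0, 1] ** 2

NO_FEEDBACK = 3.6  # W/m^2 per K: OLR response to uniform warming with no feedbacks
# slope < 3.6 indicates positive feedback; slope > 3.6 would indicate negative feedback
```

The comparison at the end is the whole point: the fitted slope is judged against the 3.6 W/m² per K no-feedback benchmark, not against zero.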

The “no feedback” calculation is perhaps a bit more complicated than this, and I want to dig into it at some stage.

I haven’t looked at whether the result is sensitive to the date of the start of year. Next, I want to look at the changes in humidity, especially upper tropospheric water vapor, which is a key area for radiative changes. This will be a bit of work, because AIRS data comes in big files (there is a lot of data).

Read Full Post »

[I was going to post this new article not long after the last article in the series, but felt I was missing something important and needed to think about it. Instead I’ve not had any new insights and am posting for comment.]

In Part Nine – Data I, we looked at the relationship between Ts (surface temperature) and OLR (outgoing longwave radiation), for reasons all explained there.

The relationship shown there appears to be primarily the seasonal relationship, and it looks like a positive feedback because the OLR increase is only 2W/m² per 1K of temperature increase. What about the feedback on a different timescale from the seasonal relationship?

From the 2001-2013 data, here is the monthly mean and the daily mean for both Ts and OLR:

Monthly mean & daily mean for Ts & OLR

Figure 1

If we remove the monthly mean from the data, here are those same relationships (shown in the last article as anomalies from the overall 2001-2013 mean):

OLR vs Ts - NCAR -CERES-monthlymeansremoved

Figure 2 – Click to Expand

On a lag of 1 day there is a possible relationship with a low correlation – and the rest of the lags show no relationship at all.

Of course, we have created a problem with this new dataset – as the lag increases we are “jumping boundaries”. For example, on the 7-day lag all of the Ts data in the last week of April is being compared with the OLR data in the first week of May. With slowly rising temperatures, the last week of April will be “positive temperature data”, but the first week of May will be “negative OLR data”. So we expect 1/4 of our data to show the opposite relationship.

So we can show the data with the “monthly boundary jumps removed” – which means we can only show lags of say 1-14 days (with 3% – 50% of the data cut out); and we can also show the data as anomalies from the daily mean. Both have the potential to demonstrate the feedback on shorter timescales than the seasonal cycle.
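As a sketch of the bookkeeping this involves – with made-up daily values and a hypothetical `lag_pairs` helper, not the actual analysis code – pairs whose lag crosses a monthly boundary are simply discarded:

```python
from datetime import date, timedelta

# Hypothetical sketch: build (Ts, OLR) lag pairs from daily anomaly data,
# discarding any pair where the lag crosses a monthly boundary.
def lag_pairs(days, ts_anom, olr_anom, lag):
    pairs = []
    for i in range(len(days) - lag):
        d1, d2 = days[i], days[i + lag]
        if (d1.year, d1.month) == (d2.year, d2.month):  # same month only
            pairs.append((ts_anom[i], olr_anom[i + lag]))
    return pairs

# Toy data: 60 days from 1 Jan 2001 (31 days of Jan, 28 of Feb, 1 of Mar)
days = [date(2001, 1, 1) + timedelta(n) for n in range(60)]
ts   = [0.1 * n for n in range(60)]    # made-up anomaly values
olr  = [0.2 * n for n in range(60)]

pairs7 = lag_pairs(days, ts, olr, lag=7)
print(len(pairs7))   # 24 pairs inside Jan + 21 inside Feb = 45
```

The longer the lag, the larger the fraction of pairs discarded – which is the 3% – 50% loss for lags of 1-14 days.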

First, here is the data with daily means removed:

OLR vs Ts - NCAR -CERES-dailymeansremoved

Figure 3 – Click to Expand

Second, here is the data with the monthly means removed as in figure 2, but this time ensuring that no monthly boundaries are crossed (so some of the data is removed to ensure this):

OLR vs Ts - NCAR -CERES-monthlymeansremoved-noboundary

Figure 4 – Click to Expand

So basically this demonstrates no correlation between change in daily global OLR and change in daily global temperature on less than seasonal timescales. (Or “operator error” with the creation of my anomaly data). This is excluding (because we haven’t tested it here) the very short timescale of day to night change.

This was surprising at first sight.

That is, we see global Ts increasing on a given day but we can’t distinguish any corresponding change in global OLR from random changes, at least until we get to seasonal time periods? (See graph in last article).

Then what is probably the reason came into view. Remember that this is anomaly data (daily global temperature with monthly mean subtracted). This bar graph demonstrates that when we are looking at anomaly data, most of the changes in global Ts are reversed the next day, or usually within a few days:

Days temperature goes in same direction

Figure 5

This means that we are unlikely to see changes in Ts causing noticeable changes in OLR unless the climate response we are looking for (humidity and cloud changes) occurs within a day or two.
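A minimal sketch of the statistic behind Figure 5 – counting how many consecutive days an anomaly series keeps moving in the same direction before reversing (toy data, not the NCAR series):

```python
# For a daily anomaly series, return the lengths of runs of consecutive
# days in which the value moves in the same direction before reversing.
def run_lengths(series):
    diffs = [b - a for a, b in zip(series, series[1:])]
    runs, count = [], 1
    for prev, cur in zip(diffs, diffs[1:]):
        if (cur > 0) == (prev > 0):   # same direction as yesterday
            count += 1
        else:                         # direction reversed
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

# A toy anomaly series that mostly flips direction day to day
anoms = [0.0, 0.3, 0.1, 0.4, 0.2, 0.5, 0.6, 0.4]
print(run_lengths(anoms))   # mostly runs of length 1
```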

That’s my preliminary thinking, looking at the data – i.e., we can’t expect to see much of a relationship, and we don’t see any relationship.

One further point – explained in much more detail in the (short) series Measuring Climate Sensitivity – is that of course changes in temperature are not caused by some mechanism that is independent of radiative forcing.

That is, our measurement problem is compounded by changes in temperature being first caused by fluctuations in radiative forcing (the radiation balance) and ocean heat changes and then we are measuring the “resulting” change in the radiation balance resulting from this temperature change:

Radiation balance & ocean heat balance => Temperature change => Radiation balance & ocean heat balance

So we can’t easily distinguish the net radiation change caused by temperature changes from the radiative contribution to the original temperature changes.

I look forward to readers’ comments.

Articles in the Series

Part One – introducing some ideas from Ramanathan from ERBE 1985 – 1989 results

Part One – Responses – answering some questions about Part One

Part Two – some introductory ideas about water vapor including measurements

Part Three – effects of water vapor at different heights (non-linearity issues), problems of the 3d motion of air in the water vapor problem and some calculations over a few decades

Part Four – discussion and results of a paper by Dessler et al using the latest AIRS and CERES data to calculate current atmospheric and water vapor feedback vs height and surface temperature

Part Five – Back of the envelope calcs from Pierrehumbert – focusing on a 1995 paper by Pierrehumbert to show some basics about circulation within the tropics and how the drier subsiding regions of the circulation contribute to cooling the tropics

Part Six – Nonlinearity and Dry Atmospheres – demonstrating that different distributions of water vapor yet with the same mean can result in different radiation to space, and how this is important for drier regions like the sub-tropics

Part Seven – Upper Tropospheric Models & Measurement – recent measurements from AIRS showing upper tropospheric water vapor increases with surface temperature

Part Eight – Clear Sky Comparison of Models with ERBE and CERES – a paper from Chung et al (2010) showing clear sky OLR vs temperature vs models for a number of cases

Part Nine – Data I – Ts vs OLR – data from CERES on OLR compared with surface temperature from NCAR – and what we determine

Part Ten – Data II – Ts vs OLR – more on the data


In the last article we looked at a paper which tried to unravel – for clear sky only – how the OLR (outgoing longwave radiation) changed with surface temperature. It did the comparison by region, by season and from year to year.

The key point for new readers to understand – why are we interested in how OLR changes with surface temperature? The concept is not so difficult. The practical analysis presents more problems.

Let’s review the concept – and for more background please read at least the start of the last article: if we increase the surface temperature, perhaps due to increases in GHGs, but it could be for any reason, what happens to outgoing longwave radiation? Obviously, we expect OLR to increase. The real question is: by how much?

If there is no feedback then OLR should increase by about 3.6 W/m² for every 1K in surface temperature (these values are global averages):

  • If there is positive feedback, perhaps due to more humidity, then we expect OLR to increase by less than 3.6 W/m² – think “not enough heat got out to get things back to normal”
  • If there is negative feedback, then we expect OLR to increase by more than 3.6 W/m². In the paper we reviewed in the last article the authors found about 2 W/m² per 1K increase – a positive feedback, but were only considering clear sky areas

One reader asked about an outlier point on the regression slope and whether it affected the result. This motivated me to do something I have had on my list for a while now – get “all of the data” and analyse it. This way, we can review it and answer questions ourselves – like in the Visualizing Atmospheric Radiation series where we created an atmospheric radiation model (first principles physics) and used the detailed line by line absorption data from the HITRAN database to calculate how this change and that change affected the surface downward radiation (“back radiation”) and the top of atmosphere OLR.

With the raw surface temperature, OLR and humidity data “in hand” we can ask whatever questions we like and answer these questions ourselves..

NCAR reanalysis, CERES and AIRS

CERES and AIRS – satellite instruments – are explained in CERES, AIRS, Outgoing Longwave Radiation & El Nino.

CERES measures total OLR in a 1ºx 1º grid on a daily basis.

AIRS has a “hyper-spectral” instrument, which means it looks at lots of frequency channels. The intensity of radiation at these many wavelengths can be converted, via calculation, into measurements of atmospheric temperature at different heights, water vapor concentration at different heights, CO2 concentration, and concentration of various other GHGs. Additionally, AIRS calculates total OLR (it doesn’t measure it – i.e. it doesn’t have a measurement device from 4μm – 100μm). It also measures parameters like “skin temperature” in some locations and calculates the same in other locations.

For the purposes of this article, I haven’t yet dug into the “how” and the reliability of surface AIRS measurements. The main point to note about satellites is they sit at the “top of atmosphere” and their ability to measure stuff near the surface depends on clever ideas and is often subverted by factors including clouds and surface emissivity. (AIRS has microwave instruments specifically to independently measure surface temperature even in cloudy conditions, because of this problem).

NCAR is a “reanalysis product”. It is not measurement, but it is “informed by measurement”. It is part measurement, part model. Where there is reliable data measurement over a good portion of the globe the reanalysis is usually pretty reliable – only being suspect at the times when new measurement systems come on line (so trends/comparisons over long time periods are problematic). Where there is little reliable measurement the reanalysis depends on the model (using other parameters to allow calculation of the missing parameters).

Some more explanation in Water Vapor Trends under the sub-heading Reanalysis – or Filling in the Blanks.

For surface temperature measurements reanalysis is not subverted by models too much. However, the mainstream surface temperature series are surely better than NCAR – I know that there is an army of “climate interested people” who follow this subject very closely. (I am not in that group).

I used NCAR because it is simple to download and extract. And I expect – but haven’t yet verified – that it will be quite close to the various mainstream surface temperature series. If someone is interested and can provide daily global temperature from another surface temperature series as an Excel, csv, .nc – or pretty much any data format – we can run the same analysis.

For those interested, see note 1 on accessing the data.

Results – Global Averages

For our starting point in this article I decided to look at global averages from 2001 to 2013 inclusive (data from CERES not yet available for the whole of 2014). This was after:

  • looking at daily AIRS data
  • creating and comparing NCAR over 8 days with AIRS 8-day averages for surface skin temperature and surface air temperature
  • creating and comparing AIRS over 8-days with CERES for TOA OLR

More on those points in later articles.

The global relationship between surface temperature and OLR is our primary interest – for the purpose of determining feedbacks. Then we want to figure out some detail about why it occurs. I am especially interested in the AIRS data because it is the only global measurement of upper tropospheric water vapor (UTWV) – and UTWV along with clouds are the key factors in the question of feedback – how OLR changes with surface temperature. For now, we will look at the simple relationship between surface temperature (“skin temperature”) and OLR.

Here is the data, shown as an anomaly from the global mean values over the period Jan 1st, 2001 to Dec 31st, 2013. Each graph represents a different lag – how does global OLR (CERES) change with global surface temperature (NCAR) on a lag of 1 day, 7 days, 14 days and so on:

OLR vs Ts - NCAR -CERES

Figure 1 – Click to Expand

The slope gives the “apparent feedback” and the R² simply reflects how much of the graph is explained by the linear trend. This last value is easily estimated just by looking at each graph.
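A minimal sketch of the regression behind each panel, assuming ordinary least squares on the paired anomalies (toy numbers stand in for the real series):

```python
# Ordinary least squares: slope = "apparent feedback" (W/m² per K),
# R² = fraction of variance explained by the linear fit.
def slope_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / sxx, sxy * sxy / (sxx * syy)

ts_anom  = [-0.3, -0.1, 0.0, 0.2, 0.4]   # made-up Ts anomalies, K
olr_anom = [-0.7, -0.1, 0.1, 0.3, 0.9]   # made-up OLR anomalies, W/m²

lag = 1
slope, r2 = slope_r2(ts_anom[:-lag], olr_anom[lag:])   # OLR lagged 1 day
print(round(slope, 2), round(r2, 2))
```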

For reference, here is the timeseries data, as anomalies, with the temperature anomaly multiplied by a factor of 3 so its magnitude is similar to the OLR anomaly:

OLR from CERES vs Ts from NCAR as timeseries

Figure 2 – Click to Expand

Note on the calculation – I used the daily data to calculate a global mean value (area-weighted) and calculated one mean value over the whole time period then subtracted it from every daily data value to obtain an anomaly for each day. Obviously we would get the same slope and R² without using anomaly data (just a different intercept on the axes).

For reference, mean OLR = 238.9 W/m², mean Ts = 288.0 K.
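The area weighting mentioned in the calculation note can be sketched like this – grid values weighted by cos(latitude), since equal-angle cells shrink towards the poles (a toy 3-latitude “grid” with made-up OLR values):

```python
import math

# Area-weighted global mean for an equal-angle latitude grid:
# each zonal value is weighted by cos(latitude).
def global_mean(lats_deg, zonal_means):
    weights = [math.cos(math.radians(lat)) for lat in lats_deg]
    return sum(w * v for w, v in zip(weights, zonal_means)) / sum(weights)

lats = [-60.0, 0.0, 60.0]
olr  = [220.0, 260.0, 220.0]   # made-up zonal-mean OLR, W/m²
print(global_mean(lats, olr))  # equator counts twice as much as 60°
```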

My first question – before even producing the graphs – was whether a lag graph shows the change in OLR due to a change in Ts or due to a mixture of many effects. That is, what is the interpretation of the graphs?

The second question – what is the “right lag” to use? We don’t expect an instant response when we are looking for feedbacks:

  • The OLR through the window region will of course respond instantly to surface temperature change
  • The OLR as a result of changing humidity will depend upon how long it takes for more evaporated surface water to move into the mid- to upper-troposphere
  • The OLR as a result of changing atmospheric temperature, in turn caused by changing surface temperature, will depend upon the mixture of convection and radiative cooling

To say we know the right answer in advance pre-supposes that we fully understand atmospheric dynamics. This is the question we are asking, so we can’t pre-suppose anything. But at least we can suggest that something in the realm of a few days to a few months is the most likely candidate for a reasonable lag.

But the idea that there is one constant feedback and one constant lag is an idea that might well be fatally flawed, despite being seductively simple. (A little more on that in note 3).

And that is one of the problems of this topic. Non-linear dynamics means non-linear results – a subject I find hard to describe in simple words. But let’s say – changes in OLR from changes in surface temperature might be “spread over” multiple time scales and be different at different times. (I have half-written an article trying to explain this idea in words, hopefully more on that sometime soon).

But for the purpose of this article I only wanted to present the simple results – for discussion and for more analysis to follow in subsequent articles.


References

Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System Experiment, Bull. Amer. Meteor. Soc., 77, 853-868   – free paper

Kalnay et al.,The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996  – free paper

NCEP Reanalysis data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/

Notes

Note 1: Boring Detail about Extracting Data

On the plus side, unlike many science journals, the data is freely available. Credit to the organizations that manage this data for their efforts in this regard, which includes visualization software and various ways of extracting data from their sites. However, you can still expect to spend a lot of time figuring out what files you want, where they are, downloading them, and then extracting the data from them. (Many traps for the unwary).

NCAR – data in .nc files, each parameter as a daily value (or 4x daily) in a separate annual .nc file on an (approx) 2.5º x 2.5º grid (actually T62 gaussian grid).

Data via ftp – ftp.cdc.noaa.gov. See http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html.

You get lat, long, and time in the file as well as the parameter. Care needed to navigate to the right folder because the filenames are the same for the 4x daily and the daily data.

NCAR are using latest version .nc files (which Matlab circa 2010 would not open, I had to update to the latest version – many hours wasted trying to work out the reason for failure).

CERES – data in .nc files, you select the data you want and the time period but it has to be a less than 2G file and you get a file to download. I downloaded daily OLR data for each annual period. Data in a 1ºx 1º grid. CERES are using older version .nc so there should be no problem opening.

Data from http://ceres-tool.larc.nasa.gov/ord-tool/srbavg

AIRS – data in .hdf files, in daily, 8-day average, or monthly average. The data is “ascending” = daytime, “descending” = nighttime plus some other products. Daily data doesn’t give global coverage (some gaps). 8-day average does but there are some missing values due to quality issues. Data in a 1ºx 1º grid. I used v6 data.

Data access page – http://disc.sci.gsfc.nasa.gov/datacollection/AIRX3STD_V006.html?AIRX3STD&#tabs-1.

Data via ftp.

HDF is not trivial to open up. The AIRS team have helpfully provided a Matlab tool to extract data which helped me. I think I still spent many hours figuring out how to extract what I needed.

Files Sizes – it’s a lot of data:

NCAR files that I downloaded (skin temperature) are only 12MB per annual file.

CERES files with only 2 parameters are 190MB per annual file.

AIRS files as 8-day averages (or daily data) are 400MB per file.

Also the grid for each is different. Lat from S-pole to N-pole in CERES, the reverse for AIRS and NCAR. Long from 0.5º to 359.5º in CERES but -179.5 to 179.5 in AIRS. (Note for any Matlab people, it won’t regrid, say using interp2, unless the grid runs from lowest number to highest number).
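A sketch of the kind of axis bookkeeping involved before regridding – converting a [-180, 180) longitude convention to [0, 360) and sorting ascending (toy arrays, and a hypothetical helper name):

```python
# Hypothetical helper: convert longitudes from [-180, 180) to [0, 360)
# and sort ascending, reordering the data values to match.
def to_0_360(lons, values):
    shifted = [(lon % 360.0, v) for lon, v in zip(lons, values)]
    shifted.sort()   # interpolation routines need ascending axes
    return [lon for lon, _ in shifted], [v for _, v in shifted]

lons_airs = [-179.5, -0.5, 0.5, 179.5]   # AIRS-style convention (toy subset)
vals      = [1, 2, 3, 4]
lons, vals_sorted = to_0_360(lons_airs, vals)
print(lons, vals_sorted)
```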

Note 2: Checking data – because I plan on using the daily 1ºx1º grid data from CERES and NCAR, I used it to create the daily global averages. As a check I downloaded the global monthly averages from CERES and compared. There is a discrepancy, which averages at 0.1 W/m².

Here is the difference by month:

CERES-Monthly-discrepancy-by-month

Figure 3 – Click to expand

And a scatter plot by month of year, showing some systematic bias:

CERES-Monthly-discrepance-scatter-plot

Figure 4

As yet, I haven’t dug any deeper to find if this is documented – for example, is there a correction applied to the daily data product in monthly means? is there an issue with the daily data? or, more likely, have I %&^ed up somewhere?

Note 3: Extract from Measuring Climate Sensitivity – Part One:

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005):

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.


In Latent heat and Parameterization I showed a formula for calculating latent heat transfer from the surface into the atmosphere, as well as the “real” formula. The parameterized version has horizontal wind speed x humidity difference (between the surface and some reference height in the atmosphere, typically 10m) x “a coefficient”.

One commenter asked:

Why do we expect that vertical transport of water vapor to vary linearly with horizontal wind speed? Is this standard turbulent mixing?

The simple answer is “almost yes”. But as someone famously said, make it simple, but not too simple.

Charting a course between too simple and too hard is a challenge with this subject. By contrast, radiative physics is a cakewalk. I’ll begin with some preamble and eventually get to the destination.

There’s a set of equations describing the motion of fluids – the Navier-Stokes equations – which conserve momentum in 3 directions (x,y,z) and also conserve mass. Then there are also equations to conserve humidity and heat. In principle these equations, with boundary conditions, completely determine the flow, but there is a bit of a problem in practice. The Navier-Stokes equations in a rotating frame can be seen in The Coriolis Effect and Geostrophic Motion under “Some Maths”.

Simple linear equations with simple boundary conditions can be re-arranged and you get a nice formula for the answer. Then you can plot this against that and everyone can see how the relationships change with different material properties or boundary conditions. In real life equations are not linear and the boundary conditions are not simple. So there is no “analytical solution”, where we want to know say the velocity of the fluid in the east-west direction as a function of time and get a nice equation for the answer. Instead we have to use numerical methods.

Let’s take a simple problem – if you want to know heat flow through an odd-shaped metal plate that is heated in one corner and cooled by steady air flow on the rest of its surface you can use these numerical methods and usually get a very accurate answer.

Turbulence is a lot more difficult due to the range of scales involved. Here’s a nice image of turbulence:

Figure 1

There is a cascade of energy from the largest scales down to the point where viscosity “eats up” the kinetic energy. In the atmosphere this is the sub 1mm scale. So if you want to accurately numerically model atmospheric motion across a 100km scale you need a grid size probably 100,000,000 x 100,000,000 x 10,000,000 and solving sub-second for a few days. Well, that’s a lot of calculation. I’m not sure where turbulence modeling via “direct numerical simulation” has got to but I’m pretty sure that is still too hard and in a decade it will still be a long way off. The computing power isn’t there.

Anyway, for atmospheric modeling you don’t really want to know the velocity in the x,y,z direction (usually annotated as u,v,w) at trillions of points every second. Who is going to dig through that data? What you want is a statistical description of the key features.

So if we take the Navier-Stokes equation and average, what do we get? We get a problem.

For the mathematically inclined the following is obvious, but of course many readers aren’t, so here’s a simple example:

Let’s take 3 numbers: 1, 10, 100:   the average = (1+10+100)/3 = 37.

Now let’s look at the square of those numbers: 1, 100, 10000:  the average of the square of those numbers = (1+100+10000)/3 = 3367.

But if we take the average of our original numbers and square it, we get 37² = 1369. It’s strange but the average squared is not the same as the average of the squared numbers. That’s non-linearity for you.
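The same arithmetic as a quick check:

```python
# The mean of the squares is not the square of the mean.
xs = [1, 10, 100]
mean = sum(xs) / len(xs)
mean_of_squares = sum(x * x for x in xs) / len(xs)
print(mean, mean_of_squares, mean**2)   # 37.0 3367.0 1369.0
```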

In the Navier Stokes equations we have values like east velocity x upwards velocity, written as uw. The average of uw, written as \overline{uw} is not equal to the average of u x the average of w, written as \overline{u}.\overline{w}. For the same reason we just looked at.

When we create the Reynolds averaged Navier-Stokes (RANS) equations we get lots of new terms like \overline{uw}. That is, we started with the original equations which gave us a complete solution – the same number of equations as unknowns. But when we average we end up with more unknowns than equations.

It’s like saying x + y = 1, what is x and y? No one can say. Perhaps 1 & 0. Perhaps 1000 & -999.

Digression on RANS for Slightly Interested People

The Reynolds approach is to take a value like u,v,w (velocity in 3 directions) and decompose into a mean and a “rapidly varying” turbulent component.

So u = \overline{u} + u', where \overline{u} = mean value;  u’ = the varying component. So \overline{u'} = 0. Likewise for the other directions.

And \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}
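This identity is easy to verify numerically – for any two sample series, the mean of uw equals the product of the means plus the mean of the primed products (made-up numbers):

```python
# Check: mean(u*w) = mean(u)*mean(w) + mean(u'*w'), where u' = u - mean(u).
def mean(xs):
    return sum(xs) / len(xs)

u = [2.0, 3.0, 1.0, 4.0]      # made-up velocity samples
w = [0.1, -0.2, 0.3, 0.0]

ubar, wbar = mean(u), mean(w)
up = [x - ubar for x in u]    # turbulent (primed) components
wp = [x - wbar for x in w]

lhs = mean([a * b for a, b in zip(u, w)])
rhs = ubar * wbar + mean([a * b for a, b in zip(up, wp)])
print(abs(lhs - rhs) < 1e-12)   # True: the decomposition is exact
```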

So in the original equation where we have a term like u . \frac{\partial u}{\partial x}, it turns into  (\overline{u} + u') . \frac{\partial (\overline{u} + u')}{\partial x}, which, when averaged, becomes:

\overline{u} . \frac{\partial \overline{u}}{\partial x} +\overline{u' . \frac{\partial u'}{\partial x}}

So 2 unknowns instead of 1. The first term is the averaged flow, the second term is the turbulent flow. (Well, it’s an advection term for the change in velocity following the flow)

When we look at the conservation of energy equation we end up with terms for the movement of heat upwards due to average flow (almost zero) and terms for the movement of heat upwards due to turbulent flow (often significant). That is, a term like \overline{\theta'w'} which is “the mean of potential temperature variations x upwards eddy velocity”.

Or, in plainer English, how heat gets moved up by turbulence.

..End of Digression

Closure and the Invention of New Ideas

“Closure” is a maths term. To “close the equations” when we have more unknowns than equations means we have to invent a new idea. Some geniuses like Reynolds, Prandtl and Kolmogoroff did come up with some smart new ideas.

Often the smart ideas are around “dimensionless terms” or “scaling terms”. The first time you encounter these ideas they seem odd or just plain crazy. But like everything, over time strange ideas start to seem normal.

The Reynolds number is probably the simplest to get used to. The Reynolds number seeks to relate fluid flows to other similar fluid flows. You can have fluid flow through a massive pipe that is identical in the way turbulence forms to that in a tiny pipe – so long as the viscosity and density change accordingly.

The Reynolds number, Re = density x length scale x mean velocity of the fluid / viscosity

And regardless of the actual physical size of the system and the actual velocity, turbulence forms for flow over a flat plate when the Reynolds number is about 500,000. By the way, for the atmosphere and ocean this is true most of the time.
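As a quick worked example of the definition (illustrative values for near-surface air; the viscosity figure is a standard textbook value for air):

```python
# Re = density x length scale x mean velocity / viscosity
def reynolds(density, length, velocity, viscosity):
    return density * length * velocity / viscosity

# Air: density ~1.2 kg/m³, dynamic viscosity ~1.8e-5 kg/(m·s);
# 1 m of fetch over a flat surface at a 10 m/s wind.
re = reynolds(1.2, 1.0, 10.0, 1.8e-5)
print(re > 500_000)   # already past the flat-plate transition to turbulence
```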

Kolmogoroff came up with an idea in 1941 about the turbulent energy cascade, using dimensional analysis to conclude that the energy of eddies increases with their size to the power 2/3 (in the “inertial subrange”). This is usually written vs frequency, where it becomes a -5/3 power. Here’s a relatively recent experimental verification of this power law.

From Durbin & Reif 2010

Figure 2

In a less genius-like manner, people measure stuff and use these measured values to “close the equations” for “similar” circumstances. Unfortunately, the measurements are only valid in a small range around the experiments and with turbulence it is hard to predict where the cutoff is.

A nice simple example, to which I hope to return because it is critical in modeling climate, is vertical eddy diffusivity in the ocean. By way of introduction to this, let’s look at heat transfer by conduction.

If only all heat transfer was as simple as conduction. That’s why it’s always first on the list in heat transfer courses..

If we have a plate of thickness d, and we hold one side at temperature T1 and the other side at temperature T2, the heat conduction per unit area is:

H_z = \frac{k(T_2-T_1)}{d}

where k is a material property called conductivity. We can measure this property and it’s always the same. It might vary with temperature but otherwise if you take a plate of the same material and have widely different temperature differences, widely different thicknesses – the heat conduction always follows the same equation.

Now using these ideas, we can take the actual equation for vertical heat flux via turbulence:

H_z =\rho c_p\overline{w'\theta'}

where w = vertical velocity, θ = potential temperature

And relate that to the heat conduction equation and come up with (aka ‘invent’):

H_z = \rho c_p K . \frac{\partial \theta}{\partial z}

Now we have an equation we can actually use because we can measure how potential temperature changes with depth. The equation has a new “constant”, K. But this one is not really a constant, it’s not really a material property – it’s a property of the turbulent fluid in question. Many people have measured the “implied eddy diffusivity” and come up with a range of values which tells us how heat gets transferred down into the depths of the ocean.
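Rearranging that equation gives the “implied eddy diffusivity” from a measured flux and gradient. A sketch with illustrative upper-ocean numbers (not measured values):

```python
# K = H_z / (rho * c_p * d(theta)/dz), rearranged from the flux equation.
rho = 1025.0        # seawater density, kg/m³
c_p = 3990.0        # specific heat of seawater, J/(kg·K)
dtheta_dz = 0.05    # potential temperature gradient, K/m (illustrative)
H_z = 20.0          # turbulent heat flux, W/m² (illustrative)

K = H_z / (rho * c_p * dtheta_dz)
print(K)            # ~1e-4 m²/s, a commonly quoted ballpark for the ocean interior
```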

Well, maybe it does. Maybe it doesn’t tell us very much that is useful. Let’s come back to that topic and that “constant” another day.

The Main Dish – Vertical Heat Transfer via Horizontal Wind

Back to the original question. If you imagine a sheet of paper as big as your desk then that pretty much gives you an idea of the height of the troposphere (lower atmosphere where convection is prominent).

It’s as thin as a sheet of desk-size paper in comparison to the dimensions of the earth. So any large scale motion is horizontal, not vertical. Mean vertical velocities – which don’t include turbulence via strong localized convection – are very low. Mean horizontal velocities can be of the order of 5-10 m/s near the surface of the earth. Mean vertical velocities are of the order of cm/s.

Let’s look at flow over the surface under “neutral conditions”. This means that there is little buoyancy production due to strong surface heating. In this case the energy for turbulence close to the surface comes from the kinetic energy of the mean wind flow – which is horizontal.

There is a surface drag which gets transmitted up through the boundary layer until there is “free flow” at some height. By using dimensional analysis, we can figure out what this velocity profile looks like in the absence of strong convection. It’s logarithmic:

Surface-wind-profile

Figure 3 – for typical ocean surface

Lots of measurements confirm this logarithmic profile.

We can then calculate the surface drag – or how momentum is transferred from the atmosphere to the ocean – using the simple formula derived and we come up with a simple expression:

\tau_0 = \rho C_D U_r^2

Where Ur is the velocity at some reference height (usually 10m), and CD is a constant calculated from the ratio of the reference height to the roughness height and the von Karman constant.
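A quick numerical sketch of that formula, with an illustrative roughness height for a calm ocean surface:

```python
import math

# C_D from the log-profile constants, then surface stress from the 10 m wind.
k   = 0.4      # von Karman constant
z_r = 10.0     # reference height, m
z_0 = 1e-4     # roughness height, m (illustrative, smooth ocean)
rho = 1.2      # air density, kg/m³
U_r = 8.0      # wind speed at reference height, m/s (illustrative)

C_D = (k / math.log(z_r / z_0)) ** 2
tau_0 = rho * C_D * U_r ** 2
print(round(C_D, 5), round(tau_0, 3))   # drag coefficient, stress in N/m²
```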

Using similar arguments we can come up with heat transfer from the surface. The principles are very similar. What we are actually modeling in the surface drag case is the turbulent vertical flux of horizontal momentum \rho \overline{u'w'} with a simple formula that just has mean horizontal velocity. We have “closed the equations” by some dimensional analysis.

Adding the Richardson number for non-neutral conditions we end up with a temperature difference along with a reference velocity to model the turbulent vertical flux of sensible heat \rho c_p . \overline{w'\theta'}. Similar arguments give the latent heat flux L\rho . \overline{w'q'} (where L is the latent heat of vaporization) in a simple form.
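The resulting bulk formulas look just like the drag formula, with a transfer coefficient multiplying the reference wind and a surface–air difference. A minimal sketch – the transfer coefficients and the temperature and humidity differences below are assumed, illustrative values (real coefficients vary with stability):

```python
# Sketch of the bulk flux formulas. C_H and C_E are assumed to be
# about the size of C_D (~1.2e-3); real values depend on stability.
rho = 1.2          # air density, kg/m^3
cp = 1004.0        # specific heat of air, J/(kg K)
Lv = 2.5e6         # latent heat of vaporization, J/kg
C_H = C_E = 1.2e-3 # assumed transfer coefficients for heat and moisture
U = 8.0            # reference wind at 10 m, m/s
dT = 1.5           # assumed surface minus air temperature, K
dq = 2.0e-3        # assumed surface minus air specific humidity, kg/kg

SH = rho * cp * C_H * U * dT   # sensible heat flux, W/m^2
LH = rho * Lv * C_E * U * dq   # latent heat flux, W/m^2
print(f"SH = {SH:.1f} W/m^2, LH = {LH:.1f} W/m^2")
```

Even with these rough numbers the latent heat flux comes out several times the sensible heat flux, as is typical over the ocean.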

Now with a bit more maths..

At the surface the horizontal velocity must be zero. The vertical flux of horizontal momentum creates a drag on the boundary layer wind. The vertical gradient of the mean wind, U, can only depend on height z, density ρ and the surface drag τ0.

So the “characteristic wind speed” for dimensional analysis is called the friction velocity, u*, and u* = \sqrt\frac{\tau_0}{\rho}

This strange number has the units of velocity: m/s  – ask if you want this explained.

So dimensional analysis suggests that \frac{z}{u*} . \frac{\partial U}{\partial z} should be a constant – “scaled wind shear”. The inverse of that constant is known as the von Karman constant, k = 0.4.

So a simple re-arrangement and integration gives:

U(z) = \frac{u*}{k} . ln(\frac{z}{z_0})

where z0 is a constant of integration, the roughness height – a physical property of the surface, the height at which the mean wind extrapolates to zero.
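Since the profile has two unknowns, u* and z0, winds measured at just two heights determine both. A minimal sketch, generating “measurements” from assumed illustrative values and then recovering the unknowns by inverting the log profile:

```python
import math

k = 0.4  # von Karman constant

def log_wind(z, u_star, z0):
    """Mean wind from the log profile: U(z) = (u*/k) ln(z/z0)."""
    return (u_star / k) * math.log(z / z0)

# Forward: generate winds at two heights from assumed u* and z0
u_star_true, z0_true = 0.28, 1e-4  # illustrative, not measured, values
U2 = log_wind(2.0, u_star_true, z0_true)
U10 = log_wind(10.0, u_star_true, z0_true)

# Inverse: recover u* from the shear between the two heights,
# then z0 from either measurement
u_star = k * (U10 - U2) / math.log(10.0 / 2.0)
z0 = 2.0 * math.exp(-k * U2 / u_star)
print(f"u* = {u_star:.3f} m/s, z0 = {z0:.1e} m")
```

This two-height method is essentially how u* and z0 are extracted from real tower or mast measurements under neutral conditions.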

The “real form” of the friction velocity is:

u*^2 = \frac{\tau_0}{\rho} = (\overline{u'w'}^2 + \overline{v'w'}^2)^\frac{1}{2},  where these eddy values are at the surface

we can pick a horizontal direction along the line of the mean wind (rotate coordinates) and come up with:

u*^2 = -\overline{u'w'}

If we consider a simple constant gradient argument:

\tau = - \rho . \overline{u'w'} = \rho K \frac{\partial \overline{u}}{\partial z}

where the first expression is the “real” equation and the second is the “invented” equation, or “our attempt to close the equation” from dimensional analysis.
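Equating the “real” and “invented” expressions also shows what K must be for the two to agree: with \frac{\partial \overline{u}}{\partial z} = \frac{u*}{kz} from the log profile, consistency requires K = k u* z – the “constant” grows linearly with height rather than being a material property. A minimal sketch with an illustrative friction velocity:

```python
# Sketch: the eddy viscosity implied by matching tau = rho K dU/dz
# with tau = rho u*^2. Substituting dU/dz = u*/(k z) forces K = k u* z,
# so K grows with height, unlike a molecular (material) viscosity.
k = 0.4
u_star = 0.28  # illustrative friction velocity, m/s
K_profile = {z: k * u_star * z for z in (1.0, 10.0, 100.0)}
for z, K in K_profile.items():
    print(f"z = {z:6.1f} m : K = {K:6.3f} m^2/s")
```

The values, of order 0.1–10 m²/s in the boundary layer, are many orders of magnitude larger than the molecular viscosity of air (~1.5×10⁻⁵ m²/s) – the whole point of turbulent transfer.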

Of course, this is showing how momentum is transferred, but the approach is pretty similar, just slightly more involved, for sensible and latent heat.

Conclusion

Turbulence is a hard problem. The atmosphere and ocean are turbulent, so calculating anything is difficult. Until a new paradigm in computing comes along, the real equations can't be solved numerically across the full range of scales – from the small scales where viscous dissipation damps out the kinetic energy of the turbulence, up to the scale of a synoptic event or the whole earth. However, numerical analysis has been used a lot to test out ideas that are hard to test in laboratory experiments, and it can give a lot of insight into parts of these problems.

In the meantime, experiments, dimensional analysis and intuition have provided a lot of very useful tools for modeling real climate problems.
