Archive for the ‘Atmospheric Physics’ Category

In Part Nine – Data I – Ts vs OLR we looked at the monthly surface temperature (“skin temperature”) from NCAR vs OLR measured by CERES. The slope of the data was about 2 W/m² per 1K surface temperature change. Commentators pointed out that this was really the seasonal relationship – it probably didn’t indicate anything further.

In Part Ten we looked at anomaly data: first where monthly means were removed; and then where daily means were removed. Mostly the data appeared to be a big noisy scatter plot with no slope. The main reason that I could see for this lack of relationship was that anomaly data didn’t “keep going” in one direction for more than a few days. So it’s perhaps unreasonable to expect that we would find any relationship, given that most circulation changes take time.

We haven’t yet looked at regional versions of Ts vs OLR; the main reason is that I can’t yet see what we could usefully plot. A large amount of heat is exported from the tropics to the poles, so without being able to itemize the amount of heat lost from a tropical region or the amount of heat gained by a mid-latitude or polar region, what could we deduce? One solution is to look at the whole globe in totality – which is what we have done.

In this article we’ll look at the mean global annual data. We only have CERES data for complete years from 2001 to 2013 (data wasn’t available to the end of 2014 when I downloaded it).

Here are the time-series plots for surface temperature and OLR:

Global annual Ts vs year & OLR vs year, 2001-2013

Figure 1

Here is the scatter plot of the above data, along with the best-fit linear interpolation:

Global annual Ts vs OLR 2001-2013

Figure 2

The calculated slope is similar to the results we obtained from the monthly data (which probably showed the seasonal relationship). This is definitely the year-to-year data, and it gives us a slope that indicates positive feedback. The correlation is not strong, as indicated by the R² value of 0.37, but it exists.

As explained in previous posts, a change of 3.6 W/m² per 1K is a “no feedback” relationship, where a uniform 1K change in surface & atmospheric temperature causes an OLR increase of 3.6 W/m² due to increased surface and atmospheric radiation – a greater increase in OLR would be negative feedback and a smaller increase would be positive feedback (e.g. see Part Eight with the plot of OLR changes vs latitude and height, which integrated globally gives 3.6 W/m²).

The “no feedback” calculation is perhaps a bit more complicated than this, and I want to dig into it at some stage.

I haven’t looked at whether the result is sensitive to where the year boundary is drawn. Next, I want to look at the changes in humidity, especially upper-tropospheric water vapor, which is a key area for radiative changes. This will be a bit of work, because AIRS data comes in big files (there is a lot of data).

Read Full Post »

[I was going to post this new article not long after the last article in the series, but felt I was missing something important and needed to think about it. Instead I’ve not had any new insights and am posting for comment.]

In Part Nine – Data I, we looked at the relationship between Ts (surface temperature) and OLR (outgoing longwave radiation), for reasons all explained there.

The relationship shown there appears to be primarily the seasonal relationship – it looks like a positive feedback because OLR increases by only about 2 W/m² per 1 K of surface temperature increase. What about the feedback on a timescale different from the seasonal one?

From the 2001-2013 data, here is the monthly mean and the daily mean for both Ts and OLR:

Monthly mean & daily mean for Ts & OLR

Figure 1

If we remove the monthly mean from the data, here are those same relationships (shown in the last article as anomalies from the overall 2001-2013 mean):

[Image: OLR vs Ts – NCAR & CERES – monthly means removed]

Figure 2 – Click to Expand

On a lag of 1 day there is a possible relationship with a low correlation – and the rest of the lags show no relationship at all.

Of course, we have created a problem with this new dataset – as the lag increases we are “jumping boundaries”. For example, on the 7-day lag all of the Ts data in the last week of April is being compared with the OLR data in the first week of May. With slowly rising temperatures, the last week of April will be “positive temperature data”, but the first week of May will be “negative OLR data”. So we expect 1/4 of our data to show the opposite relationship.

So we can show the data with the “monthly boundary jumps removed” – which means we can only show lags of say 1-14 days (with 3% – 50% of the data cut out); and we can also show the data as anomalies from the daily mean. Both have the potential to demonstrate the feedback on shorter timescales than the seasonal cycle.
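Here is a minimal sketch (in Python/pandas – the series names ts and olr are illustrative, holding the daily global anomalies) of one way to build lagged pairs that never cross a month boundary:

```python
import numpy as np
import pandas as pd

def lagged_pairs_within_month(ts: pd.Series, olr: pd.Series, lag_days: int) -> pd.DataFrame:
    """Pair Ts(t) with OLR(t + lag_days), keeping only pairs where both
    dates fall in the same calendar month (no "boundary jumps")."""
    olr_lagged = olr.shift(-lag_days)                    # OLR from t + lag, aligned at t
    target = ts.index + pd.Timedelta(days=lag_days)      # the date each OLR value came from
    same_month = ts.index.to_period("M") == target.to_period("M")
    return pd.DataFrame({"ts": ts, "olr": olr_lagged})[same_month].dropna()

pairs = lagged_pairs_within_month(ts, olr, lag_days=7)
slope = np.polyfit(pairs["ts"], pairs["olr"], 1)[0]      # W/m² per K
```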

First, here is the data with daily means removed:

[Image: OLR vs Ts – NCAR & CERES – daily means removed]

Figure 3 – Click to Expand

Second, here is the data with the monthly means removed as in figure 2, but this time ensuring that no monthly boundaries are crossed (so some of the data is removed to ensure this):

[Image: OLR vs Ts – NCAR & CERES – monthly means removed, no boundary crossing]

Figure 4 – Click to Expand

So basically this demonstrates no correlation between change in daily global OLR and change in daily global temperature on less than seasonal timescales. (Or “operator error” with the creation of my anomaly data). This is excluding (because we haven’t tested it here) the very short timescale of day to night change.

This was surprising at first sight.

That is, we see global Ts increasing on a given day but we can’t distinguish any corresponding change in global OLR from random changes, at least until we get to seasonal time periods? (See graph in last article).

Then a likely reason came into view. Remember that this is anomaly data (daily global temperature with the monthly mean subtracted). This bar graph demonstrates that when we are looking at anomaly data, most of the changes in global Ts are reversed the next day, or usually within a few days:

Days temperature goes in same direction

Figure 5

This means that we are unlikely to see changes in Ts causing noticeable changes in OLR unless the climate response we are looking for (humidity and cloud changes) occurs within a day or two.

That’s my preliminary thinking, looking at the data – i.e., we can’t expect to see much of a relationship, and we don’t see any relationship.

One further point – explained in much more detail in the (short) series Measuring Climate Sensitivity – is that of course changes in temperature are not caused by some mechanism that is independent of radiative forcing.

That is, our measurement problem is compounded: changes in temperature are first caused by fluctuations in the radiation balance and in ocean heat, and then we measure the “resulting” change in the radiation balance caused by this temperature change:

Radiation balance & ocean heat balance => Temperature change => Radiation balance & ocean heat balance

So we can’t easily distinguish the net radiation change caused by temperature changes from the radiative contribution to the original temperature changes.

I look forward to readers’ comments.

Read Full Post »

In the last article we looked at a paper which tried to unravel – for clear sky only – how the OLR (outgoing longwave radiation) changed with surface temperature. It did the comparison by region, by season and from year to year.

The key point for new readers to understand – why are we interested in how OLR changes with surface temperature? The concept is not so difficult. The practical analysis presents more problems.

Let’s review the concept – and for more background please read at least the start of the last article: if we increase the surface temperature, perhaps due to increases in GHGs, but it could be for any reason, what happens to outgoing longwave radiation? Obviously, we expect OLR to increase. The real question is by how much?

If there is no feedback then OLR should increase by about 3.6 W/m² for every 1K in surface temperature (these values are global averages):

  • If there is positive feedback, perhaps due to more humidity, then we expect OLR to increase by less than 3.6 W/m² – think “not enough heat got out to get things back to normal”
  • If there is negative feedback, then we expect OLR to increase by more than 3.6 W/m². In the paper we reviewed in the last article the authors found about 2 W/m² per 1K increase – a positive feedback, but were only considering clear sky areas

One reader asked about an outlier point on the regression slope and whether it affected the result. This motivated me to do something I have had on my list for a while now – get “all of the data” and analyse it. This way, we can review it and answer questions ourselves – like in the Visualizing Atmospheric Radiation series where we created an atmospheric radiation model (first principles physics) and used the detailed line by line absorption data from the HITRAN database to calculate how this change and that change affected the surface downward radiation (“back radiation”) and the top of atmosphere OLR.

With the raw surface temperature, OLR and humidity data “in hand” we can ask whatever questions we like and answer these questions ourselves..

NCAR reanalysis, CERES and AIRS

CERES and AIRS – satellite instruments – are explained in CERES, AIRS, Outgoing Longwave Radiation & El Nino.

CERES measures total OLR in a 1º x 1º grid on a daily basis.

AIRS has a “hyper-spectral” instrument, which means it looks at lots of frequency channels. The intensity of radiation at these many wavelengths can be converted, via calculation, into measurements of atmospheric temperature at different heights, water vapor concentration at different heights, CO2 concentration, and concentration of various other GHGs. Additionally, AIRS calculates total OLR (it doesn’t measure it – i.e. it doesn’t have a measurement device from 4μm – 100μm). It also measures parameters like “skin temperature” in some locations and calculates the same in other locations.

For the purposes of this article, I haven’t yet dug into the “how” and the reliability of surface AIRS measurements. The main point to note about satellites is they sit at the “top of atmosphere” and their ability to measure stuff near the surface depends on clever ideas and is often subverted by factors including clouds and surface emissivity. (AIRS has microwave instruments specifically to independently measure surface temperature even in cloudy conditions, because of this problem).

NCAR is a “reanalysis product”. It is not measurement, but it is “informed by measurement”. It is part measurement, part model. Where there is reliable data measurement over a good portion of the globe the reanalysis is usually pretty reliable – only being suspect at the times when new measurement systems come on line (so trends/comparisons over long time periods are problematic). Where there is little reliable measurement the reanalysis depends on the model (using other parameters to allow calculation of the missing parameters).

Some more explanation in Water Vapor Trends under the sub-heading Reanalysis – or Filling in the Blanks.

For surface temperature measurements reanalysis is not subverted by models too much. However, the mainstream surface temperature series are surely better than NCAR – I know that there is an army of “climate interested people” who follow this subject very closely. (I am not in that group).

I used NCAR because it is simple to download and extract. And I expect – but haven’t yet verified – that it will be quite close to the various mainstream surface temperature series. If someone is interested and can provide daily global temperature from another surface temperature series as an Excel, csv, .nc – or pretty much any data format – we can run the same analysis.

For those interested, see note 1 on accessing the data.

Results – Global Averages

For our starting point in this article I decided to look at global averages from 2001 to 2013 inclusive (data from CERES not yet available for the whole of 2014). This was after:

  • looking at daily AIRS data
  • creating and comparing NCAR over 8 days with AIRS 8-day averages for surface skin temperature and surface air temperature
  • creating and comparing AIRS over 8-days with CERES for TOA OLR

More on those points in later articles.

The global relationship between surface temperature and OLR is our primary interest – for the purpose of determining feedbacks. Then we want to figure out some detail about why it occurs. I am especially interested in the AIRS data because it is the only global measurement of upper-tropospheric water vapor (UTWV) – and UTWV, along with clouds, is the key factor in the question of feedback – how OLR changes with surface temperature. For now, we will look at the simple relationship between surface temperature (“skin temperature”) and OLR.

Here is the data, shown as an anomaly from the global mean values over the period Jan 1st, 2001 to Dec 31st, 2013. Each graph represents a different lag – how does global OLR (CERES) change with global surface temperature (NCAR) on a lag of 1 day, 7 days, 14 days and so on:

[Image: OLR (CERES) vs Ts (NCAR) anomalies at various lags]

Figure 1 – Click to Expand

The slope gives the “apparent feedback” and the R² simply reflects how much of the graph is explained by the linear trend. This last value is easily estimated just by looking at each graph.

For reference, here is the timeseries data, as anomalies, with the temperature anomaly multiplied by a factor of 3 so its magnitude is similar to the OLR anomaly:

OLR from CERES vs Ts from NCAR as timeseries

Figure 2 – Click to Expand

Note on the calculation – I used the daily data to calculate a global mean value (area-weighted) and calculated one mean value over the whole time period then subtracted it from every daily data value to obtain an anomaly for each day. Obviously we would get the same slope and R² without using anomaly data (just a different intercept on the axes).

For reference, mean OLR = 238.9 W/m², mean Ts = 288.0 K.
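For anyone who wants to reproduce the calculation, here is a minimal sketch in Python/numpy (I worked in Matlab). The array names (ts_grid, olr_grid, lats) are illustrative, and note that NCAR is actually on a Gaussian grid, so simple cosine weighting is only approximate there:

```python
import numpy as np

def global_mean(field, lats):
    """Area-weighted global mean of a (days, nlat, nlon) array on a regular grid."""
    w = np.cos(np.deg2rad(lats))
    w = w / w.sum()
    return (field.mean(axis=2) * w).sum(axis=1)      # zonal mean, then weighted sum

ts_anom = global_mean(ts_grid, lats)                 # daily global-mean Ts...
ts_anom -= ts_anom.mean()                            # ...minus one mean for the whole period
olr_anom = global_mean(olr_grid, lats)
olr_anom -= olr_anom.mean()

lag = 7                                              # OLR lagged 7 days behind Ts
x, y = ts_anom[:-lag], olr_anom[lag:]
slope = np.polyfit(x, y, 1)[0]                       # the "apparent feedback", W/m² per K
r_squared = np.corrcoef(x, y)[0, 1] ** 2
```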

My first question – before even producing the graphs – was whether a lag graph shows the change in OLR due to a change in Ts or due to a mixture of many effects. That is, what is the interpretation of the graphs?

The second question – what is the “right lag” to use? We don’t expect an instant response when we are looking for feedbacks:

  • The OLR through the window region will of course respond instantly to surface temperature change
  • The OLR as a result of changing humidity will depend upon how long it takes for more evaporated surface water to move into the mid- to upper-troposphere
  • The OLR as a result of changing atmospheric temperature, in turn caused by changing surface temperature, will depend upon the mixture of convection and radiative cooling

To say we know the right answer in advance pre-supposes that we fully understand atmospheric dynamics. This is the question we are asking, so we can’t pre-suppose anything. But at least we can suggest that something in the realm of a few days to a few months is the most likely candidate for a reasonable lag.

But the idea that there is one constant feedback and one constant lag is an idea that might well be fatally flawed, despite being seductively simple. (A little more on that in note 3).

And that is one of the problems of this topic. Non-linear dynamics means non-linear results – a subject I find hard to describe in simple words. But let’s say – changes in OLR from changes in surface temperature might be “spread over” multiple time scales and be different at different times. (I have half-written an article trying to explain this idea in words, hopefully more on that sometime soon).

But for the purpose of this article I only wanted to present the simple results – for discussion and for more analysis to follow in subsequent articles.

References

Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System Experiment, Bull. Amer. Meteor. Soc., 77, 853-868   – free paper

Kalnay et al., The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996 – free paper

NCEP Reanalysis data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/

Notes

Note 1: Boring Detail about Extracting Data

On the plus side, unlike many science journals, the data is freely available. Credit to the organizations that manage this data for their efforts in this regard, which includes visualization software and various ways of extracting data from their sites. However, you can still expect to spend a lot of time figuring out what files you want, where they are, downloading them, and then extracting the data from them. (Many traps for the unwary).

NCAR – data in .nc files, each parameter as a daily value (or 4x daily) in a separate annual .nc file on an (approx) 2.5º x 2.5º grid (actually a T62 Gaussian grid).

Data via ftp – ftp.cdc.noaa.gov. See http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html.

You get lat, long, and time in the file as well as the parameter. Care needed to navigate to the right folder because the filenames are the same for the 4x daily and the daily data.

NCAR uses the latest version of .nc files (which Matlab circa 2010 would not open – I had to update to the latest Matlab version; many hours wasted trying to work out the reason for failure).

CERES – data in .nc files; you select the data you want and the time period, but the file has to be less than 2 GB. I downloaded daily OLR data for each annual period. Data on a 1º x 1º grid. CERES uses an older .nc version, so there should be no problem opening the files.

Data from http://ceres-tool.larc.nasa.gov/ord-tool/srbavg

AIRS – data in .hdf files, as daily, 8-day average, or monthly average. The data comes as “ascending” = daytime, “descending” = nighttime, plus some other products. Daily data doesn’t give global coverage (some gaps); the 8-day average does, but there are some missing values due to quality issues. Data on a 1º x 1º grid. I used v6 data.

Data access page – http://disc.sci.gsfc.nasa.gov/datacollection/AIRX3STD_V006.html?AIRX3STD&#tabs-1.

Data via ftp.

HDF is not trivial to open up. The AIRS team have helpfully provided a Matlab tool to extract data which helped me. I think I still spent many hours figuring out how to extract what I needed.

Files Sizes – it’s a lot of data:

NCAR files that I downloaded (skin temperature) are only 12MB per annual file.

CERES files with only 2 parameters are 190MB per annual file.

AIRS files as 8-day averages (or daily data) are 400MB per file.

Also the grid for each is different. Latitude runs from the S-pole to the N-pole in CERES, and the reverse for AIRS and NCAR. Longitude runs from 0.5º to 359.5º in CERES but from -179.5º to 179.5º in AIRS. (Note for any Matlab people: it won’t regrid, say using interp2, unless the grid runs from lowest number to highest number).
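For illustration, here is a sketch (numpy, with illustrative array names) of putting both coordinates into ascending order before regridding:

```python
import numpy as np

# Assuming data has shape (time, lat, lon) with coordinate vectors lats, lons.
if lats[0] > lats[-1]:                         # e.g. N-pole first (AIRS, NCAR)
    lats = lats[::-1]
    data = data[:, ::-1, :]                    # flip the latitude axis to match

lons = np.where(lons > 180, lons - 360, lons)  # 0.5..359.5 -> -179.5..179.5
order = np.argsort(lons)                       # make longitude ascending too
lons, data = lons[order], data[:, :, order]
```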

Note 2: Checking data – because I plan on using the daily 1º x 1º grid data from CERES and NCAR, I used it to create the daily global averages. As a check I downloaded the global monthly averages from CERES and compared. There is a discrepancy, which averages 0.1 W/m².

Here is the difference by month:

[Image: CERES monthly discrepancy by month]

Figure 3 – Click to expand

And a scatter plot by month of year, showing some systematic bias:

[Image: CERES monthly discrepancy – scatter plot by month of year]

Figure 4

As yet, I haven’t dug any deeper to find if this is documented – for example, is there a correction applied to the daily data product in monthly means? is there an issue with the daily data? or, more likely, have I %&^ed up somewhere?

Note 3: Extract from Measuring Climate Sensitivity – Part One:

Linear Feedback Relationship?

One of the biggest problems with the idea of climate sensitivity, λ, is the idea that it exists as a constant value.

From Cloud Feedbacks in the Climate System: A Critical Review, Stephens, Journal of Climate (2005):

The relationship between global-mean radiative forcing and global-mean climate response (temperature) is of intrinsic interest in its own right. A number of recent studies, for example, discuss some of the broad limitations of (1) and describe procedures for using it to estimate Q from GCM experiments (Hansen et al. 1997; Joshi et al. 2003; Gregory et al. 2004) and even procedures for estimating from observations (Gregory et al. 2002).

While we cannot necessarily dismiss the value of (1) and related interpretation out of hand, the global response, as will become apparent in section 9, is the accumulated result of complex regional responses that appear to be controlled by more local-scale processes that vary in space and time.

If we are to assume gross time–space averages to represent the effects of these processes, then the assumptions inherent to (1) certainly require a much more careful level of justification than has been given. At this time it is unclear as to the specific value of a global-mean sensitivity as a measure of feedback other than providing a compact and convenient measure of model-to-model differences to a fixed climate forcing (e.g., Fig. 1).

[Emphasis added and where the reference to “(1)” is to the linear relationship between global temperature and global radiation].

If, for example, λ is actually a function of location, season & phase of ENSO.. then clearly measuring overall climate response is a more difficult challenge.

Read Full Post »

In Latent heat and Parameterization I showed a formula for calculating latent heat transfer from the surface into the atmosphere, as well as the “real” formula. The parameterized version has horizontal wind speed x humidity difference (between the surface and some reference height in the atmosphere, typically 10m) x “a coefficient”.
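As a concrete illustration, here is that parameterized formula as a few lines of Python – the transfer coefficient and the inputs are illustrative values, not measurements:

```python
RHO_AIR = 1.2     # near-surface air density, kg/m³
L_V = 2.5e6       # latent heat of vaporization, J/kg
C_E = 1.3e-3      # bulk transfer coefficient for moisture (dimensionless)

def latent_heat_flux(wind_10m, q_surface, q_10m):
    """Latent heat flux, W/m²: coefficient x wind speed x humidity difference."""
    return RHO_AIR * L_V * C_E * wind_10m * (q_surface - q_10m)

# e.g. 7 m/s wind, 20 g/kg saturation humidity at the sea surface, 15 g/kg at 10 m:
print(latent_heat_flux(7.0, 0.020, 0.015))    # ~136 W/m²
```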

One commenter asked:

Why do we expect that vertical transport of water vapor to vary linearly with horizontal wind speed? Is this standard turbulent mixing?

The simple answer is “almost yes”. But as someone famously said, make it simple, but not too simple.

Charting a course between too simple and too hard is a challenge with this subject. By contrast, radiative physics is a cakewalk. I’ll begin with some preamble and eventually get to the destination.

There’s a set of equations describing the motion of fluids – the Navier-Stokes equations – which conserve momentum in three directions (x,y,z), together with an equation conserving mass. Then there are also equations to conserve humidity and heat. The equations describe the flow exactly, but there is a bit of a problem in practice. The Navier-Stokes equations in a rotating frame can be seen in The Coriolis Effect and Geostrophic Motion under “Some Maths”.

Simple linear equations with simple boundary conditions can be re-arranged and you get a nice formula for the answer. Then you can plot this against that and everyone can see how the relationships change with different material properties or boundary conditions. In real life equations are not linear and the boundary conditions are not simple. So there is no “analytical solution”, where we want to know say the velocity of the fluid in the east-west direction as a function of time and get a nice equation for the answer. Instead we have to use numerical methods.

Let’s take a simple problem – if you want to know heat flow through an odd-shaped metal plate that is heated in one corner and cooled by steady air flow on the rest of its surface you can use these numerical methods and usually get a very accurate answer.

Turbulence is a lot more difficult due to the range of scales involved. Here’s a nice image of turbulence:

Figure 1

There is a cascade of energy from the largest scales down to the point where viscosity “eats up” the kinetic energy. In the atmosphere this is the sub 1mm scale. So if you want to accurately numerically model atmospheric motion across a 100km scale you need a grid size probably 100,000,000 x 100,000,000 x 10,000,000 and solving sub-second for a few days. Well, that’s a lot of calculation. I’m not sure where turbulence modeling via “direct numerical simulation” has got to but I’m pretty sure that is still too hard and in a decade it will still be a long way off. The computing power isn’t there.

Anyway, for atmospheric modeling you don’t really want to know the velocity in the x,y,z direction (usually annotated as u,v,w) at trillions of points every second. Who is going to dig through that data? What you want is a statistical description of the key features.

So if we take the Navier-Stokes equation and average, what do we get? We get a problem.

For the mathematically inclined the following is obvious, but of course many readers aren’t, so here’s a simple example:

Let’s take 3 numbers: 1, 10, 100:   the average = (1+10+100)/3 = 37.

Now let’s look at the square of those numbers: 1, 100, 10000:  the average of the square of those numbers = (1+100+10000)/3 = 3367.

But if we take the average of our original numbers and square it, we get 37² = 1369. It’s strange but the average squared is not the same as the average of the squared numbers. That’s non-linearity for you.

In the Navier-Stokes equations we have values like east velocity x upwards velocity, written as uw. The average of uw, written as \overline{uw}, is not equal to the average of u x the average of w, written as \overline{u}.\overline{w} – for the same reason we just looked at.

When we create the Reynolds-averaged Navier-Stokes (RANS) equations we get lots of new terms like \overline{uw}. That is, we started with the original equations which gave us a complete solution – the same number of equations as unknowns. But when we average we end up with more unknowns than equations.

It’s like saying x + y = 1, what is x and y? No one can say. Perhaps 1 & 0. Perhaps 1000 & -999.

Digression on RANS for Slightly Interested People

The Reynolds approach is to take a value like u,v,w (velocity in 3 directions) and decompose into a mean and a “rapidly varying” turbulent component.

So u = \overline{u} + u', where \overline{u} = mean value;  u’ = the varying component. So \overline{u'} = 0. Likewise for the other directions.

And \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}

So in the original equation where we have a term like u . \frac{\partial u}{\partial x}, it turns into  (\overline{u} + u') . \frac{\partial (\overline{u} + u')}{\partial x}, which, when averaged, becomes:

\overline{u} . \frac{\partial \overline{u}}{\partial x} +\overline{u' . \frac{\partial u'}{\partial x}}

So 2 unknowns instead of 1. The first term is the averaged flow, the second term is the turbulent flow. (Well, it’s an advection term for the change in velocity following the flow)

When we look at the conservation of energy equation we end up with terms for the movement of heat upwards due to average flow (almost zero) and terms for the movement of heat upwards due to turbulent flow (often significant). That is, a term like \overline{\theta'w'} which is “the mean of potential temperature variations x upwards eddy velocity”.

Or, in plainer English, how heat gets moved up by turbulence.
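A quick numerical check of the decomposition \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}, with synthetic data (the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
u = 5.0 + rng.normal(0, 1.0, 100_000)                           # mean flow plus fluctuations
w = 0.01 + 0.3 * (u - u.mean()) + rng.normal(0, 0.5, 100_000)   # correlated with u

lhs = np.mean(u * w)                                  # the average of the product
rhs = u.mean() * w.mean() + np.mean((u - u.mean()) * (w - w.mean()))
print(np.isclose(lhs, rhs))                           # True: the turbulent term u'w' matters
```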

..End of Digression

Closure and the Invention of New Ideas

“Closure” is a maths term. To “close the equations” when we have more unknowns than equations means we have to invent a new idea. Some geniuses like Reynolds, Prandtl and Kolmogoroff did come up with some smart new ideas.

Often the smart ideas are around “dimensionless terms” or “scaling terms”. The first time you encounter these ideas they seem odd or just plain crazy. But like everything, over time strange ideas start to seem normal.

The Reynolds number is probably the simplest to get used to. The Reynolds number seeks to relate fluid flows to other similar fluid flows. You can have fluid flow through a massive pipe that is identical in the way turbulence forms to that in a tiny pipe – so long as the viscosity and density change accordingly.

The Reynolds number, Re = \frac{\rho UL}{\mu} – density x length scale x mean velocity of the fluid / viscosity

And regardless of the actual physical size of the system and the actual velocity, turbulence forms for flow over a flat plate when the Reynolds number is about 500,000. By the way, for the atmosphere and ocean this is true most of the time.
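As a rough check of that claim, plug in some typical near-surface values (illustrative numbers):

```python
rho, U, L, mu = 1.2, 5.0, 100.0, 1.8e-5   # kg/m³, m/s, m, Pa·s
Re = rho * U * L / mu
print(f"{Re:.1e}")                        # ~3.3e7, far beyond the ~5e5 transition value
```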

Kolmogoroff came up with an idea in 1941 about the turbulent energy cascade using dimensional analysis and came to the conclusion that the energy of eddies increases with their size to the power 2/3 (in the “inertial subrange”). This is usually written vs frequency where it becomes a -5/3 power. Here’s a relatively recent experimental verification of this power law.

From Durbin & Reif 2010

Figure 2

In a less genius-like manner, people measure stuff and use these measured values to “close the equations” for “similar” circumstances. Unfortunately, the measurements are only valid in a small range around the experimental conditions, and with turbulence it is hard to predict where the cutoff is.

A nice simple example, to which I hope to return because it is critical in modeling climate, is vertical eddy diffusivity in the ocean. By way of introduction to this, let’s look at heat transfer by conduction.

If only all heat transfer was as simple as conduction. That’s why it’s always first on the list in heat transfer courses..

If we have a plate of thickness d, and we hold one side at temperature T1 and the other side at temperature T2, the heat conduction per unit area is:

H_z = \frac{k(T_2-T_1)}{d}

where k is a material property called conductivity. We can measure this property and it’s always the same. It might vary with temperature but otherwise if you take a plate of the same material and have widely different temperature differences, widely different thicknesses – the heat conduction always follows the same equation.

Now using these ideas, we can take the actual equation for vertical heat flux via turbulence:

H_z =\rho c_p\overline{w'\theta'}

where w = vertical velocity, θ = potential temperature

And relate that to the heat conduction equation and come up with (aka ‘invent’):

H_z = \rho c_p K . \frac{\partial \theta}{\partial z}

Now we have an equation we can actually use because we can measure how potential temperature changes with depth. The equation has a new “constant”, K. But this one is not really a constant, it’s not really a material property – it’s a property of the turbulent fluid in question. Many people have measured the “implied eddy diffusivity” and come up with a range of values which tells us how heat gets transferred down into the depths of the ocean.

Well, maybe it does. Maybe it doesn’t tell us very much that is useful. Let’s come back to that topic and that “constant” another day.

The Main Dish – Vertical Heat Transfer via Horizontal Wind

Back to the original question. If you imagine a sheet of paper as big as your desk then that pretty much gives you an idea of the height of the troposphere (lower atmosphere where convection is prominent).

It’s as thin as a sheet of desk-sized paper in comparison to the dimensions of the earth. So any large-scale motion is horizontal, not vertical. Mean vertical velocities – which don’t include turbulence via strong localized convection – are very low. Mean horizontal velocities can be of the order of 5-10 m/s near the surface of the earth. Mean vertical velocities are of the order of cm/s.

Let’s look at flow over the surface under “neutral conditions”. This means that there is little buoyancy production due to strong surface heating. In this case the energy for turbulence close to the surface comes from the kinetic energy of the mean wind flow – which is horizontal.

There is a surface drag which gets transmitted up through the boundary layer until there is “free flow” at some height. By using dimensional analysis, we can figure out what this velocity profile looks like in the absence of strong convection. It’s logarithmic:

[Image: surface wind speed vs height – logarithmic profile]

Figure 3 – for typical ocean surface

Lots of measurements confirm this logarithmic profile.

We can then calculate the surface drag – or how momentum is transferred from the atmosphere to the ocean – using the profile just derived, and we come up with a simple expression:

\tau_0 = \rho C_D U_r^2

Where Ur is the velocity at some reference height (usually 10m), and CD is a constant calculated from the ratio of the reference height to the roughness height and the von Karman constant.

Using similar arguments we can come up with heat transfer from the surface. The principles are very similar. What we are actually modeling in the surface drag case is the turbulent vertical flux of horizontal momentum \rho \overline{u'w'} with a simple formula that just has mean horizontal velocity. We have “closed the equations” by some dimensional analysis.

Adding the Richardson number for non-neutral conditions we end up with a temperature difference along with a reference velocity to model the turbulent vertical flux of sensible heat \rho c_p . \overline{w'\theta'}. Similar arguments give latent heat flux L\rho . \overline{w'q'} in a simple form.

Now with a bit more maths..

At the surface the horizontal velocity must be zero. The vertical flux of horizontal momentum creates a drag on the boundary layer wind. The vertical gradient of the mean wind, U, can only depend on height z, density ρ and surface drag.

So the “characteristic wind speed” for dimensional analysis is called the friction velocity, u*, and u* = \sqrt{\frac{\tau_0}{\rho}}

This strange number has the units of velocity: m/s  – ask if you want this explained.

So dimensional analysis suggests that \frac{z}{u*} . \frac{\partial U}{\partial z} should be a constant – “scaled wind shear”. The inverse of that constant is known as the von Karman constant, k = 0.4.

So a simple re-arrangement and integration gives:

U(z) = \frac{u*}{k} . \ln(\frac{z}{z_0})

where z0 is a constant from the integration, which is roughness height – a physical property of the surface where the mean wind reaches zero.
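As a worked example (a sketch – the roughness height is a typical open-ocean value and the measured wind is invented): given the wind at a reference height we can back out u* and reconstruct the whole profile:

```python
import numpy as np

K_VON_KARMAN = 0.4
z0 = 2e-4                  # roughness height, m (illustrative calm-ocean value)
U_ref, z_ref = 7.0, 10.0   # say 7 m/s measured at 10 m

u_star = K_VON_KARMAN * U_ref / np.log(z_ref / z0)   # invert U(z) = (u*/k) ln(z/z0)

def wind_profile(z):
    """Mean wind speed at height z under neutral conditions."""
    return (u_star / K_VON_KARMAN) * np.log(z / z0)

print(u_star)              # ~0.26 m/s
print(wind_profile(2.0))   # ~6.0 m/s at 2 m height
```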

The “real form” of the friction velocity is:

u*^2 = \frac{\tau_0}{\rho} = (\overline{u'w'}^2 + \overline{v'w'}^2)^\frac{1}{2},  where these eddy values are at the surface

we can pick a horizontal direction along the line of the mean wind (rotate coordinates) and come up with:

u*^2 = -\overline{u'w'}

If we consider a simple constant gradient argument:

\tau = - \rho . \overline{u'w'} = \rho K \frac{\partial \overline{u}}{\partial z}

where the first expression is the “real” equation and the second is the “invented” equation, or “our attempt to close the equation” from dimensional analysis.

Of course, this is showing how momentum is transferred, but the approach is pretty similar, just slightly more involved, for sensible and latent heat.

Conclusion

Turbulence is a hard problem. The atmosphere and ocean are turbulent, so calculating anything is difficult. Until a new paradigm in computing comes along, the real equations can’t be numerically solved from the small scales where viscous dissipation damps out the kinetic energy of the turbulence up to the large scale of the whole earth, or even of a synoptic-scale event. However, numerical analysis has been used a lot to test out ideas that are hard to test in laboratory experiments, and it can give a lot of insight into parts of the problem.

In the meantime, experiments, dimensional analysis and intuition have provided a lot of very useful tools for modeling real climate problems.

Read Full Post »

The atmosphere cools to space by radiation. Well, without getting into all the details, the surface cools to space as well by radiation but not much radiation is emitted by the surface that escapes directly to space (note 1). Most surface radiation is absorbed by the atmosphere. And of course the surface mostly cools by convection into the troposphere (lower atmosphere).

If there were no radiatively-active gases (aka “GHG”s) in the atmosphere then the atmosphere couldn’t cool to space at all.

Technically, the emissivity of the atmosphere would be zero. Emission is determined by the local temperature of the atmosphere and its emissivity. Wavelength by wavelength emissivity is equal to absorptivity, another technical term, which says what proportion of radiation is absorbed by the atmosphere. If the atmosphere can’t emit, it can’t absorb (note 2).

So as you increase the GHGs in the atmosphere you increase its ability to cool to space. A lot of people realize this at some point during their climate science journey and finally realize how they have been duped by climate science all along! It’s irrefutable – more GHGs more cooling to space, more GHGs mean less global warming!

Ok, it’s true. Now the game’s up, I’ll pack up Science of Doom into a crate and start writing about something else. Maybe cognitive dissonance..

Bye everyone!


Halfway through boxing everything up I realized there was a little complication to the simplicity of that paragraph. The atmosphere with more GHGs has a higher emissivity, but also a higher absorptivity.

Let’s draw a little diagram. Here are two “layers” (see note 3) of the atmosphere in two different cases. On the left 400 ppmv CO2, on the right 500 ppmv CO2 (with relative humidity of water vapor set at 50%, and surface temperature at 288 K):

[Image: cooling to space – two atmospheric layers, 400 vs 500 ppmv CO2]

Figure 1

It’s clear that the two layers are both emitting more radiation with more CO2. More cooling to space.

For interest, the “total emissivity” of the top layer is 0.190 in the first case and 0.197 in the second case. The layer below has 0.389 and 0.395.

Let’s take a look at all of the numbers and see what is going on. This diagram is a little busier:

[Image: cooling to space – the same two layers with all radiation fluxes shown]

Figure 2

The key point is that the OLR (outgoing longwave radiation) is lower in the case with more CO2. Yet each layer is emitting more radiation. How can this be?

Take a look at the radiation entering the top layer on the left = 265.1, and add to that the emitted radiation = 23.0 – the total is 288.1. Now subtract the radiation leaving through the top boundary = 257.0 and we get the radiation absorbed in the layer. This is 31.1 W/m².

Compare that with the same calculation with more CO2 – the absorption is 32.2 W/m².
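The layer budget is simple arithmetic – here it is as a function, using the numbers quoted above for the 400 ppmv case:

```python
def absorbed_in_layer(radiation_in, emitted, radiation_out):
    """Absorbed = what enters + what the layer itself emits - what leaves."""
    return radiation_in + emitted - radiation_out

print(absorbed_in_layer(265.1, 23.0, 257.0))   # 31.1 W/m² (vs 32.2 with more CO2)
```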

This is the case all the way up through the atmosphere – each layer emits more because its emissivity has increased, but it also absorbs more because its absorptivity has increased by the same amount.

So more cooling to space, but unfortunately more absorption of the radiation below – two competing terms.

So why don’t they cancel out?

Emission of radiation is a result of local temperature and emissivity.

Absorption of radiation is the result of the incident radiation and absorptivity. Incident upwards radiation started lower in the atmosphere where it is hotter. So absorption changes always outweigh emission changes (note 4).

Conceptual Problems?

If it’s still not making sense then think about what happens as you reduce the GHGs in the atmosphere. The atmosphere emits less but absorbs even less of the radiation from below. So the outgoing longwave radiation increases. More surface radiation is making it to the top of atmosphere without being absorbed. So there is less cooling to space from the atmosphere, but more cooling to space from the surface – and more from the surface and atmosphere combined.

If you add lagging to a pipe, the temperature of the pipe increases (assuming of course it is “internally” heated with hot water). And yet, the pipe cools to the surrounding room via the lagging! Does that mean more lagging, more cooling? No, it’s just the transfer mechanism for getting the heat out.

That was just an analogy. Analogies don’t prove anything. If well chosen, they can be useful in illustrating problems. End of analogy disclaimer.

If you want to understand more about how radiation travels through the atmosphere and how GHG changes affect this journey, take a look at the series Visualizing Atmospheric Radiation.

 

Notes

Note 1: For more on the details see

Note 2: A very basic point – absolutely essential for understanding anything at all about climate science – is that the absorptivity of the atmosphere can be (and is) totally different from its emissivity when you are considering different wavelengths. The atmosphere is quite transparent to solar radiation, but quite opaque to terrestrial radiation – because they are at different wavelengths. 99% of solar radiation is at wavelengths less than 4 μm, and 99% of terrestrial radiation is at wavelengths greater than 4 μm. That’s because the sun’s surface is around 6000 K while the earth’s surface is around 290 K. So the atmosphere has low absorptivity for solar radiation (<4 μm) but high absorptivity – and therefore high emissivity – for terrestrial radiation (>4 μm).

Note 3: Any numerical calculation has to create some kind of grid. This is a very coarse grid, with 10 layers of roughly equal pressure in the atmosphere from the surface to 200 mbar. The grid assumes there is just one temperature for each layer. Of course the temperature decreases as you go up. We could divide the atmosphere into 30 layers instead. We would get more accurate results. We would find the same effect.

Note 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this effect.

Read Full Post »

In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity induced buoyancy forces tend to rise toward the top the hotter, and thus lighter fluid, that is at the bottom. This tendency is contrasted by viscous and dissipative forces of the fluid so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz (1994) – an excellent introduction to dynamical and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63, for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the first and last 50 seconds – click to expand:

[Image: Lorenz63 – x & y vs time over 5,000 seconds]

Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the conditions diverge, and 5,000 seconds later the system still exhibits similar “non-periodic” characteristics.

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:

[Image: Lorenz63 – x vs time, zoomed near the start and end]

Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:

[Image: Lorenz63 – running averages of x & y over 5,000 seconds]

Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:

[Image: Lorenz63 – running average of x over 25,000 seconds, 1,000-second window]

Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
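Here is a sketch of that sampling recipe (Python; x_run1 is an illustrative name for one of the simulated series, at the 0.01 s time step):

```python
import numpy as np

def sample_means(x, window_s, n_samples=10_000, dt=0.01, seed=0):
    """Means of randomly placed windows of length window_s seconds."""
    rng = np.random.default_rng(seed)
    w = int(window_s / dt)                           # window length in steps
    starts = rng.integers(0, len(x) - w, n_samples)  # random start points
    return np.array([x[s:s + w].mean() for s in starts])

for window in (25, 100, 500, 3000):
    means = sample_means(x_run1, window)             # then histogram, e.g. np.histogram(means, bins=50)
```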

Here is the result:

[Image: histograms of sample means for the two initial conditions]

Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:

[Image: difference between the two sets of histograms]

Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:

[Image: Lorenz63 – phase space plot of x, y, z for the first 50 seconds]

Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up on the same “phase space”. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:

[Image: Lorenz63 – phase space plot for three very different initial conditions]

Figure 9 – Click to expand

Here’s an example (similar to figure 8) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”
From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s lot of theory around all of this as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.

Conclusion

Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable

References

Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)

Notes

Note 1: The Lorenz equations:

dx/dt = σ (y-x)

dy/dt = rx – y – xz

dz/dt = xy – bz

where

x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number

b = “another parameter”

And the “classic parameters” are σ=10, b = 8/3, r = 28
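For anyone who wants to reproduce these plots, a minimal sketch of the integration (I used Matlab; this version uses Python/scipy, and the tolerance is an illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0           # the classic parameters

def lorenz63(t, state):
    x, y, z = state
    return [SIGMA * (y - x), R * x - y - x * z, x * y - B * z]

t = np.arange(0, 5000, 0.01)                  # 0.01 s time step, as in the text
runs = [solve_ivp(lorenz63, (0, 5000), [0.0, y0, 0.0], t_eval=t, rtol=1e-9)
        for y0 in (1.0, 1.001, 1.002)]        # the three initial conditions of figure 2
x_run1, x_run2 = runs[0].y[0], runs[1].y[0]   # 'x' for the first two runs
```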

Note 2: Lorenz 1963 has over 13,000 citations so I haven’t been able to find out if this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations, more illustrating some important characteristics of chaotic systems

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by 1,000,000 times the increase in prediction time is a massive 2½ times longer.
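For reference, the standard argument behind that number (a sketch – λ is the largest Lyapunov exponent, δ0 the initial error and Δ the largest error we can tolerate): the error grows as

\delta(t) \approx \delta_0 e^{\lambda t}

so the prediction time is

t_{pred} \approx \frac{1}{\lambda} . \ln(\frac{\Delta}{\delta_0})

Improving the precision of the initial conditions by a factor of 10^6 therefore adds only \frac{\ln(10^6)}{\lambda} \approx \frac{13.8}{\lambda} to the prediction time – a logarithmic gain, not a proportional one.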

Read Full Post »

Over the last few years I’ve written lots of articles relating to the inappropriately-named “greenhouse” effect and covered some topics in great depth. I’ve also seen lots of comments and questions which have helped me understand common confusions and misunderstandings.

This article, with huge apologies to regular long-suffering readers, covers familiar ground in simple terms. It’s a reference article. I’ve referenced other articles and series as places to go to understand a particular topic in more detail.

One of the challenges of writing a short simple explanation is that it opens you up to the criticism of having omitted important technical details in order to keep it short. Remember this is the simple version..

Preamble

First of all, the “greenhouse” effect is not AGW. In maths, physics, engineering and other hard sciences, one block is built upon another block. AGW is built upon the “greenhouse” effect. If AGW is wrong, it doesn’t invalidate the greenhouse effect. If the greenhouse effect is wrong, it does invalidate AGW.

The greenhouse effect is built on very basic physics, proven for 100 years or so, that is not in any dispute in scientific circles. Fantasy climate blogs of course do dispute it.

Second, common experience of linearity in everyday life causes many people to question how a tiny proportion of “radiatively-active” molecules can have such a profound effect. Common experience is not a useful guide. Non-linearity is the norm in real science. Since the Enlightenment at least, scientists have measured things rather than just assumed consequences based on everyday experience.

The Elements of the “Greenhouse” Effect

Atmospheric Absorption

1. The “radiatively-active” gases in the atmosphere:

  • water vapor
  • CO2
  • CH4
  • N2O
  • O3
  • and others

absorb radiation from the surface and transfer this energy via collision to the local atmosphere. Oxygen and nitrogen absorb such a tiny amount of terrestrial radiation that even though they constitute an overwhelming proportion of the atmosphere their radiative influence is insignificant (note 1).

How do we know all this? It’s basic spectroscopy, as detailed in exciting journals like the Journal of Quantitative Spectroscopy and Radiative Transfer over many decades. Shine radiation of a specific wavelength through a gas and measure the absorption. Simple stuff and irrefutable.

Atmospheric Emission

2. The “radiatively-active” gases in the atmosphere also emit radiation. Gases that absorb at a wavelength also emit at that wavelength. Gases that don’t absorb at that wavelength don’t emit at that wavelength. This is a consequence of Kirchhoff’s law.

The intensity of emission of radiation from a local portion of the atmosphere is set by the atmospheric emissivity and the temperature.

Convection

3. The transfer of heat within the troposphere is mostly by convection. The sun heats the surface of the earth through the (mostly) transparent atmosphere (note 2). The temperature profile, known as the “lapse rate”, is around 6 K/km in the tropics. The lapse rate is principally determined by non-radiative factors – as a parcel of air ascends it expands into the lower pressure and cools during that expansion (note 3).

The important point is that the atmosphere is cooler the higher you go (within the troposphere).

Energy Balance

4. The overall energy in the climate system is determined by the absorbed solar radiation and the emitted radiation from the climate system. The absorbed solar radiation – globally annually averaged – is approximately 240 W/m² (note 4). Unsurprisingly, the emitted radiation from the climate system is also (globally annually averaged) approximately 240 W/m². If these two are not in balance, the climate is cooling or warming.
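A quick check on those numbers via the Stefan-Boltzmann law – the effective emission temperature of a blackbody radiating 240 W/m²:

```python
SIGMA_SB = 5.67e-8                     # Stefan-Boltzmann constant, W/(m² K⁴)
T_eff = (240.0 / SIGMA_SB) ** 0.25     # temperature of a blackbody emitting 240 W/m²
print(round(T_eff))                    # ~255 K, well below the ~288 K average surface
```

The fact that 255 K is much colder than the surface is the next point in action – most of the emission to space comes from colder levels of the atmosphere.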

Emission to Space

5. Most of the emission of radiation to space by the climate system is from the atmosphere, not from the surface of the earth. This is a key element of the “greenhouse” effect. The intensity of emission depends on the local atmosphere. So the temperature of the atmosphere from which the emission originates determines the amount of radiation.

If the place of emission of radiation – on average – moves upward for some reason then the intensity decreases. Why? Because it is cooler the higher up you go in the troposphere. Likewise, if the place of emission – on average – moves downward for some reason, then the intensity increases (note 5).

More GHGs

6. If we add more radiatively-active gases (like water vapor and CO2) then the atmosphere becomes more “opaque” to terrestrial radiation and the consequence is the emission to space from the atmosphere moves higher up (on average). Higher up is colder. See note 6.

So this reduces the intensity of emission of radiation, which reduces the outgoing radiation, which therefore adds energy into the climate system. And so the climate system warms (see note 7).

That’s it!

It’s as simple as that. The end.

A Few Common Questions

CO2 is Already Saturated

There are almost 315,000 individual absorption lines for CO2 recorded in the HITRAN database. Some absorption lines are stronger than others. At the strongest point of absorption – 14.98 μm (667.5 cm-1) – 95% of radiation is absorbed in only 1 m of the atmosphere (at standard temperature and pressure at the surface). That’s pretty impressive.

By contrast, from 570 – 600 cm⁻¹ (16.7 – 17.5 μm) and 730 – 770 cm⁻¹ (13.0 – 13.7 μm) the CO2 absorption through the atmosphere is nowhere near “saturated”. It’s more like 30% absorbed through a 1km path.
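Translating those two figures into optical depths shows what “saturated” does and doesn’t mean – a sketch assuming simple Beer-Lambert behaviour (real band absorption is more complicated than this):

import math

# Line centre: 95% absorbed in 1 m  ->  optical depth tau = -ln(0.05) per metre
tau_centre_per_m = -math.log(1 - 0.95)
print(f"line centre: tau = {tau_centre_per_m:.1f} per metre")    # ~3.0

# Band wing: 30% absorbed in 1 km  ->  tau = -ln(0.70) per kilometre
tau_wing_per_km = -math.log(1 - 0.30)
print(f"band wing:   tau = {tau_wing_per_km:.2f} per kilometre") # ~0.36

# Doubling CO2 doubles tau. At the line centre that changes almost nothing
# (absorption was already ~100%); in the wings it matters:
print(f"wing absorption, 1x CO2: {1 - math.exp(-tau_wing_per_km):.0%}")
print(f"wing absorption, 2x CO2: {1 - math.exp(-2 * tau_wing_per_km):.0%}")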

You can see the complexity of these results in many graphs in Atmospheric Radiation and the “Greenhouse” Effect – Part Nine – calculations of CO2 transmittance vs wavelength in the atmosphere using the 300,000+ absorption lines from the HITRAN database; and see also Part Eight – interesting actual absorption values of CO2 in the atmosphere from Grant Petty’s book.

The complete result combining absorption and emission is calculated in Visualizing Atmospheric Radiation – Part Seven – CO2 increases – changes to TOA in flux and spectrum as CO2 concentration is increased.

CO2 Can’t Absorb Anything of Note Because it is Only 0.04% of the Atmosphere

See the point above. Many spectroscopy professionals have measured the absorptivity of CO2. Its absorption varies hugely with wavelength, but the most impressive figure is that 95% of 14.98 μm radiation is absorbed in just 1m. How can that happen? Are spectroscopy professionals charlatans? You need evidence, not incredulity. Science involves measuring things, and this has definitely been done. See the HITRAN database.

Water Vapor Overwhelms CO2

This is an interesting point, although not correct when we consider energy balance for the climate. See Visualizing Atmospheric Radiation – Part Four – Water Vapor – results of surface (downward) radiation and upward radiation at TOA as water vapor is changed.

The key point behind all the detail is that the top-of-atmosphere radiation change (as CO2 changes) is the important one. The surface change in downward radiation from increasing CO2 is not the important quantity – it is definitely much weaker and often insignificant. Surface radiation changes from CO2 will, in many cases, be overwhelmed by water vapor.

Water vapor does not overwhelm CO2 high up in the atmosphere because there is very little water vapor there – and the radiative effect of water vapor is dramatically impacted by its concentration, due to the “water vapor continuum”.

The Calculation of the “Greenhouse” Effect is based on “Average Surface Temperature” and there is No Such Thing

Simplified calculations of the “greenhouse” effect use some averages to make some points. They help to create a conceptual model.
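For reference, the simplified calculation usually runs like this – a sketch treating the earth as a uniform blackbody, which is exactly the simplification under discussion:

SIGMA = 5.67e-8
absorbed = 240.0     # globally averaged absorbed solar radiation, W/m^2
T_surface = 288.0    # approximate average surface temperature, K

# Effective emission temperature: the blackbody temperature that
# would emit 240 W/m^2
T_effective = (absorbed / SIGMA) ** 0.25
print(f"effective emission temperature: {T_effective:.0f} K")             # ~255 K
print(f"simplified 'greenhouse effect': {T_surface - T_effective:.0f} K") # ~33 K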

Real calculations, using the equations of radiative transfer, don’t use an “average” surface temperature and don’t rely on a 33K “greenhouse” effect. Would the temperature decrease by 33K if all of the GHGs were removed from the atmosphere? Almost certainly not, because of feedbacks. We don’t know the combined effect of all of the feedbacks. But would the climate be colder? Definitely.

See The Rotational Effect – why the rotation of the earth has absolutely no effect on climate, or so a parody article explains..

The Second Law of Thermodynamics Prohibits the Greenhouse Effect, or so some Physicists Demonstrated..

See The Three Body Problem – a simple example with three bodies to demonstrate how a “with atmosphere” earth vs a “without atmosphere” earth will generate different equilibrium temperatures. Please review the entropy calculations and explain (you will be the first) where they are wrong – or perhaps explain why entropy doesn’t matter (and revolutionize the field).

See Gerlich & Tscheuschner for the bait and switch routine by this operatic duo.

And see Kramm & Dlugi On Dodging the “Greenhouse” Bullet – Kramm & Dlugi demonstrate that the “greenhouse” effect doesn’t exist by writing a few words in a conclusion but carefully dodging the actual main point throughout their entire paper. However, they do recover Kepler’s laws and point out a few errors in a few websites. And note that one of the authors kindly showed up to comment on this article but never answered the important question asked of him. Probably just too busy.. Kramm & Dlugi also helpfully (unintentionally) explain that G&T were wrong, see Kramm & Dlugi On Illuminating the Confusion of the Unclear – Kramm & Dlugi step up as skeptics of the “greenhouse” effect, fans of Gerlich & Tscheuschner and yet clarify that colder atmospheric radiation is absorbed by the warmer earth..

And for more on that exciting subject, see Confusion over the Basics under the sub-heading The Second Law of Thermodynamics.

Feedbacks overwhelm the Greenhouse Effect

This is a totally different question. The “greenhouse” effect is the “greenhouse” effect. If the effect of more CO2 is totally countered by some feedback then that will be wonderful. But that has nothing to do with the “greenhouse” effect itself – it would be a consequence of increasing temperature.

As noted in the preamble, it is important to separate out the different building blocks in understanding climate.

Miskolczi proved that the Greenhouse Effect has no Effect

Miskolczi accepted that the greenhouse effect was real. He claimed, however, that more CO2 was balanced out by a corresponding decrease in water vapor. See the Miskolczi series for a tedious refutation of his paper, which rested on imaginary laws of thermodynamics and questionable experimental evidence.

Once again, it is important to be able to separate out two ideas. Is the greenhouse effect false? Or is the greenhouse effect true but wiped out by a feedback?

If you don’t care – so long as you get the “right” result – you will be in ‘good’ company (well, you will join an extremely large company of people). But this blog is about science, not wishful thinking. Don’t mix the two up..

Convection “Short-Circuits” the Greenhouse Effect

Let’s assume that, regardless of the amount of energy arriving at the earth’s surface, the lapse rate stays constant – so the more heat arrives, the more heat leaves. That is, the temperature profile stays constant. (It’s a questionable assumption that also impacts the AGW question.)

It doesn’t change the fact that with more GHGs, the radiation to space will be from a higher altitude. A higher altitude will be colder. Less radiation to space and so the climate warms – even with this “short-circuit”.

In a climate without convection, the surface temperature will start off higher, and the GHG effect from doubling CO2 will be higher. See Radiative Atmospheres with no Convection.

In summary, this isn’t an argument against the greenhouse effect, this is possibly an argument about feedbacks. The issue about feedbacks is a critical question in AGW, not a critical question for the “greenhouse” effect. Who can say whether the lapse rate will be constant in a warmer world?

Notes

Note 1 – An important exception is O2 absorbing solar radiation high up above the troposphere (lower atmosphere). But O2 does not absorb significant amounts of terrestrial radiation.

Note 2 – 99% of solar radiation has a wavelength <4μm. Even so, roughly 1/3 of the solar radiation absorbed by the climate system is absorbed within the atmosphere rather than at the surface. By contrast, most of the terrestrial radiation, with a wavelength >4μm, is absorbed in the atmosphere.
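The neat separation around 4μm follows from Wien’s displacement law – a quick sketch with round-number temperatures:

WIEN = 2898.0    # Wien displacement constant, um*K

for name, T in [("sun", 5778.0), ("earth's surface", 288.0)]:
    peak = WIEN / T    # wavelength of peak emission, um
    print(f"{name} ({T:.0f} K): peak emission at {peak:.1f} um")
# -> sun peaks near 0.5 um, terrestrial radiation near 10 um,
#    so the two spectra barely overlap around 4 um.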

Note 3 – see:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

Note 4 – see Earth’s Energy Budget – a series on the basics of the energy budget

Note 5 – the “place of emission” is a useful conceptual tool but in reality the emission of radiation takes place from everywhere between the surface and the stratosphere. See Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions.

Also, take a look at the complete series: Visualizing Atmospheric Radiation.

Note 6 – the balance between emission and absorption is found in the equations of radiative transfer. These are derived from fundamental physics – see Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations – the equations of radiative transfer, including the plane parallel assumption and why it’s nothing to do with blackbodies. The fundamental physics is not just proven in the lab: spectral measurements at the top of atmosphere and at the surface match the values calculated from the radiative transfer equations – see Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.
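For the curious, the core idea of those equations fits in a few lines. This is only a toy – a grey (wavelength-independent), no-scattering march up through a handful of layers with made-up temperatures and optical depths – but it shows how emission and absorption balance along the path:

SIGMA = 5.67e-8

def march_upward(I_surface, layer_temps, layer_tau):
    """Schwarzschild-style step per layer: dI = (B - I) * dtau (grey, no scattering)."""
    I = I_surface
    for T in layer_temps:
        B = SIGMA * T**4                  # blackbody source function of the layer
        I = I + (B - I) * layer_tau       # emission adds, absorption removes
    return I

# Hypothetical column: 288 K surface, layers cooling with height
surface_flux = SIGMA * 288.0**4
temps = [280, 270, 260, 250, 240, 230]    # example layer temperatures, K
print(f"surface emission: {surface_flux:.0f} W/m^2")
print(f"flux at TOA:      {march_upward(surface_flux, temps, 0.3):.0f} W/m^2")
# The outgoing flux ends up weighted toward the colder upper layers.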

Also, take a look at the complete series: Atmospheric Radiation and the “Greenhouse” Effect

Note 7 – this calculation is under the assumption of “all other things being equal”. Of course, in the real climate system all other things are not equal. However, to understand an effect “pre-feedback” we need to separate it from the responses of the system.
