Archive for 2010

There’s a paper out which has created some excitement, On Falsification Of The Atmospheric CO2 Greenhouse Effects, by Gerlich & Tscheuschner (2009). It was published in International Journal of Modern Physics B. I don’t know what the B stands for.

Usually I would try to read a paper all the way through to understand it, then reread it.. but I got as far as page 55 out of 115 – even the seminal Climate Modeling through Radiative Convective Methods by Ramanathan & Coakley (1978) only ran to 25 pages.

Quite a few points have already jumped out at me that made me not want to read the whole thing:

First, a lot of time was spent showing that greenhouses and bodies surrounded by glass (or anything that stops air movement) retain heat not because of absorption and reradiation of longwave energy but because convection is reduced.

Why spend so long on it when everyone agrees? Sadly, the "so-called greenhouse effect" got its name because the term passed into common language to describe this effect, even though it's not the right description.

Even in CO2 – An Insignificant Trace Gas? Part Six – Visualization I said:

I tried to think of a good analogy, something to bring it to life..

But didn’t mention greenhouses, because the greenhouse isn’t a good analogy..

This is a concern if it’s a serious paper, because attacking arguments that no one agrees with is the strawman fallacy, a refuge of people with no strong argument.

Here is a nice example, commenting on a paper by Lee, who says that the “greenhouse” term is a misnomer:

Lee continues his analysis with a calculation based on radiative balance equations, which are physically questionable.. Nevertheless, Lee’s paper is a milestone marking the day after which every serious scientist or science educator is no longer allowed to compare the greenhouse with the atmosphere, even in the classroom, which Lee explicitly refers to.

The authors of this paper don't actually explain where Lee's equations are questionable; instead they draw attention to a day that should be marked down in history.. and use that to show that anyone mentioning "greenhouses" has got it wrong.

None of the papers that discuss the radiative-convective method actually argue from the greenhouse. So why are the authors of this paper spending so much time on it?

Second, attacking poor presentations with a mixture of correct (but really irrelevant) and incorrect arguments.

They cite, not a paper, but an Encyclopedia..

In the 1974 edition of Meyer’s Enzyklopadischem Lexikon one finds under “glass house effect”:

Name for the influence of the Earth’s atmosphere on the radiation and heat budget of the Earth, which compares to the effect of a glass house: Water vapor and carbon dioxide in the atmosphere let short wave solar radiation go through down to the Earth’s surface with a relative weak attenuation and, however, reflect the portion of long wave (heat) radiation which is emitted from the Earth’s surface (atmospheric backradiation).

Disproof: Firstly, the main part of the solar radiation lies outside the visible light. Secondly, reflection is confused with emission.

Nice. They have brought this up a few times. Yes, technically “infrared” is the radiation at wavelengths longer than visible light, so anything >700nm is infrared. And yet in common terminology, spelled out time and again, we use “longwave” to mean radiation over 4μm, because 99% of it is emitted by the earth, and “shortwave” to mean radiation under 4μm, which is solar radiation.

So their first “disproof” isn’t a disproof. And their second one is simply picking up a terminology mistake in an encyclopedia. Yes, the encyclopedia has confused reflection with emission.

Why are they citing from this source?

Third, another example of “destroying” the opponent’s argument..

They quote another source:

The infrared radiation that is emitted downwards from the atmosphere (the so-called back-radiation) raises the energy supply of the Earth’s surface.

And comment:

The assumption that if gases emit heat radiation, then they will emit it only downwards, is rather obscure.

That isn’t what their source actually said. The source didn’t say, or imply, that radiation is emitted only downwards.

Fourth, and most importantly, the paper gives the appearance of discussing prior work by mixing very old work with lots of more recent comments made by people in the “introduction” to something quite different. That is, they cite papers which are introducing another subject and which aren’t attempting any formal proof of the inappropriately named “greenhouse effect”. They don’t discuss the relevant modern work that demonstrates the relevance, and the solution, of the radiative transfer equations.

They do reference one key paper but never discuss it to point out any problems.

The paper in question is S. Manabe and R.F. Strickler, Thermal Equilibrium of the Atmosphere with Convective Adjustment, J. Atmosph. Sciences 21, 361-385 (1964)

It is referenced through this quote:

The influence of CO2 on the climate was also discussed thoroughly in a number of publications that appeared between 1909 and 1980, mainly in Germany. The most influential authors were Moller, who also wrote a textbook on meteorology, and Manabe (the citation). It seems, that the joint work of Moller and Manabe has had a significant influence on the formulation of the modern atmospheric CO2 greenhouse conjectures and hypotheses, respectively.

The work that most recent papers on the solution of the radiative transfer equations discuss or cite – that is, papers by anyone calculating the effect of CO2 and other trace gases on surface temperatures – is Ramanathan and Coakley (1978), often along with a citation of Manabe and Strickler. And of course Ramanathan and Coakley themselves cite and discuss Manabe and Strickler (1964).

Why not open up these two great papers and show the flaws? Ramanathan and Coakley are never even cited. Manabe and Strickler aren’t discussed.

R&C’s paper is 25 pages long and works through a lot of thermodynamics. If Gerlich & Tscheuschner want to get a result, they should show its flaws. It should be a breeze for them..

This doesn’t instill any confidence in the paper. I started writing this post a few weeks ago and at the time wrote:

One day I may find the energy to read and reread all 115 pages and do them justice. Perhaps there is some revelation inside. More likely, they are having a laugh. Otherwise why is half the paper nothing to do with disproving the theory that modern atmospheric physicists believe?

I’m sure they would say otherwise.. And I’m certain we would get on great over a few drinks. If we drank enough I’m sure they would admit they did it for a bet..

Non-Conclusion

If Gerlich & Tscheuschner want to be taken seriously maybe they can write a paper which is 20-30 pages long – it should be enough – and they can ignore greenhouses and encyclopedia references and what people say in introductions to less relevant works.

Their paper could reference and discuss recent work which, from first principles, demonstrates and solves the radiative transfer equations. And they should show the flaws in these papers. Use Ramanathan and Coakley (1978) – everyone else references it.

On that paper: Climate Modeling through Radiative Convective Methods – R&C are into the maths by page 2 and don’t mention greenhouses. I would recommend this excellent paper (you should be able to find it online without paying) to anyone who wants to learn more about the approach to solving this difficult but well-understood problem. Even if you don’t want to follow their maths there is lots to learn.

Gerlich & Tscheuschner waste 50 pages with irrelevance and poorly directed criticism.. if they have produced a great insight it will be lost on many.

In New Theory Proves AGW Wrong! I commented that many ideas come along which are widely celebrated.

Some “disprove” the “greenhouse effect” or modify it to the extent that if their ideas are correct our ideas about the (inappropriately named) greenhouse effect are quite wrong.

Some disprove AGW (anthropogenic global warming). There is a world of difference between the two.

This paper falls into the first category. I also commented that the papers in the first category usually disprove each other as well, so it’s not “one more nail in the greenhouse effect” – it’s “one more nail in the last theory” and the theories that will inevitably follow.

Interestingly (for me), since I wrote that article: New Theory Proves AGW Wrong! someone produced a list of a few papers that I should “disprove”. One was this paper by Gerlich and Tscheuschner, another was by Miskolczi. Yet they disprove each other and both disprove what this person promoted as their own theory.

This doesn’t prove anyone wrong – just that they can’t all be right. One or zero..

And I’ll be the first to admit I haven’t proven Gerlich and Tscheuschner wrong in their central theory. I have pointed out a few “areas for improvement” in their paper but these are all distractions from the main event. More interesting stuff to do.

Update – new post: On the Miseducation of the Uninformed by Gerlich and Tscheuschner (2009)


Gary Thompson at American Thinker recently produced an article The AGW Smoking Gun. In the article he takes three papers and claims to demonstrate that they are at odds with AGW.

A key component of the scientific argument for anthropogenic global warming (AGW) has been disproven. The results are hiding in plain sight in peer-reviewed journals.

The article got discussed on Skeptical Science, with the article Have American Thinker Disproven Global Warming? although the blog article really just covered the second paper. The discussion was especially worth reading because Gary Thompson joined in and showed himself to be a thoughtful and courteous fellow.

He did claim in that discussion that:

First off, I never stated in the article that I was disproving the greenhouse effect. My aim was to disprove the AGW hypothesis as I stated in the article “increased emission of CO2 into the atmosphere (by humans) is causing the Earth to warm at such a rate that it threatens our survival.” I think I made it clear in the article that the greenhouse effect is not only real but vital for our planet (since we’d be much cooler than we are now if it didn’t exist).

However, the papers he cites are really demonstrating the reality of the “greenhouse” effect. If his conclusions – different from the authors of the papers – are correct, then he has demonstrated a problem with the “greenhouse” effect, which is a component – a foundation – of AGW.

This article will cover the first paper which appears to be part of a conference proceeding: Changes in the earth’s resolved outgoing longwave radiation field as seen from the IRIS and IMG instruments by H.E. Brindley et al. If you are new to understanding the basics on longwave and shortwave radiation and absorption by trace gases, take a look at CO2 – An Insignificant Trace Gas?

Take one look at a smoking gun and you know it’s been fired. One look at a paper on a complex subject like atmospheric physics and you might easily jump to the wrong conclusion. Let’s hope I haven’t fallen into the same trap..

Even their mother couldn't tell them apart

The Concept Behind the Paper

The paper examines the difference between satellite measurements of longwave radiation from 1970 and 1997. The measurements are only for clear sky conditions, to remove the complexity associated with the radiative effects of clouds (they did this by removing the measurements that appeared to be under cloudy conditions). And the measurements are in the Pacific, with the data presented divided between east and west. Data is from April-June in both cases.

The Measurement

The spectral data is from 7.1 – 14.1 μm (1400 cm-1 – 710 cm-1 using the convention of spectral people, see note 1 at end). Unfortunately, the measurements closer to the 15μm band had too much noise so were not reliable.

Their first graph shows the difference of 1997 – 1970 spectral results converted from W/m2 into Brightness Temperature (the equivalent blackbody radiation temperature). I highlighted the immediate area of concern, the “smoking gun”:

Spectral difference - 1997 less 1970 over East and West Pacific, Brindley

Note first that the 3 lines on each graph correspond to the measurement (middle) and the error bars either side.

I added wavelength in μm under the cm-1 axis for reference.
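For anyone curious about the vertical axis, here is a minimal sketch of how a measured spectral radiance becomes a "brightness temperature". It is not code from the paper – just the standard inversion of the Planck function, with made-up illustrative radiances:

```python
import math

C1 = 1.191042e-5   # 2*h*c^2, in mW / (m^2 sr cm^-4)
C2 = 1.4387752     # h*c/k, in K cm

def planck_radiance(wavenumber, temperature):
    """Blackbody spectral radiance at `wavenumber` (cm^-1), in mW/(m^2 sr cm^-1)."""
    return C1 * wavenumber**3 / (math.exp(C2 * wavenumber / temperature) - 1.0)

def brightness_temperature(wavenumber, radiance):
    """The temperature a blackbody would need in order to emit `radiance` at this wavenumber."""
    return C2 * wavenumber / math.log(1.0 + C1 * wavenumber**3 / radiance)

# Illustrative only - a radiance difference at 700 cm^-1 expressed as a
# brightness temperature difference, which is the quantity plotted in the paper
nu = 700.0                           # cm^-1, near the edge of the CO2 band
r_1970 = planck_radiance(nu, 288.0)  # a made-up 1970 radiance
r_1997 = planck_radiance(nu, 289.0)  # a made-up 1997 radiance
print(brightness_temperature(nu, r_1997) - brightness_temperature(nu, r_1970))  # ~1.0 K
```

Expressing the difference this way makes spectral regions with very different radiances directly comparable.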

What Gary Thompson draws attention to is the fact that OLR (outgoing longwave radiation) has increased even in the 13.5+μm range, which is where CO2 absorbs radiation – and CO2 has increased during the period in question (about 330ppm to 380ppm). Surely, with an increase in CO2 there should be more absorption and therefore the measurement should be negative for the observed 13.5μm-14.1μm wavelengths.

One immediate thought without any serious analysis or model results is that we aren’t quite into the main absorption of the CO2 band, which is 14 – 16μm. But let’s read on and understand what the data and the theory are telling us.

Analysis

The key question we need to ask before we can draw any conclusions is what is the difference between the surface and atmosphere in these two situations?

We aren’t comparing the global average over a decade with an earlier decade. We are comparing 3 months in one region with 3 months 27 years earlier in the same region.

Herein seems to lie the key to understanding the data..

For the authors of the paper to assess the spectral results against theory they needed to know the atmospheric profile of temperature and humidity, as well as changes in the well-studied trace gases like CO2 and methane. Why? Well, the only way to work out the “expected” results – or what the theory predicts – is to solve the radiative transfer equations (RTE) for that vertical profile through the atmosphere. Solving those equations, as you can see in CO2 – Part Three, Four and Five – requires knowledge of the temperature profile as well as the concentration of the various gases that absorb longwave radiation. This includes water vapor and, therefore, we need to know humidity.

Change in Atmospheric Temperature Profile, Brindley

I’ve broken up their graphs: this is the temperature change – the humidity graphs are below.

Now it is important to understand where the temperature profiles came from. They came from model results, by using the recorded sea surface temperatures during the two periods. The temperature profiles through the atmosphere are not usually available with any kind of geographic and vertical granularity, especially in 1970. This is even more the case for humidity.

Note that the temperature – the real sea surface temperature – in 1997 for these 3 months is higher than 1970.

Higher temperature = higher radiation across the spectrum of emission.

Now the humidity:

Change in Humidity Profile through the atmosphere, Brindley

The top graph is change in specific humidity – how many grams of water vapor per kg of air. The bottom is change in relative humidity. Not relevant to the subject of the post, but you can see how even though the difference in relative humidity is large high up in the atmosphere it doesn’t affect the absolute amount of water vapor in any meaningful way – because it is so cold high up in the atmosphere. Cold air cannot hold as much water vapor as warm air.

It’s no surprise to see higher humidity when the sea temperature is warmer. Warmer air has a higher ability to absorb water vapor, and there is no shortage of water to evaporate from the surface of the ocean.
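To see why the relative humidity change high up matters so little in absolute terms, here is a rough sketch (not from the paper) converting relative humidity to specific humidity, using the Magnus approximation for saturation vapor pressure; the temperatures and pressures are just illustrative values:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure in hPa (Magnus approximation, over water)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def specific_humidity(rel_humidity, t_celsius, pressure_hpa):
    """Grams of water vapor per kg of moist air, for a relative humidity between 0 and 1."""
    e = rel_humidity * saturation_vapor_pressure(t_celsius)
    return 1000.0 * 0.622 * e / (pressure_hpa - 0.378 * e)

# Near the surface, a 5% change in relative humidity is a lot of water vapor..
print(specific_humidity(0.85, 25.0, 1000.0) - specific_humidity(0.80, 25.0, 1000.0))  # ~1 g/kg
# ..but high in the cold troposphere the same 5% change is almost nothing in absolute terms
print(specific_humidity(0.45, -50.0, 300.0) - specific_humidity(0.40, -50.0, 300.0))  # ~0.007 g/kg
```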

Model Results of Expected Longwave Radiation

Now here are some important graphs which initially can be a little confusing. It’s worth taking a few minutes to see what these graphs tell us. Stay with me..

Top - model results not including trace gases; Bottom - model results including all effects

The top graph. The bold line is the model results of expected longwave radiation – not including the effect of CO2, methane, etc – but taking into account sea surface temperature and modeled atmospheric temperature and humidity profiles.

This calculation includes solving the radiative transfer equations through the atmosphere (see CO2 – An Insignificant Trace Gas? Part Five for more explanation on this, and you will see why the vertical temperature profile through the atmosphere is needed).

The breakdown is especially interesting – the three fainter lines. Notice how the two fainter lines at the top are the separate effects of the warmer surface and the higher atmospheric temperature creating more longwave radiation. Now the 3rd fainter line below the bold line is the effect of water vapor. As a greenhouse gas, water vapor absorbs longwave radiation through a wide spectral range – and therefore pulls the longwave radiation down.

So the bold line in the top graph is the composite of these three effects. Notice that without any CO2 effect in the model, the graph towards the left edge trends up: 700 cm-1 to 750 cm-1 (or 14.3μm down to 13.3μm). This is because water vapor is absorbing a lot of radiation to the right (wavelengths below 13.5μm) – dragging that part of the graph proportionately down.

The bottom graph. The bold line in the bottom graph shows the modeled spectral results including the effects of the long-term changes in the trace gases CO2, O3, N2O, CH4, CFC11 and CFC12. (The bottom graph also confuses us by including some inter-annual temperature changes – the fainter lines – let’s ignore those).

Compare the top and bottom bold graphs to see the effect of the trace gases. In the middle of the graph you see O3 at 1040 cm-1 (9.6μm). Over on the right around 1300cm-1 you see methane absorption. And on the left around 700cm-1 you see the start of CO2 absorption, which would continue on to its maximum effect at 667cm-1 or 15μm.

Of course we want to compare this bottom graph – the full model results – more easily with the observed results. And the vertical axes are slightly different.

First for completeness, the same graphs for the West Pacific:

Model results for West Pacific

Let’s try the comparison of observation to the full model. It’s slightly ugly because I don’t have the source data, just a graphics package to try to line them up on comparable vertical axes.

Here is the East Pacific. Top is observed with (1 standard deviation) error bars. Bottom is model results based on: observed SST; modeled atmospheric profile for temperature and humidity; plus effect of trace gases:

Comparison on similar vertical axes - top, observed; bottom, model

Now the West Pacific:

Comparison, West Pacific, Observed (top) vs Model (bottom)

We notice a few things.

First, the model and the results aren’t perfect replicas.

Second, the model and the results both show a very similar change in the profile around methane (right “dip”), ozone (middle “dip”) and CO2 (left “dip”).

Third, the models show a negative change in brightness temperature (-1K) at 700 cm-1, whereas the actual result for the East Pacific is around +1K and for the West Pacific around -0.5K. The 1 standard deviation error bars on the measurements include the model results – easily for the West Pacific and only just for the East Pacific.

It appears to be this last observation that has prompted the article in American Thinker.

Conclusion

Hopefully, those who have taken the time to review:

  • the results
  • the actual change in surface and atmospheric conditions between 1970 and 1997
  • the models without trace gas effects
  • the models with trace gas effects

might reach a different conclusion to Gary Thompson.

The radiative transfer equations, as part of the modeled results, have done a pretty good job of explaining the observed results, even though model and observation aren’t exactly the same. However, if we don’t include the effect of trace gases in the model we can’t explain some of the observed features – just compare the earlier graphs of model results with and without trace gases.

It’s possible that the biggest error is the water vapor effect not being modeled well. If you compare observed vs model (the last 2 sets of graphs) from 800cm-1 to 1000cm-1 there seems to be a “trend line” error. The effect of water vapor has the potential to cause the most variation for two reasons:

  • water vapor is a strong greenhouse gas
  • water vapor concentration varies significantly vertically through the atmosphere and geographically (due to local vaporization, condensation, convection and lateral winds)

It’s also the case that the results for the radiative transfer equations will have a certain amount of error using “band models” compared with the “line by line” (LBL) codes for all trace gases. (A subject for another post but see note 2 below). It is rare that climate models – even just 1d profiles – are run with LBL codes because it takes a huge amount of computer time due to the very detailed absorption lines for every single gas.

The band models get good results but not perfect – however, they are much quicker to run.

Comparing two spectra from two different real world situations where one has higher sea surface temperatures and declaring the death of the model seems premature. Perhaps Gary ran the RTE calculations through a pen and paper/pocket calculator model like so many others have done.

There is a reason why powerful computers are needed to solve the radiative transfer equations. And even then they won’t be perfect. But for those who want to see a better experiment comparing real and modeled conditions, take a look at Part Six – Visualization, where actual measurements of humidity and temperature through the atmosphere were taken, the detailed spectrum of downward longwave radiation was measured, and the model and measured values were compared.

The results might surprise even Gary Thompson.

Notes:

1. Spectroscopists have long worked in wavenumber (cm-1) rather than wavelength. The conversion is very simple: 10,000/wavenumber in cm-1 = wavelength in μm.

e.g. CO2 central absorption wavelength of 15μm => 667cm-1 (=10,000/15)
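The same conversion as a couple of trivial Python helpers:

```python
def wavenumber_to_wavelength(wavenumber_cm):
    """Convert cm^-1 to micrometres."""
    return 10000.0 / wavenumber_cm

def wavelength_to_wavenumber(wavelength_um):
    """Convert micrometres to cm^-1."""
    return 10000.0 / wavelength_um

print(wavelength_to_wavenumber(15.0))   # 666.7 cm^-1, the centre of the CO2 band
print(wavenumber_to_wavelength(700.0))  # ~14.3 um, the left edge of the graphs above
```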

2. Solving the radiative transfer equations through the atmosphere requires knowledge of the absorption spectra of each gas. These are extremely detailed and consequently the numerical solution of the equations requires days or weeks of computational time. The detailed versions are known as LBL – line by line transfer codes. The approximations, often accurate to within 10%, are called “band models”. These require much less computational time and so the band models are almost always used.


The title should really be:

The Real Measure of Global Warming – Part Two – How Big Should Error Bars be, and the Sad Case of the Expendable Bathythermographs

But that was slightly too long.

This post picks up from The Real Measure of Global Warming which in turn followed Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored

The discussion was about ocean heat content being a better measure of global warming than air temperature. However, ocean heat down into the deep has been measured much less than air temperature, so it is subject to more uncertainty the further back in time we travel.

We had finished up with a measure of changes in OHC (ocean heat content) over 50 years from Levitus (2005):

Ocean heat change, Levitus (2005)

Some of the earlier graphs were a little small but you could probably see that the error bars further back in time are substantial. Unfortunately, it’s often the case that the error bars themselves are placed with too much confidence, and so it transpired here.

In 2006, GRL (Geophysical Research Letters) published the paper How much is the ocean really warming? by Gouretski and Koltermann.

They pointed out a significant error source in XBTs (expendable bathythermographs). XBTs record temperature against depth by estimating depth from the instrument’s fall rate – and the assumed fall rate was found to be inaccurate.

The largest discrepancies are found between the expendable bathythermographs (XBT) and bottle and CTD data, with XBT temperatures being positively biased by 0.2–0.4°C on average. Since the XBT data are the largest proportion of the dataset, this bias results in a significant World Ocean warming artefact when time periods before and after introduction of XBT are compared.

And conclude:

Comparison with LAB2005 [Levitus 2005] results shows that the estimates of global warming are rather sensitive to the data base and analysis method chosen, especially for the deep ocean layers with inadequate sampling. Clearly instrumental biases are an important issue and further studies to refine estimates of these biases and their impact on ocean heat content are required. Finally, our best estimate of the increase of the global ocean heat content between 1957–66 and 1987–96 is 12.8 ± 8.0 x 10²² J with the XBT offsets corrected. However, using only the CTD and bottle data reduces this estimate to 4.3 ± 8.0 x 10²² J.

If we refer back to Levitus, they had calculated a value over the same time period of 15×10²² J.

Gouretski and Koltermann are saying, in layman’s terms, if I might paraphrase:

Might be around what Levitus said, might be a lot less, might even be zero.. we don’t know.

Some readers might be asking, does this heretical stuff really get published?

Well, moving back to ocean heat content, we don’t want to drown in statistical analysis because anything more than a standard deviation and I am out of my depth, so to speak.. Better just to see what the various experts have concluded as our measure of uncertainty.

Ocean Heat Content is one of the hot topics, so no surprise to see others weighing in..

Domingues et al

In 2008, Nature then published Improved estimates of upper-ocean warming and multi-decadal sea-level rise by Domingues et al.

Remember that the major problem with ocean heat content is, first, a lack of data and, as just revealed, problematic data in the major data source.. Domingues et al say in the abstract:

..using statistical techniques that allow for sparse data coverage..

My brief excursion into statistics was quickly abandoned when the first paper cited (Reduced space optimal interpolation of historical marine sea level pressure: 1854-1992, Kaplan 2000) states:

..A novel procedure of covariance adjustment brought the results of the analysis to the consistency with the a priori assumptions on the signal covariance structure..

Let’s avoid the need for strong headache medication and just see their main points, interesting asides and conclusions. Which are interesting.

OHC 1951-2004, Domingues (2008)

The black line is their story. Note their “error bars” in the top graph: the grey shading around the black line is one standard deviation. This helps us see “a measure” of uncertainty as we go back in time. The red line is the paper we have just considered, Levitus 2005.

Domingues calculates the 1961-2003 increase in OHC as 16×10²² J, with their error bars as ±3×10²² J. They calculate a number very close to Levitus (2005).

Interesting aside:

Climate models, however, do not reproduce the large decadal variability in globally averaged ocean heat content inferred from the sparse observational database.

From one of the papers they cite (Simulated and observed variability in ocean temperature and heat content, AchutaRao 2007):

Several studies have reported that models may significantly underestimate the observed OHC variability, raising concerns about the reliability of detection and attribution findings.

And on to Levitus et al 2009

From GRL, Global ocean heat content 1955–2008 in light of recently revealed instrumentation problems

Or, having almost the last word with his updated paper:

Ocean heat change 1955-2009 - Levitus (2009)

The red line being the updated version, the black dotted line the old version.

Willis Back, 2006 and Forwards, 2009

In the meantime Josh Willis, using the brand new Argo floats (see part one for the Argo floats), published a paper (GRL 2006) showing a sharp reduction in ocean heat from 2003–2005 that no one could explain.

And then a revised paper in 2009 in the Journal of Atmospheric and Oceanic Technology showing that the previous result was a mistake – instrument problems again.. now it’s all flat for a few years:

no significant warming or cooling is observed in upper-ocean heat content between 2003 and 2006

There are probably more papers we could investigate, including one which I planned to cover before realizing I can’t find it – and this post has gone on way too long already.

Conclusion

We are looking at a very important measurement, ocean heat content. We aren’t as sure as we would like to be about the history of OHC and not much can be done about that, although novel statistical methods of covariance adjustment may have their place.

Some could say, based on one of the papers presented here, “No ocean warming for 50 years”. It’s a possibility, but probably a distant one. One day when we get to the sea level “budget”, more usefully called “sea level rise”, we will probably think that the rise of sea level is usefully explained by the ocean heat content going up.

We do have excellent measurements in place now, and since around 2000, although even that exciting project has been confused by instrument uncertainty, or uncertainty about instrument uncertainty.

We have seen a great example that error bars aren’t really error bars. They are “statistics”, not real life.

And perhaps, most useful of all, we might have seen that papers which show “a lot less warming” and “unexplained cooling”, still make it into print with peer-reviewed science journals like GRL. This last factor may give us more confidence than anything that we are seeing real science in progress. And save us from having to analyze 310,000 temperature profiles with and without covariance adjustments. Instead, we can wait for the next few papers to see what the final consensus is.

Or spend a lifetime in study of statistics.


In an earlier post – Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored – I commented:

There’s a huge amount of attention paid to the air temperature 6ft off the ground all around the continents of the world. And there’s an army of bloggers busy re-analyzing the data.

It seems like one big accident of history. We had them, so we used them, then analyzed them, homogenized them, area-weighted them, re-analyzed them, wrote papers about them and in so doing gave them much more significance than they deserve. Consequently, many people are legitimately confused about whether the earth is warming up.

Then we looked at some of the problems of measuring the surface temperature of the earth via the temperature of a light ephemeral substance approximately 6ft off the ground.

In Warming of the World Ocean 1955-2003, Levitus (2005) shows an interesting comparison of estimates of absorbed heat over almost half a century:

Heat absorbed in different elements of the climate, Levitus (2005)

Once you find out that the oceans have around 1000x the heat capacity of the atmosphere, the above chart won’t be surprising.

For those who haven’t considered this relative difference in heat capacity before:

  • if the oceans cooled down by a tiny 0.1°C, transferring their heat to the atmosphere, the atmosphere would heat up by 100°C (it wouldn’t happen like this but it gives an idea of the relative energy in both)
  • if the atmosphere transferred so much heat to the oceans that the air temperature went from an average of 15°C to a freezing -15°C, the oceans would heat up by a tiny, almost unnoticeable 0.03°C
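Here is a rough back-of-envelope check of those two bullet points in Python. The masses and specific heat capacities are round, illustrative values, not precise figures:

```python
# Round, illustrative values - not precise figures
mass_atmosphere = 5.1e18   # kg
mass_ocean      = 1.4e21   # kg
cp_air          = 1000.0   # J / (kg K)
cp_seawater     = 4000.0   # J / (kg K)

heat_capacity_atm   = mass_atmosphere * cp_air      # J per K of warming
heat_capacity_ocean = mass_ocean * cp_seawater      # J per K of warming

print(heat_capacity_ocean / heat_capacity_atm)        # ~1000x

# Cool the whole ocean by 0.1 K and hand that energy to the atmosphere:
print(0.1 * heat_capacity_ocean / heat_capacity_atm)  # ~100 K of warming

# Cool the atmosphere by 30 K (15C down to -15C) and hand that energy to the ocean:
print(30.0 * heat_capacity_atm / heat_capacity_ocean) # ~0.03 K of warming
```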

So if we want to understand the energy in the climate system, if we want to understand whether the earth is warming up, we need to measure the energy in the oceans.

An Accident of History

Measuring the temperature of the earth’s surface by measuring the highly mobile atmosphere 6ft off the ground is a problem. By contrast, measuring ocean heat is simple..

Except we didn’t start until much later. Sea surface temperatures date back to the 19th century, but that doesn’t tell us much. We want to know the temperature down into the deep all around the world.

Ocean temperature vs depth in one location, "Oceans and Climate", Bigg (2003)

Here is a typical sample. Unlike the atmosphere, the oceans are more “stratified” – see Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored for more on the basic physics of why the ocean is warmer at the surface. However, the oceans have complex global currents so we need to take a lot of measurements.

Measurements of the temperature down into the ocean depths didn’t really start until the 1940s, and coverage grew only slowly after that. Levitus says:

Most of the data from the deep ocean are from research expeditions. The amount of data at intermediate and deep depths decreases as we go back further in time.

Fast forward to 2000 and the Argo project began to be deployed. By early 2010, over 3300 floats had been moved into place around the world’s oceans. Every 10 days each Argo float descends to a depth of 2km and automatically measures temperature and salinity between the surface and that depth:

Argo profile, Temperature and Salinity vs Depth

Why salinity? Salinity is the other major factor apart from temperature which affects ocean density and therefore controls the ocean currents. See Predictability? With a Pinch of Salt please.. for more..

As we go back from 2010 there is progressively less data available. Even during the last 10 years measurement issues have created waves. But more on that later..

The Leviathan

It’s often best to step back a little to understand a subject better.

In 2000, Science published the paper Warming of the World Ocean by Sydney Levitus and a few co-workers. The paper has a thorough analysis of the previous 50 years of ocean history.

Ocean heat change, upper 3000m, 1955-1996, from Levitus (2000)

Now and again the large number of joules (the unit of energy) is turned into an equivalent W/m2 absorbed over the time period in question. 1 W/m2 for a year (averaged over the entire surface of the earth) translates into 1.6×10²² J.

But it’s better to get used to the idea that changes in ocean energy are usually expressed in units of 10²² J.
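The conversion is easy to check in a couple of lines of Python, assuming the standard figure of about 5.1×10¹⁴ m² for the earth’s surface area:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s
EARTH_SURFACE_AREA = 5.1e14             # m^2, the whole surface of the earth

joules_per_year = 1.0 * EARTH_SURFACE_AREA * SECONDS_PER_YEAR   # 1 W/m^2 sustained for a year
print(joules_per_year)   # ~1.6e22 J
```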

The graphs above show a lot of variability between oceans but still they all demonstrate a similar warming pattern.

Comparison of OHC in top 3000m, top 800m, top 300m, Levitus (2000)

Here is the data shown (from left to right) as the energy change in the top 3000m, 800m and 300m.

We are used to seeing temperature graphs – even sea surface temperature graphs – that go up and down from year to year. Of course we want to understand exactly why (for example, see Is climate more than weather? Is weather just noise?). It’s easy to think of reasons why that might happen, even in a warming world (or a cooling world) – with one of the main reasons being that heat has moved around in the oceans.

For example, due to ocean currents colder water has been brought to the surface. The measured sea surface temperature would be significantly lower but the total heat hasn’t necessarily changed – because we are only measuring the temperature at one vertical location (the top).

So we wouldn’t expect to see a big yearly decline in total energy.. not if the planet was “warming up”.

So this is quite surprising! See the change downward in the 1980s:

Ocean heat change - global summary, Levitus (2000). Numbers in 10²² J

What caused this drop?

Here’s another fascinating look into the depths that we don’t usually get to see:

Temperature comparison 1750m down. 1970-74 cf 55-59 & 1988-92 cf 70-74

Here we see changes in the deeper North Atlantic in two comparison periods about 15 years apart. (As a minor note, the reason for comparing averaged 5-year periods is the sparsity of data below the surface of the oceans.)

See how the 1990 period has cooled from 15 years earlier.

Levitus, Antonov and Boyer updated their paper in 2005 (reference below).

They comment:

Here we present new yearly estimates for the 1955– 2003 period for the upper 300 m and 700 m layers and pentadal (5-year) estimates for the 1955–1959 through 1994–1998 period for the upper 3000 m of the world ocean.

The heat content estimates we present are based on an additional 1.7 million temperature profiles that have become available as part of the World Ocean Database 2001.

Also, we have processed approximately 310,000 additional temperature profiles since the release of WOD01 and include these in our analyses.

(My emphasis added). Think re-doing GISS and CRU is challenging? And for those who like to know where the data lives, check out the World Ocean Database and World Ocean Atlas Series

Ocean heat change, Levitus (2005)

Here’s a handy comparison of the changing heat when we look at progressively deeper sections of the ocean with the more up-to-date data.

The actual numbers (change in energy) from 1955-1998 were calculated to be:

  • 0-300m:   7×10²² J
  • 0-700m:   11×10²² J
  • 0-3000m:   15×10²² J
  • 1000-3000m:   1.3×10²² J

So the oceans below 1000m only accounted for 9% of the change. This gives an idea of the relative importance of measuring the temperatures as we go deeper.

In their 2005 paper they comment on the question of the early 80’s cooling:

One dominant feature .. is the large decrease in ocean heat content beginning around 1980. The 0–700 m layer exhibits a decrease of approximately 6×10²² J between 1980 and 1983. This corresponds to a cooling rate of 1.2 W/m² (per unit area of Earth’s total surface).

Most of this decrease occurs in the Pacific Ocean.. Most of the net decrease occurred at 5°S, 20°N, and 40°N. Gregory et al. [2004] have cast doubt on the reality of this decrease but we disagree. Inspection of pentadal data distributions at 400 m depth (not shown here) indicates excellent data coverage for these two pentads.

And they also comment:

However, the large decrease in ocean heat content starting around 1980 suggests that internal variability of the Earth system significantly affects Earth’s heat balance on decadal time-scales.


So far so interesting, but as the article is already long enough we will come back to the subject in a later post with the follow up:

How Big Should Error Bars be and the Sad Case of the Expendable Bathythermographs.

And for one reader, in anticipation:

XBT

Update – follow up post – The Real Measure of Global Warming – Part Two – How Big Should Error Bars be, and the Sad Case of the Expendable Bathythermographs

References

Warming of the World Ocean, Levitus et al, Science (2000)

Warming of the World Ocean 1955-2003, Levitus et al, GRL (2005)


There’s a huge amount of attention paid to the air temperature 6ft off the ground all around the continents of the world. And there’s an army of bloggers busy re-analyzing the data.

It seems like one big accident of history. We had them, so we used them, then analyzed them, homogenized them, area-weighted them, re-analyzed them, wrote papers about them and in so doing gave them much more significance than they deserve. Consequently, many people are legitimately confused about whether the earth is warming up.

I didn’t say land surface temperatures should be abolished. Everyone’s fascinated by their local temperature. They should just be relegated to a place of less importance in climate science.

Problems with Air Surface Temperature over Land

If you’ve spent any time following debates about climate, then this one won’t be new. Questions over urban heat island, questions over “value-added” data, questions about which stations and why in each index. And in journal-land, some papers show no real UHI, others show real UHI..

One of the reasons I posted the UHI in Japan article was I hadn’t seen that paper discussed, and it’s interesting in so many ways.

The large number of stations (561) with high quality data revealed a very interesting point. Even though there was a clear correlation between population density and “urban heat island” effect, the correlation was quite low – only 0.44.

Lots of scatter around the trend:

Estimate of actual UHI by referencing the closest rural stations - again categorized by population density

This doesn’t mean the “trend” wasn’t significant – the result had a 99% confidence level. What it does mean is that there was a lot of variability in the results.

The reason for the high variability was explained as micro-climate effects. The very local landscape, including trees, bushes, roads, new buildings, new vegetation, changing local wind patterns..

Interestingly, the main effect of UHI is on night-time temperatures:

Temperature change per decade: time of day vs population density

Take a look at the top left graphic (the others are just the regional breakdown in Japan). Category 6 is the highest population density and category 3 the lowest.

What is it showing?

If we look at the midday to mid-afternoon temperatures then the average temperature change per decade is lowest and almost identical in the big cities and the countryside.

If we look at the late at night to early morning temperatures then average change per decade is very dependent on the population density. Rural areas have experienced very little change. And big cities have experienced much larger changes.

Night time temperatures have gone up a lot in cities.

A quick “digression” into some basic physics..

Why is the Bottom of the Atmosphere Warmer than the Top while the Oceans are Colder at the Bottom?

The ocean surface temperature somewhere on the planet is around 25°C, while the bottom of the ocean is perhaps 2°C.

Ocean temperature vs depth, Grant Bigg, Oceans and Climate (2003)

The atmosphere at the land interface somewhere on the planet is around 25°C, while the top of the troposphere is around -60°C. (Ok, the stratosphere above the troposphere increases in temperature but there’s almost no atmosphere there and so little heat).

Typical temperature profile in the troposphere

The reason why it’s all upside down is to do with solar radiation.

Solar radiation, mostly between wavelengths of 100nm to 4μm, goes through most of the atmosphere as if it isn’t there (apart from O2-O3 absorption of ultraviolet). But the land and sea do absorb solar radiation and, therefore, heat up and radiate longwave energy back out.

See the CO2 series for a little more on this if you wonder why it’s longwave getting radiated out and not shortwave.

The top of the ocean absorbs the sun’s energy, heats up, expands, and floats.. but it was already at the top so nothing changes and that’s why the ocean is mostly “stratified” (although see Predictability? With a Pinch of Salt please.. for a little about the complexity of ocean currents in the global view)

The very bottom of the atmosphere gets warmed up by the ground and expands. So now it’s less dense. So it floats up. Convective turbulence.

This means the troposphere is well-mixed during the day. Everything is all stirred up nicely and so there are more predictable temperatures – less affected by micro-climate. But at night, what happens?

At night, the sun doesn’t shine, the ground cools down very rapidly, the lowest level in the atmosphere absorbs no heat from the ground and it cools down fastest. So it doesn’t expand, and doesn’t rise. Therefore, at night the atmosphere is more stratified. The convective turbulence stops.

But if it’s windy because of larger scale effects in the atmosphere there is more “stirring up”. Consequently, the night-time temperature measured 6ft off the ground is very dependent on these larger scale effects in the atmosphere – quite apart from any tarmac, roads, buildings, air-conditioners or other urban heat island effects (except that tall buildings can prevent local windy conditions).

There’s a very interesting paper by Roger Pielke Sr (reference below) which covers this and other temperature measurement subjects in an accessible summary. (The paper used to be available free from his website but I can’t find it there now).

One of the fascinating observations is the high dependency of measured night temperatures on height above the ground, and on wind speed.

Micro-climate and Macro-climate

Perhaps the micro-climate explains much of the problems of temperature measurement.

But let’s turn to a thought experiment. No research in the thought experiment.. let’s take the decent-sized land mass of Australia. Let’s say large scale wind effects are mostly from the north to south – so the southern part of Australia is warmed up by the hot deserts.

Now we have a change in weather patterns. More wind blows from the south to the north. So now the southern part of Australia is cooled down by Antarctica.

This change will have a significant “weather” impact. And in terms of land-based air surface temperature we will have a significant change which will show up in the average surface temperature (GMST). And yet the energy in the climate system hasn’t changed.

Of course, we expect that these things average themselves out. But do they? Maybe our assumption is incorrect. At best, someone had better start doing a major re-analysis of changing wind patterns vs local temperature measurements. (Someone has probably done it already – as it’s a thought experiment, there’s the luxury of making stuff up.)

How much Energy is Stored in the Atmosphere?

The atmosphere stores 1000x less energy than the oceans. The total heat capacity of the global atmosphere corresponds to that of only a 3.2 m layer of the ocean.
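A rough check of that figure with round numbers (illustrative values only – the quoted 3.2 m comes from more careful bookkeeping):

```python
# Round, illustrative values only
mass_atmosphere = 5.1e18   # kg
cp_air          = 1000.0   # J / (kg K)
ocean_area      = 3.6e14   # m^2, roughly 71% of the earth's surface
rho_seawater    = 1025.0   # kg / m^3
cp_seawater     = 4000.0   # J / (kg K)

equivalent_depth = (mass_atmosphere * cp_air) / (ocean_area * rho_seawater * cp_seawater)
print(equivalent_depth)   # a few metres - the same order as the quoted 3.2 m
```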

So if we want a good indicator – a global mean indicator – of climate change we should be measuring the energy stored in the oceans. This avoids all the problems of measuring the temperature in a highly, and inconsistently, mobile lightweight gaseous substance.

Right now the ocean heat content (OHC) is imperfectly measured. But it’s clearly a much more useful measure of how much the globe is warming up than the air temperature a few feet off the ground.

If the primary measure was OHC with the appropriately-sized error bars, then at least the focus would go into making that measurement more reliable. And no urban heat island effects to worry about.

How to Average

There’s another problem with the current “index” – averaging temperatures, a mix of air temperatures over land and sea surface temperatures. There is a confusing recent paper by Essex (2007) – see the reference below, the journal title alone says it’s not for the faint-hearted – which argues that we can’t meaningfully average global temperatures at all. However, the point made here is a different one.

There is an issue of averaging land and sea surface temperatures (two different substances). But even if we put that to one side there is still a big question about how to average (which I think is part of the point of the confusing Essex paper..)

Here’s a thought experiment.

Suppose the globe is divided into 7 equal sized sections, equatorial region, 2 sub-tropics, 2 mid-latitude regions, 2 polar regions. (Someone with a calculator and a sense of spherical geometry would know where the dividing lines are.. and we might need to change the descriptions appropriately).

Now suppose that in 1999 the average annual temperatures are as follows:

  • Equatorial region: 30°C
  • Sub-tropics: 22°C, 22°C
  • Mid-latitude regions: 12°C, 12°C
  • Polar regions: 0°C, 0°C

So the “global mean surface temperature” = 14°C

Now in 2009 the new numbers are:

  • Equatorial region: 26°C
  • Sub-tropics: 20°C, 20°C
  • Mid-latitude regions: 12°C, 12°C
  • Polar regions: 5°C, 5°C

So the “global mean surface temperature” = 14.3°C – an increase of 0.3°C. The earth has heated up 0.3°C in 10 years!

After all, that’s how you average, right? Well, that’s how we are averaging now.

But if we look at it from more of a thermodynamics point of view we could ask – how much energy is the earth radiating out? And how has the radiation changed?

After all, if we aren’t going to look at total heat, then maybe the next best thing is to use how much energy the earth is radiating to get a better feel for the energy balance and how it has changed.

Energy is radiated in proportion to σT⁴, where T is the absolute temperature (K); 0°C ≈ 273K. And σ is a well-known constant (the Stefan-Boltzmann constant).

Let’s reconsider the values above and average the amount of energy radiated and find out if it has gone up or down. After all, if temperature has gone up by 0.3°C the energy radiated must have gone up as well.

What we will do now is compare the old and new values of effective energy radiated. (And rather than work out exactly what it means in W/m2, we just calculate the σT⁴ value for each region and sum – the arithmetic is reproduced in the sketch after the numbers below.)

  • 1999 value = 2714.78 (W/arbitrary area)
  • 2009 value = 2714.41 (W/arbitrary area – but the same units)
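Here is the whole thought experiment in a few lines of Python. It reproduces the numbers above when the conversion to absolute temperature uses 273 rather than 273.15:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

regions_1999 = [30, 22, 22, 12, 12, 0, 0]   # deg C, seven equal-area regions
regions_2009 = [26, 20, 20, 12, 12, 5, 5]

def mean_temperature(temps):
    return sum(temps) / len(temps)

def summed_emission(temps):
    """Sum of sigma*T^4 over the regions, in W per arbitrary (equal) area."""
    return sum(SIGMA * (t + 273) ** 4 for t in temps)

print(mean_temperature(regions_1999), mean_temperature(regions_2009))  # 14.0 vs ~14.3 C - up
print(summed_emission(regions_1999), summed_emission(regions_2009))    # ~2714.8 vs ~2714.4 - down
```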

Interesting? The “average” temperature went up. The energy radiated went down.

The more mathematically inclined will probably see why straight away. Once you have relationships that aren’t linear, the result doesn’t usually change in proportion to the inputs.

Well, energy radiated out is more important in climate than some “arithmetic average of temperature”.

When Trenberth and Kiehl updated their excellent 1997 paper in 2008 the average energy radiated up from the earth’s surface was changed from 390W/m2 to 396W/m2. The reason? You can’t average the temperature and then work out the energy radiated from that one average (how they did it in 1997). Instead you have to work out the energy radiated all around the world and then average those numbers (how they did it in 2008).

Conclusion

Measuring the temperature of air to work out the temperature of the ground is problematic and expensive to get right. And it requires a lot of knowledge about changing wind patterns at night.

And even if we measure it accurately, how useful is it?

Oceans store heat, the atmosphere is an irrelevance as far as heat storage is concerned. If the oceans cool, the atmosphere will follow. If the oceans heat up, the atmosphere will follow.

And why take a lot of measurements and take an arithmetic average? If we want to get something useful from the surface temperatures all around the globe we should convert temperatures into energy radiated.

And I hope to cover ocean heat content in a follow up post..

Update – check out The Real Measure of Global Warming

References

Detection of urban warming in recent temperature trends in Japan, Fumiaki Fujibe, International Journal of Climatology (2009)

Unresolved issues with the assessment of multidecadal global land surface temperature trends, Roger A. Pielke Sr. et al, Journal of Geophysical Research (2007)

Does a Global Temperature Exist? C. Essex et al, Journal of Nonequilibrium Thermodynamics (2007)


General Circulation Models or Global Climate Models – aka GCMs – often have a bad reputation outside of the climate science community. Some of it isn’t deserved. We could say that models are misunderstood.

Before we look at models on the catwalk, let’s just consider a few basics..

Introduction

In an earlier series, CO2 – An Insignificant Trace Gas we delved into simpler numerical models. These were 1d models. They were needed to solve the radiative transfer equations through a vertical column in the atmosphere. There was no other way to solve the equations – and that’s the case with most practical engineering and physics problems.

Here’s a model from another world:

Stress analysis in an impeller

Here’s a visualization of “finite element analysis” of stresses in an impeller. See the “wire frame” look, as if the impeller has been created from lots of tiny pieces?

In this totally different application, the problem in calculating the mechanical stresses in the unit is that the “boundary conditions” – the strange shape – make solving the equations by the usual methods of re-arranging and substitution impossible. Instead, the strange shape is turned into lots of little cubes. The equations for the stresses in each little cube are easy to write down. So you end up with thousands of “simultaneous” equations. Each cube is next to another cube, so the stress on each common boundary must be the same. The computer program uses some clever maths and lots of iterations to eventually find the solution to the thousands of equations that satisfies the “boundary conditions”.
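To make the “thousands of simultaneous equations solved by iteration” idea concrete, here is a toy one-dimensional analogue in Python – not real finite element code, just the same principle: split the domain into pieces, relate each piece to its neighbours, and iterate until the values are consistent with the fixed boundaries:

```python
# Toy 1-D analogue: split the domain into pieces, write a simple relation
# between each piece and its neighbours, and iterate until the whole set of
# values is consistent with the fixed boundary conditions.
n = 20                      # number of interior nodes
left, right = 100.0, 0.0    # fixed boundary values (a temperature, a displacement..)
values = [0.0] * n

for iteration in range(10000):
    new_values = values[:]
    for i in range(n):
        lower = values[i - 1] if i > 0 else left
        upper = values[i + 1] if i < n - 1 else right
        new_values[i] = 0.5 * (lower + upper)   # each node set from its neighbours
    if max(abs(a - b) for a, b in zip(new_values, values)) < 1e-9:
        break
    values = new_values

print(values[:5])   # a smooth profile between the two boundary values
```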

Finite element analysis is used successfully in lots of areas of practical problem solving – many orders of magnitude simpler, of course, than GCMs.

Uses of Models

One use of models is to predict – no, project – future climate scenarios. That’s the one most people are familiar with. Another is to supply the explanation for recent temperature increases.

But models have more practical uses. They are the only way to provide quantitative analysis of certain situations we want to consider. And they are the only way to test our understanding of the causes of past climate change.

Analysis

On this blog one commenter asked about how much equivalent radiative forcing would be present if all the Arctic sea ice was gone. That is, with no sea ice, there is less reflection of solar radiation. So more absorption of energy – how do we calculate the amount?

You can start with a very basic idea: take the total area of Arctic sea ice as a proportion of the globe, take the local change in albedo from around 0.5-0.8 down to 0.03-0.09, and multiply the two together to find the change in the total albedo of the earth. You can turn that into a change in radiation.
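As a concrete illustration of that back-of-envelope calculation, here is a sketch in Python. Every number in it is an assumed round value for illustration (the ice extent, the albedos, the average solar flux) – not a result:

```python
# Every number here is an assumed round value for illustration, not a result
earth_area   = 5.1e14    # m^2, whole surface of the earth
ice_area     = 1.0e13    # m^2, an assumed Arctic sea ice extent (~10 million km^2)
albedo_ice   = 0.6       # somewhere in the 0.5-0.8 range
albedo_ocean = 0.06      # somewhere in the 0.03-0.09 range
solar_mean   = 342.0     # W/m^2, incoming solar averaged over the whole sphere

# Change in planetary albedo if all that ice became open ocean
delta_albedo = (albedo_ice - albedo_ocean) * ice_area / earth_area
print(delta_albedo * solar_mean)   # extra W/m^2 absorbed, before any angle corrections
```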

But then you think a little bit deeper and want to take into account the fact that solar radiation is at a much lower angle in the Arctic so the first number you got probably overstated the effect. So now, even without any kind of GCM, you can simply use the equation for the reduction in solar insolation due to the effective angle between the sun and the earth:

I = S cos θ – but because this angle, θ, changes with time of day and time of year for any given latitude you have to plug a straightforward equation into a maths program and do a numerical integration. Or write something up in Visual Basic or whatever your programming language of choice is. Even Excel might be able to handle it.

This approach also gives the opportunity to introduce the dependence of the ocean’s albedo on the angle of sunlight (the albedo of ocean with the sun directly overhead is 0.03 and with the sun almost on the horizon is 0.09).
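And here is a sketch of that numerical integration in Python rather than Visual Basic or Excel. The zenith-angle formula is the standard declination/hour-angle expression; the linear interpolation of ocean albedo between 0.03 (sun overhead) and 0.09 (sun on the horizon) is an assumption for illustration:

```python
import math

S0 = 1361.0   # W/m^2, solar "constant" at the top of the atmosphere

def cos_zenith(latitude_deg, day_of_year, hour):
    """Cosine of the solar zenith angle - standard declination / hour-angle formula."""
    lat = math.radians(latitude_deg)
    decl = math.radians(23.44) * math.sin(2.0 * math.pi * (day_of_year - 81) / 365.25)
    hour_angle = math.radians(15.0 * (hour - 12.0))
    mu = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(hour_angle)
    return max(mu, 0.0)   # zero at night

def ocean_albedo(mu):
    """Assumed linear interpolation: 0.03 with the sun overhead, 0.09 on the horizon."""
    return 0.09 - 0.06 * mu

def annual_mean_absorbed(latitude_deg, steps_per_day=48):
    """Annual-mean solar flux absorbed by open ocean at this latitude (no atmosphere)."""
    total, count = 0.0, 0
    for day in range(365):
        for step in range(steps_per_day):
            mu = cos_zenith(latitude_deg, day, 24.0 * step / steps_per_day)
            total += S0 * mu * (1.0 - ocean_albedo(mu))
            count += 1
    return total / count

print(annual_mean_absorbed(80.0))   # high Arctic - much less than the simple estimate implies
print(annual_mean_absorbed(0.0))    # equator, for comparison
```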

This will give you a better result. But now you start thinking about the fact that the sun’s rays are travelling in a longer path through the atmosphere because of the low angle in the sky.. how to incorporate that? Is it insignificant or highly significant? Perhaps including or not including this effect would change the “radiative forcing” by a factor of two? (I have no idea).

So if you wanted to quantify the positive feedback effect of melting ice your “model” starts requiring a lot more specifics. Atmospheric absorption by O2 and O3 depending on the angle of the sun. And the model should include the spatial profile of O3 in the stratosphere (i.e., is there less at the poles, or more).

It’s only by doing these calculations that the effect of sea ice albedo can be reliably quantified. So your GCM is suddenly very useful – essential in fact.

Without it, you would simply be doing the same calculations very laboriously, slowly and less accurately on pieces of paper. A bit like how an accounts department used to work before modern PCs and spreadsheets. Now one person in finance can do the job of 10 or 20 people from a few decades ago. Even without being an accountant, someone can change an exchange rate, or an input cost, on a well-created spreadsheet and find out the change in cash-flow, P&L and so on. Armies of people would have been needed before to work out the answers.

And of course, the beauty of the GCM is that you can play around with other factors and find out what effect they have. The albedo of the ocean also changes with waves. So you can try some limits between albedo with no waves and all waves and see the change. If it’s significant then you need a parameter that tells you how calm or stormy the ocean is throughout the year. And if you don’t have that data, you have some idea of the “error”.

Everyone wants their own GCM now..

Of course, in that thought experiment about sea ice albedo we haven’t calculated a “final” answer. Other effects will come into play (clouds).. But as you can see with this little example, different phenomena can be progressively investigated and reasonably quantified.

Past Climate

Do we understand the causes of past climate change or not? Do the Milankovitch cycles actually explain the end of the last ice age, or the start of it?

This is another area where models are invaluable. Without a GCM, you are just guessing. Perhaps with a GCM you are guessing as well, but just don’t know it.. A topic for another day.

Common Misconception

The idea floats around that models have “positive feedback” plugged into them. Positive feedback, for the few who aren’t familiar with the term: increases in temperature from CO2 induce further changes (like melting Arctic sea ice) that increase temperature further.

Unless it’s done very secretly, this isn’t the case. Any positive feedback is a result of the model’s output, not an input to it.

The models have a mixed bag of:

  • fundamental equations – like conservation of energy, conservation of momentum
  • parameterizations – for equations that are only empirically known, or can’t be easily solved in the “grid” that makes up the 3d “mesh” of the GCM (a toy example is sketched after this list)
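To illustrate the second bullet, here is a toy sketch of what a parameterization looks like – a “bulk” formula for the turbulent heat flux from the surface into the air, something a grid cell hundreds of kilometres across can never resolve eddy by eddy. The particular coefficient values here are illustrative assumptions, not taken from any specific model.

```python
def sensible_heat_flux(wind_speed, t_surface, t_air,
                       rho=1.2, cp=1004.0, c_h=1.5e-3):
    """Toy bulk parameterization of surface sensible heat flux (W/m2).

    Instead of resolving individual turbulent eddies, the flux is written
    as an empirical function of grid-scale quantities:
        flux = rho * cp * C_H * U * (T_surface - T_air)
    rho = air density (kg/m3), cp = heat capacity of air (J/(kg K)),
    c_h = an empirical, tunable exchange coefficient (dimensionless).
    """
    return rho * cp * c_h * wind_speed * (t_surface - t_air)

# e.g. a 5 m/s wind blowing over a surface 2 K warmer than the air above it
print(sensible_heat_flux(wind_speed=5.0, t_surface=290.0, t_air=288.0), "W/m2")
```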

More on these important points in the next post.

“Necessary but Not Sufficient”

A last comment before we see them on the catwalk – the catwalk “retrospective” – is that a model matching the past is a necessary but not sufficient condition for it to match the future. It is, though – or it would be, depending on what we find – a great starting point.

Models On the Catwalk

20th century temperature hindcast vs actual – ensemble

Most people have seen this graph. It comes from the IPCC AR4 (2007).

The IPCC comment:

Models can also simulate many observed aspects of climate change over the instrumental record. One example is that the global temperature trend over the past century (shown in Figure 1) can be modeled with high skill when both human and natural factors that influence climate are included.
And a little later:

In summary, confidence in models comes from their physical basis, and their skill in representing observed climate and past climate changes. Models have proven to be extremely important tools for simulating and understanding climate, and there is considerable confidence that they are able to provide credible quantitative estimates of future climate change, particularly at larger scales. Models continue to have significant limitations, such as in their representation of clouds, which lead to uncertainties in the magnitude and timing, as well as regional details, of predicted climate change. Nevertheless, over several decades of model development, they have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.

Now of course, this is a hindcast. Looking backwards. One way to think about a hindcast is that it’s easy to tweak the results to match the past. That’s partly true and, of course, that’s how the model gets improved – until it can match the past.

The other way to think about the hindcast is that it’s a good way to test the model and find out how accurate it is.

The model gets to “past predict” many different scenarios. So if someone could tweak a model so that it accurately reproduced temperature patterns, rainfall patterns, ocean currents, etc – if it can be tweaked so that everything in the past is accurate – how can that be a bad thing? Also, the model “tweaker” can change a parameter, but that doesn’t give the flexibility that many would think. Let’s suppose you want to run the model to calculate average temperatures from 1980-1999 (see below): you put your start conditions into the model – the 1980 values for temperature and all the other “process variables” – and crank up the model.

It’s not like being able to fix up a painting with a spot of paint in the right place – it’s more like tuning an engine and hoping you win the Dakar rally. After you blow the engine halfway through you get to do a rebuild and guess what to change next. Well, analogies – just illustrations..

Obviously, these results would need to be achieved by equations and parameterizations that matched the real world. If “tweaking” requires non-physical laws then that would create questions. Well, more on this also in later posts.

More model shots.. The top graphic is the one of interest. This is actual temperature (average 1980-1999) in contours with the shading denoting the model error (actual minus model values). Light blue and light orange (or is it white?) are good..

Actual 1980-1999 temperature, with shading denoting model error (top graphic)

The model error is not so bad. Not perfect though. (Note that for some reason, not explained, the land temperature average is over a different time period than sea surface temperatures).

Temperature range:

1980-1999 Temperature range in each location and Model error in temperature range

The standard deviation in temperature gives a measure of the range of temperatures experienced. The colors on the globe indicate the difference between the observed and simulated standard deviation of temperatures.

Simplifying, the light blue and light orange areas are where the models are best at working out the monthly temperature range. The darker colors are where the models are worse. Looks pretty good.

Rainfall:

Actual Rainfall vs Model Rainfall, 1980-99

This one is awesome. Remember that rainfall is calculated by physical processes. Temperature, available water sources, clouds, temperature changes, winds, convection..

Ocean temperature:

Ocean potential temperature and model error, 1957-1990

Ocean potential temperature, what’s that? Think of it as the temperature a parcel of water would have if brought up to the surface without exchanging any heat – the effect of pressure at depth factored out – or read about potential temperature.. Note that the contours are the measurements (averaged over 34 years) and the shaded colors are the deviations, actual minus model. So once again the light blue and light orange are very close to reality, the darker colors are further away from reality.

This one you would expect to be easier to get right than rainfall, but still, looking good.

Conclusion

It’s just the start of the journey into models. There will be more, next we will look at Models Off the Catwalk. So if you have comments it’s perhaps not necessary to write your complete thoughts on past climate, chaos.. Interesting, constructive and thoughtful comments are welcome and encouraged, of course. As are questions.

Hopefully, we can avoid the usual bunfight over whether the last ten years actually match the models’ predictions. Other places are so much better for those “discussions”..

Update – Part Two now published.

Read Full Post »

New Theory Proves AGW Wrong!

I did think about starting this post by pasting in some unrelated yet incomprehensible maths that only a valiant few would recognize, and finishing with:

And so, the theory is overturned

But that might have put off many readers from making it past the equations, which would have been a shame, even though the idea was amusing.

From time to time new theories relating to, and yet opposing, the “greenhouse” effect or something called AGW, get published in a science journal somewhere and make a lot of people happy.

What is the theory of AGW?

If we are going to consider a theory, then at the very least we need to understand what the theory claims. It’s also a plus to understand how it’s constructed, what it relies on and what evidence exists to support the theory. We also should understand what evidence would falsify the theory.

AGW usually stands for anthropogenic global warming – the idea that humans, through burning fossil fuels and other activities, have added to the CO2 in the atmosphere, thereby increasing the “greenhouse” effect and warming the planet. The theory also includes the claims that the temperature rise over the last 100 years or so is largely explained by this effect, and that further increases in CO2 will definitely lead to further significant temperature rises.

So far on this blog I haven’t really mentioned AGW, until now. A few allusions here and there. One very minor non-specific claim at the end of Part Seven.

And yet there is a whole series on CO2 – An Insignificant Trace Gas? where the answer is “no, it’s not insignificant”.

Doesn’t that support AGW? Isn’t the theory of “greenhouse” gases the same thing as AGW?

The concept that some gases in the atmosphere absorb and then re-radiate longwave radiation is an essential component of AGW. It is one foundation. But you can accept the “greenhouse gas” theory without accepting AGW. For example, John Christy, Roy Spencer, Richard Lindzen, and many more.

Suppose during the next 12 months the climate science community all start paying close attention to the very interesting theory of Svensmark & Friis-Christensen, who propose that magnetic flux changes from the sun induce cloud formation and thereby change the climate in much more significant ways than greenhouse gases. Perhaps the climate scientists all got bored with their current work, or perhaps some new evidence or re-analysis of the data showed that it was too strong a theory to ignore. Other explanations for the same data just didn’t hold up.

By the end of that 12 months, suppose that a large part of the climate science community were nodding thoughtfully and saying “this explains all the things we couldn’t explain before and in fact fits the data better than the models which use greenhouse gases plus aerosols etc“.  (It’s a thought experiment..)

Well, the theory of AGW would be, if not dead, “on the ropes”. And yet, the theory that some gases in the atmosphere absorb and re-radiate longwave radiation would still be alive and well. The radiative transfer equations (RTE) as presented in the CO2 series would still hold up. And the explanations as to how much energy CO2 absorbed and re-radiated versus water vapor would not have changed a jot.

That’s because AGW is not “the greenhouse gas” theory. The “greenhouse gas” theory is an important and essential building block for AGW. It’s foundational atmospheric physics.

Many readers know this, of course, but some visitors may be confused over this point. Overturning the “greenhouse” theory would require a different approach. And in turn, that theory is based on a few elements each of which are very strong, but perhaps one could fall, or new phenomena could be found which affected the way these elements came together. It’s all possible.

So it is essential to understand what theory we are talking about. And to understand what that theory actually says, and what in turn, it depends on.

A Digression about the Oceans

Analogies prove nothing, they are illustrations. This analogy may be useful.

Working out the 3d path of the oceans around the planet is a complex task. You can read a little about some aspects of ocean currents in Predictability? With a Pinch of Salt please.. Computer models which attempt to calculate some aspects of the volume of warm water flowing northwards from the tropics to Northern Europe and then the cold water flowing southwards back down below struggle in some areas to get the simulated flow of water anywhere near close to the measured values (at least in the papers I was reading).

Why is that? The models use equations for conservation of momentum, conservation of angular momentum and density (from salinity and temperature). Plus a few other non-controversial theories.

Most people reading that there is a problem probably aren’t immediately thinking:

Oh, it’s got to be angular momentum, never believed in it!

Instead many readers might theorize about the challenges of getting the right starting conditions – temperature, salinity, flow at many points in the ocean. Then being able to apply the right wind-drag, how much melt-water flowing from Greenland, how cold that is.. And perhaps how well-defined the shape of the bottom of the oceans are in the models. How fine the “mesh” is..

We don’t expect momentum and density equations to be wrong. Of course, they are just theories, someone might publish a paper which picks a hole in conservation of momentum.. and angular momentum, well, never really believed in that!

The New Paper that Proves “The Theory” Wrong!

Let’s pick a theory. Let’s pick – solving the radiative transfer equations in a standard atmosphere. In layman’s terms this would include absorption and re-radiation of longwave radiation by various trace gases and the effect on the temperature profile through the atmosphere – we could call it the “greenhouse theory”.

Ok.. so a physicist has a theory that he claims falsifies our theory. Has he proven our “greenhouse theory” wrong?

We establish that, yes, he is a physicist and has done some great work in a related or similar field. That’s a good start. What might we ask next?

Has the physicist published the theory anywhere?

So what we are asking is, has anyone of standing checked the paper? Perhaps the physicist has a good idea but just made a mistake. Used the wrong equation somewhere, used a minus sign where a plus sign should have been, or just made a hash of re-arranging some important equation..

Great, we find out that a journal has published the paper.

So this proves the theory is right?

Not really. It just proves that the editor accepted it for publication. There might be a few reasons why:

  • the editor is also convinced that an important theory has been overturned by the new work and is equally excited by the possibilities
  • the editor thought that it was an interesting new approach to a problem that should see the light of day, even though he thinks it’s unlikely to survive close scrutiny
  • the editor is fed up with being underpaid and overworked and there aren’t enough papers being submitted
  • the editor thinks it will really wind up Gavin Schmidt and this will get him to the front of the queue quicker

Well, people are people. All we know is one more person probably thinks it is a decent approach to a problem. Or was having an off day.

For a theory to become “an accepted theory” (because even the theory of gravity is “a theory” not “a fact”) it usually takes some time to be accepted by the people who understand that field.

Sheer Stubbornness and How to be Right

The fact that it’s not accepted by the community of scientists in that discipline doesn’t mean it’s wrong. People who have put their life’s work behind a theory are not going to be particularly accepting. They might die first!

How scientific theories get overturned is a fascinating subject. Those who don’t mind reading quite turgid work describing a fascinating subject might enjoy The Structure of Scientific Revolutions by Thomas Kuhn. No doubt there are more fun books that others can recommend.

The new theory might be right and it might be wrong. The fact that it’s been published somewhere is only the first step on a journey. If being published was sufficient then what to make of opposing papers that both get published?

Why Papers which Prove “it’s all wrong” are Celebrated

Many people are skeptical of the AGW theory.

Some are skeptical of “greenhouse gas” theory. Some accept that theory in essence but are skeptical of the amount that CO2 contributes to the “greenhouse” gas effect.

Some didn’t realize there was a difference..

If you are skeptical about something and someone with credentials agrees with you, it’s a breath of fresh air! Of course, it’s natural to celebrate.

But it’s also important to be clear.

If, for example, you celebrate Richard Lindzen’s concept as put forward in Lindzen & Choi (2009) then you probably shouldn’t be celebrating Miskolczi’s paper. And if you celebrated either of those, you shouldn’t be celebrating Gerlich & Tscheuschner, because they will be at odds with the previous ones (as far as I can tell). And if you like Roy Spencer’s work, he is at odds with all of these.

Now, please don’t get me wrong, I don’t want to attack anyone’s work. Lindzen and Choi’s paper is very interesting although I had a lot of questions about it and maybe will get an opportunity at some stage to explain my thoughts. And of course, Professor Lindzen is a superstar physicist.

Miskolczi’s paper confused me and I put it aside to try and read it again – update April 2011, some major problems as explained in The Mystery of Tau – Miskolczi and the following two parts. And I thought it might be easier to understand the evidence that would falsify that theory (and then look for it) than lots of equations. Someone just pointed me to Gerlich & Tscheuschner so I’m not far into it. Perhaps it’s the holy grail – update, full of huge errors as explained in On the Miseducation of the Uninformed by Gerlich and Tscheuschner (2009).

And Lindzen and Choi’s paper is in a totally different category, which is why I introduced it. Widely celebrated as proving the death of AGW beyond a shadow of doubt by the illustrious and always amusing debater Christopher Monckton, it isn’t at odds with “greenhouse gas” theory. It is at odds with the feedback resulting from an increase in “radiative forcing” from CO2 and other gases. They are measuring climate sensitivity. And as many know and understand, the feedback or sensitivity is the key issue.

So, if New Theory Proves AGW Wrong is an exciting subject, you will continue to enjoy the subject for many years, because I’m sure there will be many more papers from physicists “proving” the theory wrong.

However, it’s likely that if they are papers “falsifying” the foundational “greenhouse” gas effect – or radiative-convective model of the atmosphere – then probably each paper will also contradict the ones that came before and the ones that follow after.

Well, predictions are hard to make, especially about the future. Perhaps there will be a new series on this blog Why CO2 Really is Insignificant. Watch out for it.

Read Full Post »

We cover some basics in this post. The subject was inspired by one commenter on the blog.

  • When we look at a “radiative forcing” what does it mean?
  • What immediate and long-term impact does it have on temperature?
  • What is the new equilibrium temperature?

Radiative Forcing

The IPCC, drawing on the work of many physicists over the years, states that the radiative forcing from the increase in CO2 to about 380ppm is 1.7 W/m2. You can see how this is all worked out in the series CO2 – An Insignificant Trace Gas.

What is “radiative forcing”? At the top of atmosphere (TOA) there is an effective downward increase in radiation. So more energy reaches the surface than before..

Thermal Lag

If you put very cold water in a pot and heat it on a stove, what happens? Let’s think about the situation where the water doesn’t boil, because we don’t apply that much heat..

Simple Thermal Lag

I used simple concepts here.

T = water temperature, with a starting value of T(t=0) = 5°C

Air temperature, T1 = 5°C

Energy in per second = constant (= 1000 W in this example)

Energy out per second = h × (T - T1), where h is just a constant (h = 20 W/°C in this example)

And the equation for the temperature change is:

Energy, Q = mc × ΔT

m = mass, and c = specific heat capacity (how much heat is required to raise 1 kg of that material by 1°C) – for water this is 4,200 J kg^-1 K^-1. I used 1 kg.

ΔT is the change in temperature (and because we have energy per second, the result is the change in temperature per second)
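Here is a minimal sketch of the same calculation in Python, stepping forward one second at a time with the numbers above – roughly what lies behind the curve in the graph:

```python
# Stepping the heated-pot example forward one second at a time,
# using the values defined above.
q_in = 1000.0        # energy in per second (W)
h = 20.0             # heat-loss constant (W per °C of temperature difference)
m, c = 1.0, 4200.0   # 1 kg of water, specific heat capacity in J/(kg °C)
t_air = 5.0          # air temperature (°C)

T = 5.0              # starting water temperature (°C)
for second in range(1, 1801):                 # run for 30 minutes
    energy_out = h * (T - t_air)              # energy out per second
    T += (q_in - energy_out) / (m * c)        # temperature change in this second
    if second % 360 == 0:
        print(f"after {second // 60:2d} min: {T:5.1f} °C")

# Equilibrium is where energy in = energy out: T = t_air + q_in/h = 55 °C.
# m and c only set how quickly the water gets there.
```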

The simple and obvious points that we all know are:

  • the liquid doesn’t immediately jump to its final temperature
  • as the liquid gets closer to its final temperature the rate of temperature rise slows down
  • as the temperature of the liquid increases it radiates or conducts or convects more energy out, so there will be a new equilibrium temperature reached

In this case, the heat loss is by some kind of simple conduction process, linearly proportional to the temperature difference between the water and the air.

It’s not a real world case but is fairly close – as always, simplifying helps us focus on the key points.

What might be less obvious until attention is drawn to it (then it is obvious) – the final temperature doesn’t depend on the heat capacity of the liquid. That only affects how long it takes to reach its equilibrium – whatever that equilibrium happens to be.

Heating the World

Suppose we take the radiative forcing of 1.7W/m2 and heat the oceans. The oceans are the major store of the climate system’s heat, around 1000x more energy stored than in the atmosphere. We’ll ignore the melting of ice which is a significant absorber of energy.

Ocean mean depth = 4km (4000m)  – the average around the world

Only 70% of the earth’s surface is covered by ocean, and we are going to assume that all of the energy goes into the oceans, so we need to “scale up”: energy into the oceans = 1.7/0.7 = 2.4 W/m2.

The density of ocean water is approximately 1000 kg/m3 (it’s actually a little more because of salinity and pressure..)

Each square meter of ocean has a volume of 4,000 m^3 (thinking about a big vertical column of water), and therefore a mass of 4×10^6 kg.

Q = mc × dT

Q is energy, m is mass, c is specific heat capacity = 4.2 kJ kg^-1 K^-1,
dT = change in temperature

We have energy per second (W/m2), so the change in temperature per second is dT = Q/(mc)

dT per second = 2.4 / (4×10^6 × 4.2×10^3)

= 1.4 × 10^-10 °C/second

dT per year = 0.004 °C/yr

That’s really small! It would take 250 years to heat the oceans by 1°C..
Let’s suppose – more realistically – that only the top “well-mixed” 100m of ocean receives this heat, so we would get (just scaling by 4000m/100m):

dT per year = 0.18 °C per year.

An interesting result, which of course, ignores the increase in heat lost due to increased radiation, and ignores the heat lost to the lower part of the ocean through conduction.
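Here is the same arithmetic in a few lines of Python, so the mixed-layer depth (or any other assumption) can easily be varied:

```python
# Warming rate of a column of ocean receiving a constant extra flux.
seconds_per_year = 3.15e7
c_water = 4200.0                  # specific heat capacity, J/(kg °C)
density = 1000.0                  # kg/m3, roughly
flux = 1.7 / 0.7                  # W/m2, scaled onto the 70% of the surface that is ocean

for depth in (4000.0, 100.0):     # whole ocean depth vs a 100 m mixed layer
    mass_per_m2 = density * depth
    dT_per_year = flux * seconds_per_year / (mass_per_m2 * c_water)
    print(f"depth {depth:6.0f} m:  {dT_per_year:.3f} °C per year")
```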

If we took this result and plotted it on a graph the temperatures would just keep going up!

Calculating the new Equilibrium Temperature

The climate is slightly complicated. How do we work out the new equilibrium temperature?

Do we think about the heat lost from the surface of the oceans into the atmosphere through conduction, convection and radiation? Then what happens to it in the atmosphere? Sounds tricky..

Fortunately, we can take a very simple view of planet earth and say energy in = energy out. This is the “billiard ball” model of the climate, and you can see it explained in CO2 – An Insignificant Trace Gas – Part One and subsequent posts.

What this great and simple model lets us do is compare energy in and out at the top of atmosphere (TOA). Which is why “radiative forcing” from CO2 is “published” at TOA. It helps us get the big picture.

Energy radiated from a body per unit area per second is proportional to T^4, where T is temperature in Kelvin (absolute temperature). Energy radiated from the earth has to be balanced by the energy we absorb from the sun.

This lets us do a quick comparison, using some approximate numbers.

Energy absorbed from the sun, averaged over the surface of the earth, we’ll call it Pold = 239 W/m2.

Surface temperature, we’ll call it Told = 15°C = 288K

If we add 1.7W/m2 at TOA what does this do to temperature? Well, we can simply divide the old and new values, making the equation slightly easier..

(Tnew/Told)^4 = Pnew/Pold

So Tnew = 288 × ((239 + 1.7)/239)^(1/4)

Therefore, Tnew = 288.5K or 15.5°C   – a rise of 0.5°C

I don’t want to claim this represents some kind of complete answer, but just for some element of completeness, if we redo the calculation with the radiative forcing for all of the “greenhouse” gases, excluding water vapor, we have a radiative forcing of 2.4W/m2.

Tnew = 288.7 or 15.7°C   – a rise of 0.7°C.
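Those two results are easy to check (or extend to other forcings) with a couple of lines of Python – the same “divide the old and new values” calculation as above:

```python
# Equilibrium surface temperature change from the simple energy-balance ratio:
# (T_new / T_old)^4 = (P_old + forcing) / P_old
P_old = 239.0      # absorbed solar radiation, W/m2
T_old = 288.0      # roughly 15 °C, in kelvin

for forcing in (1.7, 2.4):     # CO2 alone, and all long-lived "greenhouse" gases
    T_new = T_old * ((P_old + forcing) / P_old) ** 0.25
    print(f"forcing {forcing} W/m2 -> T_new = {T_new:.1f} K (a rise of {T_new - T_old:.1f} °C)")
```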

(Note for the purists, I believe the only way to actually calculate the old and new surface temperature is using the complete radiative transfer equations, but the results aren’t so different)

Conclusion

The aim of this post is to clarify a few basics, and in the process we looked at how quickly the oceans might warm as a result of increased radiative forcing from CO2.

It does demonstrate that depending on how well-mixed the oceans are, the warming can be extremely slow (250 years for 1°C rise) or very quick (5 years for 1°C rise).

So from the information presented so far, temperatures we currently experience at the surface might be the new equilibrium from increased CO2, or a long way from it – this post doesn’t address that huge question! Or any feedbacks.

What we ignored in the calculation of temperature rise was the increased energy lost as the temperature rose – which would slow the rise down (like the heated water in the graph). But at least it’s possible to get a starting point.

We can also see a rudimentary calculation of the final increase in temperature – the new equilibrium – as a result of this forcing (we are ignoring any negative or positive feedbacks).

And the new equilibrium doesn’t depend on the thermal lag of the oceans.

Of course, calculations of feedback effects in the real climate might find thermal lag parameters to be extremely important.

Read Full Post »

In the series CO2 – An Insignificant Trace Gas? we concluded (in Part Seven!) with the values of “radiative forcing” as calculated for the current level of CO2 compared to pre-industrial levels.

That value is essentially a top of atmosphere (TOA) increase in longwave radiation. The value from CO2 is 1.7 W/m2. And taking into account all of the increases in trace gases (but not water vapor) the value totals 2.4 W/m2.

Comparing Radiative Forcing

The concept of radiative forcing is a useful one because it allows us to compare different first-order effects on the climate.

The effects aren’t necessarily directly comparable because different sources have different properties – but they do allow a useful first-pass quantitative comparison. When we talk about heating something, a watt is a watt regardless of its source.

But if we look closely at the radiative forcing from CO2 and solar radiation – one is longwave and one is shortwave. Shortwave radiation creates stratospheric chemical effects that we won’t get from CO2. Shortwave radiation is distributed unevenly – days and nights, equator and poles – while CO2 radiative forcing is more evenly distributed. So we can’t assume that the final effects of 1 W/m2 increase from the two sources are the same.

But it helps to get some kind of perspective. It’s a starting point.

The Solar “Constant”, now more accurately known as Total Solar Irradiance

TSI has only been directly measured since 1978 when satellites went into orbit around the earth and started measuring lots of useful climate values directly. Until it was measured, solar irradiance was widely believed to be constant.

Prior to 1978 we have to rely on proxies to estimate TSI.

Earth from Space – pretty but irrelevant..

Accuracy in instrumentation is a big topic but very boring:

  • absolute accuracy
  • relative accuracy
  • repeatability
  • long term drift
  • drift with temperature

These are just a few of the “interesting” factors along with noise performance.

We’ll just note that absolute accuracy – the actual number – isn’t the key parameter of the different instruments. What they are good at measuring accurately is the change. (The differences in the absolute values are up to 7 W/m2, and absolute uncertainty in TSI is estimated at approximately 4 W/m2).

So here we see the different satellite measurements over 30+ years. The absolute results here have not been “recalibrated” to show the same number:

Total Solar Irradiation, as measured by various satellites

We can see the solar cycles as the 11-year cycle of increase and decrease in TSI.

One item of note is that the change in annual mean TSI from minimum to maximum of these cycles is less than 0.08%, or less than 1.1 W/m2.

In The Earth’s Energy Budget we looked at “comparing apples with oranges” – why we need to convert the TSI or solar “constant” into the absorbed radiation (as some radiation is reflected) averaged over the whole surface area.

This means a 1.1 W/m2 cyclic variation in the solar constant is equivalent to 0.2 W/m2 over the whole earth when we are comparing it with say the radiative forcing from extra CO2 (check out the Energy Budget post if this doesn’t seem right).

How about longer term trends? These are harder to work out, as any underlying change is of the same order as the instrument uncertainties. One detailed calculation comparing the solar minimum in 1996 with the minimum in 1986 (by R.C. Willson, 1997) showed an increase of 0.5 W/m2 (converting that to a “radiative forcing”: about 0.09 W/m2). Another detailed calculation over that same period showed no change.
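The conversion itself is just a couple of lines – spread the change over the whole sphere (divide by 4) and remove the roughly 30% that is reflected. The albedo value here is my assumption for illustration:

```python
# Convert a change in TSI (the solar "constant") into an equivalent
# globally averaged radiative forcing.
def tsi_change_to_forcing(delta_tsi, albedo=0.3):
    return delta_tsi * (1.0 - albedo) / 4.0

# 1.1 W/m2 solar-cycle swing, and Willson's 0.5 W/m2 minimum-to-minimum change
for delta in (1.1, 0.5):
    print(f"dTSI = {delta} W/m2  ->  forcing ~ {tsi_change_to_forcing(delta):.2f} W/m2")
```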

Here’s a composite from Fröhlich & Lean (2004) – the first graphic is the one of interest here:

Composite TSI from satellite, 1978-2004, Fröhlich & Lean

As you can see, their reanalysis of the data concluded that there hasn’t been any trend change during the period of measurement.

Proxies

What can we work out without satellite data – prior to 1978?

The Sun

The historical values of TSI have to be estimated from other data. Solanki and Fligge (1998) used observational data on sunspots and faculae (“bright spots”), primarily from the Royal Greenwich Observatory, dating back to 1874. They worked out a good correlation between the TSI values from the modern satellite era and the observational data, and thereby calculated the historical TSI:

Reconstruction of changes in TSI, Solanki & Fligge

As they note, these kind of reconstructions all rely on the assumption that the measured relationships have remained unchanged over more than a century.

They comment that depending on the reconstructions, TSI averaged over its 11-year cycle has varied by 0.4-0.7W/m2 over the last century.

Then they do another reconstruction which also includes changes that take place during the “quiet sun” periods – because the reconstruction above is derived from observations of active regions – drawing in part on data comparing the sun to similar stars. They comment that this method has more uncertainty, although it should be more complete:

Second reconstruction of TSI back to 1870, Solanki & Fligge

This method generates an increase of 2.5 W/m2 between 1870 and 1996 – which again we have to convert, giving a radiative forcing of about 0.4 W/m2.

The IPCC summary (TAR 2001), p.382, provides a few reconstructions for comparison, including the second from Solanki and Fligge:

Reconstructions of TSI back to 1600, IPCC (2001)

And then bring some sanity:

Thus knowledge of solar radiative forcing is uncertain, even over the 20th century and certainly over longer periods.

They also describe our level of scientific understanding (of the pre-1978 data) as “very low”.

The AR4 (2007) lowers some of the historical changes in TSI commenting on updated work in this field, but from an introductory perspective the results are not substantially changed.

Second Order Effects

This post is all about the first-order forcing due to solar radiation – how much energy we receive from the sun.

There are other theories which rely on relationships like cloud formation as a result of fluctuations in the sun’s magnetic flux – Svensmark & Friis-Christensen. These would be described as “second-order” effects – or feedbacks.

These theories are for another day.

First of all, it’s important to establish the basics.

Conclusion

We can see from satellite data that the cyclic changes in Total Solar Irradiance over the last 30 years are small. Any trend changes are small enough that they are hard to separate from instrument errors.

Once we go back further, it’s an “open field”. Choose your proxies and reconstruction methods and wide ranging numbers are possible.

When we compare the known changes (since 1978) in TSI we can directly compare the radiative forcing with the “greenhouse” effect and that is a very useful starting point.

References

Solar radiative output and its variability: evidence and mechanisms, Fröhlich & Lean, Astrophysics Review (2004)

Solar Irradiance since 1874 Revisited, Solanki & Fligge, Geophysical Research Letters (1998)

Total Solar Irradiance Trend During Solar Cycles 21 and 22, R.C. Willson, Science (1997)

Read Full Post »

Recap

In Part Five we finally got around to seeing our first calculations by looking at two important papers which used “numerical methods” – 1-dimensional models – to calculate the first order effect from CO2. And to separate out the respective contribution of water vapor and CO2.

Both papers were interesting in their own way.

The 1978 Ramanathan and Coakley paper because it is often cited as the first serious calculation. And it’s good to see the historical perspective, as many think scientists have been looking around for an explanation of rising temperatures and “hit on” CO2. Instead, the radiative effect of CO2, other trace gases and water vapor has been known for a very long time. But although the physics was “straightforward”, solving the equations was more challenging.

The 1997 Kiehl and Trenberth paper was discussed because they separate out water vapor from CO2 explicitly. They do this by running the numerical calculations with and without various gases and seeing the effects. We saw that water vapor contributed around 60% with CO2 around 26%.

I thought the comparison of CO2 and water vapor was useful to see because it’s common to find people nodding to the idea that longwave from the earth is absorbed and re-emitted back down (the “greenhouse” effect) – but then saying something like:

Of course, water vapor is 95%-98% of the whole effect, so even doubling CO2 won’t really make much difference

The question to ask is – how did they work it out? Using the complete radiative transfer equations in a 1-d numerical model with the spectral absorption of each and every gas?

Of course, everyone’s entitled to their opinion.. it’s just not necessarily science.

The “Standardized Approach”

In the calculations of the “greenhouse” effect for CO2, different scientists approached the subject slightly differently. Clear skies and cloudy skies, for example. Different atmospheric profiles. Some feedback from the stratosphere (higher up in the atmosphere), or not. Some feedback from water vapor, or not. Different band models (see Part Four). And also different comparison points of CO2 concentrations.

As the subject of the exact impact of CO2 – prior to any feedbacks – became of more and more concern, a lot of effort went into standardizing the measurement/simulation conditions.

One of the driving forces behind this was the fact that many different GCMs (Global Climate Models) produced different results and it was not known how much of this was due to variations in the “first order forcing” of CO2. (“First order forcing” means the effect before any feedbacks are taken into account). So different models had to be compared and, of course, this required some basis of comparison.

There was also the question about how good band models were in action compared with line by line (LBL) calculations. LBL calculations require a huge computational effort because the minutiae of every absorption line from every gas has to be included. Like this small subset of the CO2 absorption lines:

CO2 spectral lines from one part of the 15um band

From "Handbook of Atmospheric Sciences", Hewitt & Jackson 2003

Band models are much simpler, and therefore widely used in GCMs. Band models are “parameterizations”, where a more complex effect is turned into a simpler equation that is easier to solve.

Averaging

Does one calculation of CO2 radiative forcing from an “average atmosphere” give us the real result for the whole planet?

Asking the question another way: if we calculate the CO2 radiative forcing at all points around the globe and average the results, do we get the same value as a single calculation for the “average atmosphere”?

This subject was studied in a 1998 paper: Greenhouse gas radiative forcing: Effects of average and inhomogeneities in trace gas distribution, by Freckleton et al. They ran the same calculations with 1 profile (the “standard atmosphere”), 3 profiles (one tropical plus a northern and southern extra-tropical “standard atmosphere”), and then by resolving the globe into ever finer sections.

The results were averaged (except for the single calculation, of course) and plotted. It was clear from this research that using the average of 3 profiles – tropical, northern and southern extra-tropics – was sufficient, giving only 0.1% error compared with averaging calculations at 2.5° resolution in latitude.

The Standard Result

The standard definition of radiative forcing is:

The change in net (down minus up) irradiance (solar plus longwave; in W/m2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values.

What does it mean? The extra incoming energy flow at the top of atmosphere (TOA) without feedbacks from the surface or the troposphere (lower part of the atmosphere). The stratospheric adjustment is minor and happens almost immediately (there are no oceans to heat up or ice to melt in the stratosphere unlike at the earth’s surface). Later note added – “almost immediately” in the context of the response of the surface, but the timescale is the order of 2-3 months.

The common CO2 doubling scenario, from pre-industrial, is:

278ppm -> 556 ppm

The comparison to the present day, of course, depends on when the measurement is made, but it most commonly uses the 278ppm value as the baseline.

IPCC AR4 (2007)  pre-industrial to the present day (2005),  1.7 W/m2

IPCC AR4 (2007)  doubling CO2,  3.7 W/m2

Just for interest.. Myhre et al (1998) calculated the effects of CO2 – and 12 other trace gases – from the current increases in those gases (to 1995). They calculated separate results for clear sky and cloudy sky. Clear sky results are useful in comparisons between models as clouds add complexity and there are more assumptions to untangle.

They also ran the calculations using the very computationally expensive Line by Line (LBL) absorption, and compared with a Narrow Band Model (NBM) and Broad Band Model (BBM).

CO2 current (1995) compared to pre-industrial, clear sky – 1.76W/m2, cloudy sky 1.37W/m2

(The NBM and BBM were within a few percent of the LBL calculations).

There are lots of other papers looking at the subject. All reach similar conclusions, which is no surprise for such a well-studied subject.

Where does the IPCC Logarithmic Function come from?

The 3rd assessment report (TAR) and the 4th assessment report (AR4) have an expression showing a relationship between CO2 increases and “radiative forcing” as described above:

ΔF = 5.35 ln (C/C0)

where:

C0 = pre-industrial level of CO2 (278ppm)
C = level of CO2 we want to know about
ΔF = radiative forcing at the top of atmosphere.

(And for non-mathematicians, ln is the “natural logarithm”).

This isn’t a derived expression which comes from simplifying down the radiative transfer equations in one fell swoop!

Instead, it comes from running lots of values of CO2 through the standard 1d model we have discussed, and plotting the numbers on a graph:

Radiative Forcing vs CO2 concentration, Myhre et al (1998)

From New estimates of radiative forcing due to well mixed greenhouse gases, Myhre et al, Geophysical Research Letters (1998).

The graph reasonably closely approximates to the equation above. It’s very useful because it enables people to do a quick calculation.

E.g. CO2 = 380ppm, ΔF = 1.7W/m2

CO2 = 556ppm, ΔF = 3.7 W/m2

Easy.
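As a quick check of those numbers, the expression takes only a few lines of Python:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    # The IPCC TAR/AR4 simplified expression, fitted to the 1-d model results
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(380), 2))   # ~1.7 W/m2
print(round(co2_forcing(556), 2))   # ~3.7 W/m2
```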

Benefit of Using “Radiative Forcing” at TOA (top of atmosphere)

First of all, we can use this number to calculate a very basic temperature increase at the surface. Prior to any feedbacks – or can we? [added note, James McC kindly pointed out that my calculation of temperature is wrong and so maybe it is too simplistic to use this method when there is an absorbing and re-transmitting atmosphere in the way. I abused this approach myself rather than following any standard work. All errors are mine in this bit – we’ll let it stand for interest. See James McC’s comments in About this Blog)

In Part One of this series, in the maths section at the end (to spare the non-mathematically inclined), we looked at the Stefan-Boltzmann equation, which shows the energy radiated from any “body” at a given temperature (in K):

Total energy per unit area per unit time, j = εσT^4

where ε = emissivity (how close to a “blackbody”: 0-1), σ = 5.67×10^-8 and T = absolute temperature (in K).

The handy thing about this equation is that when the earth’s climate is in overall equilibrium, the energy radiated out will match the incoming energy. See The Earth’s Energy Budget – Part Two and also Part One might be of interest.

We can use the equations to do a very simple calculation of what ΔF = 3.7W/m2 (doubling CO2) means in terms of temperature increase. It’s a rough and ready approach. It’s not quite right, but let’s see what it churns out.

Take the solar incoming absorbed energy of 239W/m2 (see The Earth’s Energy Budget – Part One) and comparing the old  (only solar) – and new (solar + radiative forcing for doubling CO2 values), we get:

Tnew^4/Told^4 = (239 + 3.7)/239

where Tnew = the temperature we want to determine, Told = 15°C or 288K

We get Tnew = 289.1K or a 1.1°C increase.

Well, the full mathematical treatment calculates a 1.2°C increase – prior to any feedbacks – so it’s reasonably close.

[End of dodgy calculation that when recalculated is not close at all. More comments when I have them].

Secondly, we can compare different effects by comparing their radiative forcing. For example, we could compare a different “greenhouse” gas. Or we could compare changes in the sun’s solar radiation (don’t forget to compare “apples with oranges” as explained in The Earth’s Energy Budget – Part One). Or albedo changes which increase the amount of reflected solar radiation.

What’s important to understand is that the annualized globalized TOA W/m2 forcing for different phenomena will have subtly different impacts on the climate system, but the numbers can be used as a “broad-brush” comparison.

Conclusion

We can have a lot of confidence that the calculations of the radiative forcing of CO2 are correct. The subject is well-understood and many physicists have studied the subject over many decades. (The often cited “skeptics” such as Lindzen, Spencer, Christy all believe these numbers as well). Calculation of the “radiative forcing” of CO2 does not have to rely on general circulation models (GCMs), instead it uses well-understood “radiative transfer equations” in a “simple” 1-dimensional numerical analysis.

There’s no doubt that CO2 has a significant effect on the earth’s climate – 1.7W/m2 at top of atmosphere, compared with pre-industrial levels of CO2.

What conclusion can we draw about the cause of the 20th century rise in temperature from this series? None so far! How much will temperature rise in the future if CO2 keeps increasing? We can’t yet say from this series.

The first step in a scientific investigation is to isolate different effects. We can now see the effect of CO2 in isolation and that is very valuable.

Although there will be one more post specifically about “saturation” – this is the wrap up.

Something to ponder about CO2 and its radiative forcing.

If the sun had provided an equivalent increase in radiation over the 20th century to a current value of 1.7W/m2, would we think that it was the cause of the temperature rises measured over that period?

Update – CO2 – An Insignificant Trace Gas? Part Eight – Saturation is now published

References

Greenhouse gas radiative forcing: Effects of average and inhomogeneities in trace gas distribution, Freckleton et al, Q.J.R. Meteorological Society (1998)

New estimates of radiative forcing due to well mixed greenhouse gases, Myhre et al, Geophysical Research Letters (1998)


Read Full Post »
