Archive for the ‘Measurement’ Category

Gary Thompson at American Thinker recently produced an article The AGW Smoking Gun. In the article he takes three papers and claims to demonstrate that they are at odds with AGW.

A key component of the scientific argument for anthropogenic global warming (AGW) has been disproven. The results are hiding in plain sight in peer-reviewed journals.

The article got discussed on Skeptical Science, with the article Have American Thinker Disproven Global Warming? although the blog article really just covered the second paper. The discussion was especially worth reading because Gary Thompson joined in and showed himself to be a thoughtful and courteous fellow.

He did claim in that discussion that:

First off, I never stated in the article that I was disproving the greenhouse effect. My aim was to disprove the AGW hypothesis as I stated in the article “increased emission of CO2 into the atmosphere (by humans) is causing the Earth to warm at such a rate that it threatens our survival.” I think I made it clear in the article that the greenhouse effect is not only real but vital for our planet (since we’d be much cooler than we are now if it didn’t exist).

However, the papers he cites are really demonstrating the reality of the “greenhouse” effect. If his conclusions – different from the authors of the papers – are correct, then he has demonstrated a problem with the “greenhouse” effect, which is a component – a foundation – of AGW.

This article will cover the first paper which appears to be part of a conference proceeding: Changes in the earth’s resolved outgoing longwave radiation field as seen from the IRIS and IMG instruments by H.E. Brindley et al. If you are new to understanding the basics on longwave and shortwave radiation and absorption by trace gases, take a look at CO2 – An Insignificant Trace Gas?

Take one look at a smoking gun and you know it’s been fired. One look at a paper on a complex subject like atmospheric physics and you might easily jump to the wrong conclusion. Let’s hope I haven’t fallen into the same trap..

Even their mother couldn't tell them apart

The Concept Behind the Paper

The paper examines the difference between satellite measurements of longwave radiation from 1970 and 1997. The measurements are only for clear sky conditions, to remove the complexity associated with the radiative effects of clouds (they did this by removing the measurements that appeared to be under cloudy conditions). And the measurements are in the Pacific, with the data presented divided between east and west. Data is from April-June in both cases.

The Measurement

The spectral data is from 7.1 – 14.1 μm (1400 cm-1 – 710 cm-1 using the convention of spectral people, see note 1 at end). Unfortunately, the measurements closer to the 15μm band had too much noise so were not reliable.

Their first graph shows the difference of 1997 – 1970 spectral results converted from W/m2 into Brightness Temperature (the equivalent blackbody radiation temperature). I highlighted the immediate area of concern, the “smoking gun”:

Spectral difference - 1997 less 1970 over East and West Pacific, Brindley

Note first that the 3 lines on each graph correspond to the measurement (middle) and the error bars either side.

I added wavelength in μm under the cm-1 axis for reference.
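As an aside, “brightness temperature” is simply the result of inverting the Planck function at each wavenumber – the temperature a blackbody would have to be at to emit the observed radiance. A minimal sketch of the conversion (my own illustration, not code from the paper):

```python
import numpy as np

H = 6.626e-34    # Planck constant (J s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavenumber_cm, temp_k):
    """Blackbody spectral radiance in W/(m^2 sr cm^-1) at a given wavenumber."""
    nu = wavenumber_cm * 100.0                                   # cm^-1 -> m^-1
    radiance_per_m = 2 * H * C**2 * nu**3 / np.expm1(H * C * nu / (KB * temp_k))
    return radiance_per_m * 100.0                                # per m^-1 -> per cm^-1

def brightness_temperature(wavenumber_cm, radiance_cm):
    """Invert the Planck function: the blackbody temperature that emits this radiance."""
    nu = wavenumber_cm * 100.0
    radiance_per_m = radiance_cm / 100.0
    return H * C * nu / (KB * np.log1p(2 * H * C**2 * nu**3 / radiance_per_m))

wn = 700.0                                                       # cm^-1, i.e. 10,000/700 ~ 14.3 um
print(brightness_temperature(wn, planck_radiance(wn, 260.0)))    # round-trips to ~260 K
```

A difference plot like the one above is then just the brightness temperature of the 1997 spectrum minus that of the 1970 spectrum at each wavenumber.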

What Gary Thompson draws attention to is the fact that OLR (outgoing longwave radiation) has increased even in the 13.5+μm range, which is where CO2 absorbs radiation – and CO2 has increased during the period in question (from about 325ppm to 363ppm). Surely, with an increase in CO2 there should be more absorption, and therefore the 1997 minus 1970 difference should be negative for the observed 13.5μm-14.1μm wavelengths.

One immediate thought without any serious analysis or model results is that we aren’t quite into the main absorption of the CO2 band, which is 14 – 16μm. But let’s read on and understand what the data and the theory are telling us.

Analysis

The key question we need to ask before we can draw any conclusions is what is the difference between the surface and atmosphere in these two situations?

We aren’t comparing the global average over a decade with an earlier decade. We are comparing 3 months in one region with 3 months 27 years earlier in the same region.

Herein seems to lie the key to understanding the data..

For the authors of the paper to assess the spectral results against theory they needed to know the atmospheric profile of temperature and humidity, as well as changes in the well-studied trace gases like CO2 and methane. Why? Well, the only way to work out the “expected” results – or what the theory predicts – is to solve the radiative transfer equations (RTE) for that vertical profile through the atmosphere. Solving those equations, as you can see in CO2 – Part Three, Four and Five – requires knowledge of the temperature profile as well as the concentration of the various gases that absorb longwave radiation. This includes water vapor and, therefore, we need to know humidity.
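To see in miniature why the profile matters, here is a very crude gray-atmosphere sketch (my own toy illustration, nothing like the band models the authors actually used): each layer absorbs a fraction of the upwelling longwave flux and re-emits at its own, colder, temperature, so the radiation reaching the top depends on the surface temperature, the layer temperatures and the amount of absorber.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def toy_olr(t_surface, layer_temps, layer_absorptivities):
    """Upwelling longwave flux at the top of a stack of gray absorbing layers."""
    flux = SIGMA * t_surface**4                        # flux leaving the surface
    for t, a in zip(layer_temps, layer_absorptivities):
        flux = (1 - a) * flux + a * SIGMA * t**4       # absorb a fraction, re-emit at the layer temperature
    return flux

# Warmer surface and atmosphere, same absorber -> OLR goes up
print(toy_olr(299.0, [280, 255, 230], [0.3, 0.3, 0.3]))
print(toy_olr(300.5, [281, 256, 231], [0.3, 0.3, 0.3]))
# Same temperatures, more absorber -> OLR goes down
print(toy_olr(299.0, [280, 255, 230], [0.35, 0.35, 0.35]))
```

Both effects are present in the 1997 minus 1970 comparison, which is why the expected spectrum can only be calculated once the temperature and humidity profiles are known.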

Change in Atmospheric Temperature Profile, Brindley

I’ve broken up their graphs, this is temperature change – the humidity graphs are below.

Now it is important to understand where the temperature profiles came from. They came from model results, by using the recorded sea surface temperatures during the two periods. The temperature profiles through the atmosphere are not usually available with any kind of geographic and vertical granularity, especially in 1970. This is even more the case for humidity.

Note that the temperature – the real sea surface temperature – in 1997 for these 3 months is higher than in 1970.

Higher temperature = higher radiation across the spectrum of emission.

Now the humidity:

Change in Humidity Profile through the atmosphere, Brindley

The top graph is change in specific humidity – how many grams of water vapor per kg of air. The bottom is change in relative humidity. Not relevant to the subject of the post, but you can see how even though the difference in relative humidity is large high up in the atmosphere it doesn’t affect the absolute amount of water vapor in any meaningful way – because it is so cold high up in the atmosphere. Cold air cannot hold as much water vapor as warm air.

It’s no surprise to see higher humidity when the sea temperature is warmer. Warmer air has a higher ability to absorb water vapor, and there is no shortage of water to evaporate from the surface of the ocean.

Model Results of Expected Longwave Radiation

Now here are some important graphs which initially can be a little confusing. It’s worth taking a few minutes to see what these graphs tell us. Stay with me..

Top - model results not including trace gases; Bottom - model results including all effects

The top graph. The bold line is the model results of expected longwave radiation – not including the effect of CO2, methane, etc – but taking into account sea surface temperature and modeled atmospheric temperature and humidity profiles.

This calculation includes solving the radiative transfer equations through the atmosphere (see CO2 – An Insignificant Trace Gas? Part Five for more explanation on this, and you will see why the vertical temperature profile through the atmosphere is needed).

The breakdown is especially interesting – the three fainter lines. Notice how the two fainter lines at the top are the separate effects of the warmer surface and the higher atmospheric temperature creating more longwave radiation. Now the 3rd fainter line below the bold line is the effect of water vapor. As a greenhouse gas, water vapor absorbs longwave radiation through a wide spectral range – and therefore pulls the longwave radiation down.

So the bold line in the top graph is the composite of these three effects. Notice that without any CO2 effect in the model, the graph towards the left edge trends up: 700 cm-1 to 750 cm-1 (or 13.5μm to 14.1μm). This is because water vapor is absorbing a lot of radiation to the right (wavelengths below 13.5μm) – dragging that part of the graph proportionately down.

The bottom graph. The bold line in the bottom graph shows the modeled spectral results including the effects of the long-term changes in the trace gases CO2, O3, N2O, CH4, CFC11 and CFC12. (The bottom graph also confuses us by including some inter-annual temperature changes – the fainter lines – let’s ignore those).

Compare the top and bottom bold graphs to see the effect of the trace gases. In the middle of the graph you see O3 at 1040 cm-1 (9.6μm). Over on the right around 1300cm-1 you see methane absorption. And on the left around 700cm-1 you see the start of CO2 absorption, which would continue on to its maximum effect at 667cm-1 or 15μm.

Of course we want to compare this bottom graph – the full model results – more easily with the observed results. And the vertical axes are slightly different.

First for completeness, the same graphs for the West Pacific:

Model results for West Pacific

Let’s try the comparison of observation to the full model, it’s slightly ugly because I don’t have source data, just a graphics package to try and line them up on comparable vertical axes.

Here is the East Pacific. Top is observed with (1 standard deviation) error bars. Bottom is model results based on: observed SST; modeled atmospheric profile for temperature and humidity; plus effect of trace gases:

Comparison on similar vertical axes - top, observed; bottom, model

Now the West Pacific:

Comparison, West Pacific, Observed (top) vs Model (bottom)

We notice a few things.

First, the model and the results aren’t perfect replicas.

Second, the model and the results both show a very similar change in the profile around methane (right “dip”), ozone (middle “dip”) and CO2 (left “dip”).

Third, the models show a negative change in brightness temperature (about -1K) at 700 cm-1, whereas the actual result for the East Pacific is around +1K and for the West Pacific around -0.5K. The 1 standard deviation error bars on the measurements include the model results – easily for the West Pacific and only just for the East Pacific.

It appears to be this last observation that has prompted the article in American Thinker.

Conclusion

Hopefully, those who have taken the time to review:

  • the results
  • the actual change in surface and atmospheric conditions between 1970 and 1997
  • the models without trace gas effects
  • the models with trace gas effects

might reach a different conclusion to Gary Thompson.

The radiative transfer equations as part of the modeled results have done a pretty good job of explaining the observed results but aren’t exactly the same. However, if we don’t include the effect of trace gases in the model we can’t explain some of the observed features – just compare the earlier graphs of model results with and without trace gases.

It’s possible that the biggest error is the water vapor effect not being modeled well. If you compare observed vs model (the last 2 sets of graphs) from 800cm-1 to 1000cm-1 there seems to be a “trend line” error. The effect of water vapor has the potential to cause the most variation for two reasons:

  • water vapor is a strong greenhouse gas
  • water vapor concentration varies significantly vertically through the atmosphere and geographically (due to local vaporization, condensation, convection and lateral winds)

It’s also the case that the results for the radiative transfer equations will have a certain amount of error using “band models” compared with the “line by line” (LBL) codes for all trace gases. (A subject for another post but see note 2 below). It is rare that climate models – even just 1d profiles – are run with LBL codes because it takes a huge amount of computer time due to the very detailed absorption lines for every single gas.

The band models get good results but not perfect – however, they are much quicker to run.

Comparing two spectra from two different real world situations where one has higher sea surface temperatures and declaring the death of the model seems premature. Perhaps Gary ran the RTE calculations through a pen and paper/pocket calculator model like so many others have done.

There is a reason why powerful computers are needed to solve the radiative transfer equations. And even then they won’t be perfect. But for those who want to see a better experiment that compared real and modeled conditions, take a look at Part Six – Visualization where actual measurements of humidity and temperature through the atmosphere were taken, the detailed spectra of downwards longwave radiation was measured and the model and measured values were compared.

The results might surprise even Gary Thompson.

Notes:

1. Spectroscopists have long used wavenumber (cm-1) in place of wavelength. The conversion is simple: wavelength in μm = 10,000 / wavenumber in cm-1.

e.g. CO2 central absorption wavelength of 15μm => 667cm-1 (=10,000/15)
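As a quick check in code:

```python
def wavenumber_to_wavelength_um(wavenumber_cm):
    """Convert a wavenumber in cm-1 to a wavelength in micrometres."""
    return 10_000.0 / wavenumber_cm

print(wavenumber_to_wavelength_um(667))  # ~15.0 um, the centre of the CO2 band
```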

2. Solving the radiative transfer equations through the atmosphere requires knowledge of the absorption spectra of each gas. These are extremely detailed and consequently the numerical solution of the equations requires days or weeks of computational time. The detailed versions are known as LBL – line by line – transfer codes. The approximations, often accurate to within 10%, are called “band models”. These require much less computational time and so the band models are almost always used.


The title should really be:

The Real Measure of Global Warming – Part Two – How Big Should Error Bars be, and the Sad Case of the Expendable Bathythermographs

But that was slightly too long.

This post picks up from The Real Measure of Global Warming which in turn followed Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored

The discussion was about ocean heat content being a better measure of global warming than air temperature. However, ocean heat down into the deep has been less measured than air temperature, so is subject to more uncertainty the further back in time we travel.

We had finished up with a measure of changes in OHC (ocean heat content) over 50 years from Levitus (2005):

Ocean heat change, Levitus (2005)

Some of the earlier graphs were a little small but you could probably see that the error bars further back in time are substantial. Unfortunately, it’s often the case that the error bars themselves are placed with too much confidence, and so it transpired here.

In 2006, GRL (Geophysical Research Letters) published the paper How much is the ocean really warming? by Gouretski and Koltermann.

They pointed out a significant error source in XBTs (expendable bathythermographs). XBTs infer the depth of each temperature reading from an assumed fall rate, and that fall rate turned out to be inaccurate.

The largest discrepancies are found between the expendable bathythermographs (XBT) and bottle and CTD data, with XBT temperatures being positively biased by 0.2–0.4C on average. Since the XBT data are the largest proportion of the dataset, this bias results in a significant World Ocean warming artefact when time periods before and after introduction of XBT are compared.

And conclude:

Comparison with LAB2005 [Levitus 2005] results shows that the estimates of global warming are rather sensitive to the data base and analysis method chosen, especially for the deep ocean layers with inadequate sampling. Clearly instrumental biases are an important issue and further studies to refine estimates of these biases and their impact on ocean heat content are required. Finally, our best estimate of the increase of the global ocean heat content between 1957–66 and 1987–96 is 12.8 ± 8.0 x 10^22 J with the XBT offsets corrected. However, using only the CTD and bottle data reduces this estimate to 4.3 ± 8.0 x 10^22 J.

If we refer back to Levitus, they had calculated a value over the same time period of 15×10^22 J.

Gouretski and Koltermann are saying, in layman’s terms, if I might paraphrase:

Might be around what Levitus said, might be a lot less, might even be zero.. we don’t know.

Some readers might be asking, does this heretical stuff really get published?

Well, moving back to ocean heat content, we don’t want to drown in statistical analysis because anything more than a standard deviation and I am out of my depth, so to speak.. Better just to see what the various experts have concluded as our measure of uncertainty.

Ocean Heat Content is one of the hot topics, so no surprise to see others weighing in..

Domingues et al

In 2008, Nature then published Improved estimates of upper-ocean warming and multi-decadal sea-level rise by Domingues et al.

Remembering that the major problem with ocean heat content is, first, a lack of data, and now, as just revealed, problematic data in the major data source.. Domingues et al say in the abstract:

..using statistical techniques that allow for sparse data coverage..

My brief excursion into statistics was quickly abandoned when the first paper cited (Reduced space optimal interpolation of historical marine sea level pressure: 1854-1992, Kaplan 2000) states:

..A novel procedure of covariance adjustment brought the results of the analysis to the consistency with the a priori assumptions on the signal covariance structure..

Let’s avoid the need for strong headache medication and just see their main points, interesting asides and conclusions. Which are interesting.

OHC 1951-2004, Domingues (2008)

The black line is their story. Note their “error bars” in the top graph: the grey shading around the black line is one standard deviation. This helps us see “a measure” of uncertainty as we go back in time. The red line is the paper we have just considered, Levitus 2005.

Domingues calculates the 1961-2003 increase in OHC as 16 x 10^22 J, with their error bars as ±3 x 10^22 J. They calculate a number very close to Levitus (2005).

Interesting aside:

Climate models, however, do not reproduce the large decadal variability in globally averaged ocean heat content inferred from the sparse observational database.

From one of the papers they cite (Simulated and observed variability in ocean temperature and heat content, AchutaRao 2007) :

Several studies have reported that models may significantly underestimate the observed OHC variability, raising concerns about the reliability of detection and attribution findings.

And on to Levitus et al 2009

From GRL, Global ocean heat content 1955–2008 in light of recently revealed instrumentation problems

Or, having almost the last word with his updated paper:

Ocean heat change 1955-2009 - Levitus (2009)

The red line being the updated version, the black dotted line the old version.

Willis Back, 2006 and Forwards, 2009

In the meantime, Josh Willis, using the brand new Argo floats (see Part One for the Argo floats), published a paper (GRL 2006) showing a reduction in ocean heat from 2003 to 2005 so sharp that there was no explanation for it.

And then a revised paper in 2009 in the Journal of Atmospheric and Oceanic Technology showing that the earlier result was a mistake – instrument problems again.. now it’s all flat for a few years:

no significant warming or cooling is observed in upper-ocean heat content between 2003 and 2006

Probably more papers we could investigate, including one which I planned to cover before realizing I can’t find it and this post has gone on way too long already.

Conclusion

We are looking at a very important measurement, ocean heat content. We aren’t as sure as we would like to be about the history of OHC and not much can be done about that, although novel statistical methods of covariance adjustment may have their place.

Some could say, based on one of the papers presented here, “No ocean warming for 50 years”. It’s a possibility, but probably a distant one. One day when we get to the sea level “budget”, more usefully called “sea level rise”, we will probably think that the rise of sea level is usefully explained by the ocean heat content going up.

We do have excellent measurements in place now, and since around 2000, although even that exciting project has been confused by instrument uncertainty, or uncertainty about instrument uncertainty.

We have seen a great example that error bars aren’t really error bars. They are “statistics”, not real life.

And perhaps, most useful of all, we might have seen that papers which show “a lot less warming” and “unexplained cooling” still make it into print in peer-reviewed science journals like GRL. This last factor may give us more confidence than anything that we are seeing real science in progress. And save us from having to analyze 310,000 temperature profiles with and without covariance adjustments. Instead, we can wait for the next few papers to see what the final consensus is.

Or spend a lifetime in study of statistics.


In an earlier post – Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored – I commented:

There’s a huge amount of attention paid to the air temperature 6ft off the ground all around the continents of the world. And there’s an army of bloggers busy re-analyzing the data.

It seems like one big accident of history. We had them, so we used them, then analyzed them, homogenized them, area-weighted them, re-analyzed them, wrote papers about them and in so doing gave them much more significance than they deserve. Consequently, many people are legitimately confused about whether the earth is warming up.

Then we looked at some of the problems of measuring the surface temperature of the earth via the temperature of a light ephemeral substance approximately 6ft off the ground.

In Warming of the World Ocean 1955-2003, Levitus (2005) shows an interesting comparison of estimates of absorbed heat over almost half a century:

Heat absorbed in different elements of the climate, Levitus (2005)

Once you find out that the oceans have around 1000x the heat capacity of the atmosphere, the above chart won’t be surprising.

For those who haven’t considered this relative difference in heat capacity before (a rough numerical check follows these two examples):

  • if the oceans cooled down by a tiny 0.1°C, transferring their heat to the atmosphere, the atmosphere would heat up by 100°C (it wouldn’t happen like this but it gives an idea of the relative energy in both)
  • if the atmosphere transferred so much heat to the oceans that the air temperature went from an average of 15°C to a freezing -15°C, the oceans would heat up by a tiny, almost unnoticeable 0.03°C
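Here is that rough check, using round-number ballpark figures for the mass and specific heat capacity of the ocean and the atmosphere (my own assumed values, not numbers from Levitus):

```python
# Ballpark global heat capacities (assumed round numbers)
ocean_mass = 1.4e21      # kg
ocean_cp = 4.0e3         # J/(kg K), seawater
atmos_mass = 5.1e18      # kg
atmos_cp = 1.0e3         # J/(kg K), air

c_ocean = ocean_mass * ocean_cp      # ~5.6e24 J/K
c_atmos = atmos_mass * atmos_cp      # ~5.1e21 J/K
print(f"Ocean/atmosphere heat capacity ratio: {c_ocean / c_atmos:.0f}")  # ~1100

# Cool the whole ocean by 0.1 K and hand the energy to the atmosphere:
print(f"Atmospheric warming: {0.1 * c_ocean / c_atmos:.0f} K")           # ~110 K
# Cool the atmosphere by 30 K and dump the energy into the ocean:
print(f"Ocean warming: {30 * c_atmos / c_ocean:.3f} K")                  # ~0.03 K
```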

So if we want to understand the energy in the climate system, if we want to understand whether the earth is warming up, we need to measure the energy in the oceans.

An Accident of History

Measuring the temperature of the earth’s surface by measuring the highly mobile atmosphere 6ft off the ground is a problem. By contrast, measuring ocean heat is simple..

Except we didn’t start until much later. Sea surface temperatures date back to the 19th century, but that doesn’t tell us much. We want to know the temperature down into the deep all around the world.

Ocean temperature vs depth in one location, "Oceans and Climate", Bigg (2003)

Here is a typical sample. Unlike the atmosphere, the oceans are more “stratified” – see Why Global Mean Surface Temperature Should be Relegated, Or Mostly Ignored for more on the basic physics of why the ocean is warmer at the surface. However, the oceans have complex global currents so we need to take a lot of measurements.

Measurements of the temperature down into the ocean depths didn’t really start until the 1940s, and coverage grew only slowly after that. Levitus says:

Most of the data from the deep ocean are from research expeditions. The amount of data at intermediate and deep depths decreases as we go back further in time.

Fast forward to 2000 and the Argo project began to be deployed. By early 2010, over 3300 floats had been put in place around the world’s oceans. The Argo floats drop to 2km depth every 10 days and automatically measure temperature and salinity between the surface and this 2km depth:

Argo profile, Temperature and Salinity vs Depth

Why salinity? Salinity is the other major factor apart from temperature which affects ocean density and therefore controls the ocean currents. See Predictability? With a Pinch of Salt please.. for more..

As we go back from 2010 there is progressively less data available. Even during the last 10 years measurement issues have created waves. But more on that later..

The Leviathan

It’s often best to step back a little to understand a subject better.

In 2000, Science published the paper Warming of the World Ocean by Sydney Levitus and a few co-workers. The paper has a thorough analysis of the previous 50 years of ocean history.

Ocean heat change, upper 3000m, 1955-1996, from Levitus (2000)

Now and again the large numbers of joules (the unit of energy) are converted into an equivalent W/m2 absorbed over the time period in question. 1 W/m2 for a year (averaged over the entire surface of the earth) translates into 1.6×10^22 J.
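A quick check of that conversion, assuming an Earth surface area of about 5.1×10^14 m2:

```python
earth_surface_area = 5.1e14   # m^2 (approximate)
seconds_per_year = 3.156e7    # ~365.25 days

joules = 1.0 * earth_surface_area * seconds_per_year  # 1 W/m^2 sustained for one year
print(f"{joules:.2e} J")                              # ~1.6e22 J
```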

But it’s better to get used to the idea that change in energy in the oceans is usually expressed in units of 10^22 J.

The graphs above show a lot of variability between oceans, but they all demonstrate a similar warming pattern.

Comparison of OHC in top 3000m, top 800m, top 300m, Levitus (2000)

Here is the data shown (from left to right) as the energy change in the top 3000m, 800m and 300m.

We are used to seeing temperature graphs, even sea surface temperature graphs, that go up and down from year to year. Of course we want to understand exactly why – for example, see Is climate more than weather? Is weather just noise? It’s easy to think of reasons why that might happen, even in a warming world (or a cooling world) – with one of the main reasons being that heat has moved around in the oceans.

For example, ocean currents might bring colder water to the surface. The measured sea surface temperature would be significantly lower but the total heat hasn’t necessarily changed – because we are only measuring the temperature at one vertical location (the top).

So we wouldn’t expect to see a big yearly decline in total energy.. not if the planet was “warming up”.

So this is quite surprising! See the change downward in the 1980s:

Ocean heat change - global summary, Levitus (2000). Numbers in 10^22 J

What caused this drop?

Here’s another fascinating look into the depths that we don’t usually get to see:

Temperature comparison 1750m down. 1970-74 cf 55-59 & 1988-92 cf 70-74

Here we see changes in the deeper North Atlantic in two comparison periods about 15 years apart. (As a minor note the reason for the comparisons of averaged 5-year periods is the sparsity of data below the surface of the oceans).

See how the 1990 period has cooled from 15 years earlier.

Levitus, Antonov and Boyer updated their paper in 2005 (reference below).

They comment:

Here we present new yearly estimates for the 1955– 2003 period for the upper 300 m and 700 m layers and pentadal (5-year) estimates for the 1955–1959 through 1994–1998 period for the upper 3000 m of the world ocean.

The heat content estimates we present are based on an additional 1.7 million temperature profiles that have become available as part of the World Ocean Database 2001.

Also, we have processed approximately 310,000 additional temperature profiles since the release of WOD01 and include these in our analyses.

(My emphasis added). Think re-doing GISS and CRU is challenging? And for those who like to know where the data lives, check out the World Ocean Database and World Ocean Atlas Series

Ocean heat change, Levitus (2005)

Here’s a handy comparison of the changing heat when we look at progressively deeper sections of the ocean with the more up-to-date data.

The actual numbers (change in energy) from 1955-1998 were calculated to be:

  • 0-300m:   7×10^22 J
  • 0-700m:   11×10^22 J
  • 0-3000m:   15×10^22 J
  • 1000-3000m:   1.3×10^22 J

So the oceans below 1000m only accounted for 9% of the change. This gives an idea of the relative importance of measuring the temperatures as we go deeper.

In their 2005 paper they comment on the question of the early 1980s cooling:

One dominant feature .. is the large decrease in ocean heat content beginning around 1980. The 0–700 m layer exhibits a decrease of approximately 6 x 10^22 J between 1980 and 1983. This corresponds to a cooling rate of 1.2 W/m2 (per unit area of Earth’s total surface).

Most of this decrease occurs in the Pacific Ocean.. Most of the net decrease occurred at 5°S, 20°N, and 40°N. Gregory et al. [2004] have cast doubt on the reality of this decrease but we disagree. Inspection of pentadal data distributions at 400 m depth (not shown here) indicates excellent data coverage for these two pentads.

And they also comment:

However, the large decrease in ocean heat content starting around 1980 suggests that internal variability of the Earth system significantly affects Earth’s heat balance on decadal time-scales.


So far so interesting, but as the article is already long enough we will come back to the subject in a later post with the follow up:

How Big Should Error Bars be and the Sad Case of the Expendable Bathythermographs.

And for one reader, in anticipation:

XBT

Update – follow up post – The Real Measure of Global Warming – Part Two – How Big Should Error Bars be, and the Sad Case of the Expendable Bathythermographs

References

Warming of the World Ocean, Levitus et al, Science (2000)

Warming of the World Ocean 1955-2003, Levitus et al, GRL (2005)


There’s a huge amount of attention paid to the air temperature 6ft off the ground all around the continents of the world. And there’s an army of bloggers busy re-analyzing the data.

It seems like one big accident of history. We had them, so we used them, then analyzed them, homogenized them, area-weighted them, re-analyzed them, wrote papers about them and in so doing gave them much more significance than they deserve. Consequently, many people are legitimately confused about whether the earth is warming up.

I didn’t say land surface temperatures should be abolished. Everyone’s fascinated by their local temperature. They should just be relegated to a place of less importance in climate science.

Problems with Air Surface Temperature over Land

If you’ve spent any time following debates about climate, then this one won’t be new. Questions over urban heat island, questions over “value-added” data, questions about which stations and why in each index. And in journal-land, some papers show no real UHI, others show real UHI..

One of the reasons I posted the UHI in Japan article was that I hadn’t seen that paper discussed, and it’s interesting in so many ways.

The large number of stations (561) with high quality data revealed a very interesting point. Even though there was a clear correlation between population density and “urban heat island” effect, the correlation was quite low – only 0.44.

Lots of scatter around the trend:

Estimate of actual UHI by referencing the closest rural stations - again categorized by population density

This doesn’t mean the trend wasn’t significant – it was significant at the 99% confidence level. What it means is that there was a lot of variability in the results.

The reason for the high variability was explained as micro-climate effects. The very local landscape, including trees, bushes, roads, new buildings, new vegetation, changing local wind patterns..

Interestingly, the main effect of UHI is on night-time temperatures:

Temperature change per decade: time of day vs population density

Take a look at the top left graphic (the others are just the regional breakdown in Japan). Category 6 is the highest population density and category 3 the lowest.

What is it showing?

If we look at the midday to mid-afternoon temperatures then the average temperature change per decade is lowest and almost identical in the big cities and the countryside.

If we look at the late at night to early morning temperatures then average change per decade is very dependent on the population density. Rural areas have experienced very little change. And big cities have experienced much larger changes.

Night time temperatures have gone up a lot in cities.

A quick “digression” into some basic physics..

Why is the Bottom of the Atmosphere Warmer than the Top while the Oceans are Colder at the Bottom?

The ocean surface temperature somewhere on the planet is around 25°C, while the bottom of the ocean is perhaps 2°C.

Ocean temperature vs depth, Grant Bigg, Oceans and Climate (2003)

The atmosphere at the land interface somewhere on the planet is around 25°C, while the top of the troposphere is around -60°C. (Ok, the stratosphere above the troposphere increases in temperature but there’s almost no atmosphere there and so little heat).

Typical temperature profile in the troposphere

The reason why it’s all upside down is to do with solar radiation.

Solar radiation, mostly between wavelengths of 100nm to 4μm, goes through most of the atmosphere as if it isn’t there (apart from O2-O3 absorption of ultraviolet). But the land and sea do absorb solar radiation and, therefore, heat up and radiate longwave energy back out.

See the CO2 series for a little more on this if you wonder why it’s longwave getting radiated out and not shortwave.

The top of the ocean absorbs the sun’s energy, heats up, expands, and floats.. but it was already at the top so nothing changes and that’s why the ocean is mostly “stratified” (although see Predictability? With a Pinch of Salt please.. for a little about the complexity of ocean currents in the global view)

The very bottom of the atmosphere gets warmed up by the ground and expands. So now it’s less dense. So it floats up. Convective turbulence.

This means the troposphere is well-mixed during the day. Everything is all stirred up nicely and so there are more predictable temperatures – less affected by micro-climate. But at night, what happens?

At night, the sun doesn’t shine, the ground cools down very rapidly, the lowest level in the atmosphere absorbs no heat from the ground and it cools down fastest. So it doesn’t expand, and doesn’t rise. Therefore, at night the atmosphere is more stratified. The convective turbulence stops.

But if it’s windy because of larger scale effects in the atmosphere there is more “stirring up”. Consequently, the night-time temperature measured 6ft off the ground is very dependent on the larger scale effects in the atmosphere – quite apart from any tarmac, roads, buildings, air-conditioners – or urban heat island effects (apart from tall buildings preventing local windy conditions).

There’s a very interesting paper by Roger Pielke Sr (reference below) which covers this and other temperature measurement subjects in an accessible summary. (The paper used to be available free from his website but I can’t find it there now).

One of the fascinating observations is the high dependency of measured night temperatures on height above the ground, and on wind speed.

Micro-climate and Macro-climate

Perhaps the micro-climate explains many of the problems of temperature measurement.

But let’s turn to a thought experiment. No research in the thought experiment.. let’s take the decent-sized land mass of Australia. Let’s say large scale wind effects are mostly from the north to south – so the southern part of Australia is warmed up by the hot deserts.

Now we have a change in weather patterns. More wind blows from the south to the north. So now the southern part of Australia is cooled down by Antarctica.

This change will have a significant “weather” impact. And in terms of land-based air surface temperature we will have a significant change which will impact on average surface temperatures (GMST). And yet the energy in the climate system hasn’t changed.

Of course, we expect that these things average themselves out. But do they? Maybe our assumption is incorrect. At best, someone had better start doing a major re-analysis of changing wind patterns vs local temperature measurements. (Someone has probably done it already – since this is a thought experiment, there’s the luxury of making stuff up.)

How much Energy is Stored in the Atmosphere?

The atmosphere stores 1000x less energy than the oceans. The total heat capacity of the global atmosphere corresponds to that of only a 3.2 m layer of the ocean.

So if we want a good indicator – a global mean indicator – of climate change we should be measuring the energy stored in the oceans. This avoids all the problems of measuring the temperature in a highly, and inconsistently, mobile lightweight gaseous substance.

Right now the ocean heat content (OHC) is imperfectly measured. But it’s clearly a much more useful measure of how much the globe is warming up than the air temperature a few feet off the ground.

If the primary measure was OHC with the appropriately-sized error bars, then at least the focus would go into making that measurement more reliable. And no urban heat island effects to worry about.

How to Average

There’s another problem with the current “index” – the averaging of temperatures, a mix of air temperatures over land and sea surface temperatures. There is a confusing recent paper by Essex (2007) – see the reference below; the journal title alone says it’s not for the faint-hearted – which argues that we can’t meaningfully average global temperatures at all. However, that is a different point of view.

There is an issue of averaging land and sea surface temperatures (two different substances). But even if we put that to one side there is still a big question about how to average (which I think is part of the point of the confusing Essex paper..)

Here’s a thought experiment.

Suppose the globe is divided into 7 equal-sized sections: an equatorial region, 2 sub-tropical regions, 2 mid-latitude regions and 2 polar regions. (Someone with a calculator and a sense of spherical geometry would know where the dividing lines are.. and we might need to change the descriptions appropriately).

Now suppose that in 1999 the average annual temperatures are as follows:

  • Equatorial region: 30°C
  • Sub-tropics: 22°C, 22°C
  • Mid-latitude regions: 12°C, 12°C
  • Polar regions: 0°C, 0°C

So the “global mean surface temperature” = 14°C

Now in 2009 the new numbers are:

  • Equatorial region: 26°C
  • Sub-tropics: 20°C, 20°C
  • Mid-latitude regions: 12°C, 12°C
  • Polar regions: 5°C, 5°C

So the “global mean surface temperature” = 14.3°C – an increase of 0.3°C. The earth has heated up 0.3°C in 10 years!

After all, that’s how you average, right? Well, that’s how we are averaging now.

But if we look at it from more a thermodynamics point of view we could ask – how much energy is the earth radiating out? And how has the radiation changed?

After all, if we aren’t going to look at total heat, then maybe the next best thing is to use how much energy the earth is radiating to get a better feel for the energy balance and how it has changed.

Energy is radiated in proportion to σT^4, where T is absolute temperature (K); 0°C = 273K. And σ is a well-known constant (the Stefan-Boltzmann constant).

Let’s reconsider the values above and average the amount of energy radiated and find out if it has gone up or down. After all, if temperature has gone up by 0.3°C the energy radiated must have gone up as well.

What we will do now is compare the old and new values of effective energy radiated. (And rather than work out exactly what it means in W/m2, we just calculate the σT^4 value for each region and sum).

  • 1999 value = 2714.78 (W/arbitrary area)
  • 2009 value = 2714.41 (W/arbitrary area – but the same units)

Interesting? The “average” temperature went up. The energy radiated went down.
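Here is a minimal sketch reproducing both sets of numbers, using T(K) = T(°C) + 273 as in the thought experiment:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

regions_1999 = [30, 22, 22, 12, 12, 0, 0]   # deg C, seven equal-area regions
regions_2009 = [26, 20, 20, 12, 12, 5, 5]

for year, temps in (("1999", regions_1999), ("2009", regions_2009)):
    mean_temp = sum(temps) / len(temps)
    radiated = sum(SIGMA * (t + 273) ** 4 for t in temps)   # sum of sigma*T^4 over the regions
    print(f"{year}: mean {mean_temp:.1f} C, radiated {radiated:.2f} (W/arbitrary area)")
# 1999: mean 14.0 C, radiated 2714.78
# 2009: mean 14.3 C, radiated 2714.41 - the average went up, the radiation went down
```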

The more mathematically inclined will probably see why straight away. Once you have relationships that aren’t linear, the results don’t usually change in proportion to the inputs.

Well, energy radiated out is more important in climate than some “arithmetic average of temperature”.

When Trenberth and Kiehl updated their excellent 1997 paper in 2008 the average energy radiated up from the earth’s surface was changed from 390W/m2 to 396W/m2. The reason? You can’t average the temperature and then work out the energy radiated from that one average (how they did it in 1997). Instead you have to work out the energy radiated all around the world and then average those numbers (how they did it in 2008).

Conclusion

Measuring the temperature of air to work out the temperature of the ground is problematic and expensive to get right. And it requires a lot of knowledge about changing wind patterns at night.

And even if we measure it accurately, how useful is it?

Oceans store heat, the atmosphere is an irrelevance as far as heat storage is concerned. If the oceans cool, the atmosphere will follow. If the oceans heat up, the atmosphere will follow.

And why take a lot of measurements and take an arithmetic average? If we want to get something useful from the surface temperatures all around the globe we should convert temperatures into energy radiated.

And I hope to cover ocean heat content in a follow up post..

Update – check out The Real Measure of Global Warming

References

Detection of urban warming in recent temperature trends in Japan, Fumiaki Fujibe, International Journal of Climatology (2009)

Unresolved issues with the assessment of multidecadal global land surface temperature trends, Roger A. Pielke Sr. et al, Journal of Geophysical Research (2007)

Does a Global Temperature Exist? C. Essex et al, Journal of Non-Equilibrium Thermodynamics (2007)


In the series CO2 – An Insignificant Trace Gas? we concluded (in Part Seven!) with the values of “radiative forcing” as calculated for the current level of CO2 compared to pre-industrial levels.

That value is essentially a top of atmosphere (TOA) increase in longwave radiation. The value from CO2 is 1.7 W/m2. And taking into account all of the increases in trace gases (but not water vapor) the value totals 2.4 W/m2.

Comparing Radiative Forcing

The concept of radiative forcing is a useful one because it allows us to compare different first-order effects on the climate.

The effects aren’t necessarily directly comparable because different sources have different properties – but they do allow a useful first-pass quantitative comparison. When we talk about heating something, a Watt is a Watt regardless of its source.

But if we look closely at the radiative forcing from CO2 and solar radiation – one is longwave and one is shortwave. Shortwave radiation creates stratospheric chemical effects that we won’t get from CO2. Shortwave radiation is distributed unevenly – days and nights, equator and poles – while CO2 radiative forcing is more evenly distributed. So we can’t assume that the final effects of 1 W/m2 increase from the two sources are the same.

But it helps to get some kind of perspective. It’s a starting point.

The Solar “Constant”, now more accurately known as Total Solar Irradiance

TSI has only been directly measured since 1978 when satellites went into orbit around the earth and started measuring lots of useful climate values directly. Until it was measured, solar irradiance was widely believed to be constant.

Prior to 1978 we have to rely on proxies to estimate TSI.

Earth from Space - pretty but irrelevant..

Accuracy in instrumentation is a big topic but very boring:

  • absolute accuracy
  • relative accuracy
  • repeatability
  • long term drift
  • drift with temperature

These are just a few of the “interesting” factors along with noise performance.

We’ll just note that absolute accuracy – the actual number – isn’t the key parameter of the different instruments. What they are good at measuring accurately is the change. (The differences in the absolute values are up to 7 W/m2, and absolute uncertainty in TSI is estimated at approximately 4 W/m2).

So here we see the different satellite measurements over 30+ years. The absolute results here have not been “recalibrated” to show the same number:

Total Solar Irradiation, as measured by various satellites

We can see the solar cycles as the 11-year cycle of increase and decrease in TSI.

One item of note is that the change in annual mean TSI from minimum to maximum of these cycles is less than 0.08%, or less than 1.1 W/m2.

In The Earth’s Energy Budget we looked at “comparing apples with oranges” – why we need to convert the TSI or solar “constant” into the absorbed radiation (as some radiation is reflected) averaged over the whole surface area.

This means a 1.1 W/m2 cyclic variation in the solar constant is equivalent to 0.2 W/m2 over the whole earth when we are comparing it with say the radiative forcing from extra CO2 (check out the Energy Budget post if this doesn’t seem right).
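For reference, here is that conversion in a few lines, assuming a planetary albedo of about 0.3 and remembering that the intercepted sunlight is spread over four times the earth’s cross-sectional area:

```python
def tsi_change_to_forcing(delta_tsi, albedo=0.3):
    """Convert a change in total solar irradiance into a globally averaged radiative forcing."""
    return delta_tsi * (1 - albedo) / 4.0   # absorbed fraction, spread over the whole sphere

print(f"{tsi_change_to_forcing(1.1):.2f} W/m2")  # ~0.19 - the solar-cycle variation above
print(f"{tsi_change_to_forcing(0.5):.2f} W/m2")  # ~0.09 - the 1986 vs 1996 trend estimate below
print(f"{tsi_change_to_forcing(2.5):.2f} W/m2")  # ~0.44 - the 1870-1996 reconstruction below
```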

How about longer term trends? They are harder to work out, as any underlying change is of the same order as the instrument uncertainties. One detailed calculation comparing the minimum in 1996 with the minimum in 1986 (by R.C. Willson, 1997) showed an increase of 0.5 W/m2 (converting that to a “radiative forcing” of 0.09 W/m2). Another detailed calculation of that same period showed no change.

Here’s a composite from Fröhlich & Lean (2004) – the first graphic is the one of interest here:

Composite TSI from satellite, 1978-2004, Frohlich & Lean

As you can see, their reanalysis of the data concluded that there hasn’t been any trend change during the period of measurement.

Proxies

What can we work out without satellite data – prior to 1978?

The Sun

The historical values of TSI have to be estimated from other data. Solanki and Fligge (1998) used the observational data on sunspots and faculae (“bright spots”), primarily from the Royal Greenwich Observatory, dating back to 1874. They worked out a good correlation between the TSI values from the modern satellite era and the observational data, and thereby calculated the historical TSI:

Reconstruction of changes in TSI, Solanki & Fligge

As they note, these kinds of reconstructions all rely on the assumption that the measured relationships have remained unchanged over more than a century.

They comment that depending on the reconstructions, TSI averaged over its 11-year cycle has varied by 0.4-0.7W/m2 over the last century.

Then they do another reconstruction which also includes changes in the “quiet sun” periods – because the reconstruction above is derived from observations of active regions – drawing in part on data comparing the sun to similar stars. They comment that this method has more uncertainty, although it should be more complete:

Second reconstruction of TSI back to 1870, Solanki & Fligge

This method generates an increase of 2.5 W/m2 between 1870 and 1996, which again we have to convert to a radiative forcing of 0.4 W/m2.

The IPCC summary (TAR 2001), p.382, provides a few reconstructions for comparison, including the second from Solanki and Fligge:

Reconstructions of TSI back to 1600, IPCC (2001)

And then they bring some sanity:

Thus knowledge of solar radiative forcing is uncertain, even over the 20th century and certainly over longer periods.

They also describe our level of scientific understanding (of the pre-1978 data) as “very low”.

The AR4 (2007) lowers some of the historical changes in TSI commenting on updated work in this field, but from an introductory perspective the results are not substantially changed.

Second Order Effects

This post is all about the first-order forcing due to solar radiation – how much energy we receive from the sun.

There are other theories which rely on relationships like cloud formation as a result of fluctuations in the sun’s magnetic flux – Svensmark & Friis-Christensen. These would be described as “second-order” effects – or feedback.

These theories are for another day.

First of all, it’s important to establish the basics.

Conclusion

We can see from satellite data that the cyclic changes in Total Solar Irradiance over the last 30 years are small. Any trend changes are small enough that they are hard to separate from instrument errors.

Once we go back further, it’s an “open field”. Choose your proxies and reconstruction methods and wide ranging numbers are possible.

When we compare the known changes (since 1978) in TSI we can directly compare the radiative forcing with the “greenhouse” effect and that is a very useful starting point.

References

Solar radiative output and its variability: evidence and mechanisms, Fröhlich & Lean, The Astronomy and Astrophysics Review (2004)

Solar Irradiance since 1874 Revisited, Solanki & Fligge, Geophysical Research Letters (1998)

Total Solar Irradiance Trend During Solar Cycles 21 and 22, R.C. Willson, Science (1997)


In Part One we looked at a few basic numbers and how to compare “apples with oranges” – or the solar radiation in vs the earth’s longwave radiation going out.

And in Part One I said:

Energy radiated out from the climate system must balance the energy received from the sun. This is energy balance. If it’s not true then the earth will be heating up or cooling down.

Why hasn’t the Outgoing Longwave Radiation (OLR) increased?

In a discussion on another blog when I commented about CO2 actually creating a “radiative forcing” – shorthand for “it adds a certain amount of W/m^2 at the earth’s surface” – one commenter asked (paraphrasing because I can’t remember the exact words):

If that’s true – if CO2 creates extra energy at the earth’s surface – why has OLR not increased in 20 years?

This is a great question, and it inspired a mental note to add a post which answers it.

Hopefully, most readers of this blog will know the answer. And understanding this answer is the key to understanding an important element of climate science.

Energy Balance and Imbalance

It isn’t some “divine” hand that commands that Energy in = Energy out.

Instead, if energy in > energy out, the system warms up.

And conversely, if energy in < energy out, the system cools down.

So if extra CO2 increases surface temperature… pause a second… back up, for new readers of this blog:

First, check out the CO2 series if it seems like some crazy idea that CO2 in the atmosphere can increase the amount of radiation at the earth’s surface. 10,000 physicists over 100 years are probably right, but depending on what and where you have been reading I can understand the challenge..

Second, we like to use weasel words like “all other things being equal” to deal with the fact that the climate is a massive mix of cause and effect. The only way that science can usually progress is to separate out one factor at a time and try and understand it..

So, if extra CO2 increases surface temperature – all other things being equal, why hasn’t energy out of the system increased?

Because the system will accumulate energy until energy balance is restored?

More or less correct. No, definitely correct – probably an axiom – and probably describes what we see.

Higher Surface Temperature – Same OLR – Does that make sense?

The question that the original commenter was asking was a very good one. He (or she) was trying to get something clear – if surface temperature has increased why hasn’t OLR increased?

Here’s a graphic which has caused much head scratching for non-physicists: (And I can understand why).

Upward Longwave Radiation, Numbers from Kiehl & Trenberth (1997)

For those new to the blog or to climate science concepts, “Longwave” means energy originally radiated from the earth’s surface (check out CO2 – An Insignificant Trace Gas – Part One for a little more on this).

Where’s the energy going? Everyone asks.

Some of it is being absorbed and re-radiated. Of this, some is re-radiated up. No real change there. And some is re-radiated down.

The downwards radiation, which we can measure – see Part Six – Visualization, is what increases the surface temperature.

Add some CO2 – and, all other things being equal, or weasel words to that effect, there will be more absorption of longwave radiation in the atmosphere, and more re-radiation back down to the surface – so clearly, less OLR.

In fact, that’s the explanation in a nutshell. If you add CO2, as an immediate effect less longwave radiation leaves the top of atmosphere (TOA). Therefore, more energy comes in than leaves, therefore, temperatures increase.

Eventually, energy balance is restored when higher temperatures at the surface finally mean that enough longwave radiation is leaving through the top of atmosphere.
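To make the “accumulate until balance is restored” idea concrete, here is a toy zero-dimensional energy-balance sketch (my own illustration with made-up round numbers – not a climate model, and with no feedbacks): apply a forcing, let the imbalance warm a mixed-layer heat capacity, and watch the outgoing radiation climb back up to match the absorbed solar radiation.

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
EMISSIVITY = 0.61        # crude effective emissivity of the surface-atmosphere system
HEAT_CAPACITY = 4.2e8    # J/(m^2 K), roughly a 100 m ocean mixed layer
FORCING = 3.7            # W/m^2, an instantaneous "extra CO2" style forcing

temp = 288.0
absorbed_solar = EMISSIVITY * SIGMA * temp**4            # start exactly in balance
dt = 86400.0                                             # one-day time steps

for _ in range(365 * 50):                                # run for 50 years
    olr = EMISSIVITY * SIGMA * temp**4 - FORCING         # the forcing reduces what escapes to space
    imbalance = absorbed_solar - olr                     # energy in minus energy out
    temp += imbalance * dt / HEAT_CAPACITY               # the imbalance accumulates as warming

print(f"Warming at the new equilibrium: {temp - 288.0:.2f} K")  # ~1.1 K with these toy numbers
```

At the end of the run the outgoing radiation is back up to the absorbed solar value – the imbalance has been closed by the higher surface temperature, which is exactly the adjustment described above.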

If you are new to this, you might be saying “What?”

So, take a minute and read the post again. Or even – come back tomorrow and re-read it.

New concepts are hard to absorb inside five minutes.

Conclusion

This post has tried to look at energy balance from a couple of perspectives. Picture the whole climate system and think about energy in and energy out.

The idea is very illuminating.

The energy balance at TOA (top of atmosphere) is the “driver” for whether the earth heats or cools.

In the next post we will learn the annoying fact that we can’t measure the actual values accurately enough.. Which is also why even if there is an energy imbalance for an extended period, it is hard to measure.

Update – Part Three in the series on how the earth radiates energy from its atmosphere and what happens when the amount of “greenhouse” gas is increased. (And not, as promised, on measurement issues..)


This post tries to help visualizing, or understanding better, the greenhouse effect.

By the way, if you are new to this subject and think CO2 is an insignificant trace gas, then at least take a look at Part One.

I tried to think of a good analogy, something to bring it to life. But this is why the effect of these invisible trace gases is so difficult to visualize and so counter-intuitive.

The most challenging part is that energy flowing in – shortwave radiation from the sun – passes through these “greenhouse” gases like they don’t exist (although strictly speaking there is a small effect from CO2 in absorption of solar radiation). That’s because solar radiation is almost all in the 0.1-4μm band (see The Sun and Max Planck Agree – Part Two).

But energy flowing out from the earth’s surface is absorbed and re-radiated by these gases because the earth’s radiation is in the >4μm band. Again, you can see these effects more clearly if you take another look at part one.
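To put rough numbers on that split, here is a short sketch integrating the Planck function for a Sun-like temperature and an Earth-like temperature (my own illustration; the exact percentages depend on the temperatures assumed):

```python
import numpy as np

H, C, KB, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.67e-8

def planck_exitance(wavelength_m, temp_k):
    """Blackbody spectral exitance, W per m^2 per metre of wavelength."""
    return (2 * np.pi * H * C**2 / wavelength_m**5
            / np.expm1(H * C / (wavelength_m * KB * temp_k)))

def fraction_below(cutoff_um, temp_k):
    """Fraction of total blackbody emission at wavelengths shorter than the cutoff."""
    wavelengths = np.linspace(0.1e-6, cutoff_um * 1e-6, 20000)
    return np.trapz(planck_exitance(wavelengths, temp_k), wavelengths) / (SIGMA * temp_k**4)

print(f"Sun (5778 K): {fraction_below(4.0, 5778):.1%} of emission below 4 um")   # ~99%
print(f"Earth (288 K): {fraction_below(4.0, 288):.2%} of emission below 4 um")   # well under 1%
```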

If we try and find an analogy in everyday life nothing really fits this strange arrangement.

Upwards Longwave Radiation

So let’s try and look at it again and see if it starts to make sense. Here is the earth’s longwave energy budget – considering first the energy radiated up:

Upward Longwave Radiation, Numbers from Kiehl & Trenberth (1997)

Of course, the earth’s radiation from the surface depends on the actual temperature. This is the average upwards flux. And it also depends slightly on the factor called “emissivity” but that doesn’t have a big effect.

The value at the top of atmosphere (TOA) is what we measure by satellite – again that is the average for a clear sky. Cloudy skies produce a different (lower) number.

These values alone should be enough to tell us that something significant is happening to the longwave radiation. Where is it going? It is being absorbed and re-radiated. Some upwards – so it continues on its journey to the top of the atmosphere and out into space – and some back downwards to the earth’s surface. This downwards component adds to the shortwave radiation from the sun and helps to increase the surface temperature.

As a result the longwave radiation upwards from the earth’s surface is higher than the upwards value at the top of the atmosphere.

Here are the measured values by satellite, averaged over the whole of June 2009.

Measured Outgoing Longwave Radiation at the top of atmosphere, June 2009

Of course, the hotter parts of the globe radiate out more longwave energy.

Downwards Longwave Radiation

But what does it look like at the earth’s surface to an observer looking up – ie the downwards longwave radiation? If there was no greenhouse effect we should, of course, see zero longwave radiation.

Here are some recent measurements:

Downwards Longwave Radiation at the Earth's Surface, From Evans & Puckrin (2006)

Note that the wavelengths have been added under “Wavenumber” (that convention of spectrum people) and so the graph runs from longer to shorter wavelength.

This is for a winter atmosphere in Canada.

Now what the scientists did was to run a detailed simulation of the expected downwards longwave radiation using the temperature, relative humidity and pressure profiles from radiosondes, as well as a detailed model of the absorption spectra of the various greenhouse gases:

Measured vs Simulated Downward Longwave Radiation at the Surface, Evans & Puckrin

What is interesting is seeing the actual values of longwave radiation at the earth’s surface and the comparison with 1-d simulations for that particular profile. (See Part Five for a little more about 1-d simulations of the “radiative transfer equations”). The data and the mathematical model match very well.

Is that surprising?

It shouldn’t be if you have worked your way through all the posts in this series. Calculating the radiative forcing from CO2 or any other gas is mathematically demanding but well-understood science. (That is a whole different challenge compared with modeling the whole climate 1 year or 10 years from now).
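
As a side note – not from the post, but a commonly quoted result of those detailed calculations (Myhre et al. 1998) – the forcing from CO2 alone is well approximated by a simple logarithmic fit:

```python
# Simplified fit to detailed radiative transfer calculations for CO2
# (Myhre et al. 1998): dF = 5.35 * ln(C / C0) W/m^2.
import math

C0 = 280.0                 # pre-industrial CO2 concentration, ppm
for C in (380.0, 560.0):   # roughly "current" at the time of this post, and doubled CO2
    dF = 5.35 * math.log(C / C0)
    print(f"CO2 at {C:.0f} ppm: forcing of about {dF:.2f} W/m^2 relative to {C0:.0f} ppm")
# Doubling CO2 gives ~3.7 W/m^2 - the "boring numbers" covered later in this series.
```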

They did the same for a summer profile and reported in that case on the water vapor component:

Downwards Longwave Radiation at the Earth's Surface, Summer

As an interesting aside, it’s a lot harder to get the data for the downwards flux at the earth’s surface than it is for upwards flux at the top of atmosphere (OLR). Why?

Because a few satellites racing around can measure most of the radiation coming out from the earth. But to get the same coverage of the downwards radiation at the earth’s surface you would need thousands or millions of expensive measuring stations.

Conclusion

Measurements of longwave radiation at the earth’s surface help to visualize the “greenhouse” effect. For people doubting its existence this measured radiation might also help to convince them that it is a real effect!

If there was no “greenhouse” effect, there would be no longwave radiation downwards at the earth’s surface.

Calculations of the longwave radiation due to each gas match quite closely with the measured values. This won’t be surprising to people who have followed through this series. The physics of absorption and re-emission is a subject which has been extremely thoroughly studied for many decades, in fact back into the 19th century.

How climate responds to the “extra radiation” (radiative forcing is the standard term) from increases in some “greenhouse” gases is a whole different story.

More in this series

Part Seven – The Boring Numbers – the values of “radiative forcing” from CO2 for current levels and doubling of CO2.

Part Eight – Saturation – explaining “saturation” in more detail

CO2 Can’t have that Effect Because.. – common “problems” or responses to the theory and evidence presented

AND much more about the downward radiation from the atmosphere – The Amazing Case of “Back-Radiation”, Part Two, and Part Three

Reference

Measurements of the Radiative Surface Forcing of Climate, W.J.F. Evans & E. Puckrin, American Meteorological Society, 18th Conference on Climate Variability and Change (2006)

Read Full Post »

Urban Heat Island in Japan

For newcomers to the climate debate it is often difficult to understand if global warming even exists. Controversy rages about temperature records, “adjustments” to individual stations, methods of creating the global databases like CRU and GISS and especially the problem of UHI.

UHI, or the urban heat island, refers to the problem that temperatures in cities are warmer than temperatures in nearby rural areas, not due to a real climatic effect, but due to concrete, asphalt, buildings and cars. There are also issues raised as to the actual location of many temperature stations, as Anthony Watts and his volunteer work demonstrated in the US.

First of all, everyone agrees that the UHI exists. The controversy rages about how large it is. The IPCC (2007) believes it is very low – 0.006°C per decade globally. This would mean that out of the 0.7°C temperature rise in the 20th century, the UHI was only 0.06°C or less than 10% – not particularly worth worrying about.

For those few not familiar with the mainstream temperature reconstruction of the last 150 years, here is the IPCC from 2007 (global reconstructions):

IPCC 2007 Global Temperature 1840-2000

IPCC 2007, Working Group 1, Historical Overview of Climate Change

New Research from Japan

Detection of urban warming in recent temperature trends in Japan by Fumiaki Fujibe was published in the International Journal of Climatology (2009). It is a very interesting paper which I’ll comment on in this post.

The abstract reads:

The contribution of urban effects on recent temperature trends in Japan was analysed using data at 561 stations for 27 years (March 1979–February 2006). Stations were categorized according to the population density of surrounding few kilometres. There is a warming trend of 0.3–0.4 °C/decade even for stations with low population density (<100 people per square kilometre), indicating that the recent temperature increase is largely contributed by background climatic change. On the other hand, anomalous warming trend is detected for stations with larger population density. Even for only weakly populated sites with population density of 100–300/km2, there is an anomalous trend of 0.03–0.05 °C/decade. This fact suggests that urban warming is detectable not only at large cities but also at slightly urbanized sites in Japan. Copyright, 2008 Royal Meteorological Society.

Why the last 27 years?

The author first compares the temperature over 100 years as measured in Tokyo in the central business district with that in Hachijo Island, 300km south.

Tokyo – 3.1°C rise over 100 years (1906–2006)
Hachijo Island – 0.6°C over the same period

Tokyo vs Hachijo Island, 100 years

This certainly indicates a problem, but to do a thorough study over the last 100 years is impossible because most temperature stations with a long history are in urban areas.

However, at the end of the 1970s, the Automated Meteorological Data Acquisition System (AMeDAS) was deployed around Japan, providing hourly temperature data at 800 stations. The temperature data from these stations are the basis for the paper. The 27-year period coincides with the large temperature rise (see above) of around 0.3–0.4°C globally.

And the IPCC (2007) summarized the northern hemisphere land-based temperature measurements from 1979–2005 as 0.3°C per decade.

How was Urbanization measured?

The degree of urbanization around each site was calculated from grid data of population and land use, because city populations often used as an index of urban size (Oke, 1973; Karl et al., 1988; Fujibe, 1995) might not be representative of the thermal environment of a site located outside the central area of a city.

What were the Results?

Mean temperature anomaly vs population density, Japan

The x-axis, D3, is a measure of population density. T’mean is the change in the mean temperature per decade.

Tmean is the average of all of the hourly temperature measurements; it is not the average of Tmax and Tmin.

Notice the large scatter – this shows why having a large sample is necessary. However, in spite of that, there is a clear trend which demonstrates the UHI effect.

There is large scatter among stations, indicating the dominance of local factors’ characteristic to each station. Nevertheless, there is a positive correlation of 0.455 (Tmean = 0.071 logD3 + 0.262 °C), which is significant at the 1% level, between logD3 and Tmean.
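
As a quick check of what that regression implies – my own arithmetic, assuming “log” here means log base 10:

```python
# Decadal trend implied by the fitted line quoted above: T'mean = 0.071*log10(D3) + 0.262
import math

def trend_per_decade(d3):
    """Fitted T'mean trend in deg C per decade; d3 = population per km^2."""
    return 0.071 * math.log10(d3) + 0.262

for d3 in (100, 300, 1000, 3000):
    print(f"population density {d3:>4}/km^2: ~{trend_per_decade(d3):.2f} C/decade")
# The difference between ~100/km^2 and ~3000/km^2 comes out at roughly 0.1 C/decade -
# the urban anomaly the paper reports for its most densely populated category.
```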

Here’s the data summarized with T’mean as well as the T’max and T’min values. Note that D3 is population per km2 around the point of temperature measurement, and remember that the temperature values are changes per decade:

The effect of UHI demonstrated in various population densities

Note that, as observed by many researchers in other regions, especially Roger Pielke Sr, the Tmin values are the most problematic – demonstrating the largest UHI effect. Average temperatures for land-based stations globally are currently calculated from the average of Tmax and Tmin, and in many areas globally it is the Tmin which has shown the largest anomalies. But back to our topic under discussion..

And for those confused about how the T’mean value can be lower than the T’min value in each population category – these are changes per decade, not absolute temperatures, so the mean trend does not have to sit between the max and min trends.

And the graphs showing the temperature anomalies by category (population density):

Dependence of Tmean, Tmax and Tmin on population density for different regions in Japan

Quantifying the UHI value

Now the author carries out an interesting step:

As an index of net urban trend, the departure of T from its average for surrounding non-urban stations was used on the assumption that regional warming was locally uniform.

That is, he calculates the temperature deviation for each station in categories 3–6 relative to the locally relevant category 1 and 2 (rural) stations. (There were not enough category 1 stations to use category 1 alone.) The calculation takes into account how far away the “rural” stations are, so that more weight is given to closer stations.
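
A minimal sketch of the idea – the exact weighting function in the paper may differ, and the numbers below are hypothetical; this just uses inverse-distance weights as an illustrative assumption:

```python
# Net urban trend = a station's trend minus a distance-weighted average
# of the trends at its nearby rural stations (closer stations count more).
def net_urban_trend(urban_trend, rural_trends, distances_km):
    weights = [1.0 / d for d in distances_km]
    rural_reference = sum(w * t for w, t in zip(weights, rural_trends)) / sum(weights)
    return urban_trend - rural_reference

# Hypothetical example: an urban station trending at 0.48 C/decade, with three
# rural neighbours 20, 50 and 80 km away trending at 0.35-0.40 C/decade.
print(round(net_urban_trend(0.48, [0.38, 0.35, 0.40], [20, 50, 80]), 2))   # ~0.10 C/decade
```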

Estimate of actual UHI by referencing the closest rural stations - again categorized by population density

And the relevant table:

Temperature delta from nearby rural areas vs population density

Conclusion

Here’s what the author has to say:

On the one hand, it indicates the presence of warming trend over 0.3 °C/decade in Japan, even at non-urban stations. This fact confirms that recent rapid warming at Japanese cities is largely attributable to background temperature rise on the large scale, rather than the development of urban heat islands.

..However, the analysis has also revealed the presence of significant urban anomaly. The anomalous trend for the category 6, with population density over 3000 km−2 or urban surface coverage over 50%, is about 0.1 °C/decade..

..This value may be small in comparison to the background warming trend in the last few decades, but they can have substantial magnitude when compared with the centennial global trend, which is estimated to be 0.74°C/century for 1906–2005 (IPCC, 2007). It therefore requires careful analysis to avoid urban influences in evaluating long-term temperature changes.

So, in this very thorough study, in Japan at least, the temperature rise that has been measured over the last few decades is a solid result. The temperature increase from 1979–2006 has been around 0.3°C/decade.

However, in the larger cities the measured trend will be overstated by roughly 25% – an urban anomaly of about 0.1°C/decade on top of the background trend.

And in a time of lower temperature rise, the UHI may be swamping the real signal.

The IPCC (2007) had this to say:

A number of recent studies indicate that effects of urbanisation and land use change on the land-based temperature record are negligible (0.006ºC per decade) as far as hemispheric- and continental-scale averages are concerned because the very real but local effects are avoided or accounted for in the data sets used.

So, on the surface at least, this paper indicates that the IPCC’s current position may be in need of modification.

Read Full Post »

In many debates on whether the earth has been cooling this decade we often hear

This decade is the warmest on record

(Note: reference is to the “naughties” decade).

This post isn’t about whether the temperature has gone up or down; it’s just to draw attention to a subject that you would expect climate scientists and their marketing departments to handle better.

An Economic Analogy

Analogies don’t prove anything, but they can be useful illustrations, especially for those whose heads start to spin as soon as statistics are mentioned.

Suppose that the nineties were a roaring decade of economic progress, as measured by the GDP of industrialized nations (and ignoring all problems relating to what that all means). And suppose that the last half century with a few ups and downs had been one of strong economic progress.

Now suppose that around the start of the new millennium the industrialized nations fell into a mild recession and it dragged on for the best part of the decade. Towards the end of the decade a debate starts up amongst politicians about whether we are in recession or not.

There would be various statistics put forward, and of these the politicians out of power would favor the indicators that showed how bad things were. The politicians in power would favor the indicators that showed how good things were, or at least “the first signs of economic spring”.

Suppose in this debate some serious economists stood up and said,

But listen everyone, this decade has the highest GDP of any decade since records began.

What would we all think of these economists?

The progress that had taken the world to the start of the millennium would be the reason for the high GDP in the “naughties” decade. It doesn’t mean there isn’t a recession. In fact, it tells you almost nothing about the last few years. Why would these economists be bringing it up unless they didn’t understand “Economics 101”?

GDP and other measures of economic prosperity have a property that they share with the world’s temperature. The status at the end of this year depends in large part on the status at the end of last year.

In economics we can all see how this works. Prosperity is stored up year after year within the economic system. Even if some are spending like crazy others are making money as a result. When hard times come we don’t suddenly reappear, in economic terms, in 1935.

In climate it’s because the earth’s climate system stores energy. This is primarily the oceans and cryosphere (ice) but also includes the atmosphere.

Auto-Correlation for the total layman/woman who doesn’t want to hear about statistics

For those not statistically inclined, don’t worry this isn’t a technical treatment.

When various people analyze the temperature series for the last few decades they usually try and work out some kind of trend line and also other kinds of statistical treatments like “standard deviation”.

You can find lots of these on the web. I’m probably in a small minority but I don’t see the point of most of them. More on this at Is the climate more than weather? Is weather just noise?

However, for those who do see the point and carry out these analyses to prove or disprove that the world is warming or cooling in a “statistically significant” way, the more statistically inclined will be sure to mention one point. Because the temperature from year to year is related strongly to the immediate past – or in technical language “auto-correlated” – this changes the maths and widens the error bars.

Auto-correlation in layman’s terms is what I described in the economic analogy. Next year depends in large part on what happened last year.

Why mention this?

First, a slightly longer explanation of auto-correlation – skip that section if you are not interested.

Auto-Correlation in a little more detail

If you ever read anything about statistics you would have read about “the coin toss”.

I toss a coin – it’s 50/50 whether it comes up heads or tails. I have one here, flipping.. catching.. ok, trust me it’s heads.

Now I’m going to toss the coin again. What are the odds of heads or tails? Still 50/50. Ok, tossing.. heads again.

Now I’m going to toss the coin a 3rd time. At this point you check the coin and get it scientifically analyzed. Finally, much poorer, you hand me back the coin because it’s been independently verified as a “normal coin”. Ok so I toss the coin a 3rd time and it’s still 50/50 whether it lands heads or tails.

Many people who have never been introduced to statistics – like all the people who play roulette for real money that matters to them – have no concept of independent statistical events.

It’s a simple concept. What happened previously to the coin when I flipped it has absolutely no effect on a future toss of the coin. The coin has no memory. The law of averages doesn’t change the future. If I have tossed 10 heads in a row the next toss of this standard coin is no more likely to be tails than heads.

In statistics, the first kind of problems covered are ones where each event or each measurement is “independent” – like the coin toss. This makes calculation of the mean (average) and standard deviation (how spread out the results are) quite simple.

Once a measurement or event is dependent in some way on the last reading (or an earlier reading) it gets much more complicated.

In technical language: autocorrelation is the correlation of a signal with a delayed copy of itself.

If you want to assess a series of temperature measurements and work out a trend line and statistical significance of the results you need to take account of its auto-correlation.
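
Here is a minimal sketch – not from the post, with an assumed year-to-year “memory” of 0.7 – showing why auto-correlated data gives wider error bars on a fitted trend than independent data of the same variance:

```python
# Fit a straight line to many realizations of (a) independent noise and
# (b) AR(1) noise with the same variance, and compare the spread of the slopes.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_trials, phi = 30, 2000, 0.7   # phi = assumed year-to-year "memory"
t = np.arange(n_years)

def ar1(n, phi, rng):
    """AR(1) series with (approximately) unit marginal variance."""
    x = np.zeros(n)
    e = rng.normal(scale=np.sqrt(1 - phi**2), size=n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

slopes_iid = [np.polyfit(t, rng.normal(size=n_years), 1)[0] for _ in range(n_trials)]
slopes_ar1 = [np.polyfit(t, ar1(n_years, phi, rng), 1)[0] for _ in range(n_trials)]

print("spread of fitted trends, independent noise:", round(np.std(slopes_iid), 4))
print("spread of fitted trends, AR(1) noise:      ", round(np.std(slopes_ar1), 4))
# The AR(1) spread is noticeably larger - the same number of years tells you less
# about the underlying trend when each year "remembers" the previous one.
```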

What’s the Point?

What motivated this post was watching the behavior of some climate scientists, or at least their marketing departments. You can see them jump into many debates to point out that the error bars aren’t big enough on a particular graph, with a sad shake of their head as if to say “why aren’t people better at stats? why do we have to keep explaining the basics? you have to use an ARMA(1,1) process..”

But the same people, in debates about current cooling or warming, keep repeating

This decade IS the warmest decade on record

as if they hadn’t heard the first thing about auto-correlation.

Statistically minded climate scientists, like our mythical economists earlier, should be the last people to make that statement. And they should be the first to be coughing slightly and putting up a hand when others make that point in the context of whether the current decade is warming or cooling.

Conclusion

Figuring out whether the current decade is cooling or warming isn’t as easy as it might seem and isn’t the subject of this post.

But next time someone tells you “This decade IS the warmest decade on record” – which means in the last 150 years, or a drop in the geological ocean – remember that it is true, but doesn’t actually answer the question of whether the last 10 years have seen warming or cooling.

And if they are someone who appears to know statistics, you have to wonder. Are they trying to fool you?

After all, if they know what auto-correlation is there’s no excuse.

Read Full Post »

Understanding the relationship between climate and weather is important in climate science.

Here’s NASA:

The difference between weather and climate is a measure of time. Weather is what conditions of the atmosphere are over a short period of time, and climate is how the atmosphere “behaves” over relatively long periods of time.

And again:

Climate is the average of weather over time and space.

Who could argue with that succinct statement? Easy for all of us to understand.

Now Tamino, in his long running blog, says:

Time and time again, peoplewhodontagreewithus-ists try to suggest that the last 10 years, or 9 years, or 8 years, or 7 years, or 6 years, or three and a half days of temperature data establish that the earth is cooling, in contradiction to mainstream climate science…

Of course that raises an interesting question: how long a time span do we need to establish a trend in global temperature data? It’s sometimes stated that the required time is 30 years, because that’s the time span used most often to distinguish climate from weather. Although that’s a useful guide, it’s not strictly correct. The time required to establish a trend in data depends on many things, including how big the trend is (the size of the signal) and how big, and what type, the noise is…

Well, I agree with the statistical principles involved here. But his comment does raise a very interesting point.

Is the global temperature value measured for a year just noise on top of the climate signal?

If the global temperature value measured in 2009 is less than that measured in 2008, did the world actually cool that year (relative to 2008), or is it just noise?

Digression on Noise and Signal

For those not so familiar with the technical terms it’s worth explaining signal and noise a little. Let’s choose a non-controversial topic and suppose we want to set up a radio communications link. We have a receiver which amplifies the tiny incoming radio signal so that we can hear it – or retransmit it – whatever we want to do with this signal.

The noise is the random element that gets mixed in with the signal. In amplifiers it is frequently the random movement of electrons (which increases with temperature). In reception of the signal it is the other radio waves at similar frequencies that have reflected, diffracted and otherwise distorted their way to your receiver.

In this case, noise is stuff that is NOT the signal. It threatens to stop you measuring your signal – or at least make it less accurate. Noise can have a systematic bias or it can be random. And in the real world of engineering problems, dealing with noise is often a significant problem to be solved.

Signal and Noise in Climate

We are thinking here specifically of the average global temperature. Often known by its acronym, GMST (global mean surface temperature).

What Tamino appears to be saying is that the temperature from year to year is just the “noise” on top of the climate (temperature) signal. Well we don’t want noise to upset our measurement so in that case we do need to call on statistical processes to give us the real signal.

But is it true? Is this the right way to look at it?

Other commentators and scientists have made a similar point. Easterling’s paper Is the climate warming or cooling?, submitted to GRL (2009), says:

Numerous websites, blogs and articles in the media have claimed that the climate is no longer warming, and is now cooling. Here we show that periods of no trend or even cooling of the globally averaged surface air temperature are found in the last 34 years of the observed record, and in climate model simulations of the 20th and 21st century forced with increasing greenhouse gases. We show that the climate over the 21st century can and likely will produce periods of a decade or two where the globally averaged surface air temperature shows no trend or even slight cooling in the presence of longer-term warming.

But there’s a very interesting paper in Current Opinion in Environmental Sustainability (2009) from Kevin Trenberth on the global energy budget. It’s worth paying close attention to what he has to say, and for anyone interested in the subject of the global temperature, read the whole paper. From the introduction:

The global mean temperature in 2008 was the lowest since about 2000 (Figure 1). Given that there is continual heating of the planet, referred to as radiative forcing, by accelerating increases of carbon dioxide and other greenhouse gases due to human activities, why is the temperature not continuing to go up? The stock answer is that natural variability plays a key role and there was a major La Nina event early in 2008 that led to the month of January having the lowest anomaly in global temperature since 2000. While this is true, it is an incomplete explanation.

In particular, what are the physical processes? From an energy standpoint, there should be an explanation that accounts for where the radiative forcing has gone. Was it compensated for temporarily by changes in clouds or aerosols, or other changes in atmospheric circulation that allowed more radiation to escape to space?

Was it because a lot of heat went into melting Arctic sea ice or parts of Greenland and Antarctica, and other glaciers? Was it because the heat was buried in the ocean and sequestered, perhaps well below the surface? Was it because the La Nina led to a change in tropical ocean currents and rearranged the configuration of ocean heat?

Perhaps all of these things are going on?

Interesting.

Trenberth is saying that we need to understand what happens to the global energy “account” in shorter time periods than decades. In fact, it’s essential. Because if we don’t know whether the earth warms or cools in one year, there might be important aspects of the climate that we haven’t understood in sufficient detail – or we aren’t measuring in sufficient detail. And if the earth has warmed but we don’t know where the energy actually is that is a problem to be solved as well.

All of which leads to the inescapable conclusion that the average global temperature value for one year is not “noise”. It is the “signal”. (See the technical note on temperature measurement at the end of this post)

After all, if there is a radiative imbalance in the earth’s climate system such that we take more energy in than we radiate out, it must be warming. But if this energy is not being stored somewhere then the earth hasn’t warmed in that year. In fact, if there is less heat in the climate system, we radiated out more than we received. There isn’t some secret place that is storing it all up.

Roger Pielke Sr has made that point (probably many times).

What I’m not saying is that earth is on a long term cooling trend. And I’m definitely not saying that the cold weather yesterday means the earth is cooling.

But if the global temperature in one year is cooler than the global temperature in the previous year then the earth has cooled. It’s not noise.

It’s only noise if we can’t measure temperature accurately enough to be sure whether the temperature has gone up or down.

Possibly I have misunderstood Tamino. I did post a comment on this topic to his recent blog post (twice) but possibly due to a moderating snafu, or possibly because there were much more important questions to be answered, it didn’t get published.

Conclusion

As NASA says, the climate is the average of the weather.

Monthly and annual averages of specific values like temperature and total heat stored in the climate system can change in apparently random ways. But that doesn’t mean that these changes don’t reflect real changes in the system.

I can draw a trend line through a longer time series and show lots of deviations from the trend line. But that doesn’t give some kind of superior validity to the trend line. I could take 10000 years of climate data and show that 100 year periods are just noise.

What is the climate doing? It’s apparently random, but actually always changing. Over the last 40 years it has warmed. For the more recent shorter period that Trenberth covered, it cooled. Over the last 20,000 years it has warmed. Over the last 10 million years it has cooled.

If there’s less heat in the earth’s climate at the end of 2010 compared with 2009, the planet will have cooled. And if there’s more heat at the end of 2010 compared with 2009, the planet will have warmed. Not noise, it really will have changed.

Why the total heat stored goes up and down, and how that heat is distributed is at the heart of the complex subject known as climate science.

Technical Note

More on this in a later post, but as many physicists will point out, taking the average temperature around the world is a little odd. That might seem strange – how else can you see whether the world is warming?

As a thought experiment, if you have 5 places to measure temperature, nicely distributed, the average temperature is:

Tav = (T1 + T2 + T3 + T4 + T5) / 5

But it’s not really meaningful. T1 might be measured in a big lake – and water stores a lot of energy per unit volume. T2 might be measured on a big piece of plastic and be storing almost no energy.

Adding up these different numbers and dividing by the number of measurements doesn’t really give a useful number. Sure, if you keep calculating this same average it kind of gives you a clue where things are headed. But it could be misleading. The piece of plastic might go up 10°C and the lake might go down 5°C, so the average has gone up. But the total heat in the system will have gone down.

Two more useful methods would be:

  1. Sum up the energy actually stored – as Trenberth does in his paper – but it’s tougher
  2. Average the fourth power of the temperatures – T⁴

Energy is radiated out as the fourth power of temperature, so averaging this value gives you a better idea of how much energy the system is radiating – a proxy for the energy changes in the system. It’s easy to show that the global average temperature can go up while total radiated energy is going down. Just have the colder places heat up more than the warmer places cool down, and Tav increases while the average of T⁴ decreases. More on this in another post.
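
A minimal numeric sketch of that last point – the values are made up purely for illustration:

```python
# Two places, temperatures in kelvin: the cold one warms by more than the warm
# one cools, so the simple average goes up, but the average of T^4 (a rough
# proxy for radiated energy) goes down, because T^4 is far more sensitive to
# changes at the warm end.
import numpy as np

before = np.array([260.0, 300.0])            # a cold place and a warm place
after = np.array([260.0 + 10, 300.0 - 8])    # cold warms by 10 K, warm cools by 8 K

for label, temps in (("before", before), ("after", after)):
    print(label, "mean T =", temps.mean(), " mean T^4 =", f"{(temps**4).mean():.3e}")
```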

Read Full Post »

« Newer Posts