In Part One we saw:
- some trends based on real radiosonde measurements
- some reasons why long term radiosonde measurements are problematic
- examples of radiosonde measurement “artifacts” from country to country
- the basis of reanalyses like NCEP/NCAR
- an interesting comparison of reanalyses against surface pressure measurements
- a comparison of reanalyses against one satellite measurement (SSMI)
But we only touched on the satellite data (shown in Trenberth, Fasullo & Smith in comparison to some reanalysis projects).
Wentz & Schabel (2000) reviewed water vapor, sea surface temperature and air temperature from various satellites. On water vapor they said:
..whereas the W [water vapor] data set is a relatively new product beginning in 1987 with the launch of the special sensor microwave imager (SSM/I), a multichannel microwave radiometer. Since 1987 four more SSM/I’s have been launched, providing an uninterrupted 12-year time series. Imaging radiometers before SSM/I were poorly calibrated, and as a result early water-vapour studies (7) were unable to address climate variability on interannual and decadal timescales.
The advantage of SSMI is that it measures the 22 GHz water vapor line. Unlike measurements in the IR around 6.7 μm (for example from the HIRS instrument), which require some knowledge of temperature, the 22 GHz measurement is a direct reflection of water vapor concentration. The disadvantage of SSMI is that it only works over the ocean, because the ocean has a low and stable emissivity while land emissivity is high and variable. And SSMI does not provide any vertical resolution of water vapor concentration, only the “total precipitable water vapor” (TPW), also known as “column integrated water vapor” (IWV).
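To make the “column integrated water vapor” idea concrete, here is a minimal sketch (not taken from any of the papers discussed, and using a made-up humidity profile) of how IWV is computed from a specific humidity profile: integrate specific humidity over pressure and divide by g. The result is in kg/m², which is numerically the same as mm of precipitable water.

```python
import numpy as np

# Illustrative (made-up) specific humidity profile from 300 hPa down to the surface.
p = np.array([300, 400, 500, 700, 850, 925, 1000]) * 100.0          # pressure in Pa, increasing
q = np.array([0.0004, 0.001, 0.002, 0.006, 0.010, 0.012, 0.015])    # kg water vapor / kg air

g = 9.81  # m s^-2

# Column integrated water vapor: IWV = (1/g) * integral of q dp,
# evaluated here with the trapezoidal rule.
# The result is in kg/m^2, numerically equal to mm of precipitable water.
iwv = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(p)) / g
print(f"IWV = {iwv:.1f} mm")   # ~40 mm for this moist, tropical-looking profile
```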
The algorithm, verification and error analysis for the SSMI can be seen in Wentz’s 1997 JGR paper: A well-calibrated ocean algorithm for special sensor microwave / imager.
Here is Wentz & Schabel’s graph of IWV over time (shown as W in their figure):
Figure 1 – Region captions added to each graph
They calculate, for the short period in question (1988-1998):
- 1.9%/decade for 20°N – 60°N
- 2.1%/decade for 20°S – 20°N
- 1.0%/decade for 20°S – 60°S
Soden et al (2005) take the dataset a little further and compare it to model results:
Figure 2
They note the global trend of 1.4 ± 0.78 %/decade.
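For anyone wondering how a “%/decade” figure like this is usually produced, here is a minimal sketch using ordinary least squares on a monthly series. The numbers below are synthetic and purely illustrative (they are not the SSMI record), and the published papers treat autocorrelation more carefully when quoting the ± uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly IWV series, 1988-1998: mean ~28 mm, a small trend, plus noise.
months = np.arange(11 * 12)
t_years = months / 12.0
iwv = 28.0 + 0.06 * t_years + rng.normal(0.0, 0.4, months.size)  # mm

# Fit a straight line and express the slope as % of the mean per decade.
slope_per_year, intercept = np.polyfit(t_years, iwv, 1)
trend_pct_per_decade = 100.0 * slope_per_year * 10.0 / iwv.mean()

# Crude standard error of the slope (assumes independent residuals,
# which real monthly data are not).
resid = iwv - (slope_per_year * t_years + intercept)
se = resid.std(ddof=2) / (t_years.std() * np.sqrt(t_years.size))
se_pct_per_decade = 100.0 * se * 10.0 / iwv.mean()

print(f"trend = {trend_pct_per_decade:.2f} +/- {2 * se_pct_per_decade:.2f} %/decade (2 sigma)")
```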
As their paper is more about upper tropospheric water vapor they also evaluate the change in channel 12 of the HIRS instrument (High Resolution Infrared Radiation Sounder):
The radiance channel centered at 6.7 μm (channel 12) is sensitive to water vapor integrated over a broad layer of the upper troposphere (200 to 500 hPa) and has been widely used for studies of upper tropospheric water vapor. Because clouds strongly attenuate the infrared radiation, we restrict our analysis to clear-sky radiances in which the upwelling radiation in channel 12 is not affected by clouds.
The change in radiance from channel 12 is approximately zero over the time period, which for technical reasons (see note 1) corresponds to roughly constant relative humidity in that region over the period from the early 1980s to 2004. You can read the technical explanation in their paper, but as we are focusing on total water vapor (IWV) we will leave a discussion of UTWV for another day.
Updated Radiosonde Trends
Durre et al (2009) updated the radiosonde trends. There is a lengthy extract from the paper in note 2 (end of article) to give insight into why radiosonde data cannot just be taken “as is”, and why a method has to be followed to identify and remove stations with documented or undocumented instrument changes.
Importantly they note, as with Ross & Elliott 2001:
..Even though the stations were located in many parts of the globe, only a handful of those that qualified for the computation of trends were located in the Southern Hemisphere. Consequently, the trend analysis itself was restricted to the Northern Hemisphere as in that of RE01..
Here are their time-based trends:
Figure 3
And a map of trends:
Figure 4
Note the sparse coverage of the oceans and also the land regions in Africa and Asia, except China.
And their table of results:
Figure 5
A very interesting note on the effect of their removal of stations based on detection of instrument changes and other inhomogeneities:
Compared to trends based on unadjusted PW data (not shown), the trends in Table 2 are somewhat more positive. For the Northern Hemisphere as a whole, the unadjusted trend is 0.22 mm/decade, or 0.23 mm/decade less than the adjusted trend.
This tendency for the adjustments to yield larger increases in PW is consistent with the notion that improvements in humidity measurements and observing practices over time have introduced an artificial drying into the radiosonde record (e.g., RE01).
TOPEX Microwave
Brown et al (2007) evaluated data from the TOPEX Microwave Radiometer (TMR). This is included on the TOPEX/Poseidon oceanography satellite and is dedicated to measuring the integrated water vapor content of the atmosphere. TMR is nadir pointing and measures the radiometric brightness temperature at 18, 21 and 37 GHz. As with SSMI, it only provides data over the ocean.
For the period of operation of the satellite (1992 – 2005) they found a trend of 0.90 ± 0.06 mm/decade:
Figure 6
And a map view:
Figure 7
Paltridge et al (2009)
Paltridge, Arking & Pook (2009) – P09 – take a look at the NCEP/NCAR reanalysis project from 1973 to 2007. They chose 1973 as the start date for the reasons explained in Part One – Elliott & Gaffen have shown that pre-1973 data has too many problems. They focus on humidity data below 500 mbar, as measurements of humidity at higher altitudes and lower temperatures are more prone to radiosonde problems.
The NCEP/NCAR data shows positive trends below 850 mbar (=hPa) in all regions, negative trends above 850 mbar in the tropics and midlatitudes, and negative trends above 600 mbar in the northern midlatitudes.
Here are the water vapor trends vs height (pressure) for both relative humidity and specific humidity:
Figure 8
And here is the map of trends:
Figure 9
They comment on the “boundary layer” vs “free troposphere” issue. In brief, the boundary layer is the layer close to the surface where friction from the ground slows the wind and generates turbulence, producing a well-mixed layer of atmosphere. It is typically around 300 m to 1,000 m deep (there is no sharp cut-off). At the ocean surface the air tends towards saturation (if the air is still), so higher temperatures lead to higher specific humidities. (See Clouds and Water Vapor – Part Two if this is a new idea). Therefore, the boundary layer is uncontroversially expected to increase its water vapor content as temperature increases. It is the “free troposphere”, the atmosphere above the boundary layer, where the debate lies.
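The “higher temperatures lead to higher specific humidities” point comes from the Clausius-Clapeyron relation: saturation vapor pressure increases by roughly 6-7% per °C near typical surface temperatures. Here is a minimal sketch using the Magnus approximation (the coefficients are one common empirical fit, not something taken from the papers above):

```python
import numpy as np

def e_sat(t_celsius):
    """Saturation vapor pressure (hPa) from the Magnus approximation."""
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (15.0, 25.0):
    # Fractional increase in saturation vapor pressure per 1 C of warming
    pct_per_degree = 100.0 * (e_sat(t + 1.0) / e_sat(t) - 1.0)
    print(f"{t:.0f} C: e_sat = {e_sat(t):.1f} hPa, ~{pct_per_degree:.1f} %/C")

# Near the surface this is roughly 6-7 %/C, which is why a saturated (or
# constant relative humidity) boundary layer holds more water vapor when it warms.
```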
They comment:
It is of course possible that the observed humidity trends from the NCEP data are simply the result of problems with the instrumentation and operation of the global radiosonde network from which the data are derived.
The potential for such problems needs to be examined in detail in an effort rather similar to the effort now devoted to abstracting real surface temperature trends from the face-value data from individual stations of the international meteorological networks.
In the meantime, it is important that the trends of water vapor shown by the NCEP data for the middle and upper troposphere should not be “written off” simply on the basis that they are not supported by climate models—or indeed on the basis that they are not supported by the few relevant satellite measurements.
There are still many problems associated with satellite retrieval of the humidity information pertaining to a particular level of the atmosphere— particularly in the upper troposphere. Basically, this is because an individual radiometric measurement is a complicated function not only of temperature and humidity (and perhaps of cloud cover because “cloud clearing” algorithms are not perfect), but is also a function of the vertical distribution of those variables over considerable depths of atmosphere. It is difficult to assign a trend in such measurements to an individual cause.
Since balloon data is the only alternative source of information on the past behavior of the middle and upper tropospheric humidity and since that behavior is the dominant control on water vapor feedback, it is important that as much information as possible be retrieved from within the “noise” of the potential errors.
So what has P09 added to the sum of knowledge? We can already see the NCEP/NCAR trends in Trends and variability in column-integrated atmospheric water vapor by Trenberth et al from 2005.
Did the authors just want to take the reanalysis out of the garage, drive it around the block a few times and park it out front where everyone can see it?
No, of course not!
– I hear all the NCEP/NCAR believers say.
One of our commenters asked me to comment on Paltridge’s reply to Dessler (which was a response to Paltridge..), and linked to another blog article. It seems that even the author of that blog article is confused about NCEP/NCAR. This reanalysis project (as explained in Part One) is a model output, not a radiosonde dataset:
Humidity is in category B – ‘although there are observational data that directly affect the value of the variable, the model also has a very strong influence on the value ‘
And anyone who reads Kalnay’s 1996 paper describing the project will see that, given the huge amount of data going into the model, the data was not quality checked by human inspection on the way in. Various quality control algorithms attempt to remove “bad data” automatically.
This is why we have reviewed Ross & Elliott (2001) and Durre et al (2009). These papers review the actual radiosonde data and find increasing trends in IWV. They also describe in a lot of detail the process they had to go through to produce a decent dataset. The authors of both papers explained that they could only produce a meaningful trend for the northern hemisphere: there is not enough quality data for the southern hemisphere to even attempt a trend.
And Durre et al note that when they use the complete dataset the trend is roughly half that calculated with problematic data removed.
This is the essence of the problem with Paltridge et al (2009):
Why is Ross & Elliott (2001) not reviewed and compared? If Ross & Elliott found that Southern Hemisphere trends could not be calculated because of the sparsity of quality radiosonde data, why doesn’t P09 comment on that? Perhaps Ross & Elliott are wrong. But there is no comment from P09. (Durre et al find the same problem with SH data, probably published too late for P09 but not too late for the comments the authors have been making in 2010.)
In The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith pointed out clear problems with NCEP/NCAR vs ERA-40. Perhaps Trenberth and Smith are wrong. Or perhaps there is another way to understand these results. But no comment on this from P09.
P09 comment on the issues with satellite humidity retrieval for different layers of the atmosphere, but make no comment on the results from the microwave SSMI, which has a totally different algorithm for retrieving IWV. And it is important to understand that they haven’t actually demonstrated a problem with satellite measurements. Let’s review their comment:
In the meantime, it is important that the trends of water vapor shown by the NCEP data for the middle and upper troposphere should not be “written off” simply on the basis that they are not supported by climate models—or indeed on the basis that they are not supported by the few relevant satellite measurements.
The reader of the paper wouldn’t know that Trenberth & Smith have demonstrated an actual reason for preferring ERA-40 (if any reanalysis is to be used).
The reader of the paper might understand “a few relevant satellite measurements” as meaning there wasn’t much data from satellites. If you review Figure 4 you can see that the quality radiosonde data is essentially mid-latitude northern hemisphere land. Satellites – that is, multiple satellites with different instruments at different frequencies – have covered the oceans far more comprehensively than radiosondes. Are the satellites all wrong?
The reader of the paper would think that the dataset has been apparently ditched because it doesn’t fit climate models.
This is probably the view of Paltridge, Arking & Pook. But they haven’t demonstrated it. They have just implied it.
Dessler & Davis (2010)
Dessler & Davis responded to P09. They plot graphs from 1979 to the present because 1979 is when satellite data was introduced, and all of the reanalysis projects except NCEP/NCAR incorporated satellite humidity data. (NCEP/NCAR does incorporate satellite data for some other fields).
Basically, when data from a new source is introduced, even if it is more accurate, it can introduce spurious trends, sometimes even in the opposite direction to the real trends. This was explained in Part One under the heading Comparing Reanalysis of Humidity. So trend analysis usually takes place over periods of consistent data sources.
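Here is a toy illustration of that point. It is entirely synthetic: an otherwise trend-free record acquires an apparent trend simply because a new observing system with a small offset arrives halfway through. It is only meant to show why trend analysis is usually restricted to periods with consistent data sources.

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1973, 2008)
true_anomaly = rng.normal(0.0, 0.3, years.size)   # no real trend at all

# Suppose a new (more accurate) data source arrives in 1990 and reads 0.5 mm lower.
observed = true_anomaly.copy()
observed[years >= 1990] -= 0.5

true_trend = np.polyfit(years, true_anomaly, 1)[0] * 10
spurious_trend = np.polyfit(years, observed, 1)[0] * 10
print(f"true trend:   {true_trend:+.2f} mm/decade")
print(f"fitted trend: {spurious_trend:+.2f} mm/decade  (entirely from the step change)")
```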
This figure contrasts short term relationships between temperature and humidity with long term relationships:
Figure 10
If the blog I referenced earlier is anything to go by, the primary reason for producing this figure has been missed. And since that blog article seemed not to comprehend that NCEP/NCAR is a reanalysis (= model output), that is not so surprising.
Dessler & Davis said:
There is poorer agreement among the reanalyses, particularly compared to the excellent agreement for short‐term fluctuations. This makes sense: handling data inhomogeneities will introduce long‐term trends in the data but have less effect on short‐term trends. This is why long term trends from reanalyses tend to be looked at with suspicion [e.g., Paltridge et al., 2009; Thorne and Vose, 2010; Bengtsson et al., 2004].
[Emphasis added]
They are talking about artifacts of the model (NCEP/NCAR). In the short term the relationship between humidity and temperature agrees quite well among the different reanalyses. But over the longer term NCEP/NCAR doesn’t agree, which demonstrates that it is likely introducing biases.
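For readers who want to see what “short-term agreement, long-term disagreement” means in practice, here is a minimal sketch of the kind of comparison involved: regress detrended (year-to-year) humidity anomalies against temperature anomalies for the short-term relationship, and compare the fitted linear trends for the long-term one. The arrays are placeholders, not real reanalysis output; only the two calculations matter.

```python
import numpy as np

def short_vs_long(q_anom, t_anom, years):
    """Short-term: slope of detrended q vs detrended T.
       Long-term: ratio of the two linear trends."""
    q_detr = q_anom - np.polyval(np.polyfit(years, q_anom, 1), years)
    t_detr = t_anom - np.polyval(np.polyfit(years, t_anom, 1), years)
    short_term = np.polyfit(t_detr, q_detr, 1)[0]          # %/K from year-to-year wiggles
    long_term = np.polyfit(years, q_anom, 1)[0] / np.polyfit(years, t_anom, 1)[0]
    return short_term, long_term

# Placeholder annual-mean anomalies (not real reanalysis output)
years = np.arange(1979, 2010)
rng = np.random.default_rng(2)
t_anom = 0.015 * (years - years[0]) + rng.normal(0, 0.1, years.size)   # K
q_anom = 7.0 * t_anom + rng.normal(0, 0.3, years.size)                 # % change in humidity

print(short_vs_long(q_anom, t_anom, years))
# The reanalyses agree well on the first number; NCEP/NCAR stands apart on the second.
```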
The alternative, as Dessler & Davis explain, is that there is somehow an explanation for a long term negative feedback (temperature and water vapor) with a short term positive feedback.
If you look around the blog world, or at, say, Professor Lindzen, you don’t find this. You find arguments about why the short-term feedback is negative, not arguments that the short-term feedback is positive and yet the long-term feedback is negative.
I agree that many people say: “I don’t know, it’s complicated, perhaps there is a long term negative feedback..” and I respect that point of view.
But in the blog article pointed to me by our commenter in Part One, the author said:
JGR let some decidedly unscientific things slip into that Dessler paper. One of the reasons provided is nothing more than a form of argument from ignorance: “there’s no theory that explains why the short term might be different to the long term”.
Why would any serious scientist admit that they don’t have the creativity or knowledge to come up with some reasons, and worse, why would they think we’d find that ignorance convincing?
..It’s not that difficult to think of reasons why it’s possible that humidity might rise in the short run, but then circulation patterns or other slower compensatory effects shift and the long run pattern is different. Indeed they didn’t even have to look further than the Paltridge paper they were supposedly trying to rebut (see Garth’s writing below). In any case, even if someone couldn’t think of a mechanism in a complex unknown system like our climate, that’s not “a reason” worth mentioning in a scientific paper.
The point that seems to have been missed is that this is not a reason to ditch the primary dataset, but instead a reason why NCEP/NCAR is probably flawed compared with all the other reanalyses. And compared with the primary dataset. And compared with multiple satellite datasets.
This is the issue with reanalyses: they introduce spurious biases. Bengtsson explained how (specifically for ERA-40). Trenberth & Smith have already demonstrated it for NCEP/NCAR. And now Dessler & Davis have simply pointed out another reason for taking that point of view.
The blog writer thinks that Dessler is trying to ditch the primary dataset because of an argument from ignorance. I can understand the confusion.
It is still confusion.
One last point to add is that Dessler & Davis also added the very latest in satellite water vapor data – the AIRS instrument from 2003. AIRS is a big step forward in satellite measurement of water vapor, a subject for another day.
AIRS shows the same trends as the other reanalyses, and different trends from NCEP/NCAR.
A Scenario
Before reaching the conclusion I want to throw a scenario out there. It is imaginary.
Suppose that there were two sources of data for temperature over the surface of the earth – temperature stations and satellite. Suppose the temperature stations were located mainly in mid-latitude northern hemisphere locations. Suppose that there were lots of problems with temperature stations – instrument changes & environmental changes close to the temperature stations (we will call these environmental changes “UHI”).
Suppose the people who had done the most work analyzing the datasets and trying to weed out the real temperature changes from the spurious ones had demonstrated that the temperature had decreased over northern hemisphere mid-latitudes. And that they had claimed that quality southern hemisphere data was too “thin on the ground” to really draw any conclusions from.
Suppose that satellite data from multiple instruments, each using different technology, had also demonstrated that temperatures were decreasing over the oceans.
Suppose that someone fed the data from the (mostly NH) land-based temperature stations – without any human intervention on the UHI and instrument changes – into a computer model.
And suppose this computer model said that temperatures were increasing.
Imagine it, for a minute. I think we can picture the response.
And yet this is very similar to the situation we are confronted with on integrated water vapor (IWV). I have tried to think of a reason why so many people would be huge fans of this particular model output. I did think of one, but had to reject it immediately as being ridiculous.
I hope someone can explain why NCEP/NCAR deserves the fan club it has currently built up.
Conclusion
Radiosonde datasets, despite their problems, have been analyzed. The researchers have found positive water vapor trends for the northern hemisphere with these datasets. As far as I know, no one has used radiosonde datasets to find the opposite.
Radiosonde datasets provide excellent coverage for mid-latitude northern hemisphere land, and, with a few exceptions, poor coverage elsewhere.
Satellites, using IR and microwave, demonstrate increasing water vapor over the oceans for the shorter time periods in which they have been operating.
Reanalysis projects have taken in various data sources and, using models, have produced output values for IWV (total water vapor) with mixed results.
Reanalysis projects all have the benefit of convenience, but none are perfect. The dry mass of the atmosphere, which should be constant within noise errors unless a new theory comes along, demonstrates that NCEP/NCAR is worse than ERA-40.
ERA-40 demonstrates increasing IWV. NCEP/NCAR demonstrates decreasing IWV.
Some people have taken NCEP/NCAR for a drive around the block and parked it in front of their house and many people have wandered down the street to admire it. But it’s not the data. It’s a model.
Perhaps Paltridge, Arking or Pook can explain why NCEP/NCAR is a quality dataset. Unfortunately, their paper doesn’t demonstrate it.
It seems that some people are really happy if one model output or one dataset or one paper says something different from what 5 or 10 or 100 others are saying. If that makes you, the reader, happy, then at least the world has fewer deaths from stress.
In any field of science there are outliers.
The question on this blog at least, is what can be proven, what can be demonstrated and what evidence lies behind any given claim. From this blog’s perspective, the fact that outliers exist isn’t really very interesting. It is only interesting to find out if in fact they have merit.
In the world of historical climate datasets nothing is perfect. It seems pretty clear that integrated water vapor has been increasing over the last 20-30 years. But without satellites, even though we have a long history of radiosonde data, we have quite a limited dataset geographically.
If we can only use radiosonde data perhaps we can just say that water vapor has been increasing over northern hemisphere mid-latitude land for nearly 40 years. If we can use satellite as well, perhaps we can say that water vapor has been increasing everywhere for over 20 years.
If we can use the output from reanalysis models and do a lucky dip perhaps we can get a different answer.
And if someone comes along, analyzes the real data and provides a new perspective then we can all have another review.
References
On the Utility of Radiosonde Humidity Archives for Climate Studies, Elliott & Gaffen, Bulletin of the American Meteorological Society (1991)
Relationships between Tropospheric Water Vapor and Surface Temperature as Observed by Radiosondes, Gaffen, Elliott & Robock, Geophysical Research Letters (1992)
Column Water Vapor Content in Clear and Cloudy Skies, Gaffen & Elliott, Journal of Climate (1993)
On Detecting Long Term Changes in Atmospheric Moisture, Elliott, Climatic Change (1995)
Tropospheric Water Vapor Climatology and Trends over North America, 1973-1993, Ross & Elliott, Journal of Climate (1996)
An assessment of satellite and radiosonde climatologies of upper-tropospheric water vapor, Soden & Lanzante, Journal of Climate (1996)
The NCEP/NCAR 40-year Reanalysis Project, Kalnay et al, Bulletin of the American Meteorological Society (1996)
Precise climate monitoring using complementary satellite data sets, Wentz & Schabel, Nature (2000)
Radiosonde-Based Northern Hemisphere Tropospheric Water Vapor Trends, Ross & Elliott, Journal of Climate (2001)
An analysis of satellite, radiosonde, and lidar observations of upper tropospheric water vapor from the Atmospheric Radiation Measurement Program, Soden et al, Journal of Geophysical Research (2005)
The Radiative Signature of Upper Tropospheric Moistening, Soden et al, Science (2005)
The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith, Journal of Climate (2005)
Trends and variability in column-integrated atmospheric water vapor, Trenberth et al, Climate Dynamics (2005)
Can climate trends be calculated from reanalysis data? Bengtsson et al, Journal of Geophysical Research (2005)
Ocean Water Vapor and Cloud Burden Trends Derived from the Topex Microwave Radiometer, Brown et al, Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International (2007)
Radiosonde-based trends in precipitable water over the Northern Hemisphere: An update, Durre et al, Journal of Geophysical Research (2009)
Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, Paltridge et al, Theoretical and Applied Climatology (2009)
Trends in tropospheric humidity from reanalysis systems, Dessler & Davis, Journal of Geophysical Research (2010)
Notes
Note 1: The radiance measurement in this channel is a result of both the temperature of the atmosphere and the amount of water vapor. If temperature increases radiance increases. If water vapor increases it attenuates the radiance. See the slightly more detailed explanation in their paper.
Note 2: Here is a lengthy extract from Durre et al (2009), partly because it’s not available for free, and especially to give an idea of the issues arising from trying to extract long-term climatology from radiosonde data and, therefore, the careful approach that needs to be taken.
Emphasis added in each case:
From the IGRA+RE01 data, stations were chosen on the basis of two sets of requirements: (1) criteria that qualified them for use in the homogenization process and (2) temporal completeness requirements for the trend analysis.
In order to be a candidate for homogenization, a 0000 UTC or 1200 UTC time series needed to both contain at least two monthly means in each of the 12 calendar months during 1973–2006 and have at least five qualifying neighbors (see section 2.2). Once adjusted, each time series was tested against temporal completeness requirements analogous to those used by RE01; it was considered sufficiently complete for the calculation of a trend if it contained no more than 60 missing months, and no data gap was longer than 36 consecutive months.
Approximately 700 stations were processed through the pairwise homogenization algorithm (hereinafter abbreviated as PHA) at each of the nominal observation times. Even though the stations were located in many parts of the globe, only a handful of those that qualified for the computation of trends were located in the Southern Hemisphere.
Consequently, the trend analysis itself was restricted to the Northern Hemisphere as in that of RE01. The 305 Northern Hemisphere stations for 0000 UTC and 280 for 1200 UTC that fulfilled the completeness requirements covered mostly North America, Greenland, Europe, Russia, China, and Japan.
Compared to RE01, the number of stations for which trends were computed increased by more than 100, and coverage was enhanced over Greenland, Japan, and parts of interior Asia. The larger number of qualifying stations was the result of our ability to include stations that were sufficiently complete but contained significant inhomogeneities that required adjustment.
Considering that information on these types of changes tends to be incomplete for the historical record, the successful adjustment for inhomogeneities requires an objective technique that not only uses any available metadata, but also identifies undocumented change points [Gaffen et al., 2000; Durre et al., 2005]. The PHA of MW09 has these capabilities and thus was used here. Although originally developed for homogenizing time series of monthly mean surface temperature, this neighbor-based procedure was designed such that it can be applied to other variables, recognizing that its effectiveness depends on the relative magnitudes of change points compared to the spatial and temporal variability of the variable.
As can be seen from Table 1, change points were identified in 56% of the 0000 UTC and 52% of the 1200 UTC records, for a total of 509 change points in 317 time series.
Of these, 42% occurred around the time of a known metadata event, while the remaining 58% were considered to be ‘‘undocumented’’ relative to the IGRA station history information. On the basis of the visual inspection, it appears that the PHA has a 96% success rate at detecting obvious discontinuities. The algorithm can be effective even when a particular step change is present at the target and a number of its neighbors simultaneously.
In Japan, for instance, a significant drop in PW associated with a change between Meisei radiosondes around 1981 (Figure 1, top) was detected in 16 out of 17 cases, thanks to the inclusion of stations from adjacent countries in the pairwise comparisons. Furthermore, when an adjustment is made around the time of a documented change in radiosonde type, its sign tends to agree with that expected from the known biases of the relevant instruments. For example, the decrease in PW at Yap in 1995 (Figure 1, middle) is consistent with the artificial drying expected from the change from a VIZ B to a Vaisala RS80–56 radiosonde that is known to have occurred at this location and time [Elliott et al., 2002; Wang and Zhang, 2008].
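To make the completeness criteria quoted above concrete, here is a minimal sketch (my own paraphrase of the stated rules, not the authors’ code) of the kind of check a monthly station series would have to pass before a trend is computed: no more than 60 missing months over 1973-2006, and no gap longer than 36 consecutive months.

```python
import numpy as np

def qualifies_for_trend(monthly_pw, max_missing=60, max_gap=36):
    """monthly_pw: array of monthly precipitable water values for 1973-2006,
    with np.nan marking missing months. Returns True if the series meets the
    completeness rules quoted from Durre et al. (2009)."""
    missing = np.isnan(monthly_pw)
    if missing.sum() > max_missing:
        return False
    # longest run of consecutive missing months
    longest_gap, run = 0, 0
    for m in missing:
        run = run + 1 if m else 0
        longest_gap = max(longest_gap, run)
    return longest_gap <= max_gap

# Example: a 1973-2006 series (34 years x 12 months) with a 40-month gap
series = np.random.default_rng(3).normal(15.0, 2.0, 34 * 12)
series[100:140] = np.nan            # 40 consecutive missing months
print(qualifies_for_trend(series))  # False: the gap exceeds 36 months
```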
SoD, have you considered the implications, if any, of this paper to measurements of increased atmospheric water vapor [regardless of the atmospheric height or location, land/ocean, of that increase];
Click to access 271RodericketalPanreviewIGeogCompass2009_000.pdf
The paper is discussed here:
http://pielkeclimatesci.wordpress.com/2009/11/11/pan-evaporation-trends-and-its-relation-to-the-diagnosis-of-global-warming-comments-on-a-new-article-by-roderick-et-al-2009/
cohenite,
Did you read the conclusion of Pielke, Sr.’s comment? In short, we don’t know what pan evaporation trends mean unless we know the local humidity trend as well as the temperature. Pan evaporation on a golf course in Palm Springs is going to be a lot less than it would be in the desert a few miles upwind.
I will re-iterate that the relevant measure with respect to radiative forcing is not precipitable water.
The largest emissions to space from the water vapor bands take place in the relatively humid sub-tropics.
I have created a scatter plot of water vapor emission to space versus precipitable water (using the cloud-free areas of the GFS simulation of the GOES channels) and find positive correlation up to about 40 g/m^2.
From 40 to 60 g/m^2 there is a roughly congruous negative correlation.
That means that, spatially anyway, as water vapor increases, emission to space from the water vapor bands also increases until reaching 40g/m^2.
This means that for most of the area of the earth, water vapor feedback is negative.
That sounds interesting; is there a link?
I would also appreciate a link.
Best regards, Ray Dart.
ClimateWatcher:
I agree. Upper tropospheric water vapor is the key, and of course, around the globe the effects are non-linear.
I covered IWV because many people are confused about it and especially because it is a key component in the Miskolczi debate.
SOD wrote:
“In The Mass of the Atmosphere: A Constraint on Global Analyses, Trenberth & Smith pointed out clear problems with NCEP/NCAR vs ERA-40.”
“Reanalysis projects all have the benefit of convenience, but none are perfect. The dry mass of the atmosphere, which should be constant within noise errors unless a new theory comes along, demonstrates that NCEP/NCAR is worse than ERA-40.”
I didn’t find this conclusion in the abstract or looking at the data in Figure 2c. Perhaps I missed something important.
Bengtsson explained that reanalyses introduce spurious long-term trends, or biases in long-term trends, in the following manner. In the early years, when observational data is limited, the re-analysis results will contain the biases of the model used in the re-analysis. In the later years, when more observational data is available (especially global satellite observations), the observational data will tend to over-rule the biases in the model. In the case of ERA-40, the model had a cold bias in the upper troposphere. So the ERA-40 re-analysis has a warming trend in the upper troposphere that is too large.
According to your post, the NCEP/NCAR reanalysis didn’t incorporate satellite observations of water vapor whereas other re-analysis products did. This could mean that all of the other re-analyses are wrong about long-term water vapor trends BECAUSE they have dramatically increased the observational constraints on water vapor in the satellite era. The difference between ERA-40 and NCEP/NCAR demonstrates that the biases introduced by the models used in re-analysis in early years could be the main source of error. If the quality of the old radiosonde data were the main source of error, both re-analyses would be biased in the same direction.
Frank:
[On Trenberth & Smith 2005]
I see it in fig. 1 & fig 2. They said for fig 2:
The anomaly (dry atmospheric pressure) from NCEP (green) shows much more variability post-1979 than from ERA40 (red).
SOD: Thanks for posting the Figure, but I’m not sure I draw the same conclusion from it that you do. For Paltridge, the relevant period is 1973-2007. Presumably, I’m supposed to calculate variability by squaring the deviations from zero for ERA-40 and NCEP/NCAR over this period. It’s too bad Trenberth didn’t do this for us. Unfortunately, I still don’t know how to relate reduced short-term variability in surface pressure to accuracy in long term trends in water vapor. If I had to pick one re-analysis over another, I’d pay more attention to Dessler’s Figure 3 (showing a big wet bias at 300 mb for NCEP/NCAR vs other re-analyses and AIRS) than Trenberth’s surface pressure (but I had only read Trenberth at the time I first commented).
From The NCEP–NCAR 50-Year Reanalysis: Monthly Means CD-ROM and Documentation, Kistler et al (2001):
I have put in a request to NOAA for the paper/web-page/other with the algorithm (the model) for humidity.
P09 comment that satellite data is not used. So is the temperature field from the radiosonde used to calculate specific humidity? Or temperature field from the model (with no satellite input?)
It is clear from reading Kistler 2001 that some temperature fields come from satellites.
Frank:
I’m not sure about this.
Wouldn’t the trends from 1979-present then be more accurate for ERA-40 (incorporating satellite data)? Unless all the satellite data was worse than the radiosonde data.
I’m not sure this is correct.
Suppose the ERA-40 model is biased too wet, and the NCEP/NCAR model is biased too dry.
Now we introduce data into both reanalyses – where before it was mostly gaps or inconsistent radiosonde data – and now ERA-40 acquires a negative trend in water vapor, while NCEP acquires a positive trend.
Both have taken the same new data and produced opposite results.
I am not saying that has happened, but it is a possibility.
I could not determine exactly what Bengtsson et al actually thought about IWV trends in ERA40 as a result of the introduction of satellite data. I have read the paper three times and it seems to give different conclusions in different places. It is much clearer about temperature trends. I’m not saying the paper is at fault; perhaps it’s the reader.
If you are in doubt about Bengtsson’s meaning, I’d suggest the abstract. Three times he refers to artifacts due to changes in the observing system. His 2005 paper was published before the Paltridge controversy and therefore doesn’t have an agenda (like Trenberth and Dessler).
“Several global quantities are computed from the ERA40 re-analysis for the period 1958-2001 and explored for trends. These are discussed in the context of changes to the global observing system. Temperature, integrated water vapor (IWV) and kinetic energy are considered. The ERA40 global mean temperature in the lower troposphere has a trend of +0.11K per decade over the period of 1979-2001, which is slightly higher than the MSU measurements, but within the estimated error limit. For the period 1958-2001 the warming trend is 0.14 K per decade but this is likely to be an artifact of changes in the observing system. When this is corrected for, the warming trend is reduced to 0.10 K per decade. The global trend in IWV for the period 1979-2001 is +0.36 mm per decade. This is about twice as high as the trend determined from the Clausius-Clapeyron relation assuming conservation of relative humidity. It is also larger than results from free climate model integrations driven by the same observed sea surface temperature (SST) as used in ERA40. It is suggested that the large trend in IWV does not represent a genuine climate trend but an artifact caused by changes in the global observing system such as the use of SSM/I and more satellite soundings in later years. Recent results are in good agreement with GPS measurements. The IWV trend for the period 1958-2001 is still higher but reduced to +0.16 mm per decade when corrected for changes in the observing systems. Total kinetic energy shows an increasing global trend. Results from data assimilation experiments strongly suggest that this trend is also incorrect and mainly caused by the huge changes in the global observing system in 1979. When this is corrected for no significant change in global kinetic energy from 1958 onwards can be found.”
Your usual bias is showing badly here.
As Frank points out, Trenberth and Smith did not point out clear problems with NCEP-NCAR vs ERA40.
At the end of the abstract they point out a mass conservation problem with both.
Would you like to withdraw that comment?
PaulM
In Part One I said:
What specifically would you like me to retract about this, given your lack of bias and clear-sightedness?
SOD: You seem to place more faith in humidity changes from the homogenized radiosonde data of Ross and Elliott (Part 1) than from re-analysis. The advantage of the re-analyses is that they use ALL of the radiosonde data that meets minimal QC criteria. Some of the problems in this data get corrected by the model. On the other hand, Ross and Elliott make subjective judgments about what data to include:
“These change points can disclose either artificial changes or true climate changes so additional information is required to distinguish between the two”.
“Change points near 1977 and 1988 are ascribed to climate regime shifts (Trenberth and Hurrell 1994). Changes near 1981 are less easily explained.”
“The statistical change points in 1988 (China) and in 1977 (Alaska) are probably due to the climate regime shifts in the North Pacific at these times and both these time series were retained for trend analysis. Interestingly, no change point is visually apparent in either station time series at the other climate regime shift date.”
When authors make subjective judgments like these, they should do so transparently. “Here is the result based on subjective judgment A and here is how much our result would have changed if we had made the opposite choice.” The same for subjective judgments B, C, D etc. As the authors are trying to decide whether a change point is an artifact or a “climate shift”, they already know how that particular radiosonde site is going to influence the overall outcome. There certainly could be an unconscious tendency to look harder for evidence of a “climate shift” at a site showing a strong increase in humidity. If the authors had properly reported the effect of their subjective choices, THEY and we could see if their subjective choices had a strong bias towards increasing humidity or not. If there appears to be a strong bias towards increasing humidity in some of their choices, can they provide a good explanation? Careful scientists should always be asking themselves whether the choices they are making could bias the answer they get and they should want to show the reader how careful they were to avoid such bias. Ross and Elliott did not do so.
Some regions of the NH (the US) show strongly increasing humidity and some do not (Asia). If different subjective choices were made, could the US look more like Asia or could Asia look more like the US?
The re-analyses avoid these subjective choices. Unfortunately, the re-analyses disagree with each other and are subject to the severe problems Bengtsson discusses. What can we trust?
Frank:
Essentially the approach of the reanalyses and the people who have analyzed radiosonde data are in clear opposition.
If you can use a reanalysis to establish the radiosonde-era trend then you have implicitly decided that all of their concerns (Ross & Elliott etc) are invalid.
Having read all the papers in the References plus some few more, I’m convinced that their concerns are definitely valid. And the “Dry Mass of the Atmosphere..” just demonstrates this in another way (although this could of course be the reanalysis model rather than the data).
Durre et al 2009 attempt to take a more “objective” look, via an algorithm, and also actually state in the paper the difference between their results and the results if all data is used. Have a look at note 2 for a lengthy extract of some of their comments.
I can email you the Durre et al paper if you want to have a read.
If you think Durre did a better job of explaining how they refined their radiosonde data and the consequences of the choices they made, I’d like to read the paper. In the meantime, Ross and Elliott’s treatment of change points (some are artifacts and some are “true climate changes”) reminds me of this passage from Feynman’s “Cargo Cult Science”:
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
But this long history of learning how not to fool ourselves–of having utter scientific integrity–is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.
The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.
Frank:
I’ve got no doubt about his meaning but I’m not sure I understand exactly what amount of IWV bias he can justify claiming in ERA40.
Well, he compares it to water vapor in ECHAM5 (a GCM) and finds that water vapor is much higher in ERA40 than ECHAM5 (double). Are models only ok if they produce results in one direction?
His NOSAT results confuse me. (NOSAT is running ERA40 reanalysis without the satellite data).
So is the introduction of satellites only causing a few % change?
So ERA-40 shows that his NOSAT experiments are faulty – ie even though adding a new observing system introduces a bias he doesn’t find one? Is that what the paper is saying?
Like I say, I’ve read his paper 3 times and it seems that NOSAT produces very similar results to ERA40. Yet the GCM shows that ERA40 is biased high and so he picks the GCM.
I don’t understand the justification. So I think I am probably missing his point.
I will see if I can find out from the author himself.
[Note on NOSAT from the paper: “In order to assess the longer term trends for 1958–2001, which includes the pre-satellite period, a special data assimilation with the ERA40 system [Bengtsson et al., 2004] has been used covering three periods, December–February (DJF) 1990/91, June–August (JJA) 2000 and DJF 2000/01.
In this experiment all satellite observations have been excluded in order to mimic the pre-satellite observing system. This will be called the NOSAT experiment.“]
This is an example from A New Approach to Homogenize Daily Radiosonde Humidity Data, Dai et al, Journal of Climate (2011).
(Thanks to Rocco for pointing out this paper in Part One).
Here we see the frequency of DPD measurements over different time periods.
DPD, or Dew Point Depression = temperature reduction needed to reach saturation. Think of it as the reverse of relative humidity. If DPD=0, RH=100%. As DPD increases, RH is reducing.
The graphs show how different radiosonde sensors and different reporting practices produce significant variation in DPD values. Note especially the 30°C readings in the first time period, which actually mean “relative humidity too low”.
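For readers unfamiliar with dew point depression, here is a minimal sketch of how DPD maps onto relative humidity, using the same Magnus approximation as earlier (an assumption of convenience, not something from Dai et al). It shows why a reported DPD of 30°C is effectively a “too dry to measure” flag rather than a genuine value.

```python
import numpy as np

def e_sat(t_celsius):
    """Saturation vapor pressure (hPa), Magnus approximation."""
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

def rh_from_dpd(t_celsius, dpd):
    """Relative humidity (%) from air temperature and dew point depression."""
    return 100.0 * e_sat(t_celsius - dpd) / e_sat(t_celsius)

for dpd in (0, 5, 15, 30):
    print(f"T = 20 C, DPD = {dpd:>2} C  ->  RH = {rh_from_dpd(20.0, dpd):5.1f} %")

# DPD = 0 gives 100% RH; DPD = 30 C corresponds to only ~10% RH at 20 C
# (and less at colder air temperatures), i.e. a dryness flag rather than a measurement.
```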
As the authors state:
[Emphasis added]
Supporters of NCEP/NCAR for long term water vapor trends (of which there are many) will of course be “full bottle” on this important topic – how their favored reanalysis project deals with this moving target.
Are the “humidity too low” values of 30°C DPD simply included, but solved by the model calculating the real value from meteorological considerations?
Or are they rejected at certain times & locations because the model knows that a 30°C DPD is too low a relative humidity?
I’m sure some NCEP/NCAR proponents will have the answer.
The whole debate on which reanalysis to choose for humidity trends is nugatory. None are fit for purpose. We simply don’t know what the long term trend in humidity has been.
DeWitt Payne:
That is also an interesting way of looking at it. Can we prove that the trend is not zero?
I’m not good at that kind of analysis and I’m often suspicious that the statistics have assumed something important about data quality: “If the data were perfect and unbiased then statistical test A reveals…”
But if the data is not perfect and is biased, and we’re not sure exactly how, then what test should be used?
And if we are talking about a trend longer than 20 years I would agree anyway, because only northern hemisphere (mid-latitude) land has quality data, so what can anyone say about global trends over more than 20 years?
But if we are talking 20 years or less we do have satellite data with good coverage. Would you like to advance the case that the SSMI period cannot disprove a zero trend?
SOD: I think the general idea is that standard deviations, confidence intervals, p scores, etc. are supposed to include both natural variation (in observations) and error in the observations. When you are trying to demonstrate that a (linear) trend in humidity is greater than zero, for example, it doesn’t make any difference if the scatter in the data is mostly due to bad humidity sensors or to natural variability in humidity. In both cases, there is a certain probability that most of the upward “scatter” might have occurred at one end of the time period and most of the downward “scatter” at the other end.
The problem with trying to calculate long term trends in humidity or even surface temperature is that the data weren’t collected in a manner suitable for the kind of analyses we want to do today. However, data from satellites has been collected with today’s needs in mind. When the IPCC first started issuing reports, the time span of satellite data was too short. AR5 will be based on up to 30 years of satellite observations and perhaps we will have more definitive answers. For example, if M&M are to be believed, the last ten years of data have provided a definitive answer about the existence of amplified warming in the upper tropical troposphere.
scienceofdoom,
Nope. Long term would be > 40 years. 20 – 40 years would have large error bars so it would probably not be possible to reject a trend of zero. But that’s an opinion, not a calculation.
It’s the irony I find amusing: people who categorically reject climate models are claiming that what amounts to the output of a model proves their point.
Amazing. On-and-on-and-on…never ending… Maybe hundreds of thousands of hours of analyses by “knowing people” that assume there is a GHE. But what if there is not?
Let’s go back to bare bones basics to look at how the Earth might work without an “atmospheric greenhouse effect” (GHE).
Here is THE fundamental equation that is used to calculate the radiative energy transfer between objects:
Power = (epsilon)(sigma)(T_warm^4 - T_ambient^4),
(sorry, I hate latex)
where Power is the radiative energy emitted by the warm body (in W/m^2), epsilon is the emissivity (usually assumed = 1), sigma is the Stefan-Boltzmann constant, T_warm is the temperature of the warmer object, and T_ambient is the temperature of the cooler object.
Does anyone see any /variables/parameters/functions/factors/adjustments/whatever to “account” for backradiation/ “insulation”/ “atmospheric blanket effect”/ etc. due to GHE?? No, because there are none. Radiative transfer is dependent ONLY upon the parameters given above, not upon what gases/concentrations are present between the objects.
I know of no “limitations” on this equation, such as a caveat that it only applies in a vacuum. Does anyone else know of any (need reference, if so)?
The fact that energy loss is “slowed down” by the radiation back from the cooler object (backradiation) is included in this classic equation, so we can easily accommodate the concept, so often mentioned to support a GHE, that the presence of backradiation from the air retards the rate of heat loss to outer space*.
Therefore, if one wants to determine the radiative loss from the surface to the tropopause, one only needs to know the temperatures at the surface and at the tropopause (assuming one can assume bb radiation in air). One does not need to know anything about how dense the GHGs are in-between.
So, let’s just use this well-known and well-accepted equation to calculate the “average radiative emissions” from the planet, assuming an average surface temperature of 15 C and the “effective radiation temperature” of -18 C (which occurs at about 5.5 km altitude). Assuming an emissivity of 1, the equation gives 390 W/m^2 – 240 W/m^2 = 150 W/m^2. Now, let’s add Kiehl and Trenberth’s famous estimates for latent heat (78 W/m^2) and for “thermals” (convection) (24 W/m^2), giving a total of 252 W/m^2. Not far from K&T’s value of 235 W/m^2. In fact, if one assumes an emissivity of 0.9, which is probably much more realistic, one gets almost exactly the same number as the K&T estimate (237).
Coincidence, perhaps, but I doubt it.
So there appears to be no place for or need for a “GHG effect” to explain “radiative balances.”
A bonus is that this simple equation fits the situation for ALL planets with atmospheres. The GHG theory does not.
*Just like “back conduction” retards the rate of conduction toward the cold end of a metal rod that is being heated at only one end.
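Purely as a check of the arithmetic in the comment above (it says nothing about whether the physical argument is sound), here is the calculation being described, reading the constant in the equation as the Stefan-Boltzmann constant:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative(eps, t_warm, t_cold):
    """epsilon * sigma * (T_warm^4 - T_cold^4), as in the comment's equation."""
    return eps * SIGMA * (t_warm ** 4 - t_cold ** 4)

latent, thermals = 78.0, 24.0   # Kiehl & Trenberth estimates quoted in the comment

for eps in (1.0, 0.9):
    net = net_radiative(eps, 288.0, 255.0)   # 15 C surface, -18 C effective emission temperature
    print(f"epsilon = {eps}: radiative {net:.0f} + latent {latent:.0f} + thermals {thermals:.0f}"
          f" = {net + latent + thermals:.0f} W/m^2")

# epsilon = 1.0 gives ~252 W/m^2; epsilon = 0.9 gives ~237 W/m^2, the numbers quoted above.
```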
Frank on June 10, 2011 at 8:04 am:
Perhaps you think the radiosonde data are all fine?
The papers about radiosonde data issues are full of evidence for the problems. Some of them are in the References section.
You can see an additional example above.
If you have a better way to handle the radiosonde data than Ross & Elliott why not explain it?
I think the history of science lesson from Millikan’s oil drop experiment is a great one.
That is why, even if 100 top scientists in a field agree on something, when someone advances evidence for another point of view I think it should be reviewed.
Just needs some evidence to get started.
SOD. You raise an interesting issue: do skeptics need more evidence supporting their views (so that they can say they hold a SCIENTIFIC opinion about AGW), or does it suffice to attack the quality, rigor, and uncertainty of the science supporting CAGW? I think all that is needed is the latter. I don’t need to know that humidity has been falling as Paltridge suggests. I do need some decent reason to believe WV has not risen as much as models predict, to hope WV feedback doesn’t increase climate sensitivity by 1 degK.
If I need real evidence, I’ll offer the lack of amplified warming in the upper tropical troposphere.
Frank:
I accept your point of view. Given the politics of the situation, which I want to stay away from, the heavier burden is on the AGW point of view.
It’s just that I’m here making a much narrower comment about water vapor trends, and nothing about agreeing with models at all, or about AGW.
This is the problem with the discussion on the science of atmospheric physics or the related measurements. A lot of the time, it’s all about something else.
If Ross & Elliott 2001 or Durre et al 2009 is sloppy science then you need to make that case, rather than a different one.
Frank,
More on point is the quote from Einstein that translates more or less to:
“An infinite number of experiments cannot prove that I’m correct, a single experiment can prove I’m wrong.”
DeWitt: Millikan’s oil drop experiment may be more relevant. Is there a fundamental theory in GW that can be overturned by a single well-designed experiment? The essence of the AGW problem is to determine climate sensitivity, a number like Millikan measured, not a theory, like those Einstein proposed.
DeWitt:
This is VERY funny, coming from YOU:
“Frank,
More on point is the quote from Einstein that translates more or less to:
“An infinite number of experiments cannot prove that I’m correct, a single experiment can prove I’m wrong.”
LOL!
“We have learned [nothing] from experience about how to handle some of the ways we fool ourselves. One example:”
James Hansen admitted that the GCMs had overestimated ocean uptake and ocean heating, but instead of downgrading CO2 climate sensitivity [AGW], concluded that AGW was correct but that it had been masked by a further GCM mistake in UNDERESTIMATING cooling from aerosols.
As a student, I (with my classmates) launched a number of sondes in support of a NASA satellite experiment. At that time (the early eighties), the hygristor (humidity sensor) was a long, three-inch rectangular plate, subject to breaking, which the sonde launcher had to unwrap from its sealed container and carefully mount between two wire pressure clasps.
Nearly twenty years later, as an employee, I was associated with a quite separate experiment which launched sondes using the more or less standard Vaisala system. All the instrumentation had changed, of course, but in particular the hygristor was now a small element, around an eighth of an inch in length.
Undoubtedly this change reduced the response time of humidity measurements, the effect of which was to reduce the artificially high humidity readings from earlier dates, which would artificially appear as a drying trend.
However, the effect of this artificial trend should diminish over time as instrumentation improved, converging toward the presumed increase in humidity, which is not observed.
Further, one question to ask, is that
* if humidities have increased, and
* if this increase has occurred in such a way as to increase radiative forcing ( which is not a given )
then why are oceanic heat storage and temperature trends more consistent with long lived greenhouse gas forcing alone without water vapor feedback?
I see Lindzen and Choi have their 2011 version up; they apply jae’s formula:
“We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985-1999) and CERES (2000-2008) satellite instruments.”
The process by which they were published in a way sums up the climate science peer review system and is discussed at McIntyre and Jo Nova. That Dessler confused Chou and Choi hardly instills confidence.
NOT EVEN CRICKETS??? LOL. I DO HOPE it isn’t this simple!!!
Frank,
Millikan was measuring a fundamental physical constant, the minimum unit of charge. That is in no way similar to measuring the sensitivity of a complex coupled non-linear system to a change in forcing.
TCWV trends are non-existent for the last 25 years. Given the % increase in CO2 over this timeframe it should have been readily apparent.
Click to access 12.3_VWG12.pdf
Click to access Forsythe_GVAP_09102014.pdf
Positive feedback does not appear in the observations.
DT: Page 16 of your second reference says:
“we can neither prove nor disprove a robust trend in the global water vapor data”
What the data on this page shows is the increase in global water vapor expected from the annual rise of about 3.5 degC in global mean temperature, repeated over the 22 years shown. If we expect a 7% increase in water vapor for every 1 degC rise, that would be a 24.5% rise in water vapor. A typical minimum is 24 mm, so that would be a rise of 5.9 mm of water vapor. The typical rise is 3.5-4.0 mm. The temperature rise is caused by the asymmetric distribution of land, so perhaps the lower than expected rise in humidity is also associated with less ocean on the warmer half of the planet. The clear sky combined water vapor plus lapse rate feedback reduces the annual OLR increase measured by ERBE and CERES below that expected for Planck feedback alone.
As for the long term trend, UAH temperature rose about 0.3 degC over these two decades. With a 7% increase in humidity per degC, we expect to see an increase of about 0.5 mm in TPW. It would be pretty hard to see the expected 0.25 mm/decade trend in TPW in data with seasonal changes of about 4 mm.
DT,
Water vapor trends are not related to CO2.
Water vapor trends have a theoretical relationship with temperature, which has some experimental support as outlined in this article and the preceding article.
Can someone tell me how downwelling longwave radiation heats the earth if the energy leaving the TOA is the same as the energy entering the TOA?
DLR does not 'heat' the surface. This is something of a minority view on my part, but I think it would be better if you never used the term 'heat' when talking about thermodynamics. What you need to know is the net energy flux in and out of the surface. 'In' includes SWR from the sun and DLR from the atmosphere. 'Out' is OLR from the surface plus net convective transfer to the atmosphere, latent and sensible. If in is greater than out, the surface warms, and vice versa. You cannot ascribe temperature change to just a change in DLR.
For more detail, look at the articles found under the heading of Earth’s Energy Budget in the Pages section at the top of the page.
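To make that bookkeeping concrete, here is a minimal sketch; the particular flux values are placeholders chosen only to illustrate the sign convention, not measurements:

```python
# Net energy flux into the surface, as described above:
# "in" = absorbed SWR + DLR, "out" = surface OLR + net convection (latent + sensible).

def net_surface_flux(swr_absorbed, dlr, olr_surface, convection):
    """Net flux into the surface in W/m^2; positive means the surface warms."""
    return (swr_absorbed + dlr) - (olr_surface + convection)

# Illustrative placeholder numbers only:
net = net_surface_flux(swr_absorbed=165.0, dlr=345.0, olr_surface=396.0, convection=112.0)
print(net)  # +2 W/m^2 here, so in this example the surface would be warming
```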
Out is always equal to IN. The surface does not warm in response to greater DLR because convection rises in response. This is because atmospheric temperature is determined by PV = n RT and not radiation absorption.
No, it isn’t. You can have a radiative imbalance at the top of the atmosphere and a flux imbalance at the surface.
PV = nRT does not determine the surface temperature. It only determines the ratio of T/V for a given atmospheric mass and value of the acceleration of gravity. That’s a fact in spite of what you might think. I also have no interest in arguing the point as I don’t consider your point of view defensible as science.
Etiquette here is as follows:
The idea that atmospheric pressure alone determines the surface temperature disagrees with basic science. See:
https://scienceofdoom.com/2010/06/12/venusian-mysteries/
So, you refuse to debate? If I make errors, correct them. You cannot expect laymen to know the field exhaustively.
I have. I’m not going to waste bandwidth repeating what can already be found on this site.
Proverbs 26:4 also applies.
Much of what is found on this site is indecipherable.
Timothy Spiegel – you wrote “Out is always equal to IN”. If that were true, there would be no warming during the day or cooling at night. So you might re-think or re-word that a bit. On the longer time scales, you’re right – if there are no changes to the system, OUT will equal IN over years of time and the average earth temperature will remain stable. However, we’ve made a change with increased CO2, like throwing an extra blanket around the earth. Just like a regular blanket, it doesn’t create heat. It just slows down the rate at which heat escapes, which means OUT has been reduced. So for a while, Earth’s OUT is less than IN, and the temperature will rise due to accumulation of heat from the sun. As the temperature increases, the driving force for outgoing radiation increases, meaning OUT also increases. Eventually OUT will equal IN again, and the temperature will stabilize at the new higher value.
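The 'blanket' argument can be turned into a toy calculation. This is only a sketch of a zero-dimensional energy balance with an imposed forcing step; the heat capacity, feedback parameter and forcing are illustrative values, not results from any paper:

```python
# Toy zero-dimensional energy balance: a forcing step makes OUT less than IN,
# temperature rises, OUT recovers, and a new (warmer) equilibrium is reached.

SECONDS_PER_YEAR = 3.15e7
HEAT_CAPACITY = 4.0e8   # J m^-2 K^-1, roughly a 100 m ocean mixed layer (illustrative)
LAMBDA = 1.3            # W m^-2 K^-1, feedback parameter (illustrative)
FORCING = 3.7           # W m^-2, imposed step, roughly "doubled CO2"

delta_t = 0.0                      # temperature anomaly (K)
dt = 0.1 * SECONDS_PER_YEAR        # time step

for _ in range(3000):              # integrate for ~300 years
    imbalance = FORCING - LAMBDA * delta_t     # IN minus OUT (W m^-2)
    delta_t += imbalance * dt / HEAT_CAPACITY  # accumulate the imbalance as warming

print(round(delta_t, 2))  # approaches FORCING / LAMBDA, roughly 2.8 K here
```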
Tim asked: “Can someone tell me how down welling long wave radiation heats the earth if the energy leaving the TOA is the same as the energy entering the TOA?”
If energy leaving the TOA is the same as the energy entering the TOA, then the temperature of the planet AS A WHOLE is not changing.
Asking whether one of the many paths by which an object receives and loses energy "heats" or "cools" that object is meaningless. Temperature rises when the object receives more energy by ALL routes than it emits, and falls when it emits more than it receives. A single route doesn't heat or cool.
Energy is always conserved. If more energy enters something than leaves it, that energy becomes "internal energy", i.e. rising temperature. Heat capacity tells us how to convert an energy imbalance into a temperature change.
If radiation transfers more energy into a sealed container of gas (a jar, for example) than escapes from it, conservation of energy demands that temperature must rise. PV = nRT. The volume of the container is fixed, the amount of gas inside (n) is fixed and R is fixed. So the internal pressure must rise so that P always equals nRT/V.
Global average atmospheric pressure at the surface (P) is fixed by the total weight of the atmosphere. n and R are both fixed. Conservation of energy demands that T must rise if more energy enters the atmosphere by radiation than leaves. V is the only thing that can change – and it does, so that V always equals nRT/P. However, it is hard to picture this happening since the atmosphere isn't enclosed inside anything. (If it were enclosed, then P would rise.)
A more familiar situation where volume changes is a cylinder with a piston. Heat the gas inside the cylinder (by any means including radiation), and the piston will move so that V = nRT/P. In this case the pressure is the force per unit area holding the piston in place. Most of the time, that force is the atmospheric pressure on the piston, but it can be supplemented by pushing on the piston.
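To put numbers on the piston example, a minimal sketch (one mole of gas and an ordinary atmospheric pressure are arbitrary choices):

```python
# Gas in a cylinder under a free piston at constant pressure: heating the gas
# changes V, not P, so V = nRT/P holds throughout.

R = 8.314        # J mol^-1 K^-1
n = 1.0          # moles of gas (arbitrary)
P = 101325.0     # Pa, the pressure holding the piston in place

for T in (280.0, 290.0, 300.0):
    V = n * R * T / P                       # the piston moves so PV = nRT always holds
    print(T, "K ->", round(V * 1000, 1), "litres")
# The volume rises with temperature at fixed P; the gas law alone does not fix T.
```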
The internet is filled with simple arguments about the greenhouse effect that appear reasonable but contain simple flaws (like applying PV = nRT to the atmosphere) or meaningless but deceptive phrases (like "Does DLR heat the surface?"). However, if the situation were that simple, thousands of scientists wouldn't be writing massive reports for the IPCC and NAS. Skeptical scientists who oppose the IPCC consensus don't use these simple arguments. Look at the NIPCC reports.
The host of this site is dedicated to an accurate portrayal of the physics of climate change – though it took me a long time to be convinced of that when this site was first recommended by Steve McIntyre. As a layman, you may find it very difficult to master the physics presented here, but if you persist, you will find accurate information. You won't find out how much warmer the planet will be when CO2 has doubled, because physics and observations can't provide an accurate answer. The IPCC says the answer is likely to be between 1.5 and 4.5 degC, with a 15% chance of both higher and lower answers. Some commenters here favor the lower end of this range, including the 15% below 1.5 degC; others focus on the upper end, which poses the greatest risk; some the middle. The regulars have one thing in common: sharing accurate science. You can learn why the answer can't be ZERO warming.
Timothy Spiegel,
You wrote: “This is because atmospheric temperature is determined by PV = n RT and not radiation absorption.”
When I see such an incorrect statement made in a didactic manner, I suspect that the author has no interest in having his errors corrected. Nevertheless, I will take a brief shot at it. The pressure at any given place in the atmosphere is due to the weight of the air above that place. Temperature is determined by energy transfer, both radiative and convective. The results are complex, but there are some simple rules of thumb. Given temperature and pressure, one can use the Ideal Gas Law to determine the density, n/V.
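As a minimal illustration of that last point (288 K / 1013 hPa and 217 K / 200 hPa are just typical surface and tropopause values):

```python
# Given P and T, the ideal gas law fixes only the molar density n/V = P/(RT).

R = 8.314  # J mol^-1 K^-1

def molar_density(pressure_pa, temperature_k):
    """Moles of air per cubic metre from the ideal gas law."""
    return pressure_pa / (R * temperature_k)

print(molar_density(101325.0, 288.0))  # ~42 mol/m^3 near the surface
print(molar_density(20000.0, 217.0))   # ~11 mol/m^3 near the tropopause
```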
Mike,
Agreed. Timothy’s discussion brings to mind one of the 1000 skeptic papers collected by one of Sen. Inhofe’s assistants. The highly bombastic author based his arguments on the Ideal Gas Law, but wrote it as PV=RT. He completely forgot about n, and how it can also change for a given volume of gas to compensate for changes in P and T. n/V is of course an extremely important parameter, as our lungs remind us at high altitude.
Jim A,
Actually, the ideal gas law often *is* written as PV = RT, with V being the molar volume (the volume occupied by one mole). In that case one usually puts a bar over the V or adds a subscript ‘m’; either is a standard way of indicating a quantity per mole, but those indicators are sometimes omitted when the writer regards the use of molar quantities as “understood”. I only mention this because if you are going to mock someone (and the writer you refer to sounds like he deserves mocking), you need to be careful not to criticize something that is actually correct.
Mike M
Fair point – thanks for the clarification about molar volume. But when someone argues that decreasing pressure forces decreasing temperature, are they not treating V/n as a constant? And if V/n is not a constant, then you can’t predict T as a function of P. That’s the main point I was trying to make.
Jim A wrote: “when someone is making the argument that decreasing pressure forces decreasing temperature, are they not treating V/n as a constant?”
When someone makes a silly argument, it can be hard to be sure just what they are thinking. Treating V/n as a constant would imply a temperature of about 60 K at the tropopause. Maybe some people are silly enough to make that argument and not notice what it implies. My point is that if you aren’t careful, you can end up sounding as silly as the person you criticize.
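Mike M.'s ~60 K figure is easy to reproduce; a minimal sketch, using round surface values of 288 K and 1000 hPa and a tropopause pressure of 200 hPa:

```python
# If V/n were held fixed at its surface value, the ideal gas law would force T
# to scale directly with P.

R = 8.314                                 # J mol^-1 K^-1
T_surface, P_surface = 288.0, 100000.0    # K, Pa (round numbers)
P_tropopause = 20000.0                    # Pa, roughly 200 hPa

molar_volume = R * T_surface / P_surface        # V/n fixed at its surface value
T_implied = P_tropopause * molar_volume / R     # = T_surface * (P_tropopause / P_surface)
print(round(T_implied, 1))                      # ~57.6 K, i.e. about 60 K
```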
Mike M.,
Hence my reference to Proverbs 26:4
King James version.
DeWitt: Are you stooping to arguments from authority? (:))
The next verse says to: “Answer a fool according to his folly, lest he be wise in his own conceit.”
Are you also stooping to cherry-picking your arguments from authority? (:)) (:))
Happy New Year from this fool, who rarely succeeds when questioning your authority (as in the case of Tim).