Gary Thompson at American Thinker recently produced an article, The AGW Smoking Gun, in which he takes three papers and claims to demonstrate that they are at odds with AGW.
A key component of the scientific argument for anthropogenic global warming (AGW) has been disproven. The results are hiding in plain sight in peer-reviewed journals.
The article got discussed on Skeptical Science in Have American Thinker Disproven Global Warming?, although that blog article really only covered the second paper. The discussion was especially worth reading because Gary Thompson joined in and showed himself to be a thoughtful and courteous fellow.
He did claim in that discussion that:
First off, I never stated in the article that I was disproving the greenhouse effect. My aim was to disprove the AGW hypothesis as I stated in the article “increased emission of CO2 into the atmosphere (by humans) is causing the Earth to warm at such a rate that it threatens our survival.” I think I made it clear in the article that the greenhouse effect is not only real but vital for our planet (since we’d be much cooler than we are now if it didn’t exist).
However, the papers he cites are really demonstrating the reality of the “greenhouse” effect. If his conclusions – different from the authors of the papers – are correct, then he has demonstrated a problem with the “greenhouse” effect, which is a component – a foundation – of AGW.
This article will cover the first paper, which appears to be part of a conference proceeding: Changes in the earth's resolved outgoing longwave radiation field as seen from the IRIS and IMG instruments by H.E. Brindley et al. If you are new to the basics of longwave and shortwave radiation and their absorption by trace gases, take a look at CO2 – An Insignificant Trace Gas?
Take one look at a smoking gun and you know it’s been fired. One look at a paper on a complex subject like atmospheric physics and you might easily jump to the wrong conclusion. Let’s hope I haven’t fallen into the same trap..
The Concept Behind the Paper
The paper examines the difference between satellite measurements of longwave radiation from 1970 and 1997. The measurements are only for clear sky conditions, to remove the complexity associated with the radiative effects of clouds (they did this by removing the measurements that appeared to be under cloudy conditions). And the measurements are in the Pacific, with the data presented divided between east and west. Data is from April-June in both cases.
The Measurement
The spectral data is from 7.1 – 14.1 μm (1400 cm-1 – 710 cm-1 using the convention of spectral people, see note 1 at end). Unfortunately, the measurements closer to the 15μm band had too much noise so were not reliable.
Their first graph shows the difference between the 1997 and 1970 spectral results, converted from W/m2 into Brightness Temperature (the equivalent blackbody radiation temperature). I highlighted the immediate area of concern, the "smoking gun":
Note first that the 3 lines on each graph correspond to the measurement (middle) and the error bars either side.
I added wavelength in μm under the cm-1 axis for reference.
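If you want to see exactly what "brightness temperature" means, here is a minimal sketch (my own, not from the paper) that converts a radiance at a given wavenumber into the temperature of the blackbody that would emit the same radiance, by inverting the Planck function:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(v_cm, T):
    """Planck spectral radiance at wavenumber v_cm (cm-1), W/(m^2 sr m^-1)."""
    v = v_cm * 100.0  # cm-1 -> m-1
    return 2 * H * C**2 * v**3 / np.expm1(H * C * v / (KB * T))

def brightness_temperature(radiance, v_cm):
    """Invert Planck's law: the blackbody temperature producing this radiance."""
    v = v_cm * 100.0
    return H * C * v / (KB * np.log1p(2 * H * C**2 * v**3 / radiance))

# Round trip at 700 cm-1 (~14.3 um):
I = planck_radiance(700.0, 255.0)
print(brightness_temperature(I, 700.0))  # ~255.0 K
```

The instruments measure radiance; quoting it as a brightness temperature just makes the spectra at different wavenumbers easier to compare.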
What Gary Thompson draws attention to is the fact that OLR (outgoing longwave radiation) has increased even in the 13.5+μm range, which is where CO2 absorbs radiation – and CO2 has increased during the period in question (about 330ppm to 380ppm). Surely, with an increase in CO2 there should be more absorption and therefore the measurement should be negative for the observed 13.5μm-14.1μm wavelengths.
One immediate thought without any serious analysis or model results is that we aren’t quite into the main absorption of the CO2 band, which is 14 – 16μm. But let’s read on and understand what the data and the theory are telling us.
Analysis
The key question we need to ask before we can draw any conclusions is what is the difference between the surface and atmosphere in these two situations?
We aren’t comparing the global average over a decade with an earlier decade. We are comparing 3 months in one region with 3 months 27 years earlier in the same region.
Herein seems to lie the key to understanding the data..
For the authors of the paper to assess the spectral results against theory they needed to know the atmospheric profile of temperature and humidity, as well as changes in the well-studied trace gases like CO2 and methane. Why? Well, the only way to work out the “expected” results – or what the theory predicts – is to solve the radiative transfer equations (RTE) for that vertical profile through the atmosphere. Solving those equations, as you can see in CO2 – Part Three, Four and Five – requires knowledge of the temperature profile as well as the concentration of the various gases that absorb longwave radiation. This includes water vapor and, therefore, we need to know humidity.
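To see why the temperature profile is indispensable, here is a toy version of that calculation at a single wavenumber – a bare-bones Schwarzschild integration through atmospheric "slices". This is my illustration with made-up optical depths, not the authors' model; real codes repeat this at thousands of wavenumbers with measured absorption data:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck_radiance(v_cm, T):
    """Planck spectral radiance at wavenumber v_cm (cm-1)."""
    v = v_cm * 100.0
    return 2 * H * C**2 * v**3 / np.expm1(H * C * v / (KB * T))

def toa_radiance(v_cm, T_surface, T_layers, tau_layers):
    """Upward radiance at the top of a layered atmosphere (no scattering).

    T_layers: layer temperatures, ordered surface to top.
    tau_layers: optical depth of each layer at this wavenumber.
    """
    # Surface emission, attenuated by the whole column above it
    I = planck_radiance(v_cm, T_surface) * np.exp(-tau_layers.sum())
    for i, (T, tau) in enumerate(zip(T_layers, tau_layers)):
        transmission_above = np.exp(-tau_layers[i + 1:].sum())
        # Each layer emits with emissivity (1 - e^-tau), attenuated from above
        I += planck_radiance(v_cm, T) * -np.expm1(-tau) * transmission_above
    return I

# Made-up tropical-ish profile and optical depths:
T_layers = np.array([285.0, 265.0, 245.0, 225.0, 210.0])
print(toa_radiance(700.0, 300.0, T_layers, 0.8 * np.ones(5)))
```

Change the temperature profile, or the optical depths that humidity controls, and the answer changes – which is exactly why the authors had to reconstruct those profiles before they could say what the theory "expects".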
I’ve broken up their graphs, this is temperature change – the humidity graphs are below.
Now it is important to understand where the temperature profiles came from. They came from model results, by using the recorded sea surface temperatures during the two periods. The temperature profiles through the atmosphere are not usually available with any kind of geographic and vertical granularity, especially in 1970. This is even more the case for humidity.
Note that the temperature – the real sea surface temperature – in 1997 for these 3 months is higher than in 1970.
Higher temperature = higher radiation across the spectrum of emission.
Now the humidity:
The top graph is the change in specific humidity – how many grams of water vapor per kg of air. The bottom is the change in relative humidity. Not strictly relevant to the subject of the post, but you can see that, even though the difference in relative humidity is large high up in the atmosphere, it doesn't affect the absolute amount of water vapor in any meaningful way – because it is so cold at those heights. Cold air cannot hold as much water vapor as warm air.
It’s no surprise to see higher humidity when the sea temperature is warmer. Warmer air has a higher ability to absorb water vapor, and there is no shortage of water to evaporate from the surface of the ocean.
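To put rough numbers on that, here is a quick sketch using the Magnus approximation for saturation vapor pressure – a standard textbook formula, my addition rather than anything from the paper:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation: saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Roughly doubles for every ~10 C of warming:
for t in [30, 20, 10, 0, -20, -50]:
    print(f"{t:4d} C  ->  {saturation_vapor_pressure_hpa(t):7.3f} hPa")
```

At 30C the ceiling is about 42 hPa of water vapor; at -50C (upper troposphere) it is about 0.06 hPa – which is why the large relative humidity changes high up barely move the absolute water vapor amounts in the graphs above.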
Model Results of Expected Longwave Radiation
Now here are some important graphs which initially can be a little confusing. It’s worth taking a few minutes to see what these graphs tell us. Stay with me..
The top graph. The bold line is the model results of expected longwave radiation – not including the effect of CO2, methane, etc – but taking into account sea surface temperature and modeled atmospheric temperature and humidity profiles.
This calculation includes solving the radiative transfer equations through the atmosphere (see CO2 – An Insignificant Trace Gas? Part Five for more explanation on this, and you will see why the vertical temperature profile through the atmosphere is needed).
The breakdown is especially interesting – the three fainter lines. Notice how the two fainter lines at the top are the separate effects of the warmer surface and the higher atmospheric temperature creating more longwave radiation. Now the 3rd fainter line below the bold line is the effect of water vapor. As a greenhouse gas, water vapor absorbs longwave radiation through a wide spectral range – and therefore pulls the longwave radiation down.
So the bold line in the top graph is the composite of these three effects. Notice that without any CO2 effect in the model, the graph towards the left edge trends up: 700 cm-1 to 750 cm-1 (or 13.5μm to 14.1μm). This is because water vapor is absorbing a lot of radiation to the right (wavelengths below 13.5μm) – dragging that part of the graph proportionately down.
The bottom graph. The bold line in the bottom graph shows the modeled spectral results including the effects of the long-term changes in the trace gases CO2, O3, N2O, CH4, CFC11 and CFC12. (The bottom graph also confuses us by including some inter-annual temperature changes – the fainter lines – let’s ignore those).
Compare the top and bottom bold graphs to see the effect of the trace gases. In the middle of the graph you see O3 at 1040 cm-1 (9.6μm). Over on the right around 1300cm-1 you see methane absorption. And on the left around 700cm-1 you see the start of CO2 absorption, which would continue on to its maximum effect at 667cm-1 or 15μm.
Of course we want to compare this bottom graph – the full model results – more easily with the observed results. But the vertical axes of the two sets of graphs are slightly different.
First for completeness, the same graphs for the West Pacific:
Let’s try the comparison of observation to the full model, it’s slightly ugly because I don’t have source data, just a graphics package to try and line them up on comparable vertical axes.
Here is the East Pacific. Top is observed with (1 standard deviation) error bars. Bottom is model results based on: observed SST; modeled atmospheric profile for temperature and humidity; plus effect of trace gases:
Now the West Pacific:
We notice a few things.
First, the model and the results aren’t perfect replicas.
Second, the model and the results both show a very similar change in the profile around methane (right “dip”), ozone (middle “dip”) and CO2 (left “dip”).
Third, the models show a negative value in the change of brightness temperature (-1K) at the 700 cm-1 wavenumber, whereas the actual result for the East Pacific is around +1K and for the West Pacific around -0.5K. The 1 standard deviation error bars for the measurement include the model results – easily for the West Pacific and only just for the East Pacific.
It appears to be this last observation that has prompted the article in American Thinker.
Conclusion
Hopefully, those who have taken the time to review:
- the results
- the actual change in surface and atmospheric conditions between 1970 and 1997
- the models without trace gas effects
- the models with trace gas effects
might reach a different conclusion to Gary Thompson.
The radiative transfer equations as part of the modeled results have done a pretty good job of explaining the observed results but aren’t exactly the same. However, if we don’t include the effect of trace gases in the model we can’t explain some of the observed features – just compare the earlier graphs of model results with and without trace gases.
It’s possible that the biggest error is the water vapor effect not being modeled well. If you compare observed vs model (the last 2 sets of graphs) from 800cm-1 to 1000cm-1 there seems to be a “trend line” error. The effect of water vapor has the potential to cause the most variation for two reasons:
- water vapor is a strong greenhouse gas
- water vapor concentration varies significantly vertically through the atmosphere and geographically (due to local vaporization, condensation, convection and lateral winds)
It’s also the case that the results for the radiative transfer equations will have a certain amount of error using “band models” compared with the “line by line” (LBL) codes for all trace gases. (A subject for another post but see note 2 below). It is rare that climate models – even just 1d profiles – are run with LBL codes because it takes a huge amount of computer time due to the very detailed absorption lines for every single gas.
The band models get good results but not perfect – however, they are much quicker to run.
Comparing two spectra from two different real world situations where one has higher sea surface temperatures and declaring the death of the model seems premature. Perhaps Gary ran the RTE calculations through a pen and paper/pocket calculator model like so many others have done.
There is a reason why powerful computers are needed to solve the radiative transfer equations. And even then they won’t be perfect. But for those who want to see a better experiment that compared real and modeled conditions, take a look at Part Six – Visualization where actual measurements of humidity and temperature through the atmosphere were taken, the detailed spectra of downwards longwave radiation was measured and the model and measured values were compared.
The results might surprise even Gary Thompson.
Notes:
1. Spectroscopists have long worked in wavenumber (cm-1) rather than wavelength. The conversion is very simple: 10,000 / wavenumber in cm-1 = wavelength in μm.
e.g. CO2 central absorption wavelength of 15μm => 667cm-1 (=10,000/15)
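Or, in code form:

```python
def wavenumber_to_wavelength_um(wavenumber_cm):
    """Convert wavenumber (cm-1) to wavelength (um)."""
    return 10000.0 / wavenumber_cm

print(wavenumber_to_wavelength_um(667))   # ~15.0 um, CO2 band centre
print(wavenumber_to_wavelength_um(1400))  # ~7.1 um, one end of the measured range
print(wavenumber_to_wavelength_um(710))   # ~14.1 um, the other end
```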
2. Solving the radiative transfer equations through the atmosphere requires knowledge of the absorption spectra of each gas. These are extremely detailed and consequently the numerical solution to the equations requires days or weeks of computational time. The detailed versions are known as LBL – line by line – transfer codes. The approximations, often accurate to within 10%, are called "band models". These require much less computational time and so the band models are almost always used.
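A toy illustration of why band models are only approximations – invented "lines", not real spectroscopic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake band: 10,000 spiky monochromatic absorption coefficients (not HITRAN data)
k = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)
u = 0.3  # absorber amount along the path, arbitrary units

lbl = np.exp(-k * u).mean()   # "line by line": transmit each line, then average
band = np.exp(-k.mean() * u)  # crude one-parameter "band model"
print(f"LBL average transmission: {lbl:.3f}")
print(f"Band-model transmission:  {band:.3f}")
# Because exp(-ku) is convex in k, averaging k first overestimates absorption
# (Jensen's inequality). Real band models use much smarter fits, but the
# speed-for-accuracy trade-off is the same.
```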
“Warmer air has a higher ability to absorb water vapor”
Basically that is wrong, or at least misleading. Water vapor and the other atmospheric gases are independent. The saturation vapor pressure of water is a function of temperature: the warmer it is, the higher the saturation vapor pressure. It doesn't matter in the least what the other gases are doing. There is no such thing as 'air absorbing water vapor'. Simply stated: if it is warmer, the maximum amount of water vapor (kg/m^3) – before condensation – increases.
I'd try to take a simpler view of these data. Firstly, I think Gary shares a common misconception that GHGs block IR, so outgoing IR should diminish as CO2 increases. But of course, except for transient effects, OLR has to match insolation, so it is basically constant.
How that is achieved is that the surface warms, so more IR is emitted through the atmospheric window (8-13 μm). Correspondingly less is emitted by GHGs in the peak absorption bands – not so much because it is blocked, but because it is emitted at higher, colder levels of the atmosphere. So there is a bit of a shift in the spectrum, with the total unchanged.
That seems to be just what the data shows: a clear increase from 10-13 μm at least, and just a bit of murkiness at 14 μm. It doesn't show 15 μm or beyond, but if you accept that the total should be unchanged, there should be a decrease there.
So this seems to give support to the GHE in operation.
That makes it very clear for me.
The only question that matters for most people is whether AGW – the idea that CO2 is going to drive a climate catastrophe at current and foreseeable levels – is true.
You seem to imply that if AGW is wrong, then something is wrong with the ghg effect. That seems circular and self-referential.
The assumption of increasing water vapor content from increased air temperature over water is not generally valid. The pan evaporation level was observed to decrease considerably during much of this period, despite the increasing temperature. This has been claimed to be due to particle pollution. Pan evaporation is a better correlation parameter for sea surface evaporation than air temperature over the water. The higher temperature resulted in the air holding the water vapor longer, so absolute humidity in the lower troposphere did increase, but the relative humidity decreased. It also appears (though different studies disagree) that absolute humidity in the upper troposphere and stratosphere actually slightly decreased. The result is that the issue is at best unresolved.
hunter:
Gary Thompson says his article takes issue with AGW. I think his article actually takes issue with the ghg effect.
He doesn’t show anything about feedbacks, temperature rises, predictability of climate, GCMs.. he says – in effect – that the 1d radiative transfer equations are not correct.
If he was right, these would need to be revised.
Leonard Weinstein:
I perhaps should have drawn more attention to this point – and intended to when I started out writing the article..
If the model results for the temperature and humidity profile vertically through the atmosphere were not correct, then this would definitely affect the accuracy of the radiative transfer calculations.
And would not be a comment on the “greenhouse” gas effect, but would be a comment on the ability of models to produce humidity profiles.
Humidity will be a fascinating subject to cover.
And I will get around to drawing a little more attention to this point in the main article.
Nick Stokes:
There’s nothing wrong with your assessment.
I thought it was worth drawing attention to the fact that we can't assume we are looking at a system that has reached a new equilibrium as a result of increases in "greenhouse" gases.
What the theory predicts is as you describe and very loosely as Gary Thompson describes – when we look at long term averages.
If we want to know what the actual theory predicts in a specific case we need to solve the equations. This is the bit that I don’t think Gary understands.
He simply points to the fact that the 1997-1970 difference in the 13.5μm-14μm section is positive and not negative, and then seems to think that all the attention by the authors of the papers to model results is some kind of distraction from the central observation.
To paraphrase: the authors are perhaps muddying the waters, or lost in their paradigm. They missed the main point of the results and should have turned off the big computers and gone back to basics to work out the new theory.
Unfortunately for Gary – well, unfortunately for his readers – without knowing the surface temperature plus temperature and humidity profiles in the atmosphere in the two cases we can’t actually know what the theory predicts.
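To put toy numbers on Nick's "shift in the spectrum with the total unchanged" – my construction, with an invented window fraction, nothing from the paper:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
F_WINDOW = 0.4   # made-up fraction of OLR escaping through the atmospheric window

def toy_olr(T_surface, T_band):
    """Crude two-band OLR: a transparent "window" emitting at the surface
    temperature, plus an absorbing band emitting from a colder level aloft."""
    return F_WINDOW * SIGMA * T_surface**4 + (1 - F_WINDOW) * SIGMA * T_band**4

olr_baseline = toy_olr(288.0, 250.0)
# More CO2 raises the band's effective emission height, so the band emits from
# colder air (say 248 K). Hold total OLR fixed and solve for the new surface:
T_band = 248.0
T_surface = ((olr_baseline - (1 - F_WINDOW) * SIGMA * T_band**4)
             / (F_WINDOW * SIGMA)) ** 0.25
print(f"Surface warms from 288.0 K to {T_surface:.1f} K: window emission up, "
      f"band emission down, total OLR unchanged")
```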
The Skeptical Science link is now “broken” – goes to a random page on their site. Was working earlier..
But since the apocalypse is not occurring, I would suggest, as Dr. Pielke, Sr. points out, that the models are not basic physics models but are complex engineering-type models that do not (as you have pointed out) properly represent the reality.
IOW, failure of the models is not an indictment of physics, but rather the application of the physics.
Just like if Boeing has a problem with a wing failing in test mode when the model says it should not, no one goes back to question the physics. They look to where the model went wrong.
Climate science, as far as AGW is concerned, seems extremely confident in models that do not fully describe, and certainly do not predict, reality.
Speaking of Boeing: no matter how much Computational Fluid Dynamics they do, they always do the first test flights with tufts stuck to the aerodynamic surfaces – so they can see exactly what the airflow is doing despite what the models say. Here's what NASA does:
http://www.grc.nasa.gov/WWW/K-12/airplane/tunvsmoke.html
hunter:
Take a look at CO2 – An Insignificant Trace Gas – Part Five where the “model” is explained.
Solving the RTE along a 1D vertical profile through the atmosphere, with fairly well known conditions, is a whole world away from predicting the future with a 3D model that steps through time.
This 1D RTE solution is a tiny portion of the GCM and doesn’t need to step through time.
The 1D model is basically a series of simultaneous equations that solve for absorption and radiation through a series of “slices” through the atmosphere.
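As a concrete (and deliberately over-simple) example of "a series of simultaneous equations", here is the classic N-opaque-layer model solved as a linear system – a toy of my own, not the band-resolved system a real radiative code solves:

```python
import numpy as np

S = 240.0  # absorbed solar flux, W/m^2
N = 3      # number of fully absorbing "slices" - a toy, not a real atmosphere

# Unknowns: x[0] = sigma*T_surface^4, x[1..N] = sigma*T_layer^4
# Surface balance:    x0 = S + x1           (sun plus back-radiation from layer 1)
# Layer i balance:  2*xi = x(i-1) + x(i+1)  (emits up and down, absorbs both sides)
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], A[0, 1], b[0] = 1.0, -1.0, S
for i in range(1, N + 1):
    A[i, i] = 2.0
    A[i, i - 1] = -1.0
    if i < N:
        A[i, i + 1] = -1.0
x = np.linalg.solve(A, b)

SIGMA = 5.67e-8
print((x / SIGMA) ** 0.25)  # temperatures of the surface and each slice, K
```

The real thing replaces "fully absorbing" with wavelength-dependent absorptivities from the gas spectra, but the structure – simultaneous energy balance across slices – is the same.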
While we are in analogy land, it's like arguing that because we are having trouble launching some spaceships, the bicycle obviously can't work.
Both are vehicles, one is a lot more complicated. Sure, it’s an analogy, it doesn’t prove anything, so take a look at CO2 – An Insignificant Trace Gas? Part Six – Visualization – where you can see the model results of downward longwave radiation in a known temperature/humidity profile through the atmosphere compared with the actual measurements of downward longwave radiation.
So in fact, referring back to Roger Pielke Sr, this kind of model is a basic physics model. And that's why the results you see in Part Six, and also the results you see above, are pretty close to the model.
Well, the point of the American Thinker article was that the theory didn’t match reality. But he didn’t really understand the theory.
Perhaps we are talking past each other a bit. I am giving specific examples of something much simpler than a climate – an airplane wing – and showing that it specifically does not work, and you keep saying the models work for the atmosphere. But they obviously don't. So why the divergence in perception?
Jerry,
I spent some time working on aircraft (YF-23) and some other "experimental" craft. It's a gross oversimplification to say that the models "don't work". You see that picture you linked to? That's the Dryden F18. Ha. I also happened to work on the F/A-18 as well (at Northrop and a small flow visualization/flight test company named Eidetics). I want you to look at the smoke in that picture. You see where that smoke is coming from? That is actually an extension of the wing called a LEX, or leading edge extension. It's a high AOA device: that LEX creates a vortex, and it increases the stall margin at high AOA. Vortices are tricky things. But even they are "predictable", more or less. See the simulations below:
http://www.aerospaceweb.org/question/planes/q0176.shtml
So it's a gross simplification to say that the models "don't work" – certain parameters are predictable. We KNEW that the LEX would create a vortex. That's why we put a LEX on (same for the F20, another sweet machine). What was unpredictable was exactly HOW and when a vortex would be created and where it would burst. So the CFD codes gave us all sorts of great information at certain scales. Other information could be collected using scale physical models in water tunnels and wind tunnels. Some information of course could only be seen at full scale in real life. Some things, like the vortex burst problem, didn't show up for a while..
So the models tell us a lot. They work. There are things they don't do so well on. There are things that we don't even catch in flight test.
And those tufts – what are they measuring and why? Have you ever seen a VG on a wing?
Mr Mosher,
I'm not sure I (Jerry) said they didn't work. Rather, I was pointing out the caution of using tufts and smoke "just to make sure".
CFD may be pretty good, but only as good as the model used. Extreme cases like your vortex break are obviously testing the boundaries of the model.
Even mundane things like ice accretion and surface changes due to maintenance and aging are problematic. Far too many aircraft have fallen out of the sky because the design model was not sufficient for the operating envelope – the Mitsubishi MU-2 for example.
I retain my point. CFD simulation is only as good as the physical model. The tufts and smoke add a bit more information and confirmation. The reality is whether a plane falls out of the sky – of which there are many, many examples in recent times. CFD is not an exact science – yet.
Hunter,
Sorry, I got your name wrong. You wrote:
"an airplane wing, and showing that it specifically does not work."
CFD does work, but within limits. There are things that will remain fundamentally unpredictable, like which direction a nose vortex will break.. will it break right or left? We know at high AOA that a vortex will form, and we can bound several things about it, but some elements will remain undecidable. That doesn't mean models don't work. For grins, read any work on algorithmic information theory:
http://en.wikipedia.org/wiki/Chaitin%27s_constant
http://www.umcs.maine.edu/~chaitin/
So, now you change the argument and say that CFD is not an exact science. No science is exact. All observation diverges from the theory (set of equations) that "compresses" those observations to strings with far fewer bits than the observations themselves. The issue is always "how exact" and for what purpose.
All good questions which help crystallize the issue.
Perceptions of a model's reliability? Is it in the eye of the beholder?
– Yes it would seem so.
[Your answer for discussion?]
And lastly, “they obviously don’t” – from your statement it is obvious to you! Perhaps it should be obvious to me as well?
But if your answer is a discourse on the failure of GCMs, then “it’s obvious to me” (there’s that phrase again) that you haven’t understood the difference between a GCM and the numerical solution to a radiative balance equation. Or I’m plain wrong.. But your answer might be something else entirely.
You have the floor..
Not speaking for Jerry, but models can be confirmed and improved by testing. Testing climate models is difficult and takes time (by measuring actual climate over time and comparing with the models). Testing how the climate processes the physics, not the physics themselves.
scienceofdoom,
The greatest compliment I can pay to you is that I am not certain where you stand on the AGW social side of the issue.
Your patience and ability to pick out the threads of this bowl of spaghetti is appreciated.
The equations of the physics functions are not, from my perspective, at question.
It is the application of these physical concepts to claims that we are experiencing now, or are likely to experience in any reasonable future reality, a climate catastrophe caused by CO2.
steve mosher,
I did not mean to imply that the aerospace models do not work in the sense of the final product. I meant to say, and should have said more clearly, that they are not perfect – their results do sometimes diverge from the expected.
Ah, OK, scratch my other comment about changing the terms of the debate.
By analogy I look at it this way. RTE, the core physics, tells us that adding GHGs will warm the planet, all other things being equal. And so the question comes down to feedbacks. And feedbacks can't be computed (bold over-generalization, I suppose) from first principles. They can only be estimated by modelling the entire system. And that modelling will be uncertain.
Reminds me of the first time I looked at FCS (flight control software). The code was replete with these magic numbers the FCS engineer had figured out – gains, he called them. He didn't compute them directly. He picked numbers, ran models, and hunted around for numbers that worked. Where "worked" meant the plane responded within certain boundaries and with certain patterns (pitch response has to look something like this). These numbers got tweaked further when you had real metal in the sky. At first it annoyed the hell out of me that this "gain" thing could not be computed directly or "solved" for.
Analogies can be bad, of course.
hunter said:
"The greatest compliment I can pay to you is that I am not certain where you stand on the AGW social side of the issue."
Hear! hear!
The genius of this site is exactly that. A great description of the underlying science of climate without resorting to all of the bias and contention that goes on all over the web these days.
I wish more sites would lose the bias and get on with a presentation of the science.
Keep up the great work!
As a Lukewarmer this is one of my favorite sites. We basically accept RTE as (say the words) settled science, or at least the best explanation we have. In a past life Modtran was just a tool I used and I didn't really have to understand the physics. It was a tool, it worked. We built things using those tools; things went boom. Anyways, I think that steering clear of the "social" issues is great. When I run into somebody who has a technical background but doesn't believe that GHGs have the potential to warm the planet, I send them here.
Great site. Clear writing. No polemics.
steven mosher,
I am pleased you noted my distinction. My position on the social AGW movement is pretty strong, but I know that I can be wrong.
I don’t want to be, but I will accept that if there really is a great CO2 driven calamity coming, we are obliged to do things about it.
But to keep on track for this site, and the tech issues, the example of the flight control software is great.
everson,
The problem is that much of AGW is simply that: opinion and polemics.
If we were dealing with the science, then we would all be talking about the costs and benefits of CO2 in the atmosphere, and how the measured changes of the last 100 years are roughly in the range of historical change, and how we might be approaching the Roman warming or the MWP, and how nice it is to be warm.
And how we need to reduce soot and other pollutants, and why are we so behind on going to nuke power, etc.
Instead we have, what we have.
So focusing on the mechanics is good, and wholesome.
Thanks Hunter.
I actually think there is a large (and influential) group of people who are scientifically or technically educated who, like you, want to understand the basic science, want to see that it is done with the same care they take in their jobs, and are willing to take action. These people are just not willing to be called names, to be lumped in with real live industry shills, to be lumped in with anti-science types, etc. We understand that the science is uncertain, and don't appreciate fear mongering, data hiding, etc.
A place like this where we can ask questions and get answers is what’s missing.
steve mosher,
I have literally lost customers over AGW. I have been accused of being some sort of oil-paid shill (show me the $!).
If I had to label myself, I would say I am a lukewarmer – but having grown up in the 1970s with the 'Late Great Planet Earth' claptrap, along with Howard Ruff, the 1970s ice age, the end of oil (1970s vintage), I have little to no tolerance for apocalyptic claims based on evidence that is not substantially different from noise. Add to that the rewrites of history with regard to the MWP, etc., and my bogosity meter pegs out.
My basic observation is that we muddle through pretty well, if we don't go all millennial (~990AD).
I severely distrust haughty elites that imply that only they are qualified to an opinion.
This is a great place, where somehow civility has reigned, and even if we disagree we can still be agreeable.
As an experimental fluid mechanics researcher, I worked alongside CFD modelers over many years. They have come a very long way, and can do some flows very well. If you know anything at all about fluid mechanics, you would know that time-varying full 3-D flow fields at high Reynolds numbers cannot be solved exactly. With complex initial and boundary conditions, the complexity is even greater. Add variable phases (rain, ice, liquid, gas), and particles, and a huge range of forcings (solar variability, ocean currents, volcanoes, etc.) and you have a very complex problem. Models (CFD+) put in such general factors as historical trends and approximate effects of the forcings to try to get future trends. You can always force a fit of past results with artificial adjustments of the parameters, but this has little to do with predictive power for the future, due to the short time good data(?) has been available. I would take these models with more than a little grain of salt until they predict a few years ahead with some general accuracy. None of the present models come close, so don't claim that because they have some of the fluid mechanics and physics right, they are on the way to being useful at the present level of capability.
Leonard,
I don't think I would disagree with any of that. This is all about that grain of salt. But there are several attitudes or positions that make conversations difficult.
1. RTE denial. You will find many people who believe that GHGs cannot "trap" IR. They just don't get how a "trace" gas can warm the surface.
2. "Models are worthless" scepticism. The version here goes something like: models can't predict the weather, so how can they predict the climate.. Sometimes this scepticism takes the form of "the airplane crashed, therefore CFD is useless" or "the GCM predicted +2C warming and it's only 1.8C, therefore it's useless".
3. There is a strong brand of "models are useless" scepticism that is best exemplified by the chaotists (tomvonk probably). These discussions hurt my head.
This is totally/kinda off topic (well, I think it may have something to do with absolute humidity)… but it's a little curiosity I haven't been able to google the answer to… or get an answer to from any other blog.
I guess this is global, but optical depth seems to vary with air pressure – it's an old farmers' weather prediction method. During a high, distant objects (mountains etc) appear more distant, with less visible definition to the details; they appear closer, with greater visible detail, before a change in the weather (it's very noticeable)… Is this the result of the SW light being scattered or absorbed by the greater density of water molecules in the atmosphere during a high pressure system? (It doesn't seem temperature dependent so much as pressure dependent.)
Random, I know, but I'm curious.
Thanks for all the nice comments on the site.
On models – everyone has a different background here, some have experience of fluid modeling so we should get some great insights.
GCMs are possibly the biggest modeling challenge around and we will be returning to them in Models On – and Off – the Catwalk (when I get around to finishing Part Two).
So, because of the different experiences here, I'm not sure whether the 1d vertical simulation of radiative transfer is seen as an embodiment of GCMs – and therefore probably wrong. Or whether, because it's modeling "the atmosphere", it's assumed to be tougher than an airplane wing.
There’s clearly a spread of opinion and probably some people reading but not commenting who aren’t sure either, so a brief explanation.
Everyone – well, many people – knows that modeling fluid flows is a tough challenge. Turbulence presents chaos.
But modeling other physics/engineering problems is a lot easier. Let's look at heat flow and dissipation in electronic products. 3d thermal models have been used for a long time and are very accurate. You have a lot of confidence in these models because they get the right results. How? The equations are straightforward, and there aren't unknown processes to consider. There are no chaotic elements – where slight changes to initial conditions bring totally different outcomes.
The processes that have to be considered include conductive heat flow across boundaries, conductive heat flow within products, radiation of heat, and convection of heat. It’s “complex” because the boundary conditions (the shape and properties of all these little components) are complex. It’s also complex because modeling convection is difficult (fluids again) and convection – if it operates – moves more heat than radiation and conduction.
Anyway, it’s a 4d model (3d through time) and gets good results. I might have missed a process in there. But the point is there aren’t many equations in the process. They can be examined. Before you start, just by looking at the mathematics that describe the process you expect to get a sensible result (so long as the boundary conditions are correctly described).
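As a toy illustration of that kind of well-behaved model (my sketch, with made-up numbers for a small aluminium part):

```python
import numpy as np

# 1D heat conduction by explicit finite differences - deterministic, non-chaotic.
alpha = 9.7e-5            # thermal diffusivity of aluminium, m^2/s
n, L = 50, 0.1            # grid points, rod length (m)
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha  # stable time step for the explicit scheme

T = np.full(n, 20.0)      # start at 20 C everywhere...
T[0] = 100.0              # ...with one end held at 100 C, the other at 20 C
for _ in range(20_000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
print(T[::10])  # settles to the same straight-line steady state every run
```

Run it twice with tiny perturbations and you get the same answer – unlike a turbulent flow.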
It’s all about the maths. How complex are the equations? Are processes described and solved by the fundamental equations? Or by “parameterization”?
You also ask questions like, are the constants well-known that describe this particular material for this process. If we talk about thermal modeling it might be the emissivity of the surface for example. If we talk about absorption of radiation by a particular gas we might ask how well known is the absorption cross-section at wavelength λ in the Beer-Lambert law?
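For reference, the Beer-Lambert law itself is a one-liner – illustrative numbers only, not a real CO2 cross-section:

```python
import math

def transmission(cross_section_cm2, number_density_cm3, path_cm):
    """Beer-Lambert law: fraction of monochromatic radiation transmitted."""
    tau = cross_section_cm2 * number_density_cm3 * path_cm  # optical depth
    return math.exp(-tau)

# Made-up values: tau = 0.1 over a 1 km path -> ~90% transmitted
print(transmission(1e-22, 1e16, 1e5))
```

The physics question is then how well that cross-section is known at each wavelength – and for these gases the answer, from decades of laboratory spectroscopy, is very well indeed.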
So if we return to the "radiative-convective" model, as outlined in Part Five, one of the important points is that the equations aren't solving the convective side.
That's why the equations use a known profile of temperature through the atmosphere to get the right answer. If they had to solve for convection we would have a lot more factors to consider.
So solving the radiative convective model is not so complex mathematically. There aren’t many equations. There aren’t large non-linearities. It’s computationally challenging because the absorption and emission at every wavelength has to be calculated. These properties have been thoroughly studied and detailed over many decades. So the properties themselves aren’t in question. And if we solve using the line by line model there aren’t any parameterizations for this absorptivity and emissivity. (The commonly used band models – parameterizations – have also been extensively studied and written about).
If we solve the models under cloudy conditions then there will be some cloud parameterizations. Which is why we often see results broken into cloudy and clear sky conditions, so that the better understood process is seen. (And the paper under discussion here also used clear sky).
So, very long comment, should have written a new post..
One approach is to look at the maths behind the process. It tells you how realizable an accurate result might be. And where complications might arise.
Of course, as we saw in the post, the model matched reality pretty well.
The confusion by the American Thinker writer was about why the 1997 value in the CO2 band was higher than the 1970 value (because the surface temperature was higher). Ignoring his confusion, this radiative convective model of a 1d section through the atmosphere does a pretty good, but not perfect, job of modeling absorption and re-radiation.
That’s why it gets used.
A GCM incorporates this radiative convective model along with 100s of other models.
science of doom,
You make an interesting point at the end of your latest response with regard to the '100s of other models'.
Just how many models are incorporated into the GCMs?
Steven,
I appreciate your comments. I think the people that only use the three types of arguments you describe are either not scientists, or are not reasonable scientists. I don’t know of many of the skeptics I agree with that take any of those positions. Please note that:
1) I do agree that there is an effect (commonly called a greenhouse effect), caused by water vapor, CO2, methane and some other gases. My problem (and I think the problem with most scientist skeptics) is with the type and level of feedback, which is claimed to be big but actually seems to be small or even negative, and which is invoked to make a small effect into a big problem. In addition, non-greenhouse effects such as long term ocean currents and sunspot/cloud effects are not reasonably included in the arguments.
2) I think models are useful in many types of studies and getting better all the time. However, the long term climate is a particularly complex problem, with many less understood physical processes, and limited data available for a limited time period. My problem is how such strong claims were made from such weak understanding.
3) I do not conclude models are useless due to chaos; climate clearly has a major chaotic component, but it also has strong direct drivers. I think the combination of drivers (orbital tilt, solar activity, volcanoes, etc.) and lagging moderators (mainly long term ocean currents), along with greenhouse gasses, aerosols, and feedback do generally direct the climate trend, but with the chaotic possibility of unexpected medium term trends.
Hunter:
Think of it more like a component. The largest building blocks are atmosphere, ocean, ice and land surface (there should be more explanation of this breakdown in the next part of the Models On – and Off – the Catwalk series).
Then within each building block there will be components or elements that define different processes. Of course they interact.
So if someone wants to carry out a calculation of the RTE through a particular profile in the atmosphere, they might use only that component of the GCM.
If someone wants to run a 1000-year ocean simulation to work out what happens to the thermohaline current (see Predictability? With a Pinch of Salt Please..) they will want to run it without the atmosphere component (although they will want some kind of “broad brush” input) so that the computational time is reasonable. Oceans move slower than the atmosphere.
Then there’s a cloud component, an aerosol component..
I must add my thanks for a very illuminating blog with the most balanced approach of any I have yet seen on climate change.
I would be inclined to agree with hunter, that the models are useful for studying many aspects of climate, but as yet unproven for long term prediction. I would add that the feedback assumptions they include at present seem to have at least a whiff of confirmation bias.
—moderators snip —
No doubt if temperatures refuse to cooperate there will eventually be a rethink.
Alex Heyworth:
Thank you for the kind words.
I had to snip a little piece out of your post. A few people sailing close to the vortex of Etiquette breach made it through. You didn’t quite. It’s probably a little bit random.
It’s a fine line on people’s motivations but even if they are from an understanding perspective I will be inclined to snip and chop.
Hopefully everyone understands so we can avoid the chaos that threatens.
No problem. Your blog=your rules+your interpretation!
I enjoyed your post here regarding my American Thinker article and I'm deeply flattered that it stimulated such a detailed post. Thanks also for your kind comments on the way I posted on Skeptical Science; it has always been my contention that if I'm talking to someone who has the intent of understanding the science (like I do) and not concealing a political agenda, then a productive conversation can be had.
I have bookmarked this site and added it to my rotation! Well done, and I enjoy what I've seen so far.
Some of the arguments you raised here were put forth in the Skeptical Science article, as well as in a well thought out rebuttal on American Thinker toward the end of the comments (right above where you recently commented). I will attempt to summarize those comments here, but I encourage your readers to visit those sites and take a look for themselves. I know why the authors of the papers were using climate models to simulate the removal of the effect of surface temperatures and humidity, and that the 'theory' says you must do that. But my problem lies in two peer-reviewed papers that cast doubt on that theory and that method.
paper 1 is at: http://www-ramanathan.ucsd.edu/RamAmbio.pdf
paper 2 is at http://landshape.org/enm/wp-content/uploads/2009/01/philipona2004-radiation.pdf
According to paper 1, the paper by Ramanathan entitled "Trace-Gas Greenhouse Effect and Global Warming", the author states on page 3 (which is really labeled page 189, since it was part of a larger journal I guess) under the section Anthropogenic Enhancement of the Greenhouse Effect: "an increase in greenhouse gas such as CO2 will lead to a further reduction in OLR."
Notice there is no clarifying statement about having to use model-simulated graphs to 'correct' for surface temperatures and water vapor before seeing that OLR reduction.
And in the paragraph right above that, the author states: "since the emission increases with temperature, the absorbed energy is much larger than the emitted energy, leading to a net trapping of longwave photons in the atmosphere."
Here the author states clearly that, even taking into account higher emissions from warmer surfaces, the net will still be a reduction.
According to paper 2, section 3, in order to make the models agree with the measured values over this 8 year period he had to enter an atmospheric CO2 concentration increase of 10%. But the problem is the CO2 actually only increased 3.3% during that timeframe, so the model underestimates the contribution of CO2 to global temperatures by a factor of 3. It's on pages 2 and 3 where he discusses this. Of course I would make the case that the model is way off, and this was one of the driving forces of my AT article. Maybe we have misrepresented the effects of humidity, temperature and CO2 in the models.
I attempted to create a simple model for calculation purposes on the Skeptical Science site, and even with overly conservative numbers (that favored AGW) I couldn't account for the 0.73C increase in temperatures over the 36 year period by the reduction in OLR, even using the authors' modeled results (which was between a 1-2K reduction in OLR Brightness Temperature). My explanation of that is long and tedious, and if you don't want to go to the Skeptical Science site, here is a copy of that post with the links cleaned up – I stink at knowing how to embed links into these posts.
Remember that I am arguing that there is no OLR delta between 2006 and 1970 over the regions where CO2 absorbs, and I feel the proof of that is fairly evident from the graph showing the delta. But let's say I'm using "biased" eyes and in reality there is (on average) a -1K delta in Brightness Temperature (which could be argued from figure 5 in my article if you focused more on the lower wavenumbers/cm and ignored the higher wavenumber/cm portion that actually shows an increase in OLR).
So let's assume that -1K delta and see what it equates to as a W/m2 delta over 36 years. Once we get that difference in OLR flux, let's plug it into radiative forcing x climate sensitivity and see what temperature change it predicts we should have seen. I'm going to do that by calculating the W/m2 for two ideal blackbodies (radiating at temperatures 1K apart) over all wavelengths, to get the delta for the entire spectrum. Then, taking the ratio of the area under the blackbody curve which we are concerned with – the 700-780 wavenumber/cm region plus the other peak (which isn't shown in the data but we know is there, although its OLR is less, as we know from http://cimss.ssec.wisc.edu/goes/sndprf/spectra.gif) – find out what proportion of the overall delta is associated with CO2 absorption. But first let's look at that ratio we are concerned with.
As I noted in my post (#28) and as RC noted (http://www.realclimate.org/index.php/archives/2007/08/the-co2-problem-in-6-easy-steps) under Step 2, calculating the OLR flux from this data can't be done by hand. But let me try to simplify the situation, make it a worst case, and see if that makes the calculations easier. As is noted in the RC post (http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument-part-ii) and the post on this page topic (#34 Riccardo), the CO2 absorption is saturated at and around the peak (15um) and all we are left with are these "spurious peaks". So these edges are all we'll be concerned with, since the saturated delta will be zero. Next, pull up graphs 3 and 4 in my article, which correspond to the TES and IRIS measured data (2006 and 1970 respectively). You can see that they both roughly start around 220K BT (at 700 waves/cm) and then rise to about 290K BT at around 780 waves/cm (which comprises the CO2 absorption range for the data in this paper). For the sake of estimating (and doing this by hand) let's treat that entire region as if it were at a BT of 255K (in the middle of that linear rise). I realize this is an assumption, but as I stated above, the other side of this absorption will be less, so I feel I'm being conservative and biased toward the AGW position.
Take a curve showing the radiation of a blackbody at 255K. Plot that out and draw vertical lines at 13.5um (750 waves/cm) and 17um (588 waves/cm), then shade that region of the curve in. That represents the wavelengths that are absorbed by CO2. If you ratio that integrated area to the total integrated area of the curve (over the entire wavelength range) you get around 21.5%. I'll be generous and even call it 22%. Notice also that the wavenumbers/cm shown in the three cited papers (starting at 700) represent the higher magnitude radiation on this 255K radiation curve (whose peak amplitude is around 11um). If we assume at and around the 15um area that there is total absorption, then let's say the portion that absorbs to extinction is half of the total inside that shaded region (and on the page of RC that talks about CO2 saturation, that sounds conservative). So, half of the energy in the shaded region is what we'll use to calculate the W/m2 conversion from the data in the cited paper. So that 22% is now 11%.
Let's use the Stefan-Boltzmann law to find the delta in OLR flux between two ideal blackbodies separated by 1K (255 vs. 254K). Taking 255^4 minus 254^4 and multiplying the result by 5.67x10^-8 you get 3.74 W/m2. So the delta in energy from those ideal blackbodies over all wavelengths is 3.74 W/m2. But we are only interested in the part that CO2 absorbs, and taking 11% of that yields 0.41 W/m2. So in 36 years we see a difference in OLR of 0.41 W/m2. Assuming 100% of that contributes to forcing attributed to GHGs, and using the climate sensitivity factor of 0.75C/(W/m2) which is also in the first RC link, you get a contribution of 0.31C in 36 years! So at best, using VERY conservative estimates (which I don't agree with), the OLR reductions contributed at most 0.31C warming to the earth from 1970 to 2006. According to the GISS trend map, from 1970 to 2006 we've seen a warming of 0.73C. Even with my oversimplifications and estimates to skew the number higher, the predicted increase is less than half of the observed. And this is neglecting the net cooling forcing due to aerosols and natural changes, which is 1.6 W/m2 (again, mentioned in the first RC post). If we deduct 1.6 W/m2 from the 3.74 W/m2 then we get 2.14 W/m2, and that takes the temperature delta down to 0.18C (after multiplying by 11% and 0.75C/W/m2).
gary thompson:
Thanks for posting, Gary, and for the kind comments – and sorry the spam filter held your comment up. It's not a discerning spam filter.. a little over-eager at times.
Plenty to read and take in, so expect a response in a few days. I’m not as quick as other folk.
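In the meantime, for readers who want to check Gary's chain of numbers, here is the same arithmetic transcribed into a short script. The transcription is mine; the "half the band is saturated" factor and the 0.75C/(W/m2) sensitivity are his assumptions. The only substitution is that the blackbody band fraction is integrated numerically rather than read off a graph – it comes out nearer 17% than his graphical ~22%, which lowers the implied warming a little without changing the structure of his argument:

```python
import numpy as np

H, C, KB, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.67e-8

def planck_wavelength(lam_m, T):
    """Planck spectral radiance per unit wavelength (common factors cancel below)."""
    return 2 * H * C**2 / lam_m**5 / np.expm1(H * C / (lam_m * KB * T))

def band_fraction(T, lam_lo_um, lam_hi_um):
    """Fraction of total blackbody emission between two wavelengths (um)."""
    lam = np.linspace(0.5e-6, 200e-6, 400_000)  # covers nearly all 255 K emission
    b = planck_wavelength(lam, T)
    in_band = (lam >= lam_lo_um * 1e-6) & (lam <= lam_hi_um * 1e-6)
    return b[in_band].sum() / b.sum()           # uniform grid, so dlam cancels

co2_frac = band_fraction(255.0, 13.5, 17.0)  # Gary's graphical estimate: ~21.5%
delta_flux = SIGMA * (255.0**4 - 254.0**4)   # blackbodies 1 K apart: ~3.74 W/m^2
forcing = delta_flux * co2_frac * 0.5        # his "half the band is saturated" cut
warming = forcing * 0.75                     # times his climate sensitivity
print(f"band fraction: {co2_frac:.3f}")
print(f"flux delta {delta_flux:.2f} W/m^2 -> forcing {forcing:.2f} W/m^2 "
      f"-> implied warming {warming:.2f} C over 36 years")
```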
No problem, take as long as you want. I look forward to your comments and critique. I'm sure I'm like you – I have another job that pays the bills, and it isn't in climate science, so I understand the delay.
And there is definitely no need to apologize for spam filters. That comes with the territory and is assumed when having conversations on these websites.
[…] This post is a follow on from my original article: American Thinker – the Difference between a Smoking Gun and a Science Paper. […]
gary thompson:
Responses to your comment are in a new post:
https://scienceofdoom.com/2010/03/27/american-thinker-smoking-gun-gary-thompsons-comments-examined/