Folks who do combustion, solar physics, and atmospheric measurements understand that in systems where the light source(s) are at the same temperature as the absorbers, you cannot naively use the absorption coefficient.

Question: How is Su determined when calculating absorption and emission from radiosonde profiles? If you’re using the 2 m air temperature as equal to the surface temperature, you’re assuming your conclusion that Ta = Tg. On a clear, calm day there can be very large surface temperature gradients when the sun is high above the horizon. On a calm night, the surface temperature can be much lower than the 2 m air temperature. An IR thermometer would probably be the best measure of effective surface temperature, because it’s effectively measuring Su.

In reference to your comment above (April 24, 2011 at 5:01 pm):

The method is also sensitive enough to show the signal, if it were there.

and my reply (April 25, 2011 at 12:17 am):

Where’s the statistical analysis that shows that the theoretical trend is, in fact, outside the 95% confidence limits of the data? Eyeballing the data, I seriously doubt that it is.

Ken Gregory’s spreadsheet had the data to reproduce the global average IR absorption anomaly graph in slide 16 of the AGU presentation. I did a regression analysis in Excel and added the 95% confidence limits for the regression to the graph. The 95% confidence limits on the slope of the regression are -0.0016 to 0.0027. That means a total change in absorption over the 61-year period of 0.16 would still be inside the 95% confidence limits. Unfortunately, the CO2-only data isn’t in the spreadsheet, but you can overlay the plots and see that the CO2 line is inside the 95% limits. So the method is not, in fact, sensitive enough to show the signal, as I predicted.
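The test described above is easy to sketch in code. The spreadsheet data aren’t reproduced here, so the series below is synthetic trendless noise standing in for the 61-year anomaly record; the point is the mechanics of putting a 95% confidence interval on the regression slope.

```python
import numpy as np

# Illustrative only: Ken Gregory's actual data are not reproduced here, so
# trendless synthetic noise stands in for the 61-year absorption anomaly series.
rng = np.random.default_rng(0)
x = np.arange(61, dtype=float)        # 61 years
y = rng.normal(0.0, 0.05, x.size)     # absorption anomaly (synthetic)

# Ordinary least squares slope and its standard error
n = x.size
xm, ym = x.mean(), y.mean()
sxx = ((x - xm) ** 2).sum()
slope = ((x - xm) * (y - ym)).sum() / sxx
resid = y - (ym + slope * (x - xm))
se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)

# 95% confidence interval on the slope (t critical value ~= 2.00 for 59 dof)
t_crit = 2.001
lo, hi = slope - t_crit * se, slope + t_crit * se
print(f"slope 95% CI: [{lo:.4f}, {hi:.4f}] per year")
print(f"implied change over 61 years: [{61*lo:.3f}, {61*hi:.3f}]")
```

A theoretical trend is “detectable” by this method only if its slope falls outside the interval `[lo, hi]`.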

The 2nd case, scenario B, has exp(-1.58), not exp(-1.8).

I get a decrease in transmittance for the τB = 30 case.

exp(-1.8)*0.8 + exp(-1.8)*0.2 = 0.1653

exp(-1.8)*0.8 + exp(-30)*0.2 = 0.1322

That corresponds to an optical thickness of 2.023. That’s a lot smaller than the weighted arithmetic average, though.
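The arithmetic in this comment can be checked in a few lines (using the exp(-1.8) figure as written here, which the reply above disputes). The conversion back to an effective optical thickness is just -ln of the weighted transmittance:

```python
import math

# Weights and optical thicknesses as used in the comment above
w_A, w_B = 0.8, 0.2
tau_A, tau_B = 1.8, 30.0

# Weighted transmittance
t = w_A * math.exp(-tau_A) + w_B * math.exp(-tau_B)

# Effective optical thickness implied by that transmittance
tau_eff = -math.log(t)

# Weighted arithmetic average of tau, for comparison
tau_avg = w_A * tau_A + w_B * tau_B

print(t, tau_eff, tau_avg)
```

This reproduces the 0.1322 transmittance and the effective optical thickness of 2.023, against a weighted arithmetic average of 7.44.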

And in doing so realized that – of side interest only – the optical thickness is not really the (alleged) constant.

It is transmittance that is really the (alleged) constant. Planck-weighted optical thickness can increase dramatically with no noticeable impact on transmittance.

And of course the calculation in M2010 (and M2007) is really Planck-weighted transmittance converted back to optical thickness.

For those few who might be interested:

Suppose you have only 2 parts of the spectrum, A & B. And suppose A is 80% of the weighted spectrum and B is 20%:

**Scenario 1**

τ_{A} = 1.8

τ_{B} = 1.8

This is nice and simple τ_{av} = 1.8.

Transmittance, t = e^{-1.8} = 0.165

**Scenario 2**

τ_{A} = 1.58

τ_{B} = 30

This averaging is nice and simple as well, τ_{av} = 1.58 x 0.8 + 30 x 0.2 = 7.26

Ok, so optical thickness has had a massive increase.

Transmittance, t = e^{-1.58} x 0.8 + e^{-30} x 0.2 = 0.165

Transmittance is the same.

*So an “average optical thickness” increases by a factor of 4, while “average transmittance” is constant.*
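The two scenarios above can be reproduced in a short script, using the same weights (0.8 and 0.2) and optical thicknesses:

```python
import math

w = (0.8, 0.2)   # spectral weights for parts A and B

def avg_tau(taus):
    """Weighted arithmetic average of optical thickness."""
    return sum(wi * ti for wi, ti in zip(w, taus))

def transmittance(taus):
    """Weighted average of exp(-tau) over the spectral parts."""
    return sum(wi * math.exp(-ti) for wi, ti in zip(w, taus))

scenario_1 = (1.8, 1.8)
scenario_2 = (1.58, 30.0)

print(avg_tau(scenario_1), transmittance(scenario_1))   # 1.8, ~0.165
print(avg_tau(scenario_2), transmittance(scenario_2))   # ~7.26, ~0.165
```

The average optical thickness jumps from 1.8 to 7.26 while the transmittance stays at 0.165, which is the whole point: averaging τ and averaging exp(-τ) are not the same operation.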

I should point out that this is not describing any flaw in Miskolczi’s methodology, as the maths he describes in his paper convert a global transmittance back to an optical thickness.

But average or global optical thickness is actually slightly different. It’s a definition thing.

Of curiosity value only.

Gn = (Su + K – OLR)/OLR
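The quoted formula is straightforward to evaluate. The flux values below are illustrative round numbers in W/m², assumed here purely for demonstration (taking Su as the surface upward flux, K as the non-radiative surface flux, and OLR as the outgoing longwave radiation); they are not figures from the paper.

```python
def normalized_greenhouse(Su, K, OLR):
    """Gn = (Su + K - OLR) / OLR, the formula quoted above."""
    return (Su + K - OLR) / OLR

# Illustrative round numbers in W/m^2, assumed for demonstration only
print(round(normalized_greenhouse(Su=396.0, K=97.0, OLR=239.0), 3))  # 1.063
```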
