Perhaps machine learning would be more accurate.

If I understand correctly, M&W were also the first to fully describe radiative-convective equilibrium in the atmosphere, and possibly the first to recognize that radiative imbalances at the TOA were critical. Before then, a surface energy balance perspective dominated.

One can look at the response to doubling CO2 from a TOA energy balance perspective (+3.5 W/m2 less heat escaping) or a surface energy balance perspective (+1 W/m2 more heat arriving).

From the TOA perspective, the planet looks like a graybody with a surface temperature of 288 K and an emissivity of 0.61. Planck feedback for such an object is -3.3 W/m2/K. So slightly more than a 1 K increase in surface temperature (with no feedbacks) can restore radiative balance after a doubling of CO2.
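That graybody arithmetic can be checked in a few lines (the 288 K and 0.61 values come from the comment; 3.7 W/m2 is the standard 2xCO2 forcing used later in the thread):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m2/K4

T = 288.0        # surface temperature, K
eps = 0.61       # effective emissivity seen from the TOA

olr = eps * SIGMA * T**4          # outgoing longwave, ~238 W/m2
planck = 4 * eps * SIGMA * T**3   # Planck response d(OLR)/dT, ~3.3 W/m2/K
dT_no_fb = 3.7 / planck           # no-feedback warming for 2xCO2, ~1.1 K

print(f"OLR = {olr:.0f} W/m2, Planck = {planck:.2f} W/m2/K, dT = {dT_no_fb:.2f} K")
```

The emissivity of 0.61 is just the ratio of the ~240 W/m2 actually escaping to space to the ~390 W/m2 a 288 K blackbody would emit.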

From the surface energy balance perspective, the planet is nearly a blackbody at 288 K. Planck feedback for such an object is -5.4 W/m2/K, meaning it takes only a 0.2 K rise in surface temperature to restore radiative balance in response to a 1 W/m2 increase in DLR. That is the 0.17 K dTs in Figure 4. Somewhere Ramanathan has figured that the warmer atmosphere is going to radiate an additional 2 W/m2 to the surface, meaning about 0.5 K of warming is needed without feedbacks.
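The surface-side numbers are the same calculation with emissivity 1:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m2/K4
T = 288.0        # surface temperature, K

planck_bb = 4 * SIGMA * T**3   # blackbody Planck response, ~5.4 W/m2/K
dT_dlr = 1.0 / planck_bb       # warming to balance +1 W/m2 of DLR, ~0.18 K
dT_total = 3.0 / planck_bb     # with the extra 2 W/m2 from the atmosphere, ~0.55 K

print(f"{planck_bb:.2f} W/m2/K, {dT_dlr:.2f} K, {dT_total:.2f} K")
```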

In either case, the climate feedback parameter is critical: the additional amount of heat emitted or reflected to space per degK of surface warming (W/m2/K). And the increase in upward heat transfer per degK of surface warming must be the same at all altitudes as it is at the TOA (where only radiation is involved). However, while one can calculate changes in radiation with temperature from first principles, convection is more challenging. That is what Ramanathan is trying to do in this paper (Table 3).

For example, if one assumes that the flux of latent heat from the surface rises as fast as saturation vapor pressure (7%/K), then the increased latent heat leaving the surface alone is 5.6 W/m2/K, far too big if feedback is positive. I haven't checked his rationale for all of the values in Table 3.
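The 5.6 W/m2/K figure implies a global-mean surface latent heat flux of about 80 W/m2; that flux value is my assumption (a typical textbook number), not stated in the comment:

```python
LE = 80.0   # global-mean surface latent heat flux, W/m2 (assumed typical value)
cc = 0.07   # Clausius-Clapeyron scaling of saturation vapor pressure, 7 %/K

dLE = LE * cc   # increase in latent heat flux per K of surface warming
print(f"{dLE:.1f} W/m2/K")
```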

Hope this helps.

I do think there is information on aerosols in the temperature record. Fitting the past 40 years independently of the pre-1970 period is one way to extract this information.

I found this detail on effective radiative forcings up to 2011. It appears that the aerosol indirect effect is estimated at around -0.7 W/m2 by AR5. If that is reduced significantly, it will reduce any observationally based sensitivity estimate.

I had some time to go back and look at your comments in more detail and check some numbers. Below are some comments and added detail:

1) I used a 2xCO2 forcing value of 3.7 W/m2. The best match to the CO2 forcing values in Nick's spreadsheet is 3.74.

2) To estimate the forcing change, I ran a regression through the last 40 years in the spreadsheet and multiplied the per-year slope of 0.0387 W/m2 by 40, giving a total of 1.55 W/m2, or 0.41 of a CO2 doubling.
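The arithmetic in points 1 and 2, using only the numbers quoted above:

```python
slope = 0.0387   # regression slope of forcing, W/m2 per year (from the comment)
years = 40
f2x = 3.74       # 2xCO2 forcing in Nick's spreadsheet, W/m2

dF = slope * years   # total forcing change over 40 years, ~1.55 W/m2
frac = dF / f2x      # expressed as a fraction of a CO2 doubling, ~0.41

print(f"{dF:.2f} W/m2 = {frac:.2f} of a doubling")
```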

3) I'm not familiar with the numbers behind the Otto paper. The aerosol values you quote are higher than the values in Nick's sheet, so there is a discrepancy.

4) Our linear fit estimates would be expected to underestimate ECS, since models approach final warming asymptotically, not in a straight line. Isaac Held had a nice blog post on this.
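A toy one-box energy balance model illustrates the point; the heat capacity and feedback parameter here are illustrative assumptions, not fits to any dataset:

```python
import numpy as np

# One-box model: C dT/dt = F(t) - lam * T. Under a steadily rising
# forcing, T lags its equilibrium value F/lam, so a linear fit of T
# against F yields a smaller sensitivity than the true 1/lam.
C = 4.2e8     # heat capacity of ~100 m ocean mixed layer, J/m2/K (assumption)
lam = 1.2     # climate feedback parameter, W/m2/K (assumption)
dt = 86400.0  # one-day time step, s
n = 40 * 365  # 40 years of daily steps

t = np.arange(n) * dt
F = 0.0387 * t / (365.0 * 86400.0)  # ramp forcing, 0.0387 W/m2 per year
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C  # forward Euler

fitted = np.polyfit(F, T, 1)[0]  # regression sensitivity, K/(W/m2)
print(f"regression: {fitted:.2f} K/(W/m2), equilibrium: {1 / lam:.2f} K/(W/m2)")
```

With these numbers the time constant C/lam is about a decade, and the regressed sensitivity comes out well below the equilibrium value 1/lam.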

5) As our issues with numbers illustrate, the best way to evaluate climate models is to compare them directly to observed temperature. Here is what I get for linear trends over the past 40 years: HadCRUT – 0.18 K/decade, BEST – 0.19 K/decade, Cowtan and Way – 0.191 K/decade, RCP6 SST blended – 0.189 K/decade. So the agreement is good.
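For what it's worth, the spread among those four trend estimates is small:

```python
# Trend values quoted above, K/decade over the past 40 years.
trends = {
    "HadCRUT": 0.18,
    "BEST": 0.19,
    "Cowtan and Way": 0.191,
    "RCP6 SST blended": 0.189,
}

spread = max(trends.values()) - min(trends.values())
print(f"spread = {spread:.3f} K/decade")  # about 0.01 K/decade
```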

6) Yes, indirect aerosol effects are being scaled back, but the paper below finds that models are underestimating aerosol direct effects. Per my earlier comment, the overall observed temperature trend indicates that aerosols have played an important role. We will need to wait for CMIP6 for an updated and hopefully improved estimate.

(hopefully this comment ends up in the correct location)

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018GL078298
