Climate models are the best tools we have for estimating the future climate state. What will the world be like if we double the amount of CO2 in the atmosphere?
For a non-technical person, and this series is written for non-technical people, it’s hard to understand climate models. Should we trust them because climate scientists do? Do climate scientists trust them? What does “trust them” even mean?
There are lots of papers written by climate scientists on the difficult subject of evaluating climate models. They do some things well. They do some things badly. Different models get different results. Sometimes widely different results.
If you read many climate blog sites that claim to be “skeptic” you may learn:
- climate models are programmed to produce a certain amount of warming
- feedbacks are built in, based on pre-conceptions about the climate and CO2
- climate scientists running climate models always produce the worst case outcomes
- garbage in, garbage out
These are common themes. They have no basis in reality.
If you learn climate science from people who have never read a paper on climate models, or never reviewed the equations used, or have no idea even what the right equations should be (this last group is the most entertaining, and covers 99.9% of them) – what can you expect?
If you learn about outcomes of models from reading press releases produced by activists, faithfully reported by media stenographers – rather than reading tens of papers on model outputs for certain scenarios… how can you expect to know the range of results that are produced? How can you conclude that the worst case outcomes are always produced from models?
It’s easy to prove these “skeptic” points wrong. If you understand the physics behind turbulence and heat transfer and read a technical paper on models you can easily see that models don’t have prescribed feedbacks or prescribed warming. On this site, in comments on previous articles I’ve shown the published model maths to people who think this, or have been taught this. No one has commented further. That’s a clue. Ignorance is bliss.
If you can’t read Chinese, why would you think you can comment on the accuracy of Chinese to English translations?
But there aren’t two sides to every story. Sometimes there’s only one. Sometimes there are 10. The choice isn’t “they are all garbage”, or “they are all trustworthy”.
The story is complicated and I’ll try and explain more in future articles.
Note to commenters – if you want to question the “greenhouse” effect post your comment in one of the many articles about that, e.g. The “Greenhouse” Effect Explained in Simple Terms. Comments placed here on the science basics will just be deleted.
If you want to question model physics – the subject of this article – that is completely fine. I’ll be happy to provide the technical documents that describe the model maths.
Articles in this Series
Opinions and Perspectives – 1 – The Consensus
Opinions and Perspectives – 2 – There is More than One Proposition in Climate Science
Opinions and Perspectives – 3 – How much CO2 will there be? And Activists in Disguise
Opinions and Perspectives – 3.5 – Follow up to “How much CO2 will there be?”
What is the point in an article about climate models that are based on the greenhouse effect, and which assumes it is real, but then tells us we cannot comment on it? This is scientific fraud.
Alan,
You can comment on, or question, the “greenhouse” effect in The “Greenhouse” Effect Explained in Simple Terms, or any of the many other articles on that subject.
There have been 1000s of comments on that one subject here on this blog. Add your insights over there. I don’t want to derail this discussion for something on which we have had 10s of articles and 1000s of comments.
Dr Roy Clark explains how and why the climate consensus on global warming and climate change are wrong. Their central error is a reliance on a crude (and wrong) model by Manabe and Wetherald (1967, 1975), and iterations on that model. https://www.amazon.com/dp/B005WLEN8W/
Well, this man, Roy Clark, has an opinion. ” A doubling of the atmospheric CO2 concentration has no effect on the Earth’s climate.”
I don't think this contributes to an enlightening discussion. And as SoD writes: "I don't want to derail this discussion for something on which we have had 10s of articles and 1000s of comments."
Roy Clark’s detailed refutation is inside the book in 100+ pages of words and 90+ diagrams.
Why bother commenting on a book you’ve not read, because you, absolutely, must have the last word?
SoD: “Climate models are the best tools we have for estimating the future climate state. What will the world be like if we double the amount of CO2 in the atmosphere?”
And climate models are the best tool to give us wrong ideas about the future climate state. We know that climate models are systematically biased, which means that the bias in most of the models goes in the same direction. This is affecting parameters through the whole climate system, from ice crystal scattering high up to the deepest ocean uptake. And it is affecting all kinds of feedbacks, like water vapor, clouds, CO2 outgassing, wind systems and much more. There are great uncertainties when it comes to long-term CO2 sinks. Then it doesn't help much to have a multimodel mean. A small bias can lead us to wrong conclusions when it is used for long-term calculation; 50 small systematic biases can lead us astray.
Models have no deeper understanding of the dynamics of climate, and cannot stop estimating and begin to think. And luckily there are some users of models who are sober, and who wonder when models don't fit observations.
But there are too many scientists (read activists) who get high on their high-sensitivity models.
We know that climate models are systematically biased,
We do?
If you run the CMIP5 models with the real-world forcing inputs (volcanos, GHGs, solar, etc.), and you take care to compare like with like (comparing the same parts of the model results as we have observations), then their results appear to be consistent with observations.
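As a rough illustration of the "compare like with like" step described above, here is a minimal Python sketch. Everything in it is a hypothetical stand-in (random arrays instead of real model output or HadCRUT-style gridded observations); it only shows the idea of masking the model field to the observed coverage before taking an area-weighted global mean.

```python
import numpy as np

# Hypothetical stand-ins: 2-D (lat x lon) annual-mean temperature anomaly
# fields on a common 1-degree grid. Real use would load model output and
# gridded observations instead of random numbers.
lats = np.linspace(-89.5, 89.5, 180)
model_tas = np.random.randn(180, 360)
obs_tas = np.where(np.random.rand(180, 360) < 0.7,
                   np.random.randn(180, 360), np.nan)   # gappy observational coverage

weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((180, 360))
obs_mask = np.isfinite(obs_tas)                          # cells where observations exist

def masked_global_mean(field, mask, w):
    """Area-weighted mean over grid cells with observational coverage."""
    return np.nansum(np.where(mask, field, 0.0) * w) / np.sum(w * mask)

print(masked_global_mean(model_tas, obs_mask, weights))  # model, obs coverage only
print(masked_global_mean(obs_tas, obs_mask, weights))    # observations
```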
Conclusion from a sober scientist:
” Conclusions
Climate prediction models provide the scientific input which underpins climate change mitigation treaties and adaptation strategies. Whilst there has been considerable improvement in climate simulations over recent years, climate models have quantifiable shortcomings and develop biases of magnitude comparable to the climate change signals such models are trying to predict.”
From: Diagnosing the causes of bias in climate models – why is it so hard?
T. N. Palmer & Antje Weisheimer 2011
And systematic errors in models are discussed:
Systematic Errors in Weather and Climate Models: Nature, Origins, and Way Forward. From Ayrton Zadra et al., 2018
“All model evaluation efforts reveal differences when compared to observations. These differences may reflect observational uncertainty, internal variability, or errors/biases in the representation of physical processes. The following list represents errors that were noted specifically during the meeting:
● Convective precipitation: diurnal cycle (timing and intensity); the organization of convective systems; precipitation intensity and distribution; relationship with column integrated water vapor, SST, and vertical velocity;
● Cloud microphysics: errors linked to mixed-phase, supercooled liquid cloud and warm rain;
● Precipitation over orography: spatial distribution and intensity errors;
● MJO modelling: propagation, response to mean errors and teleconnections;
● Sub-tropical boundary layer clouds: still underrepresented and tending to be too bright in models; their variation with large-scale parameters remain uncertain; their representation may have a coupled component/feedback;
● Double Inter Tropical Convergence Zone/biased ENSO: a complex combination of westward ENSO overextension, cloud-ocean interaction, representation of tropical instability waves (TIW);
● Tropical cyclones: high-resolution forecasts tend to produce too intense cyclones, although moderate improvements are seen from ocean coupling; wind-pressure relationship errors are systematic;
● Surface drag: biases, variability and predictability of large-scale dynamics shown to be sensitive to surface drag; CMIP5 mean circulation errors are consistent with insufficient drag in models;
● Systematic errors in the representation of heterogeneity of soil;
● Stochastic physics: current schemes, whilst beneficial, do not necessarily/sufficiently capture all aspects of model uncertainty;
● Outstanding errors in the modeling of surface fluxes; errors in the representation of the diurnal cycle of surface temperature;
● Errors in variability and trends in historical external forcings;
● Challenges in the prediction of mid-latitude synoptic regimes and blocking;
● Model errors in the representation of Teleconnections through inadequate stratosphere troposphere coupling;
● Model biases in mean state, diabatic heating, SST, errors in meridional wind response, tropospheric jet stream impacts simulation of teleconnections."
You can start with Lewis and Curry 2018, where observationally constrained energy balance methods are used. The GCM mean ECS is, I believe, outside their 5% – 95% range. I would need to double check the TCR values.
Models that have the lowest ITCZ bias have higher ECS.
https://www.researchgate.net/publication/276471866_Spread_of_Model_Climate_Sensitivity_Linked_to_Double-Intertropical_Convergence_Zone_Bias_Double-ITCZ_Bias_and_ECS
I’ve never placed much confidence in the emergent constraint business. Selection bias is unavoidable in my opinion.
If you don't give credence to IPCC forcing estimates, then without another source of data it's impossible to estimate TCR using historical temps.
I'm not very excited about ECS estimates in the emergent constraint papers either. The point I was trying to make is that there is no evidence that any of the "errors" in nobodysknowledge's list bias climate model ECS estimates. The emergent constraint papers show in general that models with the best performance tend to have higher ECS. If models were overestimating ECS due to "errors" then I would expect the opposite.
Ken_Ch8.pdf
I found this detail on effective radiative forcings up to 2011. It appears that the aerosol indirect effect is estimated at around -0.7 W/m2 by AR5. If that is reduced significantly, it will reduce any observationally based sensitivity estimate.
Sorry, I should have said -0.4 W/m2.
Well Windchaser, there are a large number of model outputs that can be compared to data. Most that pertain to regional climate are not well reproduced. Recall all the papers on why models have a higher ECS than shown by recent observations? They all point out that models have a different pattern of warming SSTs than observations, and that when the models are driven by that SST pattern their apparent TCR is lower and closer to the EBMs'. This is all framed as "but in the very long term the models must be right". But the correct point here is lack of model skill on a very important measure.
Yeah, the moment I hit “submit” on that reply, I thought I should’ve clarified that I was just talking about the global temperatures. I agree there are systematic biases in other parts of the models.
But I don’t think the models have ECS higher than recent observations. That has not been shown, not even a little, as any discrepancy between models and observations can also be attributed to comparing apples-to-oranges, or not using the real-world forcings.
If you say "the models have ECS higher than that shown by recent observations", you need to show it.
Sorry Windchaser my response is above and out of place
SOD, the four talking points you point to are not really representative of skeptical scientists. Lindzen, for example, says that models get the large-scale dynamics mostly right, but that aerosol forcing is a giant knob used to compensate for high ECS so that models can still do a decent job of hindcasting global temperature.
Experienced fluid modelers and even Hansen himself know there are serious and obvious issues with climate models. The main problem is that the truncation errors are vastly larger than the changes in energy fluxes they are trying to model. Parameters are tuned to get certain measures to agree with the data, such as top-of-atmosphere energy fluxes. Skill on other measures is more a matter of chance and probably is due to cancellation of errors. On many more detailed measures, such as the pattern of recent warming, cloud fraction as a function of latitude, etc., the skill is not good.
My main complaint is just that so much time and money is spent developing and running models that there is little time or money to address more fundamental theoretical issues, such as the lapse rate theory, tropical convection, clouds, etc.
The other thing that is very obvious is that there is overconfidence in CFD generally and particularly with climate models. And there are also talking points used by climate scientists to “communicate” the virtues of models that are also nonsense. The “climate is a boundary value problem” one is very prevalent and has no real basis. It’s about the mythical attractor and we are mostly ignorant on this subject. We know of lots of counterexamples for simple systems of structural instability for example. I think I commented a few weeks ago here on a good paper on that.
I’m not attempting to comment on climate scientists who find themselves against the consensus.
I’m commenting on the outpouring of uninformed rubbish written by people who haven’t read or understood the details of climate models. Likewise the rubbish written by people who have no idea of the range of model outcomes, or of what climate science papers contain.
Well yes, the internet generates mostly rubbish from people of all persuasions including some climate scientists.
For short-term trends in global mean temperature the models are fine. They have a 30-year track record. But complicated models aren't needed. Warming has been very steady during the forcing ramp that started in the 70s. As Hansen suggests, an 11-year running average takes out almost all the variability due to ENSO, volcanoes and solar, leaving a nice straight line – just under 0.2C/decade, 1.5C warming by 2040 at the latest.
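The smoothing and trend arithmetic in the comment above is easy to reproduce; here is a minimal Python sketch using a made-up anomaly series in place of real HadCRUT/GISTEMP/BEST data (the numbers are illustrative only).

```python
import numpy as np

# Made-up stand-in for an annual global-mean temperature anomaly series;
# real data would be loaded from one of the surface datasets.
years = np.arange(1970, 2019)
anomaly = 0.018 * (years - 1970) + 0.1 * np.random.randn(years.size)

# 11-year centred running mean, as suggested above
smoothed = np.convolve(anomaly, np.ones(11) / 11.0, mode="valid")

# Linear trend of the raw series, in degC per decade
slope_per_year = np.polyfit(years, anomaly, 1)[0]
print(f"trend ~ {10 * slope_per_year:.2f} C/decade")   # roughly 0.18 for this synthetic series
```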
In terms of ECS, or warming at 550 ppm CO2, I don't think the models are going to be that accurate. My concerns though are the opposite of DPY's. He is concerned with all the small stuff, the sub-grid scale error. That doesn't concern me in the slightest. Based on the 30-year modeling track record, most of the errors in small-scale phenomena must cancel out or can be tuned away.
What concerns me are the boundary conditions: ice sheets, permafrost, vegetation etc. Holding those fixed or using rudimentary models is going to become increasingly inaccurate. For this reason, I wouldn’t sweat the details in any climate modeling beyond roughly 2C of warming. If needed, Paleo data can provide a rough estimate of ECS or ESS. In terms of the transition to equilibrium, particularly if ice sheets start going quickly – we are just going to have to play it by ear.
Chubbs, these "small-scale errors" such as turbulence modeling can have large global effects, as has been proven in CFD, for example, where the problems are much simpler and idealized. Simple airfoil calculations show up to 20% variations in global forces using a range of very credible boundary layer turbulence models. And CFD uses turbulence models that are global PDEs. Most climate sub-grid models are algebraic, i.e., local.
Aggregation of convective cells has been shown to have a quite significant effect on a model’s ECS. TCR however also seems to be higher in models than observationally constrained estimates. That seems to be because the pattern of SST warming in the models is wrong. What this illustrates is that the details of the dynamics will change the TCR of models significantly.
Your intuition about energy conservation and well posed problems is not going to help much here. All scales affect all other scales in a chaotic simulation.
There is simply a vast body of science showing that the expectation of skill for climate models is badly misplaced either in terms of global averages or in terms of distributions of quantities (which is often critical).
The reason they may do a reasonable job on global mean temperature is that they get the top of atmosphere radiation balance rather closely (being tuned for it) and probably also the ocean heat uptake flux. That just makes them vastly expensive energy balance methods. It’s the detailed distributions that we paid these billions of dollars for that are missed and these details have large influences on TCR and ECS.
BTW SOD, my final paragraph above is a possible explanation of how tuning could constitute “programming in a certain amount of warming.” Another explanation is Lindzen’s about aerosol forcing models which were in the past way too negative.
There is simply a vast body of science showing that the expectation of skill for climate models is badly misplaced either in terms of global averages or in terms of distributions of quantities (which is often critical).
Again, I don’t think this is true, at least for global averages.
So long as you:
(a) use real-world forcings in models that you want to compare to the real-world observations, and
(b) properly mask the model outputs so that you’re comparing the same geographic data from both models and observations,
then the models seem to do just fine at producing solid estimates of global temperature sensitivity.
There are plenty of things the models don’t do well yet (like regional precipitation!), but this, so far, is not one of them.
Here, see the comparison of observations and models, particularly CMIP5 against surface observations. The dotted line includes the correct forcings, while the HadCRUT temperature series is probably the *worst* one when it comes to correctly deriving the global temperature (and is the lowest). GISS and C&W both seem to line up with the forcing-adjusted temperature anomaly pretty well.
http://www.realclimate.org/index.php/climate-model-projections-compared-to-observations/
I get that you have valid complaints with the climate models. They do indeed need work, and scientists are making progress. I just don’t see any solid arguments that the models’ problems suggest that they systematically overestimate TCR or ECS.
I've seen the RealClimate page and it doesn't prove what you claim. CMIP5 is mostly a hindcast, where the effect of tuning can't be ruled out. You will note the final graphs comparing TLT measurements with climate models. Look at the rate-of-change bar charts. They are not very good.
Here’s another one where in the tropics models do poorly notwithstanding a recent large upward revision to the RSS data processing algorithm making RSS warm a lot faster than any other surface dataset. The global data is better but there is still a mismatch with models warming faster than this data.
http://www.remss.com/research/climate/#Atmospheric-Temperature
The mismatch with UAH and with radiosonde data is much worse than this in the tropics.
There are papers out there on the pattern of warming that do suggest that TCR is overestimated by models because they get the pattern of warming wrong. Lewis and Curry 2018 suggests model ECS is too large.
CMIP5 is mostly a hindcast
CMIP5’s forecasting is nearly 15 years now, and CMIP3’s is nearly 20. There’s not a meaningful difference in their ECS estimates; heck, the central estimate has been around 3.0C/doubling since, what, AR1?
There are papers out there on the pattern of warming that do suggest that TCR is overestimated by models because they get the pattern of warming wrong.
Sure, there are some. Not a lot, but some. And there are other papers out there that show how some of these first papers are wrong (e.g., why two-box models which give a low TCR are misleading).
All in all, when you look at all the evidence, it does not suggest that TCR or ECS is much below what the models predict. You can find some papers, sure, and I’m sure you think these ones are scientifically stronger, but it is simply a flat fact that they are rarer.
And meanwhile, the observed warming continues at the 0.15-0.20C/decade projected back in early IPCC reports. The observations for the average temperature anomaly do match the models, period.
Let me ask you a question: how much longer would the present 30-year warming trend have to continue, for you to reconsider your position? What evidence would falsify your beliefs? Because I can well imagine this warming continuing for another twenty years and you continuing to say “nuh-uh, it’s not happening; CO2 sensitivity is low”.
Well, matching a well known (to modelers) metric, namely global temperature anomaly is not hard. The question is what about other measures, such as TLT. I referenced these comparisons and they are not good. You haven’t addressed these failures.
What would cause you to reconsider your blind faith that because models replicate one average quantity that this is not due to tuning and is due to skill?
If models replicated cloud fraction as a function of latitude that would be more convincing. If they replicated the pattern of warming that would be more convincing. If they replicated regional climate that would be more convincing.
There is a vast literature in CFD and in the climate literature that shows that these models are quite suspect, i.e., that their ECS is strongly dependent on parameter choices that cannot be constrained with data.
It is extraordinary that someone would claim that replicating a single integral quantity is evidence of skill. It’s not a serious contention.
I think you are wrong about TCR for CMIP3 vs. CMIP5. That metric has been decreasing with time.
SOD wrote: “Climate models are the best tools we have for estimating the future climate state. What will the world be like if we double the amount of CO2 in the atmosphere?”
Each AOGCM is a hypothesis about how our climate system behaves. Each hypothesis is based on several dozen key parameters that are tuned by an ad hoc process which can't deal with the numerous local optima that lie between a starting value and an optimum value for each parameter. And that assumes that even an optimum set of parameters can produce useful output given a need for a computationally practical number of grid cells.
In the scientific method, hypotheses aren’t validated until they have been tested by attempting to falsify their predictions with observations. However, serious attempts to falsify even the most pessimistic models would limit their usefulness in the political arena.*
Until these hypotheses are validated, I suggest that observed warming and estimated forcing (EBMs) provide the best estimate for the future climate state (TCR and ECS), but am certainly willing to learn why this is the wrong approach.
*The same problem plagues economic models and other social science research. As in climate science, the goal – all too often – is to accumulate information that agrees with the researcher’s current system of beliefs – not to reject hypotheses that are wrong. I’ll be the first one to admit that performing well-controlled invalidation experiments in climate science and many social sciences is challenging (and perhaps impossible). Should policymakers treat such studies with the same respect they should reserve for traditional science?
Frank – When model predictions are prepared the same way as HADCRUT, i.e. using SST instead of TAS, they predict nearly the same Delta T for 1869-82 to 2007-16 as HADCRUT. That is not surprising since models match the entire observed temperature record. So EBM are not providing different results than models because observations are used. Instead it is the method which causes the difference. Why? Here are my favorites: SST blending, poor observational coverage in the 19th century, aerosols which are not uniform in space or time, and deep ocean mixing. Note that just aligning the observations and the model predictions in an apples-to-apples manner cuts the difference significantly.
EBM are underpredicting recent warming rates. If you want to use observations, draw a straight line through the past 40 years. Very similar to model predictions.
Chubbs: Your understanding and mine disagree about the agreement between models and observations. I tried to find some data we could all agree upon:
AR5 WG1 Figure 8.18: Time evolution of forcing for anthropogenic and natural forcing mechanisms … The total anthropogenic forcing [ERF] was 0.57 (0.29 to 0.85) W m–2 in 1950, 1.25 (0.64 to 1.86) W m–2 in 1980 and 2.29 (1.13 to 3.33) W m–2 in 2011.
HadCRUT4 1/1980 to 1/2011 from Nick Stokes: 1.765 K/century * 0.31 century = 0.55 K. F_2x = 3.44 W/m2. TCR = 1.82 K. This agrees with the central estimate from AOGCMs and is consistent with your assertions.
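Reproducing that arithmetic explicitly (a sketch only; the numbers are exactly those quoted above and TCR = F_2x * dT / dF):

```python
dT = 1.765 * 0.31        # K: 1.765 K/century over 0.31 century (1/1980 to 1/2011)
dF = 2.29 - 1.25         # W/m2: AR5 anthropogenic ERF change, 1980 to 2011
F_2x = 3.44              # W/m2 per CO2 doubling (model-consensus value used above)
TCR = F_2x * dT / dF
print(round(dT, 2), round(TCR, 2))   # ~0.55 K and ~1.8 K
```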
Then I looked at the data in the Supplementary Material for Otto (2013), with 15 co-authors from AR5. They use an average radiative forcing [ERF] of 0.75, 0.97, 1.21, and 1.95 W/m2 for the 1970s, 1980s, 1990s, and 2000s. This provides a much larger radiative forcing change of 1.20 W/m2. Their warming was 0.53 K, which they say yields a TCR of 1.4 K (but I get 1.5 and am not sure what is wrong.) If you read carefully they say:
“For radiative forcing, we use the multi-model average of the CMIP5 ensemble of the RCP4.5 total radiative forcing scenario, including the historic record from 1850-2005 and the scenario values from 2006-2010, adjusted for consistency with recent estimates of aerosol forcing, as follows. The total “Effective Radiative Forcing” (ERF, anthropogenic and natural) is estimated in CMIP-5 to be 1.9 [±0.8] W/m2 in 2010. Examining the short-lived drivers of climate change in 10 current climate models, 8 of which are part of the CMIP5 ensemble, The Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP) estimated the 1850 to 2000 aerosol ERF (effective radiative forcing) as -1.17 [-0.71 − -1.44] W/m2. This ERF is approximately 0.2 to 0.4 W/m2 stronger than the most recent satellite constrained estimates of the same forcing. We therefore add an additional +0.3 W/m2 onto the CMIP5 forcing in 2010, scaling the historical ERF time series.”
In other words, the aerosol ERF produced by AOGCMs is unrealistically negative in the eyes of the IPCC royalty who co-authored Otto (2013). That explains how AOGCMs can reproduce historical warming and at the same time have climate sensitivities that are higher than observations/EBMs. There are about a dozen papers now with TCR in the vicinity of 1.4 K and they all get this result by using less negative estimates for aerosol forcing*.
So one gets a TCR from observations of 1.8 K if one uses the aerosol forcing produced by AOGCMs, but one gets around 1.4 K if one uses the IPCC's recommended aerosol forcing.
There is a second way that AOGCMs can get the right amount of observed warming, but still have an ECS that is "too high" – they can send too much heat into the ocean and thereby reduce transient warming at the surface. The central estimates for TCR and ECS from AOGCMs are 1.8 and 3.3 K. Rearranging the formulas for these quantities gives:
TCR/ECS = 1 – dQ/dF
If dF is about 2.3 W/m2 in models, then dQ needs to be 1.1 W/m2, higher than the ocean heat uptake reported by ARGO (0.7 W/m2).
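Plugging the numbers from the two paragraphs above into that relation (a quick check, nothing more):

```python
TCR, ECS = 1.8, 3.3          # central AOGCM estimates quoted above (K)
dF = 2.3                     # W/m2, approximate current forcing in models
dQ = dF * (1 - TCR / ECS)    # implied ocean heat uptake from TCR/ECS = 1 - dQ/dF
print(round(dQ, 2))          # ~1.05 W/m2, versus ~0.7 W/m2 reported by ARGO
```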
The IPCC appears to be losing confidence in the importance of the aerosol indirect effect on clouds. One anecdotal reason I have heard is that the relative rates of warming of the SH (with few aerosols) and the NH (most of the aerosols) don't make sense if there is a strong aerosol indirect effect.
AR5: 7.4.3.1 The Physical Basis for Adjustments in Liquid Clouds
The adjustments giving rise to ERFaci are multi-faceted and are associated with both albedo and so-called ‘lifetime’ effects (Figure 7.3). However, this old nomenclature is misleading because it assumes a relationship between cloud lifetime and cloud amount or water content. Moreover, the effect of the aerosol on cloud amount may have nothing to do with cloud lifetime per se (e.g., Pincus and Baker, 1994).
The traditional view (Albrecht, 1989; Liou and Ou, 1989) has been that adjustment effects associated with aerosol–cloud–precipitation interactions will add to the initial albedo increase by increasing cloud amount. The chain of reasoning involves three steps: that droplet concentrations depend on the number of available CCN; that precipitation development is regulated by the droplet concentration; and that the development of precipitation reduces cloud amount (Stevens and Feingold, 2009). Of the three steps, the first has ample support in both observations and theory (Section 7.4.2.2). More problematic are the last two links in the chain of reasoning. Although increased droplet concentrations inhibit the initial development of precipitation (see Section 7.4.3.2.1), it is not clear that such an effect is sustained in an evolving cloud field. In the trade-cumulus regime, some modelling studies suggest the opposite, with increased aerosol concentrations actually promoting the development of deeper clouds and invigorating precipitation (Stevens and Seifert, 2008; see discussion of similar responses in deep convective clouds in Section 7.6.4). Others have shown alternating cycles of larger and smaller cloud water in both aerosol-perturbed stratocumulus (Sandu et al., 2008) and trade cumulus (Lee et al., 2012), pointing to the important role of environmental adjustment. THERE EXISTS LIMITED UNAMBIGUOUS OBSERVATIONAL EVIDENCE (EXCEPTIONS TO BE GIVEN BELOW) TO SUPPORT THE ORIGINAL HYPOTHESIZED CLOUD-AMOUNT EFFECTS, WHICH ARE OFTEN ASSUMED TO HOLD UNIVERSALLY AND HAVE DOMINATED GCM PARAMETERIZATION OF AEROSOL-CLOUD INTERACTIONS. GCMs lack the more nuanced responses suggested by recent work, which influences their ERFaci estimates.
As best I can tell, your view of the agreement between AOGCMs and observations/EBMs is obsolete.
Frank, thanks for a clear explanation. I do believe that GCMs are subconsciously tuned for global temperature anomaly over the historical period. If they were way off, that set of parameters would be discarded and a "better" one found.
The point about aerosols being a knob to counteract too high a rate of warming is of course one Lindzen made a long time ago.
I'm actually surprised they don't tune for ocean heat uptake. If that and the top-of-atmosphere radiation balance matched data, the global mean temperature would have to be close, if indeed the models conserve energy.
Frank wrote: "The total anthropogenic forcing [ERF] was 0.57 (0.29 to 0.85) W m–2 in 1950, 1.25 (0.64 to 1.86) W m–2 in 1980 and 2.29 (1.13 to 3.33) W m–2 in 2011."
The confidence intervals are quite big it seems to me and that is another area of climate science where there is a lot of uncertainty. I guess Lewis and Curry must have taken that into account though.
As I understand it ERF can only be estimated using climate models which as you point out may be quite wrong on effective aerosol forcing and things like cloud fraction. Am I wrong about that?
Frank:
Yes aerosols are a big uncertainty, but I don’t agree with your assessment.
Climate models have several advantages in dealing with aerosols vs EBM. Aerosols are not uniform spatially or temporally, confounding simple EBM.
Aerosol precursor emissions are concentrated over northern hemisphere continents. Since continents are more responsive, the aerosol forcing effect is magnified beyond what a single global average forcing number would imply. Climate models have a big advantage over EBM because they can simulate the spatial pattern of emissions. In addition, aerosols have spatial correlation with the location of 19th century temperature measurements, which only cover 20-30% of the globe. So spatial non-uniformity in aerosols leads to a negative EBM bias.
Aerosols also have a different temporal pattern than GHG, with a much heavier weighting to the early portion of the historical record, particularly the period before 1970. So the shape of the temperature record provides useful information on the strength of aerosol forcing. The fact that HADCRUT temperatures barely increased between 1870 and 1970, when GHG forcing increased by 1.1 W/m2, indicates that aerosols do indeed play an important role.
EBM, however, don't take advantage of this valuable information in the temperature record. All of the L&C EBM base periods are before 1950. So observational information on the relative importance of aerosols is missed by EBM. Climate models, on the other hand, match the entire temperature record since the 19th century, giving evidence that climate model treatment of aerosols can't be that far off.
Restricting to post-1970 data in a simple EBM would largely eliminate aerosol confounding, since the increase in GHG forcing dominates over that period. Here is a simple illustration.
In the past 40 years forcing has increased by a little over 0.4 of a CO2 doubling (Nic Lewis spreadsheet for the 2018 paper, volcanoes reduced by 80% due to short time duration). Temperature has increased by 0.18C/decade or 0.72C total (HADCRUT) or 0.76C total (BEST). Delta Q has increased by roughly 0.6 W/m2 (recent RealClimate post). Putting it all together, TCR = 1.6 to 1.8C and ECS = 2.8 to 3.0. So observations, when properly analyzed, are in good agreement with climate models.
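As a check on that arithmetic, here is a minimal sketch (F_2x = 3.7 W/m2 is the value Chubbs gives further down in the thread; everything else is as quoted above):

```python
F_2x = 3.7                    # W/m2 per CO2 doubling
dF = 0.41 * F_2x              # forcing change over the past ~40 years
dT = 0.72                     # K (HADCRUT); 0.76 K for BEST
dQ = 0.6                      # W/m2, change in heat uptake
TCR = F_2x * dT / dF
ECS = TCR / (1 - dQ / dF)
print(round(TCR, 1), round(ECS, 1))   # ~1.8 and ~2.9, consistent with the ranges quoted
```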
The recent paper of Goodwin (2018, link below) is another illustration of how simple EBM can be improved by weighting recent data more heavily. He gets a century-scale climate sensitivity of 2.9C with a more sophisticated EBM treatment.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EF000889
It looks to me as if there are quite a few papers coming up with a TCR of 1.3-1.4 for various historical periods. Chubbs has a paper that gets a higher number.
I did a little research and found the forcing data from AR3. They had an indirect aerosol effect of about -2 W/m2. If they abandon this effect as Frank says they may do, it will further lower the TCR values for the historical period.
But the point about models seems to me quite strong. Their aerosol models can be tuned to reduce ERF below IPCC values and thus match global temperatures while having a TCR (and an ECS) that is too high. Similarly with ocean heat uptake.
I already saw the Goodwin paper. It's an obvious result. Feedbacks change over time in GCMs. If you constrain energy balance models with those GCMs, the surviving runs will show the same thing. The question of course is whether we have any confidence whatsoever that GCMs can predict changes in things like cloud feedbacks over century time scales. I don't find that a credible claim.
Forcing has been increasing at roughly 1% of 2X CO2 per year. So as long as global temperature keeps increasing at 0.18 to 0.19C per decade, TCR is tracking 1.8 to 1.9. As I said above, if you don't like models, drawing a straight line through recent observations is OK with me.
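The "tracking" arithmetic here is just the observed trend divided by the forcing growth rate (a minimal sketch with the numbers quoted above):

```python
forcing_per_decade = 0.10           # fraction of a CO2 doubling per decade (~1% per year)
for trend in (0.18, 0.19):          # observed warming, C per decade
    print(round(trend / forcing_per_decade, 2))   # ~1.8 and ~1.9 C per doubling
```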
Well Chubbs, this appears to be based on incorrect ERF estimates as Frank pointed out. There is a post at Climate Etc. where Frank Bosse shows that with appropriate corrections for short term effects the TCR based on data is the same as Lewis and Curry 2018. If the indirect aerosol effect on clouds turns out to be very weak, that will further increase ERF.
Chubbs: You and I both think the past 40 years is simpler to analyze than the 130 year and 65 year periods preferred by Lewis – unless unforced variability from the AMO is important.
You say Nic Lewis’s spreadsheet says that the total forcing over the last 40 years is 0.40 doublings which would be 1.38 W/m2 (using 3.44 W/m2 as the model consensus ERF for 2XCO2). Simply eye-balling L&C18 Figure 2, the change in forcing over 40 years is closer to 2 W/m2. Lewis’s total forcing change over 130 years is 2.8 W/m2, so the value you cite would say that 50% of the forcing developed before 1970. Someone may believe that the forcing change since 1970 is only 1.38 W/m2, but it is not Nic Lewis. Nor is it the authors of Otto (2013) who used a 1.20 W/m2 change over the first three of these four decades. They added 0.30 W/m2 to the ERF forcing to correct for the fact that models are parameterized to produce a forcing that many now recognize is too negative.
When I use a forcing change of 2 W/m2 and your value for warming of 0.76 K, I get a TCR of 1.3 K. If I add 0.30 W/m2 of corrected aerosol forcing to your value of 1.38 W/m2 (as Otto did), I get a TCR of 1.55.
Using too small a forcing also affects the conversion of TCR into ECS:
TCR/ECS = 1 – (dQ/dF) = 0.7
So ECS should be about 50% greater than TCR, not almost 100% greater.
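For comparison, here is how Frank's numbers work out (a sketch only, using his F_2x = 3.44 W/m2, the dQ = 0.6 W/m2 from Chubbs' comment, and the two forcing choices discussed above: roughly 2 W/m2 eyeballed from L&C18 Figure 2, or the Otto-style corrected 1.38 + 0.30 W/m2):

```python
F_2x, dT, dQ = 3.44, 0.76, 0.6
for dF in (2.0, 1.38 + 0.30):
    TCR = F_2x * dT / dF
    ECS = TCR / (1 - dQ / dF)            # from TCR/ECS = 1 - dQ/dF
    print(round(TCR, 2), round(ECS, 2))  # ~1.31 / 1.87 and ~1.56 / 2.42
```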
Looking at L&C18 Figure 2 again, Nic has adjusted aerosol forcing in a different way than Otto (2013). Aerosol forcing has been falling since 1995 according to this figure, and there is only a very modest difference between the aerosol levels in 1975 and 2015.
The more closely I look, the more complicated the details, but the big picture of EBMs still says TCR and ECS are low.
Frank:
I had some time to go back and look at your comments in more detail and check some #. Below are some comments and added detail:
1) I used a 2xCO2 forcing value of 3.7. The best match to the CO2 forcing values in Nic's spreadsheet is 3.74.
2) To estimate the forcing change I ran a regression through the last 40 years in the spreadsheet and multiplied the per-year slope of 0.0387 by 40, giving a total of 1.55 W/m2, or 0.41 of a CO2 doubling.
3) I'm not familiar with the numbers behind the Otto paper. The aerosol values you quote are higher than the values in Nic's sheet, so there is a discrepancy.
4) Our linear fit estimates would be expected to underestimate ECS, since models approach final warming asymptotically, not in a straight line. Isaac Held had a nice blog post on this.
5) As our issues with numbers illustrate, the best way to evaluate climate models is to compare them directly to observed temperature. Here is what I get for linear trends in the past 40 years: HADCRUT – 0.18/decade, BEST – 0.19/decade, Cowtan and Way – 0.191/decade, RCP6 SST-blended – 0.189/decade. So the agreement is good.
6) Yes indirect aerosol effects are being scaled back, but the paper below finds that models are underestimating aerosol direct effects. Per my earlier comment the overall observed temperature trend indicates that aerosols have played an important role. We will need to wait for CMIP6 for an updated and hopefully improved estimate.
(hopefully this comment ends up in the correct location)
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018GL078298
Chubbs, This paper seems only indirectly related to models and is mostly about data analysis. I thought the IPCC gave effective aerosol forcings for the CMIP5 models.
The paper finds large measured direct aerosol effects, so it is not clear that aerosol effects are overestimated in models. Drawing conclusions about climate models based on aerosols is subject to the same criticism that emergent constraint studies are. There is a selection bias and potential for overfitting when focusing on any one model factor.
I do think there is information on aerosols in the temperature record. Fitting the past 40-years independently of the pre-1970 periods is one way to access this information.
Frank and DPY.
You are making this way too complicated. Nothing is gained by analyzing pre-1970 data. The observations are too limited and the aerosol effect is too uncertain, too complicated, and not uniform spatially or temporally, making a simple model unsuitable.
Temperatures are increasing at a very steady 0.18C (HADCRUT) or 0.19C (BEST, Cowtan and Way) per decade over the past 40 years. The current heat imbalance is 0.8 W/m2, increasing from roughly 0.2 W/m2 over the post-1970 forcing ramp. That is all you need to know.
DPY – We've been over the same material recently and I have no desire to repeat. I'm not going to convince you and you are not going to convince me. The most recent paper to compare models and observations, Lewandowsky et al. (2018, link below), does a more thorough job than RealClimate, including a variety of cases to better match model predictions with observations. The agreement between models and observations is quite good. If that is due to skillful tuning, so be it.
As I said before, warming has been reasonably steady for the past 40 years (see chart at bottom), and model predictions are linear in forcing, so I expect good model performance to continue. In any case, whether I draw a straight line through the observations or use climate model projections, I get about the same answer. If you think models are wrong, then you expect a slowdown in the steady rate of warming that we have experienced for 40 years. Good luck with that.
http://iopscience.iop.org/article/10.1088/1748-9326/aaf372
http://woodfortrees.org/plot/best/mean:132
Chubbs, you cite a paper by Lewandowsky and a host of authors famous for very poor statistical methods and bias. It's not convincing. Lew is a psychologist and you believe he has expertise in climate? This paper you cite contradicts at least a hundred peer reviewed papers saying the pause exists and trying to explain it. Stop citing outlier papers as if they are authoritative. Further, the paper you cite claims that these earlier papers are due to confirmation bias with essentially no evidence.
I cite a 60 year history of literature on CFD and you revert to simple averages over the last 40 years. It’s not convincing. Errors in climate models are well documented in the literature. Models are not reliable and that is even admitted by many of those working on those models.
It's a peer-reviewed paper. The results match those I obtain myself when I download model predictions. Sorry you can't accept it.
Below is a plot made by running surface data through an 11-year running average. I don't see a pause. I also note forcing has increased by over 40% of a CO2 doubling in the past 40 years. It is unlikely that TCR is going to deviate much from the observational history that has already been established.
Without quantitative evidence relevant to climate models, your qualitative points on CFD are unconvincing. On a global scale, the atmosphere is 2-dimensional, so experience gained on complex 3-dimensional flows may not be relevant.
My experience has been different than yours. I worked on developing atmospheric boundary layer parameterizations in grad school 40 years ago. They worked well enough to match field experiments and to be useful in weather models. I am sure they have better ones now.
I am not at all concerned about the use of sub-grid scale parameterization. Climate models are only predicting long-term averages. The details of fluid flow/clouds etc. are not important as long as a suitable average is obtained. We have plenty of observations, both recent and paleo, to use in calibration. It is highly unlikely at this point that models are off by enough to make a difference.
http://woodfortrees.org/plot/gistemp/mean:132/plot/best/mean:132/plot/hadcrut4gl/mean:132
Also Chubbs, look at the TLT comparisons of RSS at RealClimate. Using global mean temperature is not a good idea, as all modelers know what that metric is and there is almost certainly subconscious or conscious tuning for that metric. Other metrics are more meaningful. All modelers know this and I'm surprised you are so naive.
Look at RSS TPW; it lines up perfectly with climate models. An easier measurement with less stratosphere contamination.
Chubbs, I looked at the TPW data. It’s different for each version of UAH and RSS and is generally higher than models suggest. Their reasoning here is odd. They say that models disagree with the data and that suggests measurement or processing errors. The logical conclusion is that it suggests model errors.
Particularly in the tropics, precipitation and convection are weak points of models. Why would one a priori believe they are correct on TPW since things like cloud fraction are not correct?
nobodysknowledge above cited some of the literature. I need to look at those in more detail as well.
DPY:
I want to start out by noting that if you consider uncertainty in TLT there is broad overlap between RSS TLT and model predictions. So TLT really doesn't say anything about model performance.
Are you sure you were looking at total precipitable water (TPW)? UAH doesn't have a TPW product. Mears has updated the RSS TPW data through 2017 in a recent paper (link below). The linear TPW trend (1988-2017) has increased to 1.50% per decade (60N-60S) and 1.49% in the tropics only (20N-20S).
These are large increases in water vapor. The TPW trend implies 0.21 to 0.24C warming per decade using 7% H2O/C (Clausius-Clapeyron) or 6.2% H2O/C (climate models), respectively. Mears does a good job of assessing uncertainty in TPW in the paper below. Bottom line: TPW is less uncertain than TLT. So I don't see any problems with climate models based on satellite data.
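The implied-warming numbers follow directly from the quoted trend (a sketch with the values above):

```python
tpw_trend = 1.50                # % per decade (RSS TPW, 1988-2017, 60N-60S)
for scaling in (7.0, 6.2):      # % H2O per degC: Clausius-Clapeyron vs climate models
    print(round(tpw_trend / scaling, 2), "C/decade")   # ~0.21 and ~0.24
```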
I want to come back to your assertion that 40 years is not a long enough period of time for good model performance. Look at the TPW record: water vapor follows temperature very closely; they are tied at the hip. The critical feedbacks (water vapor, clouds and snow/ice albedo) are all fast acting. 40 years is more than sufficient for these to stabilize.
You dismiss tuning, but again, the fast feedbacks aren't going to change. Tune them once and you are good to go. Good tuning of sub-grid-scale phenomena is a feature, not a bug. CMIP5 is getting better with time, not worse, as ENSO and other natural variability evens out.
Finally it pays to look at the big picture as well as the small-scale phenomena. Models do a good job of matching the large-scale flow. Many of the important cloud feedbacks are due to simple thermodynamics and changes in the large-scale flow: the increasing height of tropical convection, the expansion of Hadley cells and the northward migration of storm tracks. These real-world trends are all well captured by climate models.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2018EA000363
You are right about the water vapor graphs. I was confused by the double vertical axes. I’ll look at your reference.
I am fine with 40 year trends. That should be long enough to mostly avoid things like ENSO bias.
You should look at what nobodysknowledge excerpted in comments above. It kind of confirms my view that biases in models are on the same order as the deltas in fluxes we are interested in. I just think that there are enough negative results out there to cause strong skepticism about skill for things like small changes in clouds.
Frank's explanation also is excellent. What has really happened is that the IPCC has found that better data shows they were overestimating aerosol forcing by a lot. When that is taken into account, it's clear that Lindzen was right in that aerosol forcing (an emergent property in models that is tunable) is a giant knob that can be used to counteract a too-high TCR, allowing good matches with global temperature at the surface. If they abandon the indirect aerosol effect, that will further lower the TCR calculated from temperature changes and effective forcing.
Models are pretty good at predicting Rossby waves which is why they work for weather forecasting. They are weak for many other important processes.
OK Chubbs, it looks like the calculated 5-95% range for the water vapor trends is about 10% of the mean value.
I still don't think you can dismiss the TLT results. There are a number of data sets including radiosondes. They generally show a much lower trend over 40 years than CMIP5. It's quite unlikely that all these data sets are biased so strongly in the same direction.
Garbage in garbage out has no basis in reality? That is incorrect and caused me to stop reading.
I have a question that is probably off-topic, but I don't know where else to ask it, so here goes:
There is an older paper by Ramanathan favorably mentioned here some time ago: The Role of Ocean-Atmosphere Interactions in the CO2 Climate Problem ( https://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281981%29038%3C0918%3ATROOAI%3E2.0.CO%3B2)
The paper postulates a temperature increase of 2.2K for a doubling of CO2, of which 1.7K are due to feedback effects, so only 0.5K as the no-feedback Planck response. But the usual value for the Planck response is supposed to be about 1K – what exactly is done differently here?
Menschmaschine: If you perform a radiative transfer calculation for instantaneously doubling CO2, there is approximately a 3.5 W/m2 decrease in LWR leaving the TOA, but only a 1 W/m2 increase in LWR reaching the surface. (The difference is going into heating the atmosphere.)
One can look at the response to doubling CO2 from a TOA energy balance perspective (+3.5 W/m2 less heat escaping) or a surface energy balance perspective (+1 W/m2 more heat arriving).
From the TOA perspective, the planet looks like a graybody with a surface temperature of 288 and an emissivity of 0.61. Planck feedback for such an object is -3.3 W/m2/K. So slightly more than a 1 K increase in surface temperature (with no feedbacks) can restore radiative balance after a doubling of CO2.
From the surface energy balance perspective, the planet is nearly a blackbody at 288 K. Planck feedback for such an object is -5.4 W/m2/K, meaning it only takes a 0.2 K rise in surface temperature to restore radiative balance in response to a 1 W/m2 increase in DLR. That is the 0.17 K dTs in Figure 4. Somewhere Ramanathan has figured that the warmer atmosphere is going to radiate an additional 2 W/m2 to the surface, meaning 0.5 K of warming is needed without feedbacks.
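Those Planck feedback values follow from 4*emissivity*sigma*T^3; a minimal sketch, using the 0.61 effective emissivity and 288 K quoted above:

```python
SIGMA = 5.67e-8                        # Stefan-Boltzmann constant, W/m2/K^4
T = 288.0                              # K, global-mean surface temperature

planck_toa = 4 * 0.61 * SIGMA * T**3   # grey-body (TOA) perspective
planck_sfc = 4 * 1.00 * SIGMA * T**3   # near-blackbody (surface) perspective

print(round(planck_toa, 1), round(planck_sfc, 1))   # ~3.3 and ~5.4 W/m2/K
print(round(1.0 / planck_sfc, 2))                   # ~0.18 K per extra W/m2 at the surface
```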
In either case, the climate feedback parameter is critical: the additional amount of heat emitted or reflected to space per degK of surface warming (W/m2/K). And the increase in upward heat transfer per degK of surface warming must be the same at all altitudes as it is at the TOA (where only radiation is involved). However, while one can calculate changes in radiation with temperature from first principles, convection is more challenging. That is what Ramanathan is trying to do in this paper (Table 3).
For example, if one assumes that the flux of latent heat from the surface rises as fast as saturation vapor pressure (7%/K), then just the increased latent heat leaving the surface is 5.6 W/m2/K, far too big if feedback is positive. I haven't checked out his rationale for all of the values in Table 3.
Hope this helps.
"If you perform a radiative transfer calculation for instantaneously doubling CO2, there is approximately a 3.5 W/m2 decrease in LWR leaving the TOA"
When one actually looks at the change in OLR over the last 33 years one finds an _increase_ in LWR leaving earth. Not a _decrease_
I have no doubt that this real world fact will leave modellers entirely unfazed. You lot will just brush it off as a “contrarian myth”, or some such.
https://ibb.co/n8Q6smS
Dewitte & Clerbaux; Remote Sensing 2018, 10, 1539; doi:10.3390/rs10101539
Mark: The problem of rising OLR is easier to understand when expressed in terms of the climate feedback parameter: the increase in emitted OLR and reflected SWR per degK of surface warming (W/m2/K). The climate feedback parameter is the sum of all feedbacks including Planck feedback. From a practical point of view, we can think of the climate feedback parameter as being -1, -2 or -3 W/m2/K (with the negative sign indicating more heat lost as the temperature rises). If you take the reciprocal of the climate feedback parameter (K/(W/m2)) and convert 3.6 W/m2 to 1 doubling, 1, 2 or 3 W/m2/K turn into ECS of 3.6, 1.8, and 1.2 K/doubling.
One also needs to clearly distinguish between radiative forcing (the total change in the rate of radiative cooling to space since pre-industrial assuming no warming) and the radiative imbalance. If you imagine an instantaneous doubling of CO2, the radiative forcing and radiative imbalance will both be about 3.6 W/m2, but as the planet warms, the radiative forcing will be unchanged while the radiative imbalance will gradually shrink to zero as a new steady state is approached. In the real world, radiative forcing has been rising gradually and the radiative imbalance is always less than the forcing because forcing has already produced some warming.
Let's suppose the climate feedback parameter is 2 W/m2/K. In that case, the roughly 1 K of warming we have experienced would be associated with an increase in emission or reflection (OLR+OSR) of 2 W/m2. Radiative forcing is currently at about 2.7 W/m2, meaning that about 70% of the forcing would have been counterbalanced by increased OLR+OSR, while 0.7 W/m2 is still going into warming the planet, mostly the ocean (ARGO). This is why energy balance models say ECS is about 1.8 K/doubling.
Expressed mathematically:
RF = CFP*dT + OHU
At any time during forced warming, the radiative forcing (RF) is going into warming the planet, mostly the ocean (OHU), and being lost by increased radiative cooling to space. When ocean heat uptake becomes negligible, warming has resulted in a new steady state where dT = RF/CFP.
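Plugging the illustrative numbers from the comment above into RF = CFP*dT + OHU (a sketch only):

```python
F_2x = 3.6               # W/m2 per doubling, as used above
CFP = 2.0                # W/m2/K, assumed climate feedback parameter
dT = 1.0                 # K, warming to date
RF = 2.7                 # W/m2, current radiative forcing
OHU = RF - CFP * dT      # implied imbalance still warming the ocean
ECS = F_2x / CFP         # steady-state warming per doubling (dT = RF/CFP with RF = F_2x)
print(round(OHU, 1), round(ECS, 1))   # ~0.7 W/m2 (cf. ARGO) and ~1.8 K/doubling
```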
So why do early and unsophisticated models like Manabe and Wetherald do well? The answer is simply that there is one overwhelming forcing, increasing concentrations of GHGs, which determines the TOA emissions.
Building on Eli’s point, one contrarian myth is the idea that ECS estimates rely on climate models. ECS estimates were 2 to 4C in the 1970s before climate models were developed. The estimate was based on physical reasoning and simple models.
In the 70s, paleoclimate information was limited. It was known that temperature varied widely, but there was little information on how or why CO2 varied. Today we know that temperature and CO2 have been closely linked. We also have the past 40 years of rapid warming, very much in line with an ECS of 2-4. So imagine a world without climate models – why would ECS estimates change vs the 1970s?
Chubbs: Do you have any references to back this claim up? AFAIK, Manabe and Wetherald made the first AOGCM and used it to determine ECS. The fundamental problem is that one can't calculate convection from first principles, as you can radiation. You need grid cells, either 1D, 2D or 3D. And there is a lot of horizontal convection, so I'm not sure who would be convinced about ECS without 3D.
If I understand correctly, M&W were also the first to fully describe radiative-convective equilibrium in the atmosphere and possibly recognize that radiative imbalances at the TOA were critical. Before then, a surface energy balance perspective dominated.
I was thinking primarily of Manabe and Wetherald's 1967 paper using a 1-D model with an estimated ECS of 2.3 for the fixed relative humidity case. Plass in the 50s was higher. Kaplan in the early 60s was lower.
Manabe and Wetherald's 1967 paper is seen as the most influential paper in climate science. And for good reasons. It was a clever job to select good values for ozone, water vapor and clouds, and how these values could change with CO2. But it is a physical model, with an idealized atmosphere, calculated from TOA radiation balance. I think the uncertainty in such a model has to be very high, so predictions based upon it could easily go wrong. And it illustrates the problem with using climate models for predictions even today.
Perhaps machine learning would be more accurate.
Is climate science a social science? That sounds outrageous, doesn’t it? However, like most social science, the climate science hypotheses are difficult to falsify (in climate science because they are for so many decades in the future and because projections are for a range of outcomes). Also, like social science, there are real world consequences and considerable dispute on the meaning of those consequences (catastrophic warming, serious warming, benign warming).
I agree with SoD that “Climate models are the best tools we have for estimating the future climate state,” and also that few on the blogs and in the media have any idea what they’re talking about. Also with John that “Well yes, the internet generates mostly rubbish from people of all persuasions including some climate scientists.”
Several above say they’re OK with 40 year trends. Same here, but we’re assuming that natural variability these past 40 years is something like zero or very little variability. Maybe. Actual climate sensitivity may be much higher or lower than the recent, mostly 1.5- 2.0 estimates, depending on the unknown of natural variability. Here we have, unfortunately, another likeness to social science, the inability to know certainly, and yes, the inability to know probabilistically.
All of this makes climate science and energy policy the wicked problems most of us admit they are. How do we convey the complexity to those who want simplistic, binary answers to both the problem and the solutions?
Doug wrote: “Several above say they’re OK with 40-year trends. Same here, but we’re assuming that natural variability over these past 40 years has been close to zero. Maybe. Actual climate sensitivity may be much higher or lower than the recent, mostly 1.5 to 2.0 estimates, depending on the unknown contribution of natural variability”
One can divide climate change into three categories: “anthropogenically forced” variability, “naturally forced” variability (volcanos and sun), and “unforced” or “internal” variability. The historical record of temperature variation during the Holocene provides evidence about the sum of naturally-forced and unforced variability, and solar and volcanic proxies provide some ability to separate naturally-forced variability from unforced variability – though it still isn’t clear to me if unforced variability played an important role in the LIA. My qualitative understanding (I haven’t seen a good quantitative record) is that the nearly 1 K of warming over the last 50 years is unusual in light of the Holocene, especially given that it followed the warming that ended the LIA.
On the other hand, some of the warming from 1920 to 1945 and the cooling that followed (both about 0.2 K) clearly appear to be unforced. The red line on the linked graph is the AR5 best estimate of forcing vs time (Figure 8.18).
The big question is : What is the real agenda of those who fund climate modellers (and climate scientists in general) ?
To learn the truth, or to advance some other agenda using the good name of science as cover ?
bfjcricklewood,
You’re probably on the wrong blog. Please read the etiquette before you post another comment.
Thank you so much for this.
For two or three months now I have been searching for information about AGW.
Most of what I find is alarmist propaganda on one side and a lot of counter-arguments on the other.
The counter-arguments are sometimes obviously ridiculous (e.g. “there is no such thing as the greenhouse effect; Wood proved it in 1909 …”).
But apart from this obvious b***s****, many of the opponents of AGW theory use hard-to-ignore arguments, while climate alarmists just demand blind faith.
This is the first site I have found that delivers credible information about the state of the science and can be read and understood by non-scientists.
Though it might help to have some kind of technical background.
Thank you so much again.
Not many will read it and even fewer will understand it, but what you do here is important.
Dodo,
Thanks for your kind comments.
Dodo: Give yourself a hand for critically looking for information about AGW that doesn’t simply agree with your preconceptions about the subject.
There is a great deal of controversy about experiments like Wood’s because radiative heat transfer experiments performed here on Earth are complicated by heat transfer by convection and conduction. There are other complicating factors: the inward transmission of SWR by glass, plexiglass, polyethylene or salt may not be equal; on average, twice as much energy arrives at the surface as DLR (which is blocked by glass or plexiglass) than as SWR; and the temperature inside the box varies with the location of the thermometer. For example see:
http://www.drroyspencer.com/2013/08/revisiting-woods-1909-greenhouse-box-experiment-part-ii-first-results/
So I wouldn’t recommend relying on Wood’s experiments to inform yourself about the GHE.
We know the Earth has a greenhouse effect because the average location on the surface of the planet emits an average of about 390 W/m2 of LWR (based on its temperature and emissivity), but measurements from space show that an average of only about 240 W/m2 is reaching space. That 150 W/m2 difference is an unambiguous measure of the greenhouse effect. We can be sure that GHGs are responsible because the reduction in LWR occurs at the wavelengths CO2 absorbs. This slowdown in radiative cooling allows the Earth to be much warmer than it would be without GHGs. (We can’t say exactly how much warmer without making some assumptions about how the Earth would behave without an atmosphere. The Moon has no atmosphere, but calculating its surface temperature is a pretty complicated process.)
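[The 390 W/m2 figure follows directly from the Stefan-Boltzmann law applied to a global-mean surface temperature of roughly 288 K. A quick check, assuming emissivity near 1:]

```python
# Quick check of the numbers above with the Stefan-Boltzmann law.
# Assumes a global-mean surface temperature of about 288 K and emissivity ~1.

SIGMA = 5.670e-8          # W m^-2 K^-4, Stefan-Boltzmann constant
T_surface = 288.0         # K, approximate global-mean surface temperature

surface_emission = SIGMA * T_surface**4   # ~390 W/m^2
olr_observed = 240.0                      # W/m^2, roughly what satellites see at TOA

print(f"Surface LWR emission: {surface_emission:.0f} W/m^2")
print(f"Greenhouse effect (surface emission - OLR): "
      f"{surface_emission - olr_observed:.0f} W/m^2")
```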
The “enhanced GHE” from rising GHGs involves a much smaller change in radiative cooling to space. Those small changes are quantified by “radiative transfer calculations”, which involve some sophisticated physics.
https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer
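[To give a flavour of what such a calculation involves, here is a minimal single-wavelength sketch of Schwarzschild’s equation integrated upward through a few atmospheric layers. The optical depths and temperatures are made up for the example, and the “Planck” function is a crude stand-in, so this shows the structure of the calculation rather than real numbers.]

```python
import numpy as np

# Minimal single-wavelength sketch of Schwarzschild's equation:
#   dI/dtau = B(T) - I
# integrated upward through a stack of layers with assumed (made-up) optical
# depths and temperatures.

def planck_proxy(T, T_ref=288.0, B_ref=1.0):
    # Crude stand-in for the Planck function at one wavelength, scaled so
    # B(288 K) = 1; enough to show the structure of the calculation.
    return B_ref * (T / T_ref) ** 4

layer_tau = np.array([0.3, 0.3, 0.2, 0.1])      # assumed optical depths, surface upward
layer_T   = np.array([280., 260., 240., 220.])  # assumed layer temperatures in K

I = planck_proxy(288.0)                         # radiance leaving the surface
for tau, T in zip(layer_tau, layer_T):
    trans = np.exp(-tau)
    # Each layer attenuates the incoming beam and adds its own emission
    # (exact solution of dI/dtau = B - I for an isothermal layer).
    I = I * trans + planck_proxy(T) * (1.0 - trans)

print(f"Relative radiance escaping to space: {I:.2f} (1.00 = surface emission)")
```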
Dodo and Frank,
As to Wood, his note was ripped later that year by Charles Greeley Abbot, the director of the Smithsonian Astrophysical Observatory and a much better physicist than Wood ( https://en.wikipedia.org/wiki/Charles_Greeley_Abbot ). See details in this post by Vaughan Pratt at Stanford: http://clim8.stanford.edu/WoodExpt/ The fact that a couple of well-regarded climate scientists missed this and resurrected the Wood note to be a PITA for them and others is not a good indicator of the quality of at least some climate scientists.
In a word, Wood was wrong. It’s easy enough to do the experiment yourself and prove it. I have, and found a substantial difference in the temperature of the bottom of the box between a glass-covered box and a polyethylene-film-covered box. A night-time cooling experiment also showed that the bottom of a film-covered box cooled faster than the bottom of a glass-covered box until condensation on the film made it IR-opaque. See also Roy Spencer’s experiment: http://www.drroyspencer.com/2010/07/first-results-from-the-box-investigating-the-effects-of-infrared-sky-radiation-on-air-temperature/
One problem is that there is quite a bit of convection in a closed box tilted to keep the bottom perpendicular to incoming solar radiation, leading to significant temperature differences at different places inside the box. Rather than heat the box with sunlight, it would be better to invert the box, heat the inside top surface with constant power, and compare temperatures between a nearly IR-opaque glass cover and a relatively IR-transparent polyethylene film (cling wrap). A well-insulated box will make the difference larger.
By the way, those who claim that greenhouses work mainly by blocking convection are missing the point. Refrigerators, ovens and houses, for that matter, also must minimize heat transfer to the outside world. Just imagine, for example, living in a kitchen with an oven with an IR transparent window in the door.
DeWitt: Thanks for the added information. Do you think any useful conclusions can be drawn from anyone’s daytime experiments with IR-transparent and -opaque covers?
The nighttime experiments, where the inside of a box cools when covered with an IR-transparent material, seem relatively straightforward to me. But it is easier to point an IR thermometer at the sky at night than to wait for the temperature of such a box to drop to roughly the same temperature, assuming everything works properly and conduction of heat is negligible. (I’d be a better scientist if I weren’t overconfident when predicting how experiments will turn out.)
Long ago, another one of your well-regarded climate scientists blogging at RC tried to convince me that Gore’s failure to inform AIT viewers that correlation (of CO2 and warming in Antarctic ice cores) does not imply causation was an unimportant oversight. He then proceeded to criticize some guy I never heard of named McIntyre. That made me want to find out if McIntyre had something valuable to say.
Finally, I bought Archer & Pierrehumbert’s book “The Warming Papers” to try to understand how the climate sensitivity for doubled CO2 was “determined” in 1896 by Arrhenius. Langley had collected data on the power delivered to the Earth’s surface by various wavelengths of moonlight during full moons and Arrhenius analyzed that data. Unfortunately, the data missed the 15 um CO2 absorption! And Arrhenius’ analysis appears to have ignored the emission of LWR by CO2 in the air – though he apparently was aware of it. The discussion states that Langley, at least, believed the Moon and Earth had similar surface temperatures. So there were a few minor problems with this legendary analysis that alerted humanity to the potential dangers of burning fossil fuels.
To be fair, Arrhenius apparently was the first and only one before Manabe to properly model the greenhouse effect using a single slab of atmosphere at one temperature. (Everyone else was fixated on surface energy balance rather than balance at the TOA.) And Arrhenius found evidence for water vapor feedback in the data.
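[For readers who want to see what that single-slab picture implies numerically, here is a textbook-style sketch, not Arrhenius’s actual calculation: one fully IR-absorbing layer in balance with absorbed sunlight puts the surface at 2^(1/4) times the effective emission temperature. The solar constant and albedo below are assumed round numbers.]

```python
# Worked version of the single-slab greenhouse picture (not Arrhenius's actual
# calculation): one atmospheric layer at a single temperature, fully absorbing
# in the infrared, in balance with absorbed sunlight at the TOA.

SIGMA = 5.670e-8    # W m^-2 K^-4, Stefan-Boltzmann constant
S0 = 1361.0         # W/m^2, solar constant (assumed)
ALBEDO = 0.3        # planetary albedo (assumed)

absorbed = S0 * (1 - ALBEDO) / 4      # ~238 W/m^2 averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25    # ~255 K, effective emission temperature

# With one fully absorbing slab, TOA balance puts the slab at T_eff and the
# surface energy balance gives T_surface = 2**0.25 * T_eff.
T_surface = 2 ** 0.25 * T_eff
print(f"Effective temperature: {T_eff:.0f} K, single-slab surface: {T_surface:.0f} K")
```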