In Frontiers of Climate Modeling, Jeffrey Kiehl says:
The study of the Earth’s climate system is motivated by the desire to understand the processes that determine the state of the climate and the possible ways in which this state may have changed in the past or may change in the future.
Earth’s climate system is composed of a number of components (e.g., atmosphere, hydrosphere, cryosphere and biosphere). These components are themselves non-linear systems, with various processes that are spatially non-local.
Each component has a characteristic time scale associated with it. The entire Earth system is composed of the coupled interaction of these non-local, non-linear components.
Given this level of complexity, it is no wonder that the system displays a rich spectrum of climate variability on time scales ranging from the diurnal to millions of years. This level of complexity also implies the system is chaotic (Lorenz, 1996; Hansen et al., 1997), which means the representation of the Earth system is not deterministic.
However, this does not imply that the system is not predictable. If it were not predictable at some level, climate modeling would not be possible. Why is it predictable? First, the climate system is forced externally through solar radiation from the Sun. This forcing is quasi-regular on a wide range of time scales. The seasonal cycle is the largest forcing Earth experiences, and is very regular. Second, certain modes of variability, e.g., the El Niño–Southern Oscillation (ENSO), North Atlantic Oscillation, etc., are quasi-periodic unforced internal modes of variability. Because they are quasi-periodic, they are predictable to some degree of accuracy.
The representation of the Earth system requires a statistical approach, rather than a deterministic one.
Modeling the climate system is not concerned with predicting the exact time and location of a specific small-scale event. Rather, modeling the climate system is concerned with understanding and predicting the statistical behavior of the system; in simplest terms, the mean and variance of the climate system.
He goes on to comment on climate history – warm periods such as the Cretaceous & Eocene, and very cold states such as the ice ages (e.g., 18,000 years ago), as well as climate fluctuations on very fast time scales.
The complexity of the mathematical relations and their solutions requires the use of large supercomputers. The chaotic nature of the climate system implies that ensembles are required to best understand the properties of the system. This requires numerous simulations of the state of the climate. The length of the climate simulations depends on the problem of interest.
And later comments:
There is some degree of skepticism concerning the predictive capabilities of climate models. These concerns center on the ability to represent all of the diverse processes of nature realistically. Since many of these processes (e.g., clouds, sea ice, water vapor) strongly affect the sensitivity of climate models, there is concern that model response to increased greenhouse-gas concentrations may be in error.
For this reason alone, it is imperative that climate models be compared to a diverse set of observations in terms of the time mean, the spatio-temporal variability and the response to external forcing. To the extent that models can reproduce observed features for all of these features, belief in the model’s ability to predict future climate change is better justified.
Interesting stuff.
Jeffrey Kiehl has 110 peer-reviewed papers to his name, including papers co-authored with the great Ramanathan and Petr Chylek, to name just a couple.
Probably the biggest question for myself and the readers of this blog is the measure of predictability of the climate.
I’m a beginner with non-linear dynamics but have been playing around with some basics. I would have preferred to know a lot more before writing this article, but I thought many people would find Kiehl’s comments interesting.
In various blogs I have read that climate is predictable because summer will be warmer than winter and the equator warmer than the poles. This is clearly true. However, there is a big gap between knowing that and knowing the state of the climate 50 years from now.
Or, to put it another way – if it is true that summer will be warmer than winter, and it is true that climate models forecast that summer will be warmer than winter, does it follow that climate models are reliable about the mean climate state 50 years from now? Of course, it doesn’t – and I don’t think many people would make this claim in such simplistic terms. How about – if it is true that a climate model can reproduce the mean annual climatology over the next few years (whatever precisely that entails) does it follow that climate models are reliable about the mean climate state 50 years from now?
I haven’t found many papers that really address this subject (which doesn’t mean there aren’t any). From my very limited understanding of chaotic systems I believe that the question is not easily resolvable. With a precise knowledge of the equations governing the system, and a detailed study of the behavior of the system described by these equations, it is possible to determine the boundary conditions which lead to various types of results. And without a precise knowledge it appears impossible. Is this correct?
However, with a little knowledge of the stochastic behavior of non-linear systems, I did find Jeffrey Kiehl’s comments very illuminating as to why ensembles of climate models are used.
Climatology is more about statistics than one day in one place. Which helps explain why, just as an example, the measure of a climate model is not how the average temperature in Moscow in January 2012 compares with what a climate model “predicts” for the average temperature in Moscow in January 2012. You can easily create systems that have unpredictable time-varying behavior, yet very predictable statistical behavior. (The predictable statistical behavior can be seen in frequency-based plots, for example.)
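As a toy illustration – a minimal sketch in Python, using the logistic map rather than anything climate-related (the map and all parameter choices are my assumptions):

```python
import numpy as np

# Logistic map in its chaotic regime (r = 4). Two runs started a tiny
# distance apart lose all pointwise predictability, yet their long-run
# frequency distributions (the "climate" of the map) agree closely.
r = 4.0

def trajectory(x0, n):
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)
        xs[i] = x0
    return xs

a = trajectory(0.2, 100_000)
b = trajectory(0.2 + 1e-12, 100_000)

# "Weather": the two runs have fully decorrelated after ~100 steps.
print("difference at step 100:", abs(a[100] - b[100]))

# "Climate": frequency-based statistics are nearly identical.
hist_a, _ = np.histogram(a, bins=20, range=(0, 1), density=True)
hist_b, _ = np.histogram(b, bins=20, range=(0, 1), density=True)
print("max histogram difference:", np.abs(hist_a - hist_b).max())
```

The analogy to the real climate is loose, but it shows the distinction between predicting a trajectory and predicting a distribution.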
So the fact that climate is a non-linear system does not mean as a necessary consequence that it is statistically unpredictable.
But it might in practical terms – that is, in terms of the certainty we would like to ascribe to future climatology.
I would be interested to know how the subject could be resolved.
Reference
Frontiers of Climate Modeling, edited by J.T. Kiehl & V. Ramanathan, Cambridge University Press (2006)
Regarding climate chaos: you may want to look at the following article, which was authored – amongst others – by Pielke Senior.
Click to access r-260.pdf
You may also want to read this page on simple but chaotic models of climate
http://itia.ntua.gr/en/docinfo/923/
I think the influence of chaos on climate prediction is in some sense overrated and in some sense underrated. If the changes in the external forcing are large enough, chaos will not be that relevant to predicting changes in the mean state. Just think of heating a pot of water: convection and chaotic dynamics will set in, but the water temperature will rise, and this rise in temperature can be predicted to some accuracy from the energy input and some environmental conditions alone (e.g., the temperature in the room). However, if we knew nothing about phase transitions (a collective phenomenon), we would be surprised that beyond some point our prediction would break down. In the climate system there are a lot of phase transitions going on, and this is one of the factors that make climate prediction really difficult.
However, in the broad sense, if you change the energy balance by increasing the solar input or increasing greenhouse gases, temperatures tend to change. Chaos is a secondary agent. The annual cycle is a very obvious proof of this.
This is a sober assessment which seems to cast doubt on whether any of the predictions made by current climate models can be relied on.
The public see climate science making predictions, for instance Mann’s hockey stick projections.
When the average of these predictions does not seem to happen, the public is bewildered.
Politicians have to rely on the best scientific advice they get and act on it.
Climate scientists should not “spice up” their reports with undue certainty and inflate their conclusions to secure more prestige and grant funding.
In Scotland after the coldest winter for 40 years, pensioners were staying in bed rather than getting up during the day because of the cost of fuel.
It has recently been announced that gas will increase by a further 19% and electricity by 12%.
Scotland is built on coal yet all its coal mines are now closed because of fears of CO2 induced global warming.
There is a heavy moral responsibility on climate science to justify the sacrifices it is expecting ordinary people to cope with.
All data that supports the conclusions that we have a CO2 induced crisis should be readily available for interested parties to test its veracity.
Continued obstruction, however, seems to be the norm despite promises made at the Climategate inquiries.
Small wonder that the number of sceptics is growing.
The various flavors of hockey stick are not predictions. Mann does not do prediction, nor does he do models. What he does is to try and figure out climate variables from proxy data.
Scotland is not built on coal, although there is coal there. The mines are closed because Margaret Thatcher decided to break the coal miners’ union in the 1980s, and pulling the coal out of deep mines was not very economical. (Google Arthur Scargill.) Mountain-top removal is less expensive.
Now some, not Eli to be sure, might jump up and down on you for this, but the best thing is for you to go do a bunch of reading and be more cautious when you wander into a den of obsessives.
Eli should go and do a bunch of reading on irony. And for SoD I would suggest Koutsoyiannis who has already been recommended by Jeremy above.
I’ve mentioned this before on this site (to the dismay of SoD), but how can the models get away with predicting a sensitivity to GHG ‘forcing’ that is 3 times more powerful than the system’s measured response to solar forcing? This seems to have everybody stumped – not just here but all over the blogosphere.
The fact that the models use ‘ensemble’ prediction is meaningless if they are all more or less doing the same things wrong. Most of the enhanced warming they’re getting is from positive cloud feedback, which is almost certainly incorrect. They even admit that if the cloud feedback is neutral, their average model sensitivity comes down to 1.9 C. What they don’t say is that even if it were only moderately negative, the sensitivity would probably come down to only about 1 C, and less than 1 C if strongly negative.
Kudos on an excellent article.
I referenced it on the comment thread to “Climate models are creating a false sense of security, or at least insufficient terror” by Dave Roberts, Grist, June 30, 2011
http://www.grist.org/climate-change/2011-06-30-climate-models-are-creating-a-false-sense-of-security/
Climate —-ers [moderator’s note, please read the Etiquette] sure do love to piously pontificate about climate models. The vast majority don’t have a clue about what a climate model is and how it is developed and maintained.
@RW:
The issue you raised is addressed head-on in “Roy Spencer on Climate Sensitivity – Again” by Chris Colose, Skeptical Science, July 1, 2011
http://www.skepticalscience.com/spencer_ocean.html
The fundamental issue I’m referring to is not addressed there.
Bryan:
Your post is nothing more than a Gish Gallop of climate —-er gibberish.
Each of your unsubstantiated assertions and accusations is thoroughly debunked by the authors at SkepticalScience.com.
I would take what’s presented at SkepticalScience.com with a grain of salt. Most of the authors there overly rely on the peer review process rather than first hand understanding and knowledge of the science itself. The comments on the site are also heavily filtered and the moderation is heavily biased.
I think the point that is being intentionally overlooked right now in climatology is that ensemble average trends are statistically 2-4 times observed trends. This result has been shown a number of different ways – see the treesfortheforest blog for Chad Herman’s work on the matter. It is also in publication through panel regression methods by Ross McKitrick.
We had quite a lively discussion on the topic when the paper came out, but it has been a common theme on a few sites – the Blackboard, CA, and tAV – for some time. Some climate scientists have even taken the position that ensembles should not be used, as a resolution to the conflict between model and data. Ben Santer’s work demonstrated the opposite, that models do match observation, but using the same methods with up-to-date data strongly reverses the conclusion. That work was prevented from publication until Ross found another way to get it published.
I am of the opinion that the climate can be modeled effectively but that there is substantial evidence of generalized bias in models toward warming. The situation where models agree with each other more than with observation would likely be caused by similar assumptions between the models but I don’t have enough experience to make a blanket statement like that.
Jeff
Would be appreciated if you could give a link to the relevant TFTF post.
‘I am of the opinion that the climate can be modeled effectively but that there is substantial evidence of generalized bias in models toward warming.’
GCMs are not simply a tool for printing out upward-sloped graphs. They are physical models of our current understanding of ocean-atmosphere dynamics and as such can be set up with any scenario imaginable. You could set up a scenario with decreasing solar activity or reduced ‘greenhouse’ gases (sorry SoD!) and you would see a cooling trend.
You can see in the IPCC report Figure SPM.4 where models have been setup in a scenario without any anthropogenic forcings and a slight negative trend is evident over the second half of the twentieth century. Models appear quite adequate at producing both warming and cooling trends depending on the inputs.
I guess what I’m saying is I’m not sure what you mean by a ‘bias in models toward warming.’ Perhaps you mean a bias towards a climate system more sensitive to perturbation? That is plausible but you should appreciate that perturbations can be negative as well as positive.
IMHO – the community has to be weaned off normal distributions and the statistical tests which assume them.
Zipf, Pareto et al. have more appropriate tools available.
SoD,
there is actually a very interesting paper by Reto Knutti, “Should we believe model predictions of future climate change?”, that addresses model weaknesses, model strengths and problems in the interpretation of model results.
I would very much appreciate an expanded discussion of the article, maybe even a guest post by Dr. Knutti updating us on the development to date.
Drew Shindell gave a talk about 6-7 years ago titled “Who should we trust about data from the 19th century, models or measurements?” The answer was not simple.
RW: “They even admit that if the cloud feedback is neutral, their average model sensitivity comes down to 1.9 C…”
Yup, that would be the error bars, wouldn’t it? Everyone knows water vapour feedback is one of the major wildcards. What you seem to be forgetting is that – as Brad DeLong has pointed out – uncertainty has two tails. It’s not an “admission” to say the bottom range of a distribution of outcomes is low. It *is* a scientific error to concentrate on those low values and ignore the full range of uncertainty.
So a question for you: what probability is the 1.9 C outcome you picked out? What outcome at the other end of the tail is the same probability?
SOD says
“Or, to put it another way – if it is true that summer will be warmer than winter, and it is true that climate models forecast that summer will be warmer than winter, does it follow that climate models are reliable about the mean climate state 50 years from now? Of course, it doesn’t – and I don’t think many people would make this claim in such simplistic terms.”
Is this true? I’m sure you are following Isaac Held’s blog. He seems to think seasonal changes are a good indicator for long-term change.
http://www.gfdl.noaa.gov/blog/isaac-held/2011/04/27/9-summer-is-warmer-than-winter/
http://www.gfdl.noaa.gov/blog/isaac-held/2011/06/13/12-using-model-ensembles-to-reduce-uncertainty/
And he seems to be a very thoughtful, balanced, respected, experienced atmospheric scientist. I’m sure there are numerous caveats in those articles but the general message seems to be don’t worry so much about predictability.
Averages hide some nasty extremes. If you have one foot in the freezer and one in the oven, on average your feet are comfortable.
How many drought years, followed by a flood year, do the models predict? Sure, each run gives a different start date for the drought and a different year for the flood. But aggregating the runs, the rainfall averages do not sound too bad.
I don’t think you are understanding what statistics of climate model runs are about.
In any case, it is possible to produce more from statistics of results than a mean value.
danolner, you say:
“Yup, that would be the error bars, wouldn’t it? Everyone knows water vapour feedback is one of the major wildcards. What you seem to be forgetting is that – as Brad DeLong has pointed out – uncertainty has two tails. It’s not an “admission” to say the bottom range of a distribution of outcomes is low. It *is* a scientific error to concentrate on those low values and ignore the full range of uncertainty.
So a question for you: what probability is the 1.9 C outcome you picked out? What outcome at the other end of the tail is the same probability?”
I do not agree that the water vapor feedback is a ‘major wildcard’. The water vapor feedback is directly tied to the cloud feedback, as the clouds are controlling the water vapor concentration in a dynamic manner by reflecting sunlight and precipitating out the water vapor from the atmosphere. The notion that the cloud feedback is positive in conjunction with positive water vapor feedback does not make any sense physically or mechanistically. Also, if water vapor is the primary amplifier of warming, what then is the controller? What’s controlling the earth’s energy balance if not clouds, through their ability to modulate incoming sunlight and precipitate out the water vapor from the atmosphere? Is it just a coincidence that energy from the Sun drives evaporation of water, the water vapor condenses to form clouds, and as the clouds form they reflect the sun’s energy? Overall, the system is really tightly constrained despite a large amount of local, seasonal, hemispheric, and even global variability. This is very consistent with strong net negative feedback, as the global temperature anomaly barely moves by more than a few tenths of a degree a year, and even when it does, it tends to revert to its pre-equilibrium state quickly. Hardly consistent with net positive feedback, let alone the net positive feedback of 300% required for a 3 C rise.
I also do not agree that just because a model predicts something, it’s possible. I think many of the sensitivity outputs from the models are virtually impossible, if not literally impossible. I think anything above 3 C is literally impossible, anything above 2 C is virtually impossible, and anything significantly above 1 C is extremely unlikely.
RW:
“Water vapour is the most dominant greenhouse gas. Water vapour is also the dominant positive feedback in our climate system and amplifies any warming caused by changes in atmospheric CO2. This positive feedback is why climate is so sensitive to CO2 warming.”
Source: “Water vapor is the most powerful greenhouse gas,” Intermediate rebuttal by John Cook, Skeptical Science, June 26, 2010
To access the detailed rebuttal and a video, go to:
http://www.skepticalscience.com/water-vapor-greenhouse-gas-intermediate.htm
I’m well aware that water vapor is the most dominant greenhouse gas, but its concentration is not homogeneous like CO2’s. The climate system is not a static steady-state system – water vapor and clouds in particular are very dynamic and constantly changing spatially and in time – all the time, due to changing conditions (temperature, incoming solar energy, rates of evaporation, changes in atmospheric circulation patterns, etc.), yet overall the system is remarkably stable and very tightly constrained.
My main point is that the water vapor feedback is directly connected to the cloud feedback, as both are what drive the water cycle of the planet and ultimately the whole energy balance of the climate system. Yes, water vapor feedback is generally assumed to be positive – that is, warmer air produces higher water vapor concentrations, which absorb more outgoing LW that further warms the surface, and so on and so forth. However, if the net cloud feedback were positive in conjunction, what would prevent the temperature from rising higher and higher from even just a few days or few weeks of abnormally warm weather? Yet this never happens – abnormally warm weather periods end and normal or colder weather inevitably commences. The reason is simple. The forces that drive evaporation/water vapor are not as strong as the combined forces of clouds and precipitation. In effect, the water vapor feedback is positive (i.e. it amplifies warming) until enough clouds start to form, which as they form reflect more and more sunlight, cooling the surface, reducing evaporation and ultimately precipitating out the water vapor from the atmosphere. Ultimately, clouds and precipitation win out – completely consistent with net negative feedback.
One comment on Kiehl’s quote.
“To the extent that models can reproduce observed features for all of these features, belief in the model’s ability to predict future climate change is better justified.”
Don’t take him at his word here. For example, it is unknown how the models have performed over the last 8 years when compared to ocean heat uptake, because it hasn’t been output from the models. Or at least Gavin Schmidt doesn’t know about it, so that’s a fair indication it hasn’t actually happened… Ocean heat content is arguably one of the most important indicators of CO2-induced global warming.
I, for one, truly can’t fathom this revelation made on RC earlier this year.
This last decade seems to have kicked the models’ credibility, and if they do eventually “get it right” then it’s going to be because of another step increase which, AFAIK, AGW theory doesn’t explain and the models don’t model… and therefore we don’t really understand the implications of what’s happening.
There’s no point in handwaving “natural variability”. Either science understands this stuff or it doesn’t.
eduardo:
I think I am coming to understand this, probably somewhat slowly, through some simple (mathematical) experiments. Hopefully I will get around to demonstrating these points in a later article.
The key point seems to be how to assess this quantitatively.
HR:
I don’t think he would stretch to saying the conclusion follows from the premise.
But thanks for reminding me about his blog – I have read some of his articles, including the 1st link when it was first published – but have now re-read this one including comments and see that Roger Pielke Sr responds and puts up a paper in response.
I will reread and think.
Professor Held certainly knows his stuff, has written some excellent papers and should be listened to much more than me.
“Analogously, when we talk about predicting the trend in the climate over the next 100 years due to a projected increase in carbon dioxide, we are talking about a forced response, fully analogous to predicting the extent to which summer is different from winter on average.”
From Helds post http://www.gfdl.noaa.gov/blog/isaac-held/2011/04/27/9-summer-is-warmer-than-winter/
oarobin:
Thanks for the paper.
And it seems that Knutti has also written/co-authored some other excellent papers on this subject. For example, The use of the multi-model ensemble in probabilistic climate projections, Tebaldi & Knutti, Phil Trans A (2007).
There are many interesting papers out there. For example, from a name that people might recognize:
Can chaos and intransitivity lead to interannual variability?, EN Lorenz, Tellus (1990)
The seasonal cycle would provide an interesting way to explore how well climate models perform. The seasonal change in solar forcing is massive compared with anthropogenic GHG forcing, but changes with latitude. Useful climate models must handle SWR and LWR changes correctly, so the fact that seasonal forcing is mostly in the SWR and GHG forcing is mostly LWR shouldn’t be important. Water vapor and cloud feedbacks should both operate on the seasonal time scale. If climate models don’t reproduce these feedbacks correctly, the amplitude of the seasonal cycle (which is somewhat analogous to climate sensitivity) should be way off.
Excellent point. Exactly such an analysis has been done and the models fail miserably in this regard. As the seasonal solar flux increases, the surface response relative to the incident energy decreases, and vice versa, indicating very strong net negative feedback. The observed behavior and response of the system is exactly the opposite of the models. Check out this analysis here, which I believe is about the closest thing to absolute proof that the sensitivity and the system behavior the models predict can’t happen:
http://www.palisad.com/co2/eb/eb.html
BTW, aside from the somewhat ‘controversial’ halving of the 3.7 W/m^2 from 2xCO2, if anyone can find any fault with this analysis I’d be most curious to hear it.
The linked post does not show the information I was seeking: how well climate models reproduce aspects of the seasonal cycle, i.e. observations vs model predictions. For example, how well do models predict the observed amplitude of the seasonal cycle at various locations on the surface of the earth? How well do models predict the phase shift between maximum insolation and maximum temperature, which I assume is a measure of the thermal inertia (heat capacity) of the atmosphere and the portion of the ocean reachable by thermal diffusion in six months?
I’d prefer to get my information on this subject from peer-reviewed publications. If George White hasn’t recognized the error he made when dividing radiative forcing in half and done something about it (correction or explanation), it isn’t clear why I should place any credibility in the rest of his analysis. Does albedo really rise and fall with solar power as suggested by the second figure, making cloud feedback seem to be strongly negative? That might be interesting, but the second figure indicates that the earth’s average albedo is 27.2%, far below the usual value of 30%. For those skeptical of the IPCC consensus, the peer-reviewed scientific literature has some limitations, but it is the sensible place to start.
Frank,
This might be what you’re looking for: Constraining Climate Sensitivity from the Seasonal Cycle in Surface Temperature – Knutti & Meehl 2006 (http://journals.ametsoc.org/doi/full/10.1175/JCLI3865.1)
They find that the best fit to the seasonal cycle is in model runs exhibiting climate sensitivity of 3-3.5 deg C for 2xCO2.
solar radiation pressure -> air pressure -> wind pressure -> windmill -> electron pressure -> residual radiation pressure -> cosmic background radiation pressure
In modelling a system such as the Earth you also have to consider that the planet itself could be ‘wrong’, for want of a better word.
As an analogy, think about a hypothetical perfect model reproducing the physics of rolling 6-sided dice – it represents the physical system with 100% accuracy, though this is unknown to those who built it. You then set up a problem where you have to use the model to predict how many times ‘3’ will appear when rolling 60 dice at one time. Multiple runs are performed and a mean produced which essentially shows that ‘3’ should appear on 10 dice.
You then test the model by rolling 60 real dice and the chances are you won’t get ten 3s. Despite the model actually being perfect it may appear invalidated by the results of a single observational run – which is, coming back to the topic, all we have with the Earth.
This is why it is argued that the spread of model results is a more meaningful comparison to observations than an ensemble mean.
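A quick simulation makes the dice version concrete (a minimal sketch; the ensemble size and random seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Perfect model" ensemble: 10,000 simulated rolls of 60 dice,
# counting how many dice show a 3 in each roll.
ensemble = (rng.integers(1, 7, size=(10_000, 60)) == 3).sum(axis=1)
print("ensemble mean:", ensemble.mean())   # ~10
print("ensemble std: ", ensemble.std())    # ~2.9

# Single "observational" run: one real roll of 60 dice.
obs = (rng.integers(1, 7, size=60) == 3).sum()
print("observation:  ", obs)

# The observation exactly matches the ensemble mean of 10 only about
# 14% of the time, but it lands within two ensemble standard
# deviations of the mean roughly 95% of the time.
```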
Frank,
To the best of my knowledge, the models don’t predict any aspects of the seasonal cycle.
As far as the halving of the 3.7 W/m^2, I suggest you do some research on that before you assume it isn’t true. Even some of the AGW believers at Skeptical Science (and even one of the authors) believe it is halved – they just think the 3.7 W/m^2 represents the downward half and 7.4 W/m^2 is the incremental absorption, or the reduction in ‘window’ transmittance.
BTW, the most important graphs are the last two, which show that as the energy in each hemisphere from the Sun increases, the gain decreases, and vice versa. This is the exact opposite of the behavior predicted by the climate models. All he’s really doing there is plotting measured data to see how the system responds to changes in radiative forcing – changes far greater than what would come from the doubling of CO2.
RW
In your note to Frank, you state,
“Even some of the AGW believers at Skeptical Science (and even one of the authors) believe it is halved – they just think the 3.7 W/m^2 represents the downward half and 7.4 W/m^2 is the incremental absorption, or the reduction in ‘window’ transmittance.”
Which Skeptical Science author are you referring to, and from which Skeptical Science article do you derive your assertion?
Hugo Franzen:
Look specifically at pages 19 and 20 of his paper, linked from this page:
Click to access GWPPT6.pdf
The diagram of the atmosphere depicts the exact same thing as G. White’s paper does – that half of what is absorbed by the atmosphere goes to space and half goes to the surface.
See also from the thread, where one of the posters claims the incremental absorption from 2xCO2 is 7.4 W/m^2 and the referenced 3.7 W/m^2 of ‘forcing’ is the downward half:
See also this specific post by Hugo Franzen (his response #3):
He basically says the same things I’ve been saying regarding how half of what’s absorbed by GHGs goes up out to space and half goes back to the surface (see also page 44 of his paper):
I’m quoting HFranzen:
“3. Clarification: The absorption is a process in which carbon dioxide is excited from some rotational level in the ground vibrational state to some rotational level in the first excited vibrational state. The short explanation of the fact that half is returned to the earth is: absorbed radiation is then reemitted through any of a number of processes and this emission is in all directions, i.e. half up and half down. Thus half the reemitted absorbed radiation returns to the earth as GHG flux. A slightly longer explanation of the reemission follows.
Once this excitation has occurred the molecule either relaxes to the ground state or, more frequently, gives up the energy to the translational motion of another molecule (e.g. nitrogen) through collision.
In the more probable collisional deactivation case this energy then becomes part of the thermal bath in which the molecules reside; in other words, the atmosphere is locally heated above its steady state temperature. This excess bath energy is then lost through any of a myriad of collisional processes, say with the ubiquitous water molecules. This excitation is then lost through emission.
In either case – direct emission, or collisional deactivation followed by re-emission from some other infrared-active molecule – the re-emission is isotropic, i.e. nondirectional, and thus occurs with equal probability up or down.”
Paul S refers to IPCC report Figure SPM.4 found here http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-spm-4.html
Note, these models, minus any anthropogenic warming, show a slight negative trend. I just did a quick read of the summary for policy makers where SPM.4 is found and find no justification for modeling such a negative trend. I’ll review Working Group 1 to see if I can find any justification there. The temperature has been warming (unevenly) for three hundred years since the beginning of the instrumental record, so it appears farfetched to come up with models which project cooling (sans AGW). I tend to think it is just a ruse to make us believe that any and all recent warming is AGW and not, in part, a continuation of a 300 year trend. Wouldn’t it be astonishing (and convenient for the IPCC) if the 300 year warming trend stopped at exactly the same time that AGW warming kicked in!? Call me skeptical!
“I just did a quick read of the summary for policy makers where SPM.4 is found and find no justification for modeling such a negative trend. I’ll review Working Group 1 to see if I can find any justification there.”
This is how modelling studies work: the model(s) are fed with scenarios – input data representing what happened in the ‘world’. The scenario can be based on what actually happened in the past, in order to test the model against observations. It can then be fed with potential future scenarios to get projections.
The scenario can also be an alternative reality. This is what you can see for the blue lines in Figure SPM.4. They input the observed natural data (solar, volcanic etc.) for the 20th Century but leave out human-caused changes in forcing factors (greenhouse gases, aerosols etc.). The model then simply produces a result based on the scenario inputs. ‘Justification’ isn’t a word that makes sense in this context.
—————————————————-
“300 year warming trend”
You cite the ‘original IPCC 1000 year temperature chart’ as evidence but if you do find it you’ll see it is actually flat from about 1700 until the late 19th Century.
Very off topic –
I thought it would be interesting to view some reanalysis data. So I visited ECMWF and downloaded a sample of data – latent, sensible heat flux + 10m winds and surface temp for one month.
The data is available as “grib” or “nc” files, which seem to be some kind of binary, as a text editor (notepad++) reads gibberish.
There is a link on their site about software but it seems like you have to become a code developer to use it.
Does anyone know an easy way to take the output and turn it into something like comma separated text?
Anyone played around with reanalysis data before?
Thanks.
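For anyone trying this in Python, a minimal sketch using the xarray library (the file and variable names below are placeholders, not the actual contents of the ECMWF sample):

```python
import xarray as xr

# NetCDF (.nc) files open directly; for GRIB files xarray can use the
# cfgrib engine: xr.open_dataset("sample.grib", engine="cfgrib")
ds = xr.open_dataset("sample.nc")   # hypothetical filename

# Print a summary to discover variable names, dimensions and units.
print(ds)

# Pick one variable ("t2m" is a guess -- 2 m temperature in ECMWF
# products) and flatten it to a (time, lat, lon, value) table.
df = ds["t2m"].to_dataframe().reset_index()
df.to_csv("t2m.csv", index=False)
```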
Just in case you haven’t found anything yet, Gavin at Realclimate just mentioned a NASA GISS-developed NetCDF viewer called Panoply, downloadable at http://www.giss.nasa.gov/tools/panoply/download_win.html
I haven’t used it at all myself so no clues what to expect from it.
Doug Allen:
What is the basis for your assertion that there has been a 300-year warming trend? Citations please.
badgersouth,
Lots of information including-
Click to access globlwrmw99.pdf
and
http://myweb.wwu.edu/dbunny/research/global/glacialfluc.pdf
Remember the original IPCC 1000 year temperature chart before it was replaced by the hockey stick, since removed. It’s got to be archived somewhere, but I couldn’t find it quickly.
Science of Doom – I’m asking for analyses of the IPCC models that project cooling (sans AGW) – above. I don’t have the mathematical/statistical background to conduct such analyses and appreciate your expertise. Jeffrey Kiehl, above, throws out several cautions, including this one: “it is imperative that climate models be compared to a diverse set of observations…”
How can a model which projects “what if” scenarios – minus AGW forcings – ever be compared to observations? I should think that there must be very strong evidence to model the reverse of a 300-year-old trend. What is that evidence? I can’t find it.
Doug Allen:
Your theory about a 300-year warming trend just doesn’t hold up under scrutiny.
The main drivers of the Little Ice Age cooling were decreased solar activity and increased volcanic activity. These factors cannot account for the global warming observed over the past 50-100 years. Furthermore, it is physically incorrect to state that the planet is simply “recovering” from the Little Ice Age.
—-er Myth [moderator’s note, please check the Etiquette: https://scienceofdoom.com/etiquette/]: “We’re coming out of a Little Ice Age”, Advanced rebuttal by Dana, Skeptical Science, Sep 29, 2010
http://www.skepticalscience.com/coming-out-of-little-ice-age-advanced.htm
For readers interested in RW’s comment of July 8, 2011 at 3:13 am, this has already been discussed in the comments following another article.
The reason why the atmosphere doesn’t radiate the same amount from the top as from the bottom is simple, but unfortunately not simple enough for everyone.
The measurements clearly demonstrate the downward radiation from the bottom of the atmosphere is not equal to the upward radiation from the top of the atmosphere. And theory, ultimately derived from measurements of the real world, says the same.
Not a discussion I will continue here – you can already see my failure to adequately explain this super-simple concept in that article.
SoD,
“The reason why the atmosphere doesn’t radiate the same amount from the top as from the bottom is simple, but unfortunately not simple enough for everyone.”
I know you don’t want to get into this any further, but this is NOT what is being claimed. The claim is that half of what’s emitted radiatively from the surface and absorbed by the atmosphere (by GHGs and clouds) ultimately ends up going to space, and the other half ultimately goes back to the surface. The half returned to the surface is the radiative equivalent of the half radiated out to space. It does not all return by radiation and is not returned by the bottom of the atmosphere radiating the same amount down as the TOA radiates up to space.
The atmosphere does not simply send directly up and directly down half of what it absorbs in one simple absorption and re-emission. The point is the net result at the boundaries of the surface and the TOA is equivalent to the atmosphere doing this, even though in reality there are multiple exchanges between radiative and non-radiative energy through the atmosphere, as well as multiple GHG absorptions and re-emissions.
RW,
Perhaps someone with better understanding will correct this, but I think the ‘half up, half down’ pattern only applies to a single ‘layer’ of the atmosphere. The energy will then encounter further layers with the same pattern. The percentage that makes its way back to the surface is essentially a function of the number of layers and the amount of greenhouse gases – there isn’t a probabilistic requirement for it to be half-and-half overall.
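A toy calculation supports this. In the standard textbook N-layer grey atmosphere (each layer fully absorbing in the infrared and transparent to sunlight – an idealization, not the real atmosphere), “half returned” holds only for a single layer:

```python
# Idealized N-layer grey atmosphere: with absorbed solar flux F, the
# surface emits (N+1)*F while the downward flux at the surface is N*F,
# so the fraction of surface emission "returned" grows with N.
F = 239.0  # W/m^2, absorbed solar flux (illustrative value)

for n in (1, 2, 5, 10):
    surface_emission = (n + 1) * F
    back_radiation = n * F
    print(f"{n:2d} layer(s): fraction returned = "
          f"{back_radiation / surface_emission:.2f}")
```

With one layer the fraction is 0.50; with ten it is 0.91.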
What you appear to be asserting is that the 2xCO2 no-feedback equilibrium forcing response (3.7 W/m^2, or 7.4 W/m^2) will send more energy upwards, out of the atmosphere, and downwards to warm the surface in equal measure (1.85 W/m^2 or 3.7 W/m^2 in each direction).
Let’s just look at some numbers to consider the implications. Let’s say hypothetically that the planet is in an equilibrium state, meaning outgoing energy is equal to incoming energy: There is 239 W/m^2 coming in from the Sun and 239 W/m^2 emitted at TOA.
Now we will double CO2 and magically arrive at equilibrium for the new state. Incoming solar remains at 239 W/m^2 but we now have 3.7 W/m^2 ‘extra’ to play with. According to your assertion this would mean an increase in TOA emission to 240.85 W/m^2. So we can see the Earth is now losing more energy than it is receiving. Either you’re suggesting greenhouse gases actually cool the planet or you’ve created energy from nothing.
Paul,
You’re forgetting that the 3.7 W/m^2 (or 7.4 W/m^2) from 2xCO2 is reducing the 239 W/m^2 outgoing LW to 235.3 W/m^2 (or 231.6 W/m^2) prior to “half up, half down.”
The ultimate point is that only the downward returned half can affect the surface temperature, because the upward emitted half ends up escaping to space just like the ‘window’ transmittance of 70 W/m^2 in Trenberth’s diagram.
When CO2 is doubled, the ‘window’ reduces by 3.7 W/m^2 (from 70 W/m^2 to 66.3 W/m^2). Half (1.85 W/m^2) goes out to space without ever reaching the surface again and the other half (1.85 W/m^2) goes to the surface to affect its temperature. The surface then has to warm up by about 3 W/m^2 to allow the additional 1.85 W/m^2 to leave the system to restore equilibrium (239 W/m^2 in and out).
The reference to the Held blog and resonance discussion is why climate prediction is so fraught; the potential effects of resonance on climate are discussed here:
http://landshape.org/enm/celestial-origins-of-climate-oscillations/#disqus_thread
And here:
http://arxiv.org/abs/1002.1024
Paul S. Thanks for the above reference on seasonality. Although I like Stainforth’s ensembles, I was more interested in seeing how well the IPCC’s models performed. Surprisingly, the introduction doesn’t provide any references to earlier, more basic work on how well seasonality is modeled.
Yeah, I can see now you’re looking for something a little more basic/fundamental. IPCC AR4 8.3 and 8.4.11 have some very brief discussions about this but I can’t find anything more substantial. You could see if you can glean anything from the references in that chapter.
It seems strange given that modellers tend to talk about seasonal cycles quite a bit. It could be that simply matching surface temperature changes through seasons is seen as too basic a topic for publication on its own.
Recommended reading:
“Climate Change: Still Worse Than You Think,” an excellent post by Kevin Drum on Mother Jones. It also includes an outstanding video.
http://motherjones.com/kevin-drum/2011/07/climate-change
Drum’s post is centered on the paper, “Geologic constraints on the glacial amplification of Phanerozoic climate sensitivity,” by Jeffrey Park and Dana Royer, American Journal of Science, Vol. 311, January 2011, p. 1-26; doi:10.2475/01.2011.01
http://www.ajsonline.org/cgi/content/abstract/311/1/1
badgersouth,
Talk about confusing cause and effect! CO2 is not the primary cause of waxing and waning of ice sheets. That’s caused by things like Milankovitch cycles and changes in ocean circulation caused by continental drift. CO2 increases the climate sensitivity of changes in ice cover, not the other way around.
Comparing radiative forcing at the tropopause for an instantaneous change in CO2 before equilibration of the lower atmosphere and surface to the change in surface emission after equilibration of the atmosphere and surface is comparing apples to oranges. It demonstrates a fundamental misunderstanding of the physics of the atmospheric ‘greenhouse’ effect. For starters, it fails completely to consider the change in downward atmospheric radiation at the surface after equilibration.
DeWitt Payne:
Did you read the Park and Royer paper?
A free pdf of the Park and Royer paper is available here:
Click to access climate_sensitivity_II_AJS.pdf
badgersouth,
From Park and Royer:
[…]
There’s the fundamental problem. Park and Royer, following Lunt and others, assume the politically correct position that CO2 is the only forcing and everything else is a feedback. That’s simply not true and certainly doesn’t explain why the CO2 rises and falls. Ice-sheet extent is the forcing and CO2 is a feedback in most cases. For example, CO2 lags temperature in the recent ice core record. Even in the case of the PETM, the temperature started up before the start of the carbon isotope injection.
@DeWitt Payne:
As explained below, changes in CO2 concentrations in the atmosphere both preceded and followed high temperatures during the last eight glacial cycles. Reputable climate scientists understand and readily acknowledge this fact.
“Over eight glacial cycles in 650,000 years, global temperature and the amount of CO2 in the atmosphere have gone hand in hand. When temperatures are high, so are CO2 amounts and vice versa. This obvious connection is part of a coupled system in which changes in climate affect CO2 levels, and CO2 levels also change climate. The pacing of these cycles is set by variations in the Earth’s orbit, but their magnitude is strongly affected by greenhouse gas changes and the waxing and waning of the ice sheets.
“Despite these large natural CO2 variations, atmospheric CO2 variations remained relatively stable over the 12,000 years from the end of the last ice age to the dawn of the industrial era, varying between 260 and 280 ppm. Methane, too, was stable during this period varying from 0.6 to 0.7 ppm. These trace-gas concentrations are well known from analyzing air bubbles trapped in ancient snowfall. This relative stability came to an abrupt end with the onset of the industrial era. At that point, we started transferring to the atmosphere carbon that had been stored in underground reservoirs for millions of years. These modern increases have occurred in a geologic blink of the eye, dwarfing the rate of increase coming out of the last ice age. Plotted on the same graph as the ice age change, the industrial era increases look like vertical lines.”
Source: “Climate Change: Picturing the Science,” Gavin Schmidt and Joshua Wolfe, W.W. Norton Company Ltd, 2009.
badgersouth,
And that quote from Schmidt and Wolfe is different from what I said how?
S&W: […] [emphasis added]
Me: […] [emphasis added]
Saying that ice-sheet extent is a feedback is classic begging the question. The conclusion is forced by the initial assumption.
And here’s another doctoral thesis on temperature change leading carbon isotope injection during the PETM by 5,000 years:
http://igitur-archive.library.uu.nl/dissertations/2006-0906-200913/index.htm
I don’t disagree that the combined sensitivity of ice-sheet extent and CO2 at maximum ice sheet extent is high. But ice-sheet extent is low right now so the combined sensitivity isn’t going to be anywhere near as large. And the CO2 contribution to the ice-sheet sensitivity is less than half, IMO, considerably less than half.
@DeWitt Payne
You state:
“I don’t disagree that the combined sensitivity of ice-sheet extent and CO2 at maximum ice sheet extent is high. But ice-sheet extent is low right now so the combined sensitivity isn’t going to be anywhere near as large. And the CO2 contribution to the ice-sheet sensitivity is less than half, IMO, considerably less than half.”
Can you cite any published, peer-reviewed papers that validate your opinion?
Also, please define what you mean by “ice-sheet sensitivity.”
“This level of complexity also implies the system is chaotic (Lorenz, 1996; Hansen et al., 1997), which means the representation of the Earth system is not deterministic.”
The word ‘complexity’ as used previously in the quoted material seems to be based solely on the presence of multiple-scale, multiple-physics physical phenomena and processes. (Non-local is also mentioned, but not defined. All of continuum mechanics is based on local interactions. There is a non-local continuum mechanics, but the range of interactions is still limited, and interactions are additionally constrained to a given amount of material.)
Multiple-scale, multiple-physics physical phenomena and processes are not sufficient to ensure chaotic response in the sense that ‘chaotic response’ is used in the mathematical literature. In fact, they are not even necessary.
Another excellent article on climate sensitivity, including a discussion of the key findings of the Park and Royer paper previously cited, is:
“Roy Spencer on Climate Sensitivity – Again” by Chris Colose, Skeptical Science, July 1, 2011
http://www.skepticalscience.com/spencer_ocean.html
I’m afraid the Colose effort at SC is just wrong. Colose criticises Spencer for using a “pure diffusion process” without an “upwelling term”. This misses the point that Spencer is concerned with ocean uptake of heat, a fundamental issue to do with official estimates of climate sensitivity which even Hansen concedes has problems; see this exchange for a discussion of that:
http://landshape.org/enm/rejoinder-to-geoff-davies-at-abc-unleashed/
In any event Spencer was responding to Hansen’s concession that his eddy diffusion coefficient [EDC] had been overestimated; that is Hansen’s parameter not Spencer’s.
Colose goes on to say “It is also worth noting that only the transient climate response can be affected by observations of ocean heat content change, since this has no bearing on equilibrium climate sensitivity”. This is the crux of the matter. If Hansen is wrong about aerosols mitigating ocean heat uptake then the distinction between equilibrium and transient climate sensitivity is reduced or negated. If Hansen is wrong about aerosols, CO2 climate sensitivity is low and there is no long-term equilibrium sensitivity; whatever heating has occurred through the increase in CO2 is all the heating which will occur.
Colose is also wrong about the ocean having no bearing on equilibrium sensitivity; in a new paper it is noted:
“system response to a forcing depends not only on (1) the size of a forcing, and (2) its duration (affecting the accumulation of heat), but also (3) the depth in a system that a forcing is applied. For example, long-wave forcing of the low AR, high loss atmospheric level by GHGs would differ from shortwave solar radiation forcing the surface layers of the land and ocean. Geothermal heating in the deep ocean would have the highest intrinsic gain, due to low losses.”
Spencer compares his model to the PCM1 climate model and GISS forcings; he shows their defects. Colose has got it wrong not Spencer.
Cohenite – Colose criticises Spencer for using a “pure diffusion process” without an “upwelling term”. This misses the point that Spencer is concerned with ocean uptake of heat
That doesn’t even make sense. If you want to understand the uptake of heat in the ocean, you need to model it correctly, and that includes the exchanges. His so-called ‘model’ just assumes heat flows downwards, whereas the real ocean doesn’t work that way. No simulation of the thermohaline circulation, nor oceanic gyres, which drive heat down to deeper layers, nor upwelling. How does Spencer suppose all that heat, we now observe, got to the bottom layers? Magic?
DeWitt Payne – And here’s another doctoral thesis on temperature change leading carbon isotope injection during the PETM
It’s well accepted that warming preceded the onset of the PETM. That’s how all those methane hydrates were supposed to have been released in the first place. The extremely high temperatures of the PETM suggest either very high climate sensitivity (unlikely) or some powerful feedback/s in the system. That climate models grossly underestimate the actual temperatures of the PETM and the vastly reduced equator-to-pole, and surface to deep ocean, temperature gradients is a big problem.
Did you read what I said? Spencer was responding to Hansen’s EDC, so I guess your complaint is about Hansen, not Spencer.
badgersouth,
Let’s move the goalposts, shall we? No concession that you misinterpreted my point about what are feedbacks and what are forcings, but find something different to attack. But I’ll answer this one before you go down the memory hole.
You place far too much faith in peer review. Besides, I don’t need a peer-reviewed article to validate my opinion. I can tell that Park and Royer is worthless by reading it, just like I can tell that the G&T and Miskolczi papers on the greenhouse effect are best used to line bird cages. They have been peer reviewed as well. Ice-sheet sensitivity is no different from any other climate sensitivity; its units are C/W/m². But of course it isn’t a constant. It varies with ice-sheet extent.
Suppose the overall climate sensitivity is 0.4 C/W/m² (that’s an equilibrium climate ΔT2xCO2 of 1.5 C) and that the change in global temperature between the last glacial maximum and the present was 5 C. That means there was a forcing change of 12.5 W/m². The change in CO2 forcing would be 3.7*ln(280/180)/ln(2) or 2.4 W/m². That accounts for 1 C of the temperature change, so the overall ice-sheet sensitivity would be ~4/10 or 0.4 C/W/m² too. So in this case the forcing from CO2 amplified the temperature change from the 10 W/m² due to the change in ice-sheet area to 12.5 W/m².
According to Park and Royer, that same change is all due to CO2, so the climate sensitivity is 5/2.4 or ~2 C/W/m², leading to a ΔT2xCO2 of 7.4 C. But it can’t be that high, because the temperature went up – and by extension the ice-sheet extent went down – before the CO2 increased.
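For readers following along, here is DeWitt’s arithmetic spelled out (his assumed numbers, not established values; he rounds intermediate results, which is why he gets 7.4 C rather than the unrounded ~7.8 C):

```python
import math

S = 0.4    # assumed overall climate sensitivity, C per W/m^2
dT = 5.0   # assumed glacial-to-present warming, C

total_forcing = dT / S                             # 12.5 W/m^2
dF_co2 = 3.7 * math.log(280 / 180) / math.log(2)   # ~2.4 W/m^2
dT_co2 = S * dF_co2                                # ~1 C attributable to CO2

# Remaining ~4 C attributed to ~10 W/m^2 of ice-sheet (albedo) forcing:
ice_sensitivity = (dT - dT_co2) / (total_forcing - dF_co2)   # ~0.4 C/W/m^2

# Attributing all 5 C to CO2 alone, as DeWitt reads Park and Royer:
S_all_co2 = dT / dF_co2    # ~2.1 C per W/m^2
dT2x = S_all_co2 * 3.7     # ~7.8 C (7.4 C with DeWitt's rounding)

print(total_forcing, round(dF_co2, 2), round(dT_co2, 2),
      round(ice_sensitivity, 2), round(S_all_co2, 2), round(dT2x, 1))
```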
DeWitt Payne:
When can we expect to see your paper published in a reputable scientific journal?
Cohenite – Spencer has made up an ocean model with no real physics and then claimed that it explains OHC trends and that consequently climate sensitivity is low, because there’s nowhere else for the heat to go. Does that sound at all reasonable or logical to you?
His model cannot explain any of the physical processes that actually occur in the real world. It has no predictive power and is therefore useless. For example it is unable to explain the deep ocean warming in the last few decades, measured by Purkey & Johnson 2010 & Kouketsu 2011.
It appears you haven’t even understood Chris Colose’s post at all. If you want to play the role of the wizard from the Wizard of Oz, that’s your choice, but expect to get called out now and then.
Wizard of Oz? Ha, more like Toto. What deep ocean warming?
Click to access KD_InPress_final.pdf
Dapplewater,
Isaac Held presents a post showing how the climate system may be reduced to a simple linear equation – no physics there.
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/
This would seem to be at the heart of the idea that the climate can be predicted 100+ years into the future. It strikes me that simplifying the system in order to ask questions about what is going on is a valid and oft-used method in climate science.
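The simplest version of the linearity Held describes is a one-box energy-balance model, C dT/dt = F(t) − λT. A minimal sketch (parameter values are illustrative assumptions, not Held’s):

```python
import numpy as np

C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (rough mixed-layer value)
lam = 1.2   # feedback parameter, W m^-2 K^-1 (illustrative)
dt = 0.1    # time step, years

t = np.arange(0, 150, dt)
F = 3.7 * np.minimum(t / 70.0, 1.0)   # forcing ramps to 2xCO2 over 70 years

# Forward-Euler integration of C dT/dt = F - lam*T
T = np.zeros_like(t)
for i in range(1, len(t)):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

print("warming at year 150:", round(T[-1], 2), "K")
print("equilibrium response 3.7/lam:", round(3.7 / lam, 2), "K")
```

The claim behind such reductions is not that the climate is linear in any deep sense, but that the forced response of full GCMs to modest forcings turns out to be close to linear, which is what makes the century-scale question tractable.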
badgersouth
Perusing your comments throughout this post, it appears as though your arguments are framed predominantly from information obtained off sites that openly support ‘catastrophic’ climate scenarios, such as Skeptical Science, Grist and Mother Jones. While the first two reference sites are not particularly to my taste, there is much in MJ, climate articles notwithstanding, that interests me – so, before my comment has time to colour your view, I am not ‘right-wing’ nor do I adhere to extreme libertarian dogma.
I rarely comment on this site because, frankly, the level of scientific discourse is above me. And hence the reason behind one of my rare posts. I suspect your arguments would hold greater credibility if they were based in an obvious understanding of the science relating to the discussion at hand (and if you refrained from using certain antagonistic labels). This rabid ‘argument from blog authority’ is rife at both poles of the climate debate and appears to serve no other purpose than to increase rancor and imbue the poster with a perceived level of scientific cred. There is a big difference between being good at research and having actual scientific nous, FWIW.
Ian
Cohenite – it appears you don’t understand Ocean Heat Content research either. The Von Schuckmann paper mentioned by Douglass is the only one to measure down to 2000 metres – the maximum depth of the ARGO floats. Note how Douglass calls it an “outlier”, because it indeed shows warming. The other papers only cover down to 700 metres. Time will tell which analysis is right, but a recent paper by Von Schuckmann & Le Traon (2011) shows a lot of warming.
Purkey and Johnson (2010) and Kouketsu (2011) (ftp://soest.hawaii.edu/coastal/Climate%20Articles/Ocean%20warming%20DEEP%20Kouketsu%202011.pdf) measure the deep ocean, i.e. beyond the scope of the ARGO measurements, using ship-based survey data. They find the deep ocean is warming.
How does the heat get down there? Well, one thing’s for certain Roy Spencer’s ‘model’ won’t provide any clues, because it doesn’t represent how the real ocean works – no thermohaline circulation for instance. In fact Spencer’s ‘model’ means a lot of real-world observations are impossible, such as year-to-year variability.
Well, I don’t understand a lot of things; one of those is how OHC can be said to be increasing overall when it most certainly isn’t in the top 700 metres or at the surface since at least 2003, and the thermal expansion component of sea level rise has been declining over the same period:
Click to access os-5-193-2009.pdf
Click to access annurev-marine-120308-081105.pdf
cohenite,
To be clear, it still looks as if OHC is increasing, just at a much lower rate than would be expected. I think one of the ways this uncomfortable fact is ignored is by simply dismissing the short period of time and focusing on longer periods, e.g. 1993–present, even though, as you state, OHC (through ARGO) and SLR (through GRACE) seem to be giving consistent results.
Given you raised SLR, I thought there were some interesting numbers in the most recent BAMS State of the Climate 2010. SLR starts on page S98 and OHC on page S81. SLR of 1.5mm/yr for the past 5 years seems like something that deserves more comment than it gets.
HR,
I don’t think that 1.5mm/yr figure is as clear-cut as you imply. It only relates to open ocean areas > 200km from the nearest coast. In the same time frame coastal sea level rise has accelerated.
See this paper for a comparison of coastal and open ocean up to 2007: ftp://soest.hawaii.edu/coastal/Climate%20Articles/Cazenave%20coastal%20sea%20level%20and%20altimetry%202009.pdf
Figure 2 shows coastal SLR acceleration at the point open ocean sea level dips.
Church & White 2011 (http://www.springerlink.com/content/h2575k28311g5146/) shows a simple average of tide gauges in Figure 6, which represents global coastal sea level up to 2009. Unless there was a big dip in 2010 (not beyond the realms of possibility) this data shows ~4mm/yr SLR over the same period.
To be sure, the overall global data suggests a slowdown in recent years. Current altimeter data shows 1.9mm/yr from 2005 to present, though the BAMS 2010 report does say ‘at least 10 years of data are required to determine a reliable rate (Nerem et al. 1999).’
HR,
The problem with looking at 1993-present ocean heat content data is that there’s an obvious problem integrating pre-ARGO XBT data with ARGO data. The rapid increase in OHC from 2002-2004 is suspect as there is no equivalent change in rate of sea level increase during the same period. Even ignoring that, the implied radiative imbalance is about 0.6 W/m², about 2/3 the presumed 0.9 W/m² modeled imbalance.
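As a rough sanity check on that figure, the conversion from an OHC trend to an implied TOA imbalance is simple arithmetic. The trend value below is an illustrative round number, not a result from any of the papers discussed:

```python
# Back-of-the-envelope: convert an ocean heat content (OHC) trend into an
# implied top-of-atmosphere radiative imbalance, spread over the whole Earth.
# The trend value is illustrative, not taken from any specific paper.

SECONDS_PER_YEAR = 3.156e7       # s
EARTH_SURFACE_AREA = 5.1e14      # m^2 (global, since the TOA imbalance is global)

ohc_trend = 1.0e22               # J/yr, an illustrative ocean heating rate

imbalance = ohc_trend / SECONDS_PER_YEAR / EARTH_SURFACE_AREA
print(f"Implied radiative imbalance: {imbalance:.2f} W/m^2")   # ~0.6 W/m^2
```

An OHC trend of about 1×10²² J/yr corresponds to roughly the 0.6 W/m² imbalance mentioned above.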
Readers of this comment thread will also want to check out:
“Trenberth on Tracking Earth’s energy: A key to climate variability and change” an original article by Kevin Trenberth written for Skeptical Science and posted on July 12 (Australian time).
To access the article, go to:
http://www.skepticalscience.com/Tracking_Earths_Energy.html
badgersouth,
That article is mostly a rehash and is long on assertion and very short on data. In the end, neither Trenberth (heat is going into the deep ocean) nor Hansen (the heat was never there because it was reflected by aerosols) have any real data to back up their hypotheses.
Paul S, SOD, and others interested in how well GCMs predict the amplitude of seasonal temperature change:
As can be seen from what AR4 says below, WGI has disguised how effectively models predict seasonal temperature changes by reporting errors in terms of the standard deviation in monthly mean temperatures. If temperature followed a perfect sine curve over 12 30-day months with an amplitude of 5 degC (10 degC annual swing), my calculations show that the standard deviation of the monthly means would be 3.65 degC. Of course, observed and modeled monthly means do vary from year to year also. So when looking at the IPCC’s data, the annual seasonal temperature swing is probably roughly twice the standard deviation of the monthly means.
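For anyone who wants to reproduce that calculation, here is a minimal numerical check, assuming the idealized 12 × 30-day sine-curve year described above:

```python
import numpy as np

# Check: for a pure sine annual cycle with amplitude 5 degC (10 degC swing),
# the standard deviation of the 12 monthly-mean temperatures is ~3.65 degC.

A = 5.0                                                  # amplitude, degC
t = np.linspace(0.0, 360.0, 360 * 24, endpoint=False)    # hourly steps, 360-day year
temps = A * np.sin(2.0 * np.pi * t / 360.0)

monthly_means = temps.reshape(12, -1).mean(axis=1)       # average within each 30-day month
print(monthly_means.std(ddof=1))                         # ~3.65 (sample standard deviation)
```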
The data can be found on pages SM.8-11 to 8-13 at: http://www.ipcc-wg1.unibe.ch/publications/wg1-ar4/ar4-wg1-chapter8-supp-material.pdf
What can modeling seasonality tell us? a) The models give wildly different results. b) Models that overestimate seasonality might overestimate feedbacks. c) Albedo feedback is more important to correctly modeling seasonality in the NH (esp land) than to modeling global warming. d) The standard deviation in many tropical areas may be dominated by phenomena like El Nino and monsoons and not by seasonal solar forcing+feedback. e) If one is interested in how well water vapor and cloud feedback are handled, one might look at the temperate zone of the SH. Some models (GISS-EH and PCM) appear to double the observed seasonal temperature swing in this region.
From AR4 WGI 8.3.1.1.1: An additional opportunity for evaluating models is afforded by the observed annual cycle of surface temperature. Figure 8.3 shows the standard deviation of monthly mean surface temperatures, which is dominated by contributions from the amplitudes of the annual and semi-annual components of the annual cycle. The difference between the mean of the model results and the observations is also shown. The absolute differences are in most regions less than 1°C. Even over extensive land areas of the NH where the standard deviation generally exceeds 10°C, the models agree with observations within 2°C almost everywhere. The models, as a group, clearly capture the differences between marine and continental environments and the larger magnitude of the annual cycle found at higher latitudes, but there is a general tendency to underestimate the annual temperature range over eastern Siberia. In general, the largest fractional errors are found over the oceans (e.g., over much of tropical South America and off the east coasts of North America and Asia). These exceptions to the overall good agreement illustrate a general characteristic of current climate models: the largest-scale features of climate are simulated more accurately than regional- and smaller-scale features.
Like the annual range of temperature, the diurnal range (the difference between daily maximum and minimum surface air temperature) is much smaller over oceans than over land, where it is also better observed, so the discussion here is restricted to continental regions. The diurnal temperature range, zonally and annually averaged over the continents, is generally too small in the models, in many regions by AS MUCH AS 50% (see Supplementary Material, Figure S8.3). Nevertheless, the models simulate the general pattern of this field, with relatively high values over the clearer, drier regions. It is not yet known why models generally underestimate the diurnal temperature range; it is possible that in some models it is in part due to shortcomings of the boundary-layer parametrizations or in the simulation of freezing and thawing soil, and it is also known that the diurnal cycle of convective cloud, which interacts strongly with surface temperature, is rather poorly simulated.
What is “old news” to you may be “new news” to others reading this comment thread.
[snipped comment]
This is a forum for discussing the science of climate. Please read About this Blog and also The Etiquette.
A very readable and interesting paper about the development of ensemble forecasting in the light of the chaotic nature of weather:
Roots of ensemble forecasting, JM Lewis 2005
It is more about weather forecasting than climate but still worth a read (freely available via the link).
scienceofdoom,
See this article on the uncertainties in climate models, which may be due to the chaotic nature of the climate system.
Click to access climate_of_belief.pdf
There is a great YouTube video on the Lorenz system, comparing some very high resolution Navier-Stokes simulations of it with Lorenz’s 3-dimensional model. It tells me that enormous complexity trying to model “all the physics” may be less informative than physically based and constrained simple models. It looks to me as if it’s only possible to understand the nonlinear structure of the system using the simpler model.
It seems to me that the question of the skill of climate simulations depends heavily on nonlinear properties of the system. We know it’s chaotic, and thus the butterfly effect makes the initial value problem ill-posed in a mathematical sense, i.e., small changes in the initial conditions lead to unbounded changes in the final-time solution in the usual norms such as L2. The question then boils down to whether the solution is better posed in some other norm, such as a norm involving integrals of the solution. This is a field where there are some results for the simple Navier-Stokes equations of viscous, compressible fluid flow. The estimates of the dimension of the attractor are very large and grow with the Reynolds number. For the atmosphere, the Reynolds number is large. If the dimension of the attractor were small, there would be hope, in that one could characterize these dimensions by some output quantities that could be used as the figure of merit for the results of a simulation.
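As a concrete illustration of that initial-condition sensitivity, here is a minimal sketch using the classic Lorenz-63 system (not a climate model), showing two trajectories that start 10⁻¹⁰ apart and end up completely decorrelated:

```python
import numpy as np

# Lorenz-63 with the classic parameter values, integrated by plain RK4.
# Two trajectories starting 1e-10 apart diverge roughly exponentially.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps = 0.01, 4000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])        # tiny initial perturbation

for step in range(n_steps):
    if step % 500 == 0:
        print(f"t={step * dt:5.1f}  separation={np.linalg.norm(a - b):.3e}")
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
```

The separation saturates at the size of the attractor, yet both trajectories stay on the same attractor, which is exactly the sense in which statistics can remain predictable while individual trajectories are not.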
CoD said elsewhere that the bottom line reason why climate modelers think their results have skill is roughly “every time I run the model, I get a reasonable looking climate.” Unfortunately, this is just an intuitive reason and not a rigorous or scientific reason.
Unfortunately, the job of validating the models then is very difficult. A microcosm of this can be seen in fluid dynamics, where simulations based on turbulence models are usually admitted to be “postdictive” rather than predictive. That’s a harsh judgment, but it is a consensus position. The problem is that the literature of actual computational fluid dynamics in most of the application areas gives a different picture due to positive-results bias. Perhaps if CoD is interested, I can go into details.
My bottom line is that GCMs are perhaps useful, but not really validated in a formal sense, and there is a lot of fundamental work to do.
For the models to make useful predictions (or projections) two requirements must be satisfied:
1) They must make predictions.
2) It must be possible to justify the usefulness of those predictions.
The first point can be studied through experimentation with the models. Making a large number of model runs using input that varies in the range deemed relevant tells whether the results converge to a range narrow enough to be called a prediction.
For models like the climate models this requirement is far from trivial. The outcome could be so scattered that its uselessness is immediately evident. It could also result in a distribution that has a clear peak but such fat tails that ensemble averages diverge. In the latter case adding more runs tends to produce new outliers so far from the earlier distribution that those outliers make the statistical properties indeterminate. Alternatively the chaotic properties of the model might involve strange attractors that make quantities like equilibrium climate sensitivity undetermined. A third possibility is that there are quasiperiodic oscillations that are complex enough to have in practice an effect similar to that of strange attractors.
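A toy statistical analogy of that fat-tail failure mode (nothing here is a claim about any particular climate model) can be seen by comparing running means of samples from a Gaussian and from a Cauchy distribution, which has no finite mean:

```python
import numpy as np

# Running means: Gaussian samples settle down; Cauchy samples keep jumping
# as new outliers arrive, so the 'ensemble average' never stabilizes.

rng = np.random.default_rng(0)
n = 100_000
gauss = rng.normal(size=n)
cauchy = rng.standard_cauchy(size=n)

for k in (100, 1_000, 10_000, 100_000):
    print(f"n={k:>7}: Gaussian mean={gauss[:k].mean():+.4f}  "
          f"Cauchy mean={cauchy[:k].mean():+.4f}")
```

Adding more ‘runs’ stabilizes the Gaussian average but not the Cauchy one.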
Assume now that the model succeeds in the tests of point (1). I consider only physical models built on the same laws of physics that are valid in the real world, but making simplifications and discretization. For physical models the success in point (1) contains by itself some evidence also on point (2), but only weak evidence if the simplifications are large and/or discretization is likely to have a major influence on the outcome.
In the case of GCM type models used to determine long term climate indicators (like decadal averages and variances of climate relevant variables) the only way the models can succeed is through making many model runs and calculating those statistical indicators. Everything dependent on initial conditions must be averaged out in the analysis.
The GCMs can certainly produce many results known to be essentially correct. One example is seasonal variability. The models produce regionally reasonable summer and winter averages. They pass also a very large number of other similar tests. They most certainly have a lot of skill. The real question is then, whether they have skill also in determining the quantitative influence of added CO2. It’s obvious that they have some skill in that, but do they have enough skill to outperform estimates done by simpler means?
I do not believe that any generic argument can shed light on this most crucial question. It can be answered only by research performed using specific models. Part of that work might be a more thorough study of my point (1), i.e. of the properties of the model itself, but most of the work must involve comparisons with empirical data. The empirical data that could potentially be used is likely to come mostly from instrumental observations of latest decades, but it may involve also paleoclimatic data.
Climate modelers have done that kind of model evaluation for a long time. Each of them surely has her or his subjective judgment of the present state of understanding. The nature of the evidence and the research is, however, such that representing that evidence objectively to outsiders is extremely difficult, perhaps impossible.
Yes, Pekka, your summary is a good one I think. The problem here is that the models must be validated against accurate real world data to determine their range of applicability. That takes a lot of work and in many cases, actually gathering new data specifically to test some model predictions that might seem to be outliers.
However, I do think it should be possible for modelers to represent their validation evidence to outsiders successfully. That is especially true if the outsider is familiar with other fields such as fluid dynamics where the issues and methods are very similar.
I also believe that the climate model validation problem is very complex and it is implausible to me that the range of validity is really understood outside of a narrow range around the present climate.
I also believe that in such circumstances, simpler models that you can constrain with data may actually be more accurate. This is a controversial area in fluid dynamics right now for example.
Pekka: A useful model needs to provide a central value and a valid confidence interval for the central value. There is a big difference between projecting that the mean global temperature a century from now following some emission scenario will be +3.6 +/- 0.5 degC (70% ci) and projecting +3.6 +/- 1.8 degC (70% ci). The latter answer says that almost any future is possible, while the former provides useful guidance for policymakers. The wider range is similar to that for climate sensitivity; the narrower range is typical of model output. Why do models show a narrower range? Is the narrower range meaningful?
If I do a linear fit to some data (that should be linear), I can develop a f(x) = mx + b model with confidence intervals for m and b. I can use the confidence intervals for m and b to produce confidence intervals for f(x).
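A minimal sketch of that procedure, with synthetic data standing in for the measurements (the true slope and intercept here are arbitrary choices):

```python
import numpy as np
from scipy import stats

# Fit y = m*x + b to noisy synthetic data, then propagate the parameter
# uncertainty into a confidence interval for the fitted value f(x0).

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)    # true m=2, b=1

res = stats.linregress(x, y)
n = x.size
t_crit = stats.t.ppf(0.975, df=n - 2)                     # 95% two-sided

print(f"m = {res.slope:.3f} +/- {t_crit * res.stderr:.3f}")
print(f"b = {res.intercept:.3f} +/- {t_crit * res.intercept_stderr:.3f}")

# Confidence interval for the mean response at x0:
x0 = 5.0
resid = y - (res.slope * x + res.intercept)
s_err = np.sqrt(np.sum(resid**2) / (n - 2))
half = t_crit * s_err * np.sqrt(1.0 / n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
print(f"f({x0}) = {res.slope * x0 + res.intercept:.3f} +/- {half:.3f}")
```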
The range of climate model output for a single model is determined by multiple runs with different initialization conditions (initialization uncertainty). However, climate models are populated with dozens (perhaps hundreds) of parameters that can affect climate sensitivity – and they create parameter uncertainty. Parameter uncertainty is probably much greater than initialization uncertainty. Many of these parameters are tuned by an arbitrary process that is unlikely to produce a set of optimum parameters that best represent current climate (if any set is actually capable of reproducing all aspects of today’s climate at least as well as any other set). Until modelers find a reasonable way to determine parameter uncertainty, they aren’t able to tell us the full range of possible futures that are consistent with our understanding of climate physics.
The uncertainty in model output must be at least as great as the uncertainty in TCR or ECS, plus initialization uncertainty.
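To see why parameter uncertainty can dominate, here is a toy perturbed-parameter experiment with a zero-dimensional energy balance; every number in it is an assumption chosen for illustration, not a GCM result:

```python
import numpy as np

# Equilibrium warming for doubled CO2 in a zero-dimensional energy balance:
# dT = F_2x / lambda. A symmetric spread in the feedback parameter lambda
# maps into an asymmetric, fat-upper-tailed spread in sensitivity.

rng = np.random.default_rng(2)
F_2x = 3.7                                   # W/m^2, forcing for doubled CO2
lam = rng.normal(1.25, 0.35, size=100_000)   # W/m^2/K, hypothetical spread
lam = lam[lam > 0.3]                         # discard unphysical values

ecs = F_2x / lam
print(f"median ECS {np.median(ecs):.2f} K, "
      f"5-95% range {np.percentile(ecs, 5):.2f}-{np.percentile(ecs, 95):.2f} K")
```

Because sensitivity is the reciprocal of the feedback parameter, even a modest, symmetric parameter uncertainty produces a long upper tail in ECS.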
Frank,
Parameter uncertainty is only a fraction of model uncertainty. There are many points in model development, where a number of choices is possible concerning the equations and methods of solution. These go beyond parameter uncertainties.
The only practical way of getting some understanding on that is through the use of several models developed independently – or in some cases perhaps by using several variations of the same basic model.
Modelers hope (and believe to varying extent) that the present set of models tells roughly the range the models can have for projections to the future, when only such models are included that are:
1) based on physics
2) acceptably compatible with the history.
When they work long with models, and find out that their model set converges to certain range of projections, whenever they don’t contradict badly the history data, their trust in the models is likely to increase. Good modelers know that there’s still the risk that all the models are in error for some common reasons, but the practical experience with running the actual set of models does affect their conclusions.
Pekka: The little work with perturbed physics ensembles I’ve seen suggests that the majority of model uncertainty is parameter uncertainty – though ensembles allow one to study both at the same time.
You said: “Modelers hope (and believe to varying extent) that the present set of models tells roughly the range the models can have for projections to the future.” Actually, AR4 has a statement explaining that the IPCC’s models don’t systematically explore parameter space and that the range of multi-model output shouldn’t be interpreted as a confidence interval. That warning, of course, is ignored elsewhere in the report.
You also imply that the valid models in an ensemble are only those that are “acceptably compatible with the history”. When you use history as your guide, you are assuming that unforced variability plays a negligible role in the historical record. Suppose unforced variability had actually suppressed or enhanced the record of 20th-century warming by 50%. You screen your ensembles for models that have a good fit to the 20th-century record. Then you use that ensemble to attribute 80-120% of warming to natural and anthropogenic forcing. Since natural forcing is small, you can attribute all warming to man. A great example of circular reasoning! If one postulates that a reasonable model that over-predicted or under-predicted 20th-century warming by 50% would have died by now (from lack of funding or people willing to work on it) or been “re-tuned”, the current attribution statement is a product of such circular reasoning. Lorenz describes this problem very clearly in a 1991 paper: “Chaos, Climatic Variability, and Detection of the Greenhouse Effect”.
Click to access Chaos_spontaneous_greenhouse_1991.pdf
You must screen your ensemble only for models that describe all aspects of CURRENT climate as well as possible. Then you can use the best ones for attribution or projection – assuming that some subset of the ensemble actually performs better in describing all aspects of current climate than the rest. The climate-prediction group apparently finds that parameters that are good for some aspects of current climate (say rainfall and albedo) are inferior for others (say the meridional temperature gradient). Anytime you let the historical record influence the process (even unintentionally), you are assuming that the historical record contains no unforced variability and are tuning parameters to “over-fit” whatever unforced variability is present. Modelers may have been “using” high sensitivity to aerosol cooling to “fit” the pause in warming during 1950-1970 and/or counter-balance high climate sensitivity, but Nic Lewis now claims that their values are too high. Many climate scientists today are reprocessing recent noisy data looking for forcings that can explain the current pause – which may be partly or entirely unforced variability. I just read Gavin Schmidt’s recent commentary: -0.1 degC from weak sun (doubled for “atmospheric chemistry”), -0.1 degC for volcanic aerosols (too weak to have been noticed earlier), -0.1 degC for unanticipated Asian aerosols, and -0.1 degC for eye of newt and toe of frog. (OK, he didn’t include the latter, but the abstract actually called the first three “conspiring factors”.) Of course, I am totally incapable of judging the scientific reliability of these alterations to forcing, but they should have been equally apparent a half decade ago when we were being assured the pause would end soon.
I really don’t understand the obsession with finding an explanation for every 0.1-0.3 degC of possible unforced variability. Is there a good reason unforced variability must be small?
Frank,
Comparing global average temperature histories is a small part of the testing. Much more is done using more detailed data. Climate models cannot describe well any really local effects but there’s a lot to do with larger scale regional data.
The role of chaotic or other variability is always an important question, and a point I did discuss in my lengthier comment.
Acceptably compatible with history does not imply assuming that climate variability has been small, let alone “negligible”.
Each modeling group makes its own assumptions on many things, including the strength of aerosol forcing.
I wouldn’t pick Nic Lewis as an authority. He has found some real mistakes in scientific papers, but then he has also made totally unsupportable (and actually false) claims on the value of “objective Bayesian” methods.
Pekka: Please read the Lorenz paper I cited on how attribution studies in chaotic systems MUST be done. The key section (two pages) clearly explains why you can’t tune your models using historical data and then use them for attribution studies. I assume you recognize Lorenz (unlike Lewis) as an authority on this subject, but his credibility is enhanced by his addressing (in 1991) the attribution problem his peers would face after another decade of rapid warming. (Projections made with such models have similar problems.)
Ensembles with perturbed parameters have shown that models can have climate sensitivities that are half or double that of the IPCC’s models. Such models rely on conventional physics and presumably provide as good (or poor) a description of present climate as models with ECS around 3. Do you think such a model, if one ever existed before ensembles, could have survived and been used by the IPCC?
Pekka: I specifically said “Nic Lewis claims” to avoid endorsing his position. However, in my relative ignorance, I do find his discussion of objective and subjective Bayesian methods compelling: The choice of prior clearly can bias the Bayesian analysis of climate sensitivity (based on forcing change and observed warming). Unbiased (objective) methods exist and therefore should be preferred. What am I missing? The general principle seems unavoidable (but the details of how Lewis implemented it certainly can be challenged).
As for aerosols, Item 4 of the executive summary of Lewis’s GWPF report says: “estimates of the cooling efficacy of aerosol pollution have been cut” and “But the new evidence about aerosol cooling is not reflected in the computer climate models.” I interpret this to mean that the parameter(s) in CMIP5 models used to convert historical records (500 nm optical densities?) and projections of aerosol concentrations into radiative forcing have not been updated to reflect that new evidence. I couldn’t find any data on models in the body of the report that supports this assertion. Any information on this subject would be interesting.
Frank,
I have discussed many times the problems of tuning. I’m pretty sure that I have done that also on this site. That’s an issue that I have looked at quite extensively in connections not related to climate models. This is one of those problems where purists present rules that are almost impossible to follow, and scientists who cannot follow such rules in full try to figure out how severely that affects what they can conclude. It’s a typical issue where subjective judgement cannot be avoided.
The problems with the “objective Bayesian method” are so severe that I would say that Jeffreys’ prior should not be given any special status at all in statistical analysis of scientific nature. In scientific work it’s fully as subjective as any other prior, and appropriate only, when it could be picked for other reasons as well. It may have special value in some non-scientific connections, because it’s based on formal rules that may make it good in resolving contract disputes or something of that nature. I have written recently several lengthier comments related to the problems of Jeffreys’ prior at Climate Audit and at http://julesandjames.blogspot.com. Other knowledgeable people have written similar comments on both threads, my contribution has been mainly in trying to explain the point in more detail and to answer some questions.
Pekka wrote: “Acceptably compatible with history does not imply assuming that climate variability has been small, let alone ‘negligible’.”
As von Neumann once said: “With four parameters, I can fit an elephant. With five, I can make his trunk wiggle.” Anytime one is dealing with many adjustable parameters, one risks overfitting the data – fitting the noise/unforced variability in the data. Now, I know that model developers don’t statistically fit the historical record, but the parameters of their models do gradually evolve with time and every model has converged on the historical record. Model output matches every low frequency wiggle in the historical record and the AR5 SPM says both unforced variability and natural variability are likely less than 0.1 degC. (I’m not sure precisely what this statement means.) The consensus is clearly asserting that unforced variability is negligible.
At the same time, the IPCC says ECS may range from 1.5 to 4.5 degC. Projections made in 2000 are off by 0.35 degC a decade later. We know that models don’t do a good job of reproducing many forms of high-frequency unforced variability (ENSO, MJO, ?). We suspect that low frequency unforced variability (such as PDO and AMO) exists but don’t have a long enough record to properly characterize those oscillations. The record for the Holocene seems to suggest even lower frequency variability (CWP, LIA, MWP, ?, RWP, etc), but they could be forced. The 5 degC temperature changes between glacials and interglacials can’t be explained by forcing.
(Sorry I’m so disagreeable today. I do appreciate your expertise even when I don’t agree with it.)
Frank,
I’m not trying to defend any specific results. My point is only that generic arguments cannot tell whether the given uncertainty ranges are appropriate or not. They are not based on objective arguments that are simply right or wrong but on subjective estimates, and it’s not possible to avoid that situation.
Fully objective arguments are very weak in providing limits for climate sensitivity and other parameters of interest. Almost all historical data has had a chance of affecting model building or tuning. Therefore none of that data satisfies fully the requirements of independence that formal validity of testing would require. In spite of that there’s a lot of information in the data. Therefore dismissing all that data is not a reasonable alternative. We are left with the situation that there’s a lot of data, that data is sparse (i.e., it tells about many things but leaves gaps all around), and the data has already affected model building and tuning. Therefore formal rules cannot tell what the value of the data is as evidence on the correctness of the models or parameter estimates.
In brief: We have a lot of evidence, but we cannot determine objectively its evidential value.
The practical significance of knowing the climate sensitivity means that the scientists are expected to tell their best estimate, whether it can be justified formally or not, and that’s what they have done. Criticizing the values they have given must be done by presenting specific arguments for some different estimates. That must be done based on comparable expertise in the issues. That’s the only proper way of criticizing subjective estimates.
Pekka: Above you wrote: “The GCMs can certainly produce many results known to be essentially correct. One example is seasonal variability. The models produce regionally reasonable summer and winter averages. They pass also a very large number of other similar tests.”
Do you have any references on seasonal variability? (I’ve seen the maps in AR4.) Intuitively, it seems like models with a climate sensitivity that is too high would produce too much seasonal warming in response to seasonal forcing. I suspect seasonal warming of the oceans is limited by the heat capacity of the mixed layer and a model’s ability to represent heat flux into and out of the mixed layer.
The survival of the fittest. Models as evolutionary organisms. It can give a creepy feeling of robots gaining power. Can science evolve into human alienation and social control? These are my nightmares. Perhaps there is some reality in them. I have heard scientists complain about the models taking over, that the models are seen as the most important thing. At least I think it can be a democratic problem when the process toward knowledge lies in the dark. So there is a task to present the ideas behind it in an understandable way. I think this discussion and the SoD site are a good contribution.
On the topic of ensembles of models, I’ve recently been reading a bunch of papers to try and understand this better.
I’m travelling a lot and have a lot less time than a few months ago so it’s difficult to get focused and write a decent article but I will give it a try soon.
In brief, there is a strong and sensible background to ensemble forecasting in NWP (numerical weather prediction). I’ll try and explain that and how it might or might not relate to justification for ensemble climate modeling.
For people interested, have a read of:
Roots of Ensemble Forecasting, John M Lewis (2005)
Representing Model Uncertainty in Weather and Climate Prediction, T.N. Palmer, G.J. Shutts, R. Hagedorn, F.J. Doblas-Reyes, T. Jung, and M. Leutbecher (2005)
The role of initial uncertainties in prediction, Edward Epstein (1969)
Stochastic dynamic prediction, Edward Epstein (1969)
Stochastic climate models, K Hasselmann (1976)
Not sure which are freely available. Search in Google Scholar and if anyone wants one of these papers that isn’t freely available, email me at scienceofdoom – you know what goes here- gmail.com.
I have read this explanation: https://www.carbonbrief.org/qa-how-do-climate-models-work
And I have some questions regarding the test runs.
They test the model with conditions from 1850 or so and keep the forcings constant (GHGs) to see if it drifts away.
First of all, is that really necessary? Shouldn’t it be stable by design?
Second, why don’t they check it with a GHG content more like now? Would it also be stable for these conditions if it is stable with the older conditions?
Third, with that testing they are guaranteed that GHGs will give a signal. It is a bit like a circular argument.
Svend asks: “They test the model with conditions from 1850 or so and keep the forcings constant (GHG’s) to see if it drifts away. First of all, is that really necessary, shouldn’t it be stable by design?”
Someone who understands computational fluid dynamics and turbulent mixing probably can give Svend a better answer than I can. But I’ll give it a try.
AOGCMs can’t be “designed” to give a stable temperature. Early AOGCMs needed “flux adjustments” (fudge factors) to produce realistic stable temperature. Today, clouds in AOGCMs are tuned to produce the absorbed SWR and emitted OLR we observe from space. However, that still isn’t enough to guarantee a stable temperature.
AOGCMs are based on grid cells. The kinetic energy contained in large flows that cover many grid cells, such as the Gulf Stream, can be accurately modeled. However, the kinetic energy in vortices within a single grid cell on the edge of the Gulf Stream can’t be resolved and therefore must be represented by a parameterization. The kinetic energy of large-scale flow dissipates into smaller-scale turbulence. IF I UNDERSTAND CORRECTLY, this means that climate models may not obey the law of conservation of energy. Therefore GMST may not be stable under pre-industrial conditions, even when there is no radiative imbalance at the TOA.
However, climate can also drift because the deep ocean temperature hasn’t reached a steady state with respect to surface temperature. The thermohaline circulation takes about a millennium to circulate water through the deep ocean, so even a 500-year pre-industrial control doesn’t produce a steady state. Normally the ocean and atmosphere portions of a GCM are “spun up” separately with large time steps for the ocean portion of the model, so that many millennia of ocean circulation can be simulated. Then the two are combined and “spun up” together (for a century? long enough for the atmosphere to equilibrate with the upper ocean, but not the deeper ocean?)
Consequently, running a long (500 year) control run under pre-industrial conditions (“PI control”) is an essential final step in characterizing an AOGCM after tuning. If a model shows a stable 0.2 K/century drift, that 0.2 K/century MAY be subtracted from the output of forcing experiments.
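A minimal sketch of that drift correction, using synthetic temperature series in place of real model output:

```python
import numpy as np

# Estimate a linear drift from a pre-industrial control run and subtract it
# from a forced run. Both series here are synthetic stand-ins.

rng = np.random.default_rng(3)
years = np.arange(500)
control = 0.002 * years + rng.normal(scale=0.1, size=years.size)   # 0.2 K/century drift
forced = 0.012 * years + rng.normal(scale=0.1, size=years.size)    # drift + forced signal

drift = np.polyfit(years, control, 1)[0]          # K per year
corrected = forced - drift * years

print(f"estimated drift:  {100 * drift:.2f} K/century")
print(f"corrected trend:  {100 * np.polyfit(years, corrected, 1)[0]:.2f} K/century")
```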
The paper linked below discusses temperature “drift” during pre-industrial control runs in the context of tuning climate models and discusses several possible causes in Section 2.3, including possible “non-conservation of energy”:
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2012MS000154
Pages 494-5 in this review article below on CMIP5 experiments also discusses “drift”:
https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-11-00094.1
Several posts here at SOD discuss the problems encountered when one attempts to model turbulent flow using grid cells.
If I have made any gross mistakes, please correct me.
Hi Frank. The references have not made me more confident in GCMs.
I would still want to know if control runs have been made with more present-day conditions. I am not sure a model that is stable under PI conditions would be stable under present-day conditions unless it has been tried.
Svend: At present, our climate system is not in equilibrium with incoming and outgoing radiation. The deep ocean is still warming. There is a net inward imbalance at the TOA that is causing the planet to warm. So if scientists properly “spun up” an AOGCM with the present atmosphere (so it had a negligible imbalance at the TOA), they wouldn’t expect the model’s climate to be exactly like it is today. Therefore, modelers spin up a pre-industrial equilibrium and then raise GHGs to simulate today’s transient warming and radiative imbalance. (Technically ECS is the temperature change from one equilibrium state to another, and I should refer to a steady-state rather than an equilibrium state.) This strategy makes some sense to me, but that doesn’t mean you are required to like it.
Of course, you could complain that most models spin up into a wide range of pre-industrial steady-state temperatures – which on average are too cold (IIRC). And those disagreements may be bigger than the difference between transient and steady state with today’s atmosphere.
IIRC, scientists also no longer believe that we were at a steady-state in 1860, which is the usual pre-industrial starting date for historic runs. IIRC, the latest energy balance paper from Lewis and Curry modifies ARGO ocean heat uptake to account for the heat uptake that was still occurring in 1860 due to the end of the LIA!
Svend wrote: “I am not sure a model that is stable under PI conditions would be stable under present-day conditions unless it has been tried.”
One standard test of a model is to abruptly double or (more recently) quadruple CO2 levels and watch the climate system approach a new steady state while monitoring the radiative imbalance at the TOA. These experiments run for 150 years. A plot of the TOA imbalance vs temperature is called a Gregory plot, and the y-intercept and extrapolated x-intercept yield F_2x and ECS. IIRC, the initial imbalance shrinks by about 90% over this period. This brings the planet closer to steady state than we are today: the increased radiative cooling to space driven by current warming (about 1 degC) has reduced the current radiative forcing of 2.5 W/m2 to a radiative imbalance of about 0.7 W/m2, about 70% of the way to steady state. So a 4X experiment spans the range from a 1XCO2 steady state to very near a 4XCO2 steady state.
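A sketch of that Gregory-plot regression, using synthetic ‘model output’ generated from assumed values of F_2x and ECS (an abrupt 2xCO2 case for simplicity):

```python
import numpy as np

# Gregory plot: regress TOA imbalance N against warming T during an abrupt
# CO2-increase run. The y-intercept estimates the forcing F_2x and the
# x-intercept (where N = 0) the equilibrium warming (ECS for a 2x run).

rng = np.random.default_rng(4)
F_true, ecs_true = 3.7, 3.0                            # assumed, for illustration
T = np.linspace(0.5, 2.8, 150)                         # warming over a 150-yr run
N = F_true * (1.0 - T / ecs_true) + rng.normal(scale=0.15, size=T.size)

slope, intercept = np.polyfit(T, N, 1)                 # N = intercept + slope * T
print(f"estimated F_2x: {intercept:.2f} W/m^2")        # y-intercept
print(f"estimated ECS:  {-intercept / slope:.2f} K")   # extrapolated x-intercept
```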
However, after a few millennia, both the Greenland and Antarctic ice sheets could have melted (decreasing surface albedo), as Hansen feared. Yet Hansen didn’t consider that within a few millennia most of the CO2 we have emitted will have been taken up by the deep ocean. IIRC, “at equilibrium”, the “airborne fraction” for emitted CO2 should drop below 20%!
Perhaps you worry too much about whether a climate model produces a stable simulation. As an alarmist might say: “the Anthropocene has begun and the consequences will destabilize our climate system for millennia”.
Svend wrote: “The references have not made me more confident in GCMs.”
My intent was not to make you feel more confident or less confident about models. Instead, I hope I provided you with some reliable information about how models actually work.
Do modelers themselves have confidence in their models? The IPCC projects that RCP 6.0 (roughly equivalent to a doubling of CO2) will LIKELY lead to 1.4-3.1 degC of warming around 2100. They are admitting a 30% chance of this range being wrong. And the bottom end of that likely range agrees with the Lewis and Curry’s central estimate for TCR using energy balance models. 21st century warming depends more on TCR than ECS.