Archive for May, 2023

In #1 we looked at natural variability – how the climate changes over decades and centuries before we started burning fossil fuels in large quantities. So clearly many past trends were not caused by burning fossil fuels. We need some method to attribute (or not) a recent trend to human activity. This is where climate models come in.

In #3 we looked at an example of a climate model producing the right value of 20th century temperature trends for the wrong reason.

The Art and Science of Climate Model Tuning is an excellent paper by Frederic Hourdin and a number of co-authors. It got a brief mention in Models, On – and Off – the Catwalk – Part Six – Tuning and Seasonal Contrasts. One of the co-authors is Thorsten Mauritsen who was the lead author of Tuning the Climate of a Global Model, looked at in another old article, and another co-author is Jean-Christophe Golaz, lead author of the paper we looked at in #3.

They explain that there are lots of choices to make when building a model:

Each parameterization relies on a set of internal equations and often depends on parameters, the values of which are often poorly constrained by observations. The process of estimating these uncertain parameters in order to reduce the mismatch between specific observations and model results is usually referred to as tuning in the climate modeling community.

Anyone who has dealt with mathematical modeling understands this – some parameters are unknown, or they might have a broad range of plausible values.
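To make that concrete, here's a minimal sketch of tuning in this sense. Everything in it is invented for illustration – the toy model, the parameter alpha, and the synthetic "observations" have nothing to do with any real GCM – but the loop is the one the paper describes: adjust a poorly constrained parameter within its plausible range to reduce the mismatch between model output and observations.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy illustration of "tuning" (not any real GCM code). Suppose a
# model's simulated warming depends on an uncertain aerosol-related
# parameter "alpha", which observations constrain only to a broad
# plausible range of 0 to 1.

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

def toy_model(alpha):
    """Hypothetical model: made-up greenhouse warming, offset by a
    made-up aerosol cooling term scaled by alpha."""
    forcing = 0.008 * (years - 1900)
    aerosol_cooling = alpha * 0.004 * (years - 1900)
    return forcing - aerosol_cooling

# Invented "observations": the toy model with alpha = 0.5 plus noise.
observations = toy_model(0.5) + rng.normal(0, 0.05, years.size)

def mismatch(alpha):
    """Mean squared error between model output and observations."""
    return np.mean((toy_model(alpha) - observations) ** 2)

# "Tuning": choose alpha within its plausible range to minimize the
# mismatch between the model and the observations.
result = minimize_scalar(mismatch, bounds=(0.0, 1.0), method="bounded")
print(f"tuned alpha = {result.x:.3f}")  # recovers a value near 0.5
```

In a real GCM the model is far too expensive to run inside an optimizer loop, there are many parameters and many target metrics, and much of the adjustment rests on expert judgment rather than a formal minimization – which is part of what makes the topic worth discussing openly.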

An interesting comment:

There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.

The authors are advocating for more transparency on this topic.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

In #1 we took a brief look at Natural Variation – climate varies from decade to decade, century to century. In #2 we took a brief look at attribution from “simple” models and from climate models (GCMs).

Here’s an example of the problem of “what do we make of climate models?”

I wrote about it on the original blog – Opinions and Perspectives – 6 – Climate Models, Consensus Myths and Fudge Factors. I noticed the paper I used in that article came up in Hourdin et al 2017, which in turn was referenced from the most recent IPCC report, AR6.

Here's the idea from the 2013 paper by Golaz and co-authors.

They ran a climate model over the 20th century – this is a standard thing to do to test a climate model on lots of different metrics. How well does the model reproduce our observations of trends?

In this case it was temperature change from 1900 to present.

In one version of the model they used a parameter value (related to aerosols and clouds) that is traditional but wrong; in another they used the best value based on recent studies; and in a third they used another alternate value.
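To make the setup concrete, here's a hypothetical sketch of that kind of comparison. The model, the parameter values, and all the numbers below are invented stand-ins – in the real study each "run" is a complete 20th-century simulation with a full GCM – but it shows the basic shape: the same model with different settings of one uncertain parameter, each producing a simulated 1900–2000 temperature trend to compare against observations.

```python
import numpy as np

# Hypothetical sketch only: a toy function plays the role of a full
# GCM run, and the parameter values below are invented.

years = np.arange(1900, 2001)

def simulated_temperature(aerosol_param):
    """Toy surrogate for a climate model run: warming from a made-up
    greenhouse term, offset by aerosol cooling scaled by the parameter."""
    greenhouse = 0.009 * (years - 1900)
    aerosol_cooling = -aerosol_param * 0.003 * (years - 1900)
    return greenhouse + aerosol_cooling

# Three versions of the "model", differing only in one parameter value.
versions = {
    "traditional value": 0.3,
    "best value from recent studies": 1.0,
    "alternate value": 1.7,
}

for label, param in versions.items():
    temps = simulated_temperature(param)
    # Linear 1900-2000 trend via least squares, converted to K/century.
    trend_per_century = np.polyfit(years, temps, 1)[0] * 100
    print(f"{label}: {trend_per_century:+.2f} K/century")
```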

What happens?

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »