In #1 we looked at natural variability – how the climate changes over decades and centuries before we started burning fossil fuels in large quantities. So clearly many past trends were not caused by burning fossil fuels. We need some method to attribute (or not) a recent trend to human activity. This is where climate models come in.
In #3 we looked at an example of a climate model producing the right value of 20th century temperature trends for the wrong reason.
The Art and Science of Climate Model Tuning is an excellent paper by Frederic Hourdin and a number of co-authors. It got a brief mention in Models, On – and Off – the Catwalk – Part Six – Tuning and Seasonal Contrasts. One of the co-authors is Thorsten Mauritsen who was the lead author of Tuning the Climate of a Global Model, looked at in another old article, and another co-author is Jean-Christophe Golaz, lead author of the paper we looked at in #3.
They explain that there are lots of choices to make when building a model:
Each parameterization relies on a set of internal equations and often depends on parameters, the values of which are often poorly constrained by observations. The process of estimating these uncertain parameters in order to reduce the mismatch between specific observations and model results is usually referred to as tuning in the climate modeling community.
Anyone who has dealt with mathematical modeling understands this – some parameters are unknown, or they might have a broad range of plausible values.
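To make the idea concrete, here is a minimal sketch of what "tuning" means in this sense. It is not the authors' method or any real climate model – just a toy zero-dimensional energy-balance relation, T = F/λ, where the feedback parameter λ is poorly constrained, and we pick the value that minimizes the mismatch with a hypothetical observation. All numbers are illustrative assumptions.

```python
# Illustrative sketch only: a toy energy-balance relation with one
# poorly constrained parameter. Not any real climate model's code.

def equilibrium_warming(forcing, lam):
    """Toy equilibrium temperature response: T = F / lambda (K)."""
    return forcing / lam

def tune_lambda(forcing, observed_warming, candidates):
    """Brute-force tuning: pick the candidate feedback parameter that
    minimizes the mismatch between the model and the observation."""
    return min(candidates,
               key=lambda lam: abs(equilibrium_warming(forcing, lam)
                                   - observed_warming))

forcing = 3.7    # hypothetical forcing (W/m^2), roughly a CO2 doubling
observed = 3.0   # hypothetical "observed" warming (K)
candidates = [0.8, 1.0, 1.2, 1.4, 1.6]  # plausible range for lambda (W/m^2/K)

best = tune_lambda(forcing, observed, candidates)
print(best)  # -> 1.2, the value giving the smallest mismatch
```

A real model has dozens of such parameters interacting nonlinearly, which is why tuning choices matter and why the same observed trend can be matched by quite different parameter combinations.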
An interesting comment:
There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.
The authors are advocating for more transparency on this topic.
To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.