In #5 we examined a statement in the 6th Assessment Report (AR6) and some comments from one of its main references, Imbers et al. (2014).
Imbers experimented with a couple of simple models of natural variability and drew some conclusions about attribution studies.
We’ll have a look at their models. I’ll try to explain them in simple terms, along with some technical details.
Autoregressive or AR(1) Model
One model for natural variability they looked at goes by the name of first-order autoregressive or AR(1). In principle it’s pretty simple.
Let’s suppose the temperature tomorrow in London was random. Obviously, it wouldn’t be 1000°C. It wouldn’t be 100°C. There’s a range that you expect.
But if it were random, there would be no correlation between yesterday’s temperature and today’s. Like two spins of a roulette wheel or two dice rolls. The past doesn’t influence the present or the future.
We know from personal experience, and we can also see it in climate records, that the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.
If the temperature yesterday was 15°C, you expect that today it will be closer to 15°C than to the entire range of temperatures in London for this month for the past 50 years.
Essentially, we know that there is some kind of persistence of temperatures (and other climate variables). Yesterday influences today.
AR(1) is a simple model of random variation but includes persistence. It’s possibly the simplest model of random noise with persistence.
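To make the idea concrete, here is a minimal sketch of an AR(1) process in Python. The model is x[t] = φ·x[t−1] + ε[t], where ε is random noise and φ (between −1 and 1) sets the strength of persistence. The function name, the choice of φ values, and the noise parameters below are illustrative assumptions, not anything from Imbers et al.:

```python
import numpy as np

def ar1(phi, sigma, n, seed=0):
    """Generate n samples of an AR(1) process:
    x[t] = phi * x[t-1] + eps[t], with eps ~ N(0, sigma^2).
    phi controls persistence: 0 means no memory (white noise)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def lag1_corr(x):
    """Correlation between each value and the one before it."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# phi = 0: pure randomness -- yesterday tells you nothing about today
white = ar1(phi=0.0, sigma=1.0, n=5000)

# phi = 0.8: strong persistence -- today stays close to yesterday
persistent = ar1(phi=0.8, sigma=1.0, n=5000)
```

If you compute `lag1_corr` on each series, the white-noise series gives a lag-1 correlation near zero, while the persistent series gives a correlation near 0.8, which is the "yesterday influences today" behaviour described above.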
To see the whole article, visit the new Science of Doom Substack page, and please consider subscribing for notifications of new articles.