Archive for July, 2023

In #7 we looked at Huybers & Curry 2006 and Pelletier 1998 and saw “power law relationships” when we look at past climate variation over longer timescales.

Pelletier also wrote a very similar paper in 1997 that I went through, and in searching for who cited it I came across “The Structure of Climate Variability Across Scales”, a review paper from Christian Franzke and co-authors from 2020:

To summarize, many climatological time series exhibit a power law behavior in their amplitudes or their autocorrelations or both. This behavior is an imprint of scaling, which is a fundamental property of many physical and biological systems and has also been discovered in financial and socioeconomic data as well as in information networks. While the power law has no preferred scale, the exponential function, also ubiquitous in physical and biological systems, does have a preferred scale, namely, the e-folding scale, that is, the amount by which its magnitude has decayed by a factor of e. For example, the average height of humans is a good predictor for the height of the next person you meet as there are no humans that are 10 times larger or smaller than you.

However, the average wealth of people is not a good predictor for the wealth of the next person you meet as there are people who can be more than a 1,000 times richer or poorer than you are. Hence, the height of people is well described by a Gaussian distribution, while the wealth of people follows a power law.
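The contrast in the quote can be sketched numerically. Below is a minimal illustration (my own, not from the paper) using NumPy: Gaussian samples stay within a few standard deviations of the mean, while Pareto (power-law) samples span orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Gaussian samples (like human heights): values cluster near the mean.
heights = rng.normal(loc=170, scale=10, size=n)

# Pareto (power-law) samples (like wealth): occasional extreme values.
wealth = rng.pareto(a=1.5, size=n) + 1

# Ratio of the largest sample to the mean:
print(heights.max() / heights.mean())  # close to 1: no extreme outliers
print(wealth.max() / wealth.mean())    # orders of magnitude larger
```

The tail exponent `a=1.5` is an arbitrary choice for illustration; the qualitative picture holds for any heavy-tailed exponent.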

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

Read Full Post »

In #6 we looked in a bit more detail at Imbers and co-authors from 2014. Natural variability is a big topic.

In this article we’ll look at papers that try to assess natural variability over long timescales – Peter Huybers & William Curry from 2006 who also cited an interesting paper from Jon Pelletier from 1998.

Here’s Jon Pelletier:

Understanding more about the natural variability of climate is essential for an accurate assessment of the human influence on climate. For example, an accurate model of natural variability would enable climatologists to make quantitative estimates of the likelihood that the observed warming trend is anthropogenically induced.

He notes another paper with this comment (explained in simpler terms below):

However, their stochastic model for the natural variability of climate was an autoregressive model which had an exponential autocorrelation dependence on time lag. We present evidence for a power-law autocorrelation function, implying larger low-frequency fluctuations than those produced by an autoregressive stochastic model. This evidence suggests that the statistical likelihood of the observed warming trend being larger than that expected from natural variations of the climate system must be reexamined.

In plain language, the paper he refers to used the simplest model of random noise with persistence, the AR(1) model we looked at in the last article.

He is saying that this simple model is “too kind” when trying to weigh up anthropogenic vs natural variations in temperature.
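To see why the choice matters, here is a small sketch (my own illustration, with arbitrary parameter values) comparing the two autocorrelation shapes: an AR(1) model's correlation decays exponentially with lag, while a power law decays far more slowly, leaving much more correlation at long lags and hence larger low-frequency fluctuations.

```python
import numpy as np

lags = np.arange(1, 101)

# AR(1) autocorrelation decays exponentially: rho(k) = phi**k
phi = 0.9
exp_acf = phi ** lags

# Power-law autocorrelation decays slowly: rho(k) = k**(-beta)
beta = 0.5
pow_acf = lags ** (-beta)

# At lag 100 the exponential has all but vanished; the power law has not.
print(exp_acf[-1])  # 0.9**100, about 2.7e-5
print(pow_acf[-1])  # 100**(-0.5) = 0.1
```

This "long memory" is why a power-law model of natural variability makes large multi-decade excursions more likely than an AR(1) model does.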

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

Read Full Post »

In #5 we examined a statement in the 6th Assessment Report (AR6) and some comments from their main reference, Imbers and co-authors from 2014.

Imbers experimented with a couple of simple models of natural variability and drew some conclusions about attribution studies.

We’ll have a look at their models. I’ll try to explain them in simple terms, along with some technical details.


Autoregressive or AR(1) Model

One model for natural variability they looked at goes by the name of first-order autoregressive or AR(1). In principle it’s pretty simple.

Let’s suppose the temperature tomorrow in London was random. Obviously, it wouldn’t be 1000°C. It wouldn’t be 100°C. There’s a range that you expect.

But if it were random, there would be no correlation between yesterday’s temperature and today’s. Like two spins of a roulette wheel or two dice rolls. The past doesn’t influence the present or the future.

We know from personal experience, and we can also see it in climate records, that the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.

If the temperature yesterday was 15°C, you expect that today it will be closer to 15°C than to the entire range of temperatures in London for this month for the past 50 years.

Essentially, we know that there is some kind of persistence of temperatures (and other climate variables). Yesterday influences today.

AR(1) is a simple model of random variation but includes persistence. It’s possibly the simplest model of random noise with persistence.
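The description above can be written in a few lines of code. This is a generic AR(1) sketch (the persistence value is an arbitrary choice for illustration, not the parameter Imbers and co-authors fitted): each day's anomaly is a fraction of yesterday's anomaly plus a fresh random shock.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1): today's anomaly = phi * yesterday's anomaly + a new random shock.
phi = 0.8   # persistence: 0 would be pure white noise, closer to 1 means more memory
n = 1000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# The lag-1 autocorrelation of the simulated series recovers roughly phi.
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(r1, 2))
```

With `phi = 0`, consecutive values are uncorrelated, like the roulette spins mentioned earlier; as `phi` approaches 1, yesterday dominates today.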

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

Read Full Post »