What controls the frequency of tropical cyclones?

Here’s an interesting review paper from 2021 on one aspect of tropical cyclone research, by a cast of luminaries in the field – Tropical Cyclone Frequency by Adam Sobel and co-authors.

Plain language summaries are a great idea and this paper has one:

In this paper, the authors review the state of the science regarding what is known about tropical cyclone frequency. The state of the science is not great. There are around 80 tropical cyclones in a typical year, and we do not know why it is this number and not a much larger or smaller one.

We also do not know much about whether this number should increase or decrease as the planet warms – thus far, it has not done much of either on the global scale, though there are larger changes in some particular regions.

No existing theory predicts tropical cyclone frequency.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

One Look at the Effect of Higher Resolution Models

In #15 we looked at one issue in modeling tropical cyclones (TCs). Current climate models have biases in their simulation of ocean temperature. When we run simulations with and without these biases, there are large changes in the total energy of TCs.

In this article we’ll look at another issue – model resolution. Because TCs are small in scale and evolve quickly, climate models at current resolution struggle to simulate them.

It’s a well-known problem in climate modeling, and not at all a surprise to anyone who understands the basics of mathematical modeling.

This is another paper referenced by the 6th Assessment Report (AR6): “Impact of Model Resolution on Tropical Cyclone Simulation Using the HighResMIP–PRIMAVERA Multimodel Ensemble”, by Malcolm Roberts and co-authors from 2020.

The key science questions addressed in this study are the following:

1) Are there robust impacts of higher resolution on explicit tropical cyclone simulation across the multi-model ensemble using different tracking algorithms?

2) What are the possible processes responsible for any changes with resolution?

3) How many ensemble members are needed to assess the skill in the interannual variability of tropical cyclones?

In plain English:

  • They review the results of a number of climate models, each run at its standard resolution and then at a higher resolution
  • When they find a difference, what is the physics responsible? What’s missing from the lower-resolution model that “kicks in” with the higher-resolution model?
  • How many runs of the same model with slightly different initial conditions are needed before we start to see the year-to-year variability that we see in reality? (See the sketch below.)
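
To make that third question concrete, here’s a minimal toy sketch, not taken from Roberts et al. and with entirely made-up numbers: each simulated ensemble member shares a common year-to-year signal with the “observations” but adds its own internal noise, and we watch how the correlation of the ensemble mean with the observations changes as members are added.

```python
import numpy as np

rng = np.random.default_rng(0)

n_years = 30      # length of the hypothetical record
signal_sd = 1.0   # year-to-year signal shared by "observations" and every run (assumed)
noise_sd = 2.0    # internal variability unique to each run (assumed to be large)

# A common interannual signal plus independent noise for the "observations"
signal = rng.normal(0.0, signal_sd, n_years)
observations = signal + rng.normal(0.0, noise_sd, n_years)

def ensemble_mean_skill(n_members):
    """Correlation of the ensemble-mean annual series with the 'observations'."""
    members = signal + rng.normal(0.0, noise_sd, (n_members, n_years))
    ensemble_mean = members.mean(axis=0)
    return np.corrcoef(ensemble_mean, observations)[0, 1]

for n in (1, 3, 5, 10, 20):
    print(f"{n:2d} members: correlation with observations ~ {ensemble_mean_skill(n):.2f}")
```

With these assumed noise levels the correlation tends to rise as members are added, though it can never reach 1 because the “observations” contain their own noise.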

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

In #1-#6 of the “Extreme Weather” series we looked at trends in Tropical Cyclones (TCs) from the perspective of chapter 11 of the 6th assessment report of the IPCC (AR6). The six parts were summarized here.

The report takes each type of extreme weather in turn, reviews recent trends, and then covers attribution and future projections.

Both attribution and future projections rely primarily on climate models. We looked at some of the ideas of attribution in the “Natural Variability, Attribution and Climate Models” series.

AR6 has a section, Model Evaluation (p. 1587), before it moves on to Detection and Attribution and Event Attribution.

How good are models at reproducing Tropical Cyclones?

Accurate projections of future TC activity have two principal requirements: accurate representation of changes in the relevant environmental factors (e.g., sea surface temperatures) that can affect TC activity, and accurate representation of actual TC activity in given environmental conditions.

Suppose in the future we had a model that was excellent at reproducing tropical cyclones whenever a variety of climate metrics, such as sea surface temperatures, were accurately reproduced. But if the climate model didn’t reproduce those metrics reliably, we still wouldn’t get a reliable answer about future trends in tropical cyclones.

As a result, tropical cyclones are a major modeling challenge.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

Overview of Chapter 3 of the IPCC 6th Assessment Report

The periodic IPCC assessment reports are generally good value for covering the state of climate science. I’m talking about “Working Group 1 – The Physical Science Basis”, which in the case of the 6th assessment report (AR6) is 12 chapters.

They are quite boring compared with news headlines. Boring and dull is good if you want to find out about real climate.

If you prefer reading about the end of days then you’ll need to stick to press releases.

Here’s a quick summary of Chapter 3 – “Human Influence on the Climate System”. Chapter 3 naturally follows on from Chapter 2 – “Changing State of the Climate System”.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

In #9 we looked at an interesting paper (van Oldenborgh and co-authors from 2013) assessing climate models. They concluded that climate models were over-confident in projecting the future, at least from one perspective which wouldn’t be obvious to a newcomer to climate.

Their approach was to assess the spatial variability of climate models’ simulations and compare it with reality. If a model gets the spatial variation reasonably close, then maybe we can rely on its assessment of how the climate might change over time.

Why is that?

One idea behind this thinking is to consider a coin toss:

  • If you flip 100 coins at the same time you expect around 50 heads and 50 tails. Spatial.
  • If you flip one coin 100 times you expect 50 heads and 50 tails. Time.

There’s no strong reason why this parallel between the spatial and time dimensions should hold for climate models, but climate is full of challenging problems where we have limited visibility. We could give up, but we only have the one planet, so all ideas are welcome.
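
To make the coin-toss analogy concrete, here’s a trivial sketch of the two ways of sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Spatial" sampling: flip 100 coins at the same time and count the heads
heads_spatial = rng.integers(0, 2, size=100).sum()

# "Time" sampling: flip one coin 100 times in a row and count the heads
heads_time = rng.integers(0, 2, size=100).sum()

print(f"100 coins at once : {heads_spatial} heads")
print(f"1 coin, 100 flips : {heads_time} heads")
# Both land near 50: for independent coin flips, statistics collected across
# "space" and statistics collected across "time" are interchangeable.
```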

In the paper they touched on ideas that often come up in modeling studies:

  • assessing natural variability by doing lots of runs of the same climate model and seeing how they vary
  • comparing the results of different climate models

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

In #1 we saw an example of natural variability in floods in Europe over 500 years. Clearly, the large ups and downs prior to the 1900s can’t be explained by “climate change”, i.e. by the burning of fossil fuels.

If you learnt about climate change via the media then you’ve probably heard very little about natural variability, but it’s at the top of climate scientists’ minds when they look at the past, even if it doesn’t get mentioned much in press releases.

Here’s another example, this time of droughts in the western USA. This is a reconstruction of the pre-instrument period.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

Originally, I thought we would have a brief look at the subject of attribution before we went back to the IPCC 6th Assessment Report (AR6). However, it’s a big subject.

In #8, and the few articles preceding, we saw various attempts to characterize “natural variability” from the few records we have. It’s a challenge. I recommend reading the conclusion of #8.

In this article we’ll look at a paper by G J van Oldenborgh and colleagues from 2013. They introduce the concept of assessing natural variability using climate models, but that’s not the principal idea of the paper. However, it’s interesting to see what they say.

Their basic idea – we can compare weather models against reality because we make repeated weather forecasts and then can see whether we were overconfident or underconfident.

For example, one time we said there was a 10% chance of a severe storm. The storm didn’t happen. That doesn’t mean we were wrong. It was a probability. But if we have 100 examples of this 10% chance we can see – did we get approximately 10 instances of severe storms? If we got 0-3 maybe we were wildly overconfident. If we got 30 maybe we were very underconfident.
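
As a rough sketch of that bookkeeping (the outcomes here are simulated stand-ins, not real forecast verification data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical verification: 100 occasions on which the forecast said
# "10% chance of a severe storm". In a real check the outcomes would be
# the observed yes/no results; here they are simulated.
forecast_prob = 0.10
n_forecasts = 100
storm_occurred = rng.random(n_forecasts) < forecast_prob

n_storms = storm_occurred.sum()
print(f"forecast probability : {forecast_prob:.0%}")
print(f"observed frequency   : {n_storms / n_forecasts:.0%} "
      f"({n_storms} storms in {n_forecasts} forecasts)")

# Around 10 storms: the 10% forecasts were about right.
# 0-3 storms: wildly overconfident (in the article's terms).
# Around 30 storms: very underconfident.
```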

Now we can’t compare climate models’ outputs for the future against observations, because the future hasn’t happened yet – there’s only one planet, and climate forecasts cover decades to a century, not one week.

We can, however, compare the spatial variation of models with reality.
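
As an illustration of the general idea (not the specific measure used by van Oldenborgh and colleagues), here’s a sketch comparing the spatial variation of a model field with an observed field using a simple pattern correlation; both fields below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical gridded fields on the same latitude-longitude grid.
# A real comparison would use observed and modelled climate fields
# (e.g. a long-term mean of temperature or rainfall).
observed_field = rng.normal(size=(36, 72))
model_field = 0.8 * observed_field + rng.normal(scale=0.5, size=(36, 72))

# One simple measure of agreement: the correlation between the model's
# spatial pattern and the observed spatial pattern.
pattern_corr = np.corrcoef(model_field.ravel(), observed_field.ravel())[0, 1]
print(f"spatial pattern correlation ~ {pattern_corr:.2f}")
```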

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

In #7 we looked at Huybers & Curry 2006 and Pelletier 1998 and saw “power law relationships” in past climate variation over longer timescales.

Pelletier also wrote a very similar paper in 1997 that I went through, and in searching for who cited it I came across “The Structure of Climate Variability Across Scales”, a review paper from Christian Franzke and co-authors from 2020:

To summarize, many climatological time series exhibit a power law behavior in their amplitudes or their autocorrelations or both. This behavior is an imprint of scaling, which is a fundamental property of many physical and biological systems and has also been discovered in financial and socioeconomic data as well as in information networks. While the power law has no preferred scale, the exponential function, also ubiquitous in physical and biological systems, does have a preferred scale, namely, the e-folding scale, that is, the amount by which its magnitude has decayed by a factor of e. For example, the average height of humans is a good predictor for the height of the next person you meet as there are no humans that are 10 times larger or smaller than you.

However, the average wealth of people is not a good predictor for the wealth of the next person you meet as there are people who can be more than a 1,000 times richer or poorer than you are. Hence, the height of people is well described by a Gaussian distribution, while the wealth of people follows a power law.
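
A quick numerical illustration of that contrast, with made-up parameter values (Gaussian “heights” versus Pareto, i.e. power-law, “wealth”):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Gaussian "heights": mean 170 cm, standard deviation 10 cm (illustrative values)
heights = rng.normal(170.0, 10.0, n)

# Heavy-tailed "wealth": a Pareto (power-law) distribution with no preferred scale
wealth = (rng.pareto(1.5, n) + 1.0) * 10_000.0   # arbitrary monetary units

print(f"tallest person / average height : {heights.max() / heights.mean():.2f}")
print(f"richest person / average wealth : {wealth.max() / wealth.mean():.0f}")
```

The height ratio stays close to 1, while the wealth ratio can be enormous: the signature of a distribution with no preferred scale.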

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

In #6 we looked in a bit more detail at Imbers and co-authors from 2014. Natural variability is a big topic.

In this article we’ll look at papers that try to assess natural variability over long timescales – Peter Huybers & William Curry from 2006, who also cited an interesting paper by Jon Pelletier from 1998.

Here’s Jon Pelletier:

Understanding more about the natural variability of climate is essential for an accurate assessment of the human influence on climate. For example, an accurate model of natural variability would enable climatologists to make quantitative estimates of the likelihood that the observed warming trend is anthropogenically induced.

He notes another paper with this comment (explained in simpler terms below):

However, their stochastic model for the natural variability of climate was an autoregressive model which had an exponential autocorrelation dependence on time lag. We present evidence for a power-law autocorrelation function, implying larger low-frequency fluctuations than those produced by an autoregressive stochastic model. This evidence suggests that the statistical likelihood of the observed warming trend being larger than that expected from natural variations of the climate system must be reexamined.

In plain language, the paper he refers to used the simplest model of random noise with persistence, the AR(1) model we looked at in the last article.

He is saying that this simple model is “too kind” when trying to weigh up anthropogenic vs natural variations in temperature.
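
To see the difference in plainer terms, here’s a small sketch comparing how the two autocorrelation shapes fall off with lag; the values of phi and alpha are assumed for illustration, not taken from Pelletier.

```python
import numpy as np

lags = np.array([1, 10, 100, 1000])

# AR(1): autocorrelation decays exponentially with lag, r(k) = phi**k
phi = 0.9            # assumed lag-1 correlation, purely illustrative
ar1_acf = phi ** lags

# Power law: r(k) = k**(-alpha) decays far more slowly at long lags
alpha = 0.3          # assumed exponent, purely illustrative
power_acf = lags.astype(float) ** (-alpha)

for k, a, p in zip(lags, ar1_acf, power_acf):
    print(f"lag {k:4d} :  AR(1) = {a:.1e}   power law = {p:.1e}")
# At long lags the AR(1) correlation is essentially zero, while the power law
# retains memory, which is why it implies larger low-frequency fluctuations.
```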

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.

In #5 we examined a statement in the 6th Assessment Report (AR6) and some comments from their main reference, Imbers and co-authors from 2014.

Imbers experimented with a couple of simple models of natural variability and drew some conclusions about attribution studies.

We’ll have a look at their models. I’ll try to explain them in simple terms, along with some technical details.

Autoregressive or AR(1) Model

One model for natural variability they looked at goes by the name of first-order autoregressive or AR(1). In principle it’s pretty simple.

Let’s suppose the temperature tomorrow in London was random. Obviously, it wouldn’t be 1000°C. It wouldn’t be 100°C. There’s a range that you expect.

But if it were random, there would be no correlation between yesterday’s temperature and today’s. Like two spins of a roulette wheel or two dice rolls. The past doesn’t influence the present or the future.

We know from personal experience, and we can also see it in climate records, that the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.

If the temperature yesterday was 15°C, you expect today’s temperature to be close to 15°C, rather than drawn at random from the full range of temperatures seen in London in this month over the past 50 years.

Essentially, we know that there is some kind of persistence of temperatures (and other climate variables). Yesterday influences today.

AR(1) is a simple model of random variation but includes persistence. It’s possibly the simplest model of random noise with persistence.
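
Here’s a minimal sketch of an AR(1) series in code, with made-up values for the persistence and noise parameters; nothing here comes from Imbers and co-authors.

```python
import numpy as np

rng = np.random.default_rng(0)

phi = 0.7        # persistence: how strongly yesterday carries into today (assumed)
sigma = 2.0      # size of the fresh random shock each day, in deg C (assumed)
n_days = 10_000

# AR(1): today's temperature anomaly = phi * yesterday's anomaly + new random shock
anomaly = np.zeros(n_days)
for t in range(1, n_days):
    anomaly[t] = phi * anomaly[t - 1] + rng.normal(0.0, sigma)

# The simulated series recovers a lag-1 correlation close to phi
lag1 = np.corrcoef(anomaly[:-1], anomaly[1:])[0, 1]
print(f"lag-1 autocorrelation ~ {lag1:.2f} (phi = {phi})")
```

Larger values of phi (closer to 1) mean stronger persistence; phi = 0 recovers pure uncorrelated noise.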

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications on new articles.