Archive for the ‘Climate Models’ Category

What controls the frequency of tropical cyclones?

Here’s an interesting review paper from 2021 on one aspect of tropical cyclone research, by a cast of luminaries in the field – Tropical Cyclone Frequency by Adam Sobel and co-authors.

Plain language summaries are a great idea and this paper has one:

In this paper, the authors review the state of the science regarding what is known about tropical cyclone frequency. The state of the science is not great. There are around 80 tropical cyclones in a typical year, and we do not know why it is this number and not a much larger or smaller one.

We also do not know much about whether this number should increase or decrease as the planet warms – thus far, it has not done much of either on the global scale, though there are larger changes in some particular regions.

No existing theory predicts tropical cyclone frequency.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

One Look at the Effect of Higher Resolution Models

In #15 we looked at one issue in modeling tropical cyclones (TCs). Current climate models have biases in their simulation of ocean temperatures, and when simulations are run with and without these biases there are large changes in the total energy of TCs.

In this article we’ll look at another issue: model resolution. Because TCs are fast-moving and small in scale, climate models at current resolutions struggle to simulate them.

It’s a well-known problem in climate modeling, and not at all a surprise to anyone who understands the basics of mathematical modeling.

This is another paper referenced by the 6th Assessment Report (AR6): “Impact of Model Resolution on Tropical Cyclone Simulation Using the HighResMIP–PRIMAVERA Multimodel Ensemble”, by Malcolm Roberts and co-authors from 2020.

The key science questions addressed in this study are the following:

1) Are there robust impacts of higher resolution on explicit tropical cyclone simulation across the multi-model ensemble using different tracking algorithms?

2) What are the possible processes responsible for any changes with resolution?

3) How many ensemble members are needed to assess the skill in the interannual variability of tropical cyclones?

In plain English:

  • They review the results of a number of climate models, each run at its standard resolution and then at a higher resolution
  • When they find a difference, what is the physics responsible? What’s missing from the lower-resolution model that “kicks in” with the higher-resolution model?
  • How many runs of the same model, each with slightly different initial conditions, are needed before we start to see the year-to-year variability that we see in reality? A toy illustration follows this list.
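As a purely illustrative sketch of that last question (none of this comes from the paper; the common signal, noise levels and ensemble sizes are all invented), here is how the match between an ensemble mean and "observed" year-to-year variability tends to improve as members are added:

    import numpy as np

    rng = np.random.default_rng(0)

    n_years = 40
    signal = np.sin(np.linspace(0, 6 * np.pi, n_years))  # common interannual signal
    obs = signal + rng.normal(0, 1.0, n_years)            # "reality": signal plus internal variability

    # Each ensemble member shares the signal but has its own internal variability
    def ensemble_mean(n_members):
        members = signal + rng.normal(0, 1.0, (n_members, n_years))
        return members.mean(axis=0)

    for n in (1, 2, 5, 10, 25):
        r = np.corrcoef(ensemble_mean(n), obs)[0, 1]
        print(f"{n:2d} members: correlation with 'observed' interannual variability = {r:.2f}")

Averaging more members suppresses each member's own internal noise, so the ensemble mean tracks the shared year-to-year signal more closely.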

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

In #1-#6 of the “Extreme Weather” series we looked at trends in Tropical Cyclones (TCs) from the perspective of chapter 11 of the 6th assessment report of the IPCC (AR6). The six parts were summarized here.

The report breaks up each type of extreme weather, reviews recent trends and then covers attribution and future projections.

Both attribution and future projections rely primarily on climate models. We looked at some of the ideas of attribution in the “Natural Variability, Attribution and Climate Models” series.

AR6 has a section, “Model Evaluation” (p. 1587), before it moves into “Detection and Attribution” and “Event Attribution”.

How good are models at reproducing Tropical Cyclones?

Accurate projections of future TC activity have two principal requirements: accurate representation of changes in the relevant environmental factors (e.g., sea surface temperatures) that can affect TC activity, and accurate representation of actual TC activity in given environmental conditions.

Suppose in the future we had a model that was amazing at reproducing tropical cyclones whenever the relevant climate metrics were accurately reproduced. If the climate model didn’t reproduce those metrics reliably, we still wouldn’t get a reliable answer about future trends in tropical cyclones.

As a result, tropical cyclones are a major modeling challenge.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

Overview of Chapter 3 of the IPCC 6th Assessment Report

The periodic IPCC assessment reports are generally good value for covering the state of climate science. I’m talking about “Working Group 1 – The Physical Science Basis”, which in the case of the 6th assessment report (AR6) runs to 12 chapters.

They are quite boring compared with news headlines. Boring and dull is good if you want to find out about real climate.

If you prefer reading about the end of days then you’ll need to stick to press releases.

Here’s a quick summary of Chapter 3 – “Human Influence on the Climate System”. Chapter 3 naturally follows on from Chapter 2 – “Changing State of the Climate System”.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

In #9 we looked at an interesting paper (van Oldenborgh and co-authors from 2013) assessing climate models. They concluded that climate models were over-confident in projecting the future, at least from one perspective which wouldn’t be obvious to a newcomer to climate.

Their perspective was to assess the spatial variability of climate models’ simulations and compare it with reality. If a model gets the spatial variation reasonably close, then maybe we can rely on its assessment of how the climate might change over time.

Why is that?

One idea behind this thinking is to consider a coin toss:

  • If you flip 100 coins at the same time you expect around 50 heads and 50 tails. Spatial.
  • If you flip one coin 100 times you expect 50 heads and 50 tails. Time.

There’s no rigorous reason why this parallel between the spatial and time dimensions should hold for climate models, but climate is full of challenging problems where we have limited visibility. We could give up, but we have just the one planet, so all ideas are welcome.
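Here's a minimal sketch of the coin-toss analogy (purely illustrative, not from the paper):

    import numpy as np

    rng = np.random.default_rng(42)

    # "Spatial": 100 coins flipped once each
    spatial_heads = rng.integers(0, 2, size=100).sum()

    # "Temporal": one coin flipped 100 times
    temporal_heads = rng.integers(0, 2, size=100).sum()

    print(f"100 coins, one flip each: {spatial_heads} heads")
    print(f"One coin, 100 flips:      {temporal_heads} heads")

    # Both counts are draws from the same Binomial(100, 0.5) distribution,
    # so both cluster around 50 heads.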

In the paper they touched on ideas that often come up in modeling studies:

  • assessing natural variability by doing lots of runs of the same climate model and seeing how they vary
  • comparing the results of different climate models

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

Originally, I thought we would have a brief look at the subject of attribution before we went back to the IPCC 6th Assessment Report (AR6). However, it’s a big subject.

In #8, and the few articles preceding, we saw various attempts to characterize “natural variability” from the few records we have. It’s a challenge. I recommend reading the conclusion of #8.

In this article we’ll look at a paper by G J van Oldenborgh and colleagues from 2013. They introduce the concept of assessing natural variability using climate models, but that’s not the principal idea of the paper. However, it’s interesting to see what they say.

Their basic idea – we can compare weather models against reality because we make repeated weather forecasts and then can see whether we were overconfident or underconfident.

For example, one time we said there was a 10% chance of a severe storm. The storm didn’t happen. That doesn’t mean we were wrong. It was a probability. But if we have 100 examples of this 10% chance we can see – did we get approximately 10 instances of severe storms? If we got 0-3 maybe we were wildly overconfident. If we got 30 maybe we were very underconfident.
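Here's a minimal sketch of that kind of calibration check, with invented outcomes:

    # Hypothetical record: on each of 100 occasions a 10% chance of a severe
    # storm was forecast; `occurred` marks whether a storm actually happened.
    forecast_prob = 0.10
    occurred = [False] * 91 + [True] * 9  # invented outcomes for illustration

    observed_rate = sum(occurred) / len(occurred)
    print(f"Forecast probability: {forecast_prob:.0%}")
    print(f"Observed frequency:   {observed_rate:.0%}")

    # An observed frequency near 10% suggests the forecasts were well calibrated.
    # Far fewer storms than forecast suggests overconfidence; far more suggests
    # underconfidence.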

Now we can’t compare climate model outputs of the future against observations because the future hasn’t happened yet – there’s only one planet, and climate forecasts are over decades to a century, not one week.

We can, however, compare the spatial variation of models with reality.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

In #5 we examined a statement in the 6th Assessment Report (AR6) and some comments from their main reference, Imbers and co-authors from 2014.

Imbers experimented with a couple of simple models of natural variability and drew some conclusions about attribution studies.

We’ll have a look at their models. I’ll try to explain them in simple terms, along with some technical details.

Autoregressive or AR(1) Model

One model for natural variability they looked at goes by the name of first-order autoregressive or AR(1). In principle it’s pretty simple.

Let’s suppose the temperature tomorrow in London was random. Obviously, it wouldn’t be 1000°C. It wouldn’t be 100°C. There’s a range that you expect.

But if it were random, there would be no correlation between yesterday’s temperature and today’s. Like two spins of a roulette wheel or two dice rolls. The past doesn’t influence the present or the future.

We know from personal experience, and we can also see it in climate records, that the temperature today is correlated with the temperature from yesterday. The same applies for this year and last year.

If the temperature yesterday was 15°C, you expect today’s temperature to be closer to 15°C than to just anywhere in the range of temperatures London has seen for this month over the past 50 years.

Essentially, we know that there is some kind of persistence of temperatures (and other climate variables). Yesterday influences today.

AR(1) is a simple model of random variation but includes persistence. It’s possibly the simplest model of random noise with persistence.
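Here's a minimal sketch of an AR(1) process (the parameter values are illustrative, not taken from Imbers and co-authors): each day's anomaly is a fraction of yesterday's anomaly plus a fresh random shock.

    import numpy as np

    rng = np.random.default_rng(1)

    phi = 0.8    # persistence: how much of yesterday's anomaly carries into today
    sigma = 1.0  # size of the fresh random shock each day
    n_days = 365

    x = np.zeros(n_days)  # temperature anomaly, e.g. departure from a typical 15°C
    for t in range(1, n_days):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)

    # The lag-1 autocorrelation comes out close to phi: today is correlated with
    # yesterday (persistence), unlike pure white noise where it would be near zero.
    print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1])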

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

How do we know that a change in the climate, for example, rainfall trends in recent decades, is due to human activity like burning fossil fuels? How do we know it’s not natural variability?

This is the question we’ve started to look at in the first four parts of this series.

When it comes to attribution, there are thousands of papers considering the attribution of various trends in climate metrics to human activity (primarily burning fossil fuels).

Here’s what the 6th assessment report (AR6) says about attribution in plain English:

We need to make some assumptions: observed changes are due to a simple addition of forced changes (e.g., effects from more CO2 in the atmosphere) and natural variability; and we can work out natural variability.

Another way to write the second part:

If we don’t understand natural variability then our assessment could well be wrong.
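Here's a toy sketch of that additivity assumption (everything below is invented for illustration and is not AR6's, or anyone's, actual attribution method): treat the observed series as a scaled forced response plus natural variability, and estimate the scaling factor by least squares.

    import numpy as np

    rng = np.random.default_rng(7)

    n_years = 120
    forced = np.linspace(0, 1.0, n_years) ** 2  # toy forced response (e.g. CO2-driven warming)
    natural = rng.normal(0, 0.15, n_years)      # toy natural variability (here: white noise)
    observed = 0.9 * forced + natural           # assumption: observed = scaled forced + natural

    # Least-squares estimate of the scaling factor on the forced pattern
    beta = np.dot(forced, observed) / np.dot(forced, forced)
    print(f"estimated scaling factor: {beta:.2f} (true value used above: 0.9)")

    # If the natural-variability model is wrong (say it has strong persistence that
    # we ignored), the uncertainty on beta, and hence the attribution, can be
    # badly misjudged.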

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

In #1 we looked at natural variability – how the climate changed over decades and centuries before we started burning fossil fuels in large quantities. Clearly, many past trends were not caused by burning fossil fuels, so we need some method to attribute (or not) a recent trend to human activity. This is where climate models come in.

In #3 we looked at an example of a climate model producing the right value of 20th century temperature trends for the wrong reason.

The Art and Science of Climate Model Tuning is an excellent paper by Frederic Hourdin and a number of co-authors. It got a brief mention in Models, On – and Off – the Catwalk – Part Six – Tuning and Seasonal Contrasts. One of the co-authors is Thorsten Mauritsen who was the lead author of Tuning the Climate of a Global Model, looked at in another old article, and another co-author is Jean-Christophe Golaz, lead author of the paper we looked at in #3.

They explain that there are lots of choices to make when building a model:

Each parameterization relies on a set of internal equations and often depends on parameters, the values of which are often poorly constrained by observations. The process of estimating these uncertain parameters in order to reduce the mismatch between specific observations and model results is usually referred to as tuning in the climate modeling community.

Anyone who has dealt with mathematical modeling understands this – some parameters are unknown, or they might have a broad range of plausible values.
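Here's a minimal sketch of the general idea of tuning, using a toy one-parameter "model" rather than any real GCM or the authors' procedure: pick the parameter value that minimizes the mismatch between model output and observations.

    import numpy as np

    rng = np.random.default_rng(3)

    observations = 2.5 + rng.normal(0, 0.1, 50)  # invented observed quantity

    def toy_model(parameter):
        """Toy stand-in for a climate model: output depends on one uncertain parameter."""
        return np.full(50, parameter)

    # "Tuning": grid search over the plausible range, keeping the value with the
    # smallest mean-squared mismatch against the observations.
    candidates = np.linspace(0.0, 5.0, 501)
    mismatch = [np.mean((toy_model(p) - observations) ** 2) for p in candidates]
    best = candidates[int(np.argmin(mismatch))]
    print(f"tuned parameter value: {best:.2f}")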

An interesting comment:

There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.

The authors are advocating for more transparency on this topic.

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

In #1 we took a brief look at Natural Variation – climate varies from decade to decade, century to century. In #2 we took a brief look at attribution from “simple” models and from climate models (GCMs).

Here’s an example of the problem of “what do we make of climate models?”

I wrote about it on the original blog – Opinions and Perspectives – 6 – Climate Models, Consensus Myths and Fudge Factors. I noticed the paper I used in that article came up in Hourdin et al 2017, which in turn was referenced from the most recent IPCC report, AR6.

So this is the idea from the paper by Golaz and co-authors in 2013.

They ran a climate model over the 20th century – this is a standard thing to do to test a climate model on lots of different metrics. How well does the model reproduce our observations of trends?

In this case it was temperature change from 1900 to present.

In one version of the model they used a parameter value (related to aerosols and clouds) that is traditional but wrong; in another version they used the best value based on recent studies; and they added a third version with another alternative value.

What happens?

To see the whole article, visit the new Science of Doom on Substack page and please consider subscribing for notifications of new articles.

Read Full Post »

Older Posts »