
Archive for the ‘Climate Models’ Category

In #1 we looked at some examples of natural variability – the climate changes from decade to decade, century to century and out to much longer timescales.

How sure are we that any recent changes are from burning fossil fuels, or other human activity?

In some scientific fields we can run controlled experiments but we just have the one planet. So instead we need to use our knowledge of physics.

In an attempt to avoid a lengthy article I’m going to massively over-simplify.

“Simple Physics”

Some concepts in climate can be modeled by what I’ll call “simple physics”. It often doesn’t look simple.

Let’s take adding CO2 to the atmosphere. We can do this in a mathematical model. If we “keep everything else the same” in a given location we can calculate the change in energy the planet emits to space for more CO2. Less energy is emitted to space with more CO2 in the atmosphere.

The value varies in different locations, but we just calculate it in lots of places and take the average.
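Taking the average over "lots of places" on a latitude-longitude grid needs area weighting, because grid cells shrink towards the poles. A minimal sketch in Python — the grid and field values are invented for illustration:

```python
import numpy as np

# Hypothetical 2.5-degree grid of some quantity (e.g. change in emitted flux, W/m^2)
lats = np.arange(-88.75, 90, 2.5)            # cell-centre latitudes
lons = np.arange(1.25, 360, 2.5)
field = np.zeros((lats.size, lons.size))
field[lats > 0, :] = 2.0                     # made-up values: 2 in the north, 0 in the south

# Weight each latitude row by cos(latitude), proportional to the cell's area
weights = np.cos(np.radians(lats))[:, np.newaxis] * np.ones(lons.size)
global_mean = np.average(field, weights=weights)   # ~1.0 by hemispheric symmetry
```

An unweighted mean would give the same answer here only because of the symmetric example; for real fields the cos(latitude) weighting matters.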

As less energy is leaving the planet (but the same amount of solar energy is still being absorbed) the planet warms up.

In our model, we can keep increasing the temperature of the planet until the energy emitted to space is back to what it was before. The planetary energy budget is back in balance.

So we’ve calculated a new surface temperature for, say, a doubling of CO2.
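The "keep warming until the budget balances" step can be sketched with a zero-dimensional toy model. This is my illustration, not the actual calculation: it assumes an effective emission temperature of 255 K, the standard ~3.7 W/m² forcing estimate for doubled CO2, and no feedbacks, so it recovers the familiar no-feedback response of about 1°C:

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T0 = 255.0               # effective emission temperature, K
F_2XCO2 = 3.7            # radiative forcing for doubled CO2, W/m^2 (standard estimate)

olr_before = SIGMA * T0**4           # energy emitted to space before adding CO2
# Doubling CO2 reduces emission to space by F_2XCO2 at fixed temperature.
# Warm the planet in small steps until emission is back to its original value.
T = T0
while SIGMA * T**4 - F_2XCO2 < olr_before:
    T += 0.001
delta_T = T - T0                     # roughly 1 K without feedbacks
```

With feedbacks (water vapor, clouds, ice albedo) the real-model answer is larger; this sketch only shows the balancing logic described above.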

To see the whole article, visit the new Science of Doom page on Substack and please consider subscribing, for notifications on new articles.

Read Full Post »

In VI – Australia CanESM2, CSIRO, Miroc and MRI compared vs history we looked at how each model thought rainfall had changed in Australia over about 100 years, and we compared that to observations. We did this for annual rainfall, also for Australian summer (Dec, Jan, Feb) and Australian winter (Jun, Jul, Aug).

Here we will look at two of the four emissions scenarios. We compare 2081-2100 vs 1979-2005.

Note that we are not comparing the end of the 21st century from the model with observations at the end of the 20th century. That produces very different results – the model’s view of recent history doesn’t match observations very well. We are comparing the model future with the model past. So we are asking the model to say how it sees rainfall changing as a result of different amounts of CO2 being emitted.

The two scenarios are:

  • RCP4.5 – with current trends continuing we are on something like RCP6. I think of RCP4.5 as “what we are doing now” but with some substantial reductions in CO2 emissions. It’s nothing like RCP2.6, which is more “project Greta”, where emissions basically stop within a decade
  • RCP8.5 – extreme CO2 emissions. Often described as “business as usual” perhaps to get people’s attention. Think – most of Africa moving out of abject poverty, not passing through the demographic transition (so population going very high) and burning coal like crazy with the efficiency of 19th century Europe.

Each pair of graphs is future RCP4.5 as % of recent past, and RCP8.5 as % of recent past. The four models, clockwise from top left – MPI (Germany), Miroc (Japan), CSIRO (Australia) and CAN (Canada):

Figure 1 – Click to expand

And now the same, but only looking at Australian summer, DJF:

Figure 2 – Click to expand

Depending on which model you like, things could be really bad, or really good, or about the same with “climate change”.

Note that the color scale I’m using here is the same as the last article, but different from all the earlier articles: the % range is from 50% to 150% (rather than 0% to 200%).

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

Read Full Post »

In V – CanESM2, CSIRO, Miroc and MRI compared we compared four models among themselves for two future scenarios of CO2 emissions, and also the four models compared with historical observations.

Here we zero in on Australia. Let’s compare recent history (all months, 1979-2005) with around 100 years earlier (all months, 1891-1910) (note 1).

This first figure is a % comparison. Each map is annual data: average 1979-2005 as % of average 1891-1910. Note that the color scale I’m using here is different from previous articles: the % range is from 50% to 150% (rather than 0% to 200%).

The left-most map is observations, GPCC, and on the right the four different models. Each of the four maps is one model, 1979-2005 as a % of that model for 1891-1910 – clockwise from top left, MPI, MIROC, CSIRO, CanESM2 (note 2):

Figure 1 – Click to expand

So we are seeing how well the models compare among themselves, and with observations, for a century or so change. All of the models are run with the identical set of conditions (the best estimate of forcings like CO2, aerosols, etc) – that’s what CMIP5 is all about.

This second graphic is the % comparison over Australian summer: December, January, February (DJF). It is otherwise exactly the same as figure 1:

Figure 2 – Click to expand

The annual model comparisons look “better” than the summer (DJF) comparisons.

In the DJF comparisons, a century of Australian summer observations shows the western half of Australia wetter, and coastal Queensland (that’s the right edge, from halfway up) drier. Also some inland NSW regions drier.

MPI and CSIRO show the western edge drier. Miroc and CAN show the western edge wetter. CSIRO has the Adelaide region and west much drier, observations show much wetter, CAN and MPI show this area a little wetter while Miroc has it about the same.

It’s difficult to claim the summer model comparisons demonstrate any insight, given that we can check them against observations. Overall, these four models don’t show any particular shared bias – they don’t all agree with each other against the observations – apart from inland western Australia, where they all fail to predict the much higher rainfall seen in observations.

Place yourself back in 1900. You have these models, how useful are they for predicting 100 years ahead what would happen to summer rainfall?

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

Notes

Note 1: The choice of dates is constrained by:

  • 1891 being the start of the GPCC observational dataset
  • 1979 being the start of the satellite era
  • 2005 being the end date that this class of models ran to for their “historical” simulation – CMIP5 historical simulations were from 1850-2005

As a result, lots of comparisons in climate papers involve 1979-2005, so even though we aren’t using satellite data here, I have been using that 27-year period.

Note 2: Each model output is the median of all of the simulations
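The "median of all of the simulations" and the % comparisons used throughout these figures can be sketched as follows — the data here is synthetic, standing in for the model output:

```python
import numpy as np

def percent_of_past(hist_runs, future_runs):
    """Median across ensemble members, then future as % of historical,
    computed independently at every grid cell."""
    hist_med = np.median(hist_runs, axis=0)      # (n_lat, n_lon)
    future_med = np.median(future_runs, axis=0)
    return 100.0 * future_med / hist_med

# Synthetic stand-in: 5 ensemble members on a small grid, mm/month
rng = np.random.default_rng(0)
hist = rng.uniform(50, 150, size=(5, 4, 8))
future = hist * 0.9                              # a uniform 10% drying, for illustration
pct = percent_of_past(hist, future)              # ~90 everywhere
```

Using the median rather than the mean makes the map less sensitive to one outlier simulation.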

Read Full Post »

In the last article we looked at a comparison between Miroc (Japanese climate model) and MPI (German climate model). See that article for more details.

Now we add CanESM2 and CSIRO-Mk3-6-0 to the comparison.

CanESM2 is a Canadian climate model with an ESM component – an earth system model, which basically means that CO2 emissions are explicitly specified rather than the atmospheric CO2 concentration (so the model simulates aspects of the carbon cycle). Their model has 5 historical simulations and 5 each of three RCPs (skipping RCP6, like many other CMIP5 contributors).

CSIRO-Mk3-6-0 is an Australian model. Their model has 3 historical simulations and 10 each of the four RCPs.

As in the previous article, MPI, Miroc, CAN and CSIRO for RCP4.5 for 2081-2100. Each graphic – the median of all of the simulations as % of the median of that model’s historical 1979-2005 simulations:

Figure 1 – MPI, Miroc, CAN & CSIRO for RCP4.5 (%) – Click to expand

And for RCP8.5 for 2081-2100

Figure 2 – MPI, Miroc, CAN & CSIRO for RCP8.5 (%) – Click to expand

 

And comparisons of each model’s historical runs (the median of multiple runs) as % of observations (GPCC) over 1979-2005. So blue means the model over-estimates actual rainfall, whereas red means the model under-estimates:

Figure 3 – MPI, Miroc, CAN & CSIRO historical runs compared with GPCC over the same 1979-2005 period – Click to expand

Clearly a strong consensus.

Read Full Post »

In Models and Rainfall – III – MPI Seasonal and Models and Rainfall – II – MPI we looked at one model, MPI from Germany, from a variety of perspectives.

In this article we’ll look at another model that took part in the last Climate Model Intercomparison Project (CMIP5) – Miroc5 from Japan and compare it with MPI.

A reminder from an earlier article – the scenarios (Representative Concentration Pathways) in brief (and see van Vuuren reference below):

Miroc5 (just called Miroc in the rest of the article) did five simulations of historical and three simulations of each RCP through to 2100.

The first graphic has five maps: first, the median Miroc simulation of 1979-2005, followed by simulations of 2081-2100 for rcp2.6 to rcp8.5 (each one is the median of the three simulations):

Figure 1 – Miroc simulations of historical 1979-2005 and the 4 RCPs in 2081-2100 – Click to expand

The % change of the median Miroc simulation for each scenario from the median historical simulation:

We can see a consistent theme through increasing CO2 concentrations.

Figure 2 – Miroc simulations for RCPs 2081-2100 as % of Miroc historical 1979-2005 – Click to expand

As the previous figure, but difference (future – historical):

Figure 3 – Miroc simulations for RCPs 2081-2100 less Miroc historical 1979-2005 – Click to expand

Side by Side Comparisons of MPI and Miroc Predictions

And now some comparisons side by side. On the left MPI, on the right Miroc. Both are comparing RCP4.5 as a percentage of their own historical simulation (and both are the medians of the simulations):

Figure 4 – MPI compared with Miroc for RCP4.5 (%) – Click to expand

I think seeing future less historical (as a difference rather than a %) is also useful – in areas with very low rainfall the % difference can appear extreme even though the impact is very low. Overall, % graphs are more useful: if you live in an area with, say, 20mm of rainfall per month on average then -10mm might not show up very well on a difference chart, but it can be critical. But for reference, the difference:

Figure 5 – MPI compared with Miroc for RCP4.5 (difference) – Click to expand
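The point about % versus difference can be made concrete: a dry cell and a wet cell losing the same absolute amount look very different on the two kinds of chart. The numbers here are invented for illustration:

```python
def change_metrics(hist_mm, future_mm):
    """Return (difference in mm/month, future as % of historical)."""
    return future_mm - hist_mm, 100.0 * future_mm / hist_mm

# A dry cell: 20 mm/month falling to 10 mm/month
dry = change_metrics(20.0, 10.0)    # (-10.0, 50.0) - half the rain is gone
# A wet cell: 200 mm/month falling to 190 mm/month
wet = change_metrics(200.0, 190.0)  # (-10.0, 95.0) - only a 5% reduction
```

Both cells lose 10 mm/month, so they look identical on a difference map; the % map shows the dry cell is in far more trouble.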

Now the same two graphs for RCP8.5. On the left MPI, on the right Miroc. % of their historical simulation in each case:

Figure 6 – MPI compared with Miroc for RCP8.5 (%) – Click to expand

And now difference (future less historical) in each case:

Figure 7 – MPI compared with Miroc for RCP8.5 (difference) – Click to expand

Side by Side Comparisons of Models vs Observations

In Part II we saw some comparisons of the MPI model with GPCC observations, both over the same 1979-2005 time period. Here is MPI (left) and MIROC (right) each as a % of GPCC:

Figure 8 – MPI compared with Miroc for GPCC observations (%) – Click to expand

It’s clear that different models – at least in this case MPI and Miroc – can have significant differences between them.

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

The representative concentration pathways: an overview, van Vuuren et al, Climatic Change (2011)

 

Read Full Post »

In the last article we looked at the MPI model – comparisons of 2081-2100 for different atmospheric CO2 concentrations/emissions with 1979-2005. And comparisons between the MPI historical simulation and observations. These were all on an annual basis.

This article has a lot of graphics – I found it necessary because no one or two perspectives really help to capture the situation. At the end there are some perspectives for people who want to skip through.

In this article we look at similar comparisons to the last article, but seasonal. Mostly winter (northern hemisphere winter), i.e. December, January, February. Then a few comparisons of northern hemisphere summer: June, July, August. The graphics can all be expanded to see the detail better by clicking on them.

Future scenarios vs modeled history

Here we see the historical simulation over DJF 1979-2005 (1st graph) followed by the three scenarios, RCP2.6, RCP4.5, RCP8.5 over DJF 2080-2099:

Figure 1 – DJF Simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Now the results are displayed as a difference from the historical simulation. Positive is more rainfall in the future simulation, negative is less rainfall:

Figure 2 – DJF Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

And the % change. The Saharan changes look dramatic, but it’s very low rainfall turning to zero, at least in the model. For example, I picked one grid square, 20ºN, 0ºE, and the historical simulated rainfall was 0.2mm/month, under RCP2.6 0.05mm/month and under RCP8.5 0mm/month.

Figure 3 – DJF Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

I zoomed in on Australia – each graph is absolute values. The first is the historical simulation, then the 2nd, 3rd, 4th are the 3 RCPs as before:

Figure 4 – DJF Australia – simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Then differences from the historical simulation:

Figure 5 – DJF Australia – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

Then percentage changes from the historical simulation:

Figure 6 – DJF Australia – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

And the same for Europe – each graph is absolute values. The first is the historical simulation, then the 2nd, 3rd, 4th are the 3 RCPs as before:

Figure 7 – DJF Europe – simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Then differences from the historical simulation:

Figure 8 – DJF Europe – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

Then percentage changes from the historical simulation:

Figure 9 – DJF Europe – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

Now the global picture for northern hemisphere summer, June July August. First, absolute for the model for historical, then absolute for each RCP:

Figure 10 – JJA Simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Now the results are displayed as a difference from the historical simulation. Positive is more rainfall in the future simulation, negative is less rainfall:

Figure 11 – JJA Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

And the % change:

Figure 12 – JJA Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

Modeled History vs Observational History

As in the last article, how the historical model compares with observations over the same period but for DJF. The GPCC observational data on the left and the median of all the historical simulations from the three MPI models (8 simulations total) on the right:

Figure 13 – DJF 1979-2005 GPCC Observational data & Median of all MPI historical simulations – Click to expand

The difference, so blue means the model produces more rain than reality, while red means the model produces less rain:

Figure 14 – DJF 1979-2005 Median of all MPI historical simulations less GPCC Observational data – Click to expand

And percentage change:

Figure 15 – DJF 1979-2005 Median of all MPI historical simulations as % of GPCC Observational data – Click to expand

Some Perspectives

Now let’s look at annual, DJF and JJA for how simulations compare with observations – this is median MPI less GPCC, like figure 14. You can click to expand the image:

Figure 16 – Annual/seasons 1979-2005 Median of all MPI historical simulations less GPCC Observational data – Click to expand

Another perspective, compare projections of climate change with model skill. Top is skill (MPI simulation of DJF 1979-2005 less GPCC observation), bottom left is 2081-2100 RCP2.6 less MPI simulation, bottom right is RCP8.5 less MPI simulation:

Figure 17 – DJF Compare model skill with projections of climate change for RCP2.6 & RCP8.5 – Click to expand

So let’s look at it another way.

Let’s look at the projected rainfall change for RCP2.6 and RCP8.5 vs actual observations. That is, MPI median DJF 2081-2099 less GPCC DJF 1979-2005:

Figure 18 – DJF Compare model projections with actual historical – Click to expand

And the same for annual:

Figure 19 – Annual Compare model projections with actual historical – Click to expand

Let’s just compare the same two RCPs with model projections of climate change (as they are usually displayed, future less model historical):

Figure 20 – For contrast, as figure 19 but compare with model historical – Click to expand

If we look at SW Africa, for example, we see a progressive drying from RCP2.6 (drastic cuts in CO2 emissions) to RCP8.5 (very high emissions). But if we look at figure 19 then the model projections at the end of the century for that region have more rainfall than current.

If we look at California we see the same kind of progressive drying. But compare model projections with observations and we see more rainfall in California under both those scenarios.

Of course, this just reflects the fact that climate models have issues with simulating rainfall, something that everyone in climate modeling knows. But it’s intriguing.

In the next article we’ll look at another model.

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

The representative concentration pathways: an overview, van Vuuren et al, Climatic Change (2011)

Read Full Post »

If you look at model outputs for rainfall in the last IPCC report, or in most papers, it’s difficult to get a feel for what models produce, how they compare with each other, and how they compare with observational data. It’s common to just show the median of all models.

In this, and some subsequent articles, I’ll try and provide some level of detail.

Here are some comparisons from a set of models from the Max Planck Institute for Meteorology. MPI is just one of about 20 climate modeling centers around the world. They took part in the Climate Model Intercomparison Project (CMIP5). As part of that project, for the IPCC 5th assessment report (AR5), they ran a number of simulations. Details of CMIP5 in the Taylor et al reference below.

Future scenarios vs modeled history

Here is the % change in rainfall – 2081-2100 vs 1979-2005 from one of the MPI models (MPI-ESM-LR) for 3 scenarios. The median of 3 runs for each scenario is compared with the median of 3 runs for the historical period, and we see the % change:

Figure 1 – Simulations from MPI-ESM-LR for 3 RCPs vs simulation of historical – Click to expand

The scenarios (Representative Concentration Pathways) in brief (and see van Vuuren reference below):

We can see that RCP2.6 has some small reductions in rainfall in northern Africa, the Middle East and a few other regions. RCP8.5 has large areas of greatly reduced rainfall in northern Africa, the Middle East, SW Africa, the Amazon, and SW Australia.

So from a model-only point of view, the fewer the emissions the better.

It’s common to find that RCP6 is not modeled, which I find difficult to understand. Computing time is valuable, but RCP6 seems like the emissions pathway we are currently on.

Perhaps it should be explicitly stated that the simulation results of RCP4.5 and RCP6 are effectively identical – if that is in fact the case. That by itself would be useful information given that there is a substantial difference in CO2 emissions between them.

I had a look at a couple of regions of interest – Australia:

Figure 2 – Australia – Simulations from MPI-ESM-LR for 3 RCPs vs simulation of historical – Click to expand

And Europe:

Figure 3 – Europe – Simulations from MPI-ESM-LR for 3 RCPs vs simulation of historical – Click to expand

Modeled History vs Observational History

Here we compare the historical MPI model runs with observations (GPCC). MPI has 3 models and a total of 8 runs:

  • MPI-ESM-LR (3 simulations)
  • MPI-ESM-MR (3 simulations)
  • MPI-ESM-P (2 simulations)

Each model that takes part in CMIP5 produces one or more simulations over identical ‘historical’ conditions (our best estimate of them) from 1850-2005.

I compared the median of each model with GPCC over the last 27 years of the ‘historical’ period, 1979-2005:

Figure 4 – The median of simulations from each MPI model vs observation 1979-2005 – Click to expand

And the % difference of each MPI model vs GPCC over the same period:

Figure 5 – The median of simulations from each MPI model, % change over observation 1979-2005 – Click to expand

The different models appear quite similar. So let’s take the median of all 8 runs across the 3 models and compare with observations (GPCC) for clarity (the graph title isn’t quite correct, this is across the 3 models):

Figure 6 – The median of simulations from all MPI models, % change over observation 1979-2005 – Click to expand

The same, highlighting Australia:

Figure 7 – Australia – median of simulations from all MPI models, % change over observation 1979-2005 – Click to expand

And highlighting Europe:

 

Figure 8 – Europe – median of simulations from all MPI models, % change over observation 1979-2005 – Click to expand

I’m not trying to draw any big conclusions here, more interested in showing what model results look like.

But the one thing that stands out in a first look, at least to me – the difference between the MPI model and observations (over the same time period) is more substantial than the difference between the MPI model for 2080-2100 and the MPI model for recent history, even for an extreme CO2 scenario (RCP8.5).

If you want to draw conclusions from a climate model on rainfall, should you compare the future simulations with the simulation of the recent past? Or future simulations with actual observations? Or should you compare past simulations with actual and then decide whether to compare future simulations with anything?

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

The representative concentration pathways: an overview, van Vuuren et al, Climatic Change (2011)

Read Full Post »

Here’s an extract from a paper by Mehran et al 2014, comparing climate models with observations, over the same 1979-2005 time period:

From Mehran et al 2014

Click to enlarge

The graphs show the ratios of models to observations. Therefore, green is optimum, red means the model is producing too much rain, while blue means the model is producing too little rain (slightly counter-intuitive for rainfall and I’ll be showing data with colors reversed).

You can easily see that as well as models struggling to reproduce reality, models can be quite different from each other, for example the MPI model has very low rainfall for lots of Australia, whereas the NorESM model has very high rainfall. In other regions sometimes the models mostly lean the same way, for example NW US and W Canada.

For people who understand some level of detail about how models function it’s not a surprise that rainfall is more challenging than temperature (see Opinions and Perspectives – 6 – Climate Models, Consensus Myths and Fudge Factors).

But this challenge makes me wonder about drawing a solid black line through the median and expecting something useful to appear.

Here is an extract from the recent IPCC 1.5 report:

Global Warming of 1.5°C. An IPCC Special Report

I’ll try to shine some light on the outputs of rainfall in climate models in subsequent articles.

References

Note: these papers should be easily accessible without a paywall, just use scholar.google.com and type in the title.

Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations, Mehran, AghaKouchak, & Phillips, Journal of Geophysical Research: Atmospheres (2014)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, American Meteorological Society (2003)

Hoegh-Guldberg, O., D. Jacob, M. Taylor, M. Bindi, S. Brown, I. Camilloni, A. Diedhiou, R. Djalante, K.L. Ebi, F. Engelbrecht, J. Guiot, Y. Hijioka, S. Mehrotra, A. Payne, S.I. Seneviratne, A. Thomas, R. Warren, and G. Zhou, 2018: Impacts of 1.5ºC Global Warming on Natural and Human Systems. In: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield (eds.)].

The datasets are accessible in websites below – there are options to plot specific regions, within specific dates, and to download the whole dataset as a .nc file.

GPCC – https://psl.noaa.gov/data/gridded/data.gpcc.html

GPCP – https://psl.noaa.gov/data/gridded/data.gpcp.html

Read Full Post »

I have just been looking at the GPCC dataset, using Matlab to extract and plot monthly data for different time periods including comparisons. I’d like to compare actual observations with the output of various climate models over similar time periods – and against future simulations under different scenarios.

Have any readers of the blog done this? If so I’d appreciate a few tips having run into a few dead ends.

What I’m looking for – monthly gridded surface precipitation.

GPCC has 0.5ºx0.5º and 2.5ºx2.5º datasets that I’ve downloaded so the same gridded output from models would be wonderful.

I have found:

–  The CMIP5 Data is now available through the new portal, the Earth System Grid – Center for Enabling Technologies (ESG-CET), on the page http://esgf-node.llnl.gov/

–  https://www.wcrp-climate.org/wgcm/references/IPCC_standard_output.pdf

Table A1a: Monthly-mean 2-d atmosphere or land surface data (longitude, latitude, time:month).

CF standard_name: precipitation_flux; output variable name: pr; units: kg m-2 s-1; notes: includes both liquid and solid phases.

So I think this is what I am looking for.
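As an aside, converting that CF variable into mm/month is simple, because 1 kg of water spread over 1 m² is a 1 mm depth – so kg m⁻² s⁻¹ is numerically mm/s. A sketch in Python (the sample flux value is invented):

```python
def pr_to_mm_per_month(pr_flux, days_in_month):
    """Convert CF precipitation_flux in kg m-2 s-1 to mm/month.
    1 kg/m^2 of water = 1 mm depth, so kg m-2 s-1 is numerically mm/s."""
    return pr_flux * 86400.0 * days_in_month

monthly = pr_to_mm_per_month(3e-5, 30)   # ~77.8 mm in a 30-day month
```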

–  https://www.ipcc-data.org/sim/gcm_monthly/AR5/Reference-Archive.html gives a list of different experiments within each climate model. For example – the MPI model, I expect that historical and rcp.. are the ones I want. I would have to dig into MPI-ESM-LR and -MR which I assume are different model resolutions.

But when I work my way through the portal, e.g. https://esgf-data.dkrz.de/search/cmip5-dkrz/ I find a bewildering array of options and after hopefully culling it down to just monthly rainfall from the MPI-LR model, there are 213 files:

I can easily imagine spending 100+ hours trying to establish which files are correct, trying to verify.. So, if any readers have the knowledge it would be much appreciated.
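One thing that helps with the bewildering array of files: CMIP5 output filenames follow a fixed convention, variable_table_model_experiment_ensemble_period.nc, so the monthly rainfall files can at least be filtered by name. A sketch – the example filenames follow the convention but are illustrative, not a real listing:

```python
def parse_cmip5_name(fname):
    """Split a CMIP5 filename into its standard fields."""
    parts = fname.removesuffix(".nc").split("_")
    keys = ("variable", "table", "model", "experiment", "ensemble", "period")
    return dict(zip(keys, parts))

files = [
    "pr_Amon_MPI-ESM-LR_historical_r1i1p1_185001-200512.nc",
    "pr_Amon_MPI-ESM-LR_rcp45_r1i1p1_200601-210012.nc",
    "tas_Amon_MPI-ESM-LR_historical_r1i1p1_185001-200512.nc",
]
# Keep only monthly precipitation ("pr" in the "Amon" table) from the historical experiment
wanted = [f for f in files
          if (m := parse_cmip5_name(f))["variable"] == "pr"
          and m["table"] == "Amon"
          and m["experiment"] == "historical"]
```

The ensemble field (r1i1p1 etc.) distinguishes the multiple runs of the same experiment, which is what you need to take medians across.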

————

Just for interest, here are a few graphs produced from GPCC using Matlab. I checked a couple of outputs against samples produced from their website and they seemed correct.

I set the max monthly rainfall on the color axis to increase contrast for most places in the world – 4 different 10-year periods:

GPCC Precipitation data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

And a delta, % difference:

GPCC Precipitation data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

Read Full Post »

In Part Seven – Resolution & Convection we looked at some examples of how model resolution and domain size had big effects on modeled convection.

One commenter highlighted some presentations on issues in GCMs. As there were already a lot of comments on that article the relevant points appear a long way down. The issue deserves at least a short article of its own.

The presentations, by Paul Williams, Department of Meteorology, University of Reading, UK – all freely available:

The impacts of stochastic noise on climate models

The importance of numerical time-stepping errors

The leapfrog is dead. Long live the leapfrog!

Various papers are highlighted in these presentations (often without a full reference).

Time-Step Dependence

One of the papers cited: Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, Teixeira, Reynolds & Judd 2007 comments first on the Lorenz equations (see Natural Variability and Chaos – Two – Lorenz 1963):

Figure 3a shows the evolution of X for r = 19 for three different time steps (10⁻², 10⁻³, and 10⁻⁴ LTU).

In this regime the solutions exhibit what is often referred to as transient chaotic behavior (Strogatz 1994), but after some time all solutions converge to a stable fixed point.

Depending on the time step used to integrate the equations, the values for the fixed points can be different, which means that the climate of the model is sensitive to the time step.

In this particular case, the solution obtained with 0.01 LTU converges to a positive fixed point while the other two solutions converge to a negative value.

To conclude the analysis of the sensitivity to parameter r, Fig. 3b shows the time evolution (with r =21.3) of X for three different time steps. For time steps 0.01 LTU and 0.0001 LTU the solution ceases to have a chaotic behavior and starts converging to a stable fixed point.

However, for 0.001 LTU the solution stays chaotic, which shows that different time steps may not only lead to uncertainty in the predictions after some time, but may also lead to fundamentally different regimes of the solution.

These results suggest that time steps may have an important impact in the statistics of climate models in the sense that something relatively similar may happen to more complex and realistic models of the climate system for time steps and parameter values that are currently considered to be reasonable.

[Emphasis added]

For people unfamiliar with chaotic systems, it is worth reading Natural Variability and Chaos – One – Introduction and Natural Variability and Chaos – Two – Lorenz 1963. The Lorenz system of three equations creates a very simple system of convection where we humans have the advantage of god-like powers. Although, as this paper shows, it seems that even with our god-like powers, under certain circumstances, we aren’t able to determine

  1. the average value of the “climate”, or even
  2. whether the climate is a deterministic or chaotic system

The results depend on the time step we have used to solve the set of equations.
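The flavour of this sensitivity is easy to reproduce. Below is a minimal Python sketch (my own construction, not the paper’s scheme – the paper’s time units and integration method differ) that integrates the Lorenz 1963 equations with forward Euler in the transiently chaotic regime at two different time steps:

```python
import numpy as np

# Forward-Euler integration of the Lorenz 1963 system in the transiently
# chaotic regime (r = 19). Illustrative only: the final resting state can
# depend on the time step chosen.
def integrate(dt, t_end=50.0, sigma=10.0, r=19.0, b=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(round(t_end / dt)):
        dx = sigma * (y - x)
        dy = r * x - y - x * z
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x, y, z

final_coarse = integrate(dt=0.01)
final_fine = integrate(dt=0.001)
# After the chaotic transient the solution settles toward one of the two
# fixed points at x = ±sqrt(b*(r-1)) ≈ ±6.93; which one may differ with dt.
print(final_coarse[0], final_fine[0])
```

Running this with a range of time steps and initial conditions is the quickest way to convince yourself that the “climate” of even a three-variable model can depend on a supposedly innocuous numerical choice.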

The paper then goes on to consider a couple of models, including a weather forecasting model. In their summary:

In the weather and climate prediction community, when thinking in terms of model predictability, there is a tendency to associate model error with the physical parameterizations.

In this paper, it is shown that time truncation error in nonlinear models behaves in a more complex way than in linear or mildly nonlinear models and that it can be a substantial part of the total forecast error.

The fact that it is relatively simple to test the sensitivity of a model to the time step, allowed us to study the implications of time step sensitivity in terms of numerical convergence and error growth in some depth. The simple analytic model proposed in this paper illustrates how the evolution of truncation error in nonlinear models can be understood as a combination of the typical linear truncation error and of the initial condition error associated with the error committed in the first time step integration (proportional to some power of the time step).

A relevant question is how much of this simple study of time step truncation error could help in understanding the behavior of more complex forms of model error associated with the parameterizations in weather and climate prediction models, and its interplay with initial condition error.
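For contrast, here is what benign truncation error looks like: a quick Python check (illustrative only, not from the paper) of forward Euler on the smooth, non-chaotic logistic equation. Halving the time step roughly halves the error, which is exactly the orderly convergence that the chaotic cases above are not guaranteed to show:

```python
import numpy as np

def euler_logistic(x0, dt, t_end):
    """Forward-Euler integration of the logistic equation dx/dt = x(1 - x)."""
    x = x0
    for _ in range(round(t_end / dt)):
        x += dt * x * (1.0 - x)
    return x

x0, t_end = 0.1, 5.0
# Exact solution of the logistic equation, for comparison.
exact = x0 * np.exp(t_end) / (1.0 - x0 + x0 * np.exp(t_end))
errors = [abs(euler_logistic(x0, dt, t_end) - exact) for dt in (0.1, 0.05, 0.025)]
print(errors)  # halving dt roughly halves the error (first-order convergence)
```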

Another reference from the presentations is Dependence of aqua-planet simulations on time step, Williamson & Olson 2003.

What is an aquaplanet simulation?

In an aqua-planet the earth is covered with water and has no mountains. The sea surface temperature (SST) is specified, usually with rather simple geometries such as zonal symmetry. The ‘correct’ solutions of aqua-planet tests are not known.

However, it is thought that aqua-planet studies might help us gain insight into model differences, understand physical processes in individual models, understand the impact of changing parametrizations and dynamical cores, and understand the interaction between dynamical cores and parametrization packages. There is a rich history of aqua-planet experiments, from which results relevant to this paper are discussed below.

They found that running two different dynamical cores with the same parameterizations produced quite different precipitation results. On investigating further, the time step appeared to be the key difference.


Figure 1

Their conclusion:

When running the Neale and Hoskins (2000a) standard aqua-planet test suite with two versions of the CCM3, which differed in the formulation of the dynamical cores, we found a strong sensitivity in the morphology of the time averaged, zonal averaged precipitation.

The two dynamical cores were candidates for the successor model to CCM3; one was Eulerian and the other semi-Lagrangian.

They were each configured as proposed for climate simulation application, and believed to be of comparable accuracy.

The major difference was computational efficiency. In general, simulations with the Eulerian core formed a narrow single precipitation peak centred on the equator, while those with the semi-Lagrangian core produced more precipitation farther from the equator accompanied by a double peak straddling the equator with a minimum centred on the equator..

..We do not know which simulation is ‘correct’. Although a single peak forms with smaller time steps, the simulations do not converge with the smallest time step considered here. The maximum precipitation rate at the equator continues to increase..

..The significance of the time truncation error of parametrizations deserves further consideration in AGCMs forced by real-world conditions.

Stochastic Noise

From Global Thermohaline Circulation. Part I: Sensitivity to Atmospheric Moisture Transport, Xiaoli Wang et al 1999, the strength of the North Atlantic overturning current (the thermohaline circulation) changed significantly with noise:

From Wang et al 1999

Figure 2

The idea behind the experiment is that increasing freshwater fluxes at high latitudes from melting ice (in a warmer world) appear to impact the strength of the Atlantic “conveyor” which brings warm water from nearer the equator to northern Europe (there is a long history of consideration of this question). How sensitive is this to random effects?

In these experiments we also include random variations in the zonal wind stress field north of 46ºN. The variations are uniform in space and have a Gaussian distribution, with zero mean and standard deviation of 1 dyn/cm², based on European Centre for Medium-Range Weather Forecasts (ECMWF) analyses (D. Stammer 1996, personal communication).

Our motivation in applying these random variations in wind stress is illustrated by two experiments, one with random wind variations, the other without, in which μN increases according to the above prescription. Figure 12 shows the time series of the North Atlantic overturning strength in these two experiments. The random wind variations give rise to interannual variations in the strength of the overturning, which are comparable in magnitude to those found in experiments with coupled GCMs (e.g., Manabe and Stouffer 1994), whereas interannual variations are almost absent without them. The variations also accelerate the collapse of the overturning, therefore speeding up the response time of the model to the freshwater flux perturbation (see Fig. 12). The reason for the acceleration of the collapse is that the variations make it harder for the convection to sustain itself.

The convection tends to maintain itself, because of a positive feedback with the overturning circulation (Lenderink and Haarsma 1994). Once the convection is triggered, it creates favorable conditions for further convection there. This positive feedback is so powerful that in the case without random variations the convection does not shut off until the freshening is virtually doubled at the convection site (around year 1000). When the random variations are present, they generate perturbations in the Ekman currents, which are propagated downward to the deep layers, and cause variations in the overturning strength. This weakens the positive feedback.

In general, the random wind stress variations lead to a more realistic variability in the convection sites, and in the strength of the overturning circulation.

We note that, even though the transitions are speeded up by the technique, the character of the model behavior is not fundamentally altered by including the random wind variations.
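A toy model conveys the flavour of this result. The Python sketch below is entirely my own construction, not the Wang et al model: a bistable system stands in for the overturning, a slowly ramped forcing stands in for the growing freshwater flux, and zero-mean Gaussian increments stand in for the random wind-stress variations. Without noise the “circulation” survives the ramp; with noise it can tip into the collapsed state:

```python
import numpy as np

def run(noise_std, seed=0, dt=0.1, n_steps=5000):
    """Toy bistable system: dx/dt = x - x^3 + f(t), plus optional noise.

    x near +1 stands in for a strong overturning, x near -1 for the
    collapsed state. f ramps slowly downward, mimicking a growing
    freshwater perturbation. Returns the step at which 'collapse'
    (x < -0.5) first occurs, or None if it never does.
    """
    rng = np.random.default_rng(seed)
    x = 1.0
    for i in range(n_steps):
        f = -0.3 * i / n_steps  # slow downward ramp of the forcing
        noise = noise_std * np.sqrt(dt) * rng.standard_normal()
        x += dt * (x - x**3 + f) + noise
        if x < -0.5:
            return i
    return None

print(run(noise_std=0.0))  # None: the ramp alone never tips the system
print(run(noise_std=0.3))  # with noise, a transition typically occurs
```

The ramp is deliberately chosen to stop short of the deterministic tipping point, so any collapse is noise-triggered, echoing the paper’s finding that random variations accelerate the transition.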

The presentation on stochastic noise also highlighted a coarse-resolution GCM that didn’t show El Niño features – but after the introduction of random noise it did.

I couldn’t track down the reference – Joshi, Williams & Smith 2010 – and emailed Paul Williams, who replied very quickly and helpfully. The paper is still “in preparation”, which probably means it won’t ever be finished; instead Paul pointed me to two related papers that had been published: Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Paul D Williams et al, AMS (2016) and Climatic impacts of stochastic fluctuations in air–sea fluxes, Paul D Williams et al, GRL (2012).

From the 2012 paper:

In this study, stochastic fluctuations have been applied to the air–sea buoyancy fluxes in a comprehensive climate model. Unlike related previous work, which has employed an ocean general circulation model coupled only to a simple empirical model of atmospheric dynamics, the present work has employed a full coupled atmosphere–ocean general circulation model. This advance allows the feedbacks in the coupled system to be captured as comprehensively as is permitted by contemporary high-performance computing, and it allows the impacts on the atmospheric circulation to be studied.

The stochastic fluctuations were introduced as a crude attempt to capture the variability of rapid, sub-grid structures otherwise missing from the model. Experiments have been performed to test the response of the climate system to the stochastic noise.

In two experiments, the net fresh water flux and the net heat flux were perturbed separately. Significant changes were detected in the century-mean oceanic mixed-layer depth, sea-surface temperature, atmospheric Hadley circulation, and net upward water flux at the sea surface. Significant changes were also detected in the ENSO variability. The century-mean changes are summarized schematically in Figure 4. The above findings constitute evidence that noise-induced drift and noise-enhanced variability, which are familiar concepts from simple models, continue to apply in comprehensive climate models with millions of degrees of freedom..

The graph below shows the control experiment (top), followed by the differences from the control for two experiments in which random noise was added by two different methods (note the change in vertical axis scale for the two anomaly panels):

From Williams et al 2012

Figure 3

A key element of the paper is that adding random noise changes the mean values.

From Williams et al 2012

Figure 4

From the 2016 paper:

Faster computers are constantly permitting the development of climate models of greater complexity and higher resolution. Therefore, it might be argued that the need for parameterization is being gradually reduced over time.

However, it is difficult to envisage any model ever being capable of explicitly simulating all of the climatically important components on all of the relevant time scales. Furthermore, it is known that the impact of the subgrid processes cannot necessarily be made vanishingly small simply by increasing the grid resolution, because information from arbitrarily small scales within the inertial subrange (down to the viscous dissipation scale) will always be able to contaminate the resolved scales in finite time.

This feature of the subgrid dynamics perhaps explains why certain systematic errors are common to many different models and why numerical simulations are apparently not asymptoting as the resolution increases. Indeed, the Intergovernmental Panel on Climate Change (IPCC) has noted that the ultimate source of most large-scale errors is that ‘‘many important small-scale processes cannot be represented explicitly in models’’.

And they continue with an excellent explanation:

The major problem with conventional, deterministic parameterization schemes is their assumption that the impact of the subgrid scales on the resolved scales is uniquely determined by the resolved scales. This assumption can be made to sound plausible by invoking an analogy with the law of large numbers in statistical mechanics.

According to this analogy, the subgrid processes are essentially random and of sufficiently large number per grid box that their integrated effect on the resolved scales is predictable. In reality, however, the assumption is violated because the most energetic subgrid processes are only just below the grid scale, placing them far from the limit in which the law of large numbers applies. The implication is that the parameter values that would make deterministic parameterization schemes exactly correct are not simply uncertain; they are in fact indeterminate.
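The law-of-large-numbers point is easy to see numerically. In the illustrative Python sketch below (my construction, not from the paper), the mean contribution of very many independent random “subgrid events” is nearly deterministic, while the mean of just a few remains strongly random:

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_of_mean(n_events, n_trials=4000):
    """Std dev (across trials) of the mean of n_events random 'subgrid events'."""
    events = rng.standard_normal((n_trials, n_events))
    return events.mean(axis=1).std()

# The spread of the mean shrinks like 1/sqrt(n_events):
print(spread_of_mean(2500))  # ~0.02: integrated effect nearly deterministic
print(spread_of_mean(4))     # ~0.5 : integrated effect remains random
```

Deterministic parameterization implicitly assumes the first regime; the quote argues that the most energetic subgrid processes actually live in the second.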

Later:

The question of whether stochastic closure schemes outperform their deterministic counterparts was listed by Williams et al. (2013) as a key outstanding challenge in the field of mathematics applied to the climate system.

Adding zero-mean noise doesn’t create a zero-mean effect?

The changes to the mean climatological state that were identified in section 3 are a manifestation of what, in the field of stochastic dynamical systems, is called noise-induced drift or noise-induced rectification. This effect arises from interactions between the noise and nonlinearities in the model equations. It permits zero-mean noise to have non-zero-mean effects, as seen in our stochastic simulations.
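The effect can be demonstrated in a few lines. The Python sketch below (a deliberately minimal toy, not any of the models discussed) adds zero-mean Gaussian noise to a relaxation equation with a small quadratic term; the interaction of noise and nonlinearity rectifies the fluctuations and shifts the long-run mean away from zero:

```python
import numpy as np

# dx = (-x - b*x^2) dt + sigma dW: the noise has zero mean, but the
# quadratic term rectifies the fluctuations. At stationarity
# E[x] = -b*E[x^2] < 0, so the long-run mean is shifted below zero.
rng = np.random.default_rng(42)
b, sigma, dt = 0.2, 0.6, 0.01
n_steps, n_paths, spin_up = 6000, 2000, 1000

x = np.zeros(n_paths)  # ensemble of independent sample paths
means = []
for step in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x += dt * (-x - b * x**2) + sigma * dW
    if step >= spin_up:  # discard spin-up, then accumulate ensemble means
        means.append(x.mean())

drift = float(np.mean(means))
print(f"long-run mean with zero-mean noise: {drift:.3f}")  # negative, ~ -b*sigma^2/2
```

Set b = 0 and the drift vanishes: it is the nonlinearity, not the noise alone, that produces the shift.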

The paper itself aims..

..to investigate whether climate simulations can be improved by implementing a simple stochastic parameterization of ocean eddies in a coupled atmosphere–ocean general circulation model.

The idea is whether adding noise can improve model results more effectively than increasing model resolution:

We conclude that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost.

In this latter respect, our findings are consistent with those of Berner et al. (2012), who studied the model error in an atmospheric general circulation model. They reported that, although the impact of adding stochastic noise is not universally beneficial in terms of model bias reduction, it is nevertheless beneficial across a range of variables and diagnostics. They also reported that, in terms of improving the magnitudes and spatial patterns of model biases, the impact of adding stochastic noise can be similar to the impact of increasing the resolution. Our results are consistent with these findings. We conclude that oceanic stochastic parameterizations join atmospheric stochastic parameterizations in having the potential to significantly improve climate simulations.

And for people who’ve been educated on the basics of fluids on a rotating planet via experiments on the rotating annulus (a 2d model – along with equations – providing great insights into our 3d planet), Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime, Paul D Williams et al 2010 might be interesting.

Conclusion

Some systems have a lot of non-linearity. This is true of climate and generally of turbulent flows.

In a textbook I read some time ago on (I think) chaos, the author made the great comment that you usually start out being taught “linear models” and only much later come into contact with “non-linear models”. He proposed better terminology: “real-world systems” for the non-linear models, and “simplistic non-real-world teaching models” for the linear ones. I’m paraphrasing.

The point is that most real world systems are non-linear. And many (not all) non-linear systems have difficult properties. The easy stuff you learn – linear systems, aka “simplistic non-real-world teaching models” – isn’t actually relevant to most real world problems, it’s just a stepping stone in giving you the tools to solve the hard problems.

Solving these difficult systems requires numerical methods (there is mostly no analytical solution), and once you start playing around with time steps, parameter values and model resolution you find that the results can be significantly – and sometimes dramatically – affected by these arbitrary choices. With relatively simple systems (like the Lorenz three-equation convection system) and massive computing power you can begin to find the dependencies. But there isn’t a clear path to seeing where the dependencies lie, although many people have done great work in systematizing simple chaotic systems to provide some insights.

GCMs provide insights into climate that we can’t get otherwise.

One way to think about GCMs is that once they mostly agree on the direction of an effect that provides “high confidence”, and anyone who doesn’t agree with that confidence is at best a cantankerous individual and at worst has a hidden agenda.

Another way to think about GCMs is that climate models are mostly at the mercy of unverified parameterizations and numerical methods and anyone who does accept their conclusions is naive and doesn’t appreciate the realities of non-linear systems.

Life is complex and either of these propositions could be true, along with anything inbetween.

More about Turbulence: Turbulence, Closure and Parameterization

References

Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, Teixeira, Reynolds & Judd, Journal of the Atmospheric Sciences (2007) – free paper

Dependence of aqua-planet simulations on time step, Williamson & Olson, Q. J. R. Meteorol. Soc. (2003) – free paper

Global Thermohaline Circulation. Part I: Sensitivity to Atmospheric Moisture Transport, Xiaoli Wang, Peter H Stone, and Jochem Marotzke, American Meteorological Society (1999) – free paper

Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies – Paul D Williams et al, AMS (2016) – free paper

Climatic impacts of stochastic fluctuations in air–sea fluxes, Paul D Williams et al, GRL (2012) – free paper

Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime, Paul Williams, Peter Read & Thomas Haine, J. Fluid Mech. (2010) – free paper

