
Archive for the ‘Climate Models’ Category

In Models and Rainfall – III – MPI Seasonal and Models and Rainfall – II – MPI we looked at one model, MPI from Germany, from a variety of perspectives.

In this article we’ll look at another model that took part in the last Coupled Model Intercomparison Project (CMIP5) – Miroc5 from Japan – and compare it with MPI.

A reminder from an earlier article – the scenarios (Representative Concentration Pathways) in brief (and see van Vuuren reference below):

Miroc5 (just called Miroc in the rest of the article) did five simulations of historical and three simulations of each RCP through to 2100.

The first graphic has five maps: first, the median Miroc simulation of 1979-2005, followed by simulations of 2081-2100 for RCP2.6 to RCP8.5 (each one is the median of the three simulations):

Figure 1 – Miroc simulations of historical 1979-2005 and the 4 RCPs in 2081-2100 – Click to expand

The % change of the median Miroc simulation for each scenario from the median historical simulation:

We can see a consistent theme as the CO2 concentration increases across the scenarios.

Figure 2 – Miroc simulations for RCPs 2081-2100 as % of Miroc historical 1979-2005 – Click to expand

As the previous figure, but as a difference (future – historical):

Figure 3 – Miroc simulations for RCPs 2081-2100 less Miroc historical 1979-2005 – Click to expand

Side by Side Comparisons of MPI and Miroc Predictions

And now some comparisons side by side. On the left MPI, on the right Miroc. Both are comparing RCP4.5 as a percentage of their own historical simulation (and both are the medians of the simulations):

Figure 4 – MPI compared with Miroc for RCP4.5 (%) – Click to expand

I think seeing future less historical (a difference rather than a %) is also useful – in areas with very low rainfall the % change can look extreme even though the absolute impact is tiny. Overall, though, % graphs are more informative: if you live in an area averaging say 20mm of rainfall per month, a -10mm change might barely show up on a difference chart, yet it could be critical. But for reference, the difference:

Figure 5 – MPI compared with Miroc for RCP4.5 (difference) – Click to expand
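The % versus difference point can be made concrete with a small sketch (pure Python with made-up grid-cell values; the helper names are mine, not from the code used to produce these figures):

```python
def pct_change(future, historical):
    """% change of future vs historical; None where the baseline is zero."""
    if historical == 0:
        return None  # % change is undefined for a zero baseline
    return 100.0 * (future - historical) / historical

def abs_change(future, historical):
    """Simple difference, future less historical, in mm/month."""
    return future - historical

# A near-desert grid cell: a huge % change but a tiny absolute change.
print(pct_change(0.05, 0.2), abs_change(0.05, 0.2))    # i.e. -75%, -0.15 mm/month

# The 20 mm/month example from the text: a critical loss, yet small
# enough in absolute terms to vanish on a global difference map.
print(pct_change(10.0, 20.0), abs_change(10.0, 20.0))  # i.e. -50%, -10 mm/month
```

With ~0.2mm/month of baseline rain, a -75% change is a trivial -0.15mm/month; with 20mm/month, a modest-looking -10mm is a -50% change that matters a great deal.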

Now the same two graphs for RCP8.5. On the left MPI, on the right Miroc. % of their historical simulation in each case:

Figure 6 – MPI compared with Miroc for RCP8.5 (%) – Click to expand

And now difference (future less historical) in each case:

Figure 7 – MPI compared with Miroc for RCP8.5 (difference) – Click to expand

Side by Side Comparisons of Models vs Observations

In Part II we saw some comparisons of the MPI model with GPCC observations, both over the same 1979-2005 time period. Here is MPI (left) and Miroc (right), each as a % of GPCC:

Figure 8 – MPI and Miroc, each as % of GPCC observations – Click to expand

It’s clear that different models – here MPI and Miroc – can have significant differences between them.

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

The representative concentration pathways: an overview, van Vuuren et al, Climatic Change (2011)


Read Full Post »

In the last article we looked at the MPI model – comparisons of 2081-2100 for different atmospheric CO2 concentrations/emissions with 1979-2005. And comparisons between the MPI historical simulation and observations. These were all on an annual basis.

This article has a lot of graphics – I found it necessary because no one or two perspectives really capture the situation. For people who want to skip through, there are some perspectives at the end.

In this article we look at similar comparisons to the last article, but seasonal. Mostly winter (northern hemisphere winter), i.e. December, January, February. Then a few comparisons of northern hemisphere summer: June, July, August. The graphics can all be expanded to see the detail better by clicking on them.

Future scenarios vs modeled history

Here we see the historical simulation over DJF 1979-2005 (1st graph) followed by the three scenarios, RCP2.6, RCP4.5, RCP8.5 over DJF 2080-2099:

Figure 1 – DJF Simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Now the results are displayed as a difference from the historical simulation. Positive is more rainfall in the future simulation, negative is less rainfall:

Figure 2 – DJF Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

And the % change. The Saharan changes look dramatic, but it’s very low rainfall turning to zero, at least in the model. For example, I picked one grid square, 20ºN, 0ºE: the historical simulated rainfall was 0.2mm/month, under RCP2.6 it was 0.05mm/month, and under RCP8.5 0mm/month.

Figure 3 – DJF Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

I zoomed in on Australia – each graph shows absolute values. The first is the historical simulation, then the 2nd, 3rd, 4th are the 3 RCPs as before:

Figure 4 – DJF Australia – simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Then differences from the historical simulation:

Figure 5 – DJF Australia – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

Then percentage changes from the historical simulation:

Figure 6 – DJF Australia – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

And the same for Europe – each graph shows absolute values. The first is the historical simulation, then the 2nd, 3rd, 4th are the 3 RCPs as before:

Figure 7 – DJF Europe – simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Then differences from the historical simulation:

Figure 8 – DJF Europe – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

Then percentage changes from the historical simulation:

Figure 9 – DJF Europe – Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

Now the global picture for northern hemisphere summer, June, July, August. First, absolute values for the historical simulation, then absolute values for each RCP:

Figure 10 – JJA Simulations from MPI-ESM-LR for historical 1979-2005 & 3 RCPs 2080-2099 – Click to expand

Now the results are displayed as a difference from the historical simulation. Positive is more rainfall in the future simulation, negative is less rainfall:

Figure 11 – JJA Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 minus simulation of historical 1979-2005 – Click to expand

And the % change:

Figure 12 – JJA Simulations from MPI-ESM-LR for 3 RCPs in 2080-2099 as % of simulation of historical 1979-2005 – Click to expand

Modeled History vs Observational History

As in the last article, how the historical model compares with observations over the same period but for DJF. The GPCC observational data on the left and the median of all the historical simulations from the three MPI models (8 simulations total) on the right:

Figure 13 – DJF 1979-2005 GPCC Observational data & Median of all MPI historical simulations – Click to expand

The difference: blue means the model produces more rain than reality, while red means the model produces less rain:

Figure 14 – DJF 1979-2005 Median of all MPI historical simulations less GPCC Observational data – Click to expand

And percentage change:

Figure 15 – DJF 1979-2005 Median of all MPI historical simulations as % of GPCC Observational data – Click to expand

Some Perspectives

Now let’s look at annual, DJF and JJA comparisons of simulation with observations – this is median MPI less GPCC, as in figure 14. You can click to expand the image:

Figure 16 – Annual/seasons 1979-2005 Median of all MPI historical simulations less GPCC Observational data – Click to expand

Another perspective: comparing projections of climate change with model skill. Top is skill (MPI simulation of DJF 1979-2005 less GPCC observations), bottom left is 2081-2100 RCP2.6 less the MPI historical simulation, bottom right is RCP8.5 less the MPI historical simulation:

Figure 17 – DJF Compare model skill with projections of climate change for RCP2.6 & RCP8.5 – Click to expand

So let’s look at it another way.

Let’s look at the projected rainfall change for RCP2.6 and RCP8.5 vs actual observations. That is, MPI median DJF 2081-2099 less GPCC DJF 1979-2005:

Figure 18 – DJF Compare model projections with actual historical – Click to expand

And the same for annual:

Figure 19 – Annual Compare model projections with actual historical – Click to expand

For contrast, let’s compare the same two RCPs as model projections of climate change are usually displayed – future less model historical:

Figure 20 – For contrast, as figure 19 but compare with model historical – Click to expand

If we look at SW Africa in figure 20, for example, we see a progressive drying from RCP2.6 (drastic cuts in CO2 emissions) to RCP8.5 (very high emissions). But if we look at figure 19, the model projections for the end of the century show more rainfall in that region than current observations.

If we look at California we see the same kind of progressive drying. But compare model projections with observations and we see more rainfall in California under both those scenarios.

Of course, this just reflects the fact that climate models have issues with simulating rainfall, something that everyone in climate modeling knows. But it’s intriguing.

In the next article we’ll look at another model.

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

The representative concentration pathways: an overview, van Vuuren et al, Climatic Change (2011)

Read Full Post »

If you look at model outputs for rainfall in the last IPCC report, or in most papers, it’s difficult to get a feel for what models produce, how they compare with each other, and how they compare with observational data. It’s common to just show the median of all models.

In this, and some subsequent articles, I’ll try and provide some level of detail.

Here are some comparisons from a set of models from the Max Planck Institute for Meteorology. MPI is just one of about 20 climate modeling centers around the world. They took part in the Coupled Model Intercomparison Project (CMIP5). As part of that project, for the IPCC 5th assessment report (AR5), they ran a number of simulations. Details of CMIP5 are in the Taylor et al reference below.

Future scenarios vs modeled history

Here is the % change in rainfall – 2081-2100 vs 1979-2005 from one of the MPI models (MPI-ESM-LR) for 3 scenarios. The median of 3 runs for each scenario is compared with the median of 3 runs for the historical period, and we see the % change:

Figure 1 – Simulations from MPI-ESM-LR for 3 RCPs vs simulation of historical – Click to expand

The scenarios (Representative Concentration Pathways) in brief (and see van Vuuren reference below):

We can see that RCP2.6 has some small reductions in rainfall in northern Africa, the Middle East and a few other regions. RCP8.5 has large areas of greatly reduced rainfall in northern Africa, the Middle East, SW Africa, the Amazon, and SW Australia.

So, from a model-only point of view, the lower the emissions the better.

It’s common to find that RCP6 is not modeled, something I find difficult to understand. Computing time is valuable, but RCP6 seems like the emissions pathway we are currently on.

Perhaps it should be explicitly stated that the simulation results of RCP4.5 and RCP6 are effectively identical – if that is in fact the case. That by itself would be useful information given that there is a substantial difference in CO2 emissions between them.

I had a look at a couple of regions of interest – Australia:

Figure 2 – Australia – Simulations from MPI-ESM-LR for 3 RCPs vs simulation of historical – Click to expand

And Europe:

Figure 3 – Europe – Simulations from MPI-ESM-LR for 3 RCPs vs simulation of historical – Click to expand

Modeled History vs Observational History

Here we compare the historical MPI model runs with observations (GPCC). MPI has 3 models and a total of 8 runs:

  • MPI-ESM-LR (3 simulations)
  • MPI-ESM-MR (3 simulations)
  • MPI-ESM-P (2 simulations)

Each model that takes part in CMIP5 produces one or more simulations over identical ‘historical’ conditions (our best estimate of them) from 1850-2005.

I compared the median of each model with GPCC over the last 27 years of the ‘historical’ period, 1979-2005:

Figure 4 – The median of simulations from each MPI model vs observation 1979-2005 – Click to expand
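For anyone reproducing this kind of comparison, the per-grid-cell median across runs is simple. A minimal sketch in pure Python on nested lists (real work would use gridded arrays; the toy values here are invented):

```python
from statistics import median

def cellwise_median(runs):
    """Median across simulations at each grid cell.

    `runs` is a list of equally-sized 2-d grids (lists of rows),
    one grid per simulation; returns one grid of medians.
    """
    n_rows, n_cols = len(runs[0]), len(runs[0][0])
    return [[median(run[i][j] for run in runs) for j in range(n_cols)]
            for i in range(n_rows)]

# Three toy "simulations" of a 1x3 strip of grid cells (mm/month):
runs = [[[50.0, 80.0, 5.0]],
        [[55.0, 70.0, 6.0]],
        [[60.0, 90.0, 4.0]]]
print(cellwise_median(runs))  # [[55.0, 80.0, 5.0]]
```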

And the % difference of each MPI model vs GPCC over the same period:

Figure 5 – The median of simulations from each MPI model, % change over observation 1979-2005 – Click to expand

The different models appear quite similar. So, for clarity, let’s take the median of all 8 runs across the 3 models and compare it with observations (GPCC) – note the graph title isn’t quite correct, this is across the 3 models:

Figure 6 – The median of simulations from all MPI models, % change over observation 1979-2005 – Click to expand

The same, highlighting Australia:

Figure 7 – Australia – median of simulations from all MPI models, % change over observation 1979-2005 – Click to expand

And highlighting Europe:


Figure 8 – Europe – median of simulations from all MPI models, % change over observation 1979-2005 – Click to expand

I’m not trying to draw any big conclusions here; I’m more interested in showing what model results look like.

But one thing stands out on a first look, at least to me: the difference between the MPI model and observations (over the same time period) is more substantial than the difference between the MPI model for 2080-2100 and the MPI model for recent history – even for an extreme CO2 scenario (RCP8.5).

If you want to draw conclusions from a climate model on rainfall, should you compare the future simulations with the simulation of the recent past? Or future simulations with actual observations? Or should you compare past simulations with actual and then decide whether to compare future simulations with anything?

References

An overview of CMIP5 and the experiment design, Taylor, Stouffer & Meehl, AMS (2012)

GPCP data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

GPCC data provided from https://psl.noaa.gov/data/gridded/data.gpcc.html

CMIP5 data provided by the portal at https://esgf-data.dkrz.de/search/cmip5-dkrz/

The representative concentration pathways: an overview, van Vuuren et al, Climatic Change (2011)

Read Full Post »

Here’s an extract from a paper by Mehran et al 2014, comparing climate models with observations, over the same 1979-2005 time period:

From Mehran et al 2014

Click to enlarge

The graphs show the ratio of model to observations. Green is optimum, red means the model is producing too much rain, while blue means the model is producing too little rain (slightly counter-intuitive for rainfall – I’ll be showing data with the colors reversed).

You can easily see that as well as models struggling to reproduce reality, models can be quite different from each other, for example the MPI model has very low rainfall for lots of Australia, whereas the NorESM model has very high rainfall. In other regions sometimes the models mostly lean the same way, for example NW US and W Canada.

For people who understand some level of detail about how models function it’s not a surprise that rainfall is more challenging than temperature (see Opinions and Perspectives – 6 – Climate Models, Consensus Myths and Fudge Factors).

But this challenge makes me wonder about drawing a solid black line through the median and expecting something useful to appear.

Here is an extract from the recent IPCC 1.5 report:

Global Warming of 1.5°C. An IPCC Special Report

I’ll try to shine some light on the outputs of rainfall in climate models in subsequent articles.

References

Note: these papers should be easily accessible without a paywall, just use scholar.google.com and type in the title.

Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations, Mehran, AghaKouchak, & Phillips, Journal of Geophysical Research: Atmospheres (2014)

The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present), Adler et al, American Meteorological Society (2003)

Hoegh-Guldberg, O., D. Jacob, M. Taylor, M. Bindi, S. Brown, I. Camilloni, A. Diedhiou, R. Djalante, K.L. Ebi, F. Engelbrecht, J. Guiot, Y. Hijioka, S. Mehrotra, A. Payne, S.I. Seneviratne, A. Thomas, R. Warren, and G. Zhou, 2018: Impacts of 1.5ºC Global Warming on Natural and Human Systems. In: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [Masson-Delmotte, V., P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J.B.R. Matthews, Y. Chen, X. Zhou, M.I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield (eds.)].

The datasets are accessible at the websites below – there are options to plot specific regions within specific dates, and to download the whole dataset as a .nc file.

GPCC – https://psl.noaa.gov/data/gridded/data.gpcc.html

GPCP – https://psl.noaa.gov/data/gridded/data.gpcp.html

Read Full Post »

I have just been looking at the GPCC dataset, using Matlab to extract and plot monthly data for different time periods including comparisons. I’d like to compare actual with the output of various climate models over similar time periods – and against future simulations under different scenarios.

Have any readers of the blog done this? If so, I’d appreciate a few tips, having run into a few dead ends.

What I’m looking for – monthly gridded surface precipitation.

GPCC has 0.5ºx0.5º and 2.5ºx2.5º datasets that I’ve downloaded, so the same gridded output from models would be wonderful.
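As an aside, if a model grid doesn’t match either GPCC resolution, a crude fallback is to block-average the finer grid down to the coarser one – 5×5 blocks of 0.5º cells make one 2.5º cell. A sketch in pure Python (an unweighted mean; a proper regrid would area-weight each cell by cos(latitude)):

```python
def block_average(grid, factor):
    """Average factor x factor blocks of a 2-d grid (list of rows).

    Grid dimensions must be exact multiples of `factor`.
    """
    n_rows, n_cols = len(grid), len(grid[0])
    assert n_rows % factor == 0 and n_cols % factor == 0
    out = []
    for bi in range(0, n_rows, factor):
        row = []
        for bj in range(0, n_cols, factor):
            block = [grid[i][j]
                     for i in range(bi, bi + factor)
                     for j in range(bj, bj + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 2x2 coarse grid from a 4x4 fine grid (factor 2 for brevity;
# 0.5º -> 2.5º would use factor 5):
fine = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(block_average(fine, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```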

I have found:

–  The CMIP5 Data is now available through the new portal, the Earth System Grid – Center for Enabling Technologies (ESG-CET), on the page http://esgf-node.llnl.gov/

–  https://www.wcrp-climate.org/wgcm/references/IPCC_standard_output.pdf

Table A1a: Monthly-mean 2-d atmosphere or land surface data (longitude, latitude, time:month):

CF standard_name: precipitation_flux; output variable name: pr; units: kg m-2 s-1; notes: includes both liquid and solid phases.

So I think this is what I am looking for.
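One practical detail: `pr` is a flux in kg m-2 s-1, while GPCC is in mm/month. Since 1 kg of water over 1 m² is a 1 mm depth, the conversion is just flux × seconds in the month. A sketch (the function name is my own):

```python
SECONDS_PER_DAY = 86400

def pr_to_mm_per_month(pr_flux, days_in_month):
    """Convert CMIP5 `pr` (kg m-2 s-1) to rainfall depth in mm/month.

    1 kg of water spread over 1 m^2 is exactly 1 mm deep, so the flux
    in kg m-2 s-1 is numerically mm/s.
    """
    return pr_flux * SECONDS_PER_DAY * days_in_month

# A typical mid-latitude value, ~3e-5 kg m-2 s-1, over a 30-day month:
print(pr_to_mm_per_month(3e-5, 30))  # ~77.8 mm/month
```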

–  https://www.ipcc-data.org/sim/gcm_monthly/AR5/Reference-Archive.html gives a list of different experiments within each climate model. For the MPI model, for example, I expect that historical and rcp.. are the experiments I want. I would have to dig into MPI-ESM-LR and -MR, which I assume are different model resolutions.

But when I work my way through the portal, e.g. https://esgf-data.dkrz.de/search/cmip5-dkrz/ I find a bewildering array of options and after hopefully culling it down to just monthly rainfall from the MPI-LR model, there are 213 files:

I can easily imagine spending 100+ hours trying to establish which files are correct and trying to verify them. So, if any readers have the knowledge, it would be much appreciated.

————

Just for interest, here are a few graphs produced from GPCC using Matlab. I checked a couple of outputs against samples produced from their website and they seemed correct.

To increase contrast for most places in the world, I capped the maximum monthly rainfall on the color axis – 4 different 10-year periods:

GPCC Precipitation data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

And a delta, % difference:

GPCC Precipitation data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site at https://psl.noaa.gov/

Read Full Post »

In Part Seven – Resolution & Convection we looked at some examples of how model resolution and domain size had big effects on modeled convection.

One commenter highlighted some presentations on issues in GCMs. As there were already a lot of comments on that article, the relevant points appear a long way down. The issue deserves at least a short article of its own.

The presentations, by Paul Williams, Department of Meteorology, University of Reading, UK – all freely available:

The impacts of stochastic noise on climate models

The importance of numerical time-stepping errors

The leapfrog is dead. Long live the leapfrog!

Various papers are highlighted in these presentations (often without a full reference).

Time-Step Dependence

One of the papers cited – Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, Teixeira, Reynolds & Judd (2007) – comments first on the Lorenz equations (see Natural Variability and Chaos – Two – Lorenz 1963):

Figure 3a shows the evolution of X for r =19 for three different time steps (10-2, 10-3, and 10-4 LTU).

In this regime the solutions exhibit what is often referred to as transient chaotic behavior (Strogatz 1994), but after some time all solutions converge to a stable fixed point.

Depending on the time step used to integrate the equations, the values for the fixed points can be different, which means that the climate of the model is sensitive to the time step.

In this particular case, the solution obtained with 0.01 LTU converges to a positive fixed point while the other two solutions converge to a negative value.

To conclude the analysis of the sensitivity to parameter r, Fig. 3b shows the time evolution (with r =21.3) of X for three different time steps. For time steps 0.01 LTU and 0.0001 LTU the solution ceases to have a chaotic behavior and starts converging to a stable fixed point.

However, for 0.001 LTU the solution stays chaotic, which shows that different time steps may not only lead to uncertainty in the predictions after some time, but may also lead to fundamentally different regimes of the solution.

These results suggest that time steps may have an important impact in the statistics of climate models in the sense that something relatively similar may happen to more complex and realistic models of the climate system for time steps and parameter values that are currently considered to be reasonable.

[Emphasis added]

For people unfamiliar with chaotic systems, it is worth reading Natural Variability and Chaos – One – Introduction and Natural Variability and Chaos – Two – Lorenz 1963. The Lorenz system of three equations creates a very simple system of convection where we humans have the advantage of god-like powers. Although, as this paper shows, it seems that even with our god-like powers, under certain circumstances we aren’t able to confirm:

  1. the average value of the “climate”, or even
  2. if the climate is a deterministic or chaotic system

The results depend on the time step we have used to solve the set of equations.

The paper then goes on to consider a couple of models, including a weather forecasting model. From their summary:

In the weather and climate prediction community, when thinking in terms of model predictability, there is a tendency to associate model error with the physical parameterizations.

In this paper, it is shown that time truncation error in nonlinear models behaves in a more complex way than in linear or mildly nonlinear models and that it can be a substantial part of the total forecast error.

The fact that it is relatively simple to test the sensitivity of a model to the time step, allowed us to study the implications of time step sensitivity in terms of numerical convergence and error growth in some depth. The simple analytic model proposed in this paper illustrates how the evolution of truncation error in nonlinear models can be understood as a combination of the typical linear truncation error and of the initial condition error associated with the error committed in the first time step integration (proportional to some power of the time step).

A relevant question is how much of this simple study of time step truncation error could help in understanding the behavior of more complex forms of model error associated with the parameterizations in weather and climate prediction models, and its interplay with initial condition error.

Another reference from the presentations is Dependence of aqua-planet simulations on time step, Williamson & Olson 2003.

What is an aquaplanet simulation?

In an aqua-planet the earth is covered with water and has no mountains. The sea surface temperature (SST) is specified, usually with rather simple geometries such as zonal symmetry. The ‘correct’ solutions of aqua-planet tests are not known.

However, it is thought that aqua-planet studies might help us gain insight into model differences, understand physical processes in individual models, understand the impact of changing parametrizations and dynamical cores, and understand the interaction between dynamical cores and parametrization packages. There is a rich history of aqua-planet experiments, from which results relevant to this paper are discussed below.

They found that running different “mechanisms” with the same parameterizations produced quite different precipitation results. On investigating further, it appeared that the time step was the key change.


Figure 1 – Click to enlarge

Their conclusion:

When running the Neale and Hoskins (2000a) standard aqua-planet test suite with two versions of the CCM3, which differed in the formulation of the dynamical cores, we found a strong sensitivity in the morphology of the time averaged, zonal averaged precipitation.

The two dynamical cores were candidates for the successor model to CCM3; one was Eulerian and the other semi-Lagrangian.

They were each configured as proposed for climate simulation application, and believed to be of comparable accuracy.

The major difference was computational efficiency. In general, simulations with the Eulerian core formed a narrow single precipitation peak centred on the equator, while those with the semi-Lagrangian core produced more precipitation farther from the equator accompanied by a double peak straddling the equator with a minimum centred on the equator..

..We do not know which simulation is ‘correct’. Although a single peak forms with smaller time steps, the simulations do not converge with the smallest time step considered here. The maximum precipitation rate at the equator continues to increase..

..The significance of the time truncation error of parametrizations deserves further consideration in AGCMs forced by real-world conditions.

Stochastic Noise

From Global Thermohaline Circulation. Part I: Sensitivity to Atmospheric Moisture Transport, Xiaoli Wang et al 1999, the strength of the North Atlantic overturning current (the thermohaline circulation) changed significantly with noise:

From Wang et al 1999

Figure 2

The idea behind the experiment is that increasing freshwater fluxes at high latitudes from melting ice (in a warmer world) appear to impact the strength of the Atlantic “conveyor” which brings warm water from nearer the equator to northern Europe (there is a long history of consideration of this question). How sensitive is this to random effects?

In these experiments we also include random variations in the zonal wind stress field north of 46ºN. The variations are uniform in space and have a Gaussian distribution, with zero mean and standard deviation of 1 dyn/cm², based on European Centre for Medium-Range Weather Forecasts (ECMWF) analyses (D. Stammer 1996, personal communication).

Our motivation in applying these random variations in wind stress is illustrated by two experiments, one with random wind variations, the other without, in which μN increases according to the above prescription. Figure 12 shows the time series of the North Atlantic overturning strength in these two experiments. The random wind variations give rise to interannual variations in the strength of the overturning, which are comparable in magnitude to those found in experiments with coupled GCMs (e.g., Manabe and Stouffer 1994), whereas interannual variations are almost absent without them. The variations also accelerate the collapse of the overturning, therefore speeding up the response time of the model to the freshwater flux perturbation (see Fig. 12). The reason for the acceleration of the collapse is that the variations make it harder for the convection to sustain itself.

The convection tends to maintain itself, because of a positive feedback with the overturning circulation (Lenderink and Haarsma 1994). Once the convection is triggered, it creates favorable conditions for further convection there. This positive feedback is so powerful that in the case without random variations the convection does not shut off until the freshening is virtually doubled at the convection site (around year 1000). When the random variations are present, they generate perturbations in the Ekman currents, which are propagated downward to the deep layers, and cause variations in the overturning strength. This weakens the positive feedback.

In general, the random wind stress variations lead to a more realistic variability in the convection sites, and in the strength of the overturning circulation.

We note that, even though the transitions are speeded up by the technique, the character of the model behavior is not fundamentally altered by including the random wind variations.

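To get the flavor of noise-induced variability, a toy bistable system helps – a double well dx/dt = x − x³, loosely standing in for an on/off circulation state, forced with Gaussian noise (this is my own illustration, not the Wang et al model):

```python
import random

def integrate(noise_std, steps=20000, dt=0.01, x0=0.5, seed=1):
    """Euler-Maruyama integration of dx = (x - x^3) dt + noise dW."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(steps):
        x += (x - x ** 3) * dt                        # deterministic drift
        if noise_std:
            x += noise_std * rng.gauss(0.0, 1.0) * dt ** 0.5  # stochastic forcing
        path.append(x)
    return path

quiet = integrate(noise_std=0.0)   # deterministic: settles into the x = +1 well
noisy = integrate(noise_std=0.5)   # forced: jitters, and can hop between wells
print(quiet[-1])
print(max(noisy) - min(noisy))
```

Without noise the trajectory simply relaxes to +1 and stays there, so there is no interannual-style variability at all; with noise the state wanders, and with strong enough forcing it can cross to the other well – the analogue of the noise-accelerated collapse in the paper.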
The presentation on stochastic noise also highlighted a coarse-resolution GCM that didn’t show El Niño features – but after the introduction of random noise it did.

I couldn’t track down the reference – Joshi, Williams & Smith 2010 – so I emailed Paul Williams, who replied very quickly and helpfully: the paper is still “in preparation” (which probably means it won’t ever be finished), but he pointed me to two related papers that had been published: Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Paul D Williams et al, AMS (2016) and Climatic impacts of stochastic fluctuations in air–sea fluxes, Paul D Williams et al, GRL (2012).

From the 2012 paper:

In this study, stochastic fluctuations have been applied to the air–sea buoyancy fluxes in a comprehensive climate model. Unlike related previous work, which has employed an ocean general circulation model coupled only to a simple empirical model of atmospheric dynamics, the present work has employed a full coupled atmosphere–ocean general circulation model. This advance allows the feedbacks in the coupled system to be captured as comprehensively as is permitted by contemporary high-performance computing, and it allows the impacts on the atmospheric circulation to be studied.

The stochastic fluctuations were introduced as a crude attempt to capture the variability of rapid, sub-grid structures otherwise missing from the model. Experiments have been performed to test the response of the climate system to the stochastic noise.

In two experiments, the net fresh water flux and the net heat flux were perturbed separately. Significant changes were detected in the century-mean oceanic mixed-layer depth, sea-surface temperature, atmospheric Hadley circulation, and net upward water flux at the sea surface. Significant changes were also detected in the ENSO variability. The century-mean changes are summarized schematically in Figure 4. The above findings constitute evidence that noise-induced drift and noise-enhanced variability, which are familiar concepts from simple models, continue to apply in comprehensive climate models with millions of degrees of freedom.
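The papers describe zero-mean stochastic perturbations to the air–sea fluxes but naturally don’t include code. A minimal sketch of the usual building block – a zero-mean AR(1) (“red noise”) process with a prescribed decorrelation time – applied to a hypothetical heat flux; the parameter values are illustrative, not taken from the papers:

```python
import numpy as np

def ar1_noise(n_steps, dt, tau, sigma, rng):
    """Zero-mean AR(1) ('red') noise with decorrelation time tau
    and stationary standard deviation sigma."""
    phi = np.exp(-dt / tau)              # lag-1 autocorrelation
    amp = sigma * np.sqrt(1.0 - phi**2)  # keeps the stationary std at sigma
    eta = np.zeros(n_steps)
    for i in range(1, n_steps):
        eta[i] = phi * eta[i - 1] + amp * rng.standard_normal()
    return eta

rng = np.random.default_rng(0)
dt, tau = 0.25, 5.0                      # time step and decorrelation (days)
noise = ar1_noise(40_000, dt, tau, sigma=10.0, rng=rng)   # W/m^2

mean_flux = 100.0                        # hypothetical air-sea heat flux, W/m^2
perturbed_flux = mean_flux + noise       # zero-mean perturbation of the flux
```

The noise averages to zero over a long run; the interesting question, addressed by the experiments above, is whether the model’s response does too.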

The graph below shows the control experiment (top) followed by the difference between two experiments and the control (note change in vertical axis scale for the two anomaly experiments) where two different methods of adding random noise were included:

From Williams et al 2012

Figure 3

A key element of the paper is that adding random noise changes the mean values.

From Williams et al 2012

Figure 4

From the 2016 paper:

Faster computers are constantly permitting the development of climate models of greater complexity and higher resolution. Therefore, it might be argued that the need for parameterization is being gradually reduced over time.

However, it is difficult to envisage any model ever being capable of explicitly simulating all of the climatically important components on all of the relevant time scales. Furthermore, it is known that the impact of the subgrid processes cannot necessarily be made vanishingly small simply by increasing the grid resolution, because information from arbitrarily small scales within the inertial subrange (down to the viscous dissipation scale) will always be able to contaminate the resolved scales in finite time.

This feature of the subgrid dynamics perhaps explains why certain systematic errors are common to many different models and why numerical simulations are apparently not asymptoting as the resolution increases. Indeed, the Intergovernmental Panel on Climate Change (IPCC) has noted that the ultimate source of most large-scale errors is that ‘‘many important small-scale processes cannot be represented explicitly in models’’.

And they continue with an excellent explanation:

The major problem with conventional, deterministic parameterization schemes is their assumption that the impact of the subgrid scales on the resolved scales is uniquely determined by the resolved scales. This assumption can be made to sound plausible by invoking an analogy with the law of large numbers in statistical mechanics.

According to this analogy, the subgrid processes are essentially random and of sufficiently large number per grid box that their integrated effect on the resolved scales is predictable. In reality, however, the assumption is violated because the most energetic subgrid processes are only just below the grid scale, placing them far from the limit in which the law of large numbers applies. The implication is that the parameter values that would make deterministic parameterization schemes exactly correct are not simply uncertain; they are in fact indeterminate.

Later:

The question of whether stochastic closure schemes outperform their deterministic counterparts was listed by Williams et al. (2013) as a key outstanding challenge in the field of mathematics applied to the climate system.

Adding noise with a mean zero doesn’t create a mean zero effect?

The changes to the mean climatological state that were identified in section 3 are a manifestation of what, in the field of stochastic dynamical systems, is called noise-induced drift or noise-induced rectification. This effect arises from interactions between the noise and nonlinearities in the model equations. It permits zero-mean noise to have non-zero-mean effects, as seen in our stochastic simulations.
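This is easy to demonstrate in a toy nonlinear model – my own illustration, not anything from the paper. Take dx/dt = x(1 − x), which has a stable equilibrium at x = 1 where a noise-free simulation simply sits. Force it with zero-mean white noise and the nonlinearity rectifies the fluctuations, so the long-run mean drifts below 1:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n, sigma = 0.01, 400_000, 0.15

# Zero-mean stochastic forcing (Euler-Maruyama scaling sqrt(dt))
xi = sigma * np.sqrt(dt) * rng.standard_normal(n)

x, burn_in, total = 1.0, n // 10, 0.0
for i in range(n):
    x += x * (1.0 - x) * dt + xi[i]   # nonlinear drift + zero-mean noise
    if i >= burn_in:                  # discard spin-up before averaging
        total += x
stochastic_mean = total / (n - burn_in)
# stochastic_mean settles a little below 1.0 (roughly 1 - sigma**2/2)
```

The forcing has exactly zero mean, yet the time-mean state shifts: that is noise-induced drift in miniature.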

The paper itself aims..

..to investigate whether climate simulations can be improved by implementing a simple stochastic parameterization of ocean eddies in a coupled atmosphere–ocean general circulation model.

The idea is whether adding noise can improve model results more effectively than increasing model resolution:

We conclude that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost.

In this latter respect, our findings are consistent with those of Berner et al. (2012), who studied the model error in an atmospheric general circulation model. They reported that, although the impact of adding stochastic noise is not universally beneficial in terms of model bias reduction, it is nevertheless beneficial across a range of variables and diagnostics. They also reported that, in terms of improving the magnitudes and spatial patterns of model biases, the impact of adding stochastic noise can be similar to the impact of increasing the resolution. Our results are consistent with these findings. We conclude that oceanic stochastic parameterizations join atmospheric stochastic parameterizations in having the potential to significantly improve climate simulations.

And for people who’ve been educated on the basics of fluids on a rotating planet via experiments on the rotating annulus (a 2d model – along with equations – providing great insights into our 3d planet), Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime, Paul D Williams et al 2010 might be interesting.

Conclusion

Some systems have a lot of non-linearity. This is true of climate and generally of turbulent flows.

In a textbook that I read some time ago on (I think) chaos, the author made the great comment that usually you start out being taught “linear models” and only much later come into contact with “non-linear models”. He proposed that better terminology would be “real-world systems” for non-linear models and “simplistic non-real-world teaching models” for linear ones. I’m paraphrasing.

The point is that most real world systems are non-linear. And many (not all) non-linear systems have difficult properties. The easy stuff you learn – linear systems, aka “simplistic non-real-world teaching models” – isn’t actually relevant to most real world problems, it’s just a stepping stone in giving you the tools to solve the hard problems.

Solving these difficult systems requires numerical methods (there is mostly no analytical solution) and once you start playing around with time-steps, parameter values and model resolution you find that the results can be significantly – and sometimes dramatically – affected by the arbitrary choices. With relatively simple systems (like the Lorenz three-equation convection system) and massive computing power you can begin to find the dependencies. But there isn’t a clear path to see where the dependencies lie (of course, many people have done great work in systematizing (simple) chaotic systems to provide some insights).
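The Lorenz three-equation system makes the time-step point concrete. A sketch (forward Euler with the standard parameters – chosen for brevity, not because it’s a good integrator) where the only difference between the two runs is the step size:

```python
import numpy as np

def lorenz_step(state, dt, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 convection equations."""
    x, y, z = state
    return np.array([x + dt * s * (y - x),
                     y + dt * (x * (r - z) - y),
                     z + dt * (x * y - b * z)])

def integrate(dt, t_end, state=(1.0, 1.0, 1.0)):
    out = np.empty((int(round(t_end / dt)), 3))
    st = np.array(state, dtype=float)
    for i in range(out.shape[0]):
        st = lorenz_step(st, dt)
        out[i] = st
    return out

coarse = integrate(dt=0.01, t_end=20.0)   # 2,000 steps
fine = integrate(dt=0.001, t_end=20.0)    # 20,000 steps, same equations

fine_at_coarse_times = fine[9::10]        # align the two runs in time
max_gap = np.abs(coarse - fine_at_coarse_times).max()
# both runs stay on the attractor, but max_gap grows to the size of the
# attractor itself: the trajectories bear no pointwise resemblance
```

Same equations, same initial state – and the “arbitrary choice” of time step has changed the answer completely.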

GCMs provide insights into climate that we can’t get otherwise.

One way to think about GCMs is that once they mostly agree on the direction of an effect that provides “high confidence”, and anyone who doesn’t agree with that confidence is at best a cantankerous individual and at worst has a hidden agenda.

Another way to think about GCMs is that climate models are mostly at the mercy of unverified parameterizations and numerical methods and anyone who does accept their conclusions is naive and doesn’t appreciate the realities of non-linear systems.

Life is complex and either of these propositions could be true, along with anything in between.

More about Turbulence: Turbulence, Closure and Parameterization

References

Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, Teixeira, Reynolds & Judd, Journal of the Atmospheric Sciences (2007) – free paper

Dependence of aqua-planet simulations on time step, Williamson & Olson, Q. J. R. Meteorol. Soc. (2003) – free paper

Global Thermohaline Circulation. Part I: Sensitivity to Atmospheric Moisture Transport, Xiaoli Wang, Peter H Stone, and Jochem Marotzke, American Meteorological Society (1999) – free paper

Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies – Paul D Williams et al, AMS (2016) – free paper

Climatic impacts of stochastic fluctuations in air–sea fluxes, Paul D Williams et al, GRL (2012) – free paper

Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime, Paul Williams, Peter Read & Thomas Haine, J. Fluid Mech. (2010) – free paper

Read Full Post »

A couple of recent articles covered ground related to clouds, but under Models – Models, On – and Off – the Catwalk – Part Seven – Resolution & Convection and Part Five – More on Tuning & the Magic Behind the Scenes. In the first article Andrew Dessler, day job climate scientist, made a few comments and in one comment provided some great recent references. One of these was by Paulo Ceppi and colleagues published this year and freely accessible. Another paper with some complementary explanations is from Mark Zelinka and colleagues, also published this year (but behind a paywall).

In this article we will take a look at the breakdown these papers provide. There is a lot to the Ceppi paper, so we’re not going to review it all here; hopefully a follow-up article will cover the rest.

Globally and annually averaged, clouds cool the planet by around 18W/m² – that’s large compared with the radiative effect of doubling CO2, a value of 3.7W/m². The net effect is made up of two larger opposite effects:

  • cooling from reflecting sunlight (albedo effect) of about 46W/m²
  • warming from the radiative effect of about 28W/m² – clouds absorb terrestrial radiation and re-emit it from near the top of the cloud, where it is colder; this is like the “greenhouse” effect
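For reference, the bookkeeping behind those numbers:

```python
# Globally averaged cloud radiative effects quoted above (W/m^2)
sw_cooling = -46.0   # reflection of sunlight back to space (albedo effect)
lw_warming = +28.0   # "greenhouse" effect of cold, high cloud tops
net_effect = sw_cooling + lw_warming        # -18 W/m^2 net cooling

co2_doubling = 3.7   # radiative effect of doubling CO2, W/m^2, for scale
ratio = abs(net_effect) / co2_doubling      # net cloud effect is ~5x larger
```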

In this graphic, Zelinka and colleagues show the geographical breakdown of cloud radiative effect averaged over 15 years from CERES measurements:

From Zelinka et al 2017

Figure 1 – Click to enlarge

Note that the cloud radiative effect shown above isn’t feedbacks from warming, it is simply the current effect of clouds. The big question is how this will change with warming.

In the next graphic, the inset at the top shows cloud feedback (note 1) vs ECS from 28 GCMs. ECS is the steady-state temperature resulting from doubling CO2. Two models are picked out – red and blue – and in the main graph we see simulated warming under RCP8.5 (an unlikely future world confusingly described by many as the “business as usual” scenario).

In the bottom graphic, cloud feedbacks from models are decomposed into the effect from low cloud amount, from changing high cloud altitude and from low cloud opacity. We see that the amount of low cloud is the biggest feedback with the widest spread, followed by the changing altitude of high clouds. And both of them have a positive feedback. The gray lines extending out cover the range of model responses.

From Zelinka et al 2017

Figure 2 – Click to enlarge

In the next figure – click to enlarge – they show the progression in each IPCC report, helpfully color coded around the breakdown above:

From Zelinka et al 2017

Figure 3 – Click to enlarge

On AR5:

Notably, the high cloud altitude feedback was deemed positive with high confidence due to supporting evidence from theory, observations, and high-resolution models. On the other hand, continuing low confidence was expressed in the sign of low cloud feedback because of a lack of strong observational constraints. However, the AR5 authors noted that high-resolution process models also tended to produce positive low cloud cover feedbacks. The cloud opacity feedback was deemed highly uncertain due to the poor representation of cloud phase and microphysics in models, limited observations with which to evaluate models, and lack of physical understanding. The authors noted that no robust mechanisms contribute a negative cloud feedback.

And on work since:

In the four years since AR5, evidence has increased that the overall cloud feedback is positive. This includes a number of high-resolution modelling studies of low cloud cover that have illuminated the competing processes that govern changes in low cloud coverage and thickness, and studies that constrain long-term cloud responses using observed short-term sensitivities of clouds to changes in their local environment. Both types of analyses point toward positive low cloud feedbacks. There is currently no evidence for strong negative cloud feedbacks..

Onto Ceppi et al 2017. In the graph below we see climate feedback from models broken out into a few parameters:

  • WV+LR – the combination of water vapor and lapse rate changes (lapse rate is the temperature profile with altitude)
  • Albedo – e.g. melting sea ice
  • Cloud total
  • LW cloud – this is longwave effects, i.e., how clouds change terrestrial radiation emitted to space
  • SW cloud – this is shortwave effects, i.e., how clouds reflect solar radiation back to space

From Ceppi et al 2017

Figure 4 – Click to enlarge

Then they break down the cloud feedback further. This graph is well worth understanding. For example, in the second graph (b) we are looking at higher-altitude clouds. We see that the increasing altitude of high clouds causes a positive feedback. The red dots are LW (longwave = terrestrial radiation). If high clouds increase in altitude, the radiation from these clouds to space is lower because the cloud tops are colder. This is a positive feedback (more warming retained in the climate system). The blue dots are SW (shortwave = solar radiation). Increasing the altitude of high clouds has no effect on the reflection of solar radiation – and so the blue dots sit on zero.

Looking at the low clouds – bottom graph (c) – we see that the feedback is almost all from increasing reflection of solar radiation from increasing amounts of low clouds.

From Ceppi et al 2017

Figure 5 

Now a couple more graphs from Ceppi et al – the spatial distribution of cloud feedback from models (note this is different from our figure 1 which showed current cloud radiative effect):

From Ceppi et al 2017

Figure 6

And the cloud feedback by latitude broken down into: altitude effects; amount of cloud; and optical depth (higher optical depth primarily increases the reflection to space of solar radiation but also has an effect on terrestrial radiation).

From Ceppi et al 2017

Figure 7

They state:

The patterns of cloud amount and optical depth changes suggest the existence of distinct physical processes in different latitude ranges and climate regimes, as discussed in the next section. The results in Figure 4 allow us to further refine the conclusions drawn from Figure 2. In the multi-model mean, the cloud feedback in current GCMs mainly results from:

  • globally rising free-tropospheric clouds
  • decreasing low cloud amount at low to middle latitudes, and
  • increasing low cloud optical depth at middle to high latitudes

Cloud feedback is the main contributor to intermodel spread in climate sensitivity, ranging from near zero to strongly positive (−0.13 to 1.24 W/m²K) in current climate models.

It is a combination of three effects present in nearly all GCMs: rising free- tropospheric clouds (a LW heating effect); decreasing low cloud amount in tropics to midlatitudes (a SW heating effect); and increasing low cloud optical depth at high latitudes (a SW cooling effect). Low cloud amount in tropical subsidence regions dominates the intermodel spread in cloud feedback.
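To see why that spread matters, plug it into a simple linear feedback budget. The non-cloud feedback values below (Planck ≈ −3.2, water vapor plus lapse rate ≈ +1.1, surface albedo ≈ +0.3, all in W/m²K) are round representative numbers of my own choosing, not values from these papers:

```python
def ecs(cloud_feedback, forcing_2xco2=3.7,
        planck=-3.2, wv_plus_lr=1.1, albedo=0.3):
    """Equilibrium climate sensitivity from a linear feedback budget:
    ECS = F_2x / -(sum of feedbacks); forcing in W/m^2, feedbacks in W/m^2 K."""
    total = planck + wv_plus_lr + albedo + cloud_feedback
    return forcing_2xco2 / -total

low_end = ecs(cloud_feedback=-0.13)   # weakest cloud feedback in the range
high_end = ecs(cloud_feedback=1.24)   # strongest
# the cloud-feedback spread alone takes ECS from roughly 1.9 K to 6.6 K
```

On this crude accounting, the model range of cloud feedback is enough by itself to more than triple the projected equilibrium warming.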

Happy Christmas to all Science of Doom readers.

Note – if anyone wants to debate the existence of the “greenhouse” effect, please add your comments to Two Basic Foundations or The “Greenhouse” Effect Explained in Simple Terms or any of the other tens of articles on that subject. Comments here on the existence of the “greenhouse” effect will be deleted.

References

Cloud feedback mechanisms and their representation in global climate models, Paulo Ceppi, Florent Brient, Mark D Zelinka & Dennis Hartmann, WIREs Clim Change 2017 – free paper

Clearing clouds of uncertainty, Mark D Zelinka, David A Randall, Mark J Webb & Stephen A Klein, Nature 2017 – paywall paper

Notes

Note 1: From Ceppi et al 2017: CLOUD-RADIATIVE EFFECT AND CLOUD FEEDBACK:

The radiative impact of clouds is measured as the cloud-radiative effect (CRE), the difference between clear-sky and all-sky radiative flux at the top of atmosphere. Clouds reflect solar radiation (negative SW CRE, global-mean effect of −45W/m²) and reduce outgoing terrestrial radiation (positive LW CRE, 27W/m²), with an overall cooling effect estimated at −18W/m² (numbers from Henderson et al.).

CRE is proportional to cloud amount, but is also determined by cloud altitude and optical depth.

The magnitude of SW CRE increases with cloud optical depth, and to a much lesser extent with cloud altitude.

By contrast, the LW CRE depends primarily on cloud altitude, which determines the difference in emission temperature between clear and cloudy skies, but also increases with optical depth. As the cloud properties change with warming, so does their radiative effect. The resulting radiative flux response at the top of atmosphere, normalized by the global-mean surface temperature increase, is known as cloud feedback.

This is not strictly equal to the change in CRE with warming, because the CRE also responds to changes in clear-sky radiation—for example, due to changes in surface albedo or water vapor. The CRE response thus underestimates cloud feedback by about 0.3W/m²K on average. Cloud feedback is therefore the component of CRE change that is due to changing cloud properties only. Various methods exist to diagnose cloud feedback from standard GCM output. The values presented in this paper are either based on CRE changes corrected for noncloud effects, or estimated directly from changes in cloud properties, for those GCMs providing appropriate cloud output. The most accurate procedure involves running the GCM radiation code offline—replacing instantaneous cloud fields from a control climatology with those from a perturbed climatology, while keeping other fields unchanged—to obtain the radiative perturbation due to changes in clouds. This method is computationally expensive and technically challenging, however.
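As a sketch of that correction (the 0.3W/m²K offset is the average quoted above; the ΔCRE and ΔT values are invented for illustration):

```python
def cloud_feedback_from_cre(delta_cre, delta_t, masking_correction=0.3):
    """Estimate cloud feedback (W/m^2 per K of warming) from the change
    in cloud-radiative effect, adding back the average clear-sky
    masking correction described in the note above."""
    return delta_cre / delta_t + masking_correction

# hypothetical model: CRE rises by 0.9 W/m^2 over 3 K of global warming
feedback = cloud_feedback_from_cre(delta_cre=0.9, delta_t=3.0)  # 0.6 W/m^2/K
```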

Read Full Post »
