
Archive for the ‘Climate Models’ Category

In Part Seven – Resolution & Convection we looked at some examples of how model resolution and domain size had big effects on modeled convection.

One commenter highlighted some presentations on issues in GCMs. As there were already a lot of comments on that article, the relevant points appear a long way down. The issue deserves at least a short article of its own.

The presentations, by Paul Williams, Department of Meteorology, University of Reading, UK – all freely available:

The impacts of stochastic noise on climate models

The importance of numerical time-stepping errors

The leapfrog is dead. Long live the leapfrog!

Various papers are highlighted in these presentations (often without a full reference).

Time-Step Dependence

One of the papers cited: Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, Teixeira, Reynolds & Judd 2007 comments first on the Lorenz equations (see Natural Variability and Chaos – Two – Lorenz 1963):

Figure 3a shows the evolution of X for r = 19 for three different time steps (10⁻², 10⁻³, and 10⁻⁴ LTU).

In this regime the solutions exhibit what is often referred to as transient chaotic behavior (Strogatz 1994), but after some time all solutions converge to a stable fixed point.

Depending on the time step used to integrate the equations, the values for the fixed points can be different, which means that the climate of the model is sensitive to the time step.

In this particular case, the solution obtained with 0.01 LTU converges to a positive fixed point while the other two solutions converge to a negative value.

To conclude the analysis of the sensitivity to parameter r, Fig. 3b shows the time evolution (with r =21.3) of X for three different time steps. For time steps 0.01 LTU and 0.0001 LTU the solution ceases to have a chaotic behavior and starts converging to a stable fixed point.

However, for 0.001 LTU the solution stays chaotic, which shows that different time steps may not only lead to uncertainty in the predictions after some time, but may also lead to fundamentally different regimes of the solution.

These results suggest that time steps may have an important impact in the statistics of climate models in the sense that something relatively similar may happen to more complex and realistic models of the climate system for time steps and parameter values that are currently considered to be reasonable.

[Emphasis added]

For people unfamiliar with chaotic systems, it is worth reading Natural Variability and Chaos – One – Introduction and Natural Variability and Chaos – Two – Lorenz 1963. The Lorenz system of three equations creates a very simple model of convection in which we humans have the advantage of god-like powers. Yet, as this paper shows, even with our god-like powers, under certain circumstances we aren’t able to confirm:

  1. the average value of the “climate”, or even
  2. if the climate is a deterministic or chaotic system

The results depend on the time step we have used to solve the set of equations.
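
To get a feel for the kind of experiment being described, here is a minimal sketch (mine, not the authors’) that integrates the Lorenz 1963 equations at r = 19 with three different time steps and prints the state the solution has settled towards. The integration scheme (fixed-step RK4), initial condition and integration length are illustrative choices; the paper’s setup may differ.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, r=19.0, b=8.0/3.0):
    """Right-hand side of the Lorenz 1963 equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def integrate(dt, t_end=200.0, state0=(1.0, 1.0, 1.0)):
    """Fixed-step 4th-order Runge-Kutta integration from t = 0 to t_end."""
    state = np.array(state0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# At r = 19 the chaotic behaviour is transient: solutions eventually settle
# towards one of the two fixed points X = ±sqrt(b(r-1)) ≈ ±6.93.  Which one
# is reached can depend on the time step, as Teixeira et al found.
for dt in (1e-2, 1e-3, 1e-4):
    x_final = integrate(dt)[0]
    print(f"dt = {dt:g} LTU -> X after 200 LTU: {x_final:+.3f}")
```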

The paper then goes on to consider a couple of models, including a weather forecasting model. From their summary:

In the weather and climate prediction community, when thinking in terms of model predictability, there is a tendency to associate model error with the physical parameterizations.

In this paper, it is shown that time truncation error in nonlinear models behaves in a more complex way than in linear or mildly nonlinear models and that it can be a substantial part of the total forecast error.

The fact that it is relatively simple to test the sensitivity of a model to the time step, allowed us to study the implications of time step sensitivity in terms of numerical convergence and error growth in some depth. The simple analytic model proposed in this paper illustrates how the evolution of truncation error in nonlinear models can be understood as a combination of the typical linear truncation error and of the initial condition error associated with the error committed in the first time step integration (proportional to some power of the time step).

A relevant question is how much of this simple study of time step truncation error could help in understanding the behavior of more complex forms of model error associated with the parameterizations in weather and climate prediction models, and its interplay with initial condition error.

Another reference from the presentations is Dependence of aqua-planet simulations on time step, Williamson & Olson 2003.

What is an aquaplanet simulation?

In an aqua-planet the earth is covered with water and has no mountains. The sea surface temperature (SST) is specified, usually with rather simple geometries such as zonal symmetry. The ‘correct’ solutions of aqua-planet tests are not known.

However, it is thought that aqua-planet studies might help us gain insight into model differences, understand physical processes in individual models, understand the impact of changing parametrizations and dynamical cores, and understand the interaction between dynamical cores and parametrization packages. There is a rich history of aqua-planet experiments, from which results relevant to this paper are discussed below.

They found that running different “mechanisms” (dynamical cores) with the same parameterizations produced quite different precipitation results. On further investigation, the time step appeared to be the key change.


Figure 1 – Click to enlarge

Their conclusion:

When running the Neale and Hoskins (2000a) standard aqua-planet test suite with two versions of the CCM3, which differed in the formulation of the dynamical cores, we found a strong sensitivity in the morphology of the time averaged, zonal averaged precipitation.

The two dynamical cores were candidates for the successor model to CCM3; one was Eulerian and the other semi-Lagrangian.

They were each configured as proposed for climate simulation application, and believed to be of comparable accuracy.

The major difference was computational efficiency. In general, simulations with the Eulerian core formed a narrow single precipitation peak centred on the equator, while those with the semi-Lagrangian core produced more precipitation farther from the equator accompanied by a double peak straddling the equator with a minimum centred on the equator..

..We do not know which simulation is ‘correct’. Although a single peak forms with smaller time steps, the simulations do not converge with the smallest time step considered here. The maximum precipitation rate at the equator continues to increase..

..The significance of the time truncation error of parametrizations deserves further consideration in AGCMs forced by real-world conditions.

Stochastic Noise

From Global Thermohaline Circulation. Part I: Sensitivity to Atmospheric Moisture Transport, Xiaoli Wang et al 1999, the strength of the North Atlantic overturning current (the thermohaline circulation) changed significantly with noise:

From Wang et al 1999

Figure 2

The idea behind the experiment is that increasing freshwater fluxes at high latitudes from melting ice (in a warmer world) appear to impact the strength of the Atlantic “conveyor” which brings warm water from nearer the equator to northern Europe (there is a long history of consideration of this question). How sensitive is this to random effects?

In these experiments we also include random variations in the zonal wind stress field north of 46ºN. The variations are uniform in space and have a Gaussian distribution, with zero mean and standard deviation of 1 dyn/cm², based on European Centre for Medium-Range Weather Forecasts (ECMWF) analyses (D. Stammer 1996, personal communication).

Our motivation in applying these random variations in wind stress is illustrated by two experiments, one with random wind variations, the other without, in which μN increases according to the above prescription. Figure 12 shows the time series of the North Atlantic overturning strength in these two experiments. The random wind variations give rise to interannual variations in the strength of the overturning, which are comparable in magnitude to those found in experiments with coupled GCMs (e.g., Manabe and Stouffer 1994), whereas interannual variations are almost absent without them. The variations also accelerate the collapse of the overturning, therefore speeding up the response time of the model to the freshwater flux perturbation (see Fig. 12). The reason for the acceleration of the collapse is that the variations make it harder for the convection to sustain itself.

The convection tends to maintain itself, because of a positive feedback with the overturning circulation (Lenderink and Haarsma 1994). Once the convection is triggered, it creates favorable conditions for further convection there. This positive feedback is so powerful that in the case without random variations the convection does not shut off until the freshening is virtually doubled at the convection site (around year 1000). When the random variations are present, they generate perturbations in the Ekman currents, which are propagated downward to the deep layers, and cause variations in the overturning strength. This weakens the positive feedback.

In general, the random wind stress variations lead to a more realistic variability in the convection sites, and in the strength of the overturning circulation.

We note that, even though the transitions are speeded up by the technique, the character of the model behavior is not fundamentally altered by including the random wind variations.

The presentation on stochastic noise also highlighted a coarse-resolution GCM that didn’t show El Niño features – but after the introduction of random noise it did.

I couldn’t track down the reference – Joshi, Williams & Smith 2010 – so I emailed Paul Williams, who replied very quickly and helpfully. The paper is still “in preparation” (which probably means it will never be finished), but Paul pointed me instead to two related papers that have been published: Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Paul D Williams et al, AMS (2016) and Climatic impacts of stochastic fluctuations in air–sea fluxes, Paul D Williams et al, GRL (2012).

From the 2012 paper:

In this study, stochastic fluctuations have been applied to the air–sea buoyancy fluxes in a comprehensive climate model. Unlike related previous work, which has employed an ocean general circulation model coupled only to a simple empirical model of atmospheric dynamics, the present work has employed a full coupled atmosphere–ocean general circulation model. This advance allows the feedbacks in the coupled system to be captured as comprehensively as is permitted by contemporary high-performance computing, and it allows the impacts on the atmospheric circulation to be studied.

The stochastic fluctuations were introduced as a crude attempt to capture the variability of rapid, sub-grid structures otherwise missing from the model. Experiments have been performed to test the response of the climate system to the stochastic noise.

In two experiments, the net fresh water flux and the net heat flux were perturbed separately. Significant changes were detected in the century-mean oceanic mixed-layer depth, sea-surface temperature, atmospheric Hadley circulation, and net upward water flux at the sea surface. Significant changes were also detected in the ENSO variability. The century-mean changes are summarized schematically in Figure 4. The above findings constitute evidence that noise-induced drift and noise-enhanced variability, which are familiar concepts from simple models, continue to apply in comprehensive climate models with millions of degrees of freedom..

The graph below shows the control experiment (top), followed by the difference between each of two experiments and the control (note the change in vertical axis scale for the two anomaly panels); the two experiments used different methods of adding random noise:

From Williams et al 2012

Figure 3

A key element of the paper is that adding random noise changes the mean values.

From Williams et al 2012

Figure 4

From the 2016 paper:

Faster computers are constantly permitting the development of climate models of greater complexity and higher resolution. Therefore, it might be argued that the need for parameterization is being gradually reduced over time.

However, it is difficult to envisage any model ever being capable of explicitly simulating all of the climatically important components on all of the relevant time scales. Furthermore, it is known that the impact of the subgrid processes cannot necessarily be made vanishingly small simply by increasing the grid resolution, because information from arbitrarily small scales within the inertial subrange (down to the viscous dissipation scale) will always be able to contaminate the resolved scales in finite time.

This feature of the subgrid dynamics perhaps explains why certain systematic errors are common to many different models and why numerical simulations are apparently not asymptoting as the resolution increases. Indeed, the Intergovernmental Panel on Climate Change (IPCC) has noted that the ultimate source of most large-scale errors is that ‘‘many important small-scale processes cannot be represented explicitly in models’’.

And they continue with an excellent explanation:

The major problem with conventional, deterministic parameterization schemes is their assumption that the impact of the subgrid scales on the resolved scales is uniquely determined by the resolved scales. This assumption can be made to sound plausible by invoking an analogy with the law of large numbers in statistical mechanics.

According to this analogy, the subgrid processes are essentially random and of sufficiently large number per grid box that their integrated effect on the resolved scales is predictable. In reality, however, the assumption is violated because the most energetic subgrid processes are only just below the grid scale, placing them far from the limit in which the law of large numbers applies. The implication is that the parameter values that would make deterministic parameterization schemes exactly correct are not simply uncertain; they are in fact indeterminate.

Later:

The question of whether stochastic closure schemes outperform their deterministic counterparts was listed by Williams et al. (2013) as a key outstanding challenge in the field of mathematics applied to the climate system.

Adding noise with zero mean doesn’t produce a zero-mean effect?

The changes to the mean climatological state that were identified in section 3 are a manifestation of what, in the field of stochastic dynamical systems, is called noise-induced drift or noise-induced rectification. This effect arises from interactions between the noise and nonlinearities in the model equations. It permits zero-mean noise to have non-zero-mean effects, as seen in our stochastic simulations.
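
A toy stochastic differential equation – nothing to do with the GCM itself, just an illustration of the concept – shows how zero-mean noise interacting with a nonlinearity shifts the time-mean state. The potential, noise amplitude and integration scheme below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    """Deterministic tendency -dV/dx for the asymmetric, globally confining
    potential V(x) = x**2/2 + x**4/4 - x**3/3; the only equilibrium is x = 0."""
    return -(x + x**3 - x**2)

dt, n_steps, sigma = 0.01, 500_000, 0.7
noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)  # zero-mean forcing

x = 0.0
running_sum = 0.0
for k in range(n_steps):                 # Euler-Maruyama time stepping
    x += drift(x) * dt + noise[k]
    running_sum += x

print("deterministic equilibrium:  0.0")
print(f"time-mean state with noise: {running_sum / n_steps:+.3f}")
# The forcing has zero mean, but because the potential is flatter on the
# positive side of x = 0 the time-mean of x is shifted away from zero:
# noise-induced drift ("rectification").
```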

The paper itself aims..

..to investigate whether climate simulations can be improved by implementing a simple stochastic parameterization of ocean eddies in a coupled atmosphere–ocean general circulation model.

The question is whether adding noise can improve model results as effectively as increasing model resolution – at a much lower computational cost:

We conclude that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost.

In this latter respect, our findings are consistent with those of Berner et al. (2012), who studied the model error in an atmospheric general circulation model. They reported that, although the impact of adding stochastic noise is not universally beneficial in terms of model bias reduction, it is nevertheless beneficial across a range of variables and diagnostics. They also reported that, in terms of improving the magnitudes and spatial patterns of model biases, the impact of adding stochastic noise can be similar to the impact of increasing the resolution. Our results are consistent with these findings. We conclude that oceanic stochastic parameterizations join atmospheric stochastic parameterizations in having the potential to significantly improve climate simulations.

And for people who’ve been educated on the basics of fluids on a rotating planet via experiments on the rotating annulus (a 2d model – along with equations – providing great insights into our 3d planet), Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime, Paul D Williams et al 2010 might be interesting.

Conclusion

Some systems have a lot of non-linearity. This is true of climate and generally of turbulent flows.

In a textbook that I read some time ago on (I think) chaos, the author made the great comment that usually you start out being taught “linear models” and much later come into contact with “non-linear models”. He proposed that a better terminology would be “real world systems” (non-linear) while “simplistic non-real-world teaching models” were the alternative (linear models). I’m paraphrasing.

The point is that most real world systems are non-linear. And many (not all) non-linear systems have difficult properties. The easy stuff you learn – linear systems, aka “simplistic non-real-world teaching models” – isn’t actually relevant to most real world problems, it’s just a stepping stone in giving you the tools to solve the hard problems.

Solving these difficult systems requires numerical methods (there is mostly no analytical solution) and once you start playing around with time-steps, parameter values and model resolution you find that the results can be significantly – and sometimes dramatically – affected by the arbitrary choices. With relatively simple systems (like the Lorenz three-equation convection system) and massive computing power you can begin to find the dependencies. But there isn’t a clear path to see where the dependencies lie (of course, many people have done great work in systematizing (simple) chaotic systems to provide some insights).

GCMs provide insights into climate that we can’t get otherwise.

One way to think about GCMs is that once they mostly agree on the direction of an effect, that provides “high confidence”, and anyone who doesn’t share that confidence is at best a cantankerous individual and at worst has a hidden agenda.

Another way to think about GCMs is that climate models are mostly at the mercy of unverified parameterizations and numerical methods and anyone who does accept their conclusions is naive and doesn’t appreciate the realities of non-linear systems.

Life is complex and either of these propositions could be true, along with anything in between.

More about Turbulence: Turbulence, Closure and Parameterization

References

Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, Teixeira, Reynolds & Judd, Journal of the Atmospheric Sciences (2007) – free paper

Dependence of aqua-planet simulations on time step, Williamson & Olson, Q. J. R. Meteorol. Soc. (2003) – free paper

Global Thermohaline Circulation. Part I: Sensitivity to Atmospheric Moisture Transport, Xiaoli Wang, Peter H Stone, and Jochem Marotzke, American Meteorological Society (1999) – free paper

Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies, Paul D Williams et al, AMS (2016) – free paper

Climatic impacts of stochastic fluctuations in air–sea fluxes, Paul D Williams et al, GRL (2012) – free paper

Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime, Paul Williams, Peter Read & Thomas Haine, J. Fluid Mech. (2010) – free paper


Read Full Post »

A couple of recent articles covered ground related to clouds, but under the Models category – Models, On – and Off – the Catwalk – Part Seven – Resolution & Convection and Part Five – More on Tuning & the Magic Behind the Scenes. In the first article Andrew Dessler, day job climate scientist, made a few comments and in one comment provided some great recent references. One of these was by Paulo Ceppi and colleagues, published this year and freely accessible. Another paper with some complementary explanations is from Mark Zelinka and colleagues, also published this year (but behind a paywall).

In this article we will take a look at the breakdown these papers provide. There is a lot to the Ceppi paper, so we won’t review it all here – hopefully that will follow in a later article.

Globally and annually averaged, clouds cool the planet by around 18W/m² – that’s large compared with the radiative effect of doubling CO2, a value of 3.7W/m². The net effect is made up of two larger opposite effects:

  • cooling from reflecting sunlight (albedo effect) of about 46W/m²
  • warming from the radiative effect of about 28W/m² – clouds absorb terrestrial radiation and reemit from near the top of the cloud where it is colder, this is like the “greenhouse” effect

In this graphic, Zelinka and colleagues show the geographical breakdown of cloud radiative effect averaged over 15 years from CERES measurements:

From Zelinka et al 2017

Figure 1 – Click to enlarge

Note that the cloud radiative effect shown above isn’t feedbacks from warming, it is simply the current effect of clouds. The big question is how this will change with warming.

In the next graphic, the inset at the top shows cloud feedback (note 1) vs ECS from 28 GCMs. ECS is the steady-state temperature increase resulting from doubling CO2. Two models are picked out – red and blue – and in the main graph we see simulated warming under RCP8.5 (an unlikely future world, confusingly described by many as the “business as usual” scenario).

In the bottom graphic, cloud feedbacks from models are decomposed into the effects of low cloud amount, of changing high cloud altitude, and of low cloud opacity. We see that low cloud amount is the biggest feedback with the widest spread, followed by the changing altitude of high clouds – both are positive feedbacks. The gray lines extending out cover the range of model responses.

From Zelinka et al 2017

Figure 2 – Click to enlarge

In the next figure – click to enlarge – they show the progression in each IPCC report, helpfully color coded around the breakdown above:

From Zelinka et al 2017

Figure 3 – Click to enlarge

On AR5:

Notably, the high cloud altitude feedback was deemed positive with high confidence due to supporting evidence from theory, observations, and high-resolution models. On the other hand, continuing low confidence was expressed in the sign of low cloud feedback because of a lack of strong observational constraints. However, the AR5 authors noted that high-resolution process models also tended to produce positive low cloud cover feedbacks. The cloud opacity feedback was deemed highly uncertain due to the poor representation of cloud phase and microphysics in models, limited observations with which to evaluate models, and lack of physical understanding. The authors noted that no robust mechanisms contribute a negative cloud feedback.

And on work since:

In the four years since AR5, evidence has increased that the overall cloud feedback is positive. This includes a number of high-resolution modelling studies of low cloud cover that have illuminated the competing processes that govern changes in low cloud coverage and thickness, and studies that constrain long-term cloud responses using observed short-term sensitivities of clouds to changes in their local environment. Both types of analyses point toward positive low cloud feedbacks. There is currently no evidence for strong negative cloud feedbacks..

Onto Ceppi et al 2017. In the graph below we see climate feedback from models broken out into a few parameters:

  • WV+LR – the combination of water vapor and lapse rate changes (lapse rate is the temperature profile with altitude)
  • Albedo – e.g. melting sea ice
  • Cloud total
  • LW cloud – this is longwave effects, i.e., how clouds change terrestrial radiation emitted to space
  • SW cloud – this is shortwave effects, i.e., how clouds reflect solar radiation back to space

From Ceppi et al 2017

Figure 4 – Click to enlarge

Then they break down the cloud feedback further. This graph is well worth understanding. For example, in the second graph (b) we are looking at higher altitude clouds. We see that the increasing altitude of high clouds causes a positive feedback. The red dots are LW (longwave = terrestrial radiation). If high clouds increase in altitude the radiation from these clouds to space is lower because the cloud tops are colder. This is a positive feedback (more warming retained in the climate system). The blue dots are SW (shortwave = solar radiation). If high clouds increase in altitude it has no effect on the reflection of solar radiation – and so the blue dots are on zero.

Looking at the low clouds – bottom graph (c) – we see that the feedback is almost all from increasing reflection of solar radiation from increasing amounts of low clouds.

From Ceppi et al 2017

Figure 5 
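
The size of the longwave effect from raising cloud tops can be illustrated with a quick blackbody estimate – the two cloud-top temperatures below are values I have picked for illustration, not numbers from the paper.

```python
# Emission to space from a cloud top, treated as a blackbody: F = sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

for label, t_top in [("lower, warmer cloud top", 225.0),
                     ("higher, colder cloud top", 215.0)]:
    print(f"{label} at {t_top:.0f} K emits {SIGMA * t_top**4:6.1f} W/m^2")

# The higher (colder) cloud top emits less to space, so if high clouds rise
# as the climate warms, more terrestrial radiation is retained - a positive
# longwave feedback.
```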

Now a couple more graphs from Ceppi et al – the spatial distribution of cloud feedback from models (note this is different from our figure 1 which showed current cloud radiative effect):

From Ceppi et al 2017

Figure 6

And the cloud feedback by latitude broken down into: altitude effects; amount of cloud; and optical depth (higher optical depth primarily increases the reflection to space of solar radiation but also has an effect on terrestrial radiation).

From Ceppi et al 2017

Figure 7

They state:

The patterns of cloud amount and optical depth changes suggest the existence of distinct physical processes in different latitude ranges and climate regimes, as discussed in the next section. The results in Figure 4 allow us to further refine the conclusions drawn from Figure 2. In the multi-model mean, the cloud feedback in current GCMs mainly results from:

  • globally rising free-tropospheric clouds
  • decreasing low cloud amount at low to middle latitudes, and
  • increasing low cloud optical depth at middle to high latitudes

Cloud feedback is the main contributor to intermodel spread in climate sensitivity, ranging from near zero to strongly positive (−0.13 to 1.24 W/m²K) in current climate models.

It is a combination of three effects present in nearly all GCMs: rising free-tropospheric clouds (a LW heating effect); decreasing low cloud amount in tropics to midlatitudes (a SW heating effect); and increasing low cloud optical depth at high latitudes (a SW cooling effect). Low cloud amount in tropical subsidence regions dominates the intermodel spread in cloud feedback.
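
To see how that spread in cloud feedback translates into a spread in ECS, here is a rough back-of-envelope calculation. The non-cloud feedback values and the 3.7 W/m² forcing are typical textbook numbers, not taken from Ceppi et al, and holding them fixed while only the cloud term varies is a simplification.

```python
# ECS estimate from a linear feedback budget: at equilibrium the radiative
# imbalance is zero, so dT = -F / lambda_total (lambda_total < 0 for stability).
F_2XCO2 = 3.7          # W/m^2, forcing from doubling CO2
LAMBDA_PLANCK = -3.2   # W/m^2/K, blackbody (Planck) response (typical value)
LAMBDA_WV_LR = 1.1     # W/m^2/K, water vapour + lapse rate (typical value)
LAMBDA_ALBEDO = 0.35   # W/m^2/K, surface albedo (typical value)

for lambda_cloud in (-0.13, 0.5, 1.24):   # range quoted by Ceppi et al
    lambda_total = LAMBDA_PLANCK + LAMBDA_WV_LR + LAMBDA_ALBEDO + lambda_cloud
    ecs = -F_2XCO2 / lambda_total
    print(f"cloud feedback {lambda_cloud:+.2f} W/m^2/K -> ECS ~ {ecs:4.1f} K")
```

The high end overstates what models actually produce, because the other feedbacks also vary between models, but it shows why the cloud term dominates the intermodel spread in climate sensitivity.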

Happy Christmas to all Science of Doom readers.

Note – if anyone wants to debate the existence of the “greenhouse” effect, please add your comments to Two Basic Foundations or The “Greenhouse” Effect Explained in Simple Terms or any of the other tens of articles on that subject. Comments here on the existence of the “greenhouse” effect will be deleted.

References

Cloud feedback mechanisms and their representation in global climate models, Paulo Ceppi, Florent Brient, Mark D Zelinka & Dennis Hartmann, WIREs Clim Change 2017 – free paper

Clearing clouds of uncertainty, Mark D Zelinka, David A Randall, Mark J Webb & Stephen A Klein, Nature 2017 – paywall paper

Notes

Note 1: From Ceppi et al 2017: CLOUD-RADIATIVE EFFECT AND CLOUD FEEDBACK:

The radiative impact of clouds is measured as the cloud-radiative effect (CRE), the difference between clear-sky and all-sky radiative flux at the top of atmosphere. Clouds reflect solar radiation (negative SW CRE, global-mean effect of −45W/m²) and reduce outgoing terrestrial radiation (positive LW CRE, 27W/m²), with an overall cooling effect estimated at −18W/m² (numbers from Henderson et al.).

CRE is proportional to cloud amount, but is also determined by cloud altitude and optical depth.

The magnitude of SW CRE increases with cloud optical depth, and to a much lesser extent with cloud altitude.

By contrast, the LW CRE depends primarily on cloud altitude, which determines the difference in emission temperature between clear and cloudy skies, but also increases with optical depth. As the cloud properties change with warming, so does their radiative effect. The resulting radiative flux response at the top of atmosphere, normalized by the global-mean surface temperature increase, is known as cloud feedback.

This is not strictly equal to the change in CRE with warming, because the CRE also responds to changes in clear-sky radiation—for example, due to changes in surface albedo or water vapor. The CRE response thus underestimates cloud feedback by about 0.3W/m² on average. Cloud feedback is therefore the component of CRE change that is due to changing cloud properties only. Various methods exist to diagnose cloud feedback from standard GCM output. The values presented in this paper are either based on CRE changes corrected for noncloud effects, or estimated directly from changes in cloud properties, for those GCMs providing appropriate cloud output. The most accurate procedure involves running the GCM radiation code offline—replacing instantaneous cloud fields from a control climatology with those from a perturbed climatology, while keeping other fields unchanged—to obtain the radiative perturbation due to changes in clouds. This method is computationally expensive and technically challenging, however.
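
As a minimal sketch of the simplest of these diagnostic approaches – the change-in-CRE method with a fixed correction for non-cloud effects – something like the following; the input numbers are invented for illustration, and the 0.3 W/m²K correction is just the average value quoted above.

```python
# Cloud feedback diagnosed from the change in cloud-radiative effect (CRE),
# corrected for the clear-sky (non-cloud) contribution to the CRE change.
def cloud_feedback_from_cre(cre_control, cre_warmed, d_temp, clear_sky_correction=0.3):
    """Fluxes in W/m^2, warming d_temp in K, result in W/m^2/K.
    clear_sky_correction is the average adjustment quoted by Ceppi et al."""
    d_cre_per_K = (cre_warmed - cre_control) / d_temp
    return d_cre_per_K + clear_sky_correction

# Hypothetical example: CRE changes from -18.0 to -16.5 W/m^2 after 3 K of warming.
feedback = cloud_feedback_from_cre(cre_control=-18.0, cre_warmed=-16.5, d_temp=3.0)
print(f"diagnosed cloud feedback ~ {feedback:.2f} W/m^2/K")  # ~0.80
```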

Read Full Post »

In the comments on Part Five there was some discussion on Mauritsen & Stevens 2015 which looked at the “iris effect”:

A controversial hypothesis suggests that the dry and clear regions of the tropical atmosphere expand in a warming climate and thereby allow more infrared radiation to escape to space

One of the big challenges in climate modeling (there are many) is model resolution and “sub-grid parameterization”. A climate model is created by breaking up the atmosphere (and ocean) into “small” cells of something like 200km x 200km, assigning one value in each cell for parameters like N-S wind, E-W wind and up-down wind – and solving the set of equations (momentum, heat transfer and so on) across the whole earth. However, in one cell like this below you have many small regions of rapidly ascending air (convection) topped by clouds of different thicknesses and different heights and large regions of slowly descending air:

Held and Soden (2000)

Held and Soden (2000)

The model can’t resolve the actual processes inside the grid cell – that’s the nature of solving the equations on a finite grid. So, of course, the “parameterization schemes” used to figure out how much cloud, rain and humidity results from, say, a warming earth are problematic and very hard to verify.
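
As a concrete, heavily simplified illustration of what a sub-grid parameterization looks like, here is a sketch of a relative-humidity-based cloud fraction scheme in the spirit of Sundqvist-type schemes: the grid cell only knows its mean relative humidity, and the cloudy fraction of the cell is diagnosed from that single number via a tunable threshold. This is my own minimal sketch, not the scheme from any particular GCM.

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnose the cloudy fraction of a grid cell from its mean relative
    humidity (0-1), Sundqvist-style: no cloud below the critical RH, fully
    overcast at saturation, and a smooth curve in between.  rh_crit is a
    tunable parameter - exactly the kind of knob that model tuning adjusts
    and that is hard to verify observationally."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

for rh in (0.7, 0.85, 0.95, 1.0):
    print(f"grid-mean RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")
```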

Running higher resolution models helps to illuminate the subject. We can’t run these higher resolution models for the whole earth – instead all kinds of smaller scale model experiments are done which allow climate scientists to see which factors affect the results.

Here is the “plain language summary” from Organization of tropical convection in low vertical wind shears: Role of updraft entrainment, Tompkins & Semie 2017:

Thunderstorms dry out the atmosphere since they produce rainfall. However, their efficiency at drying the atmosphere depends on how they are arranged; take a set of thunderstorms and sprinkle them randomly over the tropics and the troposphere will remain quite moist, but take that same number of thunderstorms and place them all close together in a “cluster” and the atmosphere will be much drier.

Previous work has indicated that thunderstorms might start to cluster more as temperatures increase, thus drying the atmosphere and letting more infrared radiation escape to space as a result – acting as a strong negative feedback on climate, the so-called iris effect.

We investigate the clustering mechanisms using 2km grid resolution simulations, which show that strong turbulent mixing of air between thunderstorms and their surrounding is crucial for organization to occur. However, with grid cells of 2 km this mixing is not modelled explicitly but instead represented by simple model approximations, which are hugely uncertain. We show three commonly used schemes differ by over an order of magnitude. Thus we recommend that further investigation into the climate iris feedback be conducted in a coordinated community model intercomparison effort to allow model uncertainty to be robustly accounted for.

And a little about computation resources and resolution. CRMs are “cloud resolving models”, i.e. higher resolution models over smaller areas:

In summary, cloud-resolving models with grid sizes of the order of 1 km have revealed many of the potential feedback processes that may lead to, or enhance, convective organization. It should be recalled however, that these studies are often idealized and involve computational compromises, as recently discussed in Mapes [2016]. The computational requirements of RCE experiments that require more than 40 days of integration still largely prohibit horizontal resolutions finer than 1 km. Simulations such as Tompkins [2001c], Bryan et al. [2003], and Khairoutdinov et al. [2009] that use resolutions less than 350 m were restricted to 1 or 2 days. If water vapor entrainment is a factor for either the establishment and/or the amplification of convective organization, it raises the issue that the organization strength in CRM models using grid sizes of the order of 1 km or larger is likely to be sensitive to the model resolution and simulation framework in terms of the choice of subgrid-scale diffusion and mixing.

In their conclusion on what resolution is needed:

.. and states that convergence is achieved when the most energetic eddies are well resolved, which is not the case at 2 km, and Craig and Dornbrack [2008] also suggest that resolving clouds requires grid sizes that resolve the typical buoyancy scale of a few hundred meters. The present state of the art of LES is represented by Heinze et al. [2016], integrating a model for the whole of Germany with a 100 m grid spacing, for a period of 4 days.

They continue:

The simulations in this paper also highlight the fact that intricacies of the assumptions contained in the parameterization of small-scale physics can strongly impact the possibility of crossing the threshold from unorganized to organized equilibrium states. The expense of such simulations has usually meant that only one model configuration is used concerning assumptions of small-scale processes such as mixing and microphysics, often initialized from a single initial condition. The potential of multiple equilibria and also an hysteresis in the transition between organized and unorganized states [Muller and Held, 2012], points to the requirement for larger integration ensembles employing a range of initial and boundary conditions, and physical parameterization assumptions. The ongoing requirements of large-domain, RCE numerical experiments imply that this challenge can be best met with a community-based, convective organization model intercomparison project (CORGMIP).

Here is Detailed Investigation of the Self-Aggregation of Convection in Cloud-Resolving Simulations, Muller & Held (2012). The second author is Isaac Held, often referenced on this blog, who has been writing very interesting papers for about 40 years:

It is well known that convection can organize on a wide range of scales. Important examples of organized convection include squall lines, mesoscale convective systems (Emanuel 1994; Holton 2004), and the Madden–Julian oscillation (Grabowski and Moncrieff 2004). The ubiquity of convective organization above tropical oceans has been pointed out in several observational studies (Houze and Betts 1981; WCRP 1999; Nesbitt et al. 2000)..

..Recent studies using a three-dimensional cloud resolving model show that when the domain is sufficiently large, tropical convection can spontaneously aggregate into one single region, a phenomenon referred to as self-aggregation (Bretherton et al. 2005; Emanuel and Khairoutdinov 2010). The final climate is a spatially organized atmosphere composed of two distinct areas: a moist area with intense convection, and a dry area with strong radiative cooling (Figs. 1b and 2b,d). Whether or not a horizontally homogeneous convecting atmosphere in radiative convective equilibrium self-aggregates seems to depend on the domain size (Bretherton et al. 2005). More generally, the conditions under which this instability of the disorganized radiative convective equilibrium state of tropical convection occurs, as well as the feedback responsible, remain unclear.

We see the difference in self-aggregation of convection between the two domain sizes below:

 

From Muller & Held 2012

Figure 1

The effect on rainfall and OLR (outgoing longwave radiation) is striking, and also note that the mean is affected:

From Muller & Held 2012

Figure 2

Then they look at varying model resolution (dx), domain size (L) and also the initial conditions. The higher resolution models don’t produce the self-aggregation, but the results are also sensitive to domain size and initial conditions. The black crosses denote model runs where the convection stayed disorganized, the red circles where the convection self-aggregated:

From Muller & Held 2012

Figure 3

In their conclusion:

The relevance of self-aggregation to observed convective organization (mesoscale convective systems, mesoscale convective complexes, etc.) requires further investigation. Based on its sensitivity to resolution (Fig. 6a), it may be tempting to see self-aggregation as a numerical artifact that occurs at coarse resolutions, whereby low-cloud radiative feedback organizes the convection.

Nevertheless, it is not clear that self-aggregation would not occur at fine resolution if the domain size were large enough. Furthermore, the hysteresis (Fig. 6b) increases the importance of the aggregated state, since it expands the parameter span over which the aggregated state exists as a stable climate equilibrium. The existence of the aggregated state appears to be less sensitive to resolution than the self-aggregation process. It is also possible that our results are sensitive to the value of the sea surface temperature; indeed, Emanuel and Khairoutdinov (2010) find that warmer sea surface temperatures tend to favor the spontaneous self-aggregation of convection.

Current convective parameterizations used in global climate models typically do not account for convective organization.

More two-dimensional and three dimensional simulations at high resolution are desirable to better understand self-aggregation, and convective organization in general, and its dependence on the subgrid-scale closure, boundary layer, ocean surface, and radiative scheme used. The ultimate goal is to help guide and improve current convective parameterizations.

From the results in their paper we might think that self-aggregation of convection was a model artifact that disappears with higher resolution models (they are careful not to really conclude this). Tompkins & Semie 2017 suggested that Muller & Held’s results may be just a dependence on their sub-grid parameterization scheme (see note 1).

From Hohenegger & Stevens 2016, how convection self-aggregates over time in their model:

From Hohenegger & Stevens 2016

Figure 4 – Click to enlarge

From a review paper on the same topic by Wing et al 2017:

The novelty of self-aggregation is reflected by the many remaining unanswered questions about its character, causes and effects. It is clear that interactions between longwave radiation and water vapor and/or clouds are critical: non-rotating aggregation does not occur when they are omitted. Beyond this, the field is in play, with the relative roles of surface fluxes, rain evaporation, cloud versus water vapor interactions with radiation, wind shear, convective sensitivity to free atmosphere water vapor, and the effects of an interactive surface yet to be firmly characterized and understood.

The sensitivity of simulated aggregation not only to model physics but to the size and shape of the numerical domain and resolution remains a source of concern about whether we have even robustly characterized and simulated the phenomenon. While aggregation has been observed in models (e.g., global models) in which moist convection is parameterized, it is not yet clear whether such models simulate aggregation with any real fidelity. The ability to simulate self-aggregation using models with parameterized convection and clouds will no doubt become an important test of the quality of such schemes.

Understanding self-aggregation may hold the key to solving a number of obstinate problems in meteorology and climate. There is, for example, growing optimism that understanding the interplay among radiation, surface fluxes, clouds, and water vapor may lead to robust accounts of the Madden Julian oscillation and tropical cyclogenesis, two long-standing problems in atmospheric science.

Indeed, the difficulty of modeling these phenomena may be owing in part to the challenges of simulating them using representations of clouds and convection that were not designed or tested with self-aggregation in mind.

Perhaps most exciting is the prospect that understanding self-aggregation may lead to an improved understanding of climate. The strong hysteresis observed in many simulations of aggregation—once a cluster is formed it tends to be robust to changing environmental conditions—points to the possibility of intransitive or almost intransitive behavior of tropical climate.

The strong drying that accompanies aggregation, by cooling the system, may act as a kind of thermostat, if indeed the existence or degree of aggregation depends on temperature. Whether or how well this regulation is simulated in current climate models depends on how well such models can simulate aggregation, given the imperfections of their convection and cloud parameterizations.

Clearly, there is much exciting work to be done on aggregation of moist convection.

[Emphasis added]

Conclusion

Climate science asks difficult questions that are currently unanswerable. This goes against two myths that circulate in the media and on many blogs: on the one hand, the myth that the important points are all worked out; and on the other, the myth that climate science is a political movement creating alarm, with each paper reaching more serious and certain conclusions than the one before. Reading lots of papers, I find a real science. What is reported in the media is unrelated to the state of the field.

At the heart of modeling climate is the need to model turbulent fluid flows (air and water) and this can’t be done. Well, it can be done, but using schemes that leave open the possibility or probability that further work will reveal them to be inadequate in a serious way. Running higher resolution models helps to answer some questions, but more often reveals yet new questions. If you have a mathematical background this is probably easy to grasp. If you don’t it might not make a whole lot of sense, but hopefully you can see from the papers that very recent papers are not yet able to resolve some challenging questions.

At some stage sufficiently high resolution models will be validated and possibly allow development of more realistic parameterization schemes for GCMs. For example, here is Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Reike Heinze et al 2016, evaluating their model with 150m grid resolution – 3.3bn grid points on a sub-1 second time step over 4 days over Germany:

These results consistently show that the high-resolution model significantly improves the representation of small- to mesoscale variability. This generates confidence in the ability to simulate moist processes with fidelity. When using the model output to assess turbulent and moist processes and to evaluate and develop climate model parametrizations, it seems relevant to make use of the highest resolution, since the coarser-resolved model variants fail to reproduce aspects of the variability.

Related Articles

Ensemble Forecasting – why running a lot of models gets better results than one “best” model

Latent heat and Parameterization – example of one parameterization and its problems

Turbulence, Closure and Parameterization – explaining how the insoluble problem of turbulence gets handled in models

Part Four – Tuning & the Magic Behind the Scenes – how some important model choices get made

Part Five – More on Tuning & the Magic Behind the Scenes – parameterization choices, aerosol properties and the impact on temperature hindcasting, plus a high resolution model study

Part Six – Tuning and Seasonal Contrasts – model targets and model skill, plus reviewing seasonal temperature trends in observations and models

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens, Nature Geoscience (2015) – free paper

Organization of tropical convection in low vertical wind shears: Role of updraft entrainment, Adrian M Tompkins & Addisu G Semie, Journal of Advances in Modeling Earth Systems (2017) – free paper

Detailed Investigation of the Self-Aggregation of Convection in Cloud-Resolving Simulations, Caroline Muller & Isaac Held, Journal of the Atmospheric Sciences (2012) – free paper

Coupled radiative convective equilibrium simulations with explicit and parameterized convection, Cathy Hohenegger & Bjorn Stevens, Journal of Advances in Modeling Earth Systems (2016) – free paper

Convective Self-Aggregation in Numerical Simulations: A Review, Allison A Wing, Kerry Emanuel, Christopher E Holloway & Caroline Muller, Surv Geophys (2017) – free paper

Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Reike Heinze et al, Quarterly Journal of the Royal Meteorological Society (2016)

Other papers worth reading:

Self-aggregation of convection in long channel geometry, Allison A Wing & Timothy W Cronin, Quarterly Journal of the Royal Meteorological Society (2016) – paywall paper

Notes

Note 1: The equations for turbulent fluid flow can’t be solved directly for the atmosphere because of the computing resources that would be required. Energy gets dissipated at the scales where viscosity comes into play – in air this is a few millimetres. So even much higher resolution models like the cloud-resolving models (CRMs), with scales of 1 km or even smaller, still need some kind of parameterization to work. For more on this see Turbulence, Closure and Parameterization.
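
To put rough numbers on why this matters (order-of-magnitude arithmetic of my own, not from any paper): resolving the dissipation scale directly over the whole atmosphere would need an absurd number of grid cells, which is why every model – GCM or CRM – has to parameterize something.

```python
EARTH_SURFACE_AREA = 5.1e14   # m^2
ATMOS_DEPTH = 2.0e4           # m, roughly the depth containing most weather

def cell_count(horizontal_dx, vertical_dz):
    """Approximate number of grid cells for a given resolution."""
    columns = EARTH_SURFACE_AREA / horizontal_dx**2
    levels = ATMOS_DEPTH / vertical_dz
    return columns * levels

print(f"GCM  (~100 km, 50 levels): {cell_count(1e5, 400):.1e} cells")
print(f"CRM  (~1 km, 200 m):       {cell_count(1e3, 200):.1e} cells")
print(f"DNS  (~1 mm everywhere):   {cell_count(1e-3, 1e-3):.1e} cells")
```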

Read Full Post »

I was re-reading Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens from 2015 (because I referenced it in a recent comment) and then looked up other recent papers citing it. One interesting review paper is by Stevens et al from 2016. I recognized his name from many other papers and it looks like Bjorn Stevens has been publishing papers since the early 1990s, with almost 200 papers in peer-reviewed journals, mostly on this and related topics. Likewise, Sherwood and Bony (two of the coauthors) are very familiar names from this field.

Many regular readers (and I’m sure new readers of this blog) will understand much more than me about current controversies in climate sensitivity. The question in brief (of course there are many subtleties) – how much will the earth warm if we double CO2? It’s a very important question. As the authors explain at the start:

Nearly 40 years have passed since the U.S. National Academies issued the “Charney Report.” This landmark assessment popularized the concept of the “equilibrium climate sensitivity” (ECS), the increase of Earth’s globally and annually averaged near surface temperature that would follow a sustained doubling of atmospheric carbon dioxide relative to its preindustrial value. Through the application of physical reasoning applied to the analysis of output from a handful of relatively simple models of the climate system, Jule G. Charney and his co-authors estimated a range of 1.5 –4.5 K for the ECS [Charney et al., 1979].

Charney is an eminent name you will know, along with Lorenz, if you read about the people who broke ground on numerical weather modeling. The authors explain a little about the definition of ECS:

ECS is an idealized but central measure of climate change, which gives specificity to the more general idea of Earth’s radiative response to warming. This specificity makes ECS something that is easy to grasp, if not to realize. For instance, the high heat capacity and vast carbon stores of the deep ocean mean that a new climate equilibrium would only be fully attained a few millennia after an applied forcing [Held et al., 2010; Winton et al., 2010; Li et al., 2012]; and uncertainties in the carbon cycle make it difficult to know what level of emissions is compatible with a doubling of the atmospheric CO2 concentration in the first place.

Concepts such as the “transient climate response” or the “transient climate response to cumulative carbon emissions” have been introduced to account for these effects and may be a better index of the warming that will occur within a century or two [Allen and Frame, 2007; Knutti and Hegerl, 2008; Collins et al., 2013; MacDougall, 2016].

But the ECS is strongly related and conceptually simpler, so it endures as the central measure of Earth’s susceptibility to forcing [Flato et al., 2013].

And about the implications of narrowing the range of ECS:

The socioeconomic value of better understanding the ECS is well documented. If the ECS were well below 1.5 K, climate change would be a less serious problem. The stakes are much higher for the upper bound. If the ECS were above 4.5 K, immediate and severe reductions of greenhouse gas emissions would be imperative to avoid dangerous climate changes within a few human generations.

From a mitigation point of view, the difference between an ECS of 1.5 K and 4.5 K corresponds to about a factor of two in the allowable CO2 emissions for a given temperature target [Stocker et al., 2013] and it explains why the value of learning more about the ECS has been appraised so highly [Cooke et al., 2013; Neubersch et al., 2014].

The ECS also gains importance because it conditions many other impacts of greenhouse gases, such as regional temperature and rainfall [Bony et al., 2013; Tebaldi and Arblaster, 2014], and even extremes [Seneviratne et al., 2016], knowledge of which is required for developing effective adaptation strategies. Being an important and simple measure of climate change, the ECS is something that climate science should and must be able to better understand and quantify more precisely.

One of the questions they raise is at the heart of my own question: is climate sensitivity a constant that we can measure – a value with some durable meaning – or is it dependent on the actual state of the climate at the time? For example, there are attempts to measure it via the climate response during an El Niño: we see the climate warm and we measure how the top-of-atmosphere radiation balance changes. Or we attempt to measure the difference in ocean temperature between the end of the last ice age and today and deduce climate sensitivity from that. Perhaps I have a mental picture of non-linear systems that is preventing me from seeing the obvious, but the picture I have in my head is that the dependence of the top-of-atmosphere radiation balance on temperature is not a constant.

Here is their commentary. They use the term “pattern effect” for my mental model described above:

Hence, a generalization of the concept of climate sensitivity to different eras may need to account for differences that arise from the different base state of the climate system, increasingly so for large perturbations.

Even for small perturbations, there is mounting evidence that the outward radiation may be sensitive to the geographic pattern of surface temperature changes. Senior and Mitchell [2000] argued that if warming is greater over land, or at high latitudes, different feedbacks may occur than for the case where the same amount of warming is instead concentrated over tropical oceans.

These effects appear to be present in a range of models [Armour et al., 2013; Andrews et al., 2015]. Physically they can be understood because clouds—and their impact on radiation—are sensitive to changes in the atmospheric circulation, which responds to geographic differences in warming [Kang et al., 2013], or simply because an evolving pattern of surface warming weights local responses differently at different times [Armour et al., 2013].

Hence different patterns of warming, occurring on different timescales, may be associated with stronger or weaker radiative responses. This introduces an additional state dependence, one that is not encapsulated by the global mean temperature. We call this a “pattern effect.” Pattern effects are thought to be important for interpreting changes over the instrumental period [Gregory and Andrews, 2016], and may contribute to the state dependence of generalized measures of Earth’s climate sensitivity as inferred from the geological record.

Some of my thoughts are that the insoluble questions on this specific topic are also tied into the question about the climate being chaotic vs just weather being chaotic – see for example, Natural Variability and Chaos – Four – The Thirty Year Myth. In that article we look at the convention of defining climate as the average of 30 years of weather and why that “eliminates” chaos, or doesn’t. Non-linear systems have lots of intractable problems – more on that topic in the whole series Natural Variability and Chaos. It’s good to see it being mentioned in this paper.

Read the whole paper – it reviews the conditions necessary for very low climate sensitivity and for very high climate sensitivity, with the idea being that if one necessary condition can be ruled out then the very low and/or very high climate sensitivity can be ruled out. The paper also includes some excellent references for further insights.

From Stevens et al 2016

Click to enlarge

Happy Thanksgiving to our US readers.

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen & Bjorn Stevens, Nature Geoscience (2015) – paywall paper

Prospects for narrowing bounds on Earth’s equilibrium climate sensitivity, Bjorn Stevens, Steven C Sherwood, Sandrine Bony & Mark J Webb, Earth’s Future (2016) – free paper

Read Full Post »

In Part Five – More on Tuning & the Magic Behind the Scenes and also in the earlier Part Four we looked at the challenge of selecting parameters in climate models. A recent 2017 paper on this topic by Frédéric Hourdin and colleagues is very illuminating. One of the co-authors is Thorsten Mauritsen, the principal author of the 2012 paper we reviewed in Part Four. Another co-author is Jean-Christophe Golaz, the principal author of the 2013 paper we reviewed in Part Five.

The topics are similar but there is some interesting additional detail and commentary. The paper is open access and, as always, I recommend reading the whole thing.

One of the key points is that climate modeling groups need to be specific about their “target” – were they trying to get the model to match recent climatology? The top-of-atmosphere radiation balance? The last 100 years of temperature trends? If we know that a model was developed with an eye on a particular target, then getting that target right doesn’t demonstrate model skill.

Because of the uncertainties in observations and in the model formulation, the possible parameter choices are numerous and will differ from one modeling group to another. These choices should be more often considered in model intercomparison studies. The diversity of tuning choices reflects the state of our current climate understanding, observation, and modeling. It is vital that this diversity be maintained. It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.

They comment on the paper by Jeffrey Kiehl from 2007 (referenced in The Debate is Over – 99% of Scientists believe Gravity and the Heliocentric Solar System so therefore..) which showed how models with a higher sensitivity to CO2 tend to have a stronger counter-balancing negative forcing from aerosols.

And later in the paper:

The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development.

Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming.

The question of developing toward the twentieth-century warming therefore is an area of vigorous debate within the community..

..The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals.

It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.

And so, on to another recent paper, by Sumant Nigam and colleagues. They examine the temperature trends by season over the last 100 years and compare them against models. They look only at the northern hemisphere over land, due to the better temperature dataset available (compared with the southern hemisphere).

Here are the observed trends for each of the four seasons – I find the differences between the seasonal trends fascinating:

From Nigam et al 2017

Figure 1 – Click to enlarge

Then they compare the observations to some of the models used in IPCC AR5 (from the model intercomparison project, CMIP5) – the top line is observations, and each line below is a different model. When we compare the geographical distribution of the winter–summer trend (right column) we can see that the models don’t do very well:

From Nigam et al 2017

Figure 2 – Click to enlarge

From their conclusion:

The urgent need for shifting the evaluative and diagnostic focus away from the customary annual mean toward the seasonal cycle of secular warming is manifest in the inability of the leading climate models (whose simulations inform the IPCC’s Fifth Assessment Report) to generate realistic and robust (large signal-to-noise ratio) twentieth-century winter and summer SAT trends over the northern continents. The large intra-ensemble SD of century-long SAT trends in some IPCC AR5 models (e.g., GFDL-CM3) moreover raises interesting questions: If this subset of climate models is realistic, especially in generation of ultra-low-frequency variability, is the century-long (1902–2014) linear trend in observed SAT—a one-member ensemble of the climate record—a reliable indicator of the secular warming signal?
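The “one-member ensemble” point is easy to make concrete with a toy example (entirely synthetic and of my own construction – not from the paper): generate many realizations of a temperature series that all share the same forced warming but differ in their low-frequency internal variability, fit a linear trend to each, and look at the spread of the fitted trends.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(113)              # 1902-2014, the trend period used in the paper
forced = 0.008 * years              # a made-up forced warming of 0.8 C per century

def red_noise(n, phi=0.9, sigma=0.1):
    """AR(1) 'red' noise - a crude stand-in for low-frequency internal variability."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

# an ensemble of realizations of the "same climate": forced signal + internal variability
trends = []
for _ in range(1000):
    series = forced + red_noise(len(years))
    slope_per_century = np.polyfit(years, series, 1)[0] * 100.0
    trends.append(slope_per_century)

print("true forced trend: 0.80 C/century")
print(f"fitted trends: {np.mean(trends):.2f} +/- {np.std(trends):.2f} C/century")

With strong enough low-frequency variability, the linear trend of any single realization – and the observed record is a single realization – can sit a fair distance from the forced signal, which is essentially the question the authors are raising.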

I’ve commented a number of times in various articles – people who don’t read climate science papers often have some idea that climate scientists are monolithically opposed to questioning model results or questioning “the orthodoxy”. This is contrary to what you find if you read lots of papers. It might be that press releases that show up in The New York Times, CNN or the BBC (or pick another ideological bellwether) have some kind of monolithic sameness but this just demonstrates that no one interested in finding out anything important (apart from the weather and celebrity news) should ever watch/read media outlets.

They continue:

The relative contribution of both mechanisms to the observed seasonality in century-long SAT trends needs further assessment because of uncertainties in the diagnosis of evapotranspiration and sea level pressure from the century-long observational records. Climate system models—ideal tools for investigation of mechanisms through controlled experimentation—are unfortunately not yet ready given their inability to simulate the seasonality of trends in historical simulations.

Subversive indeed.

Their investigation digs into evapotranspiration – the additional water made available by plants for evaporation, which removes heat from the surface during the summer months.

Conclusion

“All models are wrong but some are useful” – a statement attributed to the statistician George Box, a modeler from a different profession (statistical process control), and sometimes also quoted by climate modelers.

This is always a good way to think about models. Perhaps the inability of climate models to reproduce seasonal trends is inconsequential – or perhaps it is important. Models fail on many levels. The question is why, and the answers lead to better models.

Climate science is a real science, contrary to the claims of many people who don’t read many climate science papers, because many published papers ask important and difficult questions and critique the current state of the science. That is, falsifiability is being addressed. These questions might not become media headlines, or even make it into the Summary for Policymakers in IPCC reports, but papers asking these questions are not outliers.

I found both of these papers very interesting. Hourdin et al because they ask valuable questions about how models are tuned, and Nigam et al because they point out that climate models do a poor job of reproducing an important climate trend (seasonal temperature) which provides an extra level of testing for climate models.

References

Striking Seasonality in the Secular Warming of the Northern Continents: Structure and Mechanisms, Sumant Nigam et al, Journal of Climate (2017)

The Art and Science of Climate Model Tuning, Frédéric Hourdin et al, American Meteorological Society (2017) – free paper

Read Full Post »

Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs where people learn first principles and modeling basics from comments by other equally well-educated commenters this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually, how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid), with each block having dimensions something like 200 km x 200 km x 500 m high. The values vary a lot and depend on the resolution of the model; this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. The principles of radiation transfer through each block in each direction are applied via parameterizations (note 1).

Specifically on water vapor – the change in the mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, and the amount of rainfall taking water out of the block, as well as from the movement of air via the E-W, N-S and vertical winds. The amount of water vapor at the end of each time step affects the radiation emitted upwards and downwards.
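To make this concrete, here is a minimal sketch of such an update for a single grid box – all the variable names and numbers are mine, for illustration only, and not taken from any real GCM:

DT = 600.0  # time step in seconds (hypothetical)

def step_water_vapor(q, evaporation, condensation, rain_out, net_advection):
    """Advance the specific humidity q (kg of vapor per kg of air) of one grid box
    by one time step using a simple mass budget. All terms are rates in kg/kg per
    second: sources (evaporation, net advection in) minus sinks (condensation, rain)."""
    dq_dt = evaporation - condensation - rain_out + net_advection
    return max(q + DT * dq_dt, 0.0)   # humidity can't go negative

# one illustrative step - the numbers are plucked out of the air
q_new = step_water_vapor(q=0.008, evaporation=1.0e-7, condensation=4.0e-8,
                         rain_out=2.0e-8, net_advection=3.0e-8)
print(f"specific humidity after one step: {q_new:.6f} kg/kg")

The point is that each of these terms is computed from the model’s own state (winds, temperature, surface fluxes, cloud microphysics) at each step – there is nowhere to “write in” a water vapor feedback as a parameter.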

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well worth reading for anyone who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..
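For readers who want the arithmetic behind “a factor of two”: in the standard linear feedback framework the no-feedback response is amplified by 1/(1 – f), where f is the feedback factor. A quick check (my own illustration of the standard formula, not a calculation from Held & Soden):

def amplification(f):
    """Standard linear feedback gain: total response divided by no-feedback response."""
    return 1.0 / (1.0 - f)

# an illustrative water vapor feedback factor near 0.5 roughly doubles the response;
# combining it with further positive feedbacks pushes the amplification higher still
for f in (0.0, 0.5, 0.67):
    print(f"feedback factor {f:.2f} -> amplification {amplification(f):.1f}x")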

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.

References

Description of the NCAR Community Atmosphere Model (CAM 4) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)

Notes

Note 1: The very accurate calculation of radiation transfer is done via line by line calculations but they are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line by line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).
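A toy illustration of why the approximation needs care (my own example, not how any particular GCM scheme actually works): because Beer-Lambert absorption is non-linear, averaging the absorption coefficient over a band does not give the same result as averaging the line-by-line transmittances.

import numpy as np

# a made-up "spectrum" of absorption coefficients across one band (arbitrary units)
k = np.array([0.1, 0.2, 5.0, 0.3, 8.0, 0.2, 0.1, 0.4])
path = 1.0  # absorber amount along the path (arbitrary units)

# "line-by-line": transmittance at every spectral point, then average
t_lbl = np.mean(np.exp(-k * path))

# crude "band" shortcut: average k first, then compute a single transmittance
t_band = np.exp(-np.mean(k) * path)

print(f"mean of line-by-line transmittances: {t_lbl:.3f}")
print(f"transmittance from band-averaged k:  {t_band:.3f}")

The two numbers differ substantially, which is why the band parameterizations used in GCMs have to be constructed carefully and checked against line-by-line codes, as in the Collins et al comparison above.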

Read Full Post »

In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI. So, over the historical record, both quantities fit the data about equally well – the interesting divergence comes in the projections.

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity is dependent on local (absolute) SST we expect more cyclones, or more powerful cyclones; if it is dependent on relative SST we expect no increase. This is because climate models predict warmer SSTs in the future, but not a tropical Atlantic that warms faster than the tropics as a whole. The paper also shows a few high-resolution models – green symbols – sitting close to the zero change line.
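In code terms, “relative SST” is nothing more exotic than the local value minus the tropical mean. A minimal sketch with made-up numbers:

import numpy as np

# made-up SST anomalies (deg C) for a handful of tropical grid points;
# imagine the first three lie in the tropical Atlantic
tropical_sst = np.array([0.6, 0.7, 0.65, 0.4, 0.35, 0.5, 0.45, 0.55])
atlantic_sst = tropical_sst[:3]

relative_sst = atlantic_sst.mean() - tropical_sst.mean()
print(f"Atlantic SST anomaly:      {atlantic_sst.mean():.2f} C")
print(f"tropical-mean SST anomaly: {tropical_sst.mean():.2f} C")
print(f"relative SST:              {relative_sst:.2f} C")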

Now, predicting tropical cyclones with GCMs has a fundamental issue – the grid scale of a modern high-resolution GCM is around 100 km, but simulating cyclones requires higher resolution because of their relatively small size.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (of course, the idea is not at all new). They input actual meteorological data (i.e. real history from the NCEP reanalysis) into a high-resolution model covering just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities – the output is constantly “nudged” back towards the actual climatology, and we can’t expect good simulation results out at the boundaries of the model. The model resolution is 18 km.
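“Nudging” is worth a sentence more: it is simply a relaxation term that pulls the model state back towards the reanalysis on some chosen timescale. A minimal sketch of the idea (my own, with made-up numbers – not the actual scheme or settings of Knutson et al):

def nudge(model_value, reanalysis_value, dt, tau):
    """Relax a model field towards the reanalysis value with timescale tau
    (same time units as dt). Small tau = strong nudging, large tau = weak nudging."""
    return model_value + (dt / tau) * (reanalysis_value - model_value)

# e.g. one temperature value, nudged with a 12-hour timescale in 10-minute steps
t_model, t_reanalysis = 27.3, 26.8       # deg C, made-up
for _ in range(72):                      # 72 x 10 minutes = 12 hours
    t_model = nudge(t_model, t_reanalysis, dt=600.0, tau=12 * 3600.0)
print(f"after 12 hours of nudging: {t_model:.2f} C")

The strength of the nudging is a choice – too strong and the model simply reproduces the reanalysis, too weak and it drifts away from the observed large-scale conditions.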

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the recent (at that time) upswing in hurricane activity “remains unresolved”. (Of course, fast forward to 2016, prior to the recent two large landfall hurricanes, and the overall activity is at a 1970 low. In early 2018, this may be revised again..).

Two interesting graphs emerge. First an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing this is a huge blow to the idea that climate scientists only ever introduce more certainty and alarm on the harm from future CO2 emissions. They don’t. However, it is also true that some climate scientists believe that recent events have been accentuated due to the last century of fossil fuel burning and these perspectives might be reported in the media. I try to ignore the media and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model, initialized a few days before each storm, to predict the outcome. If you understand the idea behind Knutson et al 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high-resolution model and see how well it predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

1. In graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that the observations and the GFDL model are pretty close in the maximum wind speed distribution.
2. The climate change projections in graph E show an overall reduction in the frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – a common theme in papers from this genre.
3. In graph F, the results (from the weather prediction model) fed by different GCMs for the future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

The next figure (S3 from the supplementary data) shows the difference between the projected future climatology and the current climatology for three relevant parameters, for each of the four models shown in graph F of the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone data in some aspects, but less so in other aspects. As we increase the model resolution we can get even better results – but this is dependent on inputting the correct climate data. As we look towards 2100 the questions are – How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts – XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society, (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.

Read Full Post »

Older Posts »