
Archive for November, 2017

In the comments on Part Five there was some discussion of Mauritsen & Stevens (2015), which looked at the “iris effect”:

A controversial hypothesis suggests that the dry and clear regions of the tropical atmosphere expand in a warming climate and thereby allow more infrared radiation to escape to space

One of the big challenges in climate modeling (there are many) is model resolution and “sub-grid parameterization”. A climate model is created by breaking up the atmosphere (and ocean) into “small” cells of something like 200km x 200km, assigning one value in each cell for parameters like N-S wind, E-W wind and up-down wind – and solving the set of equations (momentum, heat transfer and so on) across the whole earth. However, in one cell like this below you have many small regions of rapidly ascending air (convection) topped by clouds of different thicknesses and different heights and large regions of slowly descending air:

Held and Soden (2000)

The model can’t resolve the actual processes inside the grid – that’s the nature of how finite element analysis works. So, of course, the “parameterization schemes” used to figure out how much cloud, rain and humidity results from, say, a warming earth are problematic and very hard to verify.
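To make “parameterization” concrete, here is a toy sketch – my own illustration, not any model’s actual scheme. A grid cell can’t resolve individual clouds, so cloud fraction might be diagnosed from the grid-mean relative humidity via a simple rule with a tunable constant:

```python
# Toy illustration of a sub-grid parameterization (my own sketch, not any
# real model's scheme). The grid cell cannot resolve individual clouds, so
# cloud fraction is diagnosed from the grid-mean relative humidity.

def cloud_fraction(rh_mean, rh_crit=0.8):
    """Diagnose cloud fraction from grid-mean relative humidity.

    rh_crit is a tunable parameter: below it the cell is assumed
    cloud-free; above it, cloud cover ramps up to overcast. Real schemes
    are far more elaborate, but they share this structure - unresolved
    physics reduced to a function of resolved variables plus constants.
    """
    if rh_mean <= rh_crit:
        return 0.0
    return min(1.0, ((rh_mean - rh_crit) / (1.0 - rh_crit)) ** 2)

for rh in (0.70, 0.85, 0.95, 1.00):
    print(f"grid-mean RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")
```

The tunable constant (rh_crit here – a name I made up) is exactly the kind of parameter that has to be chosen during model development and is very hard to verify against observations.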

Running higher resolution models helps to illuminate the subject. We can’t run these higher resolution models for the whole earth – instead all kinds of smaller scale model experiments are done which allow climate scientists to see which factors affect the results.

Here is the “plain language summary” from Organization of tropical convection in low vertical wind shears: Role of updraft entrainment, Tompkins & Semie 2017:

Thunderstorms dry out the atmosphere since they produce rainfall. However, their efficiency at drying the atmosphere depends on how they are arranged; take a set of thunderstorms and sprinkle them randomly over the tropics and the troposphere will remain quite moist, but take that same number of thunderstorms and place them all close together in a “cluster” and the atmosphere will be much drier.

Previous work has indicated that thunderstorms might start to cluster more as temperatures increase, thus drying the atmosphere and letting more infrared radiation escape to space as a result – acting as a strong negative feedback on climate, the so-called iris effect.

We investigate the clustering mechanisms using 2km grid resolution simulations, which show that strong turbulent mixing of air between thunderstorms and their surrounding is crucial for organization to occur. However, with grid cells of 2 km this mixing is not modelled explicitly but instead represented by simple model approximations, which are hugely uncertain. We show three commonly used schemes differ by over an order of magnitude. Thus we recommend that further investigation into the climate iris feedback be conducted in a coordinated community model intercomparison effort to allow model uncertainty to be robustly accounted for.

And a little about computational resources and resolution. CRMs are “cloud resolving models”, i.e. higher resolution models over smaller areas:

In summary, cloud-resolving models with grid sizes of the order of 1 km have revealed many of the potential feedback processes that may lead to, or enhance, convective organization. It should be recalled however, that these studies are often idealized and involve computational compromises, as recently discussed in Mapes [2016]. The computational requirements of RCE experiments that require more than 40 days of integration still largely prohibit horizontal resolutions finer than 1 km. Simulations such as Tompkins [2001c], Bryan et al. [2003], and Khairoutdinov et al. [2009] that use resolutions less than 350 m were restricted to 1 or 2 days. If water vapor entrainment is a factor for either the establishment and/or the amplification of convective organization, it raises the issue that the organization strength in CRMs using grid sizes of the order of 1 km or larger is likely to be sensitive to the model resolution and simulation framework in terms of the choice of subgrid-scale diffusion and mixing.

In their conclusion on what resolution is needed:

.. and states that convergence is achieved when the most energetic eddies are well resolved, which is not the case at 2 km, and Craig and Dornbrack [2008] also suggest that resolving clouds requires grid sizes that resolve the typical buoyancy scale of a few hundred meters. The present state of the art of LES is represented by Heinze et al. [2016], integrating a model for the whole of Germany with a 100 m grid spacing, for a period of 4 days.

They continue:

The simulations in this paper also highlight the fact that intricacies of the assumptions contained in the parameterization of small-scale physics can strongly impact the possibility of crossing the threshold from unorganized to organized equilibrium states. The expense of such simulations has usually meant that only one model configuration is used concerning assumptions of small-scale processes such as mixing and microphysics, often initialized from a single initial condition. The potential of multiple equilibria and also an hysteresis in the transition between organized and unorganized states [Muller and Held, 2012], points to the requirement for larger integration ensembles employing a range of initial and boundary conditions, and physical parameterization assumptions. The ongoing requirements of large-domain, RCE numerical experiments imply that this challenge can be best met with a community-based, convective organization model intercomparison project (CORGMIP).

Here is Detailed Investigation of the Self-Aggregation of Convection in Cloud-Resolving Simulations, Muller & Held (2012). The second author is Isaac Held, often referenced on this blog, who has been writing very interesting papers for about 40 years:

It is well known that convection can organize on a wide range of scales. Important examples of organized convection include squall lines, mesoscale convective systems (Emanuel 1994; Holton 2004), and the Madden–Julian oscillation (Grabowski and Moncrieff 2004). The ubiquity of convective organization above tropical oceans has been pointed out in several observational studies (Houze and Betts 1981; WCRP 1999; Nesbitt et al. 2000)..

..Recent studies using a three-dimensional cloud resolving model show that when the domain is sufficiently large, tropical convection can spontaneously aggregate into one single region, a phenomenon referred to as self-aggregation (Bretherton et al. 2005; Emanuel and Khairoutdinov 2010). The final climate is a spatially organized atmosphere composed of two distinct areas: a moist area with intense convection, and a dry area with strong radiative cooling (Figs. 1b and 2b,d). Whether or not a horizontally homogeneous convecting atmosphere in radiative convective equilibrium self-aggregates seems to depend on the domain size (Bretherton et al. 2005). More generally, the conditions under which this instability of the disorganized radiative convective equilibrium state of tropical convection occurs, as well as the feedback responsible, remain unclear.

We see the difference in self-aggregation of convection between the two domain sizes below:

 

From Muller & Held 2012

Figure 1

The effect on rainfall and OLR (outgoing longwave radiation) is striking, and also note that the mean is affected:

From Muller & Held 2012

Figure 2

Then they look at varying model resolution (dx), domain size (L) and also the initial conditions. The higher resolution models don’t produce the self-aggregation, but the results are also sensitive to domain size and initial conditions. The black crosses denote model runs where the convection stayed disorganized, the red circles where the convection self-aggregated:

From Muller & Held 2012

Figure 3

In their conclusion:

The relevance of self-aggregation to observed convective organization (mesoscale convective systems, mesoscale convective complexes, etc.) requires further investigation. Based on its sensitivity to resolution (Fig. 6a), it may be tempting to see self-aggregation as a numerical artifact that occurs at coarse resolutions, whereby low-cloud radiative feedback organizes the convection.

Nevertheless, it is not clear that self-aggregation would not occur at fine resolution if the domain size were large enough. Furthermore, the hysteresis (Fig. 6b) increases the importance of the aggregated state, since it expands the parameter span over which the aggregated state exists as a stable climate equilibrium. The existence of the aggregated state appears to be less sensitive to resolution than the self-aggregation process. It is also possible that our results are sensitive to the value of the sea surface temperature; indeed, Emanuel and Khairoutdinov (2010) find that warmer sea surface temperatures tend to favor the spontaneous self-aggregation of convection.

Current convective parameterizations used in global climate models typically do not account for convective organization.

More two-dimensional and three dimensional simulations at high resolution are desirable to better understand self-aggregation, and convective organization in general, and its dependence on the subgrid-scale closure, boundary layer, ocean surface, and radiative scheme used. The ultimate goal is to help guide and improve current convective parameterizations.

From the results in their paper we might think that self-aggregation of convection was a model artifact that disappears with higher resolution models (they are careful not to really conclude this). Tompkins & Semie 2017 suggested that Muller & Held’s results may be just a dependence on their sub-grid parameterization scheme (see note 1).

From Hohenegger & Stevens 2016, how convection self-aggregates over time in their model:

From Hohenegger & Stevens 2016

Figure 4 – Click to enlarge

From a review paper on the same topic by Wing et al 2017:

The novelty of self-aggregation is reflected by the many remaining unanswered questions about its character, causes and effects. It is clear that interactions between longwave radiation and water vapor and/or clouds are critical: non-rotating aggregation does not occur when they are omitted. Beyond this, the field is in play, with the relative roles of surface fluxes, rain evaporation, cloud versus water vapor interactions with radiation, wind shear, convective sensitivity to free atmosphere water vapor, and the effects of an interactive surface yet to be firmly characterized and understood.

The sensitivity of simulated aggregation not only to model physics but to the size and shape of the numerical domain and resolution remains a source of concern about whether we have even robustly characterized and simulated the phenomenon. While aggregation has been observed in models (e.g., global models) in which moist convection is parameterized, it is not yet clear whether such models simulate aggregation with any real fidelity. The ability to simulate self-aggregation using models with parameterized convection and clouds will no doubt become an important test of the quality of such schemes.

Understanding self-aggregation may hold the key to solving a number of obstinate problems in meteorology and climate. There is, for example, growing optimism that understanding the interplay among radiation, surface fluxes, clouds, and water vapor may lead to robust accounts of the Madden Julian oscillation and tropical cyclogenesis, two long-standing problems in atmospheric science.

Indeed, the difficulty of modeling these phenomena may be owing in part to the challenges of simulating them using representations of clouds and convection that were not designed or tested with self-aggregation in mind.

Perhaps most exciting is the prospect that understanding self-aggregation may lead to an improved understanding of climate. The strong hysteresis observed in many simulations of aggregation—once a cluster is formed it tends to be robust to changing environmental conditions—points to the possibility of intransitive or almost intransitive behavior of tropical climate.

The strong drying that accompanies aggregation, by cooling the system, may act as a kind of thermostat, if indeed the existence or degree of aggregation depends on temperature. Whether or how well this regulation is simulated in current climate models depends on how well such models can simulate aggregation, given the imperfections of their convection and cloud parameterizations.

Clearly, there is much exciting work to be done on aggregation of moist convection.

[Emphasis added]

Conclusion

Climate science asks difficult questions that are currently unanswerable. This goes against two myths that circulate in the media and on many blogs: on the one hand, the myth that the important points are all worked out; on the other, the myth that climate science is a political movement creating alarm, with each paper reaching more serious and certain conclusions than the one before. Reading lots of papers, I find a real science. What is reported in the media is unrelated to the state of the field.

At the heart of modeling climate is the need to model turbulent fluid flows (air and water), and this can’t be done. Well, it can be done, but only using schemes that leave open the possibility or probability that further work will reveal them to be inadequate in a serious way. Running higher resolution models helps to answer some questions, but more often reveals yet new questions. If you have a mathematical background this is probably easy to grasp. If you don’t, it might not make a whole lot of sense, but hopefully you can see from the excerpts above that very recent papers are not yet able to resolve some challenging questions.

At some stage sufficiently high resolution models will be validated and will possibly allow development of more realistic parameterization schemes for GCMs. For example, here is Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Reike Heinze et al 2016, evaluating their model at 150 m grid resolution – 3.3 billion grid points on a sub-second time step, run for 4 days over Germany:

These results consistently show that the high-resolution model significantly improves the representation of small- to mesoscale variability. This generates confidence in the ability to simulate moist processes with fidelity. When using the model output to assess turbulent and moist processes and to evaluate and develop climate model parametrizations, it seems relevant to make use of the highest resolution, since the coarser-resolved model variants fail to reproduce aspects of the variability.

Related Articles

Ensemble Forecasting – why running a lot of models gets better results than one “best” model

Latent heat and Parameterization – example of one parameterization and its problems

Turbulence, Closure and Parameterization – explaining how the insoluble problem of turbulence gets handled in models

Part Four – Tuning & the Magic Behind the Scenes – how some important model choices get made

Part Five – More on Tuning & the Magic Behind the Scenes – parameterization choices, aerosol properties and the impact on temperature hindcasting, plus a high resolution model study

Part Six – Tuning and Seasonal Contrasts – model targets and model skill, plus reviewing seasonal temperature trends in observations and models

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens, Nature Geoscience (2015) – free paper

Organization of tropical convection in low vertical wind shears: Role of updraft entrainment, Adrian M Tompkins & Addisu G Semie, Journal of Advances in Modeling Earth Systems (2017) – free paper

Detailed Investigation of the Self-Aggregation of Convection in Cloud-Resolving Simulations, Caroline Muller & Isaac Held, Journal of the Atmospheric Sciences (2012) – free paper

Coupled radiative convective equilibrium simulations with explicit and parameterized convection, Cathy Hohenegger & Bjorn Stevens, Journal of Advances in Modeling Earth Systems (2016) – free paper

Convective Self-Aggregation in Numerical Simulations: A Review, Allison A Wing, Kerry Emanuel, Christopher E Holloway & Caroline Muller, Surv Geophys (2017) – free paper

Large-eddy simulations over Germany using ICON: a comprehensive evaluation, Reike Heinze et al, Quarterly Journal of the Royal Meteorological Society (2016)

Other papers worth reading:

Self-aggregation of convection in long channel geometry, Allison A Wing & Timothy W Cronin, Quarterly Journal of the Royal Meteorological Society (2016) – paywall paper

Notes

Note 1: The equations for turbulent fluid flow can’t be solved directly because of the computing resources required. Energy gets dissipated at the scales where viscosity comes into play – in air this is a few mm. So even much higher resolution models, like the cloud resolving models (CRMs) with scales of 1 km or even smaller, still need some kind of parameterization to work. For more on this see Turbulence, Closure and Parameterization.


I was re-reading Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen and Bjorn Stevens from 2015 (because I referenced it in a recent comment) and then looked up other recent papers citing it. One interesting review paper is by Stevens et al from 2016. I recognized his name from many other papers – Bjorn Stevens has been publishing since the early 1990s, with almost 200 papers in peer-reviewed journals, mostly on this and related topics. Likewise, Sherwood and Bony (two of the coauthors) are very familiar names from this field.

Many regular readers (and I’m sure new readers of this blog) will understand much more than me about current controversies in climate sensitivity. The question in brief (of course there are many subtleties) – how much will the earth warm if we double CO2? It’s a very important question. As the authors explain at the start:

Nearly 40 years have passed since the U.S. National Academies issued the “Charney Report.” This landmark assessment popularized the concept of the “equilibrium climate sensitivity” (ECS), the increase of Earth’s globally and annually averaged near surface temperature that would follow a sustained doubling of atmospheric carbon dioxide relative to its preindustrial value. Through the application of physical reasoning applied to the analysis of output from a handful of relatively simple models of the climate system, Jule G. Charney and his co-authors estimated a range of 1.5–4.5 K for the ECS [Charney et al., 1979].

Charney is an eminent name you will know, along with Lorenz, if you read about the people who broke ground on numerical weather modeling. The authors explain a little about the definition of ECS:

ECS is an idealized but central measure of climate change, which gives specificity to the more general idea of Earth’s radiative response to warming. This specificity makes ECS something that is easy to grasp, if not to realize. For instance, the high heat capacity and vast carbon stores of the deep ocean mean that a new climate equilibrium would only be fully attained a few millennia after an applied forcing [Held et al., 2010; Winton et al., 2010; Li et al., 2012]; and uncertainties in the carbon cycle make it difficult to know what level of emissions is compatible with a doubling of the atmospheric CO2 concentration in the first place.

Concepts such as the “transient climate response” or the “transient climate response to cumulative carbon emissions” have been introduced to account for these effects and may be a better index of the warming that will occur within a century or two [Allen and Frame, 2007; Knutti and Hegerl, 2008; Collins et al., 2013; MacDougall, 2016].

But the ECS is strongly related and conceptually simpler, so it endures as the central measure of Earth’s susceptibility to forcing [Flato et al., 2013].

And about the implications of narrowing the range of ECS:

The socioeconomic value of better understanding the ECS is well documented. If the ECS were well below 1.5 K, climate change would be a less serious problem. The stakes are much higher for the upper bound. If the ECS were above 4.5 K, immediate and severe reductions of greenhouse gas emissions would be imperative to avoid dangerous climate changes within a few human generations.

From a mitigation point of view, the difference between an ECS of 1.5 K and 4.5 K corresponds to about a factor of two in the allowable CO2 emissions for a given temperature target [Stocker et al., 2013] and it explains why the value of learning more about the ECS has been appraised so highly [Cooke et al., 2013; Neubersch et al., 2014].

The ECS also gains importance because it conditions many other impacts of greenhouse gases, such as regional temperature and rainfall [Bony et al., 2013; Tebaldi and Arblaster, 2014], and even extremes [Seneviratne et al., 2016], knowledge of which is required for developing effective adaptation strategies. Being an important and simple measure of climate change, the ECS is something that climate science should and must be able to better understand and quantify more precisely.

One of the questions they raise is at the heart of my own question: is climate sensitivity a constant that we can measure – a value with some durable meaning – or is it dependent on the actual climate specifics at the time? For example, there are attempts to measure it via the climate response during an El Niño: we see the climate warm and we measure how the top of atmosphere radiation balance changes. Or we attempt to measure the difference in ocean temperature between the end of the last ice age and today and deduce climate sensitivity. Perhaps I have a mental picture of non-linear systems that is preventing me from seeing the obvious. However, the picture I have in my head is that the dependence of the top of atmosphere radiation balance on temperature is not a constant.

Here is their commentary. They use the term “pattern effect” for my mental model described above:

Hence, a generalization of the concept of climate sensitivity to different eras may need to account for differences that arise from the different base state of the climate system, increasingly so for large perturbations.

Even for small perturbations, there is mounting evidence that the outward radiation may be sensitive to the geographic pattern of surface temperature changes. Senior and Mitchell [2000] argued that if warming is greater over land, or at high latitudes, different feedbacks may occur than for the case where the same amount of warming is instead concentrated over tropical oceans.

These effects appear to be present in a range of models [Armour et al., 2013; Andrews et al., 2015]. Physically they can be understood because clouds—and their impact on radiation—are sensitive to changes in the atmospheric circulation, which responds to geographic differences in warming [Kang et al., 2013], or simply because an evolving pattern of surface warming weights local responses differently at different times [Armour et al., 2013].

Hence different patterns of warming, occurring on different timescales, may be associated with stronger or weaker radiative responses. This introduces an additional state dependence, one that is not encapsulated by the global mean temperature. We call this a “pattern effect.” Pattern effects are thought to be important for interpreting changes over the instrumental period [Gregory and Andrews, 2016], and may contribute to the state dependence of generalized measures of Earth’s climate sensitivity as inferred from the geological record.

Some of my thoughts are that the insoluble questions on this specific topic are also tied into the question about the climate being chaotic vs just weather being chaotic – see for example, Natural Variability and Chaos – Four – The Thirty Year Myth. In that article we look at the convention of defining climate as the average of 30 years of weather and why that “eliminates” chaos, or doesn’t. Non-linear systems have lots of intractable problems – more on that topic in the whole series Natural Variability and Chaos. It’s good to see it being mentioned in this paper.

Read the whole paper – it reviews the conditions necessary for very low climate sensitivity and for very high climate sensitivity, with the idea being that if one necessary condition can be ruled out then the very low and/or very high climate sensitivity can be ruled out. The paper also includes some excellent references for further insights.

From Stevens et al 2016

Click to enlarge

Happy Thanksgiving to our US readers.

References

Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models, Thorsten Mauritsen & Bjorn Stevens, Nature Geoscience (2015) – paywall paper

Prospects for narrowing bounds on Earth’s equilibrium climate sensitivity, Bjorn Stevens, Steven C Sherwood, Sandrine Bony & Mark J Webb, Earth’s Future (2016) – free paper


In Part Five – More on Tuning & the Magic Behind the Scenes and also in the earlier Part Four we looked at the challenge of selecting parameters in climate models. A recent 2017 paper on this topic by Frédéric Hourdin and colleagues is very illuminating. One of the co-authors is Thorsten Mauritsen, the principal author of the 2012 paper we reviewed in Part Four. Another co-author is Jean-Christophe Golaz, the principal author of the 2013 paper we reviewed in Part Five.

The topics are similar but there is some interesting additional detail and commentary. The paper is open access and, as always, I recommend reading the whole thing.

One of the key points is that climate modelers need to be specific about their “target” – were they trying to get the model to match recent climatology? The top of atmosphere radiation balance? The last 100 years of temperature trends? If a model was developed with an eye on a particular target, then getting that target right doesn’t demonstrate model skill.

Because of the uncertainties in observations and in the model formulation, the possible parameter choices are numerous and will differ from one modeling group to another. These choices should be more often considered in model intercomparison studies. The diversity of tuning choices reflects the state of our current climate understanding, observation, and modeling. It is vital that this diversity be maintained. It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.

They comment on the paper by Jeffrey Kiehl from 2007 (referenced in The Debate is Over – 99% of Scientists believe Gravity and the Heliocentric Solar System so therefore..) which showed how models with higher sensitivity to CO2 have higher counter-balancing negative forcing from aerosols.

And later in the paper:

The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development.

Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming.

The question of developing toward the twentieth-century warming therefore is an area of vigorous debate within the community..

..The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals.

It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.

And so, on to another recent paper, by Sumant Nigam and colleagues. They examine the temperature trends by season over the last 100 years and compare them against models. They look only at the northern hemisphere over land, due to the better temperature dataset available (compared with the southern hemisphere).

Here are the observations of the trends for each of the four seasons; I find it fascinating to see the difference between the seasonal trends:

From Nigam et al 2017

Figure 1 – Click to enlarge

Then they compare the observations to some of the models used in IPCC AR5 (from the model intercomparison project, CMIP5) – the top line is observations, and each line below is a different model. When we compare the geographical distribution of the winter-summer trend (right column), we can see that the models don’t do very well:

From Nigam et al 2017

Figure 2 – Click to enlarge

From their conclusion:

The urgent need for shifting the evaluative and diagnostic focus away from the customary annual mean toward the seasonal cycle of secular warming is manifest in the inability of the leading climate models (whose simulations inform the IPCC’s Fifth Assessment Report) to generate realistic and robust (large signal-to-noise ratio) twentieth-century winter and summer SAT trends over the northern continents. The large intra-ensemble SD of century-long SAT trends in some IPCC AR5 models (e.g., GFDL-CM3) moreover raises interesting questions: If this subset of climate models is realistic, especially in generation of ultra-low-frequency variability, is the century-long (1902–2014) linear trend in observed SAT—a one-member ensemble of the climate record—a reliable indicator of the secular warming signal?

I’ve commented a number of times in various articles – people who don’t read climate science papers often have some idea that climate scientists are monolithically opposed to questioning model results or questioning “the orthodoxy”. This is contrary to what you find if you read lots of papers. It might be that press releases that show up in The New York Times, CNN or the BBC (or pick another ideological bellwether) have some kind of monolithic sameness but this just demonstrates that no one interested in finding out anything important (apart from the weather and celebrity news) should ever watch/read media outlets.

They continue:

The relative contribution of both mechanisms to the observed seasonality in century-long SAT trends needs further assessment because of uncertainties in the diagnosis of evapotranspiration and sea level pressure from the century-long observational records. Climate system models—ideal tools for investigation of mechanisms through controlled experimentation—are unfortunately not yet ready given their inability to simulate the seasonality of trends in historical simulations.

Subversive indeed.

Their investigation digs into evapotranspiration – the additional water vapor made available by plants, whose evaporation removes heat from the surface during the summer months.

Conclusion

“All models are wrong but some are useful” – a statement attributed to a modeler from a different profession (statistical process control) and sometimes also quoted by climate modelers.

This is always a good way to think about models. Perhaps the inability of climate models to reproduce seasonal trends is inconsequential – or perhaps it is important. Models fail on many levels. The question is why, and the answers lead to better models.

Climate science is a real science, contrary to the claims of many people who don’t read many climate science papers, because many published papers ask important and difficult questions and critique the current state of the science. That is, falsifiability is being addressed. These questions might not become media headlines, or even make it into the Summary for Policymakers in IPCC reports, but papers asking these questions are not outliers.

I found both of these papers very interesting. Hourdin et al because they ask valuable questions about how models are tuned, and Nigam et al because they point out that climate models do a poor job of reproducing an important climate trend (seasonal temperature) which provides an extra level of testing for climate models.

References

Striking Seasonality in the Secular Warming of the Northern Continents: Structure and Mechanisms, Sumant Nigam et al, Journal of Climate (2017)

The Art and Science of Climate Model Tuning, Frédéric Hourdin et al, American Meteorological Society (2017) – free paper


I’ve been digging through some statistics for my own benefit.

When you read or hear a statistic that country X is generating Y% of electricity via renewables it can sound wonderful, but the headline number can conceal or overstate useful progress. A few tips for readers new to the subject:

  • Energy is not electricity, so you need to know whether they were quoting energy or electricity. For most developed nations, electricity accounts for around 40% of total energy use.
  • “Renewables” includes two components that are important to separate out:
    • hydroelectric – this is “tapped out” in most developed countries. If the “share of renewables” is say 30%, but hydro is 20% (i.e. 2/3 of the total renewables) then the expandable renewables are only 10%. This can help you see recent progress and extrapolate to possible future progress (different story in developing countries, but there is often a large human cost to creating hydroelectric projects)
    • biomass – if you stop burning coal and you burn wood chip instead, this tips the reporting scales from “the work of Satan” to “green and renewable”, even though burning wood chip generates more CO2 per unit of electricity than the coal it replaces. Not all biomass is like this, but as a rule of thumb, put the biomass entry into the “more investigation needed” pile before declaring victory
  • Nameplate is not actual – if you have a gas plant (designed to run all the time) the actual output will be about 90% or more of the nameplate (the maximum output under normal conditions), but if you have a wind farm the actual output across a year will be about 20% of the nameplate in Germany, 30% in Ireland and over 40% in Oklahoma. So if you read that “10GW of wind power” was added to Germany’s generating capacity you need to mentally convert that to about 2GW – see the sketch after this list. There is a similar conversion factor for solar.
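Here is that back-of-envelope conversion as a few lines of code, using the rough capacity factors quoted above (illustrative only – real values vary by site and year):

```python
# Back-of-envelope conversion from nameplate capacity to expected average
# output. Capacity factors are the rough figures quoted above, not
# authoritative values.

CAPACITY_FACTOR = {
    "gas (baseload)": 0.90,
    "wind, Germany": 0.20,
    "wind, Ireland": 0.30,
    "wind, Oklahoma": 0.40,
}

def average_output_gw(nameplate_gw, technology):
    """Expected average output (GW) for a given nameplate rating (GW)."""
    return nameplate_gw * CAPACITY_FACTOR[technology]

# "10 GW of wind power added in Germany" is really about 2 GW on average:
print(average_output_gw(10, "wind, Germany"))  # -> 2.0
```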

If you mentally take account of these points when you hear an update, you will be among the 1% of journalists who could pass the literacy test on the progress of renewables. It’s an elite club.

Once again I’ll state that I’m not trying to knock renewables, I’m trying to promote “literacy”. Instead of hapless cheerleaders, think informed citizens..

So, onto recent data.

I’m using two stalwarts of energy reporting: IEA and BP.

IEA produce data to 2015 and quote useful units like electricity consumed in TWh. This is a unit of energy – a TWh is a billion kWh. You find kWh on your electricity bill.

BP produce data to 2016 – which is better – and break down renewables much better, but quote units of Mtoe – millions of tons of oil equivalent. If you delve into energy industry reports, you often find mixed together in one report: kWh/TWh (energy), GJ (energy), GW (power), tcf (volume of gas), barrels of oil, mmBtu (energy in obscure British units)..

In the case of the BP report it’s not clear to me how to convert from Mtoe to GWh – they do provide a footnote but when I do the conversion I can’t reconcile the numbers using their footnote. No doubt one of our readers has gone down this rabbit hole and can illuminate us all (?). In the meantime, I took the BP numbers in Mtoe and looked up IEA % values for 2016 in TWh and worked out a conversion factor – multiply Mtoe by 0.0045. Then cross-checked with Fraunhofer ISE for Germany. This allows us to see the BP 2016 renewables breakdown in real electricity units rather than in mythical barrels of oil.
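For what it’s worth, the standard thermal-equivalence arithmetic goes like this – a sketch of the usual convention (1 Mtoe defined as 11.63 TWh of heat, non-fossil electricity counted at roughly 38% conversion efficiency), not a reconciliation of BP’s own footnote:

```python
# Sketch of the usual Mtoe -> TWh arithmetic (a common convention, not a
# reconciliation of BP's own footnote, which I could not reproduce).

MTOE_TO_TWH_HEAT = 11.63   # 1 Mtoe is defined as 11.63 TWh of heat
ASSUMED_EFFICIENCY = 0.38  # typical "input-equivalent" convention

def mtoe_to_twh_electric(mtoe):
    """Electricity (TWh) implied by an input-equivalent Mtoe figure."""
    return mtoe * MTOE_TO_TWH_HEAT * ASSUMED_EFFICIENCY

print(f"{mtoe_to_twh_electric(1):.2f} TWh per Mtoe")    # -> about 4.42
print(f"{mtoe_to_twh_electric(100):.0f} TWh from 100 Mtoe")
```

This gives roughly 4.4 TWh per Mtoe, which at least sets the scale – and is numerically consistent with the empirical 0.0045 factor above if that factor was producing thousands of TWh.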

Another note – I’m not trying to generate exact figures. Every source has different values. Reconciling them is a big undertaking and very uninteresting work. I’m simply trying to get some perspective on actual renewables progress.

I don’t quote nuclear energy statistics in this article. It’s very low carbon emission, but not exactly “renewable”. The real reason for not including the numbers is that most developed countries are not significantly expanding their nuclear generation, and in Germany’s case are shutting it down. China is a different story, with a big nuclear expansion ongoing.

Germany v US

You would think that Germany, one of the leading lights in renewable energy, would be greatly outperforming the US on CO2 emissions reduction.

  • 2005 – 2015 German CO2 reduction = 0.9% p.a
  • 2005 – 2015 US CO2 reduction = 1.1% p.a

Over that time period the German population has stayed the same, while the US population has grown by about 9%, so we can adjust the US reduction to about 2% p.a on a per capita basis.
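The per capita adjustment is simple compounding arithmetic – a sketch using the rough figures above:

```python
# Per capita adjustment using the rough figures above: US total CO2 fell
# ~1.1% per year while the population grew ~9% over the decade, so per
# capita emissions fell at roughly the sum of the two annual rates.

total_change_pa = -0.011                     # US total CO2, ~ -1.1%/year
population_growth_pa = 1.09 ** (1 / 10) - 1  # ~9% over 2005-2015

per_capita_change_pa = (1 + total_change_pa) / (1 + population_growth_pa) - 1
print(f"US per capita CO2 change: {per_capita_change_pa:.1%} per year")
# -> about -1.9%, i.e. roughly -2% per year
```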

Now the US emissions peaked in 2005. You actually don’t need to read a report to find that out because when the US commitment to reducing CO2 emissions was announced in Paris in 2015 the commitment was a reduction “from 2005”. Being cynical about politicians never loses, and sure enough (when checking data in a report) the peak was 2005 – and the reduction from 2005 to 2015 was already about 12%.

Germany’s emissions peaked in 1990, so I believe their commitment is always referenced to 1990. The story I haven’t verified is that after the collapse of the Soviet Union and the re-unification of Germany, lots of dirty heavy industry shut down and this was a big help in emissions reductions.

The US reduction looks to be – in part – a consequence of the embrace of natural gas at its recent very low cost (gas produces about half the CO2 of coal for the same electricity production). This is a result of the current revolution in “unconventional gas”.

When we look at CO2 emissions per kWh in 2016 the story is also surprising:

  • Germany – 1.3 kg CO2/kWh
  • US – 1.2 kg CO2/kWh

So this tells us that the GHG efficiency of electricity generation is effectively the same in both countries, slightly better in the US.

When we look at total usage (across all electricity generation, including industry) the story is what we might expect:

  • Germany – 19 kWh per person per day
  • US – 35 kWh per person per day

This tells us that the US uses almost double the electricity per person.

Changes in Renewables

I looked up a few other countries – Denmark, the UK and Spain because they have a big push into renewables; and China to contrast a rapidly developing country. The last column in the table, Total Produced, is total electricity produced from all sources, including fossil fuels and nuclear.

From BP data

The IEA values (not shown) give lower total electricity for each country. The BP figures are electricity produced and IEA figures are electricity consumed. The solar + wind value for Germany in 2016 moves from 18% to 20% of total if I use the lower IEA total.

I also looked up electricity prices in the IEA report and while I have values for 2016, I don’t have comparable values for 2006. I couldn’t find the 2006 or 2007 version of the report. Based on a variety of websites all using different methods, quoting in different currencies and from unverified sources (so not reliable) the average consumer price in Germany has gone from about 19c/kWh to 33c/kWh from 2006-2016 (US$). The US looks almost flat, perhaps from 12 to 12.5c/kWh. UK from 14 to 21c/kWh. The IEA report didn’t give a figure for Denmark.

So Germany produces about 18% of electricity from solar + wind. Its total renewables are 30% if we include biomass, and about 21% if we don’t. As I mentioned at the start, biomass sometimes means burning “renewable” wood chip instead of fossil fuels. Biomass is a (big) subject for another day with numerous problems, and I haven’t looked at the breakdown.

The Denmark figure for total electricity is probably quite misleading – see the huge reduction in electricity production from 46 TWh to 30 TWh over 10 years. On Wikipedia someone has provided a better breakdown, showing consumption as well; consumption has dropped by just 4% over that time. Also, 2006 appears to be a big outlier in electricity production. Denmark is a country connected to neighboring grids and generating lots of wind energy. So Denmark’s real 2006 figure for wind was about 20% of total consumed (not 14%) and has gone up to 43% over 10 years. On this basis Denmark could be at 80% of electricity generation by wind in 2035.

Confusion

When looking for electricity price changes, here was a random site I came across, Economists at Large:

By June 16 this year electricity generated from solar and wind power accounted for a record 61% of total electricity generated in Germany.

The actual figure for 2016 is about 18%.

If I went looking I’m sure I could find lots of sites, including “reputable” media outlets, with wide ranges of inflated figures. It’s very easy to generate confusion – quote a peak daytime value like “Germany’s renewable output was …%… on May 28th at 1:15pm” and wait for the recyclers of mush (this includes “reputable” media outlets) to propagate it in a new way. Or quote growth figures – as in how much has been added this year. Or quote capacity added, and rely on the fact that no one understands that 10GW of wind farms only generates about 2GW of output on average in Germany. And so on.

I realize young people may expect media outlets to “fact check” but that is not their job. Their job is to generate headlines and have their stories quoted more widely.

Also, if you pay zero for your electricity because you have solar power you might think that you are generating all of your own electricity. Most of the time you would be wrong. Various governments have guaranteed feed-in tariffs for rooftop solar at well above market price.

Basic energy literacy means understanding the difference between these items.

Conclusion

I was just trying to find the core statistics for my own understanding and was especially interested in Germany.

For Germany, we could look at the 3.5x increase in solar + wind in a decade and say “amazing”. Alternatively, we could note that going from 5% to 18% of total electricity generation in 10 years means that getting to 80% of electricity production will take another 40-50 years at the same rate, and say “disappointing”.
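The “40-50 years” is just a linear extrapolation of the last decade’s rate – a sketch assuming the 5% → 18% figures above:

```python
# Linear extrapolation of Germany's solar + wind share, assuming the
# 5% (2006) -> 18% (2016) figures above and an unchanged rate of progress.

share_2006, share_2016, target = 0.05, 0.18, 0.80

rate_per_year = (share_2016 - share_2006) / 10           # ~1.3 points/year
years_to_target = (target - share_2016) / rate_per_year
print(f"Years from 2016 to {target:.0%} at the same rate: "
      f"{years_to_target:.0f}")                          # -> about 48
```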

Remember that electricity is only about 40% of energy use in most developed countries. Therefore, if you want to decarbonize the whole economy you also have to boost your electricity supply by 2.5x and switch over heating, transport, etc to electric supply.

At the moment there are issues with increasing “non-synchronous” generation beyond a certain point (see V – Grid Stability As Wind Power Penetration Increases). If you read spruiking websites you will find two common suggestions: first, “people said we couldn’t get past 10% and now we’re already at 20%”, and second, “look at Denmark”. If you like happy stories, probably skip the rest of this section..

The most helpful textbook I found on the topic was Renewable Electricity and the Grid : The Challenge of Variability written by people who are trying to do it. Long story short, integrating wind energy is very easy at the start, and up to about 20% of total supply on average it doesn’t seem to present a problem. Above 20% there are questions and uncertainties. These are electricity generation and grid experts contributing to the various chapters.

The key point is that grid stability can come from who you are connected to and how.

Denmark, while a country, is really just the size of a large city (population about 6M) connected to the rest of Europe, and this connection provides their grid stability. Denmark produced 43% of their electricity from wind in 2016, but this is a much lower % of the grid it is connected to. The question is not “can one small country connected to nearby large countries produce 80% of electricity from wind?” but instead “can the interconnected grid produce 80% from wind?” The answer to the first question is of course yes. The other countries provide grid stability to Denmark. When all the surrounding countries are producing wind energy at 80% of the total interconnected grid it will be a different story.

However, this is not some fundamental physics problem, it’s an engineering problem that I’m sure can be solved. I haven’t dug in much beyond the references in Part V (referenced above) so I don’t know what issues and costs are involved.

Other Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

References

BP Statistical Review of World Energy June 2017

BP Statistical Review of World Energy June 2017 – Renewables Appendices (this is a separate pdf)

IEA Key world energy statistics 2017

Renewable Electricity and the Grid : The Challenge of Variability, Godfrey Boyle, Earthscan (2007)

 


Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs where people learn first principles and modeling basics from comments by other equally well-educated commenters, this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually, how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid), with each block having dimensions something like 200 km x 200 km x 500 m high. The values vary a lot and depend on the resolution of the model – this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. Radiation through each block in each direction is calculated via parameterizations (note 1).

Specifically on water vapor – the change in mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, and the amount of rainfall taking water out of the block. And from the movement of air via E-W, N-S and up/down winds. The final amount of water vapor in each time step affects the radiation emitted upwards and downwards.
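As a minimal sketch of that bookkeeping – a toy single-cell budget I made up for illustration, not any GCM’s actual code:

```python
# Toy single-cell water vapor budget (illustrative only - not any GCM's
# actual code). Each time step, the vapor in the block changes with
# evaporation in, condensation and rainfall out, and net import by winds.

def step_water_vapor(q, evaporation, condensation, rainfall_out,
                     advection_net, dt):
    """Advance the block's water vapor mixing ratio by one time step.

    All source/sink terms are rates (kg of water per kg of air per
    second); advection_net is the net import by the E-W, N-S and
    vertical winds.
    """
    dq_dt = evaporation - condensation - rainfall_out + advection_net
    return max(0.0, q + dq_dt * dt)

q = 0.010  # kg water vapor per kg air
for _ in range(3):  # three 30-minute time steps
    q = step_water_vapor(q, evaporation=2e-8, condensation=1e-8,
                         rainfall_out=0.5e-8, advection_net=1e-8, dt=1800)
    print(f"q = {q:.6f} kg/kg")
```

Note that nothing here “writes in” a feedback: the vapor in each block simply responds to the resolved winds and the parameterized sources and sinks, and the radiation calculation then sees whatever vapor results.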

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well worth reading for anyone who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.

References

Description of the NCAR Community Atmosphere Model (CAM 4.0) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)

Notes

Note 1: The very accurate calculation of radiation transfer is done via line by line calculations but they are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line by line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).


This article is a placeholder to filter out a select group of people: the many who arrive and confidently explain that atmospheric physics is fatally flawed (without the benefit of having read a textbook). They don’t think they are confused; in their minds they are helpfully explaining why the standard theory is wrong. There have been a lot of such people.

Almost none of them ever provides an equation. If on rare occasions they do provide a random equation, they never explain what is wrong with the 65-year-old equation of radiative transfer (explained by Nobel prize winner Subrahmanyan Chandrasekhar, see note 1), which is derived from fundamental physics. Nor do they explain why observation matches the standard theory. For example (and I have lots of others), here is a graph produced nearly 50 years ago (referenced almost 30 years ago) of the observed spectrum at the top of atmosphere vs the calculated spectrum from the standard theory.

Why is it so accurate?

From Atmospheric Radiation, Goody (1989)

If it were me, and I thought the theory was wrong, I would read a textbook and try to explain why the textbook was wrong. But I’m old school and generally expect physics textbooks to be correct, short of some major revolution. Conventionally, when you “prove” textbook theory wrong you are expected to explain why everyone got it wrong before.

There is a simple reason why our many confident visitors never do that. They don’t know anything about the basic theory. Entertaining as that is, and I’ll be the first to admit that it has been highly entertaining, it’s time to prune comments from overconfident and confused visitors.

I am not trying to push away people with questions. If you have questions, please ask. This article is just intended to limit the tsunami of comments from visitors with an overconfident non-textbook understanding of physics – comments that have often dominated threads.

So here are my two questions for the many visitors with huge confidence in their physics knowledge. Dodging isn’t an option. You can say “not correct” and explain your alternative formulation with evidence, but you can’t dodge.

Answer these two questions:

1. Is the equation of radiative transfer correct or not?

Iλ(0) = Iλ(τm) e^(−τm) + ∫₀^τm Bλ(T) e^(−τ) dτ    [16]

In words: the intensity at the top of atmosphere equals the surface radiation attenuated by the transmittance of the atmosphere, plus the sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere.

Of course (and I’m sure I don’t even need to spell it out) we need to integrate across all wavelengths, λ, to get the flux value.
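For readers who prefer numbers, here is equation [16] as a discrete sum over layers – a sketch with invented optical depths and temperatures, not a validated radiation code:

```python
# Discrete form of equation [16] (invented optical depths and temperatures
# for illustration - not a validated radiation code). TOA intensity =
# surface radiation attenuated by the whole column, plus each layer's
# emission attenuated by the transmittance from that layer to space.

import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda(T) in W m^-2 sr^-1 m^-1."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / math.expm1(H * C / (wavelength_m * KB * temp_k))

wavelength = 15e-6                          # 15 microns, in the CO2 band
layer_tau = [0.5, 0.3, 0.2, 0.1]            # optical depth per layer, surface up
layer_temp = [280.0, 260.0, 240.0, 220.0]   # K, cooling with height
surface_temp = 288.0

tau_above = [sum(layer_tau[i + 1:]) for i in range(len(layer_tau))]

# Surface radiation attenuated by the whole column:
intensity_toa = planck(wavelength, surface_temp) * math.exp(-sum(layer_tau))
# Plus each layer's emission, attenuated on its way to space:
for tau, temp, above in zip(layer_tau, layer_temp, tau_above):
    emissivity = 1.0 - math.exp(-tau)       # a layer emits what it absorbs
    intensity_toa += planck(wavelength, temp) * emissivity * math.exp(-above)

print(f"I_lambda(TOA) ~ {intensity_toa:.3e} W m^-2 sr^-1 m^-1")
```

Integrating this over wavelength (and angle) gives the flux; line by line codes do exactly this, just with enormously more spectral and vertical detail.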

For the derivation see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. If you don’t agree it is correct then explain why.

[Note that other articles explain the basics. For example – The “Greenhouse” Effect Explained in Simple Terms, which has many links to other in depth articles].

If you don’t understand the equation you don’t understand the core of radiative atmospheric physics.

—-

2. Is this graphic with explanation from an undergraduate heat transfer textbook (Fundamentals of Heat and Mass Transfer, 6th edition, Incropera and DeWitt 2007) correct or not?

From "Fundamentals of Heat and Mass Transfer, 6th edition", Incropera and DeWitt (2007)

From “Fundamentals of Heat and Mass Transfer, 6th edition”, Incropera and DeWitt (2007)

You can see that radiation is emitted from a hot surface and absorbed by a cool surface. And that radiation is emitted from a cool surface and absorbed by a hot surface. More examples of this principle, including equations, in Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics – scanned pages from six undergraduate heat transfer textbooks (seven textbooks if we include the one added in comments after entertaining commenter Bryan suggested the first six were “cherry-picked” and offered his preferred textbook which had exactly the same equations).

—-

What I will be doing for the subset of new visitors with their amazing and confident insights is to send them to this article and ask for answers. In the past I have never been able to get a single member of this group to commit. The reason why is obvious.

But – if you don’t answer, your comments may never be published.

Once again, this is not designed to stop regular visitors asking questions. Most people interested in climate don’t understand equations, calculus, radiative physics or thermodynamics – and that is totally fine.

Call it censorship if it makes you sleep better at night.

Notes

Note 1 – I believe the theory is older than Chandrasekhar but I don’t have older references. It derives from basic emission (Planck), absorption (Beer-Lambert) and the first law of thermodynamics. Chandrasekhar published this in his 1950 book Radiative Transfer (the link is the 1960 reprint). This isn’t the “argument from authority”; I’m just pointing out that the theory has been long established. Punters are welcome to try and prove it wrong – just no one ever does.
