
In Part Four – The Thirty Year Myth we looked at the idea of climate as the “long term statistics” of weather. In one case, climate (= statistics of weather) has been arbitrarily defined over a 30-year period. In certain chaotic systems, “long term statistics” might be repeatable and reliable, but “long term” can’t be arbitrarily defined for convenience. Climate, when defined as the predictable statistics of weather, might just as well require 100,000 years (note 1).

I’ve had a question about the current approach to climate models for some time and found it difficult to articulate. In reading Broad range of 2050 warming from an observationally constrained large climate model ensemble, Daniel Rowlands et al, Nature (2012) I found an explanation that helps me clarify my question.

This paper by Rowlands et al is similar in approach to that of Stainforth et al 2005 – the idea of much larger ensembles of climate models. The Stainforth paper was discussed in the comments of Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes.

For new readers who want to understand a bit more about ensembles of models – take a look at Ensemble Forecasting.

Weather Forecasting

The basic idea behind ensembles for weather forecasts is that we have uncertainty about:

  • the initial conditions – because observations are not perfect
  • parameters in our model – because our understanding of the physics of weather is not perfect

So multiple simulations are run and the frequency of occurrence of, say, a severe storm tells us the probability that the severe storm will occur.

Given the short term nature of weather forecasts we can compare the frequency of occurrence of particular events with the % probability that our ensemble produced.

Let’s take an example to make it clear. Suppose the ensemble prediction of a severe storm in a certain area is 5%. The severe storm occurs. What can we make of the accuracy of our prediction? Well, we can’t deduce anything from that one event.

Why? Because we only had one occurrence.

Out of 1,000 future forecasts, the “5%ers” are going to occur 50 times – if we are right on the money with our probabilistic forecast. We need a lot of forecasts to be compared with a lot of results. Then we might find that 5%ers actually occur 20% of the time. Or only 1% of the time. Armed with this information we can a) try to improve our model because we know its deficiencies, and b) temper our ensemble forecast with our knowledge of how well it has historically predicted the 5%, 10% and 90% chances of occurrence.

This is exactly what currently happens with numerical weather prediction.

And if instead we run one simulation with our “best estimate” of initial conditions and parameters the results are not as good as the results from the ensemble.
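
To make this verification idea concrete, here is a minimal sketch in Python (with made-up forecast probabilities and outcomes – not real NWP data) of checking whether events forecast at 5%, 10%, 50% or 90% probability actually occur at those rates:

import numpy as np

# Hypothetical verification record: the probability the ensemble assigned to
# "severe storm" for each past forecast, and whether a storm occurred
rng = np.random.default_rng(0)
n_forecasts = 10000
forecast_prob = rng.choice([0.05, 0.10, 0.50, 0.90], size=n_forecasts)
# Pretend the real world behaves exactly as forecast (a perfectly reliable system)
occurred = rng.random(n_forecasts) < forecast_prob

# Compare forecast probability with observed frequency in each probability bin
for p in [0.05, 0.10, 0.50, 0.90]:
    in_bin = forecast_prob == p
    print(f"forecast {p:.0%}: observed frequency {occurred[in_bin].mean():.1%} "
          f"over {in_bin.sum()} forecasts")

With only a handful of forecasts the observed frequencies bounce around; with thousands they converge towards the forecast probabilities – which is why one 5% event occurring (or not) tells us nothing on its own.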

Climate Forecasting

The idea behind ensembles of climate forecasts is subtly different. Initial conditions are no help with predicting the long term statistics (aka “climate”). But we still have a lot of uncertainty over model physics and parameterizations. So we run ensembles of simulations with slightly different physics/parameterizations (see note 2).

Assuming our model is a decent representation of climate, there are three important points:

  1. we need to know the timescale of “predictable statistics”, given constant “external” forcings (e.g. anthropogenic GHG changes)
  2. we need to cover the real range of possible parameterizations
  3. the results we get from ensembles can, at best, only ever give us the probabilities of outcomes over a given time period

Item 1 was discussed in the last article. I have not been able to find any discussion of this timescale in climate science papers (that doesn’t mean there aren’t any – hopefully someone can point me to a discussion of this topic).

Item 2 is something that I believe climate scientists are very interested in. The limitation has been, and still is, the computing power required.

Item 3 is what I want to discuss in this article, around the paper by Rowlands et al.

Rowlands et al 2012

In the latest generation of coupled atmosphere–ocean general circulation models (AOGCMs) contributing to the Coupled Model Intercomparison Project phase 3 (CMIP-3), uncertainties in key properties controlling the twenty-first century response to sustained anthropogenic greenhouse-gas forcing were not fully sampled, partially owing to a correlation between climate sensitivity and aerosol forcing, a tendency to overestimate ocean heat uptake and compensation between short-wave and long-wave feedbacks.

This complicates the interpretation of the ensemble spread as a direct uncertainty estimate, a point reflected in the fact that the ‘likely’ (>66% probability) uncertainty range on the transient response was explicitly subjectively assessed as −40% to +60% of the CMIP-3 ensemble mean for global-mean temperature in 2100, in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). The IPCC expert range was supported by a range of sources, including studies using pattern scaling, ensembles of intermediate-complexity models, and estimates of the strength of carbon-cycle feedbacks. From this evidence it is clear that the CMIP-3 ensemble, which represents a valuable expression of plausible responses consistent with our current ability to explore model structural uncertainties, fails to reflect the full range of uncertainties indicated by expert opinion and other methods..

..Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing. Here we investigate uncertainties in the twenty-first century transient response in a multi-thousand-member ensemble of transient AOGCM simulations from 1920 to 2080 using HadCM3L, a version of the UK Met Office Unified Model, as part of the climateprediction.net British Broadcasting Corporation (BBC) climate change experiment (CCE). We generate ensemble members by perturbing the physics in the atmosphere, ocean and sulphur cycle components, with transient simulations driven by a set of natural forcing scenarios and the SRES A1B emissions scenario, and also control simulations to account for unforced model drifts.

[Emphasis added]. So this project runs a much larger ensemble than the CMIP3 models produced for AR4.

Figure 1 shows the evolution of global-mean surface temperatures in the ensemble (relative to 1961–1990), each coloured by the goodness-of-fit to observations of recent surface temperature changes, as detailed below.

From Rowlands et al 2012

The raw ensemble range (1.1–4.2 K around 2050), primarily driven by uncertainties in climate sensitivity (Supplementary Information), is potentially misleading because many ensemble members have an unrealistic response to the forcing over the past 50 years.

[Emphasis added]

And later in the paper:

..On the assumption that models that simulate past warming realistically are our best candidates for making estimates of the future..

So here’s my question:

If model simulations give us probabilistic forecasts of future climate, why are climate model simulations “compared” with the averaged “weather” of the last few years – with those that don’t match up well rejected or devalued?

It seems like an obvious thing to do, of course. But current averaged weather might be in the top 10% or the bottom 10% of probabilities. We have no way of knowing.

Let’s say that the current 10-year average of GMST = 13.7ºC (I haven’t looked up the right value).

Suppose for the given “external” conditions (solar output and latitudinal distribution, GHG concentration) the “climate” – i.e., the real long term statistics of weather – has an average of 14.5ºC, with a standard deviation for any 10-year period of 0.5ºC. That is, 95% of 10-year periods would lie inside 13.5 – 15.5ºC (2 std deviations).

If we run a lot of simulations (and they truly represent the climate) then of course we expect 5% to be outside 13.5 – 15.5ºC. If we reject that 5% as being “unrealistic of current climate”, we’ve arbitrarily and incorrectly reduced the spread of our ensemble.

If we assume that “current averaged weather” – at 13.7ºC – represents reality then we might bias our results even more, depending on the standard deviation that we calculate or assume. We might accept outliers of 13.0ºC because they are closer to our observable and reject good simulations of 15.0ºC because they are more than two standard deviations from our observable (note 3).
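
Here is a minimal sketch of that concern, using the made-up numbers above – a “true” climate with a mean of 14.5ºC and a standard deviation of 0.5ºC for 10-year averages, and a single observed decade at 13.7ºC:

import numpy as np

rng = np.random.default_rng(1)
true_mean, sigma = 14.5, 0.5   # assumed "real" statistics of 10-year averages
observed = 13.7                # the one decade we happen to have observed

# A perfect ensemble: every member is a valid realization of a 10-year average
members = rng.normal(true_mean, sigma, size=100000)

# Screen members by closeness to the single observed decade (within 2 sigma)
kept = members[np.abs(members - observed) < 2 * sigma]

print(f"full ensemble:     mean {members.mean():.2f} C, spread (std) {members.std():.2f} C")
print(f"screened ensemble: mean {kept.mean():.2f} C, spread (std) {kept.std():.2f} C")
print(f"fraction rejected: {1 - kept.size / members.size:.1%}")

Even though every member was drawn from the “true” climate, screening against one observed decade pulls the ensemble mean towards the observation and shrinks the spread – exactly the bias described above.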

The whole point of running an ensemble of simulations is to find out what the spread is, given our current understanding of climate physics.

Let me give another example. One theory of El Nino initiation is that it is essentially a random process that occurs during certain favorable conditions. Now we might have a model that reproduced an El Nino starting in 1998 and 10 models that reproduced El Ninos starting in other years. Do we promote the model that “predicted in retrospect” the 1998 El Nino and demote/reject the others? No. We might actually be rejecting better models. We would need to look at the statistics of lots of El Ninos to decide.

Kiehl 2007 & Knutti 2008

Here are a couple of papers that don’t articulate the point of view of this article – however, they do comment on the uncertainties in parameter space from a different and yet related perspective.

First, Kiehl 2007:

Methods of testing these models with observations form an important part of model development and application. Over the past decade one such test is our ability to simulate the global anomaly in surface air temperature for the 20th century.. Climate model simulations of the 20th century can be compared in terms of their ability to reproduce this temperature record. This is now an established necessary test for global climate models.

Of course this is not a sufficient test of these models and other metrics should be used to test models..

..A review of the published literature on climate simulations of the 20th century indicates that a large number of fully coupled three dimensional climate models are able to simulate the global surface air temperature anomaly with a good degree of accuracy [Houghton et al., 2001]. For example all models simulate a global warming of 0.5 to 0.7°C over this time period to within 25% accuracy. This is viewed as a reassuring confirmation that models to first order capture the behavior of the physical climate system..

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5°C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.

Second, Why are climate models reproducing the observed global surface warming so well? Knutti (2008):

The agreement between the CMIP3 simulated and observed 20th century warming is indeed remarkable. But do the current models simulate the right magnitude of warming for the right reasons? How much does the agreement really tell us?

Kiehl [2007] recently showed a correlation of climate sensitivity and total radiative forcing across an older set of models, suggesting that models with high sensitivity (strong feedbacks) avoid simulating too much warming by using a small net forcing (large negative aerosol forcing), and models with weak feedbacks can still simulate the observed warming with a larger forcing (weak aerosol forcing).

Climate sensitivity, aerosol forcing and ocean diffusivity are all uncertain and relatively poorly constrained from the observed surface warming and ocean heat uptake [e.g., Knutti et al., 2002; Forest et al., 2006]. Models differ because of their underlying assumptions and parameterizations, and it is plausible that choices are made based on the model’s ability to simulate observed trends..

..Models, therefore, simulate similar warming for different reasons, and it is unlikely that this effect would appear randomly. While it is impossible to know what decisions are made in the development process of each model, it seems plausible that choices are made based on agreement with observations as to what parameterizations are used, what forcing datasets are selected, or whether an uncertain forcing (e.g., mineral dust, land use change) or feedback (indirect aerosol effect) is incorporated or not.

..Second, the question is whether we should be worried about the correlation between total forcing and climate sensitivity. Schwartz et al. [2007] recently suggested that ‘‘the narrow range of modelled temperatures [in the CMIP3 models over the 20th century] gives a false sense of the certainty that has been achieved’’. Because of the good agreement between models and observations and compensating effects between climate sensitivity and radiative forcing (as shown here and by Kiehl [2007]) Schwartz et al. [2007] concluded that the CMIP3 models used in the most recent Intergovernmental Panel on Climate Change (IPCC) report [IPCC, 2007] ‘‘may give a false sense of their predictive capabilities’’.

Here I offer a different interpretation of the CMIP3 climate models. They constitute an ‘ensemble of opportunity’, they share biases, and probably do not sample the full range of uncertainty [Tebaldi and Knutti, 2007; Knutti et al., 2008]. The model development process is always open to influence, conscious or unconscious, from the participants’ knowledge of the observed changes. It is therefore neither surprising nor problematic that the simulated and observed trends in global temperature are in good agreement.

Conclusion

The idea that climate models should all reproduce global temperature anomalies over a 10-year or 20-year or 30-year time period presupposes that we know either:

a) climate, as the long term statistics of weather, can be reliably obtained over these time periods. Remember that with a simple chaotic system where we have “deity like powers” we can simulate the results and find the time period over which the statistics are reliable.

or

b) climate, as the 10-year (or 20-year or 30-year) statistics of weather is tightly constrained within a small range, to a high level of confidence, and therefore we can reject climate model simulations that fall outside this range.

Given that Rowlands et al 2012 is attempting to better sample climate uncertainty with a larger ensemble, it’s clear that this answer is not known in advance.

There are a lot of uncertainties in climate simulation. Constraining models to match the past may be under-sampling the actual range of climate variability.

Models are not reality. But if we accept that climate simulation is, at best, a probabilistic endeavor, then we must sample what the models produce, rather than throwing out results that don’t match the last 100 years of recorded temperature history.

References

Broad range of 2050 warming from an observationally constrained large climate model ensemble, Daniel Rowlands et al, Nature (2012) – free paper

Uncertainty in predictions of the climate response to rising levels of greenhouse gases, Stainforth et al, Nature (2005) – free paper

Why are climate models reproducing the observed global surface warming so well? Reto Knutti, GRL (2008) – free paper

Twentieth century climate model response and climate sensitivity, Jeffrey T Kiehl, GRL (2007) – free paper

Notes

Note 1: We are using the ideas that have been learnt from simple chaotic systems, like the Lorenz 1963 model. There is discussion of this in Part One and Part Two of this series. As some commenters have pointed out, that doesn’t mean the climate works in the same way as these simple systems – it is much more complex.

The starting point is that weather is unpredictable. With modern numerical weather prediction (NWP) on current supercomputers we can get good forecasts 1 week ahead. But beyond that we might as well use the average value for that month in that location, measured over the last decade. It’s going to be better than a forecast from NWP.

The idea behind climate prediction is that even though picking the weather 8 weeks from now is a no-hoper, what we have learnt from simple chaotic systems is that the statistics of many chaotic systems can be reliably predicted.

Note 2: Models are run with different initial conditions as well. My only way of understanding this from a theoretical point of view (i.e., from anything other than a “practical” or “this is how we have always done it” approach) is to see different initial conditions as comparable to one model run over a much longer period.

That is, if climate is not an “initial value problem”, why are initial values changed in each ensemble member in order to generate the spread of climate model output? Running 10 simulations of the same model for 100 years, each with different initial conditions, should be equivalent to running one simulation for 1,000 years.

Well, that is not necessarily true because that 1,000 years might not sample the complete “attractor space”, which is the same point discussed in the last article.
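
For what it’s worth, here is a rough sketch of that comparison for the Lorenz 1963 model – ten shorter runs from perturbed initial conditions against one long run. The run lengths, the size of the perturbations and the discarded transient are all arbitrary choices for illustration:

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, s=10.0, r=28.0, b=8.0/3.0):
    """The Lorenz 1963 equations."""
    x, y, z = state
    return [s * (y - x), x * (r - z) - y, x * y - b * z]

def x_series(x0, t_end):
    """Integrate and return x(t), discarding the first 10% as a transient."""
    t = np.arange(0.0, t_end, 0.01)
    sol = solve_ivp(lorenz, (0.0, t_end), x0, t_eval=t, rtol=1e-8, atol=1e-8)
    return sol.y[0][len(t) // 10:]

rng = np.random.default_rng(2)

long_run = x_series([1.0, 1.0, 1.0], 1000.0)                     # one long run
short_runs = [x_series([1.0 + rng.normal(0, 0.1), 1.0, 1.0], 100.0)
              for _ in range(10)]                                # ten short runs
ensemble = np.concatenate(short_runs)

print(f"one long run:    mean x = {long_run.mean():+.3f}, std x = {long_run.std():.3f}")
print(f"10-run ensemble: mean x = {ensemble.mean():+.3f}, std x = {ensemble.std():.3f}")

For this simple system the two sets of statistics come out similar – but only because both sampling strategies have visited enough of the attractor, which is the caveat in the paragraph above.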

Note 3: Models are usually compared to observations via temperature anomalies rather than via actual temperatures, see Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes. The example was given for simplicity.

In Part Three we looked at attribution in the early work on this topic by Hegerl et al 1996. I started to write Part Four as the follow up on Attribution as explained in the 5th IPCC report (AR5), but got caught up in the many volumes of AR5.

And instead for this article I decided to focus on what might seem like an obscure point. I hope readers stay with me because it is important.

Here is a graphic from chapter 11 of IPCC AR5:

From IPCC AR5 Chapter 11

Figure 1

And in the introduction, chapter 1:

Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The relevant quantities are most often surface variables such as temperature, precipitation and wind.

Classically the period for averaging these variables is 30 years, as defined by the World Meteorological Organization.

Climate in a wider sense also includes not just the mean conditions, but also the associated statistics (frequency, magnitude, persistence, trends, etc.), often combining parameters to describe phenomena such as droughts. Climate change refers to a change in the state of the climate that can be identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer.

[Emphasis added].

Weather is an Initial Value Problem, Climate is a Boundary Value Problem

The idea is fundamental, the implementation is problematic.

As explained in Natural Variability and Chaos – Two – Lorenz 1963, there are two key points about a chaotic system:

  1. With even a minute uncertainty in the initial starting condition, the predictability of future states is very limited
  2. Over a long time period the statistics of the system are well-defined

(Being technical, the statistics are well-defined in a transitive system).

So in essence, we can’t predict the exact state of the future – from the current conditions – beyond a certain timescale which might be quite small. In fact, in current weather prediction this time period is about one week.

After a week we might as well say either “the weather on that day will be the same as now” or “the weather on that day will be the climatological average” – and either of these will be better than trying to predict the weather based on the initial state.

No one disagrees on this first point.

In current climate science and meteorology the term used is the skill of the forecast. Skill means not how good the forecast is, but how much better it is than a naive approach like, “it’s July in New York City so the maximum air temperature today will be 28ºC”.
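
As a simple illustration (not any particular forecast centre’s definition), one common way to put a number on skill is a mean-square-error skill score relative to the naive climatology forecast:

import numpy as np

def skill_score(forecast, observed, naive):
    """1 = perfect, 0 = no better than the naive forecast, negative = worse."""
    mse_forecast = np.mean((forecast - observed) ** 2)
    mse_naive = np.mean((naive - observed) ** 2)
    return 1.0 - mse_forecast / mse_naive

# Made-up July maximum temperatures for New York City (deg C)
observed = np.array([27.0, 30.5, 29.0, 26.5, 31.0])
forecast = np.array([27.5, 30.0, 28.0, 27.0, 30.0])
climatology = np.full_like(observed, 28.0)        # "it's July, so 28C"

print(f"skill vs climatology: {skill_score(forecast, observed, climatology):.2f}")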

What happens in practice, as can be seen in the simple Lorenz system shown in Part Two, is that a tiny uncertainty about the starting condition gets amplified. Two almost identical starting conditions will diverge rapidly – the “butterfly effect”. Eventually the two trajectories are no more alike than either one is to a state chosen at random from the future.

The wide divergence doesn’t mean that the future state can be anything. Here’s an example from the simple Lorenz system for three slightly different initial conditions:

Lorenz 1963 model: x vs. time for three slightly different initial conditions

Figure 2

We can see that the three conditions that looked identical for the first 20 seconds (see figure 2 in Part Two) have diverged. The values are bounded but at any given time we can’t predict what the value will be.
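
A minimal sketch of the same behaviour (not the exact setup behind figure 2 – the parameters and perturbation size here are just illustrative): three initial conditions differing in x by one part in a million, integrated forward and compared at a few times.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, s=10.0, r=28.0, b=8.0/3.0):
    """The Lorenz 1963 equations."""
    x, y, z = state
    return [s * (y - x), x * (r - z) - y, x * y - b * z]

t = np.arange(0.0, 60.0, 0.01)
runs = [solve_ivp(lorenz, (0.0, 60.0), [1.0 + i * 1e-6, 1.0, 1.0],
                  t_eval=t, rtol=1e-9, atol=1e-9).y[0]
        for i in range(3)]

for t_check in (10, 20, 30, 40, 50):
    i = int(t_check / 0.01)
    print(f"t = {t_check:2d}: x = " + ", ".join(f"{run[i]:7.2f}" for run in runs))

Early on the three x values agree; later they differ completely, yet all three stay within the same bounded range – divergence of trajectories, not divergence of the statistics.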

On the second point – the statistics of the system – there is a tiny hiccup.

But first let’s review what is agreed upon. Climate is the statistics of weather. Weather is unpredictable more than a week ahead. Climate, as the statistics of weather, might be predictable. That is, just because weather is unpredictable, it doesn’t mean (or prove) that climate is also unpredictable.

This is what we find with simple chaotic systems.

So in the endeavor of climate modeling the best we can hope for is a probabilistic forecast. We have to run “a lot” of simulations and review the statistics of the parameter we are trying to measure.

To give a concrete example, we might determine from model simulations that the mean sea surface temperature in the western Pacific (between a certain latitude and longitude) in July has a mean of 29ºC with a standard deviation of 0.5ºC, while for a certain part of the north Atlantic it is 6ºC with a standard deviation of 3ºC. In the first case the spread of results tells us – if we are confident in our predictions – that we know the western Pacific SST quite accurately, but the north Atlantic SST has a lot of uncertainty. We can’t do anything about the model spread. In the end, the statistics are knowable (in theory), but the actual value on a given day or month or year is not.

Now onto the hiccup.

With “simple” chaotic systems that we can perfectly model (note 1) we don’t know in advance the timescale of “predictable statistics”. We have to run lots of simulations over long time periods until the statistics converge on the same result. If we have parameter uncertainty (see Ensemble Forecasting) this means we also have to run simulations over the spread of parameters.

Here’s my suggested alternative of the initial value vs boundary value problem:

Suggested replacement for AR5, Box 11.1, Figure 2

Figure 3

So one body made an ad hoc definition of climate as the 30-year average of weather.

If this definition is correct and accepted then “climate” is not a “boundary value problem” at all. Climate is an initial value problem and therefore a massive problem given our ability to forecast only one week ahead.

Suppose, equally reasonably, that the statistics of weather (=climate), given constant forcing (note 2), are predictable over a 10,000 year period.

In that case we can be confident that, with near perfect models, we have the ability to be confident about the averages, standard deviations, skews, etc of the temperature at various locations on the globe over a 10,000 year period.

Conclusion

The fact that chaotic systems exhibit certain behavior doesn’t mean that 30-year statistics of weather can be reliably predicted.

30-year statistics might be just as dependent on the initial state as the weather three weeks from today.

Notes

Note 1: The climate system is obviously imperfectly modeled by GCMs, and this will always be the case. The advantage of a simple model is we can state that the model is a perfect representation of the system – it is just a definition for convenience. It allows us to evaluate how slight changes in initial conditions or parameters affect our ability to predict the future.

The IPCC report also has continual reminders that the model is not reality, for example, chapter 11, p. 982:

For the remaining projections in this chapter the spread among the CMIP5 models is used as a simple, but crude, measure of uncertainty. The extent of agreement between the CMIP5 projections provides rough guidance about the likelihood of a particular outcome. But — as partly illustrated by the discussion above — it must be kept firmly in mind that the real world could fall outside of the range spanned by these particular models.

[Emphasis added].

Chapter 1, p.138:

Model spread is often used as a measure of climate response uncertainty, but such a measure is crude as it takes no account of factors such as model quality (Chapter 9) or model independence (e.g., Masson and Knutti, 2011; Pennell and Reichler, 2011), and not all variables of interest are adequately simulated by global climate models..

..Climate varies naturally on nearly all time and space scales, and quantifying precisely the nature of this variability is challenging, and is characterized by considerable uncertainty.

I haven’t yet been able to determine how these firmly noted and challenging uncertainties have been factored into the quantification of 95-100%, 99-100%, etc, in the various chapters of the IPCC report.

Note 2: There are some complications with defining exactly what system is under review. For example, do we take the current solar output, current obliquity, precession and eccentricity as fixed? If so, then any statistics will be calculated for a condition that will anyway be changing. Alternatively, we can take these values as changing inputs in so far as we know the changes – which is true for obliquity, precession and eccentricity but not for solar output.

The details don’t really alter the main point of this article.

I’ve been somewhat sidetracked on this series, mostly by starting up a company and having no time, but also by the voluminous distractions of IPCC AR5. The subject of attribution could be a series by itself but as I started the series Natural Variability and Chaos it makes sense to weave it into that story.

In Part One and Part Two we had a look at chaotic systems and what that might mean for weather and climate. I was planning to develop those ideas a lot more before discussing attribution, but anyway..

AR5, Chapter 10: Attribution is 85 pages on the idea that the changes over the last 50 or 100 years in mean surface temperature – and also some other climate variables – can be attributed primarily to anthropogenic greenhouse gases.

The technical side of the discussion fascinated me, but has a large statistical component. I’m a rookie with statistics, and maybe because of this, I’m often suspicious about statistical arguments.

Digression on Statistics

The foundation of a lot of statistics is the idea of independent events. For example, spin a roulette wheel and you get a number between 0 and 36 and a color that is red, black – or if you’ve landed on a zero, neither.

The statistics are simple – each spin of the roulette wheel is an independent event – that is, it has no relationship with the last spin of the roulette wheel. So, looking ahead, what is the chance of getting 5 two times in a row? The answer (with a 0 only and no “00” as found in some roulette tables) is 1/37 x 1/37 = 0.073%.

However, after you have spun the roulette wheel and got a 5, what is the chance of a second 5? It’s now just 1/37 = 2.7%. The past has no impact on the future statistics. Most of real life doesn’t correspond particularly well to this idea, apart from playing games of chance like poker and so on.

I was in the gym the other day and although I try and drown it out with music from my iPhone, the Travesty (aka “the News”) was on some of the screens in the gym – with text of the “high points” on the screen aimed at people trying to drown out the annoying travestyreaders. There was a report that a new study had found that autism was caused by “Cause X” – I have blanked it out to avoid any unpleasant feeling for parents of autistic kids – or people planning on having kids who might worry about “Cause X”.

It did get me thinking – if you have let’s say 10,000 potential candidates for causing autism, and you set the bar at 95% probability of rejecting the hypothesis that a given potential cause is a factor, what is the outcome? Well, if there is a random spread of autism among the population with no actual cause (let’s say it is caused by a random genetic mutation with no link to any parental behavior, parental genetics or the environment) then you will expect to find about 500 “statistically significant” factors for autism simply by testing at the 95% level. That’s 500, when none of them are actually the real cause. It’s just chance. Plenty of fodder for pundits though.
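
A quick sketch of that arithmetic – 10,000 candidate causes, no real effect anywhere, each tested at the 95% level (the sample size is arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_candidates = 10000
n_per_group = 200            # hypothetical sample size for each comparison

false_positives = 0
for _ in range(n_candidates):
    # Exposure to this candidate "cause" has no relationship at all with the outcome
    exposed = rng.normal(0.0, 1.0, n_per_group)
    unexposed = rng.normal(0.0, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(exposed, unexposed)
    if p_value < 0.05:
        false_positives += 1

print(f"'statistically significant' causes found: {false_positives} (roughly 500 expected)")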

That’s one problem with statistics – the answer you get unavoidably depends on your frame of reference.

The questions I have about attribution are unrelated to this specific point about statistics, but there are statistical arguments in the attribution field that seem fatally flawed. Luckily I’m a statistical novice so no doubt readers will set me straight.

On another unrelated point about statistical independence, only slightly more relevant to the question at hand, Pirtle, Meyer & Hamilton (2010) said:

In short, we note that GCMs are commonly treated as independent from one another, when in fact there are many reasons to believe otherwise. The assumption of independence leads to increased confidence in the ‘‘robustness’’ of model results when multiple models agree. But GCM independence has not been evaluated by model builders and others in the climate science community. Until now the climate science literature has given only passing attention to this problem, and the field has not developed systematic approaches for assessing model independence.

.. end of digression

Attribution History

In my efforts to understand Chapter 10 of AR5 I followed up on a lot of references and ended up winding my way back to Hegerl et al 1996.

Gabriele Hegerl is one of the lead authors of Chapter 10 of AR5, was one of the two coordinating lead authors of the Attribution chapter of AR4, and one of four lead authors on the relevant chapter of AR3 – and of course has a lot of papers published on this subject.

As is often the case, I find that to understand a subject you have to start with a focus on the earlier papers because the later work doesn’t make a whole lot of sense without this background.

This paper by Hegerl and her colleagues uses the work of one of the co-authors, Klaus Hasselmann – his 1993 paper “Optimal fingerprints for detection of time dependent climate change”.

Fingerprints, by the way, seems like a marketing term. Fingerprints evokes the idea that you can readily demonstrate that John G. Doe of 137 Smith St, Smithsville was at least present at the crime scene and there is no possibility of confusing his fingerprints with John G. Dode who lives next door even though their mothers could barely tell them apart.

This kind of attribution is more in the realm of “was it the 6ft bald white guy or the 5’5″ black guy”?

Well, let’s set aside questions of marketing and look at the details.

Detecting GHG Climate Change with Optimal Fingerprint Methods in 1996

The essence of the method is to compare observations (measurements) with:

  • model runs with GHG forcing
  • model runs with “other anthropogenic” and natural forcings
  • model runs with internal variability only

Then based on the fit you can distinguish one from the other. The statistical basis is covered in detail in Hasselmann 1993 and more briefly in this paper: Hegerl et al 1996 – both papers are linked below in the References.

At this point I make another digression.. as regular readers know I am fully convinced that the increases in CO2, CH4 and other GHGs over the past 100 years or more can be very well quantified into “radiative forcing” and I am 100% in agreement with the IPCC’s summary of the work of atmospheric physics over the last 50 years on this topic. That is, the increases in GHGs have led to something like a “radiative forcing” of 2.8 W/m² [corrected, thanks to niclewis].

And there isn’t any scientific basis for disputing this “pre-feedback” value. It’s simply the result of basic radiative transfer theory, well-established, and well-demonstrated in observations both in the lab and through the atmosphere. People confused about this topic are confused about science basics and comments to the contrary may be allowed or more likely will be capriciously removed due to the fact that there have been more than 50 posts on this topic (post your comments on those instead). See The “Greenhouse” Effect Explained in Simple Terms and On Uses of A 4 x 2: Arrhenius, The Last 15 years of Temperature History and Other Parodies.

Therefore, it’s “very likely” that the increases in GHGs over the last 100 years have contributed significantly to the temperature changes that we have seen.

To say otherwise – and still accept physics basics – means believing that the radiative forcing has been “mostly” cancelled out by feedbacks while internal variability has been amplified by feedbacks to cause a significant temperature change.

Yet this work on attribution seems to be fundamentally flawed.

Here was the conclusion:

We find that the latest observed 30-year trend pattern of near-surface temperature change can be distinguished from all estimates of natural climate variability with an estimated risk of less than 2.5% if the optimal fingerprint is applied.

With caveats that, to me, eliminated the statistical basis of the previous statement:

The greatest uncertainty of our analysis is the estimate of the natural variability noise level..

..The shortcomings of the present estimates of natural climate variability cannot be readily overcome. However, the next generation of models should provide us with better simulations of natural variability. In the future, more observations and paleoclimatic information should yield more insight into natural variability, especially on longer timescales. This would enhance the credibility of the statistical test.

Earlier in the paper the authors said:

..However, it is generally believed that models reproduce the space-time statistics of natural variability on large space and long time scales (months to years) reasonably realistic. The verification of variability of CGMCs [coupled GCMs] on decadal to century timescales is relatively short, while paleoclimatic data are sparce and often of limited quality.

..We assume that the detection variable is Gaussian with zero mean, that is, that there is no long-term nonstationarity in the natural variability.

[Emphasis added].

The climate models used would be considered rudimentary by today’s standards. Three different coupled atmosphere-ocean GCMs were used. However, each of them required “flux corrections”.

This method was pretty much the standard until the post 2000 era. The climate models “drifted”, unless, in deity-like form, you topped up (or took out) heat and momentum from various grid boxes.

That is, the models themselves struggled (in 1996) to represent climate unless the climate modeler knew, and corrected for, the long term “drift” in the model.

Conclusion

In the next article we will look at more recent work in attribution and fingerprints and see whether the field has developed.

But in this article we see that the conclusion of an attribution study in 1996 was that there was only a “2.5% chance” that recent temperature changes could be attributed to natural variability. At the same time, the question of how accurate the models were in simulating natural variability was noted but never quantified. And the models were all “flux corrected”. This means that some aspects of the long term statistics of climate were considered to be known – in advance.

So I find it difficult to accept any statistical significance in the study at all.

If the finding instead was introduced with the caveat “assuming the accuracy of our estimates of long term natural variability of climate is correct..” then I would probably be quite happy with the finding. And that question is the key.

The question should be:

What is the likelihood that climate models accurately represent the long-term statistics of natural variability?

  • Virtually certain
  • Very likely
  • Likely
  • About as likely as not
  • Unlikely
  • Very unlikely
  • Exceptionally unlikely

So far I am yet to run across a study that poses this question.

References

Bindoff, N.L., et al, 2013: Detection and Attribution of Climate Change: from Global to Regional. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change

Detecting greenhouse gas induced climate change with an optimal fingerprint method, Hegerl, von Storch, Hasselmann, Santer, Cubasch & Jones, Journal of Climate (1996)

What does it mean when climate models agree? A case for assessing independence among general circulation models, Zachary Pirtle, Ryan Meyer & Andrew Hamilton, Environ. Sci. Policy (2010)

Optimal fingerprints for detection of time dependent climate change, Klaus Hasselmann, Journal of Climate (1993)

In Latent heat and Parameterization I showed a formula for calculating latent heat transfer from the surface into the atmosphere, as well as the “real” formula. The parameterized version has horizontal wind speed x humidity difference (between the surface and some reference height in the atmosphere, typically 10m) x “a coefficient”.

One commenter asked:

Why do we expect that vertical transport of water vapor to vary linearly with horizontal wind speed? Is this standard turbulent mixing?

The simple answer is “almost yes”. But as someone famously said, make it simple, but not too simple.

Charting a course between too simple and too hard is a challenge with this subject. By contrast, radiative physics is a cakewalk. I’ll begin with some preamble and eventually get to the destination.

There’s a set of equations describing the motion of fluids – the Navier-Stokes equations – which conserve momentum in 3 directions (x,y,z), together with an equation conserving mass. Then there are also equations to conserve humidity and heat. In principle these equations completely describe the flow, but there is a bit of a problem in practice. The Navier-Stokes equations in a rotating frame can be seen in The Coriolis Effect and Geostrophic Motion under “Some Maths”.

Simple linear equations with simple boundary conditions can be re-arranged and you get a nice formula for the answer. Then you can plot this against that and everyone can see how the relationships change with different material properties or boundary conditions. In real life the equations are not linear and the boundary conditions are not simple. So there is no “analytical solution”, where we want to know, say, the velocity of the fluid in the east-west direction as a function of time and get a nice equation for the answer. Instead we have to use numerical methods.

Let’s take a simple problem – if you want to know heat flow through an odd-shaped metal plate that is heated in one corner and cooled by steady air flow on the rest of its surface you can use these numerical methods and usually get a very accurate answer.

Turbulence is a lot more difficult due to the range of scales involved. Here’s a nice image of turbulence:

Figure 1

There is a cascade of energy from the largest scales down to the point where viscosity “eats up” the kinetic energy. In the atmosphere this is the sub 1mm scale. So if you want to accurately numerically model atmospheric motion across a 100km scale you need a grid of probably 100,000,000 x 100,000,000 x 10,000,000 points, solved sub-second for a few days. Well, that’s a lot of calculation. I’m not sure where turbulence modeling via “direct numerical simulation” has got to, but I’m pretty sure it is still too hard and in a decade it will still be a long way off. The computing power isn’t there.

Anyway, for atmospheric modeling you don’t really want to know the velocity in the x,y,z direction (usually annotated as u,v,w) at trillions of points every second. Who is going to dig through that data? What you want is a statistical description of the key features.

So if we take the Navier-Stokes equation and average, what do we get? We get a problem.

For the mathematically inclined the following is obvious, but of course many readers aren’t, so here’s a simple example:

Let’s take 3 numbers: 1, 10, 100:   the average = (1+10+100)/3 = 37.

Now let’s look at the square of those numbers: 1, 100, 10000:  the average of the square of those numbers = (1+100+10000)/3 = 3367.

But if we take the average of our original numbers and square it, we get 37² = 1369. It’s strange but the average squared is not the same as the average of the squared numbers. That’s non-linearity for you.

In the Navier Stokes equations we have values like east velocity x upwards velocity, written as uw. The average of uw, written as \overline{uw} is not equal to the average of u x the average of w, written as \overline{u}.\overline{w}. For the same reason we just looked at.

When we create the Reynolds averaged Navier-Stokes (RANS) equations we get lots of new terms like \overline{uw}. That is, we started with the original equations which gave us a complete solution – the same number of equations as unknowns. But when we average we end up with more unknowns than equations.

It’s like saying x + y = 1, what is x and y? No one can say. Perhaps 1 & 0. Perhaps 1000 & -999.

Digression on RANS for Slightly Interested People

The Reynolds approach is to take a value like u,v,w (velocity in 3 directions) and decompose into a mean and a “rapidly varying” turbulent component.

So u = \overline{u} + u', where \overline{u} = mean value;  u’ = the varying component. So \overline{u'} = 0. Likewise for the other directions.

And \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'}

So in the original equation where we have a term like u . \frac{\partial u}{\partial x}, it turns into  (\overline{u} + u') . \frac{\partial (\overline{u} + u')}{\partial x}, which, when averaged, becomes:

\overline{u} . \frac{\partial \overline{u}}{\partial x} + \overline{u' . \frac{\partial u'}{\partial x}}

So 2 unknowns instead of 1. The first term is the averaged flow, the second term is the turbulent flow. (Well, it’s an advection term for the change in velocity following the flow)

When we look at the conservation of energy equation we end up with terms for the movement of heat upwards due to average flow (almost zero) and terms for the movement of heat upwards due to turbulent flow (often significant). That is, a term like \overline{\theta'w'} which is “the mean of potential temperature variations x upwards eddy velocity”.

Or, in plainer English, how heat gets moved up by turbulence.
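
A tiny numerical check of that decomposition, with made-up velocity samples (a 5 m/s mean horizontal wind, a near-zero mean vertical wind, and correlated fluctuations):

import numpy as np

rng = np.random.default_rng(4)
n = 100000

# Made-up turbulent fluctuations, with w' partly correlated with u'
u_prime = rng.normal(0.0, 1.0, n)
w_prime = 0.5 * u_prime + rng.normal(0.0, 1.0, n)

u = 5.0 + u_prime            # horizontal velocity: mean flow plus eddies
w = 0.01 + w_prime           # vertical velocity: tiny mean plus eddies

mean_uw = np.mean(u * w)
mean_u_mean_w = np.mean(u) * np.mean(w)
eddy_term = np.mean((u - u.mean()) * (w - w.mean()))

print(f"mean(uw)              = {mean_uw:.3f}")
print(f"mean(u) x mean(w)     = {mean_u_mean_w:.3f}")
print(f"eddy term mean(u'w')  = {eddy_term:.3f}")
print(f"sum of the last two   = {mean_u_mean_w + eddy_term:.3f}")

The first and last numbers agree exactly – the identity \overline{uw} = \overline{u} . \overline{w} + \overline{u'w'} used above – and most of \overline{uw} comes from the eddy term, not from the mean vertical flow.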

..End of Digression

Closure and the Invention of New Ideas

“Closure” is a maths term. To “close the equations” when we have more unknowns than equations means we have to invent a new idea. Some geniuses like Reynolds, Prandtl and Kolmogoroff did come up with some smart new ideas.

Often the smart ideas are around “dimensionless terms” or “scaling terms”. The first time you encounter these ideas they seem odd or just plain crazy. But like everything, over time strange ideas start to seem normal.

The Reynolds number is probably the simplest to get used to. The Reynolds number seeks to relate fluid flows to other similar fluid flows. You can have fluid flow through a massive pipe that is identical in the way turbulence forms to that in a tiny pipe – so long as the viscosity and density change accordingly.

The Reynolds number, Re = density x length scale x mean velocity of the fluid / viscosity

And regardless of the actual physical size of the system and the actual velocity, turbulence forms for flow over a flat plate when the Reynolds number is about 500,000. By the way, for the atmosphere and ocean this is true most of the time.
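
For example, a rough calculation with approximate textbook values for air (density ~1.2 kg/m³, dynamic viscosity ~1.8×10⁻⁵ kg/(m·s)):

def reynolds_number(density, velocity, length, viscosity):
    """Re = density x velocity x length scale / dynamic viscosity."""
    return density * velocity * length / viscosity

rho, mu = 1.2, 1.8e-5        # air near the surface

print(f"5 m/s wind over a 10 m scale:   Re = {reynolds_number(rho, 5.0, 10.0, mu):.1e}")
print(f"0.1 m/s flow over a 1 cm scale: Re = {reynolds_number(rho, 0.1, 0.01, mu):.1e}")

The first is well above the ~500,000 threshold, the second is far below it – which is why atmospheric and oceanic flows are turbulent most of the time.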

Kolmogoroff came up with an idea in 1941 about the turbulent energy cascade using dimensional analysis and came to the conclusion that the energy of eddies increases with their size to the power 2/3 (in the “inertial subrange”). This is usually written vs frequency where it becomes a -5/3 power. Here’s a relatively recent experimental verification of this power law.

From Durbin & Reif 2010

 Figure 2

In a less genius-like manner, people measure stuff and use these measured values to “close the equations” for “similar” circumstances. Unfortunately, the measurements are only valid in a small range around the experimental conditions, and with turbulence it is hard to predict where the cutoff is.

A nice simple example, to which I hope to return because it is critical in modeling climate, is vertical eddy diffusivity in the ocean. By way of introduction to this, let’s look at heat transfer by conduction.

If only all heat transfer was as simple as conduction. That’s why it’s always first on the list in heat transfer courses..

If we have a plate of thickness d, and we hold one side at temperature T1 and the other side at temperature T2, the heat conduction per unit area is:

H_z = \frac{k(T_2-T_1)}{d}

where k is a material property called conductivity. We can measure this property and it’s always the same. It might vary with temperature but otherwise if you take a plate of the same material and have widely different temperature differences, widely different thicknesses – the heat conduction always follows the same equation.

Now using these ideas, we can take the actual equation for vertical heat flux via turbulence:

H_z = \rho c_p \overline{w'\theta'}

where w = vertical velocity, θ = potential temperature

And relate that to the heat conduction equation and come up with (aka ‘invent’):

H_z = \rho c_p K . \frac{\partial \theta}{\partial z}

Now we have an equation we can actually use because we can measure how potential temperature changes with depth. The equation has a new “constant”, K. But this one is not really a constant, it’s not really a material property – it’s a property of the turbulent fluid in question. Many people have measured the “implied eddy diffusivity” and come up with a range of values which tells us how heat gets transferred down into the depths of the ocean.

Well, maybe it does. Maybe it doesn’t tell us very much that is useful. Let’s come back to that topic and that “constant” another day.
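
For reference, here is how such an “implied eddy diffusivity” is backed out of measurements – a back-of-envelope sketch with made-up ocean numbers, ignoring sign conventions:

def implied_eddy_diffusivity(heat_flux, dtheta_dz, rho=1025.0, cp=4000.0):
    """K implied by H_z = rho * cp * K * dtheta/dz (magnitudes only).
    rho and cp are approximate values for seawater."""
    return heat_flux / (rho * cp * dtheta_dz)

# Say we measure 5 W/m2 of downward turbulent heat flux through a layer where
# potential temperature changes by 0.01 K per metre of depth
K = implied_eddy_diffusivity(heat_flux=5.0, dtheta_dz=0.01)
print(f"implied eddy diffusivity K = {K:.1e} m2/s")

That gives K of the order of 10⁻⁴ m²/s with these numbers – but measured values vary widely from place to place, which is exactly why it is not really a “constant”.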

The Main Dish – Vertical Heat Transfer via Horizontal Wind

Back to the original question. If you imagine a sheet of paper as big as your desk then that pretty much gives you an idea of the height of the troposphere (lower atmosphere where convection is prominent).

It’s as thin as a sheet of desk-size paper in comparison to the dimensions of the earth. So any large scale motion is horizontal, not vertical. Mean vertical velocities – which don’t include turbulence via strong localized convection – are very low. Mean horizontal velocities can be of the order of 5–10 m/s near the surface of the earth. Mean vertical velocities are of the order of cm/s.

Let’s look at flow over the surface under “neutral conditions”. This means that there is little buoyancy production due to strong surface heating. In this case the energy for turbulence close to the surface comes from the kinetic energy of the mean wind flow – which is horizontal.

There is a surface drag which gets transmitted up through the boundary layer until there is “free flow” at some height. By using dimensional analysis, we can figure out what this velocity profile looks like in the absence of strong convection. It’s logarithmic:

Figure 3 – logarithmic wind speed profile for a typical ocean surface

Lots of measurements confirm this logarithmic profile.

We can then calculate the surface drag – or how momentum is transferred from the atmosphere to the ocean – using the simple formula derived and we come up with a simple expression:

\tau_0 = \rho C_D U_r^2

Where Ur is the velocity at some reference height (usually 10m), and CD is a constant calculated from the ratio of the reference height to the roughness height and the von Karman constant.

Using similar arguments we can come up with heat transfer from the surface. The principles are very similar. What we are actually modeling in the surface drag case is the turbulent vertical flux of horizontal momentum \rho \overline{u'w'} with a simple formula that just has mean horizontal velocity. We have “closed the equations” by some dimensional analysis.

Adding the Richardson number for non-neutral conditions we end up with a temperature difference along with a reference velocity to model the turbulent vertical flux of sensible heat \rho c_p . \overline{w'\theta'}. Similar arguments give latent heat flux L\rho . \overline{w'q'} in a simple form.

Now with a bit more maths..

At the surface the horizontal velocity must be zero. The vertical flux of horizontal momentum creates a drag on the boundary layer wind. The vertical gradient of the mean wind, U, can only depend on height z, density ρ and surface drag.

So the “characteristic wind speed” for dimensional analysis is called the friction velocity, u*, and u* = \sqrt{\frac{\tau_0}{\rho}}

This strange number has the units of velocity: m/s  – ask if you want this explained.

So dimensional analysis suggests that \frac{z}{u*} . \frac{\partial U}{\partial z} should be a constant – “scaled wind shear”. The inverse of that constant is known as the Von Karman constant, k = 0.4.

So a simple re-arrangement and integration gives:

U(z) = \frac{u*}{k} . ln(\frac{z}{z_0})

where z0 is a constant from the integration, which is roughness height – a physical property of the surface where the mean wind reaches zero.
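
Putting illustrative numbers into this (a typical open-ocean roughness height and a friction velocity picked just for the example):

import numpy as np

k = 0.4              # von Karman constant
z0 = 2e-4            # roughness height, m (typical open-ocean value)
u_star = 0.3         # friction velocity, m/s (picked for illustration)

def wind_speed(z):
    """Logarithmic profile U(z) = (u*/k) ln(z/z0)."""
    return (u_star / k) * np.log(z / z0)

for z in (1.0, 2.0, 5.0, 10.0, 20.0):
    print(f"U({z:4.1f} m) = {wind_speed(z):5.2f} m/s")

# The drag coefficient implied for a 10 m reference height, C_D = (k / ln(z_r/z_0))^2
C_D = (k / np.log(10.0 / z0)) ** 2
print(f"C_D at 10 m reference height = {C_D:.2e}")

With these numbers U(10 m) comes out around 8 m/s and C_D around 1.4×10⁻³ – the same kind of values that appear in the bulk transfer formulas.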

The “real form” of the friction velocity is:

u*^2 = \frac{\tau_0}{\rho} = (\overline{u'w'}^2 + \overline{v'w'}^2)^\frac{1}{2},  where these eddy values are at the surface

we can pick a horizontal direction along the line of the mean wind (rotate coordinates) and come up with:

u*^2 = \overline{u'w'}

If we consider a simple constant gradient argument:

\tau = - \rho . \overline{u'w'} = \rho K \frac{\partial \overline{u}}{\partial z}

where the first expression is the “real” equation and the second is the “invented” equation, or “our attempt to close the equation” from dimensional analysis.

Of course, this is showing how momentum is transferred, but the approach is pretty similar, just slightly more involved, for sensible and latent heat.

Conclusion

Turbulence is a hard problem. The atmosphere and ocean are turbulent so calculating anything is difficult. Until a new paradigm in computing comes along, the real equations can’t be numerically solved across the full range of scales – from the small scales where viscous dissipation damps out the kinetic energy of the turbulence, up to the scale of the whole earth, or even of a synoptic scale event. However, numerical analysis has been used a lot to test out ideas that are hard to test in laboratory experiments. And it can give a lot of insight into parts of the problems.

In the meantime, experiments, dimensional analysis and intuition have provided a lot of very useful tools for modeling real climate problems.

In Ensemble Forecasting I wrote a short section on parameterization using the example of latent heat transfer and said:

Are we sure that over Connecticut the parameter CDE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters. In so far as it represents a real physical property it will vary depending on the time of day, seasons and other factors. It might even be, “on average”, wrong. Because “on average” over the set of experiments was an imperfect sample. And “on average” over all climate conditions is a different sample.

Interestingly, a new paper has just shown up in JGR (“accepted for publication” and on their website in the pre-publishing format): Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu.

They carried out detailed measurements over a large reservoir (134 km² and 4-8m deep) in Mississippi for the winter and summer months of 2008. What were they trying to do?

Understanding physical processes that control turbulent fluxes of energy, heat, water vapor, and trace gases over inland water surfaces is critical in quantifying their influences on local, regional, and global climate. Since direct measurements of turbulent fluxes of sensible heat (H) and latent heat (LE) over inland waters with eddy covariance systems are still rare, process-based understanding of water-atmosphere interactions remains very limited..

..Many numerical weather prediction and climate models use the bulk transfer relations to estimate H and LE over water surfaces. Given substantial biases in modeling results against observations, process-based analysis and model validations are essential in improving parameterizations of water-atmosphere exchange processes..

Before we get into their paper, here is a relevant quote on parameterization from a different discipline. This is from Turbulent dispersion in the ocean, Garrett (2006):

Including the effects of processes that are unresolved in models is one of the central problems in oceanography.

In particular, for temperature, salinity, or some other scalar, one seeks to parameterize the eddy flux in terms of quantities that are resolved by the models. This has been much discussed, with determinations of the correct parameterization relying on a combination of deductions from the large-scale models, observations of the eddy fluxes or associated quantities, and the development of an understanding of the processes responsible for the fluxes.

The key remark to make is that it is only through process studies that we can reach an understanding leading to formulae that are valid in changing conditions, rather than just having numerical values which may only be valid in present conditions.

[Emphasis added]

Background

Latent heat transfer is the primary mechanism globally for transferring the solar radiation that is absorbed at the surface up into the atmosphere. Sensible heat is a lot smaller by comparison with latent heat. Both are “convection” in a broad sense – the movement of heat by the bulk movement of air. But one is carrying the “extra heat” of evaporated water. When the evaporated water condenses (usually higher up in the atmosphere) it releases this stored heat.

Let’s take a look at the standard parameterization in use (adopting their notation) for latent heat:

LE = \rho_a L C_E U (q_w - q_a)

LE = latent heat transfer, \rho_a = air density, L = latent heat of vaporization (2.5 \times 10^6 J kg^{-1}), C_E = bulk transfer coefficient for moisture, U = wind speed, q_w & q_a are the respective specific humidities at the water-atmosphere interface and in the over-water atmosphere

The values \rho_a and L are fundamental values. The formula says that the key parameters are:

  • wind speed (horizontal)
  • the difference between the humidity at the water surface (this is the saturated value which varies strongly with temperature) and the humidity in the air above

We would expect the differential of humidity to be important – if the air above is saturated then latent heat transfer will be zero, because there is no way to move any more water vapor into the air above. At the other extreme, if the air above is completely dry then we have maximized the potential for moving water vapor into the atmosphere.

The product of wind speed and humidity difference indicate how much mixing is going on due to air flow. There is a lot of theory and experiment behind the ideas, going back into the 1950s or further, but in the end it is an over-simplification.

That’s what all parameterizations are – over-simplifications.
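
To put rough numbers on the bulk formula above (the transfer coefficient and humidities here are plausible illustrative values, not taken from the paper):

def latent_heat_flux(U, q_w, q_a, C_E=1.2e-3, rho_a=1.2, L=2.5e6):
    """Bulk formula LE = rho_a * L * C_E * U * (q_w - q_a), result in W/m2."""
    return rho_a * L * C_E * U * (q_w - q_a)

# A 5 m/s wind over a 25C water surface (saturation specific humidity ~0.020 kg/kg)
# with drier air above (~0.012 kg/kg)
print(f"LE = {latent_heat_flux(U=5.0, q_w=0.020, q_a=0.012):.0f} W/m2")

About 140 W/m² with these numbers. Everything difficult is hidden inside C_E, which is why direct measurements like those in this paper matter.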

The real formula is much simpler:

 LE = ρaL<w’q’>, where the brackets denote averages,w’q’ = the turbulent moisture flux

w is the vertical (upward) velocity, q is specific humidity, and the primes denote the turbulent (eddy) fluctuations about the mean

Note to commenters, if you write < or > in the comment it gets dropped because WordPress treats it like a html tag. You need to write &lt; or &gt;

The key part of this equation just says “how much moisture is being carried upwards by turbulent flow”. That’s the real value so why don’t we measure that instead?

Here’s a graph of horizontal wind over a short time period from Stull (1988):

From Stull 1988

Figure 1

At any given location the wind varies on every timescale. Pick another location and the results are different. This is the problem of turbulence.

And to get accurate measurements for the paper we are looking at now, they had quite a setup:

Instrument setup on the measurement tower – from Zhang & Liu 2014

Figure 2

Here’s the description of the instrumentation:

An eddy covariance system at a height of 4 m above the water surface consisted of a three-dimensional sonic anemometer (model CSAT3, Campbell Scientific, Inc.) and an open path CO2/H2O infrared gas analyzer (IRGA; Model LI-7500, LI-COR, Inc.).

A datalogger (model CR5000, Campbell Scientific, Inc.) recorded three-dimensional wind velocity components and sonic virtual temperature from the sonic anemometer and densities of carbon dioxide and water vapor from the IRGA at a frequency of 10 Hz.

Other microclimate variables were also measured, including Rn at 1.2 m (model Q-7.1, Radiation and Energy Balance Systems, Campbell Scientific, Inc.), air temperature (Ta) and relative humidity (RH) (model HMP45C, Vaisala, Inc.) at approximately 1.9, 3.0, 4.0, and 5.5 m, wind speeds (U) and wind direction (WD) (model 03001, RM Young, Inc.) at 5.5 m.

An infrared temperature sensor (model IRR-P, Apogee, Inc.) was deployed to measure water skin temperature (Tw).

Vapor pressure (ew) in the water-air interface was equivalent to saturation vapor pressure at Tw [Buck, 1981].

The same datalogger recorded signals from all the above microclimate sensors at 30-min intervals. Six deep cycling marine batteries charged by two solar panels (model SP65, 65 Watt Solar Panel, Campbell Scientific, Inc.) powered all instruments. A monthly visit to the tower was scheduled to provide maintenance and download the 10-Hz time-series data.
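To show what those 10-Hz measurements get used for, here is a minimal sketch of the eddy covariance calculation LE = ρ_a L ⟨w′q′⟩ over one 30-minute block. The time series are synthetic stand-ins, not data from the paper:

```python
import numpy as np

# A minimal sketch of the eddy covariance calculation LE = rho_a * L * <w'q'>
# for one 30-minute block of 10 Hz data, using synthetic time series.
rng = np.random.default_rng(0)
n = 30 * 60 * 10                                      # 30 minutes at 10 Hz
w = rng.normal(0.0, 0.3, n)                           # vertical velocity [m s-1]
q = 0.010 + 2e-4 * w + rng.normal(0.0, 1e-4, n)       # specific humidity, correlated with w [kg kg-1]

rho_a, L = 1.2, 2.5e6
w_prime = w - w.mean()                                # fluctuations about the block mean
q_prime = q - q.mean()
LE = rho_a * L * np.mean(w_prime * q_prime)
print(f"LE ~ {LE:.0f} W/m2")                          # ~50 W/m2 for these synthetic series
```

The flux comes entirely from the covariance of the fluctuations – the block means of w and q drop out.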

I don’t know the price tag but I don’t think the equipment is cheap. So this kind of setup can be used for research, but we can’t put one every 1 km across a country or an ocean and collect continuous data.

That’s why we need parameterizations if we want to get some climatological data. Of course, these need verifying, and that’s what this paper (and many others) is about.

Results

When we look back at the parameterized equation for latent heat, it’s clear that latent heat should vary linearly with the product of wind speed and humidity differential. In the figure below, the top graph is sensible heat, which we won’t focus on; the bottom graph is latent heat. Δe is the humidity differential, expressed as a vapor pressure difference (kPa) rather than a specific humidity difference (g/kg). We see that the correlation between LE and wind speed × humidity differential is very different in summer and winter:

From Zhang & Liu 2014

Figure 3

The scatterplots show the same information:

From Zhang & Liu 2014

Figure 4

The authors also looked at the diurnal cycle – averaging the results by time of day over the measurement period, separated into winter and summer.

Our results also suggest that the influences of U on LE may not be captured simply by the product of U and Δe [humidity differential] on short timescales, especially in summer. This situation became more serious when the ASL (atmospheric surface layer, see note 1) became more unstable, as reflected by our summer cases (i.e., more unstable) versus the winter cases.

They selected one period to review in detail. First the winter results:

From Zhang & Liu 2014

Figure 5

On March 18, Δe was small (i.e., 0 ~ 0.2 kPa) and it experienced little diurnal variations, leading to limited water vapor supply (Fig. 5a).

The ASL (see note 1) during this period was slightly stable (Fig. 5b), which suppressed turbulent exchange of LE. As a result, LE approached zero and even became negative, though strong wind speeds of approximately around 10 m s⁻¹ were present, indicating a strong mechanical turbulent mixing in the ASL.

On March 19, with an increased Δe up to approximately 1.0 kPa, LE closely followed Δe and increased from zero to more than 200 W m⁻². Meanwhile, the ASL experienced a transition from stable to unstable conditions (Fig. 5b), coinciding with an increase in LE.

On March 20, however, the continuous increase of Δe did not lead to an increase in LE. Instead, LE decreased gradually from 200 W m⁻² to about zero, which was closely associated with the steady decrease in U from 10 m s⁻¹ to nearly zero and with the decreased instability.

These results suggest that LE was strongly limited by Δe, instead of U when Δe was low; and LE was jointly regulated by variations in Δe and U once a moderate Δe level was reached and maintained, indicating a nonlinear response of LE to U and Δe induced by ASL stability. The ASL stability largely contributed to variations in LE in winter.

Then the summer results:

From Zhang & Liu 2014

Figure 6

In summer (i.e., July 23 – 25 in Fig. 6), Δe was large with a magnitude of 1.5 ~ 3.0 kPa, providing adequate water vapor supply for evaporation, and had strong diurnal variations (Fig. 6a).

U exhibited diurnal variations from about 0 to 8 m s⁻¹. LE was regulated by both Δe and U, as reflected by the fact that LE variations on the July 24 afternoon did not follow solely either the variations of U or the variations of Δe. When the diurnal variations of Δe and U were small in July 25, LE was also regulated by both U and Δe or largely by U when the change in U was apparent.

Note that during this period, the ASL was strongly unstable in the morning and weakly unstable in the afternoon and evening (Fig. 6b), negatively corresponding to diurnal variations in LE. This result indicates that the ASL stability had minor impacts on diurnal variations in LE during this period.

Another way to see the data is to plot the results in a way that tests how well the parameterized equation holds. If the bulk formula held with a constant transfer coefficient, LE/U plotted against Δe would fall on a straight line, as the caption explains:

From Zhang & Liu 2014

Figure 7

One method to determine the bulk transfer coefficients is to use the mass transfer relations (Eqs. 1, 2) by quantifying the slopes of the linear regression of LE against UΔe. Our results suggest that using this approach to determine the bulk transfer coefficient may cause large bias, given the fact that one UΔe value may correspond to largely different LE values.
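As a concrete illustration of the regression approach described in that quote, here is a minimal sketch (in specific-humidity form rather than the vapor-pressure form the paper uses) that backs a bulk coefficient out of the slope of LE against U·Δq. The data are synthetic, generated with an assumed “true” C_E:

```python
import numpy as np

# Synthetic half-hourly data generated with an assumed "true" C_E, then recovered
# from the slope of LE against U*dq (specific-humidity form of the bulk relation).
rng = np.random.default_rng(1)
rho_a, L, C_E_true = 1.2, 2.5e6, 1.3e-3

U = rng.uniform(1, 10, 500)                  # wind speed [m s-1]
dq = rng.uniform(0.001, 0.010, 500)          # q_w - q_a [kg kg-1]
LE = rho_a * L * C_E_true * U * dq + rng.normal(0, 15, 500)   # plus 15 W/m2 of scatter

slope = np.polyfit(U * dq, LE, 1)[0]
print(f"C_E from the regression slope: {slope / (rho_a * L):.2e}")   # ~1.3e-3
```

With scatter like that in the figure above – where one value of U·Δe corresponds to very different LE values – the fitted slope can be badly biased, which is the authors’ point.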

They conclude:

Our results suggest that these highly nonlinear responses of LE to environmental variables may not be represented in the bulk transfer relations in an appropriate manner, which requires further studies and discussion.

Conclusion

Parameterizations are inevitable. Understanding their limitations is very difficult. A series of studies might indicate that there is a “linear” relationship with some scatter, but that might just be disguising or ignoring a variable that never appears in the parameterization.

As Garrett commented: “..having numerical values which may only be valid in present conditions”. That is, if the mean state of another climate variable shifts, the parameterization will be invalid, or at least less accurate.

Alternatively, given the non-linear nature of climate processes, changes don’t simply “average out”. The mean state of another climate variable may not shift at all, yet its variation with time, or with another variable, may change the real process enough to produce an overall shift in climate.
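A toy example of that point: if two drivers of a nonlinear (here multiplicative) relationship co-vary, the average outcome differs from what the averages of the drivers alone would imply, even though the means themselves don’t change. The numbers below are invented purely for illustration:

```python
import numpy as np

# Invented diurnal cycles of wind and vapor pressure difference that co-vary.
t = np.linspace(0, 2 * np.pi, 48, endpoint=False)
U = 4.0 + 3.0 * np.sin(t)      # wind peaks in the afternoon [m s-1]
de = 1.5 + 1.0 * np.sin(t)     # humidity differential peaks at the same time [kPa]

print(np.mean(U * de))             # 7.5 -- what the flux actually responds to
print(np.mean(U) * np.mean(de))    # 6.0 -- what the daily-mean inputs alone would suggest
```

Same mean wind, same mean Δe – a different implied flux, purely because of how they vary together.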

There are other problems with calculating latent heat transfer. Even accepting the parameterization as the best version of “the truth”, there are large observational gaps in the parameters we need to measure (wind speed and humidity above the ocean), even at the resolution of current climate models. This is one reason why there is a need for reanalysis products.

I found it interesting to see how complicated latent heat variations were over a water surface.

References

Seasonal changes in physical processes controlling evaporation over an inland water, Qianyu Zhang & Heping Liu, JGR (2014)

Turbulent dispersion in the ocean, Chris Garrett, Progress in Oceanography (2006)

Notes

Note 1:  The ASL (atmospheric surface layer) stability is described by the Obukhov stability parameter:

ζ = z/L₀

where z is the height above ground level and L₀ is the Obukhov length.

L₀ = −θ_v u*³ / [k g (w′θ_v′)_s]

where θ_v is the virtual potential temperature (K), u* is the friction velocity measured by the eddy covariance system (m s⁻¹), k is the von Kármán constant (0.4), g is the acceleration due to gravity (9.8 m s⁻²), w is the vertical velocity (m s⁻¹), and (w′θ_v′)_s is the surface flux of virtual potential temperature measured by the eddy covariance system.
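For illustration, a minimal sketch of this calculation with assumed daytime values (not values from the paper):

```python
# Sketch of the stability calculation above, with assumed daytime values over water.
theta_v = 290.0     # virtual potential temperature [K]
u_star = 0.3        # friction velocity [m s-1]
k, g = 0.4, 9.8     # von Karman constant, gravity [m s-2]
w_theta_v_s = 0.05  # surface flux of virtual potential temperature [K m s-1]
z = 4.0             # measurement height [m], matching the eddy covariance setup above

L0 = -theta_v * u_star**3 / (k * g * w_theta_v_s)   # Obukhov length [m]
zeta = z / L0
print(L0, zeta)   # ~ -40 m and ~ -0.1: negative zeta = unstable, positive = stable
```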

The atmosphere cools to space by radiation. Well, without getting into all the details, the surface also cools to space by radiation, but not much of the radiation emitted by the surface escapes directly to space (note 1). Most surface radiation is absorbed by the atmosphere. And of course the surface mostly cools by convection into the troposphere (lower atmosphere).

If there were no radiatively-active gases (aka “GHG”s) in the atmosphere then the atmosphere couldn’t cool to space at all.

Technically, the emissivity of the atmosphere would be zero. Emission is determined by the local temperature of the atmosphere and its emissivity. Wavelength by wavelength, emissivity is equal to absorptivity – another technical term, which says what proportion of incident radiation is absorbed by the atmosphere. If the atmosphere can’t emit, it can’t absorb (note 2).

So as you increase the GHGs in the atmosphere you increase its ability to cool to space. A lot of people work this out at some point during their climate science journey and conclude that they have been duped by climate science all along! It’s irrefutable – more GHGs mean more cooling to space, so more GHGs mean less global warming!

Ok, it’s true. Now the game’s up, I’ll pack up Science of Doom into a crate and start writing about something else. Maybe cognitive dissonance..

Bye everyone!

Halfway through boxing everything up I realized there was a little complication to the simplicity of that paragraph. The atmosphere with more GHGs has a higher emissivity, but also a higher absorptivity.

Let’s draw a little diagram. Here are two “layers” (see note 3) of the atmosphere in two different cases. On the left 400 ppmv CO2, on the right 500 ppmv CO2 (and relative humidity of water vapor set at 50%, surface temperature at 288 K):

Emission from the two atmospheric layers – 400 ppmv CO2 (left) vs 500 ppmv CO2 (right)

Figure 1

It’s clear that the two layers are both emitting more radiation with more CO2. More cooling to space.

For interest, the “total emissivity” of the top layer is 0.190 in the first case and 0.197 in the second case. The layer below has 0.389 and 0.395.

Let’s take a look at all of the numbers and see what is going on. This diagram is a little busier:

The same two layers with all the radiation terms shown – 400 ppmv CO2 (left) vs 500 ppmv CO2 (right)

Figure 2

The key point is that the OLR (outgoing longwave radiation) is lower in the case with more CO2. Yet each layer is emitting more radiation. How can this be?

Take a look at the radiation entering the top layer on the left = 265.1, and add to that the emitted radiation = 23.0 – the total is 288.1. Now subtract the radiation leaving through the top boundary = 257.0 and we get the radiation absorbed in the layer. This is 31.1 W/m².

Compare that with the same calculation with more CO2 – the absorption is 32.2 W/m².

This is the case all the way up through the atmosphere – each layer emits more because its emissivity has increased, but it also absorbs more because its absorptivity has increased by the same amount.

So more cooling to space, but unfortunately more absorption of the radiation below – two competing terms.

So why don’t they cancel out?

Emission of radiation is a result of local temperature and emissivity.

Absorption of radiation is the result of the incident radiation and absorptivity. Incident upwards radiation started lower in the atmosphere where it is hotter. So absorption changes always outweigh emission changes (note 4).
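A toy grey two-layer model (not the 10-layer calculation behind the figures above) shows the two competing terms with assumed, fixed temperatures: raise the emissivity/absorptivity of both layers and each layer emits more, yet the OLR falls, because more of the warmer upwelling radiation from below is intercepted:

```python
# Toy grey two-layer atmosphere with fixed (assumed) temperatures. Each layer absorbs
# a fraction eps of the radiation passing through it and emits eps*sigma*T^4 upward.
sigma = 5.67e-8
T_s, T_1, T_2 = 288.0, 260.0, 230.0   # assumed surface, lower-layer, upper-layer temperatures [K]

def olr_and_emissions(eps):
    surface = sigma * T_s**4
    up_1 = eps * sigma * T_1**4                     # upward emission from the lower layer
    up_2 = eps * sigma * T_2**4                     # upward emission from the upper layer
    olr = surface * (1 - eps)**2 + up_1 * (1 - eps) + up_2
    return olr, up_1, up_2

for eps in (0.60, 0.65):                            # "less CO2" vs "more CO2"
    olr, up_1, up_2 = olr_and_emissions(eps)
    print(f"eps={eps:.2f}: OLR={olr:.0f}, layer 1 emits {up_1:.0f}, layer 2 emits {up_2:.0f} W/m2")
```

Both layers emit more in the second case, yet the OLR drops by about 10 W/m² – the extra absorption of the warmer upwelling radiation outweighs the extra emission.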

Conceptual Problems?

If it’s still not making sense, think about what happens as you reduce the GHGs in the atmosphere. The atmosphere emits less but absorbs even less of the radiation from below. So the outgoing longwave radiation increases. More surface radiation makes it to the top of the atmosphere without being absorbed. So there is less cooling to space from the atmosphere, but more cooling to space from the surface and atmosphere combined.

If you add lagging to a pipe, the temperature of the pipe increases (assuming of course it is “internally” heated with hot water). And yet, the pipe cools to the surrounding room via the lagging! Does that mean more lagging, more cooling? No, it’s just the transfer mechanism for getting the heat out.

That was just an analogy. Analogies don’t prove anything. If well chosen, they can be useful in illustrating problems. End of analogy disclaimer.

If you want to understand more about how radiation travels through the atmosphere and how GHG changes affect this journey, take a look at the series Visualizing Atmospheric Radiation.

 

Notes

Note 1: For more on the details see

Note 2: A very basic point – absolutely essential for understanding anything at all about climate science – is that the absorptivity of the atmosphere can be (and is) totally different from its emissivity when you are considering different wavelengths. The atmosphere is quite transparent to solar radiation, but quite opaque to terrestrial radiation – because they are at different wavelengths. 99% of solar radiation is at wavelengths less than 4 μm, and 99% of terrestrial radiation is at wavelengths greater than 4 μm. That’s because the sun’s surface is around 6000 K while the earth’s surface is around 290 K. So the atmosphere has low absorptivity for solar radiation (<4 μm) but high absorptivity – and therefore high emissivity – for terrestrial radiation (>4 μm).
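A rough numerical check of the 4 μm split, treating the sun and the surface as blackbodies at 6000 K and 290 K (a simplification):

```python
import numpy as np

# Fraction of blackbody emission at wavelengths shorter than a given cutoff,
# by direct numerical integration of the Planck function.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def fraction_below(cutoff_um, T):
    lam = np.linspace(0.05, 500, 200_000) * 1e-6        # wavelength grid [m], uniform spacing
    x = np.minimum(h * c / (lam * kB * T), 700.0)       # cap the exponent; that tail emits ~nothing
    B = (2 * np.pi * h * c**2 / lam**5) / np.expm1(x)   # spectral exitance [W m-2 per m]
    return B[lam < cutoff_um * 1e-6].sum() / B.sum()    # ratio of sums ~ ratio of integrals

print(fraction_below(4, 6000))   # ~0.99 -- almost all "solar" emission is below 4 μm
print(fraction_below(4, 290))    # <0.01 -- almost no "terrestrial" emission is below 4 μm
```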

Note 3: Any numerical calculation has to create some kind of grid. This is a very coarse grid, with 10 layers of roughly equal pressure in the atmosphere from the surface to 200 mbar. The grid assumes there is just one temperature for each layer. Of course the temperature decreases as you go up. We could divide the atmosphere into 30 layers instead. We would get more accurate results. We would find the same effect.

Note 4: The equations for radiative transfer are found in Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. The equations prove this effect.

In A Challenge for Bryan I put up a simple heat transfer problem and asked for the equations. Bryan elected not to provide these equations. So I provide the answer, but also attempt some enlightenment for people who don’t think the answer can be correct.

As DeWitt Payne noted, a post with a similar problem posted on Wattsupwiththat managed to gather some (unintentionally) hilarious comments.

Here’s the problem again:

Case 1

Spherical body, A, of radius r_a, with an emissivity ε_a = 1. The sphere is in the vacuum of space.

It is internally heated by a mystery power source (let’s say nuclear, but it doesn’t matter), with power input = P.

The sphere radiates into deep space, let’s say the temperature of deep space = 0K to make the maths simpler.

1. What is the equation for the equilibrium surface temperature of the sphere, Ta?

Case 2

The condition of case A, but now body A is surrounded by a slightly larger spherical shell, B, which of course is itself now surrounded by deep space at 0K.

B has a radius r_b, with an emissivity ε_b = 1. This shell is highly conductive and very thin.

2a. What is the equation for the new equilibrium surface temperature, Ta’?

2b. What is the equation for the equilibrium temperature, Tb, of shell B?

 

Notes:

The reason for the “slightly larger shell” is to avoid “complex” view factor issues. Of course, I’m happy to relax the requirement for “slightly larger” and let Bryan provide the more general answer.

The reason for the “highly conductive” and “thin” outer shell, B, is to avoid any temperature difference between the inside and the outside surfaces of the shell. That is, we can assume the outside surface is at the same temperature as the inside surface – both at temperature, Tb.

This kind of problem is a staple of introductory heat transfer. This is a “find the equilibrium” problem.

How do we solve these kinds of problems? It’s pretty easy once you understand the tools.

The first tool is the first law of thermodynamics. Steady state means temperatures have stabilized and so energy in = energy out. We draw a “boundary” around each body and apply the “boundary condition” of the first law.

The second tool is the set of equations that govern the movement of energy. These are the equations for conduction, convection and radiation. In this case we just have radiation to consider.

For people who see the solution, shake their heads and say, this can’t be, stay on to the end and I will try and shed some light on possible conceptual problems. Of course, if it’s wrong, you should easily be able to provide the correct equations – or even if you can’t write equations you should be able to explain the flaw in the formulation of the equation.

In the original article I put some numbers down – “For anyone who wants to visualize some numbers: r_a = 1 m, P = 1000 W, r_b = 1.01 m“. I will use these to calculate an answer from the equations. I realize many readers aren’t comfortable with equations and so the answers will help illuminate the meaning of the equations.

I go through the equations in tedious detail, again for people who would like to follow the maths but don’t find maths easy.

Case 1

Energy in, E_in = energy out, E_out – in watts (joules per second).

E_in = P

E_out = emission of thermal radiation per unit area × area

The first part is given by the Stefan-Boltzmann equation (σT_a⁴, where σ = 5.67×10⁻⁸), and the second part by the equation for the surface area of a sphere (4πr_a²)

E_out = 4πr_a² × σT_a⁴ ….[eqn 1]

Therefore, P = 4πr_a²σT_a⁴ ….[eqn 2]

We have to rearrange the equation to see how T_a changes with the other factors:

T_a = [P / (4πr_a²σ)]^1/4 ….[eqn 3]

If you aren’t comfortable with maths this might seem a little daunting. Let’s put the numbers in:

T_a = 194 K (−80ºC)

Now we haven’t said anything about how long it takes to reach this temperature. We don’t have enough information for that. That’s the nice thing about steady state calculations, they are easier than dynamic calculations. We will look at that at the end.

Probably everyone is happy with this equation. Energy is conserved. No surprises and nothing controversial.
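For anyone who wants to check the arithmetic, here is eqn 3 with the numbers given:

```python
import math

# Case 1, eqn 3: T_a = [P / (4*pi*r_a^2*sigma)]^(1/4), with the numbers given above
sigma = 5.67e-8              # Stefan-Boltzmann constant [W m-2 K-4]
P, r_a = 1000.0, 1.0         # internal power [W], sphere radius [m]

T_a = (P / (4 * math.pi * r_a**2 * sigma)) ** 0.25
print(f"T_a = {T_a:.0f} K")  # ~194 K
```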

Now we will apply the exact same approach to the second case.

Case 2

First we consider “body A”. Given that it is enclosed by another “body” – the shell B – we have to consider any energy being transferred by radiation from B to A. If it turns out to be zero, of course it won’t affect the temperature of body A.

E_in(a) = P + E_b→a ….[eqn 4], where E_b→a is a value we don’t yet know. It is the radiation from B that is absorbed by A.

E_out(a) = 4πr_a² × σT_a⁴ ….[eqn 5] – this is the same as in case 1. Emission of radiation from a body depends only on its temperature (and on emissivity and area, but these aren’t changing between the two cases)

Next we will look at shell B and come back to the last term in eqn 4.

Now the shell outer surface:

Radiates out to space

We set space at absolute zero so no radiation is received by the outer surface

Shell inner surface:

Radiates in to A (in fact almost all of the radiation emitted from the inner surface is absorbed by A, and for now we will treat it as all) – this was the term E_b→a

Absorbs all of the radiation emitted by A, this is E_out(a)

And we made the shell thin and highly conductive so there is no temperature difference between the two surfaces. Let’s collect the heat transfer terms for shell B under steady state:

E_in(b) = E_out(a) + 0 ….[eqn 6] – energy in is all from the sphere A, and nothing from outside

            = 4πr_a² × σT_a⁴ ….[eqn 6a] – we just took the value from eqn 5

E_out(b) = 4πr_b² × σT_b⁴ + 4πr_b² × σT_b⁴ ….[eqn 7] – energy out is the emitted radiation from the inner surface + emitted radiation from the outer surface

            = 2 × 4πr_b² × σT_b⁴ ….[eqn 7a]

And we know that for shell B, E_in = E_out, so we equate 6a and 7a:

4πr_a² × σT_a⁴ = 2 × 4πr_b² × σT_b⁴ ….[eqn 8]

and now we can cancel a lot of the common terms:

r_a² × T_a⁴ = 2 × r_b² × T_b⁴ ….[eqn 8a]

and re-arrange to get T_a in terms of T_b:

T_a⁴ = 2r_b²/r_a² × T_b⁴ ….[eqn 8b]

T_a = [2r_b²/r_a²]^1/4 × T_b ….[eqn 8b]

or we can write it the other way round:

T_b = [r_a²/2r_b²]^1/4 × T_a ….[eqn 8c]

Using the numbers given, T_a = 1.2 T_b. So the sphere is 20% warmer than the shell (actually 2 to the power 1/4).

We need to use E_in = E_out for the sphere A to be able to get the full solution. We wrote down: E_in(a) = P + E_b→a ….[eqn 4]. Now we know “E_b→a” – this is one of the terms in eqn 7.

So:

E_in(a) = P + 4πr_b² × σT_b⁴ ….[eqn 9]

and E_in(a) = E_out(a), so:

P + 4πr_b² × σT_b⁴ = 4πr_a² × σT_a⁴ ….[eqn 9]

we can substitute the equation for T_b:

P + (4πr_a²/2) × σT_a⁴ = 4πr_a² × σT_a⁴ ….[eqn 9a]

the 2nd term on the left and the right hand side can be combined:

P = 2πr_a² × σT_a⁴ ….[eqn 9a]

And so, voila:

T′_a = [P / (2πr_a²σ)]^1/4 ….[eqn 10] – I added a dash to T_a so we can compare it with the original value before the shell arrived.

T′_a = 2^1/4 T_a ….[eqn 11] – that is, the temperature of the sphere A is about 20% warmer in case 2 compared with case 1.

Using the numbers, T′_a = 230 K (−43ºC), and T_b = 193 K (−81ºC).
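Again, checking the numbers – eqn 10 and eqn 8c with the given values, plus a check that energy balances at both the sphere and the shell:

```python
import math

# Case 2 with the numbers given: eqn 10 for the sphere, eqn 8c for the shell,
# then a check that energy balances for both bodies.
sigma = 5.67e-8
P, r_a, r_b = 1000.0, 1.0, 1.01

T_a = (P / (2 * math.pi * r_a**2 * sigma)) ** 0.25        # eqn 10
T_b = (r_a**2 / (2 * r_b**2)) ** 0.25 * T_a               # eqn 8c
print(f"T'_a = {T_a:.0f} K, T_b = {T_b:.0f} K")           # ~230 K and ~193 K

emitted_by_A = 4 * math.pi * r_a**2 * sigma * T_a**4      # ~2000 W
emitted_by_B = 4 * math.pi * r_b**2 * sigma * T_b**4      # ~1000 W from each face of the shell
print(P + emitted_by_B - emitted_by_A)                    # ~0: sphere A is in balance
print(emitted_by_A - 2 * emitted_by_B)                    # ~0: shell B is in balance
```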

Explaining the Results

In case 2, the inner sphere, A, has its temperature increase by 36K even though the same energy production takes place inside. Obviously, this can’t be right because we have created energy??.. let’s come back to that shortly.

Notice something very important – Tb in case 2 is almost identical to Ta in case 1. The difference is actually only due to the slight difference in surface area. Why?

The system has an energy production, P, in both cases.

  • In case 1, the sphere A is the boundary transferring energy to space and so its equilibrium temperature must be determined by P
  • In case 2, the shell B is the boundary transferring energy to space and so its equilibrium temperature must be determined by P

Now let’s confirm the mystery unphysical totally fake invented energy.

Let’s compare the flux emitted from A in case 1 and case 2. I’ll call it R.

  • R(case 1) = 80 W/m²
  • R(case 2) = 159 W/m²

This is obviously rubbish. The same energy source inside the sphere and we doubled the sphere’s energy production!!! Get this idiot to take down this post, he has no idea what he is writing..

Yet if we check the energy balance we find that 80 W/m² is being “created” by our power source, and the “extra mystery” energy of 79 W/m² is coming from our outer shell. In any given second no energy is created.

The Mystery Invented Energy – Revealed

When we snapped the outer shell over the sphere we made it harder for heat to get out of the system. Energy in = energy out, in steady state. When we are not in steady state: energy in – energy out = energy retained. Energy retained is internal energy which is manifested as temperature.

We made it hard for heat to get out, which accumulated energy, which increased temperature.. until finally the inner sphere A was hot enough for all of the internally generated energy, P, to get out of the system.

Let’s add some information about the system: the heat capacity of the sphere = 1000 J/K; the heat capacity of the shell = 100 J/K. It doesn’t much matter what they are, it’s just to calculate the transients. We snap the shell – originally at 0K – around the sphere at time t=100 seconds and see what happens.

The top graph shows temperature, the bottom graph shows change in energy of the two objects and how much energy is leaving the system:

Temperatures of sphere A and shell B, and the energy flows, after the shell is added at t = 100 s

At 100 seconds we see that instead of our steady-state 1000 W leaving the system, 0 W leaves the system. This is the important part of the mystery energy puzzle.

We put a 0 K shell around the sphere. This absorbs all the energy from the sphere. At time t = 100 s the shell is still at 0 K so it emits 0 W/m². It heats up pretty quickly, but remember that emission of radiation is not linear with temperature, so you don’t see a linear relationship between the temperature of shell B and the energy leaving to space. For example, at 100 K the outward emission is 6 W/m², at 150 K it is 29 W/m², and at its final temperature of 193 K it is 79 W/m² (= 1000 W in total).

As the shell heats up it emits more and more radiation inwards, heating up the sphere A.

The mystery energy has been revealed. The addition of a radiation barrier stopped energy leaving, which stored heat. The way equilibrium is finally restored is due to the temperature increase of the sphere.
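Here is a minimal sketch of that transient, using the stated heat capacities, explicit time-stepping with an assumed step of 0.5 s, and the same simplification as above (all of the shell’s inward emission is absorbed by the sphere):

```python
import math

# Explicit time-stepping sketch with the stated heat capacities (1000 J/K and 100 J/K).
sigma = 5.67e-8
P, r_a, r_b = 1000.0, 1.0, 1.01
A_a, A_b = 4 * math.pi * r_a**2, 4 * math.pi * r_b**2
C_a, C_b = 1000.0, 100.0          # heat capacities of sphere and shell [J/K]
dt = 0.5                          # assumed time step [s]

T_a = (P / (A_a * sigma)) ** 0.25  # start at the case-1 steady state, ~194 K
T_b = None                         # no shell yet

for step in range(int(2000 / dt)):
    t = step * dt
    if T_b is None and t >= 100.0:
        T_b = 0.0                  # snap the 0 K shell around the sphere at t = 100 s
    if T_b is None:
        to_space = A_a * sigma * T_a**4
        T_a += dt * (P - to_space) / C_a
    else:
        from_A = A_a * sigma * T_a**4
        from_B = A_b * sigma * T_b**4          # from each face of the shell
        to_space = from_B
        T_a += dt * (P + from_B - from_A) / C_a
        T_b += dt * (from_A - 2 * from_B) / C_b

print(T_a, T_b, to_space)   # -> ~230 K, ~193 K, ~1000 W leaving the system again
```

It reproduces the behaviour in the graph: emission to space collapses to zero when the cold shell is added, then recovers to 1000 W as the shell warms, by which time the sphere has settled at its new, higher temperature.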

Of course, for some strange reason an army of people thinks this is totally false. Well, produce your equations.. (this never happens)

All we have done here is used conservation of energy and the Stefan Boltzmann law of emission of thermal radiation.
