There are many classes of systems, but in the climate blogosphere two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead, so what chance have we got of forecasting climate 100 years from now?

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this question, and with much conviction. A reminder for new readers: on this blog opinions are not so interesting, although occasionally entertaining. So instead, try to explain what evidence there is for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.

Pendulums

The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, the number of variables involved is only two:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:

Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

– we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I were near a very cold black surface held at 170K (-103ºC) and measured the radiation emitted, it would be 47 W/m². If we then double the temperature of this surface to 340K (67ºC), what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation. But it’s not correct.

The right answer is 758 W/m², which is 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this. Double a quantity and you don’t get double the result.
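The T⁴ relationship is easy to check for yourself; here is a quick sketch (assuming a perfect blackbody, emissivity 1):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def emitted(T):
    """Blackbody emission (emissivity 1) at absolute temperature T, in W/m^2."""
    return SIGMA * T ** 4

# Doubling the absolute temperature multiplies the emission by 2^4 = 16
print(round(emitted(170)), round(emitted(340)))  # 47 758
```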

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed most of the resistance comes from the wind so we will focus on that. Typically the wind resistance increases as the square of the speed. So if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no head wind power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off.. Now with 150W you get to only 14 km/hr, it takes almost 500W to get to your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday you go for a ride and the wind speed is the same so you go in the opposite direction and take the train home. Now with only 6W you get to go 25 km/hr, to get to 50km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared. Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
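All three days can be reproduced with the relationship in note 4. A small sketch – the constant k is back-calculated from the Sunday figure of 150W at 25 km/hr, which was itself just a guess in the text:

```python
# P = k * v * (v - w)^2, with v = road speed and w = wind speed along the
# direction of travel (tailwind positive, headwind negative), both in km/hr.
k = 150 / 25 ** 3  # back-calculated: 150 W at 25 km/hr in still air

def power(v, w=0.0):
    """Power in watts needed to hold road speed v against wind w."""
    return k * v * (v - w) ** 2

print(round(power(50)))       # Sunday, no wind: 1200 W to double speed
print(round(power(25, -20)))  # Monday, 20 km/hr headwind: ~486 W for 25 km/hr
print(round(power(50, -20)))  # Monday: ~2352 W for 50 km/hr
print(round(power(25, 20)))   # Tuesday, tailwind: ~6 W for 25 km/hr
print(round(power(50, 20)))   # Tuesday: ~432 W for 50 km/hr
```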

The real problem with nonlinearity isn’t the problem of keeping track of these kinds of numbers. You get used to the fact that real science – real world relationships – has these kinds of factors and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday. It’s almost 5x, so you might think the relationship is something like the square of speed. On Tuesday it’s about 70 times, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering.. you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “nonlinear” equations, aka the real world, which turns out to be mostly insoluble. So nonlinear isn’t something special, it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed and now time to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum to create a chaotic system (note 2).
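For readers who want to experiment, here is a minimal pure-Python integrator for one assumed form of the equation in note 1 – θ″ + γ′θ′ + (α + β cos t) sin θ = 0 for a vertically driven pivot – with illustrative parameter values, not necessarily those behind the figures below:

```python
import math

# Forced damped pendulum in non-dimensional time (see note 1).
# Assumed form: theta'' = -gamma*theta' - (alpha + beta*cos(t))*sin(theta).
# The parameter values here are illustrative only.
GAMMA, ALPHA, BETA = 0.1, 1.0, 0.7

def accel(t, theta, omega):
    return -GAMMA * omega - (ALPHA + BETA * math.cos(t)) * math.sin(theta)

def rk4_step(t, theta, omega, h):
    """One classical Runge-Kutta step for the pair (theta, omega)."""
    k1t, k1w = omega, accel(t, theta, omega)
    k2t, k2w = omega + h/2*k1w, accel(t + h/2, theta + h/2*k1t, omega + h/2*k1w)
    k3t, k3w = omega + h/2*k2w, accel(t + h/2, theta + h/2*k2t, omega + h/2*k2w)
    k4t, k4w = omega + h*k3w,   accel(t + h,   theta + h*k3t,   omega + h*k3w)
    theta += h/6 * (k1t + 2*k2t + 2*k3t + k4t)
    omega += h/6 * (k1w + 2*k2w + 2*k3w + k4w)
    return theta, omega

def run(omega0, t_end=200.0, h=0.01):
    """Integrate from theta = 0 with starting angular speed omega0."""
    t, theta, omega = 0.0, 0.0, omega0
    while t < t_end:
        theta, omega = rk4_step(t, theta, omega, h)
        t += h
    return theta

# Two runs whose initial angular speeds differ by 0.1%, as in the figures
a, b = run(0.1), run(0.1001)
print(a, b)
```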

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions, only different from each other by 0.1%:

Forced damped harmonic pendulum, b=0.7: Start angular speed 0.1; 0.1001

Figure 1

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped between 0-2π (but then we get a discontinuity on a graph that doesn’t match the real world). We can map the graph onto a cylinder plot but it’s a mess of reds and blues.

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:

Histograms for 10,000 seconds

Figure 2

We can see they are similar but not identical (note the different scales on the y-axis).

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Histogram for 100,000 seconds

Figure 3

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over tens of millions of years we might have trouble predicting exactly the shape of the earth’s orbit – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury falling into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, non-Chaotic, Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is qualitatively different of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future condition might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will lie within a given range for x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” are dependent on the initial conditions. And so he also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where we can think of the precise starting conditions each time we move into a new season moving us into a different set of long term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.

Conclusion?

This is just a brief look at some of the basic ideas.

Other Articles in the Series

Part Two – Lorenz 1963

References

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability?, Edward Lorenz, Tellus (1990) – free paper

Notes

Note 1 – The equation is easiest to “manage” after the original parameters are transformed so that ωt → t. That is, the period of external driving is T0 = 2π on the transformed time base.

Then:

Pendulum-forced-equation

where θ = angle, γ’ = γ/ω, α = g/Lω², β = h0/L;

these parameters based on γ = viscous drag coefficient, ω = angular speed of driving, g = acceleration due to gravity = 9.8m/s², L = length of pendulum, h0=amplitude of driving of pivot point

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables
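A standard example (not from the article): the logistic map, a discrete system with a single variable, is chaotic at r = 4.

```python
# Logistic map x -> r*x*(1-x): a one-variable discrete system,
# chaotic at r = 4 (x stays in [0, 1] for starting values in [0, 1]).
def iterate(x0, r=4.0, n=50):
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

a, b = iterate(0.4000), iterate(0.4001)  # starting values 0.025% apart
print(a, b)  # after 50 steps the two orbits bear no resemblance to each other
```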

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.

Note 4 – Force = k(v-w)² where k is a constant, v = velocity, w = wind speed. Work done = Force x distance moved, so Power, P = Force x velocity.

Therefore:

P = kv(v-w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.
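A numerical sketch of that inversion, using bisection – the value of k is back-calculated from the 150W at 25 km/hr example in the article, and the wind speeds are just illustrations:

```python
k = 150 / 25 ** 3  # W per (km/hr)^3, from the article's Sunday example

def speed_for_power(P, w=0.0):
    """Solve k*v*(v-w)^2 = P for v by bisection.

    On v > max(w, 0) the left side increases with v, so bisection works.
    """
    lo, hi = max(w, 0.0), 1000.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if k * mid * (mid - w) ** 2 < P:
            lo = mid
        else:
            hi = mid
    return lo

print(round(speed_for_power(1200)))      # 50 km/hr in still air
print(round(speed_for_power(150, -20)))  # ~14 km/hr into a 20 km/hr headwind
```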

In The “Greenhouse” Effect Explained in Simple Terms I list, and briefly explain, the main items that create the “greenhouse” effect. I also explain why more CO2 (and other GHGs) will, all other things remaining equal, increase the surface temperature. I recommend that article as the place to go for the straightforward explanation of the “greenhouse” effect. It also highlights that the radiative balance higher up in the troposphere is the most important component of the “greenhouse” effect.

However, someone recently commented on my first Kramm & Dlugi article and said I was “plainly wrong”. Kramm & Dlugi were in complete agreement with Gerlich and Tscheuschner because they both claim the “purported greenhouse effect simply doesn’t exist in the real world”.

If it’s just about flying a flag or wearing a football jersey then I couldn’t agree more. However, science does rely on tedious detail and “facts” rather than football jerseys. As I pointed out in New Theory Proves AGW Wrong! two contradictory theories don’t add up to two theories making the same case..

In the case of the first Kramm & Dlugi article I highlighted one point only. It wasn’t their main point. It wasn’t their minor point. They weren’t even making a point of it at all.

Many people believe the “greenhouse” effect violates the second law of thermodynamics; these people are herein called “the illuminati”.

Kramm & Dlugi’s equation demonstrates that the illuminati are wrong. I thought this was worth pointing out.

The “illuminati” don’t understand entropy, can’t provide an equation for entropy, or even demonstrate the flaw in the simplest example of why the greenhouse effect is not in violation of the second law of thermodynamics. Therefore, it is necessary to highlight the (published) disagreement between celebrated champions of the illuminati – even if their demonstration of the disagreement was unintentional.

Let’s take a look.

Here is one of the most popular G&T graphics in the blogosphere:

From Gerlich & Tscheuschner

Figure 1

It’s difficult to know how to criticize an imaginary diagram. We could, for example, point out that it is imaginary. But that would be picky.

We could say that no one draws this diagram in atmospheric physics. That should be sufficient. But as so many of the illuminati have learnt their application of the second law of thermodynamics to the atmosphere from this fictitious diagram I feel the need to press forward a little.

Here is an extract from a widely-used undergraduate textbook on heat transfer, with a little annotation (red & blue):

From “Fundamentals of Heat and Mass Transfer” by Incropera & DeWitt (2007)

Figure 2

This is the actual textbook, before what I would like to call the Gerlich manoeuvre has been applied. We can see in the diagram and in the text that radiation travels both ways and there is a net transfer, which is from the hotter to the colder. The term “net” is not really capable of being confused. It means one minus the other, “x-y”. Not “x”. (For extracts from six heat transfer textbooks and their equations read Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics).

Now let’s apply the Gerlich manoeuvre (compare fig. 2):

Not from “Fundamentals of Heat and Mass Transfer”, or from any textbook ever

Figure 3

So hopefully that’s clear. Proof by parody. This is “now” a perpetual motion machine and so heat transfer textbooks are wrong. All of them. Somehow.

Just for comparison, we can review the globally annually averaged values of energy transfer in the atmosphere, including radiation, from Kiehl & Trenberth (I use the 1997 version because it is so familiar even though values were updated more recently):

From Kiehl & Trenberth (1997)

Figure 4

It should be clear that the radiation from the hotter surface is higher than the radiation from the colder atmosphere. If anyone wants this explained, please ask.

I could apply the Gerlich manoeuvre to this diagram but they’ve already done that in their paper (as shown above in figure 1).

So lastly, we return to Kramm & Dlugi, and their “not even tiny point”, which nevertheless makes a useful point. They don’t provide a diagram, they provide an equation for energy balance at the surface – and I highlight each term in the equation to assist the less mathematically inclined:

Kramm-Dlugi-2011-eqn-highlight

 

Figure 5

The equation says that the sum of all fluxes at one point on the surface equals zero. This is an application of the famous first law of thermodynamics: energy cannot be created or destroyed.

The red term – absorbed atmospheric radiation – is the radiation from the colder atmosphere absorbed by the hotter surface. This is also known as “DLR” or “downward longwave radiation”, and as “back-radiation”.

Now, let’s assume that the atmospheric radiation increases in intensity over a small period. What happens?

The only way this equation can continue to be true is for one or more of the last 4 terms to increase.

  • The emitted surface radiation – can only increase if the surface temperature increases
  • The latent heat transfer – can only increase if there is an increase in wind speed or in the humidity differential between the surface and the atmosphere just above
  • The sensible heat transfer – can only increase if there is an increase in wind speed or in the temperature differential between the surface and the atmosphere just above
  • The heat transfer into the ground – can only increase if the surface temperature increases or the temperature below ground spontaneously cools

So, when atmospheric radiation increases the surface temperature must increase (or amazingly the humidity differential spontaneously increases to balance, but without a surface temperature change). According to G&T and the illuminati this surface temperature increase is impossible. According to Kramm & Dlugi, this is inevitable.
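As a toy illustration of the argument (not Kramm & Dlugi’s calculation): treat the surface as a blackbody, hold the other fluxes fixed, and use the global annual means from the Kiehl & Trenberth diagram in figure 4:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temperature(solar_abs, dlr, latent, sensible, ground=0.0):
    """Surface temperature implied by the balance
    sigma*T^4 = solar_abs + dlr - latent - sensible - ground,
    i.e. emitted surface radiation closes the budget (blackbody surface)."""
    emitted = solar_abs + dlr - latent - sensible - ground
    return (emitted / SIGMA) ** 0.25

# Kiehl & Trenberth (1997) global annual means, W/m^2: 168 solar absorbed
# at the surface, 324 back-radiation, 78 latent heat, 24 sensible heat
t1 = surface_temperature(168, 324, 78, 24)
# Increase atmospheric radiation by 10 W/m^2 with everything else held fixed:
t2 = surface_temperature(168, 334, 78, 24)
print(round(t1), round(t2 - t1, 1))  # ~288 K, and the surface must warm
```

Of course in reality the latent, sensible and ground fluxes would also respond, which is the point of the bullet list above – but the direction of the surface temperature change is the same.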

I would love it for Gerlich or Tscheuschner to show up and confirm (or deny?):

  • yes the atmosphere does emit thermal radiation
  • yes the surface of the earth does absorb atmospheric thermal radiation
  • yes this energy does not disappear (1st law of thermodynamics)
  • yes this energy must increase the temperature of the earth’s surface above what it would be if this radiation did not exist (1st law of thermodynamics)

Or even, which one of the above is wrong. That would be outstanding.

Of course, I know they won’t do that – even though I’m certain they believe all of the above points. (Likewise, Kramm & Dlugi won’t answer the question I have posed of them).

Well, we all know why.

Hopefully, the illuminati can contact Kramm & Dlugi and explain to them where they went wrong. I have my doubts that any of the illuminati have grasped the first law of thermodynamics or the equation for temperature change and heat capacity, but who could say.

In Ensemble Forecasting we had a look at the principles behind “ensembles of initial conditions” and “ensembles of parameters” in forecasting weather. Climate models are a little different from weather forecasting models but use the same physics and the same framework.

A lot of people, including me, have questions about “tuning” climate models. While looking for what the latest IPCC report (AR5) had to say about ensembles of climate models I found a reference to Tuning the climate of a global model by Mauritsen et al (2012). Unless you work in the field of climate modeling you don’t know the magic behind the scenes. This free paper (note 1) gives some important insights and is very readable:

The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem. As models gradually improved to a point when flux-corrections were no longer necessary, this practice is now less accepted in the climate modeling community.

Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers while others adjust the ocean surface albedo or scale the natural aerosol climatology to achieve radiation balance. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux-corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.
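As a cartoon of what “tuning to the TOA radiation balance” means – a toy zero-dimensional model, nothing like an actual GCM: adjust the one free parameter, here a planetary albedo, until absorbed solar matches outgoing longwave for an assumed 255 K effective emission temperature:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 340.0     # global mean incoming solar, W/m^2

def toa_imbalance(albedo, t_eff=255.0):
    """Absorbed solar minus outgoing longwave (W/m^2) for this toy model."""
    return SOLAR * (1.0 - albedo) - SIGMA * t_eff ** 4

def tune_albedo():
    # The imbalance falls as albedo rises, so bisect on albedo in [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if toa_imbalance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return lo

albedo = tune_albedo()
print(round(albedo, 3))  # ~0.295, close to the observed planetary albedo
```

The tuned value looks reasonable, but nothing here says the parameter is *right* – a compensating error elsewhere (say, a wrong t_eff) would be absorbed into it, which is exactly the concern the paper raises.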

A basic requirement of a climate model is reproducing the temperature change from pre-industrial times (mid 1800s) until today. So the focus is on temperature change, or in common terminology, anomalies.

It was interesting to see that if we plot the “actual modeled temperatures” from 1850 to present the picture doesn’t look so good (the grey curves are models from the coupled model inter-comparison projects: CMIP3 and CMIP5):

From Mauritsen et al 2012

Figure 1

The authors state:

There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K..

Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

They point out that adjusting parameters might just be offsetting one error against another..

In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

[Emphasis added]. And they give a bit more insight into the tuning process:

A few model properties can be tuned with a reasonable chain of understanding from model parameter to the impact on model representation, among them the global mean temperature. It is comprehendible that increasing the models low-level cloudiness, by for instance reducing the precipitation efficiency, will cause more reflection of the incoming sunlight, and thereby ultimately reduce the model’s surface temperature.

Likewise, we can slow down the Northern Hemisphere mid-latitude tropospheric jets by increasing orographic drag, and we can control the amount of sea ice by tinkering with the uncertain geometric factors of ice growth and melt. In a typical sequence, first we would try to correct Northern Hemisphere tropospheric wind and surface pressure biases by adjusting parameters related to the parameterized orographic gravity wave drag. Then, we tune the global mean temperature as described in Sections 2.1 and 2.3, and, after some time when the coupled model climate has come close to equilibrium, we will tune the Arctic sea ice volume (Section 2.4).

In many cases, however, we do not know how to tune a certain aspect of a model that we care about representing with fidelity, for example tropical variability, the Atlantic meridional overturning circulation strength, or sea surface temperature (SST) biases in specific regions. In these cases we would rather monitor these aspects and make decisions on the basis of a weak understanding of the relation between model formulation and model behavior.

Here we see how CMIP3 & 5 models “drift” – that is, over a long period of simulation time how the surface temperature varies with TOA flux imbalance (and also we see the cold bias of the models):

From Mauritsen et al 2012

Figure 2

If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.

[Emphasis added].

From that graph they discuss the implied sensitivity to radiative forcing of each model (the slope of each model and how it compares with the blue and red “sensitivity” curves).

We get to see some of the parameters that are played around with (a-h in the figure):

From Mauritsen et al 2012

Figure 3

And how changing some of these parameters affects (over a short run) “headline” parameters like TOA imbalance and cloud cover:

From Mauritsen et al 2012

Figure 4 – Click to Enlarge

There’s also quite a bit in the paper about tuning the Arctic sea ice that will be of interest for Arctic sea ice enthusiasts.

In some of the final steps we get a great insight into how the whole machine goes through its final tune up:

..After these changes were introduced, the first parameter change was a reduction in two non-dimensional parameters controlling the strength of orographic wave drag from 0.7 to 0.5.

This greatly reduced the low zonal mean wind- and sea-level pressure biases in the Northern Hemisphere in atmosphere-only simulations, and further had a positive impact on the global to Arctic temperature gradient and made the distribution of Arctic sea-ice far more realistic when run in coupled mode.

In a second step the conversion rate of cloud water to rain in convective clouds was doubled from 1×10⁻⁴ s⁻¹ to 2×10⁻⁴ s⁻¹ in order to raise the OLR to be closer to the CERES satellite estimates.

At this point it was clear that the new coupled model was too warm compared to our target pre-industrial temperature. Different measures using the convection entrainment rates, convection overshooting fraction and the cloud homogeneity factors were tested to reduce the global mean temperature.

In the end, it was decided to use primarily an increased homogeneity factor for liquid clouds from 0.70 to 0.77 combined with a slight reduction of the convective overshooting fraction from 0.22 to 0.21, thereby making low-level clouds more reflective to reduce the surface temperature bias. Now the global mean temperature was sufficiently close to our target value and drift was very weak. At this point we decided to increase the Arctic sea ice volume from 18×10¹² m³ to 22×10¹² m³ by raising the cfreeze parameter from 1/2 to 2/3. ECHAM5/MPIOM had this parameter set to 4/5. These three final parameter settings were done while running the model in coupled mode.

Some of the paper’s results (not shown here) are some “parallel worlds” with different parameters. In essence, while working through the model development phase they took a lot of notes of what they did, what they changed, and at the end they went back and created some alternatives from some of their earlier choices. The parameter choices along with a set of resulting climate properties are shown in their table 10.

Some summary statements:

Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model. Some of the behavioral changes are desirable, and even targeted, but others may be a side effect of the tuning. The choices we make naturally depend on our preconceptions, preferences and objectives. We choose to tune our model because the alternatives – to either drift away from the known climate state, or to introduce flux-corrections – are less attractive. Within the foreseeable future climate model tuning will continue to be necessary as the prospects of constraining the relevant unresolved processes with sufficient precision are not good.

Climate model tuning has developed well beyond just controlling global mean temperature drift. Today, we tune several aspects of the models, including the extratropical wind- and pressure fields, sea-ice volume and to some extent cloud-field properties. By doing so we clearly run the risk of building the models’ performance upon compensating errors, and the practice of tuning is partly masking these structural errors. As one continues to evaluate the models, sooner or later these compensating errors will become apparent, but the errors may prove tedious to rectify without jeopardizing other aspects of the model that have been adjusted to them.

Climate models’ ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results. Rather, our confidence in the results provided by climate models is gained through the development of a fundamental physical understanding of the basic processes that create climate change. More than a century ago it was first realized that increasing the atmospheric CO2 concentration leads to surface warming, and today the underlying physics and feedback mechanisms are reasonably understood (while quantitative uncertainty in climate sensitivity is still large). Coupled climate models are just one of the tools applied in gaining this understanding.

[Emphasis added].

..In this paper we have attempted to illustrate the tuning process, as it is being done currently at our institute. Our hope is to thereby help de-mystify the practice, and to demonstrate what can and cannot be achieved. The impacts of the alternative tunings presented were smaller than we thought they would be in advance of this study, which in many ways is reassuring. We must emphasize that our paper presents only a small glimpse at the actual development and evaluation involved in preparing a comprehensive coupled climate model – a process that continues to evolve as new datasets emerge, model parameterizations improve, additional computational resources become available, as our interests, perceptions and objectives shift, and as we learn more about our model and the climate system itself.

Note 1: The link to the paper gives the html version. From there you can click the “Get pdf” link and it seems to come up ok – no paywall. If not, try the link to the draft paper (but the formatting makes it less readable).

I’ve had questions about the use of ensembles of climate models for a while. I was helped by working through a bunch of papers which explain the origin of ensemble forecasting. I still have my questions but maybe this post will help to create some perspective.

The Stochastic Sixties

Lorenz encapsulated the problem in the mid-1960s like this:

The proposed procedure chooses a finite ensemble of initial states, rather than the single observed initial state. Each state within the ensemble resembles the observed state closely enough so that the difference might be ascribed to errors or inadequacies in observation. A system of dynamic equations previously deemed to be suitable for forecasting is then applied to each member of the ensemble, leading to an ensemble of states at any future time. From an ensemble of future states, the probability of occurrence of any event, or such statistics as the ensemble mean and ensemble standard deviation of any quantity, may be evaluated.

Between the near future, when all states within an ensemble will look about alike, and the very distant future, when two states within an ensemble will show no more resemblance than two atmospheric states chosen at random, it is hoped that there will be an extended range when most of the states in an ensemble, while not constituting good pin-point forecasts, will possess certain important features in common. It is for this extended range that the procedure may prove useful.

[Emphasis added].

Epstein picked up some of these ideas in two papers in 1969. Here’s an extract from The Role of Initial Uncertainties in Prediction.

While it has long been recognized that the initial atmospheric conditions upon which meteorological forecasts are based are subject to considerable error, little if any explicit use of this fact has been made.

Operational forecasting consists of applying deterministic hydrodynamic equations to a single “best” initial condition and producing a single forecast value for each parameter..

..One of the questions which has been entirely ignored by the forecasters.. is whether or not one gets the “best” forecast by applying the deterministic equations to the “best” values of the initial conditions and relevant parameters..

..one cannot know a uniquely valid starting point for each forecast. There is instead an ensemble of possible starting points, but the identification of the one and only one of these which represents the “true” atmosphere is not possible.

In essence, the realization that small errors can grow in a non-linear system like weather and climate leads us to ask what the best method is of forecasting the future. In this paper Epstein takes a look at a few interesting simple problems to illustrate the ensemble approach.

Let’s take a look at one very simple example – the slowing of a body due to friction.

Rate of change of velocity (dv/dt) is proportional to the velocity, v. The constant of proportionality is k, which increases with more friction.

dv/dt = −kv, therefore v = v0·exp(−kt), where v0 = initial velocity

With a starting velocity of 10 m/s and k = 10⁻⁴ (in units of 1/s), how does velocity change with time?


Figure 1 – note the logarithm of time on the time axis, time runs from 10 secs – 100,000 secs

Probably no surprises there.

Now let’s consider in the real world that we don’t know the starting velocity precisely, and also we don’t know the coefficient of friction precisely. Instead, we might have some idea about the possible values, which could be expressed as a statistical spread. Epstein looks at the case for v0 with a normal distribution and k with a gamma distribution (for specific reasons not that important).

Mean of v0:   <v0> = 10 m/s

Standard deviation of v0:   σv = 1 m/s

Mean of k:   <k> = 10⁻⁴ /s

Standard deviation of k:   σk = 3×10⁻⁵ /s

The particular example he gave has equations that can be easily manipulated, allowing him to derive an analytical result. In 1969 that was necessary. Now we have computers and some lucky people have Matlab. My approach uses Matlab.

What I did was create a set of 1,000 random normally distributed values for v0, with the mean and standard deviation above. Likewise, a set of gamma distributed values for k.

Then we take each pair in turn and produce the velocity vs time curve. Then we look at the stats of the 1,000 curves.
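The same Monte Carlo experiment is easy to reproduce. Here is a sketch in Python rather than Matlab (the sample size and distribution parameters are the values quoted above; the random seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# v0 ~ Normal(mean 10 m/s, sd 1 m/s)
v0 = rng.normal(10.0, 1.0, n)

# k ~ Gamma with mean 1e-4 /s and sd 3e-5 /s
# (shape = (mean/sd)^2, scale = sd^2/mean)
shape = (1e-4 / 3e-5) ** 2
scale = (3e-5) ** 2 / 1e-4
k = rng.gamma(shape, scale, n)

t = np.logspace(1, 5, 200)                 # 10 s to 100,000 s
v = v0[:, None] * np.exp(-np.outer(k, t))  # each row is one ensemble member

ens_mean = v.mean(axis=0)   # mean of the 1,000 curves
ens_std = v.std(axis=0)     # spread rises to a peak, then decays to zero
det = 10.0 * np.exp(-1e-4 * t)  # single run using the "best" (mean) parameters

# At late times the ensemble mean sits above the deterministic run:
# exp(-kt) is convex in k, so averaging over k raises the mean (Jensen)
```

Plotting `ens_mean` against `det` reproduces the gap between the green and red curves discussed below, and `ens_std` shows the interior peak in the spread.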

Interestingly the standard deviation increases before fading away to zero. It’s easy to see why the standard deviation will end up at zero – because the final velocity is zero. So we could easily predict that. But it’s unlikely we would have predicted that the standard deviation of velocity would start to increase after 3,000 seconds and then peak at around 9,000 seconds.

Here is the graph of standard deviation of velocity vs time:


Figure 2

Now let’s look at the spread of results. The blue curves in the top graph (below) are each individual ensemble member and the green is the mean of the ensemble results. The red curve is the calculation of velocity against time using the mean of v0 and k:


Figure 3

The bottom curve zooms in on one portion (note the time axis is now linear), with the thin green lines being 1 standard deviation in each direction.

What is interesting is the significant difference between the mean of the ensemble members and the single value calculated using the mean parameters. This is quite usual with “non-linear” equations (aka the real world).

So, if you aren’t sure about your parameters or your initial conditions, taking the “best value” and running the simulation can well give you a completely different result from sampling the parameters and initial conditions and taking the mean of this “ensemble” of results..

Epstein concludes in his paper:

In general, the ensemble mean value of a variable will follow a different course than that of any single member of the ensemble. For this reason it is clearly not an optimum procedure to forecast the atmosphere by applying deterministic hydrodynamic equations to any single initial condition, no matter how well it fits the available, but nevertheless finite and fallible, set of observations.

Epstein’s other 1969 paper, Stochastic Dynamic Prediction, is more involved. He uses Lorenz’s “minimum set of atmospheric equations” and compares the results after 3 days from using the “best value” starting point vs an ensemble approach. The best value approach has significant problems compared with the ensemble approach:

Note that this does not mean the deterministic forecast is wrong, only that it is a poor forecast. It is possible that the deterministic solution would be verified in a given situation but the stochastic solutions would have better average verification scores.

Parameterizations

One of the important points in the earlier work on numerical weather forecasting was the understanding that parameterizations also have uncertainty associated with them.

For readers who haven’t seen them, here’s an example of a parameterization, for latent heat flux, LH:

LH = L × ρ × C_DE × U_r × (q_s − q_a)

which says Latent Heat flux = latent heat of vaporization x density x “aerodynamic transfer coefficient” x wind speed at the reference level x ( humidity at the surface – humidity in the air at the reference level)

The “aerodynamic transfer coefficient” is somewhere around 0.001 over ocean to 0.004 over moderately rough land.

The real formula for latent heat transfer is much simpler:

LH = ρ × L × (covariance of vertical eddy velocity with upward moisture fluctuations) – i.e., the turbulent moisture flux multiplied by density and the latent heat of vaporization

These are values that vary even across very small areas and across many timescales. Across one “grid cell” of a numerical model we can’t use the “real formula” because we only get to put in one value for upwards eddy velocity and one value for upwards moisture flux and anyway we have no way of knowing the values “sub-grid”, i.e., at the scale we would need to know them to do an accurate calculation.

That’s why we need parameterizations. By the way, I don’t know whether this is a current formula in use in NWP, but it’s typical of what we find in standard textbooks.
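As a quick numerical sketch of the bulk formula above (the input values are plausible textbook magnitudes for an ocean surface, not taken from any particular model):

```python
def latent_heat_flux(L_v, rho, c_de, u_ref, q_surf, q_air):
    """Bulk aerodynamic formula: LH = L * rho * C_DE * U_r * (q_s - q_a)."""
    return L_v * rho * c_de * u_ref * (q_surf - q_air)

# Illustrative values: latent heat of vaporization 2.5e6 J/kg,
# air density 1.2 kg/m^3, ocean transfer coefficient 0.0013,
# 5 m/s wind, specific humidity 20 g/kg at the surface vs 15 g/kg aloft
lh = latent_heat_flux(2.5e6, 1.2, 1.3e-3, 5.0, 0.020, 0.015)
print(round(lh, 1))  # 97.5 W/m^2 - a typical ocean latent heat flux magnitude
```

Changing C_DE from 0.0013 to 0.0010 changes the flux by over 20%, which is exactly why the parameter uncertainty discussed next matters.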

So right away it should be clear why we need to apply the same ensemble approach to the parameters describing these sub-grid processes as well as to the initial conditions. Are we sure that over Connecticut the parameter C_DE = 0.004, or should it be 0.0035? In fact, parameters like this are usually calculated from the average of a number of experiments. They conceal as much as they reveal. The correct value probably depends on other parameters, and in so far as it represents a real physical property it will vary with the time of day, the season and other factors. It might even be, “on average”, wrong – because the set of experiments averaged over was an imperfect sample, and “on average” over all climate conditions is a different sample.

The Numerical Naughties

The insights gained in the stochastic sixties weren’t so practically useful until some major computing power came along in the 90s and especially the 2000s.

Here is Palmer et al (2005):

Ensemble prediction provides a means of forecasting uncertainty in weather and climate prediction. The scientific basis for ensemble forecasting is that in a non-linear system, which the climate system surely is, the finite-time growth of initial error is dependent on the initial state. For example, Figure 2 shows the flow-dependent growth of initial uncertainty associated with three different initial states of the Lorenz (1963) model. Hence, in Figure 2a uncertainties grow significantly more slowly than average (where local Lyapunov exponents are negative), and in Figure 2c uncertainties grow significantly more rapidly than one would expect on average. Conversely, estimates of forecast uncertainty based on some sort of climatological-average error would be unduly pessimistic in Figure 2a and unduly optimistic in Figure 2c.


From Palmer et al 2005

 

Figure 4

The authors then provide an interesting example to demonstrate the practical use of ensemble forecasts. In the top left are the “deterministic predictions” using the “best estimate” of initial conditions. The rest of the charts 1-50 are the ensemble forecast members each calculated from different initial conditions. We can see that there was a low yet significant chance of a severe storm:

Ensemble-Palmer-2005-fig3-499px

From Palmer et al 2005

Figure 5 – Click to enlarge

In fact a severe storm did occur so the probabilistic forecast was very useful, in that it provided information not available with the deterministic forecast.

This is a nice illustration of some benefits. It doesn’t tell us how well NWPs perform in general.

One measure is the forecast spread of certain variables as the forecast time increases. Generally single model ensembles don’t do so well – they under-predict the spread of results at later time periods.

Here’s an example of the performance of a multi-model ensemble vs single-model ensembles on forecasting whether an event will occur or not. (Intuitively, the axes seem the wrong way round.) The single-model versions are over-confident – when the forecast probability is 1.0 (certain), the observed frequency is about 0.7; when the forecast probability is 0.8, the observed frequency is about 0.6; and so on:

From Palmer et al 2005

Figure 6

We can see that, at least in this case, the multi-model did a pretty good job. However, similar work on forecasting precipitation events showed much less success.

In their paper, Palmer and his colleagues contrast multi-model vs multi-parameterization within one model. I am not clear what the difference is – whether it is a technicality or something fundamentally different in approach. The example above is multi-model. They do give some examples of multi-parameterizations (with a similar explanation to what I gave in the section above). Their paper is well worth a read, as is the paper by Lewis (see links below).

Discussion

The concept of taking a “set of possible initial conditions” for weather forecasting makes a lot of sense. The concept of taking a “set of possible parameterizations” also makes sense although it might be less obvious at first sight.

In the first case we know that we don’t know the precise starting point because observations have errors and we lack a perfect observation system. In the second case we understand that a parameterization is some empirical formula which is clearly not “the truth”, but some approximation that is the best we have for the forecasting job at hand, and the “grid size” we are working to. So in both cases creating an ensemble around “the truth” has some clear theoretical basis.

Now what is also important for this theoretical basis is that we can test the results – at least with weather prediction (NWP). That’s because of the short time periods under consideration.

A statement from Palmer (1999) will resonate in the hearts of many readers:

A desirable if not necessary characteristic of any physical model is an ability to make falsifiable predictions

When we come to ensembles of climate models the theoretical case for multi-model ensembles is less clear (to me). There’s a discussion in IPCC AR5 that I have read. I will follow up the references and perhaps produce another article.

References

The Role of Initial Uncertainties in Prediction, Edward Epstein, Journal of Applied Meteorology (1969) – free paper

Stochastic Dynamic Prediction, Edward Epstein, Tellus (1969) – free paper

On the Possible Reasons for Long-Period Fluctuations of the General Circulation, EN Lorenz, Proc. WMO-IUGG Symposium on Research and Development Aspects of Long-Range Forecasting, Boulder, CO, World Meteorological Organization Technical Note (1965) – cited from Lewis (2005)

Roots of Ensemble Forecasting, John Lewis, Monthly Weather Review (2005) – free paper

Predicting uncertainty in forecasts of weather and climate, T.N. Palmer (1999), also published as ECMWF Technical Memorandum No. 294 – free paper

Representing Model Uncertainty in Weather and Climate Prediction, TN Palmer, GJ Shutts, R Hagedorn, FJ Doblas-Reyes, T Jung & M Leutbecher, Annual Review Earth Planetary Sciences (2005) – free paper

Over the last few years I’ve written lots of articles relating to the inappropriately-named “greenhouse” effect and covered some topics in great depth. I’ve also seen lots of comments and questions, which have helped me understand common confusions and misunderstandings.

This article, with huge apologies to regular long-suffering readers, covers familiar ground in simple terms. It’s a reference article. I’ve referenced other articles and series as places to go to understand a particular topic in more detail.

One of the challenges of writing a short simple explanation is that it opens you up to the criticism of having omitted important technical details in order to keep it short. Remember, this is the simple version..

Preamble

First of all, the “greenhouse” effect is not AGW. In maths, physics, engineering and other hard sciences, one block is built upon another block. AGW is built upon the “greenhouse” effect. If AGW is wrong, it doesn’t invalidate the greenhouse effect. If the greenhouse effect is wrong, it does invalidate AGW.

The greenhouse effect is built on very basic physics, proven for 100 years or so, that is not in any dispute in scientific circles. Fantasy climate blogs of course do dispute it.

Second, common experience of linearity in everyday life causes many people to question how a tiny proportion of “radiatively-active” molecules can have such a profound effect. Common experience is not a useful guide. Non-linearity is the norm in real science. Since the Enlightenment at least, scientists have measured things rather than just assuming consequences based on everyday experience.

The Elements of the “Greenhouse” Effect

Atmospheric Absorption

1. The “radiatively-active” gases in the atmosphere:

  • water vapor
  • CO2
  • CH4
  • N2O
  • O3
  • and others

absorb radiation from the surface and transfer this energy via collision to the local atmosphere. Oxygen and nitrogen absorb such a tiny amount of terrestrial radiation that even though they constitute an overwhelming proportion of the atmosphere their radiative influence is insignificant (note 1).

How do we know all this? It’s basic spectroscopy, as detailed in exciting journals like the Journal of Quantitative Spectroscopy and Radiative Transfer over many decades. Shine radiation of a specific wavelength through a gas and measure the absorption. Simple stuff and irrefutable.

Atmospheric Emission

2. The “radiatively-active” gases in the atmosphere also emit radiation. Gases that absorb at a wavelength also emit at that wavelength. Gases that don’t absorb at that wavelength don’t emit at that wavelength. This is a consequence of Kirchhoff’s law.

The intensity of emission of radiation from a local portion of the atmosphere is set by the atmospheric emissivity and the temperature.
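As a sketch of that last point, the Planck function gives the spectral intensity a layer would emit if its emissivity were 1 – a colder layer emits less at every wavelength. (The wavelength and the two temperatures below are illustrative choices.)

```python
import math

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T) in W/m^2/sr/m."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength_m**5 /
            (math.exp(h * c / (wavelength_m * kb * temp_k)) - 1))

# Emission near 15 um from a warm low layer vs a cold high layer
low = planck(15e-6, 288.0)    # near-surface temperature
high = planck(15e-6, 220.0)   # upper-troposphere temperature
# the colder, higher layer emits substantially less at the same wavelength
```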

Convection

3. The transfer of heat within the troposphere is mostly by convection. The sun heats the surface of the earth through the (mostly) transparent atmosphere (note 2). The temperature profile, known as the “lapse rate”, is around 6K/km in the tropics. The lapse rate is principally determined by non-radiative factors – as a parcel of air ascends it expands into the lower pressure and cools during that expansion (note 3).

The important point is that the atmosphere is cooler the higher you go (within the troposphere).
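The cooling rate of a rising dry parcel follows from basic thermodynamics as g/c_p. A quick check (the ~6 K/km quoted above for the tropics is lower because condensation releases latent heat):

```python
g = 9.81       # gravitational acceleration, m/s^2
cp = 1004.0    # specific heat of dry air at constant pressure, J/kg/K

dry_lapse = g / cp * 1000.0   # dry adiabatic lapse rate in K per km
print(round(dry_lapse, 2))    # 9.77 K/km
```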

Energy Balance

4. The overall energy in the climate system is determined by the absorbed solar radiation and the emitted radiation from the climate system. The absorbed solar radiation – globally annually averaged – is approximately 240 W/m² (note 4). Unsurprisingly, the emitted radiation from the climate system is also (globally annually averaged) approximately 240 W/m². Any imbalance between the two means the climate is warming or cooling.
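A quick check on those numbers: the effective emission temperature implied by 240 W/m² follows from the Stefan–Boltzmann law:

```python
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
olr = 240.0              # absorbed = emitted, globally annually averaged

t_eff = (olr / sigma) ** 0.25
print(round(t_eff, 1))   # 255.1 K - well below the ~288 K mean surface temperature
```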

Emission to Space

5. Most of the emission of radiation to space by the climate system is from the atmosphere, not from the surface of the earth. This is a key element of the “greenhouse” effect. The intensity of emission depends on the local atmosphere. So the temperature of the atmosphere from which the emission originates determines the amount of radiation.

If the place of emission of radiation – on average – moves upward for some reason then the intensity decreases. Why? Because it is cooler the higher up you go in the troposphere. Likewise, if the place of emission – on average – moves downward for some reason, then the intensity increases (note 5).

More GHGs

6. If we add more radiatively-active gases (like water vapor and CO2) then the atmosphere becomes more “opaque” to terrestrial radiation and the consequence is the emission to space from the atmosphere moves higher up (on average). Higher up is colder. See note 6.

So this reduces the intensity of emission of radiation, which reduces the outgoing radiation, which therefore adds energy into the climate system. And so the climate system warms (see note 7).
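To put rough illustrative numbers on this (a back-of-envelope linearization, not a radiative-transfer calculation; the emission temperature and lapse rate are typical values):

```python
sigma = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
t_emit = 255.0       # typical effective emission temperature, K
lapse = 6.5e-3       # typical tropospheric lapse rate, K per metre

# If the average emission height rises by dz, the emitting layer is
# lapse*dz colder, and the emitted flux falls by roughly 4*sigma*T^3*dT
dz = 100.0                        # metres (illustrative)
dT = lapse * dz                   # 0.65 K cooler
dF = 4 * sigma * t_emit**3 * dT   # ~2.4 W/m^2 less outgoing radiation
```

A change of this order is comparable to the forcing usually quoted for substantial GHG increases, which is why a modest shift in emission height matters.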

That’s it!

It’s as simple as that. The end.

A Few Common Questions

CO2 is Already Saturated

There are almost 315,000 individual absorption lines for CO2 recorded in the HITRAN database. Some absorption lines are stronger than others. At the strongest point of absorption – 14.98 μm (667.5 cm⁻¹) – 95% of radiation is absorbed in only 1 m of the atmosphere (at standard temperature and pressure at the surface). That’s pretty impressive.

By contrast, from 570–600 cm⁻¹ (16.7–17.5 μm) and 730–770 cm⁻¹ (13.0–13.7 μm) the CO2 absorption through the atmosphere is nowhere near “saturated”. It’s more like 30% absorbed through a 1 km path.
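Exponential (Beer–Lambert) attenuation links those two statements. A sketch, taking the 95%-in-1-m and 30%-in-1-km figures from the text and deriving everything else:

```python
import math

def absorbed_fraction(k_per_m, path_m):
    """Beer-Lambert law: transmitted fraction is exp(-k*z)."""
    return 1.0 - math.exp(-k_per_m * path_m)

# Band centre: 95% absorbed in 1 m implies k = -ln(0.05) ~ 3.0 per metre
k_centre = -math.log(0.05)
# Band wings: 30% absorbed over 1 km implies a far smaller coefficient
k_wing = -math.log(0.70) / 1000.0

# Over 10 m at the band centre, absorption is essentially total,
# while the wings remain far from saturation over the whole path
print(round(absorbed_fraction(k_centre, 10.0), 6))  # 1.0
```

The enormous ratio between `k_centre` and `k_wing` is the line-by-line variability the linked articles calculate in detail.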

You can see the complexity of these results in many graphs in Atmospheric Radiation and the “Greenhouse” Effect – Part Nine – calculations of CO2 transmittance vs wavelength in the atmosphere using the 300,000 absorption lines from the HITRAN database, and see also Part Eight – interesting actual absorption values of CO2 in the atmosphere from Grant Petty’s book

The complete result combining absorption and emission is calculated in Visualizing Atmospheric Radiation – Part Seven – CO2 increases – changes to TOA in flux and spectrum as CO2 concentration is increased

CO2 Can’t Absorb Anything of Note Because it is Only 0.04% of the Atmosphere

See the point above. Many spectroscopy professionals have measured the absorptivity of CO2. It has a huge variability in absorption, but the most impressive is that 95% of 14.98 μm radiation is absorbed in just 1m. How can that happen? Are spectroscopy professionals charlatans? You need evidence, not incredulity. Science involves measuring things and this has definitely been done. See the HITRAN database.

Water Vapor Overwhelms CO2

This is an interesting point, although not correct when we consider energy balance for the climate. See Visualizing Atmospheric Radiation – Part Four – Water Vapor – results of surface (downward) radiation and upward radiation at TOA as water vapor is changed.

The key point behind all the detail is that the top of atmosphere radiation change (as CO2 changes) is the important one. The surface change (forcing) from increasing CO2 is not important, is definitely much weaker and is often insignificant. Surface radiation changes from CO2 will, in many cases, be overwhelmed by water vapor.

Water vapor does not overwhelm CO2 high up in the atmosphere because there is very little water vapor there – and the radiative effect of water vapor is dramatically impacted by its concentration, due to the “water vapor continuum”.

The Calculation of the “Greenhouse” Effect is based on “Average Surface Temperature” and there is No Such Thing

Simplified calculations of the “greenhouse” effect use some averages to make some points. They help to create a conceptual model.

Real calculations, using the equations of radiative transfer, don’t use an “average” surface temperature and don’t rely on a 33 K “greenhouse” effect. Would the temperature decrease by 33 K if all of the GHGs were removed from the atmosphere? Almost certainly not, because of feedbacks. We don’t know the effect of all of the feedbacks. But would the climate be colder? Definitely.

See The Rotational Effect – why the rotation of the earth has absolutely no effect on climate, or so a parody article explains..

The Second Law of Thermodynamics Prohibits the Greenhouse Effect, or so some Physicists Demonstrated..

See The Three Body Problem – a simple example with three bodies to demonstrate how a “with atmosphere” earth vs a “without atmosphere” earth will generate different equilibrium temperatures. Please review the entropy calculations and explain (you will be the first) where they are wrong, or perhaps explain why entropy doesn’t matter (and revolutionize the field).

See Gerlich & Tscheuschner for the bait and switch routine by this operatic duo.

And see Kramm & Dlugi On Dodging the “Greenhouse” Bullet – Kramm & Dlugi demonstrate that the “greenhouse” effect doesn’t exist by writing a few words in a conclusion while carefully dodging the actual main point throughout their entire paper. However, they do recover Kepler’s laws and point out a few errors in a few websites. Note that one of the authors kindly showed up to comment on this article but never answered the important question asked of him. Probably just too busy.. Kramm & Dlugi also helpfully (if unintentionally) explain that G&T were wrong – see Kramm & Dlugi On Illuminating the Confusion of the Unclear, where Kramm & Dlugi step up as skeptics of the “greenhouse” effect and fans of Gerlich & Tscheuschner, yet clarify that colder atmospheric radiation is absorbed by the warmer earth..

And for more on that exciting subject, see Confusion over the Basics under the sub-heading The Second Law of Thermodynamics.

Feedbacks overwhelm the Greenhouse Effect

This is a totally different question. The “greenhouse” effect is the “greenhouse” effect. If the effect of more CO2 is totally countered by some feedback then that will be wonderful. But that is actually nothing to do with the “greenhouse” effect. It would be a consequence of increasing temperature.

As noted in the preamble, it is important to separate out the different building blocks in understanding climate.

Miskolczi proved that the Greenhouse Effect has no Effect

Miskolczi claimed that the greenhouse effect was real. He also claimed that more CO2 was balanced out by a corresponding decrease in water vapor. See the Miskolczi series for a tedious refutation of his paper, which was based on imaginary laws of thermodynamics and questionable experimental evidence.

Once again, it is important to be able to separate out two ideas. Is the greenhouse effect false? Or is the greenhouse effect true but wiped out by a feedback?

If you don’t care, so long as you get the right result you will be in ‘good’ company (well, you will join an extremely large company of people). But this blog is about science. Not wishful thinking. Don’t mix the two up..

Convection “Short-Circuits” the Greenhouse Effect

Let’s assume that, regardless of the amount of energy arriving at the earth’s surface, the lapse rate stays constant, so the more heat arriving, the more heat leaves. That is, the temperature profile stays constant. (It’s a questionable assumption that also impacts the AGW question.)

It doesn’t change the fact that with more GHGs, the radiation to space will be from a higher altitude. A higher altitude will be colder. Less radiation to space and so the climate warms – even with this “short-circuit”.

In a climate without convection, the surface temperature will start off higher, and the GHG effect from doubling CO2 will be higher. See Radiative Atmospheres with no Convection.

In summary, this isn’t an argument against the greenhouse effect, this is possibly an argument about feedbacks. The issue about feedbacks is a critical question in AGW, not a critical question for the “greenhouse” effect. Who can say whether the lapse rate will be constant in a warmer world?

Notes

Note 1 – An important exception is O2 absorbing solar radiation high up above the troposphere (lower atmosphere). But O2 does not absorb significant amounts of terrestrial radiation.

Note 2 – 99% of solar radiation has a wavelength <4μm. In these wavelengths, actually about 1/3 of solar radiation is absorbed in the atmosphere. By contrast, most of the terrestrial radiation, with a wavelength >4μm, is absorbed in the atmosphere.

Note 3 – see:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

Note 4 – see Earth’s Energy Budget – a series on the basics of the energy budget

Note 5 – the “place of emission” is a useful conceptual tool but in reality the emission of radiation takes place from everywhere between the surface and the stratosphere. See Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what is the “Average Height of Emission” and other questions.

Also, take a look at the complete series: Visualizing Atmospheric Radiation.

Note 6 – the balance between emission and absorption is found in the equations of radiative transfer. These are derived from fundamental physics – see Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations – the equations of radiative transfer, including the plane parallel assumption, and it’s nothing to do with blackbodies. The fundamental physics is not just proven in the lab: spectral measurements at the top of atmosphere and the surface match the values calculated using the radiative transfer equations – see Theory and Experiment – Atmospheric Radiation – real values of total flux and spectra compared with the theory.

Also, take a look at the complete series: Atmospheric Radiation and the “Greenhouse” Effect

Note 7 – this calculation is under the assumption of “all other things being equal”. Of course, in the real climate system, all other things are not equal. However, to understand an effect “pre-feedback” we need to separate it from the responses to the system.

If we open an introductory atmospheric physics textbook, we find that the temperature profile in the troposphere (lower atmosphere) is mostly explained by convection. (See for example, Things Climate Science has Totally Missed? – Convection)

We also find that the temperature profile in the stratosphere is mostly determined by radiation. And that the overall energy balance of the climate system is determined by radiation.

Many textbooks introduce the subject of convection in this way:

  • what would the temperature profile be like if there was no convection, only radiation for heat transfer
  • why is the temperature profile actually different
  • how does pressure reduce with height
  • what happens to air when it rises and expands in the lower pressure environment
  • derivation of the “adiabatic lapse rate”, which in layman’s terms is the temperature change when we have relatively rapid movements of air
  • how the real world temperature profile (lapse rate) compares with the calculated adiabatic lapse rate and why

We looked at the last four points in some detail in a few articles:

Density, Stability and Motion in Fluids – some basics about instability
Potential Temperature – explaining “potential temperature” and why the “potential temperature” increases with altitude
Temperature Profile in the Atmosphere – The Lapse Rate – lots more about the temperature profile in the atmosphere

In this article we will look at the first point.

All of the atmospheric physics textbooks I have seen use a very simple model for explaining the temperature profile in a fictitious “radiation only” environment. The simple model is great for giving insight into how radiation travels.

Physics textbooks, good ones anyway, try to use the simplest model that explains a phenomenon.

The simple model, in brief, is the “semi-gray approximation”. This treats the atmosphere as completely transparent to solar radiation, while absorbing terrestrial radiation. Its main simplification is that the absorption is constant with wavelength. This makes the problem analytically tractable – which means we can rewrite the starting equations and plot a nice graph of the result.
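For the curious, the standard semi-gray (two-stream, radiative equilibrium) result is σT⁴(τ) = F(1 + τ)/2, with the ground at σT_g⁴ = F(1 + τ_s/2), where τ is optical depth measured down from the top and F is the outgoing flux. A quick sketch (the total optical depth of 4 is just an assumed illustrative value):

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
F = 240.0         # absorbed solar = outgoing longwave, W/m^2

tau = np.linspace(0.0, 4.0, 9)  # gray optical depth, 0 at TOA
# Radiative equilibrium gray-atmosphere profile: sigma*T^4 = F(1+tau)/2
T_air = (F * (1.0 + tau) / (2.0 * SIGMA)) ** 0.25

tau_s = 4.0  # assumed total optical depth of the atmosphere
# Ground temperature (note the discontinuity above the surface):
T_ground = (F * (1.0 + tau_s / 2.0) / SIGMA) ** 0.25

print(T_air[0])   # "skin" temperature at TOA, ~214 K
print(T_ground)   # ~336 K
```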

However, atmospheric absorption is the total opposite of constant. Here is an example of the absorption vs wavelength of a minor “greenhouse” gas:

From Vardavas & Taylor (2007)

Figure 1

So from time to time I’ve wondered what the “no convection” atmosphere would look like with real GHG absorption lines. I also thought it would be especially interesting to see the effect of doubling CO2 in this fictitious environment.

This article is for curiosity value only, and for helping people understand radiative transfer a little better.

We will use the Matlab program seen in the series Visualizing Atmospheric Radiation. This does a line by line calculation of radiative transfer for all of the GHGs, pulling the absorption data out of the HITRAN database.

I updated the program in a few subtle ways. Mainly, the special treatment of the stratosphere – the region where convection stops – was removed, because in this fictitious world there is no convection in the lower atmosphere either.

Here is a simulation based on 380 ppm CO2, 1775 ppb CH4, 319 ppb N2O and 50% relative humidity all through the atmosphere. Top of atmosphere was 100 mbar and the atmosphere was divided into 40 layers of equal pressure. Absorbed solar radiation was set to 240 W/m² with no solar absorption in the atmosphere. That is (unlike in the real world), the atmosphere has been made totally transparent to solar radiation.

The starting point was a surface temperature of 288K (15ºC) and a lapse rate of 6.5K/km – with no special treatment of the stratosphere. The final surface temperature was 326K (53ºC), an increase of 38ºC:

[Figure: temperature profile – no convection, current GHGs, 40 levels, 50% RH]

Figure 2
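The layer setup and starting profile described above can be sketched as follows (Python rather than the original Matlab; the 7.6 km scale height used to convert pressure to altitude is an assumption for illustration):

```python
import numpy as np

# Assumed setup: surface at 1000 mbar, TOA at 100 mbar,
# 40 layers of equal pressure thickness.
p_surf, p_toa, n_layers = 1000.0, 100.0, 40
p_edges = np.linspace(p_surf, p_toa, n_layers + 1)  # layer boundaries, mbar
p_mid = 0.5 * (p_edges[:-1] + p_edges[1:])          # layer centres, mbar

# Initial temperatures: 288 K at the surface, 6.5 K/km lapse rate,
# altitude estimated from pressure via an assumed ~7.6 km scale height.
H = 7.6                          # scale height, km (assumption)
z = H * np.log(p_surf / p_mid)   # approximate altitude of each layer, km
T_init = 288.0 - 6.5 * z         # no special treatment of the stratosphere
```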

The ocean depth was only 5m. This just helps get to a new equilibrium faster. If we change the heat capacity of a system like this the end result is the same, the only difference is the time taken.
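A rough feel for why a shallow slab equilibrates faster: the linearized e-folding time of a mixed layer scales with its heat capacity. A back-of-envelope sketch (the 70 m comparison depth is just an illustrative assumption):

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
rho, c_w = 1000.0, 4186.0  # water density (kg/m^3) and specific heat (J/(kg K))
T = 288.0               # reference temperature, K

for depth in (5.0, 70.0):  # 5 m slab vs a more ocean-like 70 m mixed layer
    C = rho * c_w * depth                        # heat capacity per m^2
    tau_days = C / (4 * SIGMA * T**3) / 86400.0  # linearized e-folding time
    print(f"{depth:>5.0f} m: ~{tau_days:.0f} days")  # ~45 vs ~626 days
```

The equilibrium is the same either way; only the spin-up time changes.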

Water vapor was set at a relative humidity of 50%. For these first results I didn’t get the simulation to update the absolute humidity as the temperature changed. So the starting temperature was used to calculate absolute humidity and that mixing ratio was kept constant:

[Figure: water vapor concentration – no convection, current GHGs, 40 levels, 50% RH]

Figure 3
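The fixed mixing ratio just described can be estimated from the saturation vapor pressure. A sketch using the Magnus fit (a standard approximation; the exact formula the Matlab program uses may differ):

```python
import math

def saturation_vapor_pressure(T_celsius):
    """Magnus approximation for saturation vapor pressure, in hPa."""
    return 6.112 * math.exp(17.67 * T_celsius / (T_celsius + 243.5))

T_c, RH, p = 15.0, 0.5, 1000.0           # starting surface values, assumed
e = RH * saturation_vapor_pressure(T_c)  # actual vapor pressure, hPa
w = 0.622 * e / (p - e)                  # mass mixing ratio, kg/kg
print(f"mixing ratio ~ {w*1000:.1f} g/kg")  # held fixed as temperature changes
```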

The lapse rate, or temperature drop per km of altitude:

[Figure: lapse rate – no convection, current GHGs, 40 levels, 50% RH]

Figure 4

The flux down and flux up vs altitude:

[Figure: flux up and down vs altitude – no convection, current GHGs, 40 levels, 50% RH]

Figure 5

The top of atmosphere upward flux is 240 W/m² (actually at the 500 day point it was 239.5 W/m²) – the same as the absorbed solar radiation (note 1). The simulation doesn’t “force” the TOA flux to be this value. Instead, any imbalance in flux in each layer causes a temperature change, moving the surface and each part of the atmosphere into a new equilibrium.

A bit more technically for interested readers.. For a given layer we sum:

  • upward flux at the bottom of a layer minus upward flux at the top of a layer
  • downward flux at the top of a layer minus downward flux at the bottom of a layer

This sum equates to the “heating rate” of the layer. We then use the heat capacity and time to work out the temperature change. Then the next iteration of the simulation redoes the calculation.

And even more technically:

  • the upwards flux at the top of a layer = the upwards flux at the bottom of the layer x transmissivity of the layer plus the emission of that layer
  • the downwards flux at the bottom of a layer = the downwards flux at the top of the layer x transmissivity of the layer plus the emission of that layer

End of “more technically”..
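For readers who like code, here is a minimal sketch of those two recursions and the heating-rate sum – Python rather than the original Matlab, and a single gray band rather than a line-by-line calculation, so purely illustrative:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def step_fluxes(T_layers, T_surf, trans):
    """One pass of the up/down flux recursion for gray layers.
    trans[i] = transmissivity of layer i (bottom layer first)."""
    n = len(T_layers)
    # Layer emission: emissivity = absorptivity = 1 - transmissivity (Kirchhoff)
    emission = (1.0 - trans) * SIGMA * T_layers**4

    flux_up = np.zeros(n + 1)          # fluxes at the n+1 layer boundaries
    flux_up[0] = SIGMA * T_surf**4     # surface emission (blackbody)
    for i in range(n):                 # bottom of layer -> top of layer
        flux_up[i + 1] = flux_up[i] * trans[i] + emission[i]

    flux_down = np.zeros(n + 1)
    flux_down[n] = 0.0                 # no longwave arrives from space
    for i in range(n - 1, -1, -1):     # top of layer -> bottom of layer
        flux_down[i] = flux_down[i + 1] * trans[i] + emission[i]

    # Heating rate of each layer = net flux convergence:
    # (up in - up out) + (down in - down out)
    heating = (flux_up[:-1] - flux_up[1:]) + (flux_down[1:] - flux_down[:-1])
    return flux_up, flux_down, heating
```

A useful sanity check: with an isothermal atmosphere at the surface temperature, the upward flux is unchanged at every level. Each iteration then converts `heating` into a temperature change via the layer heat capacity and time step.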

Anyway, the main result is that the surface is much hotter and the temperature drop per km of altitude is much greater than in the real atmosphere. This is because it is “harder” for heat to travel through the atmosphere when radiation is the only mechanism. As the atmosphere thins out, which means fewer GHG molecules, radiation becomes progressively more effective at transferring heat. This is why the lapse rate is lower higher up in the atmosphere.

Now let’s have a look at what happens when we double CO2 from its current value (380ppm -> 760 ppm):

[Figure: temperature profile – no convection, doubled CO2, 40 levels, 50% RH]

Figure 6 – with CO2 doubled instantaneously from 380ppm at 500 days

The final surface temperature is 329.4K, increased from 326.2K. This is a (no feedback) increase of 3.2K.

The “pseudo-radiative forcing” = 18.9 W/m² (which doesn’t include any change to solar absorption). This radiative forcing is the immediate change in the TOA flux. (It isn’t directly comparable to the IPCC standard definition, which is evaluated at the tropopause and after the stratosphere has come back into equilibrium – neither of which has much meaning in a world without convection).

Let’s also look at the “standard case” of an increase from pre-industrial CO2 of 280 ppm to a doubling of 560 ppm. I ran this one for longer – 1000 days before doubling CO2 and 2000 days in total – because the starting point was further from balance. At the start, the TOA flux (outgoing longwave radiation) = 248 W/m². This means the climate was cooling quite a bit with the starting point we gave it.

At 280 ppm CO2, 1775 ppb CH4, 319 ppb N2O and 50% relative humidity (set at the starting point of 288K and 6.5K/km lapse rate), the surface temperature after 1,000 days = 323.9 K. At this point the TOA flux was 240.0 W/m². So overall the climate has cooled from its initial starting point but the surface is hotter.

This might seem surprising at first sight – the climate cools but the surface heats up? It’s simply that the “radiation-only” atmosphere has made it much harder for heat to get out. So the temperature drop per km of height is now much greater than it is in a convection atmosphere. Remember that we started with a temperature profile of 6.5K/km – a typical convection atmosphere.

After CO2 doubles to 560 ppm (and all other factors stay the same, including absolute humidity), the immediate effect is that the TOA flux drops to 221 W/m² (once again a radiative forcing of about 19 W/m²). This is because the atmosphere is now even more “resistant” to the escape of heat by radiation. The atmosphere is more opaque, so the average emission of radiation to space moves to a higher and colder part of the atmosphere. Colder parts of the atmosphere emit less radiation than warmer parts of the atmosphere.
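We can put a number on that “higher and colder” statement using the Stefan-Boltzmann law to convert the TOA fluxes above into effective emission temperatures:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def effective_emission_temp(olr):
    """Temperature of a blackbody emitting the given TOA flux."""
    return (olr / SIGMA) ** 0.25

print(effective_emission_temp(240.0))  # ~255 K, in balance
print(effective_emission_temp(221.0))  # ~250 K, immediately after doubling
```

So the effective emission temperature drops by about 5 K the instant CO2 is doubled, before the system warms back into balance.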

After the climate moves back into balance – a TOA flux of 240 W/m² – the surface temperature = 327.0 K – an increase (pre-feedback) of 3.1 K.

Compare this with the standard IPCC “with convection” no-feedback forcing of 3.7 W/m² and a “no feedback” temperature rise of about 1.2 K.

[Figure: temperature profile – no convection, 280 ppm to 560 ppm CO2, 40 levels, 50% RH]

Figure 7 – with CO2 doubled instantaneously from 280ppm at 1000 days

Then I introduced a more realistic model with solar absorption by water vapor in the atmosphere (changed parameter ‘solaratm’ in the Matlab program from ‘false’ to ‘true’). Unfortunately this part of the program does not use radiative transfer, only a very crude parameterization, just to get roughly the right amount of solar heating in roughly the right parts of the atmosphere.

The equilibrium surface temperature at 280 ppm CO2 was now “only” 302.7 K (almost 30ºC). Doubling CO2 to 560 ppm created a radiative forcing of 11 W/m², and a final surface temperature of 305.5K – that is, an increase of 2.8K.

Why is the surface temperature lower? Because in the “no solar absorption in the atmosphere” model, all of the solar radiation is absorbed by the ground and has to “fight its way out” from the surface up. Once you absorb solar radiation higher up than the surface, it’s easier for this heat to get out.

Conclusion

One of the common themes of fantasy climate blogs is that the results of radiative physics are invalidated by convection, which “short-circuits” radiation in the troposphere. No one in climate science is confused about the fact that convection dominates heat transfer in the lower atmosphere.

We can see in this set of calculations that when we have a radiation-only atmosphere the surface temperature is a lot higher than any current climate – at least when we consider a “one-dimensional” climate.

Of course, the whole world would be different and there are many questions about the amount of water vapor and the effect of circulation (or lack of it) on moving heat around the surface of the planet via the atmosphere and the ocean.

When we double CO2 from its pre-industrial value the radiative forcing is much greater in a “radiation-only atmosphere” than in a “radiative-convective atmosphere”, with the pre-feedback temperature rise 3ºC vs 1ºC.

So it is definitely true that convection short-circuits radiation in the troposphere. But the whole climate system can only gain and lose energy by radiation and this radiation balance still has to be calculated. That’s what current climate models do.

It’s often stated as a kind of major simplification (a “teaching model”) that with increases in GHGs the “average height of emission” moves up, and therefore the emission comes from a colder part of the atmosphere. This idea is explained in more detail, and with fewer simplifications, in Visualizing Atmospheric Radiation – Part Three – Average Height of Emission – the complex subject of where the TOA radiation originated from, what the “Average Height of Emission” is, and other questions.

A legitimate criticism of current atmospheric physics is that convection is poorly understood in contrast to subjects like radiation. This is true. And everyone knows it. But it’s not true to say that convection is ignored. And it’s not true to say that because “convection short-circuits radiation” in the troposphere that somehow more GHGs will have no effect.

On the other hand I don’t want to suggest that because more GHGs in the atmosphere mean that there is a “pre-feedback” temperature rise of about 1K, that somehow the problem is all nicely solved. On the contrary, climate is very complicated. Radiation is very simple by comparison.

All the standard radiative-convective calculation says is: “all other things being equal, a doubling of CO2 from pre-industrial levels would lead to a 1K increase in surface temperature”

All other things are not equal. But the complication is not that somehow atmospheric physics has just missed out convection. Hilarious. Of course, I realize most people learn their criticisms of climate science from people who have never read a textbook on the subject. Surprisingly, this doesn’t lead to quality criticism..

On more complexity  – I was also interested to see what happens if we readjust absolute humidity due to the significant temperature changes, i.e. we keep relative humidity constant. This led to some surprising results, so I will post them in a followup article.

Notes

Note 1 – The boundary conditions are important if you want to understand radiative heat transfer in the atmosphere.

First of all, the downward longwave radiation at TOA (top of atmosphere) = 0. Why? Because there is no “longwave”, i.e., terrestrial radiation, from outside the climate system. So at the top of the atmosphere the downward flux = 0. As we move down through the atmosphere the flux gradually increases. This is because the atmosphere emits radiation. We can divide up the atmosphere into fictitious “layers”. This is how all numerical (finite element analysis) programs actually work. Each layer emits and each layer also absorbs. The balance depends on the temperature of the source radiation vs the temperature of the layer of the atmosphere we are considering.

At the bottom of the atmosphere, i.e., at the surface, the upwards longwave radiation is the surface emission. This emission is given by the Stefan-Boltzmann equation with an emissivity of 1.0 if we consider the surface as a blackbody, which is a reasonable approximation for most surface types – for more on this, see Visualizing Atmospheric Radiation – Part Thirteen – Surface Emissivity – what happens when the earth’s surface is not a black body – useful to understand, seeing as it isn’t..
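That surface boundary condition is a one-liner (the 0.98 emissivity is just an illustrative value for a non-blackbody surface):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def surface_emission(T, emissivity=1.0):
    """Upward longwave flux from the surface via the Stefan-Boltzmann law."""
    return emissivity * SIGMA * T**4

print(surface_emission(288.0))         # ~390 W/m^2 for a blackbody at 288 K
print(surface_emission(288.0, 0.98))   # slightly less for a real surface
```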

At TOA, the upwards emission needs to equal the absorbed solar radiation, otherwise the climate system has an imbalance – either cooling or warming.

As a friend of mine in Florida says:

You can’t kill stupid, but you can dull it with a 4×2

Some ideas are so comically stupid that I thought there was no point writing about them. And yet, one after another, people who can type are putting forward these ideas on this blog.. At first I wondered if I was the object of a practical joke. Some kind of parody. Perhaps the joke is on me. But, just in case I was wrong about the practical joke..

 

If you pick up a textbook on heat transfer that includes a treatment of radiative heat transfer you find no mention of Arrhenius.

If you pick up a textbook on atmospheric physics none of the equations come from Arrhenius.

Yet there is a steady stream of entertaining “papers” which describe “where Arrhenius went wrong”, “Arrhenius and his debates with Fourier”. Who cares?

Likewise, if you study equations of motion in a rotating frame there is no discussion of where Newton went wrong, or where he got it right, or debates he got right or wrong with contemporaries. Who knows? Who cares?

History is fascinating. But if you want to study physics you can study it pretty well without reading about obscure debates between people who were in the formulation stages of the field.

Here are the building blocks of atmospheric radiation:

  • The emission of radiation – described by Nobel prize winner Max Planck’s equation and modified by the material property called emissivity (this is wavelength dependent)
  • The absorption of radiation by a surface – described by the material property called absorptivity (this is wavelength dependent and equal at the same wavelength and direction to emissivity)
  • The Beer-Lambert law of absorption of radiation by a gas
  • The spectral absorption characteristics of gases – currently contained in the HITRAN database – and based on work carried out over many decades and written up in journals like Journal of Quantitative Spectroscopy and Radiative Transfer
  • The theory of radiative transfer – the Schwarzschild equation – which was well documented by Nobel prize winner Subrahmanyan Chandrasekhar in his 1950 book Radiative Transfer (and by many physicists since)
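The third building block, the Beer-Lambert law, fits in a few lines (the coefficient, density and path length below are made-up illustrative numbers, not real spectroscopic values):

```python
import math

def transmitted_fraction(absorption_coeff, number_density, path_length):
    """Beer-Lambert law: I/I0 = exp(-k * n * L) at a single wavelength."""
    return math.exp(-absorption_coeff * number_density * path_length)

# Illustrative values chosen so the optical depth k*n*L = 1:
# an optical depth of 1 means ~63% of the beam is absorbed.
print(transmitted_fraction(1e-25, 1e25, 1.0))  # ~0.37 transmitted
```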

The steady stream of stupidity will undoubtedly continue, but if you are interested in learning about science then you can rule out blogs that promote papers which earnestly explain “where Arrhenius went wrong”.

Hit them with a 4 by 2.

Or, ask the writer where Subrahmanyan Chandrasekhar went wrong in his 1950 work Radiative Transfer. Ask the writer where Richard M. Goody went wrong. He wrote the seminal Atmospheric Radiation: Theoretical Basis in 1964.

They won’t even know these books exist and will have never read them. These books contain equations that are thoroughly proven over the last 100 years. There is no debate about them in the world of physics. In the world of fantasy blogs, maybe.

There is also a steady stream of people who believe an idea yet more amazing. Somehow basic atmospheric physics is proven wrong because of the last 15 years of temperature history.

The idea seems to be:

More CO2 is believed to have some radiative effect in the climate because of the last 100 years of temperature history; climate scientists saw some link and tried to explain it using CO2, but now that there has been no significant temperature increase for the last x years, this obviously demonstrates the original idea was false..

If you think this, please go and find a piece of 4×2 and ask a friend to hit you across the forehead with it. Repeat. I can’t account for this level of stupidity but I have seen that it exists.

An alternative idea, that I will put forward, one that has evidence, is that scientists discovered that they can reliably predict:

  • emission of radiation from a surface
  • emission of radiation from a gas
  • absorption of radiation by a surface
  • absorption of radiation by a gas
  • how to add up, subtract, divide and multiply, raise numbers to the power of, and other ninja mathematics

The question I have for the people with these comical ideas:

Do you think that decades of spectroscopy professionals have just failed to measure absorption? Their experiments were some kind of farce? No one noticed they made up all the results?

Do you think Max Planck was wrong?

Is it possible that climate is slightly complicated and temperature history relies upon more than one variable?

Did someone teach you that the absorption and emission of radiation was only “developed” by someone analyzing temperature vs CO2 since 1970 and not a single scientist thought to do any other measurements? Why did you believe them?

Bring out the 4×2.

Note – this article is a placeholder so I don’t have to bother typing out a subset of these points for the next entertaining commenter..

Update July 10th with the story of Fred the Charlatan

Let’s take the analogy of a small boat crossing the Atlantic.

Analogies don’t prove anything, they are for illustration. For proof, please review Theory and Experiment – Atmospheric Radiation.

We’ve done a few crossings and it’s taken 45 days, 42 days and 46 days (I have no idea what the right time is, I’m not a nautical person).

We measure the engine output – the torque of the propellers. We want to get across quicker. So Fred the engine guy makes a few adjustments and we remeasure the torque at 5% higher. We also do Fred’s standardized test, which is to zip across a local sheltered bay with no currents, no waves and no wind – the time taken for Fred’s standardized test is 4% faster. Nice.

So we all set out on our journey across the Atlantic. Winds, rain, waves, ocean currents. We have our books to read, Belgian beer and red wine and the time flies. Oh no, when we get to our final destination, it’s actually taken 47 days.

Clearly Fred is some kind of charlatan! No need to check his measurements or review the time across the bay. We didn’t make it across the Atlantic in less time and clearly the ONLY variable involved in that expedition was the output of the propeller.

Well, there’s no point trying to use more powerful engines to get across the Atlantic (or any ocean) faster. Torque has no relationship to speed. Case closed.

Analogy over.

 
