Archive for the ‘Commentary’ Category

Renewable Energy I

This blog is about climate science.

I wanted to take a look at Renewable Energy because it’s interesting and related to climate science in an obvious way. Information from media sources confirms my belief that 99% of what is produced by the media is rehashed press releases from various organizations with very little fact checking. (Just a note for citizens alarmed by this statement – they are still the “go to source” for the weather, footage of disasters and partly-made-up stories about celebrities).

Regular readers of this blog know that the articles and discussion so far have only been about the science – what can be proven, what evidence exists, and so on. Questions about motives, about “things people might have done”, and so on, are not of interest in the climate discussion (not for this blog). There are much better blogs for that – with much larger readerships.

Here’s an extract from About this Blog:

Opinions
Opinions are often interesting and sometimes entertaining. But what do we learn from opinions? It’s more useful to understand the science behind the subject. What is this particular theory built on? How long has the theory been “established”? What lines of evidence support this theory? What evidence would falsify this theory? What do opposing theories say?
Anything else?
This blog will try and stay away from guessing motives and insulting people because of how they vote or their religious beliefs. However, this doesn’t mean we won’t use satire now and again as it can make the day more interesting.

The same principles will apply for this discussion about renewables. Our focus will be on technical and commercial aspects of renewable energy, with a focus on evidence rather than figuring it out from “motive attribution”. And wishful thinking –  wonderful though it is for reducing personal stress – will be challenged.

As always, the moderator reserves the right to remove comments that don’t meet these painful requirements.

Here’s a claim about renewables from a recent media article:

By Bloomberg New Energy Finance’s most recent calculations a new wind farm in Australia would cost $74 a megawatt hour..

..”Wind is already the cheapest, and solar PV [photovoltaic panels] will be cheaper than gas in around two years, in 2017. We project that wind will continue to decline in cost, though at a more modest rate than solar. Solar will become the dominant source in the longer term.”

I couldn’t find any evidence in the article that verified the claim. Only that it came from Bloomberg New Energy Finance and was the opposite of a radio shock jock. Generally I favor my dogs’ opinions over opinionated media people (unless it is about the necessity of an infinite supply of Schmackos starting now, right now). But I have a skeptical mindset and not knowing the wonderful people at Bloomberg I have no idea whether their claim is rock-solid accurate data, or “wishful thinking to promote their products so they can make lots of money and retire early”.

Calculating the cost of anything like this is difficult. What is the basis of the cost calculation? I don’t know if the claim in BNEF’s calculation is “accurate” – but without context it is not such a useful number. The fact that BNEF might have some vested interest in a favorable comparison over coal and gas is just something I assume.

But, like with climate science, instead of discussing motives and political stances, we will just try and figure out how the numbers stack up. We won’t be pitting coal companies (=devils or angels depending on your political beliefs) against wind turbine producers (=devils or angels depending on your political beliefs) or against green activists (=devils or angels depending on your political beliefs).

Instead we will look for data – a crazy idea and I completely understand how very unpopular it is. Luckily, I’m sure I can help people struggling with the idea to find better websites on which to comment.

Calculating the Cost

I’ve read the details of a few business plans and I’m sure that most other business plans also have the same issue – change a few parameters (=”assumptions”, often “reasonable assumptions”) and the outlook goes from amazing riches to destitution and bankruptcy.

The cost per MWh of wind energy will depend on a few factors:

  • cost of buying a wind turbine
  • land acquisition/land rental costs
  • installation cost
  • grid connection costs
  • the “backup requirement” aka “capacity credit”
  • cost of capital
  • lifetime of equipment
  • maintenance costs
  • % utilization, i.e. capacity factor (actual energy output ÷ the output at nameplate capacity running continuously)

And of course, in any discussion about “the future”, favorable assumptions can be made about “the next generation”. Is the calculation of $74/MWh based on what was shipped 5 years ago and its actual performance, or on what is suggested for a turbine purchased next year?

If you want wind to look better than gas or coal – or the converse – there are enough variables to get the result you want. I’ll be amazed if you can’t change the relative costs by a factor of 5 by playing around with what appear to be reasonable assumptions.
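To make that concrete, here is a minimal levelized-cost sketch in code. Every number in it is an invented, illustrative assumption – not BNEF’s figures or anyone else’s – the point is only how far the answer moves when “reasonable assumptions” move:

```python
# Minimal levelized cost of energy (LCOE) sketch for a plant with no fuel cost.
# Every input below is an illustrative assumption, not data for a real project.

def lcoe(capex_per_kw, fixed_om_per_kw_yr, discount_rate, lifetime_yr, capacity_factor):
    """Approximate LCOE in $/MWh."""
    # Capital recovery factor - annualizes the up-front capital cost
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr) / \
          ((1 + discount_rate) ** lifetime_yr - 1)
    annual_cost_per_kw = capex_per_kw * crf + fixed_om_per_kw_yr    # $/kW/yr
    annual_mwh_per_kw = 8760 * capacity_factor / 1000               # MWh/kW/yr
    return annual_cost_per_kw / annual_mwh_per_kw

# "Favorable" vs "unfavorable" assumptions for the same notional wind farm:
print(lcoe(1400, 35, 0.06, 25, 0.40))   # ~ $41/MWh
print(lcoe(2200, 60, 0.10, 20, 0.28))   # ~ $130/MWh
```

Same notional wind farm, same formula, and roughly a factor of three between the two answers – before we have even argued about grid connection or backup.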

Perhaps the data is easy to obtain. I’m sure many readers have some or all of this data to hand.

Moore’s Law and Other Industries

Most people are familiar with the now legendary statement from the 1960s about semiconductor performance doubling every 18 months. This revolution is amazing. But it’s unusual.

There are a lot of economies of scale from mass production in a factory. But mostly limiting cases are reached pretty quickly, after which cost reductions of a few percent a year are great results – rather than producing the same product for 1% of what it cost just 10 years before. Semiconductors are the exception.

When a product is made from steel alloys, carbon fiber composites or similar materials we can’t expect Moore’s law to kick in. On the other hand, products that rely on a combination of software, electronic components and “traditional materials” and have been produced on small scales up until now can expect major cost reductions from amortizing costs (software, custom chips, tooling, etc) and general economies of scale (purchasing power, standardizing processes, etc).

In some industries, rapid growth actually causes cost increases. If you want an experienced team to provide project management, installation and commissioning services you might find that the boom in renewables is driving those costs up, not down.

A friend of mine working for a natural gas producer in Queensland, Australia recounted the story of the cost of building a dam a few years ago. Long story short, the internal estimates ranged from $2M to $7M, but when the tenders came in from general contractors the prices were $10M to $25M. The reason was a combination of:

  • escalating contractor costs (due to the boom)
  • compliance with new government environmental regulations
  • compliance with the customer’s many policies / OH&S requirements
  • the contractual risk due to all of the above, along with the significant proliferation of contract terms (i.e., will we get sued, have we taken on liabilities we don’t understand, etc)

The point being that industry insiders – i.e., the customer – with a strong vested interest in understanding current costs were out by a factor of more than three in a traditional enterprise. This kind of inaccuracy is unusual but it can happen when the industry landscape is changing quickly.

Even if you have signed a fixed-price contract with an EPC (engineering, procurement and construction) contractor, you can only be sure this is the minimum you will be paying.

The only point I’m making is that a lot of costs are unknown even by experienced people in the field. Companies like BNEF might make some assumptions but it’s a low stress exercise when someone else will be paying the actual bills.

Intermittency & Grid Operators

We will discuss this further in future articles. This is a key issue between renewables and fossil fuel / nuclear power stations. The traditional power stations can create energy when it is needed. Wind and solar – mainstays of the renewable revolution – create energy when the sun shines and the wind blows.

As a starting point for any discussion let’s assume that storing energy is massively uneconomic. While new developments might be available “around the corner”, storing energy is very expensive. The only mechanism deployed at any real scale is pumped hydro. Of course, we can discuss this.

Grid operators have a challenge – balance demand with supply (because storage capacity is virtually zero). Demand is variable and although there is some predictability, there are unexpected changes even in the short term.

The demand curve depends on the country. For example, the UK has peak demand in the winter evenings. Wealthy hotter countries have peak demand in the summer in the middle of the day (air-conditioning).

There are two important principles:

  • Grid operators already have to deal with intermittency because conventional power stations go off-line with planned outages and with unplanned, last minute, outages
  • Renewables have a “capacity credit” that is usually less than their expected output

The first is a simple one. An example is the Sizewell B nuclear power station in the UK, which supplies about 1.2 GW to a grid whose total supply is of the order of 60–80 GW. From time to time it shuts down and the grid operator gets very little notice. So grid operators already have to deal with this. They use statistical calculations to ensure excess supply during normal operation, based on an acceptable “loss of load probability”. Total electricity demand is variable and supply is continually adjusted to match that demand. Of course, the scale of intermittency from large penetration of renewables may present challenges that are difficult to deal with by comparison with current intermittency.
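For anyone curious what that kind of statistical calculation looks like in its crudest form, here is a toy Monte Carlo sketch. The unit sizes, outage rates and demand level are all invented, and real grid planners use far more careful methods (outage probability convolutions, load duration curves and so on):

```python
# Toy loss-of-load-probability (LOLP) estimate by Monte Carlo.
# Each generating unit is either available or on (planned/unplanned) outage,
# independently, with the given outage rate. All numbers are illustrative.
import random

units = [(1.2, 0.05)] * 10 + [(0.5, 0.08)] * 40   # (capacity in GW, outage rate)
demand_gw = 27.0                                   # assumed demand to be met
trials = 100_000

shortfalls = 0
for _ in range(trials):
    available = sum(cap for cap, outage in units if random.random() > outage)
    if available < demand_gw:
        shortfalls += 1

print(f"Estimated LOLP at {demand_gw} GW demand: {shortfalls / trials:.3%}")
```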

The second is the difficult one. Here’s an example from a textbook edited by Godfrey Boyle, which is actually a collection of articles on (mainly) UK renewables:

 

[Figure: extract from Boyle (2007), p. 19 – the capacity credit calculation]

The essence of the calculation is a probabilistic one. At small penetration levels, the energy input from wind power displaces the need for energy generation from traditional sources. But as the percentage of wind power increases, the “potential down time” causes more problems – requiring more backup generation on standby. In the calculations above, wind going from 0.5 GW to 25 GW only saves 4 GW in conventional “capacity”. This is the meaning of capacity credit – adding 25 GW of wind power (under this simulation) provides a capacity credit of only 4 GW. So you can’t remove 25 GW of conventional from the grid, you can only remove 4 GW of conventional power.

Now the calculation of capacity credit depends on the specifics of the history of wind speeds in the region. Increasing the geographical spread of wind power generation produces better results, dependent on the lower correlation of wind speeds across larger regions. Different countries get different results.

So there’s an additional cost with wind power that someone has to pay for – which increases along with the penetration of wind power. In the immediate future this might not be a problem because perhaps the capacity already exists and is just being put on standby. However, at some stage these older plants will be at end of life and conventional plants will need to be built to provide backup.

Many calculations exist of the estimated $/MWh from providing such a backup. We will dig into those in future articles. My initial impression is that there are a lot of unknowns in the real cost of backup supply because for much potential backup supply the lifetime / maintenance impact of frequent start-stops is unclear. A lot of this is thermal shock issues – each thermal cycle costs $X.. (based on the design of the plant to handle so many thousand starts before a major overhaul is needed).

The Other Side of the Equation – Conventional Power

It will also be interesting to get some data around conventional power. Right now, the cost of displacing conventional power is new investment in renewables, but keeping conventional power is not free. Every existing station has a life and will one day need to be replaced (or demand will need to be reduced). It might be a deferred cost but it will still be a cost.

$ and GHG emissions

There is a cost to adding 1GW of wind power. There is a cost to adding 1GW of solar power. There is also a GHG cost – that is, building a solar panel or a wind turbine is not energy free, and GHGs are emitted in the process. It would be interesting to get some data on this also.

Conclusion – Introduction

I wrote this article because finding real data is demanding and many websites focused on the topic are advocacy-based with minimal data. Their starting point is often the insane folly and/or mendacious intent of “the other side”. The approach we will take here is to gather and analyze data.. As if the future of the world was not at stake. As if it was not a headlong rush into lunacy to try and generate most energy from renewables.. As if it was not an unbelievable sin to continue to create electricity from fossil fuels..

This approach might allow us to form conclusions from the data rather than the reverse.

Let’s see how this approach goes.

I am hoping many current (and future) readers can contribute to the discussion – with data, uncertainties, clarifications.

I’m not expecting to be able to produce “a number” for wind power or solar power. I’m hopeful that with some research, analysis and critical questions we might be able to summarize some believable range of values for the different elements of building a renewable energy supply, and also quantify the uncertainties.

Most of what I will write in future articles I don’t yet know. Perhaps someone already has a website where this project is complete, in which case Part Two will just point readers there..

References

Renewable Electricity and the Grid: The Challenge of Variability, Godfrey Boyle (ed.), Earthscan (2007)

Read Full Post »

I’ve been a student of history for a long time and have read quite a bit about Nazi Germany and WWII. In fact right now, having found audible.com I’m listening to an audio book The Coming of the Third Reich, by Richard Evans, while I walk, drive and exercise.

It’s heartbreaking to read about the war and to read about the Holocaust. Words fail me to describe the awfulness of that regime and what they did.

But it’s pretty easy for someone who is curious about evidence, or who has had someone question whether or not the Holocaust actually took place, to find and understand the proof.

The photos. The bodies. The survivors’ accounts. The thousands of eyewitness accounts. The army reports. The stated aims of Hitler and many of the leading Nazis in their own words.

We can all understand how to weigh up witness accounts and photos. It’s intrinsic to our nature.

People who don’t believe the Nazis murdered millions of Jews are denying simple and overwhelming evidence.

Let’s compare that with the evidence behind the science of anthropogenic global warming (AGW) and the inevitability of a 2-6ºC rise in temperature if we continue to add CO2 and other GHGs to the atmosphere.

Step 1 – The ‘greenhouse’ effect

To accept AGW of course you need to accept the ‘greenhouse’ effect. It’s fundamental science and not in question but what if you don’t take my word for it? What if you want to check for yourself?

And by the way, the complexity of the subject for many people becomes clear even at this stage, with countless hordes not even clear that the ‘greenhouse’ effect is just a building block for AGW. It is not itself AGW.

AGW relies on the ‘greenhouse’ effect but also on other considerations.

I wrote The “Greenhouse” Effect Explained in Simple Terms to make it simple, yet not too simple. But that article relies on (and references) many basics – radiation, absorption and emission of radiation through gases, heat transfer and convection. All of those are necessary to understand the greenhouse effect.

Many people have conceptual misunderstandings of “basic” physics. In reading comments on this blog and on other blogs I often see fundamental misunderstanding of how heat transfer works. No space here for that.

But the difficulty of communicating a physics idea is very real. Once someone has a conceptual block because they think some process works a subtly different way, the only way to resolve the question is with equations. It is further complicated because these misunderstandings are often unstated by the commenter – they don’t realize they see the world differently from physics basics.

So when we need to demonstrate that the greenhouse effect is real, and that it increases with more GHGs we need some equations. And by ‘increases’ I mean more GHGs mean a higher surface temperature, all other things being equal. (Which, of course, they never are).

The equations are crystal clear and no one over the age of 10 could possibly be confused. I show the equations for radiative transfer (and their derivation) in Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations:

Iλ(0) = Iλ(τm)·exp(−τm) + ∫ Bλ(T)·exp(−τ) dτ     [16]

The terms are explained in that article. In brief, the equation shows how the intensity of radiation at the top of the atmosphere at one wavelength is affected by the number of absorbing molecules in the atmosphere. And, obviously, you have to integrate it over all wavelengths. Why do I even bring that up, it’s so simple?
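For anyone who genuinely does want to see what “integrating it” involves, here is a numerical sketch for a single wavelength. The optical depth and temperature profile are invented for illustration – a real calculation has to cover the whole spectrum and use measured absorption data:

```python
# Numerical sketch of equation [16] at one wavelength: radiation from the surface
# attenuated by the whole atmosphere, plus emission from each layer attenuated by
# the optical depth above it. Profile and optical depth are invented for illustration.
import numpy as np

H, C, KB = 6.626e-34, 3.0e8, 1.381e-23

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda(T), W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * temp_k))

wl = 15e-6                                   # 15 microns, in the CO2 band
tau_m = 4.0                                  # assumed total optical depth at this wavelength
tau = np.linspace(0.0, tau_m, 2001)          # tau = 0 at the top of the atmosphere
temp = 288.0 - 70.0 * (1.0 - tau / tau_m)    # crude profile: 218 K at the top, 288 K at the surface

surface_term = planck(wl, 288.0) * np.exp(-tau_m)
atmosphere_term = np.trapz(planck(wl, temp) * np.exp(-tau), tau)
print(surface_term + atmosphere_term)        # intensity at the top of the atmosphere
```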

Voila.

And equally obviously, anyone questioning the validity of the equation, or the results from the equation, is doing so from evil motives.

I do need to add that we have to prescribe the temperature profile in the atmosphere (and the GHG concentration) to be able to solve this equation. The temperature profile is known as the lapse rate – temperature reduces as you go up in altitude. In the tropical regions where convection is stronger we can come up with a decent equation for the lapse rate.

All you have to know is the first law of thermodynamics, the ideal gas law and the equation for the change in pressure vs height due to the mass of the atmosphere. Everyone can do this in their heads of course. But here it is:

[Figure: derivation of the lapse rate from the first law of thermodynamics, the ideal gas law and hydrostatic balance]
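For orientation, a compressed sketch of the simplest (dry adiabatic) case – the moist case relevant to tropical convection adds latent heat but has the same structure:

First law for an adiabatically rising parcel (the ideal gas law lets us write it in this enthalpy form): cp·dT = (1/ρ)·dp

Hydrostatic balance: dp = −ρ·g·dz

Combining the two: dT/dz = −g/cp ≈ −9.8 ºC per km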

So with these two elementary principles we can prove that more GHGs means a higher surface temperature before any feedbacks. That’s the ‘greenhouse’ effect.

Step 2 – AGW = ‘Greenhouse effect’ plus feedbacks

This is so simple. Feedbacks are things like – a hotter world probably has more water vapor in the atmosphere, and water vapor is the most important GHG, so this amplifies the ‘greenhouse’ effect of increasing CO2. Calculating the changes is only a little more difficult than the super simple equations I showed earlier.

You just need a GCM – a climate model run on a supercomputer. That’s all.

There are many misconceptions about climate models but only people who are determined to believe a lie can possibly believe them.

As an example, many people think that the amplifying effect, or positive feedback, of water vapor is programmed into the GCMs. All they have to do is have a quick read through the 200-page technical summary of a model like, say, CAM (the Community Atmosphere Model).

Here is an extract from Description of the NCAR Community Atmosphere Model (CAM 3.0), W.D. Collins (2004):

[Extract from Collins et al. (2004): Description of the NCAR Community Atmosphere Model (CAM 3.0)]

As soon as anyone reads this – and if they can’t be bothered to find the reference via Google Scholar and read it, well, what can you say about such people – as soon as they read it, of course, it’s crystal clear that positive feedback isn’t “programmed in” to climate models.

So GCMs all come to the conclusion that more GHGs results in a hotter world (2-6ºC). They solve basic physics equations in a “grid” fashion, stepping forward in time, and so the result is clear and indisputable.
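For readers who have never seen “solving equations on a grid, stepping forward in time”, here is the flavor of it with a deliberately trivial stand-in – one-dimensional heat diffusion. A GCM solves coupled fluid dynamics, radiation and thermodynamics on a three-dimensional grid, but the numerical skeleton is recognisably similar:

```python
# Toy "solve on a grid, step forward in time" example: explicit time-stepping
# of 1-D heat diffusion. Purely illustrative - not a climate model.
import numpy as np

nx, dx, dt, kappa = 100, 1.0, 0.1, 1.0     # grid points, spacing, time step, diffusivity
T = np.full(nx, 280.0)
T[40:60] = 300.0                           # a warm patch in the middle

for step in range(5000):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2   # spatial second derivative on the grid
    T = T + dt * kappa * lap                             # step the whole grid forward in time

print(round(T.min(), 2), round(T.max(), 2))
```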

Step 3 – Attribution Studies

I recently spent some time reading AR4 and AR5 (the IPCC reports) on Attribution (Natural Variability and Chaos – Seven – Attribution & Fingerprints Or Shadows? and Natural Variability and Chaos – Three – Attribution & Fingerprints).

This is the work of attributing the last century’s rise in temperature to the increases in anthropogenic GHGs. I followed the trail of papers back and found one of the source papers by Hasselmann from 1993. In it we can clearly see the basis for attribution studies:

[Extract from Hasselmann (1993) showing the equations that form the basis of attribution/fingerprint studies]

Now it’s very difficult to believe that anyone questioning attribution studies isn’t of evil intent. After all, there is the basic principle in black and white. Who could be confused?

As a side note, to excuse my own irredeemable article on the topic, the actual basis of attribution isn’t just in these equations, it is also in the assumption that climate models accurately calculate the statistics of natural variability. The IPCC chapter on attribution doesn’t really make this clear, yet in another chapter (11) different authors suggest completely restating the statistical certainty claimed in the attribution chapter because “..it is explicitly recognized that there are sources of uncertainty not simulated by the models”. Their ad hoc restatement, while more accurate than the executive summary, still needs to be justified.

However, none of this can offer me redemption.

Step 4 – Unprecedented Temperature Rises

(This could probably be switched around with step 3. The order here is not important).

Once people have seen the unprecedented rise in temperature this century, how could they not align themselves with the forces of good?

Anthropogenic warming ‘writ large’ (AR5, chapter 2):

[Figure: the instrumental surface temperature record, from AR5 chapter 2]

There’s the problem. The last 400,000 years were quite static by comparison:

[Figure: temperature proxy records – Greenland ice core (red) and mid-latitude SST estimate (green)]

From ‘800,000 Years of Abrupt Climate Variability’, Barker et al (2011)

The red is a Greenland ice core proxy for temperature, the green is a mid-latitude SST estimate – and it’s important to understand that calculating global annual temperatures is quite difficult and not done here.

So no one who looks at climate history can possibly be excused for not agreeing with consensus climate science, whatever that is when we come to “consensus paleoclimate”.. It was helpful to read Chapter 5 of AR5:

There is high confidence that orbital forcing is the primary external driver of glacial cycles (Kawamura et al., 2007; Cheng et al., 2009; Lisiecki, 2010; Huybers, 2011).

I’ve only read about 350 papers on paleoclimate and I’m confused about the origin of the high confidence as I explained in Ghosts of Climate Past – Eighteen – “Probably Nonlinearity” of Unknown Origin.

Anyway, the key takeaway message is that the recent temperature history is another demonstration that anyone not in line with consensus climate science is clearly acting from evil motives.

Conclusion

I thought about putting a photo of the Holocaust from a concentration camp next to a few pages of mathematical equations – to make a point. But that would be truly awful.

That would trivialize the memory of the terrible suffering of millions of people under one of the most evil regimes the world has seen.

And that, in fact, is my point.

I can’t find words to describe how I feel about the apologists for the Nazi regime, and those who deny that the holocaust took place. The evidence for the genocide is overwhelming and everyone can understand it.

On the other hand, those who ascribe the word ‘denier’ to people not in agreement with consensus climate science are trivializing the suffering and deaths of millions of people. Everyone knows what this word means. It means people who are apologists for those evil jackbooted thugs who carried the swastika and cheered as they sent six million people to their execution.

By comparison, understanding climate means understanding maths, physics and statistics. This is hard, very hard. It’s time consuming, requires some training (although people can be self-taught), actually requires academic access to be able to follow the thread of an argument through papers over a few decades – and lots and lots of dedication.

The worst you could say is people who don’t accept ‘consensus climate science’ are likely finding basic – or advanced – thermodynamics, fluid mechanics, heat transfer and statistics a little difficult and might have misunderstood, or missed, a step somewhere.

The best you could say is with such a complex subject straddling so many different disciplines, they might be entitled to have a point.

If you have no soul and no empathy for the suffering of millions under the Third Reich, keep calling people who don’t accept consensus climate science ‘deniers’.

Otherwise, just stop.

Important Note: The moderation filter on comments is set up to catch the ‘D..r’ word specifically because such name calling is not accepted on this blog. This article is an exception to the norm, but I can’t change the filter for one article.

Read Full Post »

In Part One we had a look at some introductory ideas. In this article we will look at one of the ground-breaking papers in chaos theory – Deterministic nonperiodic flow, Edward Lorenz (1963). It has been cited more than 13,500 times.

There might be some introductory books on non-linear dynamics and chaos that don’t include a discussion of this paper – or at least a mention – but they will be in a small minority.

Lorenz was thinking about convection in the atmosphere, or any fluid heated from below, and reduced the problem to just three simple equations. However, the equations were still non-linear and because of this they exhibit chaotic behavior.

Cencini et al describe Lorenz’s problem:

Consider a fluid, initially at rest, constrained by two infinite horizontal plates maintained at constant temperature and at a fixed distance from each other. Gravity acts on the system perpendicular to the plates. If the upper plate is maintained hotter than the lower one, the fluid remains at rest and in a state of conduction, i.e., a linear temperature gradient establishes between the two plates.

If the temperatures are inverted, gravity-induced buoyancy forces tend to raise toward the top the hotter, and thus lighter, fluid that is at the bottom. This tendency is opposed by the viscous and dissipative forces of the fluid, so that the conduction state may persist.

However, as the temperature differential exceeds a certain amount, the conduction state is replaced by a steady convection state: the fluid motion consists of steady counter-rotating vortices (rolls) which transport upwards the hot/light fluid in contact with the bottom plate and downwards the cold heavy fluid in contact with the upper one.

The steady convection state remains stable up to another critical temperature difference above which it becomes unsteady, very irregular and hardly predictable.

Willem Malkus and Lou Howard of MIT came up with an equivalent system – the simplest version is shown in this video:

Figure 1

Steven Strogatz (1994) – an excellent introduction to dynamical and chaotic systems – explains and derives the equivalence between the classic Lorenz equations and this tilted waterwheel.

L63 (as I’ll call these equations) has three variables apart from time: intensity of convection (x), temperature difference between ascending and descending currents (y), deviation of temperature from a linear profile (z).

Here are some calculated results for L63 for the “classic” parameter values and three very slightly different initial conditions (blue, red, green in each plot) over 5,000 seconds, showing the first and last 50 seconds – click to expand:

[Figure: x and y vs time for the 5,000-second runs]

Figure 2 – click to expand – initial conditions x,y,z = 0, 1, 0;  0, 1.001, 0;  0, 1.002, 0

We can see that quite early on the trajectories diverge, and 5,000 seconds later the system still exhibits similar “non-periodic” characteristics.

For interest let’s zoom in on just over 10 seconds of ‘x’ near the start and end:

[Figure: zoom on just over 10 seconds of x near the start and end of the run]

Figure 3

Going back to an important point from the first post, some chaotic systems will have predictable statistics even if the actual state at any future time is impossible to determine (due to uncertainty over the initial conditions).

So we’ll take a look at the statistics via a running average – click to expand:

[Figure: running averages of x and y vs time]

Figure 4 – click to expand

Two things stand out – first of all the running average over more than 100 “oscillations” still shows a large amount of variability. So at any one time, if we were to calculate the average from our current and historical experience we could easily end up calculating a value that was far from the “long term average”. Second – the “short term” average, if we can call it that, shows large variation at any given time between our slightly divergent initial conditions.

So we might believe – and be correct – that the long term statistics of slightly different initial conditions are identical, yet be fooled in practice.

Of course, surely it sorts itself out over a longer time scale?

I ran the same simulation (with just the first two starting conditions) for 25,000 seconds and then used a filter window of 1,000 seconds – click to expand:

[Figure: x from the 25,000-second run, smoothed with a 1,000-second filter window]

 Figure 5 – click to expand

The total variability is less, but we have a similar problem – it’s just lower in magnitude. Again we see that the statistics of two slightly different initial conditions – if we were to view them by the running average at any one time –  are likely to be different even over this much longer time frame.

From this 25,000 second simulation:

  • take 10,000 random samples each of 25 second length and plot a histogram of the means of each sample (the sample means)
  • same again for 100 seconds
  • same again for 500 seconds
  • same again for 3,000 seconds

Repeat for the data from the other initial condition.
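In code, the resampling just described might look like the sketch below – the array x is a placeholder series standing in for x(t) from the 25,000-second run sampled every 0.01 seconds (note 2):

```python
# Sketch of the sample-means procedure: draw random windows of a given length
# from the series, compute the mean of each window, and look at the distribution.
import numpy as np

def sample_means(x, window_seconds, n_samples=10_000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    w = int(window_seconds / dt)                     # window length in points
    starts = rng.integers(0, len(x) - w, size=n_samples)
    return np.array([x[s:s + w].mean() for s in starts])

x = np.random.default_rng(1).standard_normal(2_500_000)   # placeholder for the Lorenz x(t) series
for window in (25, 100, 500, 3000):
    means = sample_means(x, window)
    print(window, round(means.std(), 4))   # the spread of sample means shrinks with window length
# ...then repeat with the series from the other initial condition and compare the histograms.
```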

Here is the result:

[Figure: histograms of the sample means for the two initial conditions]

Figure 6

To make it easier to see, here is the difference between the two sets of histograms, normalized by the maximum value in each set:

[Figure: difference between the two sets of histograms, normalized by the maximum value in each set]

Figure 7

This is a different way of viewing what we saw in figures 4 & 5.

The spread of sample means shrinks as we increase the time period but the difference between the two data sets doesn’t seem to disappear (note 2).

Attractors and Phase Space

The above plots show how variables change with time. There’s another way to view the evolution of system dynamics and that is by “phase space”. It’s a name for a different kind of plot.

So instead of plotting x vs time, y vs time and z vs time – let’s plot x vs y vs z – click to expand:

[Figure: phase space plot of x vs y vs z over the first 50 seconds]

Figure 8 – Click to expand – the colors blue, red & green represent the same initial conditions as in figure 2

Without some dynamic animation we can’t now tell how fast the system evolves. But we learn something else that turns out to be quite amazing. The system always ends up in the same region of phase space. Perhaps that doesn’t seem amazing yet..

Figure 8 was with three initial conditions that are almost identical. Let’s look at three initial conditions that are very different: x,y,z = 0, 1, 0;   5, 5, 5;   20, 8, 1:

[Figure: phase space plot for three very different initial conditions]

Figure 9 – Click to expand

Here’s an example (similar to figure 7) from Strogatz – a set of 10,000 closely separated initial conditions and how they separate at 3, 6, 9 and 15 seconds. The two key points:

  1. the fast separation of initial conditions
  2. the long term position of any of the initial conditions is still on the “attractor”
From Strogatz 1994

Figure 10

A dynamic visualization on Youtube with 500,000 initial conditions:

Figure 11

There’s lot of theory around all of this as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion a wide range of initial conditions ends up on something very close to a 2-d surface.

This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.

Conclusion

Lorenz 1963 reduced convective flow (e.g., heating an atmosphere from the bottom) to a simple set of equations. Obviously these equations are a massively over-simplified version of anything like the real atmosphere. Yet, even with this very simple set of equations we find chaotic behavior.

Chaotic behavior in this example means:

  • very small differences get amplified extremely quickly so that no matter how much you increase your knowledge of your starting conditions it doesn’t help much (note 3)
  • starting conditions within certain boundaries will always end up within “attractor” boundaries, even though there might be non-periodic oscillations around this attractor
  • the long term (infinite) statistics can be deterministic but over any “smaller” time period the statistics can be highly variable

References

Deterministic nonperiodic flow, EN Lorenz, Journal of the Atmospheric Sciences (1963)

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Nonlinear Dynamics and Chaos, Steven H. Strogatz, Perseus Books (1994)

Notes

Note 1: The Lorenz equations:

dx/dt = σ (y-x)

dy/dt = rx – y – xz

dz/dt = xy – bz

where

x = intensity of convection

y = temperature difference between ascending and descending currents

z = deviation of temperature from a linear profile

σ = Prandtl number, ratio of momentum diffusivity to thermal diffusivity

r = Rayleigh number

b = a geometric factor related to the aspect ratio of the convective rolls

And the “classic parameters” are σ=10, b = 8/3, r = 28
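The results in this article were computed with Matlab’s ode45 (note 3); here is an equivalent sketch in Python for anyone who wants to reproduce the general behavior (not the exact trajectories – it is a chaotic system, after all):

```python
# Integrate the L63 equations for the three nearly identical initial conditions
# of figure 2, using the "classic" parameters. Equivalent in spirit to ode45.
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t_eval = np.arange(0.0, 5000.0, 0.01)            # 0.01 s time step, as in note 2
runs = [solve_ivp(lorenz, (0.0, 5000.0), [0.0, 1.0 + dy, 0.0],
                  t_eval=t_eval, rtol=1e-8, atol=1e-10)
        for dy in (0.0, 0.001, 0.002)]           # the three initial conditions of figure 2

x_first_run = runs[0].y[0]                       # x(t) for the first initial condition
print(x_first_run[:5])
```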

Note 2: Lorenz 1963 has over 13,000 citations so I haven’t been able to find out if this system of equations is transitive or intransitive. Running Matlab on a home Mac reaches some limitations and I maxed out at 25,000 second simulations mapped onto a 0.01 second time step.

However, I’m not trying to prove anything specifically about the Lorenz 1963 equations, more illustrating some important characteristics of chaotic systems

Note 3: Small differences in initial conditions grow exponentially, until we reach the limits of the attractor. So it’s easy to show the “benefit” of more accurate data on initial conditions.

If we increase our precision on initial conditions by 1,000,000 times the increase in prediction time is a massive 2½ times longer.

Read Full Post »

There are many classes of systems but in the climate blogosphere world two ideas about climate seem to be repeated the most.

In camp A:

We can’t forecast the weather two weeks ahead so what chance have we got of forecasting climate 100 years from now.

And in camp B:

Weather is an initial value problem, whereas climate is a boundary value problem. On the timescale of decades, every planetary object has a mean temperature mainly given by the power of its star according to Stefan-Boltzmann’s law combined with the greenhouse effect. If the sources and sinks of CO2 were chaotic and could quickly release and sequester large fractions of gas perhaps the climate could be chaotic. Weather is chaotic, climate is not.

Of course, like any complex debate, simplified statements don’t really help. So this article kicks off with some introductory basics.

Many inhabitants of the climate blogosphere already know the answer to this question, and with much conviction. A reminder for new readers that on this blog opinions are not so interesting, although occasionally entertaining. So instead, try to explain what evidence there is for your opinion. And, as suggested in About this Blog:

And sometimes others put forward points of view or “facts” that are obviously wrong and easily refuted.  Pretend for a moment that they aren’t part of an evil empire of disinformation and think how best to explain the error in an inoffensive way.

Pendulums

The equation for a simple pendulum is “non-linear”, although there is a simplified version of the equation, often used in introductions, which is linear. However, the number of variables involved is only two:

  • angle
  • speed

and this isn’t enough to create a “chaotic” system.

If we have a double pendulum, one pendulum attached at the bottom of another pendulum, we do get a chaotic system. There are some nice visual simulations around, which St. Google might help interested readers find.

If we have a forced damped pendulum like this one:


Figure 1 – the blue arrows indicate that the point O is being driven up and down by an external force

– we also get a chaotic system.

What am I talking about? What is linear & non-linear? What is a “chaotic system”?

Digression on Non-Linearity for Non-Technical People

Common experience teaches us about linearity. If I pick up an apple in the supermarket it weighs about 0.15 kg or 150 grams (also known in some countries as “about 5 ounces”). If I take 10 apples the collection weighs 1.5 kg. That’s pretty simple stuff. Most of our real world experience follows this linearity and so we expect it.

On the other hand, if I were near a very cold black surface held at 170 K (−103ºC) and measured the radiation emitted, it would be 47 W/m². If we then double the temperature of this surface to 340 K (67ºC), what would I measure? 94 W/m²? Seems reasonable – double the absolute temperature and get double the radiation.. But it’s not correct.

The right answer is 758 W/m², which is 16x the amount. Surprising, but most actual physics, engineering and chemistry is like this. Double a quantity and you don’t get double the result.
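(For anyone who wants to check: this is just the Stefan–Boltzmann relation, E = σT⁴, with σ = 5.67×10⁻⁸ W/m²K⁴. At 170 K: 5.67×10⁻⁸ × 170⁴ ≈ 47 W/m². At 340 K: 5.67×10⁻⁸ × 340⁴ ≈ 758 W/m² – doubling the temperature multiplies the radiation by 2⁴ = 16.)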

It gets more confusing when we consider the interaction of other variables.

Let’s take riding a bike [updated thanks to Pekka]. Once you get above a certain speed most of the resistance comes from the wind so we will focus on that. Typically the wind resistance increases as the square of the speed. So if you double your speed you get four times the wind resistance. Work done = force x distance moved, so with no head wind power input has to go up as the cube of speed (note 4). This means you have to put in 8x the effort to get 2x the speed.

On Sunday you go for a ride and the wind speed is zero. You get to 25 km/hr (16 miles/hr) by putting a bit of effort in – let’s say you are producing 150W of power (I have no idea what the right amount is). You want your new speedo to register 50 km/hr – so you have to produce 1,200W.

On Monday you go for a ride and the wind speed is 20 km/hr into your face. Probably should have taken the day off.. Now with 150W you get to only 14 km/hr, it takes almost 500W to get to your basic 25 km/hr, and to get to 50 km/hr it takes almost 2,400W. No chance of getting to that speed!

On Tuesday you go for a ride and the wind speed is the same so you go in the opposite direction and take the train home. Now with only 6W you get to go 25 km/hr, to get to 50km/hr you only need to pump out 430W.

In mathematical terms it’s quite simple: F = k(v-w)², Force = (a constant, k) x (road speed – wind speed) squared. Power, P = Fv = kv(v-w)². But notice that the effect of the “other variable”, the wind speed, has really complicated things.

To double your speed on the first day you had to produce eight times the power. To double your speed the second day you had to produce almost five times the power. To double your speed the third day you had to produce just over 70 times the power. All with the same physics.
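If you want to check these numbers, here is a small sketch – the constant k is simply backed out from the Sunday case, and the 150 W figure was the arbitrary assumption made above:

```python
# Reproducing the bike numbers: P = k * v * (v - w)^2, with v and w in km/hr
# and w positive for a tailwind, negative for a headwind.
def power(v_kmh, wind_kmh, k):
    return k * v_kmh * (v_kmh - wind_kmh) ** 2

k = 150 / (25 * 25**2)                          # calibrated: 150 W gives 25 km/hr in still air
print(power(50, 0, k))                          # Sunday, 50 km/hr: 1200 W
print(power(25, -20, k), power(50, -20, k))     # Monday, 20 km/hr headwind: ~486 W and ~2352 W
print(power(25, 20, k), power(50, 20, k))       # Tuesday, 20 km/hr tailwind: 6 W and 432 W
```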

The real problem with nonlinearity isn’t the problem of keeping track of these kinds of numbers. You get used to the fact that real science – real world relationships – has these kinds of factors and you come to expect them. And you have an equation that makes calculating them easy. And you have computers to do the work.

No, the real problem with non-linearity (the real world) is that many of these equations link together and solving them is very difficult and often only possible using “numerical methods”.

It is also the reason why something like climate feedback is very difficult to measure. Imagine measuring the change in power required to double speed on the Monday. It’s almost 5x, so you might think the relationship is something like the square of speed. On Tuesday it’s about 70 times, so you would come up with a completely different relationship. In this simple case we know that wind speed is a factor, we can measure it, and so we can “factor it out” when we do the calculation. But in a more complicated system, if you don’t know the “confounding variables”, or the relationships, what are you measuring? We will return to this question later.

When you start out doing maths, physics, engineering.. you do “linear equations”. These teach you how to use the tools of the trade. You solve equations. You rearrange relationships using equations and mathematical tricks, and these rearranged equations give you insight into how things work. It’s amazing. But then you move to “nonlinear” equations, aka the real world, which turns out to be mostly insoluble. So nonlinear isn’t something special, it’s normal. Linear is special. You don’t usually get it.

..End of digression

Back to Pendulums

Let’s take a closer look at a forced damped pendulum. Damped, in physics terms, just means there is something opposing the movement. We have friction from the air and so over time the pendulum slows down and stops. That’s pretty simple. And not chaotic. And not interesting.

So we need something to keep it moving. We drive the pivot point at the top up and down and now we have a forced damped pendulum. The equation that results (note 1) has the massive number of three variables – position, speed and now time to keep track of the driving up and down of the pivot point. Three variables seems to be the minimum to create a chaotic system (note 2).

As we increase the ratio of the forcing amplitude to the length of the pendulum (β in note 1) we can move through three distinct types of response:

  • simple response
  • a “chaotic start” followed by a deterministic oscillation
  • a chaotic system

This is typical of chaotic systems – certain parameter values or combinations of parameters can move the system between quite different states.

Here is a plot (note 3) of position vs time for the chaotic system, β=0.7, with two initial conditions, only different from each other by 0.1%:

Forced damped harmonic pendulum, β = 0.7: starting angular speeds 0.1 and 0.1001

Figure 1

It’s a little misleading to view the angle like this because it is in radians and so needs to be mapped between 0-2π (but then we get a discontinuity on a graph that doesn’t match the real world). We can map the graph onto a cylinder plot but it’s a mess of reds and blues.

Another way of looking at the data is via the statistics – so here is a histogram of the position (θ), mapped to 0-2π, and angular speed (dθ/dt) for the two starting conditions over the first 10,000 seconds:

Histograms for 10,000 seconds

Figure 2

We can see they are similar but not identical (note the different scales on the y-axis).

That might be due to the shortness of the run, so here are the results over 100,000 seconds:

Histogram for 100,000 seconds

Figure 3

As we increase the timespan of the simulation the statistics of two slightly different initial conditions become more alike.

So if we want to know the state of a chaotic system at some point in the future, very small changes in the initial conditions will amplify over time, making the result unknowable – or no different from picking the state from a random time in the future. But if we look at the statistics of the results we might find that they are very predictable. This is typical of many (but not all) chaotic systems.

Orbits of the Planets

The orbits of the planets in the solar system are chaotic. In fact, even 3-body systems moving under gravitational attraction have chaotic behavior. So how did we land a man on the moon? This raises the interesting questions of timescales and amount of variation. Planetary movement – for our purposes – is extremely predictable over a few million years. But over 10s of millions of years we might have trouble predicting exactly the shape of the earth’s orbit – eccentricity, time of closest approach to the sun, obliquity.

However, it seems that even over a much longer time period the planets will still continue in their orbits – they won’t crash into the sun or escape the solar system. So here we see another important aspect of some chaotic systems – the “chaotic region” can be quite restricted. So chaos doesn’t mean unbounded.

According to Cencini, Cecconi & Vulpiani (2010):

Therefore, in principle, the Solar system can be chaotic, but not necessarily this implies events such as collisions or escaping planets..

However, there is evidence that the Solar system is “astronomically” stable, in the sense that the 8 largest planets seem to remain bound to the Sun in low eccentricity and low inclination orbits for time of the order of a billion years. In this respect, chaos mostly manifest in the irregular behavior of the eccentricity and inclination of the less massive planets, Mercury and Mars. Such variations are not large enough to provoke catastrophic events before extremely large time. For instance, recent numerical investigations show that for catastrophic events, such as “collisions” between Mercury and Venus or Mercury failure into the Sun, we should wait at least a billion years.

And bad luck, Pluto.

Deterministic, non-Chaotic, Systems with Uncertainty

Just to round out the picture a little, even if a system is not chaotic and is deterministic we might lack sufficient knowledge to be able to make useful predictions. If you take a look at figure 3 in Ensemble Forecasting you can see that with some uncertainty of the initial velocity and a key parameter the resulting velocity of an extremely simple system has quite a large uncertainty associated with it.

This case is qualitatively different of course. By obtaining more accurate values of the starting conditions and the key parameters we can reduce our uncertainty. Small disturbances don’t grow over time to the point where our calculation of a future condition might as well just be selected from a random time in the future.

Transitive, Intransitive and “Almost Intransitive” Systems

Many chaotic systems have deterministic statistics. So we don’t know the future state beyond a certain time. But we do know that a particular position, or other “state” of the system, will be between a given range for x% of the time, taken over a “long enough” timescale. These are transitive systems.

Other chaotic systems can be intransitive. That is, for a very slight change in initial conditions we can have a different set of long term statistics. So the system has no “statistical” predictability. Lorenz 1968 gives a good example.

Lorenz introduces the concept of almost intransitive systems. This is where, strictly speaking, the statistics over infinite time are independent of the initial conditions, but the statistics over “long time periods” are dependent on the initial conditions. And so he also looks at the interesting case (Lorenz 1990) of moving between states of the system (seasons), where we can think of the precise starting conditions each time we move into a new season moving us into a different set of long term statistics. I find it hard to explain this clearly in one paragraph, but Lorenz’s papers are very readable.

Conclusion?

This is just a brief look at some of the basic ideas.

Other Articles in the Series

Part Two – Lorenz 1963

References

Chaos: From Simple Models to Complex Systems, Cencini, Cecconi & Vulpiani, Series on Advances in Statistical Mechanics – Vol. 17 (2010)

Climatic Determinism, Edward Lorenz (1968) – free paper

Can chaos and intransitivity lead to interannual variability?, Edward Lorenz, Tellus (1990) – free paper

Notes

Note 1 – The equation is easiest to “manage” after the original parameters are transformed so that tω → t. That is, the period of the external driving becomes T0 = 2π in the transformed time base.

Then:

[Equation image: the forced damped pendulum equation in the transformed time base]

where θ = angle, γ’ = γ/ω, α = g/Lω², β = h0/L;

these parameters based on γ = viscous drag coefficient, ω = angular speed of driving, g = acceleration due to gravity = 9.8 m/s², L = length of pendulum, h0 = amplitude of driving of the pivot point

Note 2 – This is true for continuous systems. Discrete systems can be chaotic with fewer variables

Note 3 – The results were calculated numerically using Matlab’s ODE (ordinary differential equation) solver, ode45.

Note 4 – Force = k(v−w)², where k is a constant, v = velocity, w = wind speed. Work done = force × distance moved, so Power, P = Force × velocity.

Therefore:

P = kv(v−w)²

If we know k, v & w we can find P. If we have P, k & w and want to find v it is a cubic equation that needs solving.
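As a sketch of that last step, using the Monday numbers from the digression in the article (the ~486 W needed for 25 km/hr into a 20 km/hr headwind):

```python
# Solve P = k*v*(v - w)^2 for v: expand to k*v^3 - 2*k*w*v^2 + k*w^2*v - P = 0
# and keep the physically meaningful (real, positive) root.
import numpy as np

k, w, P = 0.0096, -20.0, 486.0                  # k from 150 W at 25 km/hr in still air
roots = np.roots([k, -2 * k * w, k * w**2, -P])
v = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print(v)                                        # ~25 km/hr
```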

Read Full Post »

In The “Greenhouse” Effect Explained in Simple Terms I list, and briefly explain, the main items that create the “greenhouse” effect. I also explain why more CO2 (and other GHGs) will, all other things remaining equal, increase the surface temperature. I recommend that article as the place to go for the straightforward explanation of the “greenhouse” effect. It also highlights that the radiative balance higher up in the troposphere is the most important component of the “greenhouse” effect.

However, someone recently commented on my first Kramm & Dlugi article and said I was “plainly wrong”. Kramm & Dlugi were in complete agreement with Gerlich and Tscheuschner because they both claim the “purported greenhouse effect simply doesn’t exist in the real world”.

If it’s just about flying a flag or wearing a football jersey then I couldn’t agree more. However, science does rely on tedious detail and “facts” rather than football jerseys. As I pointed out in New Theory Proves AGW Wrong! two contradictory theories don’t add up to two theories making the same case..

In the case of the first Kramm & Dlugi article I highlighted one point only. It wasn’t their main point. It wasn’t their minor point. They weren’t even making a point of it at all.

Many people believe the “greenhouse” effect violates the second law of thermodynamics; these people are herein called “the illuminati”.

Kramm & Dlugi’s equation demonstrates that the illuminati are wrong. I thought this was worth pointing out.

The “illuminati” don’t understand entropy, can’t provide an equation for entropy, or even demonstrate the flaw in the simplest example of why the greenhouse effect is not in violation of the second law of thermodynamics. Therefore, it is necessary to highlight the (published) disagreement between celebrated champions of the illuminati – even if their demonstration of the disagreement was unintentional.

Let’s take a look.

Here is one of the most popular G&T graphics in the blogosphere:

From Gerlich & Tscheuschner

Figure 1

It’s difficult to know how to criticize an imaginary diagram. We could, for example, point out that it is imaginary. But that would be picky.

We could say that no one draws this diagram in atmospheric physics. That should be sufficient. But as so many of the illuminati have learnt their application of the second law of thermodynamics to the atmosphere from this fictitious diagram, I feel the need to press forward a little.

Here is an extract from a widely-used undergraduate textbook on heat transfer, with a little annotation (red & blue):

From “Fundamentals of Heat and Mass Transfer” by Incropera & DeWitt (2007)

Figure 2

This is the actual textbook, before the Gerlich manoeuvre as I would like to describe it. We can see in the diagram and in the text that radiation travels both ways and there is a net transfer which is from the hotter to the colder. The term “net” is not really capable of being confused. It means one minus the other, “x-y”. Not “x”. (For extracts from six heat transfer textbooks and their equations read Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics).

Now let’s apply the Gerlich manoeuvre (compare fig. 2):

Not from “Fundamentals of Heat and Mass Transfer”, or from any textbook ever

Figure 3

So hopefully that’s clear. Proof by parody. This is “now” a perpetual motion machine and so heat transfer textbooks are wrong. All of them. Somehow.

Just for comparison, we can review the globally annually averaged values of energy transfer in the atmosphere, including radiation, from Kiehl & Trenberth (I use the 1997 version because it is so familiar even though values were updated more recently):

From Kiehl & Trenberth (1997)

Figure 4

It should be clear that the radiation from the hotter surface is higher than the radiation from the colder atmosphere. If anyone wants this explained, please ask.

I could apply the Gerlich manoeuvre to this diagram but they’ve already done that in their paper (as shown above in figure 1).

So lastly, we return to Kramm & Dlugi and the point they weren’t even making, which is nevertheless a useful one. They don’t provide a diagram, they provide an equation for energy balance at the surface – and I highlight each term in the equation to assist the less mathematically inclined:

[Figure: the Kramm & Dlugi (2011) surface energy balance equation, with each term highlighted]

 

Figure 5

The equation says that the sum of all fluxes at one point on the surface = 0. This is an application of the famous first law of thermodynamics, that is, energy cannot be created or destroyed.

The red term – absorbed atmospheric radiation – is the radiation from the colder atmosphere absorbed by the hotter surface. This is also known as “DLR” or “downward longwave radiation”, and as “back-radiation”.

Now, let’s assume that the atmospheric radiation increases in intensity over a small period. What happens?

The only way this equation can continue to be true is for one or more of the last 4 terms to increase.

  • The emitted surface radiation – can only increase if the surface temperature increases
  • The latent heat transfer – can only increase if there is an increase in wind speed or in the humidity differential between the surface and the atmosphere just above
  • The sensible heat transfer – can only increase if there is an increase in wind speed or in the temperature differential between the surface and the atmosphere just above
  • The heat transfer into the ground – can only increase if the surface temperature increases or the temperature below ground spontaneously cools

So, when atmospheric radiation increases the surface temperature must increase (or amazingly the humidity differential spontaneously increases to balance, but without a surface temperature change). According to G&T and the illuminati this surface temperature increase is impossible. According to Kramm & Dlugi, this is inevitable.
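As a toy version of that argument in code: solve the surface balance for Ts before and after a small increase in atmospheric radiation. Every flux and coefficient below is an invented, illustrative number, and the latent heat term is held fixed – one of the “all other things being equal” assumptions:

```python
# Toy surface energy balance: find the surface temperature Ts at which
# absorbed solar + absorbed DLR = emitted LW + sensible heat + latent heat.
# Every number here is an illustrative assumption.
from scipy.optimize import brentq

SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W/m2K4

def net_flux(ts, solar_abs, dlr, t_air=288.0, h_coeff=15.0, latent=80.0):
    emitted = 0.98 * SIGMA * ts**4          # emitted surface radiation (emissivity 0.98)
    sensible = h_coeff * (ts - t_air)       # crude bulk formula for sensible heat
    return solar_abs + 0.98 * dlr - emitted - sensible - latent

for dlr in (330.0, 340.0):                  # atmospheric radiation up by 10 W/m2
    ts = brentq(net_flux, 250.0, 350.0, args=(165.0, dlr))
    print(f"DLR = {dlr:.0f} W/m2  ->  Ts = {ts:.2f} K")
```

The balancing surface temperature goes up when the atmospheric radiation goes up – which is all that Kramm & Dlugi’s equation, plus the first law, requires.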

I would love it for Gerlich or Tscheuschner to show up and confirm (or deny?):

  • yes the atmosphere does emit thermal radiation
  • yes the surface of the earth does absorb atmospheric thermal radiation
  • yes this energy does not disappear (1st law of thermodynamics)
  • yes this energy must increase the temperature of the earth’s surface above what it would be if this radiation did not exist (1st law of thermodynamics)

Or even, which one of the above is wrong. That would be outstanding.

Of course, I know they won’t do that – even though I’m certain they believe all of the above points. (Likewise, Kramm & Dlugi won’t answer the question I have posed of them).

Well, we all know why..

Hopefully, the illuminati can contact Kramm & Dlugi and explain to them where they went wrong. I have my doubts that any of the illuminati have grasped the first law of thermodynamics or the equation for temperature change and heat capacity, but who could say.

Read Full Post »

It is not surprising that the people most confused about basic physics are the ones who can’t write down an equation for their idea.

The same people are the most passionate defenders of their beliefs and I have no doubts about their sincerity.

I’ll meander into what it is I want to explain..

I found an amazing resource recently – iTunes U, short for iTunes University. Now I confess that I have been a little confused about angular momentum. I always knew what it was, but in the small discussion that followed The Coriolis Effect and Geostrophic Motion I found myself wondering whether conservation of angular momentum was something independent of, or a consequence of, linear momentum or some aspect of Newton’s laws of motion.

It seemed as if conservation of angular momentum was an orphan of Newton’s three laws of motion. How could that be? Perhaps this conservation is just another expression of these laws in a way that I hadn’t appreciated? (Knowledgeable readers please explain).

Just around this time I found iTunes U, searched for “mechanics”, and found the amazing series of lectures from MIT by Prof. Walter Lewin – a series of videos I recommend to anyone interested in learning some basics about forces, motion and energy. Lewin has a gift, along with an engaging style. It’s nice to see chalkboards and overhead projectors, as they are probably no longer in use (young people, please advise).

These lectures are not just for iPhone and iTunes people – here is the weblink.

The gift of teaching science is not in accuracy – that’s a given – the gift is in showing the principle via experiment, matching it with a theoretical derivation of “why this should be so”, and thereby producing a conceptual understanding in the student.

I haven’t got to Lecture 20: Angular Momentum yet – I’m at about lecture 11. It’s basic stuff but so easy to forget (yes, quite a lot of it has been forgotten). It’s especially easy to forget how different principles link together and which principle is used to derive the next.

What caught my attention for the purposes of this article was how every principle had an equation.

For example, in deriving the work done on an object, Lewin integrates force over the distance traveled and comes up with the equation for kinetic energy.

While investigating the oscillation of a mass on a spring, the equation for its harmonic motion is derived.
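
As a rough sketch of those two derivations (standard notation; not Lewin’s exact steps):

W = ∫F dx = ∫m(dv/dt)·v dt = ∫mv dv = ½mv² − ½mv₀²

– the work done on the object equals the change in its kinetic energy, ½mv². And for a mass m on a spring of stiffness k:

m d²x/dt² = −kx, with solution x(t) = A cos(ωt + φ), where ω = √(k/m)

– simple harmonic motion at angular frequency ω.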

Every principle has an equation that can be written down.

Over the last few days, as at many times over the past two years, people have arrived on this blog to explain how radiation from the atmosphere can’t affect the surface temperature because of blah blah blah. Where blah blah blah sounds like it might be some kind of physics but is never accompanied by an equation.

Here’s the equation I find in textbooks.

Energy absorbed from the atmosphere by the surface, Ea:

Ea = αRL↓ ….[eqn 1]

where α = absorptivity of the surface at these wavelengths, RL↓ = downward radiation from the atmosphere

And this energy, once absorbed, is indistinguishable from energy absorbed from the sun. 1 W/m² absorbed from the atmosphere is identical to 1 W/m² absorbed from the sun.
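
As a minimal worked example of eqn 1 (the numbers are typical round figures, not measurements):

alpha    = 0.95     # longwave absorptivity of the surface (illustrative)
R_L_down = 340.0    # downward atmospheric radiation, W/m² (typical global-average figure)
E_a = alpha * R_L_down   # eqn 1
print(E_a)               # 323 W/m² absorbed by the surface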

That’s my equation. I have cited six textbooks that explain this idea in slightly different ways in Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics.

It’s also produced by Kramm & Dlugi, who think the greenhouse effect is some unproven idea:

Now the equation shown is a pretty simple equation. The equation from Kramm & Dlugi, reproduced in the graphic above (Figure 5), looks a little more daunting but is simply adding up a number of fluxes at the surface.

Here’s what it says:

Solar radiation absorbed + longwave radiation absorbed – thermal radiation emitted – latent heat emitted – sensible heat emitted + geothermal energy supplied = 0

Or another way of thinking about it is energy in = energy out (written as “energy in – energy out = 0“)

Now one thing is not amazing to me – of the tens (hundreds?) of concerned citizens commenting on the many articles on this subject who have tried to point out my “basic mistake” and tell me that the atmosphere can’t blah blah blah, not a single one has produced an equation.

The equation might look something like this:

Ea = f(α,Tatm-Tsur).RL↓ ….[eqn 2]
where Tatm = temperature of the atmosphere, Tsur = temperature of the surface

With the function f being defined like this:

f(α,Tatm-Tsur) = α, when Tatm ≥ Tsur and

f(α,Tatm-Tsur) = 0, when Tatm < Tsur

In English, it says something like energy from the atmosphere absorbed by the surface = 0 when the temperature of the atmosphere is less than the temperature of the surface.
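
The same claim as a code sketch (my rendering, for illustration only – it appears in no textbook, and that is the point):

def absorbed_from_atmosphere(alpha, R_L_down, T_atm, T_sur):
    # The implicit claim of eqn 2: absorption switches off whenever the
    # atmosphere is colder than the surface. Unphysical - the radiation
    # arriving at the surface would have to vanish, violating the first law.
    if T_atm >= T_sur:
        return alpha * R_L_down
    return 0.0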

I’m filling in the blanks here. No one has written down such ridiculous unphysical nonsense because it would look like ridiculous unphysical nonsense. Or perhaps I’m being unkind. Another possibility is that no one has written down such ridiculous unphysical nonsense because the proponents have no idea what an equation is, or how one can be constructed.

My Prediction

No one will produce an equation which shows how no atmospheric energy can be absorbed by the surface. Or how atmospheric energy absorbed cannot affect internal energy.

This is because my next questions will be:

  1. Please supply a textbook or paper with this equation
  2. Please explain from fundamental physics how this can take place

My Challenge

Here’s my challenge to the many people concerned about the “dangerous nonsense” of the atmospheric radiation affecting surface temperature –

Supply an equation.

If you can’t, it is because you don’t understand the subject.

It won’t stop you talking, but everyone who is wondering and reads this article will be able to join the dots together.

The Usual Caveat

If there were only two bodies – the warmer earth and the colder atmosphere (no sun available) – then of course the earth’s temperature would decrease towards that of the atmosphere and the atmosphere’s temperature would increase towards that of the earth until both were at the same temperature – somewhere between the two starting temperatures.

However, the sun does actually exist, and the question is simply whether the presence of the (colder) atmosphere affects the surface temperature compared with the case of no atmosphere at all. It is The Three Body Problem.
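
A minimal sketch of the three-body case, using the crudest textbook model – a single atmospheric slab that absorbs all surface radiation and is heated only by that radiation (an assumption for illustration, not a description of the real atmosphere):

SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W/m²K⁴
absorbed_solar = 240.0     # W/m² absorbed at the surface (illustrative round number)

# No atmosphere: the surface emits directly to space
t_no_atmosphere = (absorbed_solar / SIGMA) ** 0.25          # ≈ 255 K

# One fully absorbing slab: at equilibrium the slab emits 'absorbed_solar'
# both upward and downward, so the surface must emit 2 x absorbed_solar
t_with_slab = (2.0 * absorbed_solar / SIGMA) ** 0.25        # ≈ 303 K

print(t_no_atmosphere, t_with_slab)

The colder slab never heats the surface by itself – the sun does the heating – but its presence raises the surface temperature above the no-atmosphere value, because it returns part of the energy the surface would otherwise lose straight to space.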

My Second Prediction

The people not supplying the equation, the passionate believers in blah blah blah, will not explain why an equation is not necessary or not available. Instead, they will continue to blah blah blah.

Read Full Post »

The Rotational Effect

Climate scientists think that the rotation of the earth is responsible for a lot of the atmospheric and ocean effects that we see. In fact, most climate scientists think it is easy to prove. (Although not as simple as proving that radiatively-active gases affect the climate).

Now suppose the earth’s rotation speed were reducing by X% per year as a result of some important human activity (just suppose, for the sake of this mental exercise), and had been for 100 years or so.

Then atmospheric physics papers and textbooks would comment on the effect of the current speed of rotation of the planet – quantifying that effect by analyzing what the climate would be like without rotation, just as an introduction to the effect of rotation on climate. Let’s say that the mean annual equator-arctic temperature differential is currently 35°C (I haven’t checked the exact value) but without rotation it might be thought to be 45°C. So we would describe the rotational effect as being responsible for a 10°C reduction in the arctic-equatorial temperature differential.

More specifically, the rotational effect might be quantified as the number of petawatts of equator-to-pole heat transport vs the value calculated for a “no rotation” earth. But by way of introduction the temperature differential is an easier value to grasp than a change in petawatts.

Various researchers would attempt to calculate the much smaller changes likely to occur in the climate as a result of the rotational changes that might take place over the next 10-20 years. They would use GCMs and other models that would be exactly like the current ones.

And of course there would be many justifiable questions about how accurate the models are – like now.

And many from the general public, not understanding how to follow the equations of motion in rotating frames, or the thermal wind equation, or Ekman pumping, or baroclinic instability, or pretty much anything relating to atmospheric & ocean dynamics, might start saying:

The rotational effect doesn’t exist

Many of these people would be skeptical about the small changes to climate that could result from an imperceptible change in the rotation rate.

Many blogs would spring up with people using hand-waving arguments about the climatic effects of rotation being vastly overstated.

Other blogs would write that climate science makes massively simplistic assumptions in its calculations and uses geostrophic balance as its complete formula for climate dynamics. Many other people, unencumbered by any knowledge from climate science textbooks or any desire to read one, would curiously label themselves skeptics and happily repeat these “facts” without ever checking them.

People with some scientific qualifications, but without a solid understanding of the complete field of oceanic or atmospheric dynamics, would write poor-quality papers explaining how the rotational effect was much smaller than climate science calculated, and produce incomplete or incorrectly derived equations to demonstrate this.

These scientists and their new work would be lauded by many blogs as being free from the simplistic assumptions that have dogged climate science – finally, accurate and high-quality work!

Other blogs would claim that climate science was ignoring the huge effects of absorption and emission of radiation on the climate.

Then some more serious scientists would come along and write lengthy papers arguing that the rotational effect as defined by climate science does not exist, because the “no rotation” result is incorrectly defined, or cannot be accurately calculated.

Papers of incalculable value.

Read Full Post »
