Or perhaps confidence intervals that are much too small. Simple auto-correlation may not be a good model for the behavior of temperature. If there is long-term persistence, a better model would be ARFIMA (autoregressive fractionally integrated moving average), which drops the degrees of freedom even further. It can also mean that longer time series do not produce better estimates of the climate: the apparent standard deviation can actually increase with an increasing number of samples as the low-frequency noise makes itself felt. 1/f noise, or drift, is the bane of instrumental analytical chemistry.
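A quick simulation illustrates the point. This is a sketch only: the "walk" case uses a random walk, an extreme (1/f^2) form of low-frequency noise rather than a fitted climate model, but it shows how the scatter of record means shrinks with record length for white noise yet grows when drift is present:

```python
import numpy as np

rng = np.random.default_rng(1)

def sd_of_means(noise, n, reps=400):
    """Standard deviation of the mean across many n-sample records."""
    means = []
    for _ in range(reps):
        e = rng.standard_normal(n)
        # "walk" = cumulative sum: an extreme low-frequency (drift) noise
        x = np.cumsum(e) if noise == "walk" else e
        means.append(x.mean())
    return float(np.std(means))

for n in (100, 400):
    # White noise: scatter of the mean falls as 1/sqrt(n).
    # Random walk: scatter of the mean GROWS with record length.
    print(n, round(sd_of_means("white", n), 3), round(sd_of_means("walk", n), 1))
```

For the white-noise case, quadrupling the record halves the uncertainty of the mean; for the drifting case it roughly doubles it, which is exactly why longer records do not automatically help.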

HR and BB: A more rigorous way to deal with trends where the beginning and end points appear to have been cherry-picked (such as the 1998 El Nino) is to trust only sources that report the slope and the 95% confidence interval around the slope. If one takes statistical uncertainty into account, there is nothing innately wrong with wanting to know the temperature trend for the past 10 or 12 years (little or no increase) and comparing it to the trend for 1975-1995 (+0.2? degC/decade), when the scientific “consensus” about climate sensitivity was coalescing. It would be even better to know the 95% confidence interval about the difference, or some other statistical measure of the significance of the difference. Unfortunately, temperature data show significant auto-correlation: if this month’s temperature anomaly is above average (due to El Nino), next month’s will be also. From a statistical point of view, there aren’t 120 months of independent temperature data per decade, so simple statistical analysis produces 95% confidence intervals that are somewhat too small.
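That loss of independent data can be made concrete with the standard AR(1) correction, N_eff = N(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation. A sketch (the function names and the persistence value 0.8 are my own illustrative choices, not fitted to any real temperature series):

```python
import numpy as np

def effective_sample_size(x):
    """N_eff = N * (1 - r1) / (1 + r1), where r1 is the lag-1
    autocorrelation: the standard AR(1) correction."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    r1 = float(np.dot(xm[:-1], xm[1:]) / np.dot(xm, xm))
    n = len(x)
    return n * (1 - r1) / (1 + r1)

# Simulate 1200 "months" of AR(1) noise with persistence rho = 0.8
rng = np.random.default_rng(0)
rho, n = 0.8, 1200
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal()

n_eff = effective_sample_size(x)
# Naive confidence intervals are too narrow by roughly sqrt(n / n_eff)
inflation = (n / n_eff) ** 0.5
print(round(n_eff), round(inflation, 2))
```

With persistence of 0.8, a century's worth of monthly data behaves like only about a tenth as many independent samples, so naive confidence intervals are roughly a factor of three too narrow.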

Thank you for your comment. The point I am making is that, given the scarcity of climate blogs which don’t have an axe to grind, it can be a good idea to cast one’s net widely so as to identify common ground. I mentioned two (out of many possible) online sources which I had found useful for a newcomer looking for an introduction to the basics of climate science. What those sources, and their authors, have or have not said or written (rightly or wrongly) in other contexts and on other topics doesn’t diminish their usefulness for that particular purpose.

A couple of points from my own experience:

(a) Even tho’ you can make progress without following all the equations, there’s a limit to how far you can get with purely qualitative understanding. I think you will still need to try to understand what some of the numbers mean.

(b) Most climate blogs (this one excepted, which is why it is so valuable) have an openly biased standpoint, usually but not always either broadly sceptical or broadly alarmist. This divergence, not surprisingly, colours the way the topic is usually introduced and discussed. Without necessarily being untruthful, a site may, for instance, avoid discussion of serious objections to its point of view.

And, given that the subject is of immense complexity, I suspect there are few if any individual scientists who can truthfully claim to have a deep understanding of it right across the board. Ideally the IPCC should have been able to fill this gap. However, its ‘Summaries for policymakers’ and ‘Technical summaries’ of 2001 and 2007 (available free online at http://www.ipcc.ch) seem to me to suffer from the same tendency to bias as do most climate blogs. It’s more what they leave out than the errors that they contain.

Until about 4 years ago I knew little about climate (and nothing about the global warming debate) but I had a bit of spare time and wanted to learn. Scienceofdoom wasn’t here then.

I spent time at the outset reading parts of the IPCC reports and educating myself from two websites at nearly opposite ends of the spectrum: the alarmist site http://www.realclimate.org and the sceptic site http://www.junkscience.com/greenhouse/index. I found useful material on both these sites, mixed up in both cases with possibly more dubious statements and maybe not-so-well-founded assertions. However, I reckoned that where they agreed (and I found substantial agreement) I could at least take the common ground as a starting point.

I also read a number of books….

I am now beginning to appreciate how little is really known about climate. I’m hoping scienceofdoom will enable us to sketch in some of the boundaries of that knowledge.

Rather than mess with the integrated forms of the radiative transfer equations (which typically have no analytical solution), the differential form, or Schwarzschild equation, is much simpler and contains a complete description of the physics. You introduced me to this physics. To save you some work and review what I learned, hopefully correctly:

The intensity of light (I) leaving a thin layer of atmosphere at a certain wavelength depends on the energy entering that layer, the amount of light absorbed by that layer and the amount of light emitted at that wavelength by that layer (and in some situations we need to include the amount of light scattered or reflected by the layer and clouds).

dI = -absorption + emission

dI = -[kprI]ds + [B(T)kpr]ds

The change in light intensity (dI) at a given wavelength as it passes through a short distance of air (ds) is the emission minus the absorption. Absorption is proportional to:

a) the amount of light entering the layer (I),

b) the density of the molecules absorbing light = atmosphere density (p or rho) times the mixing ratio or weight fraction of the absorber (r, rising from 300 to 400 ppm for CO2),

c) an absorption coefficient (k, absorption per unit mass) that varies with wavelength (often dramatically and rapidly). Since nitrogen, oxygen and argon don’t have a dipole moment, their absorption coefficients are zero at most visible and infrared wavelengths. We often restrict our discussion to the major absorbers, mainly water vapor and CO2. Since they absorb a greater fraction of outgoing (infrared) radiation than incoming (mostly visible) radiation, these molecules are called greenhouse gases (GHGs).

Emission is proportional to:

a) the density of GHG molecules (pr) emitting light (the same pr factor as for absorption),

b) an emission coefficient (k) which is identical to the absorption coefficient,

c) the Planck function (B(T)), which tells us how the probability of emitting a photon varies with temperature and wavelength.

Absorption in our atmosphere doesn’t change appreciably with temperature, but emission varies roughly as T^4 (about a factor of 3 across the atmosphere’s range of temperatures) because of B(T).
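The layer equation above can be integrated numerically and checked against the known analytic solution for a layer at constant temperature. This is a sketch with made-up numbers (the values of k, rho, r and the intensities are illustrative only, chosen so the layer's optical depth is 1):

```python
import math

def schwarzschild_layer(I0, B, k, rho, r, s, nsteps=20000):
    """Integrate dI = -[k*rho*r*I] ds + [B*k*rho*r] ds through a layer
    of thickness s by simple Euler steps, holding B (and hence T)
    constant. Symbols as in the text: k absorption coefficient,
    rho air density, r absorber mixing ratio, B the Planck source."""
    ds = s / nsteps
    I = I0
    for _ in range(nsteps):
        I += k * rho * r * (B - I) * ds
    return I

# Made-up numbers chosen so the optical depth k*rho*r*s equals 1
k, rho, r = 0.1, 1.2, 4e-4
s = 1.0 / (k * rho * r)
I0, B = 100.0, 60.0                       # arbitrary intensity units
I_num = schwarzschild_layer(I0, B, k, rho, r, s)
I_exact = B + (I0 - B) * math.exp(-1.0)   # analytic solution, constant B
print(round(I_num, 3), round(I_exact, 3))
```

The numerical march reproduces the analytic result: the beam relaxes from its entry value toward the layer's own Planck emission B(T), which is the whole content of the equation.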

Integrated forms of the Schwarzschild equation are extremely complicated and usually include:

a) a term, tau (the optical depth), which I think represents the distance over which a fraction 1/e of the light is lost (net absorption minus emission?).

b) a geometric factor (1/u?) which corrects for the fact that we want to know how much energy flows toward and away from the earth, while B(T) describes light radiating in all directions from a point source.

c) an integration over all wavelengths to determine energy flux.
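The tau and 1/u factors together give the Beer-Lambert transmittance of a slab along a slant path (a sketch; emission is ignored here, so this is the pure-absorption limit):

```python
import math

def transmittance(tau, mu):
    """Fraction of a beam surviving a slab of optical depth tau along
    a slant path whose zenith-angle cosine is mu (Beer-Lambert law,
    absorption only, no emission)."""
    return math.exp(-tau / mu)

# tau = 1: a vertical beam (mu = 1) is reduced to 1/e of its intensity
print(round(transmittance(1.0, 1.0), 3))   # 0.368
# The same slab looks twice as thick at 60 degrees (mu = 0.5)
print(round(transmittance(1.0, 0.5), 3))   # 0.135
```

This is why the geometric factor matters: radiation leaving at a slant traverses more absorber than radiation heading straight up, even through the same physical layer.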

SOD’s August post on “Vanishing Nets” presented an integrated solution to the Schwarzschild equation assuming equilibrium between incoming and outgoing energy, a fixed value of k for all outgoing wavelengths, and a constant mixing ratio at all altitudes. This is called the grey atmosphere model.
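For reference, the classic Eddington grey-atmosphere solution (a textbook result, offered as a reference point rather than necessarily the exact form in that post) gives the radiative-equilibrium temperature as a function of optical depth:

```python
def grey_T(tau, Te=255.0):
    """Eddington grey-atmosphere radiative-equilibrium profile:
    T(tau)^4 = (3/4) * Te^4 * (tau + 2/3), with Te the effective
    emission temperature (about 255 K for Earth)."""
    return (0.75 * Te**4 * (tau + 2.0 / 3.0)) ** 0.25

# "Skin" temperature at the top of the atmosphere (tau = 0)
print(round(grey_T(0.0), 1))
# An infrared optical depth near 1.5 puts the surface near the
# observed ~288 K
print(round(grey_T(1.5), 1))
```

Even this crudest model captures the essential greenhouse behavior: temperature rises monotonically with optical depth below, so adding absorber warms the surface.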

Absorption varies with wavelength and the concentration of water vapor varies with altitude, so there is no simple equation that describes energy flux through the Earth’s atmosphere. Climate models calculate energy flux through the Earth’s atmosphere by numerical integration of the Schwarzschild equation: breaking the atmosphere up into grid cells and calculating the change in light intensity over all wavelengths in each grid cell, assuming an average value for the parameters in each cell.
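That layer-by-layer marching can be sketched in a few lines. The numbers here are made up for illustration; a real model does this per wavelength band with profiles of temperature and absorber amount in every column:

```python
import math

def upwelling_intensity(I_surface, taus, Bs):
    """March a beam upward through discrete layers, as a model grid
    column does. taus[i] is layer i's optical depth and Bs[i] its
    Planck source. Each layer transmits exp(-tau) of what enters and
    adds its own emission: I_out = I_in*exp(-tau) + B*(1 - exp(-tau))."""
    I = I_surface
    for tau, B in zip(taus, Bs):
        t = math.exp(-tau)
        I = I * t + B * (1.0 - t)
    return I

# Three illustrative layers cooling with height (made-up numbers)
I_top = upwelling_intensity(100.0, [0.5, 0.5, 0.5], [80.0, 60.0, 40.0])
print(round(I_top, 1))
```

The intensity escaping to space is less than what the surface emitted, because the atmosphere absorbs surface radiation and re-emits at its own colder temperatures; that gap is the greenhouse effect in miniature.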

Harold:

Amorphous clouds.

If the cloud is minimal, its impact on the radiation balance of a 100×100 km^2 grid cell is, obviously, minimal. If it covers a significant part of a grid cell, a fraction of the radiative transfer is explicitly computed in models similar to the ones presented by Steve Carson in the links above. See, for instance, Thomas and Stamnes 1999: Radiative Transfer in the Atmosphere and Ocean, Cambridge University Press, page 468.

CO2 contained in water droplets.

Sure, the dissolution of CO2 in rain affects the concentration of CO2 on a microscopic scale close to the cloud droplets. However, the observed concentration of CO2 (see data from Mauna Loa and similar observatories) does NOT show any significant dependence of the measured values on rainfall. This effect does not need to be considered at all on a climate time scale in terms of the radiative transfer equation, since the CO2 concentration is prescribed.

Mass and water vapor.

Climate and weather forecast models explicitly treat mass (pressure) and water vapor as prognostic variables, together with temperature and others such as wind, density and so on. The spatial anisotropy of mass, temperature and water vapor mixing ratio is explicitly considered by the models.

Mixture of gases

Any first-year undergraduate text on atmospheric physics (Wallace and Hobbs is excellent) already explains that air is a mixture of gases. The equations consider all of the most important ones. VOCs and many other gases are, obviously, not considered: their concentrations are minimal, and their impact on the value of the gas constant used in the equation of state is completely negligible.

Clouds

Clouds are difficult, certainly, but some of them are simulated (imperfectly, but simulated) and the radiative effects of others are also considered. Still others are not, but nobody says models are perfect. Work is currently in progress to improve this; it is already being considered to some extent right now.

Fancy radiative transfer equations.

They apply, with some modifications (including clouds if needed), to any window of the atmosphere, given appropriate data on composition, the vertical profile of temperature and constituents, and so on. Examples are usually presented for the easiest cases (no clouds), but real computations exist for the complex situations too. This is similar to the way physics is taught: in the early years, motion is analyzed for free particles, and until that is mastered it makes no sense to consider friction. The same applies here. If people don’t understand the core of radiative transfer for clear-sky conditions, what is the point in using examples that consider clouds?

Hope this helps

Thank you for not whacking my comment as Joe, John and Gavin always do.
