Your calculation of the volume of the top 700 m of the ocean assumes that the ocean's area is constant throughout the top 700 m, whereas it obviously shrinks as depth increases. (I have been trying to find the volume of the top 3000 m of the ocean, for which this is a bigger issue.)

Also, the specific heat of seawater is about 3,990 J/K/kg, not 4180 – you are thinking of fresh water.

]]>While the presence of a unit root would indeed imply that regular OLS regression is strictly speaking not valid (the error bounds will be underestimated, though the central estimate of the slope is probably not affected much), your comparison with ‘an evening of casino winnings’ misses the mark.

That is exactly because, as you also say, temperatures are governed by physical processes (i.e. the planetary energy balance and internal modes of variability such as ENSO).

A better analogy would be a time series of your body weight containing a unit root. We all know that our body weight is governed by physical-biological processes (our personal ‘energy balance’). And if we eat more than our body needs, we’ll gain weight, irrespective of the presence of a unit root.

See also http://ourchangingclimate.wordpress.com/2010/04/01/a-rooty-solution-to-my-weight-gain-problem/

Of course, that doesn’t negate the need for appropriate statistics in analyzing time series.
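The claim above – that strong autocorrelation or a unit root inflates the true uncertainty of an OLS trend while leaving the slope estimate itself roughly unbiased – can be checked with a small Monte Carlo sketch. Everything below is synthetic and illustrative, not climate data; the AR(1) coefficient of 0.9 simply stands in for near-unit-root persistence:

```python
import math
import random

random.seed(0)
n, trials = 100, 2000
t = list(range(n))
tbar = sum(t) / n
sxx = sum((x - tbar) ** 2 for x in t)

slopes, naive_se = [], []
for _ in range(trials):
    # AR(1) noise, phi = 0.9: strongly persistent, standing in for near-unit-root data
    e = [0.0] * n
    for i in range(1, n):
        e[i] = 0.9 * e[i - 1] + random.gauss(0, 1)
    y = [0.02 * ti + e[i] for i, ti in enumerate(t)]  # true trend: 0.02 per step

    # ordinary least squares fit of y = a + b*t
    ybar = sum(y) / n
    b = sum((t[i] - tbar) * (y[i] - ybar) for i in range(n)) / sxx
    a = ybar - b * tbar
    resid = [y[i] - (a + b * t[i]) for i in range(n)]
    s2 = sum(r * r for r in resid) / (n - 2)
    slopes.append(b)
    naive_se.append(math.sqrt(s2 / sxx))  # textbook OLS standard error of the slope

mean_slope = sum(slopes) / trials
mean_naive = sum(naive_se) / trials
emp_se = math.sqrt(sum((b - mean_slope) ** 2 for b in slopes) / trials)
print("mean fitted slope :", round(mean_slope, 4))
print("mean naive OLS SE :", round(mean_naive, 4))
print("empirical SE      :", round(emp_se, 4))
```

With persistent noise the actual scatter of the fitted slopes comes out several times larger than the textbook OLS standard error, while the average slope stays close to the true 0.02 per step – the pattern described above.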

]]>The uncertainty in measurements covering the top 3000 m is ridiculously large. The gray error bars should be at least 5 times bigger for 3000 m than for 700 m, with one standard deviation – roughly a quarter of the full width of the 95% confidence interval – covering the full vertical scale of any graph. Can you post a graph of ocean heat content down to 3000 m WITH ERROR BARS? Why should anyone pay the slightest attention to the 3000 m data?

The problem with data for the top 300 m is that the temperature of the top 100 m varies with the season, the local weather, and the strength of the wind (which mixes the top layer). Other than sea surface temperature, we probably don’t have historical data with the temporal resolution to accurately track the change in temperature with depth in the top 100 m. Therefore even though we may have the ability to accurately measure a 0.38 K change in the top 300 m, this change is occurring against a background of high natural variability in the top 100 m.

Uncertainty is probably the reason the majority of analyses track the heat content in the top 700 m of the ocean. Still, the 40-year change is only a total of 0.16 K. Most of this data was collected before global warming became a major scientific concern, possibly with equipment and procedures that weren’t designed to provide the precision required today.

Until we have 10–20 years of data from the Argo buoys, our understanding of energy flux from the surface into the deep oceans is completely inadequate. Developers of GCMs don’t have the observational evidence (energy flux into the top 300 m, then the top 700 m, then the top 3000 m) to demonstrate that their models accurately reproduce “thermal diffusivity”. Thermal diffusivity is a critical parameter of GCMs that has a major impact on climate sensitivity and on whether global temperature lags radiative forcing by a few years or by a few decades.
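One way to see why diffusivity controls the lag is the back-of-envelope diffusive timescale t ~ L²/κ. The κ values below are illustrative effective vertical diffusivities chosen only to bracket the possibilities; they are not numbers from any GCM:

```python
# Rough diffusive timescale t ~ L^2 / kappa for heat to penetrate to depth L.
# The kappa values are illustrative effective mixing rates, not model output.
SECONDS_PER_YEAR = 3.156e7

for kappa in (1e-4, 1e-3):        # m^2/s, an assumed plausible range
    for L in (300, 700, 3000):    # m, the layer depths discussed above
        t_years = L ** 2 / kappa / SECONDS_PER_YEAR
        print(f"kappa={kappa:g} m^2/s, depth={L} m -> ~{t_years:,.0f} yr")
```

Shifting κ by a factor of ten moves the penetration time for the 700 m layer from a decade-scale to a century-scale lag, which is exactly the years-versus-decades distinction above.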

]]>*On the calculation*:

Using your value of surface area and converting to m^2:

A = 3.4×10^14 m^2

The average depth of the ocean around the world is about 4km=4000m.

Volume of ocean = 1.36×10^18 m^3

Density of water = 1000 kg/m^3 (approx)

Mass of ocean = 1.36×10^21 kg

And specific heat capacity, c = 4200 J/(kg·K)

And dT = Q/(mc), where dT = temperature change and Q = energy

For Q = 10^22 J:

dT ≈ 0.002 °C – so you are correct,

and for the top 300 m:

dT ≈ 0.002 × 4000/300 ≈ 0.02 °C

So over 50 years, dT ≈ 1 °C (if all the heat is in the top 300 m)
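As a quick check, here is the same arithmetic in a few lines of Python, using the same approximate inputs as above (note that 3990 J/(kg·K) is closer for seawater, as pointed out in an earlier comment):

```python
# Back-of-envelope ocean warming from added heat, using the values above.
area = 3.4e14    # m^2, ocean surface area
depth = 4000.0   # m, mean ocean depth
rho = 1000.0     # kg/m^3, approximate density of seawater
c = 4200.0       # J/(kg*K), specific heat (3990 is closer for seawater)
Q = 1e22         # J of added heat

mass = area * depth * rho        # ~1.36e21 kg
dT_whole = Q / (mass * c)        # warming if spread through the whole ocean
dT_300 = dT_whole * depth / 300  # same heat confined to the top 300 m

print(f"whole ocean : {dT_whole:.4f} K per 1e22 J")
print(f"top 300 m   : {dT_300:.3f} K per 1e22 J")
print(f"50 x 1e22 J in top 300 m: {50 * dT_300:.1f} K")
```

Spreading 10^22 J through the whole ocean gives about 0.002 °C; confining it to the top 300 m gives a bit over 0.02 °C, i.e. roughly 1 °C over fifty such annual increments.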

*On your comments*

Clearly there are measurement issues, but it’s an important subject.

The error bars are big going back, but that’s the data available.

If no one tried to measure it then no one would know whether it was 1 °C or 0.1 °C or 5 °C.

At least now a few scientists have tried to put a value on the problem and the rest of us can decide how useful it is.

]]>Are my calculations correct? Are scientists really drawing important conclusions from such tiny changes in temperature? Surely no one really believes that we have reliable ocean heat content data going back to 1950! The error bars on Figure 1 that represent sampling variability are bad enough and they don’t include systematic errors. If thousands of Argo buoys designed to measure ocean heat content are having a difficult time tracking changes in ocean heat content, why should anyone pay attention to this primitive data?

]]>A long discussion about time series analysis and unit roots as they relate to temperature is here:

http://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared

You will get the point about a quarter of the way into the blog. After seeing the last post by VS there is no need to read further.

For those not statistically inclined:

A trend line through temperature data is a lot like a trend line through an evening of casino winnings. It tells you correctly what happened, but has no predictive power.

Of course there is a physical process generating the observed temperatures. However, even simple non-linear systems can display chaotic behavior. For a classic example, see: http://en.wikipedia.org/wiki/Lorenz_attractor.
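The sensitivity the Lorenz system is famous for can be seen with a crude Euler integration (classic parameters σ = 10, ρ = 28, β = 8/3; the step size and run length here are arbitrary illustrative choices):

```python
# Minimal Lorenz-system sketch: two trajectories starting 1e-8 apart diverge,
# illustrating sensitive dependence on initial conditions.
def lorenz_step(x, y, z, dt=0.001, s=10.0, r=28.0, b=8.0 / 3.0):
    dx = s * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    return x + dx * dt, y + dy * dt, z + dz * dt

p = (1.0, 1.0, 1.0)
q = (1.0 + 1e-8, 1.0, 1.0)   # perturbed by one part in 10^8
for _ in range(40000):       # integrate 40 time units with a crude Euler scheme
    p = lorenz_step(*p)
    q = lorenz_step(*q)

sep = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
print(f"separation after integration: {sep:.3f}")
```

An initial difference of one part in 10^8 grows to order-one separation, which is why a trend through the output of such a system describes the past without constraining the future.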

So the task of extracting meaningful inferences from a very noisy signal with a unit root which represents a chaotic process is indeed very difficult. At a minimum, it requires state-of-the-art statistics. [snip – please check the etiquette].

Will

]]>Very good question. Levitus says:

The starting year is chosen because data coverage improved after the mid 1960s when XBT measurements of the upper ocean began. The linear trends (with 95% confidence intervals) of OHC700 are 0.40 × 10^22 ± 0.05 J/yr for 1969–2008 and 0.27 × 10^22 ± 0.04 J/yr for 1955–2008.

So running an earlier trend line would, in their opinion, reduce the confidence interval. It seems plausible at least.

You can see the error “shading” in the earlier graph by Domingues – much larger than any trend in the 1950-1970 period.

The trend line does coincide with the increase in land surface temperature (and SST) measurements from 1970 onwards. Between 1940 and 1970, GMST was more or less flat.
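For scale, the J/yr trends can be converted into a heating rate per square metre of the Earth's surface. The surface-area and seconds-per-year values below are standard approximations supplied here for illustration, not numbers from Levitus:

```python
# Convert the OHC700 trends quoted above (J/yr) to W per m^2 of Earth's surface.
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14   # m^2, total surface area of the Earth (assumed value)

for label, trend in (("1969-2008", 0.40e22), ("1955-2008", 0.27e22)):
    w_per_m2 = trend / SECONDS_PER_YEAR / EARTH_AREA
    print(f"{label}: {w_per_m2:.2f} W/m^2")
```

That puts the 1969–2008 trend at roughly 0.25 W/m² averaged over the globe.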

]]>While it’s obvious the trend would still be up, it seems like that would exaggerate the trend by quite a bit! ]]>

I stop reading when I hit the word “novel”!

]]>