You might like to consider Beenstock’s reply here

How, by the way, can you do ARIMA without detrending?

How is not the problem. You simply plug the time series into a black box such as the auto.arima function in R, and out come numbers. You can then use a Monte Carlo technique to generate a few thousand random series with those numbers and find the 95% confidence limits of the envelope. Then, probably, you claim that there is no significant trend because the confidence limits are nearly wall to wall. In fact, I’m pretty sure that if you do that with the atmospheric CO2 concentration starting around 1900, using relatively recent ice core data plus Mauna Loa data, and blindly follow the method, the envelope includes zero, or at least substantial decreases in CO2 concentration come out as likely. But that didn’t seem to bother the unit root idi0t$ like Beenstock et al., 2012. While checking that reference, I found a paper that totally demolishes their thesis.
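To make the "plug it into the black box" step concrete: suppose, purely hypothetically, that the fit came back as a simple random walk with drift, ARIMA(0,1,0). The Monte Carlo envelope step then looks like the Python sketch below. The drift and noise numbers are made up for illustration, not a real fit to the ice core / Mauna Loa record.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fit: a random walk with drift, ARIMA(0,1,0):
#   x_t = x_{t-1} + drift + e_t,  e_t ~ N(0, sigma^2).
# These numbers are illustrative only, not a real CO2 fit.
x0, drift, sigma = 300.0, 0.3, 2.0
n_years, n_sims = 120, 5000

# Simulate a few thousand random series with those numbers.
steps = drift + sigma * rng.standard_normal((n_sims, n_years))
paths = x0 + np.cumsum(steps, axis=1)

# 95% confidence envelope of the simulated endpoints.
lo, hi = np.percentile(paths[:, -1], [2.5, 97.5])
print(f"start {x0}, 95% envelope after {n_years} steps: [{lo:.1f}, {hi:.1f}]")
# The envelope width grows like sigma * sqrt(t), so with a small drift the
# lower limit falls below the starting value -- the "no significant trend"
# (or even "a decrease is likely") conclusion described above.
```

Since the envelope half-width scales as sigma·sqrt(t) while the central tendency grows only linearly with a small drift, blindly following the method declares almost any modest trend insignificant.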

Conceptually I would describe the idea as follows:

In the case of AR(n), future values depend more strongly on the earlier ones the larger the absolute values of the roots of the characteristic equation are. For roots less than one in absolute value the behavior is converging, while roots larger than one let particular sequences of values be amplified and blow out.
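A minimal noise-free sketch of the AR case, using the convention above (roots less than one converge): for AR(1), x_t = φ·x_{t-1}, the root is φ itself, and iterating shows the two regimes directly.

```python
import numpy as np

def ar1(phi, n=200, x0=1.0):
    """Iterate x_t = phi * x_{t-1}: the noise-free AR(1) impulse response."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = phi * x[t - 1]
    return x

# Root inside the unit circle: the influence of earlier values dies out.
print(abs(ar1(0.9)[-1]))   # ~ 0.9**199, essentially zero
# Root outside the unit circle: a small initial value is amplified without bound.
print(abs(ar1(1.1)[-1]))   # ~ 1.1**199, huge
```

With noise added, the first case wanders around a fixed distribution while the second diverges; the deterministic iteration isolates the role of the root.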

The case of MA(n) is in a sense the inverse of AR(n) (also formally, under the proper conditions). In that case the characteristic equation does not tell explicitly how future values depend on earlier ones, but sets constraints on their values. The larger the roots are, the stronger the constraints on future values relative to earlier ones. Any root less than one tells of a wormhole through which the process can go, blowing up as it goes. If all roots are larger than one, then no wormholes can exist.
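For MA(1), x_t = e_t + θ·e_{t-1}, the "wormhole" can be made visible by trying to recover the shocks via e_t = x_t − θ·e_{t-1}: an error in the guessed initial shock decays when the characteristic root −1/θ lies outside the unit circle (|θ| < 1), and blows up when it lies inside (|θ| > 1). A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
e = rng.standard_normal(n)  # the true shocks

def recover_shocks(x, theta, e0_guess=0.0):
    """Invert an MA(1) series x_t = e_t + theta * e_{t-1} back to its shocks,
    starting from a (deliberately wrong) guess for the unobserved initial shock."""
    e_hat = np.empty(len(x))
    prev = e0_guess
    for t in range(len(x)):
        e_hat[t] = x[t] - theta * prev
        prev = e_hat[t]
    return e_hat

for theta in (0.5, 2.0):
    # Build the MA(1) series from the known shocks (taking e_{-1} = 0).
    x = e + theta * np.concatenate(([0.0], e[:-1]))
    # The initial-guess error propagates as (-theta)^t: decay or blowout.
    err = recover_shocks(x, theta, e0_guess=1.0) - e
    print(theta, abs(err[-1]))
```

For θ = 0.5 the recovery error vanishes (the constraint binds future values to earlier ones); for θ = 2.0 the same recursion blows up, which is the wormhole.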

What I find odd is (a) the insistence upon statistical tests, and (b) the presumption that ARIMA models suffice, given their distributional assumptions. In fairness, I need to admit I’m an unabashed Bayesian, so that will color my views. How, by the way, can you do ARIMA without detrending? I’ve always found that aspect of ARIMA puzzling, because in any nontrivial system you independently commit to building a trend model of some kind.

Over the long term, CO2 emissions, for one, will be stationary because eventually we’ll run out of fossil carbon. Eventually could be quite a long time, though. But CO2 emissions aren’t a random process. We have a fairly good handle on the amount of fossil CO2 released into the atmosphere, as well as on any change in CO2 from land use/land cover changes. The econometricians who claim that the atmospheric CO2 concentration time series is I(2) while the temperature series is I(1), so you can’t cointegrate them, are misapplying the statistical tests. You can’t get a valid result from a unit root test when there is an underlying non-linear deterministic trend. A time series calculated from a cubic polynomial in t, for example, will test as I(2) with drift. Testing for unit roots, or fitting an ARIMA model to a series, without detrending assumes the conclusion that there is, in fact, no non-linear underlying deterministic trend.
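The cubic example is easy to check with plain differencing (no actual unit root test run here, just the structure that fools one): the first difference of a cubic is quadratic in t, the second is still linear in t, and only the third is constant, so machinery that stops at second differences sees what looks like I(2) with drift.

```python
import numpy as np

t = np.arange(50, dtype=float)
x = 0.01 * t**3          # purely deterministic cubic "trend", no noise at all

d1 = np.diff(x)          # quadratic in t -- still trending
d2 = np.diff(x, n=2)     # linear in t   -- a "drift" that itself keeps growing
d3 = np.diff(x, n=3)     # constant      -- only the third difference is flat

print(d2[:3], d2[-3:])   # second difference still grows with t
print(d3[:3], d3[-3:])   # third difference is the same constant everywhere
```

Nothing random is involved, yet a differencing-based test applied to this series (plus a little noise) would report two unit roots with drift rather than the deterministic polynomial actually generating it.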

Take a classic example of a non-stationary process: a tank where a small volume of water is removed or added randomly. The water level in the tank will be non-stationary. However, to make the example more like the climate, take a tank with a small hole in the bottom. The flow out the bottom will be proportional to the level in the tank. For a constant flow rate into the tank, and assuming an infinitely tall tank, there will be a constant level in the tank. Now if we vary the flow into the tank randomly while maintaining a constant average flow, i.e. no non-stationary forcing, the level in the tank will still be a stationary process. The hole in the tank is analogous to the Planck feedback, where emission to space is proportional to temperature.
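The tank maps onto an AR(1)-style recursion: level_t = level_{t-1} + inflow_t − k·level_{t-1}, where k is the hole size. A sketch with made-up numbers, comparing the leaky tank (stationary) with a sealed tank (random walk):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
inflow = 1.0 + 0.1 * rng.standard_normal(n)   # random inflow, constant mean

# Tank with a hole: outflow proportional to level (k is the hole size).
k = 0.1
level_hole = np.empty(n)
level_hole[0] = inflow[0] / k                  # start near the steady state
for t in range(1, n):
    level_hole[t] = level_hole[t - 1] + inflow[t] - k * level_hole[t - 1]

# Sealed tank: pure accumulation of the random part -- a random walk.
level_sealed = np.cumsum(inflow - inflow.mean())

print(level_hole[-5:])     # hovers around the steady state, mean inflow / k = 10
print(level_sealed.std())  # spread keeps growing with n: non-stationary
```

The leaky tank stays near its steady state with bounded scatter, while the sealed tank wanders arbitrarily far, even though both receive exactly the same stationary forcing; that is the Planck-feedback point.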
