
Archive for the ‘Ocean Physics’ Category

FitzGerald et al 2008:

Sea-level rise (SLR) poses a particularly ominous threat because 10% of the world’s population (634 million people) lives in low-lying coastal regions within 10 m elevation of sea level (McGranahan et al. 2007). Much of this population resides in portions of 17 of the world’s 30 largest cities, including Mumbai, India; Shanghai, China; Jakarta, Indonesia; Bangkok, Thailand; London; and New York.

In the last article – Sinking Megacities – we saw that some of these cities are sinking due to ground water depletion. To those megacities, this is a much more serious threat than global sea level rise (probably why we see so many marches and protests about ground water depletion).

The paper continues:

..The potential loss of life in low-lying areas is even more graphically illustrated by the 1970 Bhola cyclone that traveled northward through the Bay of Bengal producing a 12-m-high wall of water that drowned a half million people in East Pakistan (now Bangladesh) (Garrison 2005).

In Bangladesh, storms and cyclones are much more of a threat than sea level rise. Here is Karim and Mimura (2008) listing the serious cyclones over the last 60 years:

From Karim and Mimura 2008

Figure 1 – Click to expand

There is an interesting World Bank Report from 2011. First on floods:

In an average year, nearly one quarter of Bangladesh is inundated, with more than three-fifths of land area at risk of floods of varying intensity (Ahmed and Mirza 2000). Every four or five years, a severe flood occurs during the monsoon season, submerging more than three-fifths of the land..

The most recent exceptional flood, which occurred in 2007, inundated 62,300 km² or 42 percent of total land area, causing 1,110 deaths and affecting 14 million people; 2.1 million ha of standing crop land were submerged, 85,000 houses completely destroyed, and 31,533 km of roads damaged. Estimated asset losses from this one event totaled US$1.1 billion (BWDB 2007).

Flooding in Bangladesh results from a complex set of factors, key among which are extremely low and flat topography, uncertain transboundary flow, heavy monsoon rainfall, and high vulnerability to tidal waves and congested drainage channels. Two-thirds of Bangladesh’s land area is less than 5 m above sea level. Each year, an average flow of 1,350 billion m³ of water from the GBM [Ganges, Brahmaputra, and Meghna] basin drains through the country.

From World Bank 2011

Figure 2

I recommend this World Bank report – it is very interesting, and it gives some idea of the costs of mitigating floods. These problems are already present: floods are a regular occurrence, some mitigation has already taken place, and more mitigation continues.

I read the entire report, and all I could find on sea level was that rising sea levels would exacerbate the problems already faced from storm surges (p. 6):

Increase in ocean surface temperature and rising sea levels are likely to intensify cyclonic storm surges and further increase the depth and extent of storm surge induced coastal inundation.

However, the projections indicate that sea level rise is much less of a problem than possible increases in future storm surges and future flooding – and much less of a problem than current storm surges and current flooding. We will look at floods and storm surges in future articles.

In the report it’s clear that floods and storms are already major problems. Sea level is harder to analyze. Trying to account for a sea level rise of 0.3m by 2050 when severe storm surges are already 5-10m is not going to make much of a difference. If we had accurate prediction of storm surges, to +/- 0.3m, then sea level rise of 0.3m should definitely be accounted for. But we don’t have anything like that kind of accuracy.

Well, they do some calculations of adaptation against storm surges for projected changes up to 2050:

Under the baseline scenario, the adaptation costs total $2.46 billion. In a changing climate, the additional adaptation cost totals US$892 million.

In essence the question is “what is the storm surge for a once-in-10-year storm in 2050?” (I’m sure Bangladesh would really prefer to build protection against a once-in-100-year storm). An extra $1bn for future problems, or a total of about $3.4bn to cover existing and future problems, seems like money that would be very well spent.

Nicholls and Cazenave (2010), in relation to the susceptible coastlines of Asia and Africa, comment on adaptation:

Many impact studies do not consider adaptation, and hence determine worst-case impacts. Yet, the history of the human relationship with the coast is one of an increasing capacity to adapt to adverse change. In addition, the world’s populated coasts became increasingly managed and engineered over the 20th century. The subsiding cities discussed above all remain protected to date, despite large relative SLR.

Analysis based on benefit-cost methods show that protection would be widespread as well-populated coastal areas have a high value and actual impacts would only be a small fraction of the potential impacts, even assuming high-SLR (>1 m/century) scenarios. This suggests that the common assumption of a widespread forced retreat from the shore in the face of SLR is not inevitable. In many densely populated coastal areas, communities advanced the coast seaward via land claim owing to the high value of land (e.g., Singapore).

Yet, protection often attracts new development in low lying areas, which may not be desirable, and coastal defense failures have occurred, such as New Orleans in 2005. Hence, we must choose between protection, accommodation, and planned retreat adaptation options. This choice is both technical and sociopolitical, addressing which measures are desirable, affordable, and sustainable in the long term. Adaptation remains a major uncertainty concerning the actual impacts of SLR.

In the World Bank 2011 report, in chapter 4, after their analysis on risks and costs of storm-induced inundations in 2050 resulting from projected higher cyclonic wind speeds and a projected increase in sea level of 0.27m, they comment, p.24:

As a cautionary note, it should be noted that this analysis did not address the out-migration from coastal zones that a rise in sea level and intensified cyclonic storm surges might induce.

In fact the cost data assumes population growth in the vulnerable regions.

Likewise, here is Hinkel et al (2014):

Coastal flood damages are expected to increase significantly during the 21st century as sea levels rise and socioeconomic development increases the number of people and value of assets in the coastal floodplain.

[Emphasis added].

This assumption bias creates an interpretation challenge. It would be useful to see notes to the effect: “If the population migrates away from this area due to the higher risk, instead the cost will be $X assuming a reduction of Y% in population in this region by 2050“. This extra item of data would create a useful contrast and I’m guessing that we would see impact assessments reduce by a factor of 5 or 10.

It is difficult to see realistic global sea level changes, even to the end of the century, having a big impact on Bangladesh compared with their current problems of annual flooding and frequent large storm surges. Of course, adding an extra 0.5m to the sea level doesn’t improve the situation, but it is an order of magnitude smaller than storm surges.

The adaptation costs estimated by the World Bank to protect against storm surges (protection that is already required, but at least a work in progress) seem moderate.

Lastly, I wasn’t able to find a detailed elevation map (with, say, 0.5m resolution); the ones I found graded elevation with respect to sea level in fairly coarse steps. I’m sure the information exists, but it may be proprietary (in GIS data, for example):

Figure 3 – Click to expand

I have to admit that I believed something like 25% of the Bangladesh population lived around 1.0m or less above current sea level. This map says that the 0-3m area is quite small. If anyone has a better resolution map I will post it up.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

References

Coastal Impacts Due to Sea-Level Rise, Duncan M. FitzGerald et al, Annual Rev. Earth Planet. Sci. (2008)

Impacts of climate change and sea-level rise on cyclonic storm surge floods in Bangladesh, Mohammed Fazlul Karim & Nobuo Mimura, Global Environmental Change (2008) – free paper

The Cost of Adapting to Extreme Weather Events in a Changing Climate – Bangladesh, World Bank (2011) – free report

Sea-Level Rise and Its Impact on Coastal Zones, Robert J Nicholls & Anny Cazenave, Science (2010) – free paper

Coastal flood damage and adaptation costs under 21st century sea-level rise, Jochen Hinkel et al, PNAS (2014) – free paper



In Impacts – VIII – Sea level 3 – USA I suggested this conclusion:

So the cost of sea level rise for 2100 in the US seems to be a close to zero cost problem.

Probably the provocative way I wrote the conclusion confused some people. I should have said that it is a very expensive problem – but not a problem that society should pay for, given that anyone moving to the coast since 2005 at the latest would have known that future sea level rise was considered a major problem. By 2100, the youngest people still living right on the sea front who bought property there before 2005 would be at least 115 years old.

The idea is that “externalities”, as economists call them, should be paid for by the creators of the problem, not by the people who incur it. In this case, the “victims” are people who ignored the evidence and moved to the coast anyway. Are they still victims? That was my point.

Well, what about outside the US?

Some mega cities have huge problems. Here is Nicholls 2011:

Coastal areas constitute important habitats, and they contain a large and growing population, much of it located in economic centers such as London, New York, Tokyo, Shanghai, Mumbai, and Lagos. The range of coastal hazards includes climate-induced sea level rise, a long-term threat that demands broad response.

Global sea levels rose 17 cm through the twentieth century, and are likely to rise more rapidly through the twenty-first century when a rise of more than 1 m is possible.

In some locations, these changes may be exacerbated by

(1) increases in storminess due to climate change, although this scenario is less certain
(2) widespread human-induced subsidence due to ground fluid withdrawal from, and drainage of, susceptible soils, especially in deltas.

Subsidence?

Over the twentieth century, the parts of Tokyo and Osaka built on deltaic areas subsided up to 5 m and 3 m, respectively, a large part of Shanghai subsided up to 3 m, and Bangkok subsided up to 2 m.

This human-induced subsidence can be mitigated by stopping shallow, subsurface fluid withdrawals and managing water levels, but natural “background” rates of subsidence will continue, and RSLR will still exceed global trends in these areas. A combination of policies to mitigate subsidence has been instituted in the four delta cities mentioned above, combined with improved flood defenses and pumped drainage systems designed to avoid submergence and/or frequent flooding.

In contrast, Jakarta and Metro Manila are subsiding significantly, with maximum subsidence of 4 m and 1 m to date, respectively (e.g., Rodolfo and Siringan, 2006; Ward et al., 2011), but little systematic policy response is in place in either city, and future flooding problems are anticipated.

Subsidence graphic:

From Nicholls 2011

Figure 1

To put these figures in context, sea level rise from 1900-2000 was about 0.2m and, according to the latest IPCC report, the forecast sea level rise by 2100 might be around an additional 0.5m (for RCP6.0 – see the earlier article). In light of the idea that global society should pay for the problems it causes, perhaps the problems of Shanghai, Bangkok and other sinking cities are not global problems?

Here is Wang et al from 2012:

Shanghai is low-lying, with an elevation of 3–4 m. A quarter of the area lies below 3 m. The city’s flood-control walls are currently more than 6 m high. However, given the trend of sea level rise and land subsidence, this is inadequate. Shanghai is frequently affected by extreme tropical storm surges. The risk of flooding from overtopping is considerable..

..From 1921 to 1965, the average cumulative subsidence of the city center was 1.76 m, with a maximum of 2.63 m. From 1966 to 1985, a monitoring network was established and subsidence was mitigated through artificial recharge. Land subsidence was stabilized at an average of 0.9 mm/year. As a result of rapid urban development and large-scale construction projects between 1986 and 1997, subsidence of the downtown area increased rapidly, at an average rate of 10.2 mm/year..

..In 2100, sea level rise and land subsidence will be far greater than before. Sea level rise is estimated to be 43 cm, while land subsidence is estimated to be 3–229 cm, and neotectonic subsidence is estimated to be 14 cm. Flooding will be severe in 2100 (Fig. 8).

[Note: I rounded the data in the last quoted paragraph to whole centimeters from the paper’s values quoted to 0.01cm – for example, 43cm instead of the paper’s 43.31cm, etc.]
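Adding up the quoted components for 2100: 43cm (global sea level rise) + 14cm (neotectonic subsidence) + 3-229cm (land subsidence) gives a total relative rise of roughly 60-286cm. At the high-subsidence end, the global component is only about 15% of the total.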

So for Shanghai at least global sea level rise is not really the problem.

Given that I don’t pay much attention to media outlets I probably missed the big Marches against Ground Water Depletion Slightly Accentuating Global Warming’s Sea Level Rise in Threatened Megacities.

As with the USA data, the question of increased storm surges coming on top of global sea level rise is still on the agenda (i.e., has not yet been discussed).

References

Planning for the impacts of sea level rise, RJ Nicholls, Oceanography (2011)

Evaluation of the combined risk of sea level rise, land subsidence, and storm surges on the coastal areas of Shanghai, China, Jun Wang, Wei Gao, Shiyuan Xu & Lizhong Yu, Climatic Change (2012)


In Part V we looked at the IPCC, an outlier organization that claimed floods, droughts and storms had not changed in a measurable way globally in the last 50-100 years (of course, some regions have seen increases and some decreases; some decades have been bad, some good).

This puts them at a disadvantage compared with the overwhelming mass of NGOs, environmental groups, media outlets and various government departments who claim the opposite, but the contrarian in me found their research too interesting to ignore. Plus, they come with references to papers in respectable journals.

We haven’t looked at future projections of these events as yet. Whenever competing effects combine to produce a result, we can expect the future outcome to be difficult to calculate. In contrast, one climate effect that we can be sure about is sea level. If the world warms, as it surely will with more GHGs, we can expect sea level to rise.

In my own mental list of “bad stuff to happen”, I had sea level rise as an obvious #1 or #2. But ideas and opinions need to be challenged and I had not really investigated the impacts.

The world is a big place and rising sea level will have different impacts in different places. Generally the media presentation on sea level is unrelentingly negative, probably following the impact of the impressive 2004 documentary directed by Roland Emmerich, and the dramatized adaptation by Al Gore in 2006 (directed by Davis Guggenheim).

Let’s start by looking at some sea level basics.

Like everything else related to climate, getting an accurate global dataset on sea level is difficult – especially when we want consistency over decades.

The world is a big place and past climatological measurements were mostly focused on collecting local weather data for the country or region in question. Satellites started measuring climate globally in the late 1970s, but satellites for sea level and mass balance didn’t begin measurements until 10-20 years ago. So, climate scientists attempt to piece together disparate data systems, to reconcile them, and to match up the results with what climate models calculate – call it “a sea level budget”.

“The budget” means balancing two sides of the equation:

  • how has sea level changed year by year and decade by decade?
  • what contributions to sea level do we calculate from the effects of warming climate?
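In symbols, a minimal sketch of that balance (AR5 itemizes the mass terms in more detail):

rate of GMSL rise ≈ thermosteric (expansion) rate + mass-addition rate (glaciers + ice sheets + changes in land water storage)

If the two sides agree within their uncertainties, the budget is said to be “closed”.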

Components of Sea Level Rise

If we imagine sea level as the level in a large bathtub it is relatively simple conceptually. As the ocean warms the level rises for two reasons:

  • warmer water expands (increasing the volume of existing mass)
  • ice melts (adding mass)

The “material properties” of water are well known and not in doubt. With lots of measurements of ocean temperature around the globe we can be relatively sure of the expansion. Ocean temperature coverage has increased over the last 100 years, especially since the Argo project started a little more than 10 years ago. But if we go back 30 years we have far fewer measurements, and usually only at the surface. If we go back 100 years we have fewer still. So there are questions and uncertainties.
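As a rough worked example of the expansion term (the numbers are generic textbook values, not taken from the papers discussed here): for a layer of thickness H warming uniformly by ΔT,

Δh ≈ αT × ΔT × H

where αT ≈ 1-3 × 10⁻⁴ K⁻¹ is the thermal expansion coefficient of seawater (it varies with temperature and pressure). Warming the top 700m by 0.1 K with αT = 2 × 10⁻⁴ K⁻¹ gives Δh ≈ 14 mm.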

Arctic sea ice melting has no impact on sea level because the ice is already floating – water or ice that is already floating doesn’t change the sea level by melting or freezing. Ice on a continent that melts and runs into the ocean increases sea level by adding mass. Some Antarctic ice shelves float on the ocean, but they are fed by ice sheets grounded on the continent of Antarctica – melt that grounded ice and it will add to ocean level.
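The floating-ice result is just Archimedes’ principle. Floating ice displaces its own weight of water:

ρice × Vice = ρwater × Vdisplaced

When the ice melts it produces a volume of water = mass / ρwater = Vdisplaced, exactly filling the “hole” it was occupying, so the level is unchanged (ignoring the second-order effect of fresh meltwater entering salty, denser seawater).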

Sea level over the last 100 years has increased by about 0.20m (about 8 inches, if we use advanced US units).

To put it into one perspective, 20,000 years ago the sea level was about 120m lower than today – this was the end of the last ice age. About 130,000 years ago the sea level was a few meters higher (no one is certain of the exact figure). This was the time of the last “interglacial” (called the Eemian interglacial).

If we melted all of Greenland’s ice sheet we would see a further 7m rise from today, and Greenland and Antarctica together would lead to a 70m rise. Over millennia (but not a century), the complete Greenland ice sheet melting is a possibility, but Antarctica is not (at around -30ºC, it is a very long way below freezing).

Complications

Why not use tide gauges to measure sea level rise? Some have been around for 100 years and a few have been around for 200 years.

There aren’t many tide gauges going back a long time, and anyway in many places the ground is moving relative to the ocean. Take Scandinavia. At the end of the last ice age Stockholm was buried under perhaps 2km of ice. No wonder Scandinavians today appear so cheerful – life is all about contrasts. As the ice melted, the load on the ground was removed and the land is “springing back” toward its pre-glacial position. So in many places around the globe the land is moving vertically relative to sea level.

In Nedre Gavle, Sweden, the land is rising twice as fast as the average global sea level rise (so relative sea level is falling). In Galveston, Texas, the land is sinking faster than sea level is rising (more than doubling the apparent sea level rise).
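The sign conventions are easy to trip over, so here is a minimal Python sketch (the rates are illustrative round numbers in the spirit of the examples above, not measured values for these sites):

```python
# Relative sea level change = global (eustatic) change minus vertical land motion.
# Positive land_uplift = land rising (post-glacial rebound);
# negative = land sinking (subsidence).

def relative_slr(gmsl_rate_mm_yr, land_uplift_mm_yr):
    """Rate of sea level change relative to the local land surface (mm/yr)."""
    return gmsl_rate_mm_yr - land_uplift_mm_yr

print(round(relative_slr(1.7, 3.4), 1))   # rebounding coast: -1.7, local sea level falling
print(round(relative_slr(1.7, -4.0), 1))  # subsiding coast: 5.7, apparent rise more than tripled
```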

That is the first complication.

The second complication is due to wind, and to local density changes from salinity. As an example, picture a constant global sea level, but with the Pacific winds changing from blowing W->E to E->W. The water will “pile up” in the west instead of the east due to the force of the wind, so relative sea level will increase in the west and decrease in the east. Likewise, local density changes from melting ice (or from ocean currents with different salinity) will shift local sea level relative to the global mean.

Here is AR5, chapter 3, p. 288:

Large-scale spatial patterns of sea level change are known to high precision only since 1993, when satellite altimetry became available.

These data have shown a persistent pattern of change since the early 1990s in the Pacific, with rates of rise in the Warm Pool of the western Pacific up to three times larger than those for GMSL, while rates over much of the eastern Pacific are near zero or negative.

The increasing sea level in the Warm Pool started shortly before the launch of TOPEX/Poseidon, and is caused by an intensification of the trade winds since the late 1980s that may be related to the Pacific Decadal Oscillation (PDO).

The lower rate of sea level rise since 1993 along the western coast of the United States has also been attributed to changes in the wind stress curl over the North Pacific associated with the PDO..

Measuring Systems

We can find a little about the new satellite systems in IPCC, AR5, chapter 3, p. 286:

Satellite radar altimeters in the 1970s and 1980s made the first nearly global observations of sea level, but these early measurements were highly uncertain and of short duration. The first precise record began with the launch of TOPEX/Poseidon (T/P) in 1992. This satellite and its successors (Jason-1, Jason-2) have provided continuous measurements of sea level variability at 10-day intervals between approximately ±66° latitude. Additional altimeters in different orbits (ERS-1, ERS-2, Envisat, Geosat Follow-on) have allowed for measurements up to ±82° latitude and at different temporal sampling (3 to 35 days), although these measurements are not as accurate as those from the T/P and Jason satellites.

Unlike tide gauges, altimetry measures sea level relative to a geodetic reference frame (classically a reference ellipsoid that coincides with the mean shape of the Earth, defined within a globally realized terrestrial reference frame) and thus will not be affected by VLM, although a small correction that depends on the area covered by the satellite (~0.3 mm yr⁻¹) must be added to account for the change in location of the ocean bottom due to GIA relative to the reference frame of the satellite (Peltier, 2001; see also Section 13.1.2).

Tide gauges and satellite altimetry measure the combined effect of ocean warming and mass changes on ocean volume. Although variations in the density related to upper-ocean salinity changes cause regional changes in sea level, when globally averaged their effect on sea level rise is an order of magnitude or more smaller than thermal effects (Lowe and Gregory, 2006).

The thermal contribution to sea level can be calculated from in situ temperature measurements (Section 3.2). It has only been possible to directly measure the mass component of sea level since the launch of the Gravity Recovery and Climate Experiment (GRACE) in 2002 (Chambers et al., 2004). Before that, estimates were based either on estimates of glacier and ice sheet mass losses or using residuals between sea level measured by altimetry or tide gauges and estimates of the thermosteric component (e.g., Willis et al., 2004; Domingues et al., 2008), which allowed for the estimation of seasonal and interannual variations as well. GIA also causes a gravitational signal in GRACE data that must be removed in order to determine present-day mass changes; this correction is of the same order of magnitude as the expected trend and is still uncertain at the 30% level (Chambers et al., 2010).

The GRACE satellite lets us see how much ice has melted into the ocean. It’s not easy to calculate this otherwise.

The fourth assessment report from the IPCC in 2007 reported that sea level rise from the Antarctic ice sheet for the previous decade was between -0.3mm/yr and +0.5mm/yr. That is, without the new satellite measurements, it was very difficult to confirm whether Antarctica had been gaining or losing ice.

Historical Sea Level Rise

From AR5, chapter 3, p. 287:

From AR5, chapter 3

Figure 1 – Click to expand

  • The top left graph shows that various researchers are fairly close in their calculations of overall sea level rise over the past 130 years
  • The bottom left graph shows that over the last 40 years the impact of melting ice has been more important than the expansion of a warmer ocean (“thermosteric component” = the effect of a warmer ocean expanding)
  • The bottom right graph shows that over the last 7 years the measurements are consistent – satellite measurement of sea level change matches the sum of mass loss (melting ice) plus an expanding ocean (the measurements from Argo turned into sea level rise).

This gives us the mean sea level. Remember that local winds, ocean currents and changes in salinity can change this trend locally.

Many people have written about the recent accelerating trends in sea level rise. Here is AR5 again, with a graph of the 18-year trend at each point in time. We can see that different researchers reach different conclusions and that the warming period in the first half of the 20th century created sea level rise comparable to today:

From AR5, chapter 3

The conclusion in AR5:

It is virtually certain that globally averaged sea level has risen over the 20th century, with a very likely mean rate between 1900 and 2010 of 1.7 [1.5 to 1.9] mm/yr and 3.2 [2.8 to 3.6] mm/yr between 1993 and 2010.

This assessment is based on high agreement among multiple studies using different methods, and from independent observing systems (tide gauges and altimetry) since 1993.

It is likely that a rate comparable to that since 1993 occurred between 1920 and 1950, possibly due to a multi-decadal climate variation, as individual tide gauges around the world and all reconstructions of GMSL show increased rates of sea level rise during this period.
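As a quick consistency check: 1.7 mm/yr sustained over the 110 years from 1900 to 2010 gives 1.7 × 110 ≈ 190mm ≈ 0.19m – consistent with the ~0.2m total rise noted earlier.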

Forecast Future Sea Level Rise

AR5, chapter 13 is the place to find predictions of the future on sea level, p. 1140:

For the period 2081–2100, compared to 1986–2005, global mean sea level rise is likely (medium confidence) to be in the 5 to 95% range of projections from process-based models, which give:

  • 0.26 to 0.55 m for RCP2.6
  • 0.32 to 0.63 m for RCP4.5
  • 0.33 to 0.63 m for RCP6.0
  • 0.45 to 0.82 m for RCP8.5

For RCP8.5, the rise by 2100 is 0.52 to 0.98 m..

We have considered the evidence for higher projections and have concluded that there is currently insufficient evidence to evaluate the probability of specific levels above the assessed likely range. Based on current understanding, only the collapse of marine-based sectors of the Antarctic ice sheet, if initiated, could cause global mean sea level to rise substantially above the likely range during the 21st century.

This potential additional contribution cannot be precisely quantified but there is medium confidence that it would not exceed several tenths of a meter of sea level rise during the 21st century.

I highlighted RCP6.0 as it seems to correspond to past development pathways with little in the way of CO2 mitigation policies. No one knows the future; this is just my pick, barring major changes from the recent past.

In the next article we will consider impacts of future sea level rise in various regions.

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

References

Observations: Oceanic Climate Change and Sea Level. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, NL Bindoff et al (2007)

Observations: Ocean. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, M Rhein et al (2013)

Sea Level Change. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, JA Church et al (2013)

Read Full Post »

In Thirteen – Terminator II we had a cursory look at the different “proxies” for temperature and ice volume/sea level. And we’ve considered some issues around dating of proxies.

There are two main proxies we have used so far to take a look back into the ice ages:

  • δ18O in deep ocean cores in the shells of foraminifera – to measure ice volume
  • δ18O in the ice in ice cores (Greenland and Antarctica) – to measure temperature

Now we want to take a closer look at the proxies themselves. It’s a necessary subject if we want to understand ice ages, because the proxies don’t actually measure what they might be assumed to measure. This is a separate issue from the dating: of ice; of gas trapped in ice; and of sediments in deep ocean cores.

If we take samples of ocean water, H2O, and measure the proportion of the oxygen isotopes, we find (Ferronsky & Polyakov 2012):

  • 16O – 99.757 %
  • 17O –   0.038%
  • 18O –   0.205%

There is another significant water isotope, Deuterium – aka, “heavy hydrogen” – where the water molecule is HDO, also written as ¹H²HO – instead of H2O.

The processes that affect ratios of HDO are similar to the processes that affect the ratios of H218O, and consequently either isotope ratio can provide a temperature proxy for ice cores. A value of δD equates, very roughly, to 10x a value of δ18O, so mentally you can use this ratio to convert from δ18O to δD (see note 1).

In Note 2 I’ve included some comments on the Dole effect, which is the relationship between the ocean isotopic composition and the atmospheric oxygen isotopic composition. It isn’t directly relevant to the discussion of proxies here, because the ocean is the massive reservoir of 18O and the amount in the atmosphere is very small in comparison (1/1000). However, it might be of interest to some readers and we will return to the atmospheric value later when looking at dating of Antarctic ice cores.

Terminology and Definitions

The isotope ratio, 18O/16O, of ocean water = 2.005 ‰ – consistent with the ~0.2% abundance of 18O given above. This is adopted as a reference, known as Vienna Standard Mean Ocean Water (VSMOW). So, with respect to VSMOW, the δ18O of ocean water = 0 – it’s just a definition. The change is shown as δ, the Greek letter delta, very commonly used in maths and physics to mean “change”.

The values of isotopes are usually expressed in terms of changes from the norm, that is, from the absolute standard. And because the changes are quite small they are expressed as parts per thousand = per mil = ‰, instead of percent, %.
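Written out explicitly (this is the standard definition, not specific to any one paper):

δ18O = [ (18O/16O)sample / (18O/16O)VSMOW – 1 ] × 1000 ‰

so a sample with the same ratio as VSMOW has δ18O = 0, and a sample depleted by 5% has δ18O = -50‰.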

So as δ18O changes from 0 (ocean water) to -50‰ (typically the lowest value of ice in Antarctica), the proportion of 18O goes from 0.20% (2.0‰) to 0.19% (1.9‰).

If the terminology is confusing think of the above example as a 5% change. What is 5% of 20? Answer is 1; and 20 – 1 = 19. So the above example just says if we reduce the small amount, 2 parts per thousand of 18O by 5% we end up with 1.9 parts per thousand.
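The same bookkeeping in a few lines of Python (a sketch – the function names are mine, for illustration):

```python
R_VSMOW = 2.005e-3  # 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta_to_ratio(delta_permil, r_std=R_VSMOW):
    """Convert a delta value (per mil) to an absolute isotope ratio."""
    return r_std * (1 + delta_permil / 1000.0)

def ratio_to_delta(ratio, r_std=R_VSMOW):
    """Convert an absolute isotope ratio to a delta value (per mil)."""
    return (ratio / r_std - 1) * 1000.0

print(delta_to_ratio(-50.0))      # ~1.905e-3, i.e. ~1.9 parts per thousand
print(ratio_to_delta(1.905e-3))   # ~ -49.9 per mil
```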

Here is a graph that links the values together:

From Hoefs 2009

Figure 1

Fractionation, or Why Ice Sheets are So Light

We’ve seen this graph before – the δ18O (of ice) in Greenland (NGRIP) and Antarctica (EDML) ice sheets against time:

From EPICA 2006

Figure 2

Note that the values of δ18O from Antarctica (EDML – top line) through the last 150 kyrs are from about -40 to -52 ‰. And the values from Greenland (NGRIP – black line in middle section) are from about -32 to -44 ‰.

There are some standard explanations around – like this link – but I’m not sure the graphic alone quite explains it, unless you understand the subject already..

If we measure the 18O concentration of a body of water, then we measure the 18O concentration of the water vapor above it, we find that the water vapor value has 18O at about -10 ‰ compared with the body of water. We write this as δ18O = -10 ‰. That is, the water vapor is a little lighter, isotopically speaking, than the ocean water.

The processes (fractionation) that cause this are easy to reproduce in the lab:

  • during evaporation, the lighter isotopes evaporate preferentially
  • during precipitation, the heavier isotopes precipitate preferentially

(See note 3).

So let’s consider the journey of a parcel of water vapor evaporated somewhere near the equator. The water vapor is a little reduced in 18O (compared with the ocean) due to the evaporation process. As the parcel of air travels away from the equator it rises and cools and some of the water vapor condenses. The initial rain takes proportionately more 18O than is in the parcel – so the parcel of air gets depleted in 18O. It keeps moving away from the equator, the air gets progressively colder, it keeps raining out, and the further it goes the less the proportion of 18O remains in the parcel of air. By the time precipitation forms in polar regions the water or ice is very light isotopically, that is, δ18O is the most negative it can get.
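This progressive rain-out is classically modeled as Rayleigh distillation. Here is a minimal Python sketch under simplifying assumptions – a single equilibrium fractionation factor (the 25ºC value from note 3, even though condensation aloft happens at lower temperatures, where fractionation is stronger), and no kinetic effects:

```python
# Rayleigh distillation: as a fraction f of the original vapor remains,
# the remaining vapor has R = R0 * f**(alpha - 1), where alpha is the
# liquid-vapor equilibrium fractionation factor.

alpha  = 1.0092    # for 18O at 25 C (note 3); larger at colder temperatures
delta0 = -10.0     # initial vapor vs VSMOW, per mil (the evaporation offset)

def rayleigh_delta(f, delta0=delta0, alpha=alpha):
    """delta-18O of the remaining vapor after a fraction (1 - f) has rained out."""
    return (1000.0 + delta0) * f ** (alpha - 1.0) - 1000.0

for f in (1.0, 0.5, 0.1, 0.01):
    print(f, round(rayleigh_delta(f), 1))
```

Even this crude sketch takes the vapor from -10‰ down to about -51‰ by the time 99% of the original vapor has rained out – the right ballpark for Antarctic snow, though the real story involves temperature-dependent fractionation and air-mass mixing.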

As a very simplistic idea of water vapor transport, this explains why the ice sheets in Greenland and Antarctica have isotopic values that are very low in 18O. Let’s take a look at some data to see how well such a simplistic idea holds up..

The isotopic composition of precipitation:

From Gat 2010

Figure 3 – Click to Enlarge

We can see the broad result represented quite well – the further we are in the direction of the poles the lower the isotopic composition of precipitation.

In contrast, when we look at local results in some detail we don’t see such a tidy picture. Here are some results from Rindsberger et al (1990) from central and northern Israel:

From Rindsberger et al 1990

Figure 4

From Rindsberger et al 1990

Figure 5

The authors comment:

It is quite surprising that the seasonally averaged isotopic composition of precipitation converges to a rather well-defined value, in spite of the large differences in the δ value of the individual precipitation events which show a range of 12‰ in δ18O.. At Bet-Dagan.. from which we have a long history.. the amount weighted annual average is δ18O = -5.07 ‰ ± 0.62 ‰ for the 19 year period of 1965-86. Indeed the scatter of ± 0.6‰ in the 19-year long series is to a significant degree the result of a 4-year period with lower δ values, namely the years 1971-75 when the averaged values were δ18O = -5.7 ‰ ± 0.2 ‰. That period was one of worldwide climate anomalies. Evidently the synoptic pattern associated with the precipitation events controls both the mean isotopic values of the precipitation and its variability.

The seminal 1964 paper by Willi Dansgaard is well worth a read for a good overview of the subject:

As pointed out.. one cannot use the composition of the individual rain as a direct measure of the condensation temperature. Nevertheless, it has been possible to show a simple linear correlation between the annual mean values of the surface temperature and the δ18O content in high latitude, non-continental precipitation. The main reason is that the scattering of the individual precipitation compositions, caused by the influence of numerous meteorological parameters, is smoothed out when comparing average compositions at various locations over a sufficiently long period of time (a whole number of years).

The somewhat revised and extended correlation is shown in fig. 3..

From Dansgaard 1964

Figure 6

So we appear to have a nice tidy picture when looking at annual means, a little bit like the (article) figure 3 from Gat’s 2010 textbook.

Before “muddying the waters” a little, let’s have a quick look at ocean values.

Ocean δ18O

We can see that the ocean, as we might expect, is much more homogenous, especially the deep ocean. Note that these results are δD (think, about 10x the value of δ18O):

From Ferronsky & Polyakov (2012)

Figure 7 – Click to enlarge

And some surface water values of δD (and also salinity), where we see a lot more variation, again as we might expect:

From Ferronsky & Polyakov 2012

Figure 8

If we do a quick back-of-the-envelope calculation – using the fact that the sea level change between the last glacial maximum (LGM) and the current interglacial was about 120m, and that the average ocean depth is 3680m – we expect a glacial-interglacial change in ocean δ18O of about 1.5 ‰.
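Spelling out the arithmetic – and picking, for illustration, a mean ice-sheet value of δice ≈ -45‰ (the assumed ice value is itself uncertain, as the Mix & Ruddiman extract below shows): removing a layer of water equivalent to 120m from an ocean of mean depth 3680m, and locking it up in ice sheets, shifts the ocean by roughly

Δδocean ≈ (120 / 3680) × (δocean – δice) ≈ (120 / 3680) × 45‰ ≈ 1.5‰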

This is why the foraminifera near the bottom of the ocean, capturing 18O from the ocean, are recording ice volume, whereas the ice cores are recording atmospheric temperatures.

Note as well that during the glacial, with more ice locked up in ice sheets, the value of ocean δ18O will be higher. So colder atmospheric temperatures relate to lower values of δ18O in precipitation, but – due to the increase in ice, depleted in 18O – higher values of ocean δ18O.

Muddying the Waters

Hoefs 2009 gives a good summary of the different factors in isotopic precipitation:

The first detailed evaluation of the equilibrium and nonequilibrium factors that determine the isotopic composition of precipitation was published by Dansgaard (1964). He demonstrated that the observed geographic distribution in isotope composition is related to a number of environmental parameters that characterize a given sampling site, such as latitude, altitude, distance to the coast, amount of precipitation, and surface air temperature.

Out of these, two factors are of special significance: temperature and the amount of precipitation. The best temperature correlation is observed in continental regions nearer to the poles, whereas the correlation with amount of rainfall is most pronounced in tropical regions as shown in Fig. 3.15.

The apparent link between local surface air temperature and the isotope composition of precipitation is of special interest mainly because of the potential importance of stable isotopes as palaeoclimatic indicators. The amount effect is ascribed to gradual saturation of air below the cloud, which diminishes any shift to higher δ18O-values caused by evaporation during precipitation.

[Emphasis added]

From Hoefs 2009

Figure 9

The points that Hoefs make indicate some of the problems relating to using δ18O as the temperature proxy. We have competing influences that depend on the source and journey of the air parcel responsible for the precipitation. What if circulation changes?

For readers who have followed the past discussions here on water vapor (e.g., see Clouds & Water Vapor – Part Two) this is a similar kind of story. With water vapor, there is a very clear relationship between ocean temperature and absolute humidity, so long as we consider the boundary layer. But what happens when the air rises high above that – then the amount of water vapor at any location in the atmosphere is dependent on the past journey of air, and as a result the amount of water vapor in the atmosphere depends on large scale circulation and large scale circulation changes.

The same question arises with isotopes and precipitation.

The ubiquitous Jean Jouzel and his colleagues (including Willi Dansgaard) from their 1997 paper:

In Greenland there are significant differences between temperature records from the East coast and the West coast which are still evident in 30 yr smoothed records. The isotopic records from the interior of Greenland do not appear to follow consistently the temperature variations recorded at either the east coast or the west coast..

This behavior may reflect the alternating modes of the North Atlantic Oscillation..

They [simple models] are, however, limited to the study of idealized clouds and cannot account for the complexity of large convective systems, such as those occurring in tropical and equatorial regions. Despite such limitations, simple isotopic models are appropriate to explain the main characteristics of δD and δ18O in precipitation, at least in middle and high latitudes where the precipitation is not predominantly produced by large convective systems.

Indeed, their ability to correctly simulate the present-day temperature-isotope relationships in those regions has been the main justification of the standard practice of using the present day spatial slope to interpret the isotopic data in terms of records of past temperature changes.

Notice that, at least for Antarctica, data and simple models agree only with respect to the temperature of formation of the precipitation, estimated by the temperature just above the inversion layer, and not with respect to the surface temperature, which owing to a strong inversion is much lower..

Thus one can easily see that using the spatial slope as a surrogate of the temporal slope strictly holds true only if the characteristics of the source have remained constant through time.

[Emphases added]

If all the precipitation occurs during warm summer months, for example, the “annual δ18O” will naturally reflect a temperature warmer than Ts [annual mean]..

If major changes in seasonality occur between climates, such as a shift from summer-dominated to winter- dominated precipitation, the impact on the isotope signal could be large..it is the temperature during the precipitation events that is imprinted in the isotopic signal.

Second, the formation of an inversion layer of cold air up to several hundred meters thick over polar ice sheets makes the temperature of formation of precipitation warmer than the temperature at the surface of the ice sheet. Inversion forms under a clear sky.. but even in winter it is destroyed rapidly if thick cloud moves over a site..

As a consequence of precipitation intermittency and of the existence of an inversion layer, the isotope record is only a discrete and biased sampling of the surface temperature and even of the temperature at the atmospheric level where the precipitation forms. Current interpretation of paleodata implicitly assumes that this bias is not affected by climate change itself.

Now onto the oceans, surely much simpler, given the massive well-mixed reservoir of 18O?

Mix & Ruddiman (1984):

The oxygen-isotopic composition of calcite is dependent on both the temperature and the isotopic composition of the water in which it is precipitated

..Because he [Shackleton] analyzed benthonic, instead of planktonic, species he could assume minimal temperature change (limited by the freezing point of deep-ocean water). Using this constraint, he inferred that most of the oxygen-isotope signal in foraminifera must be caused by changes in the isotopic composition of seawater related to changing ice volume, that temperature changes are a secondary effect, and that the isotopic composition of mean glacier ice must have been about -30 ‰.

This estimate has generally been accepted, although other estimates of the isotopic composition have been made by Craig (-17‰); Eriksson (-25‰), Weyl (-40‰) and Dansgaard & Tauber (≤ -30‰)

..Although Shackleton’s interpretation of the benthonic isotope record as an ice-volume/sea- level proxy is widely quoted, there is considerable disagreement between ice-volume and sea- level estimates based on δ18O and those based on direct indicators of local sea level. A change in δ18O of 1.6‰ at δ(ice) = – 35‰ suggests a sea-level change of 165 m.

..In addition, the effect of deep-ocean temperature changes on benthonic isotope records is not well constrained. Benthonic δ18O curves with amplitudes up to 2.2 ‰ exist (Shackleton, 1977; Duplessy et al., 1980; Ruddiman and McIntyre, 1981) which seem to require both large ice- volume and temperature effects for their explanation.
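The quoted 165m can be checked with the same mass balance as the earlier back-of-the-envelope calculation: Δ(sea level) ≈ Δδocean × H / |δice| ≈ 1.6‰ × 3680m / 35‰ ≈ 170m – close to Mix & Ruddiman’s figure (the exact number depends on the assumed mean ocean depth and mean ice composition).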

Many other heavyweights in the field have explained similar problems.

We will return to both of these questions in the next article.

Conclusion

Understanding the basics of isotopic changes in water and water vapor is essential to understanding the main proxies for past temperatures and past ice volumes. Previously we looked at problems relating to dating of the proxies; in this article we have looked at the proxies themselves.

There is good evidence that current values of isotopes in precipitation and ocean values give us a consistent picture that we can largely understand. The question about the past is more problematic.

I started looking seriously at proxies as a means to perhaps understand the discrepancies for key dates of ice age terminations between radiometric dating and ocean cores (see Thirteen – Terminator II). Sometimes the more you know, the less you understand..

Articles in the Series

Part One – An introduction

Part Two – Lorenz – one point of view from the exceptional E.N. Lorenz

Part Three – Hays, Imbrie & Shackleton – how everyone got onto the Milankovitch theory

Part Four – Understanding Orbits, Seasons and Stuff – how the wobbles and movements of the earth’s orbit affect incoming solar radiation

Part Five – Obliquity & Precession Changes – and in a bit more detail

Part Six – “Hypotheses Abound” – lots of different theories that confusingly go by the same name

Part Seven – GCM I – early work with climate models to try and get “perennial snow cover” at high latitudes to start an ice age around 116,000 years ago

Part Seven and a Half – Mindmap – my mind map at that time, with many of the papers I have been reviewing and categorizing plus key extracts from those papers

Part Eight – GCM II – more recent work from the “noughties” – GCM results plus EMIC (earth models of intermediate complexity) again trying to produce perennial snow cover

Part Nine – GCM III – very recent work from 2012, a full GCM, with reduced spatial resolution and speeding up external forcings by a factors of 10, modeling the last 120 kyrs

Part Ten – GCM IV – very recent work from 2012, a high resolution GCM called CCSM4, producing glacial inception at 115 kyrs

Pop Quiz: End of An Ice Age – a chance for people to test their ideas about whether solar insolation is the factor that ended the last ice age

Eleven – End of the Last Ice age – latest data showing relationship between Southern Hemisphere temperatures, global temperatures and CO2

Twelve – GCM V – Ice Age Termination – very recent work from He et al 2013, using a high resolution GCM (CCSM3) to analyze the end of the last ice age and the complex link between Antarctic and Greenland

Thirteen – Terminator II – looking at the date of Termination II, the end of the penultimate ice age – and implications for the cause of Termination II

Fourteen – Concepts & HD Data – getting a conceptual feel for the impacts of obliquity and precession, and some ice age datasets in high resolution

Fifteen – Roe vs Huybers – reviewing In Defence of Milankovitch, by Gerard Roe

Sixteen – Roe vs Huybers II – comparing the results if we take the Huybers dataset and tie the last termination to the date implied by various radiometric dating

Eighteen – “Probably Nonlinearity” of Unknown Origin – what is believed and what is put forward as evidence for the theory that ice age terminations were caused by orbital changes

Nineteen – Ice Sheet Models I – looking at the state of ice sheet models

References

Isotopes of the Earth’s Hydrosphere, VI Ferronsky & VA Polyakov, Springer (2012)

Isotope Hydrology – A Study of the Water Cycle, Joel R Gat, Imperial College Press (2010)

Stable Isotope Geochemistry, Jochen Hoefs, Springer (2009)

Patterns of the isotopic composition of precipitation in time and space: data from the Israeli storm water collection program, M Rindsberger, Sh Jaffe, Sh Rahamim and JR Gat, Tellus (1990) – free paper

Stable isotopes in precipitation, Willi Dansgaard, Tellus (1964) – free paper

Validity of the temperature reconstruction from water isotopes in ice cores, J Jouzel, RB Alley, KM Cuffey, W Dansgaard, P Grootes, G Hoffmann, SJ Johnsen, RD Koster, D Peel, CA Shuman, M Stievenard, M Stuiver, J White, Journal of Geophysical Research (1997) – free paper

Oxygen Isotope Analyses and Pleistocene Ice Volumes, Mix & Ruddiman, Quaternary Research (1984)  – free paper

– and on the Dole effect, only covered in Note 2:

The Dole effect and its variations during the last 130,000 years as measured in the Vostok ice core, Michael Bender, Todd Sowers, Laurent Labeyrie, Global Biogeochemical Cycles (1994) – free paper

A model of the Earth’s Dole effect, Georg Hoffmann, Matthias Cuntz, Christine Weber, Philippe Ciais, Pierre Friedlingstein, Martin Heimann, Jean Jouzel, Jörg Kaduk, Ernst Maier-Reimer, Ulrike Seibt & Katharina Six, Global Biogeochemical Cycles (2004) – free paper

The isotopic composition of atmospheric oxygen Boaz Luz & Eugeni Barkan, Global Biogeochemical Cycles (2011) – free paper

Notes

Note 1: There is a relationship between δ18O and δD which is linked to the difference in vapor pressures between H2O and HDO in one case and H216O and H218O in the other case.

δD = 8 δ18O + 10 – known as the Global Meteoric Water Line.

The equation is more of a guide and real values vary sufficiently that I’m not really clear about its value. There are lengthy discussions of it and the variations from it in Ferronsky & Polyakov.

Note 2: The Dole effect

When we measure atmospheric oxygen, we find that δ18O = 23.5 ‰ with respect to the oceans (VSMOW) – this is the Dole effect.

So, oxygen in the atmosphere has a greater proportion of 18O than the ocean.

Why?

How do the atmosphere and ocean exchange oxygen? In essence, photosynthesis turns sunlight + water (H2O) + carbon dioxide (CO2) –> sugar + oxygen (O2).

Respiration turns sugar + oxygen –> water + carbon dioxide + energy

The isotopic composition of the water in photosynthesis affects the resulting isotopic composition in the atmospheric oxygen.

The reason the Dole effect exists is well understood, but the reason why the value comes out at 23.5‰ is still under investigation. This is because the result is the global aggregate of lots of different processes. So we might understand the individual processes quite well, but that doesn’t mean the global value can be calculated accurately.

It is also the case that δ18O of atmospheric O2 has varied in the past – as revealed first of all in the Vostok ice core from Antarctica.

Michael Bender and his colleagues had a go at calculating the value from first principles in 1994. As they explain (see below), although their result might seem quite close to the actual number, it is not a very successful result at all: respiratory fractionation alone accounts for about 20‰, and the remaining processes should take you from there to the observed 23.5‰, but their calculation only reaches 20.8‰.

Bender et al 1994:

The δ18O of O2.. reflects the global responses of the land and marine biospheres to climate change, albeit in a complex manner.. The magnitude of the Dole effect mainly reflects the isotopic composition of O2 produced by marine and terrestrial photosynthesis, as well as the extent to which the heavy isotope is discriminated against during respiration..

..Over the time period of interest here, photosynthesis and respiration are the most important reactions producing and consuming O2. The isotopic composition of O2 in air must therefore be understood in terms of isotope fractionation associated with these reactions.

The δ18O of O2 produced by photosynthesis is similar to that of the source water. The δ18O of O2 produced by marine plants is thus 0‰. The δ18O of O2 produced on the continents has been estimated to lie between +4 and +8‰. These elevated δ18O values are the result of elevated leaf water δ18O values resulting from evapotranspiration.

..The calculated value for the Dole effect is then the productivity-weighted values of the terrestrial and marine Dole effects minus the stratospheric diminution: +20.8‰. This value is considerably less than observed (23.5‰). The difference between the expected value and the observed value reflects errors in our estimates and, conceivably, unrecognized processes.
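The bookkeeping in that last step is just a productivity-weighted average minus the stratospheric term. A minimal Python sketch – every number below is a placeholder for illustration, not a value from Bender et al (only the +20.8‰ total is quoted above):

```python
# Productivity-weighted Dole effect, minus the stratospheric diminution.
# All inputs are illustrative placeholders, not Bender et al's values.

terr_frac, terr_dole = 0.55, 22.0   # fraction of global O2 production, per mil
mar_frac,  mar_dole  = 0.45, 19.0   # fractions must sum to 1
strat_diminution     = 0.4          # per mil

dole_effect = terr_frac * terr_dole + mar_frac * mar_dole - strat_diminution
print(f"{dole_effect:.2f} per mil")  # 20.25 with these placeholder inputs
```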

Then they assess the Vostok record, where the main question is less about why the Dole effect apparently varies with precession (period of about 20 kyrs) than about why the variation is so small. After all, if marine and terrestrial biosphere changes are significant from interglacial to glacial then surely those changes would show up more strongly in the Dole effect:

Why has the Dole effect been so constant? Answering this question is impossible at the present time, but we can probably recognize the key influences..

They conclude:

Our ability to explain the magnitude of the contemporary Dole effect is a measure of our understanding of the global cycles of oxygen and water. A variety of recent studies have improved our understanding of many of the principles governing oxygen isotope fractionation during photosynthesis and respiration.. However, our attempt to quantitatively account for the Dole effect in terms of these principles was not very successful.. The agreement is considerably worse than it might appear given the fact that respiratory isotope fractionation alone must account for ~20‰ of the stationary enrichment of the 18O of O2 compared with seawater..

..[On the Vostok record] Our results show that variation in the Dole effect have been relatively small during most of the last glacial-interglacial cycle. These small changes are not consistent with large glacial increases in global oceanic productivity.

[Emphasis added]

Georg Hoffmann and his colleagues had another bash 10 years later and did a fair bit better:

The Earth’s Dole effect describes the isotopic 18O/16O-enrichment of atmospheric oxygen with respect to ocean water, amounting under today’s conditions to 23.5‰. We have developed a model of the Earth’s Dole effect by combining the results of three- dimensional models of the oceanic and terrestrial carbon and oxygen cycles with results of atmospheric general circulation models (AGCMs) with built-in water isotope diagnostics.

We obtain a range from 22.4‰ to 23.3‰ for the isotopic enrichment of atmospheric oxygen. We estimate a stronger contribution to the global Dole effect by the terrestrial relative to the marine biosphere in contrast to previous studies. This is primarily caused by a modeled high leaf water enrichment of 5–6‰. Leaf water enrichment rises by ~1‰ to 6–7‰ when we use it to fit the observed 23.5‰ of the global Dole effect.

Very recently Luz & Barkan (2011), backed up by lots of new experimental work produced a slightly closer estimate with some revisions of the Hoffman et al results:

Based on the new information on the biogeochemical mechanisms involved in the global oxygen cycle, as well as new and more precise experimental data on oxygen isotopic fractionation in various processes obtained over the last 15 years, we have reevaluated the components of the Dole effect. Our new observations on marine oxygen isotope effects, as well as new findings on photosynthetic fractionation by marine organisms, lead to the important conclusion that the marine, terrestrial and the global Dole effects are of similar magnitudes.

This result allows answering a long‐standing unresolved question on why the magnitude of the Dole effect of the last glacial maximum is so similar to the present value despite enormous environmental differences between the two periods. The answer is simple: if DEmar [marine Dole effect] and DEterr [terrestrial Dole effect] are similar, there is no reason to expect considerable variations in the magnitude of the Dole effect as the result of variations in the ratio terrestrial to marine O2 production.

Finally, the widely accepted view that the magnitude of the Dole effect is controlled by the ratio of land‐to‐sea productivity must be changed. Instead of the land‐sea control, past variations in the Dole effect are more likely the result of changes in low‐latitude hydrology and, perhaps, in structure of marine phytoplankton communities.

[Emphasis added]

Note 3:

Jochen Hoefs (2009):

Under equilibrium conditions at 25ºC, the fractionation factors for evaporating water are 1.0092 for 18O and 1.074 for D. However under natural conditions, the actual isotopic composition of water is more negative than the predicted equilibrium values due to kinetic effects.

The discussion of kinetic effects gets a little involved and I don’t think it is really necessary to understand – the values of isotopic fractionation during evaporation and condensation are well understood. The confounding factors around what the proxies really measure relate to the journey (i.e., temperature history) and mixing of the various air parcels, as well as which air temperature the precipitation event records – the surface temperature, the inversion temperature, or both?

Read Full Post »

In the last article we had a look at the depth of the ocean's "mixed layer" (the mixed layer depth, MLD) and its implications for the successful measurement of climate sensitivity (assuming such a parameter exists as a constant).

In Part One I created a Matlab model which reproduced the same problems as Spencer & Braswell (2008) had found. This model had one layer  (an “ocean slab” model) to represent the MLD with a “noise” flux into the deeper ocean (and a radiative noise flux at top of atmosphere). Murphy & Forster claimed that longer time periods require an MLD of increased depth to “model” the extra heat flow into the deeper ocean over time:

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010). For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations.

This seems like it might make sense – if we wanted to keep a "zero dimensional model". But it's questionable whether the model retains any value with this "fudge". So, because heat actually moves from the mixed layer into the deeper ocean (rather than the mixed layer increasing in depth), I instead enhanced the model to create a heat flux from the MLD down through a number of ocean layers, with a parameter called the vertical eddy diffusivity determining this heat flux.

So the model is now a 1D model with a parameterized approach to ocean convection.

Eddy Diffusivity

The concept here is an analogy with conductivity, for the case where convection rather than conduction is the primary mover of heat.

Heat flow by conduction is governed by a material property called conductivity and by the temperature difference. Changes in temperature are governed by heat flow and by the heat capacity. The result is this equation for reference and interest – so don’t worry if you don’t understand it:

∂T/∂t = α.∂²T/∂z² – the 1-d version (see note 1)

where T = temperature, t = time, α = thermal diffusivity and z = depth

What it says in almost plain English is that the change in temperature with respect to time is equal to the thermal diffusivity times the change in gradient of temperature with depth. Don't worry if that's not clear (and there is an explanation of the simple steps required to calculate this in note 1).

Now the thermal diffusivity, α:

α = k/(cp.ρ), where k = conductivity, cp = heat capacity and ρ = density

So, an important bit to understand..

  • if the conductivity is high and the heat capacity is low then temperature can change quickly
  • if the conductivity is high and the heat capacity is high then it slows down temperature change, and
  • if the conductivity is low and the heat capacity is high then temperature takes much longer to change
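
As a quick aside, we can plug numbers into α = k/(cp.ρ) for still water – a sketch with my own round values, not from any of the papers below:

% Molecular thermal diffusivity of still water (illustrative round numbers)
k   = 0.6;      % conductivity of still water, W/m.K
cp  = 4200;     % specific heat capacity, J/K.kg
rho = 1000;     % density, kg/m^3
alpha = k/(cp*rho)   % = 1.4e-7 m^2/s - conduction alone is hopelessly slow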

Many researchers have attempted to measure an average value for eddy diffusivity in the ocean (and in lakes). The concept here, as explained in Part Two, is that turbulent motions of the ocean move heat much more effectively than conduction. The value can't be calculated from first principles because that would mean solving the problem of turbulence, which is one of the toughest problems in physics. Instead it has to be estimated from measurements.

There is an inherent problem with eddy diffusivity for vertical heat transfer that we will come back to shortly.

There is also a minor problem of notation that is "solved" here by changing the notation. Usually conductivity is written as "k". However, most papers on eddy diffusivity write diffusivity as "k", sometimes "K", sometimes "κ" (Greek 'kappa') – creating potential confusion, so I revert to "α". And to make it clear that it is the convective value rather than the conductive value, I use αeddy. And for the equivalent parameter to conductivity, keddy.

keddy = αeddycpρ

because cp = 4200 J/K.kg and ρ ≈ 1000 kg/m³:

keddy = 4.2 x 10⁶ αeddy – it's useful to be able to see what the diffusivity means in terms of an equivalent "conductivity" type parameter

Measurements of Eddy Diffusivity

Oeschger et al (1975):

α is an apparent global eddy diffusion coefficient which helps to reproduce an average transport phenomenon consisting of a series of distinct and overlapping mechanisms.

Oeschger and his co-workers studied the problem via the diffusion into the ocean of 14C from nuclear weapons testing.

The range they calculated was αeddy = 1 x 10⁻⁴ – 1.8 x 10⁻⁴ m²/s.

This equates to keddy = 420 – 760 W/m.K, and by comparison, the conductivity of still water, k = 0.6 W/m.K – making convection around 1,000 times more effective at moving heat vertically through the ocean.
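
A quick cross-check of that conversion, using the same cp and ρ as before:

% k_eddy = alpha_eddy * cp * rho for Oeschger's range (a trivial check)
cp = 4200; rho = 1000;                % J/K.kg, kg/m^3
alpha_eddy = [1e-4 1.8e-4];           % m^2/s
k_eddy = alpha_eddy * cp * rho        % = [420 756] W/m.K, as quoted above
ratio  = k_eddy / 0.6                 % ~700-1,300 times still water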

Broecker et al (1980) took a similar approach to estimating this value and commented:

We do not mean to imply that the process of vertical eddy mixing actually occurs within the body of the main oceanic thermocline. Indeed, the values we require are an order of magnitude greater than those permitted by conventional oceanographic wisdom (see Garrett, 1979, for summary).

The vertical eddy coefficients used here should rather be thought of as parameters that take into account all the processes that transfer tracers across density horizons. In addition to vertical mixing by eddies, these include mixing induced by sediment friction at the ocean margins and mixing along the surface in the regions where density horizons outcrop.

Their calculation, like Oeschger’s, used a simple model with the observed values plugged in to estimate the parameter:

Anyone familiar with the water mass structure and ventilation dynamics of the ocean will quickly realize that the box-diffusion model is by no means a realistic representation. No simple modification to the model will substantially improve the situation.

To do so we must take a giant step in complexity to a new generation of models that attempt to account for the actual geometry of ventilation of the sea. We are as yet not in a position to do this in a serious way. At least a decade will pass before a realistic ocean model can be developed.

The values they calculated for eddy diffusivity were broken up into different regions:

  • αeddy(equatorial) = 3.5 x 10⁻⁵ m²/s
  • αeddy(temperate) = 2.0 x 10⁻⁴ m²/s
  • αeddy(polar) = 3.0 x 10⁻⁴ m²/s

We will use these values from Broecker to see what happens to the measurement problems of climate sensitivity when used in my simple model.

These two papers were cited by Hansen et al in their 1985 paper, with the values for vertical eddy diffusivity used to develop the "effective mixed depth" of the ocean.

In reviewing these papers and searching for more recent work in the field, I tapped into a rich vein of research that will be the subject of another day.

First, Ledwell et al (1998) who measured eddy diffusivity via SF6 that they injected into the ocean:

The diapycnal eddy diffusivity K estimated for the first 6 months was 0.12 ± 0.02 x 10⁻⁴ m²/s, while for the subsequent 24 months it was 0.17 ± 0.02 x 10⁻⁴ m²/s.

[Note: units changed from cm²/s into m²/s for consistency]

It is worth reading their comment on this aspect of ocean dynamics. (Note that isopycnal = constant density surfaces and diapycnal = across isopycnal):

The circulation of the ocean is severely constrained by density stratification. A water parcel cannot move from one surface of constant potential density to another without changing its salinity or its potential temperature. There are virtually no sources of heat outside the sunlit zone and away from the bottom where heat diffuses from the lithosphere, except for the interesting hydrothermal vents in special regions. The sources of salinity changes are similarly confined to the boundaries of the ocean. If water in the interior is to change potential density at all, it must be by mixing across density surfaces (diapycnal mixing) or by stirring and mixing of water of different potential temperature and salinity along isopycnal surfaces (isopycnal mixing).

Most inferences of dispersion parameters have been made from observations of the large-scale fields or from measurements of dissipation rates at very small scales. Unambiguously direct measurements of the mixing have been rare. Because of the stratification of the ocean, isopycnal mixing involves very different processes than diapycnal mixing, extending to much greater length scales. A direct approach to the study of both isopycnal and diapycnal mixing is to release a tracer and measure its subsequent dispersal. Such an experiment, lasting 30 months and involving more than 10⁵ km² of ocean, is the subject of this paper.

From Jayne (2009):

For example, the Community Climate Simulation Model (CCSM) ocean component model uses a form similar to Eq. (1), but with an upper-ocean value of 0.1 x 10⁻⁴ m²/s and a deep-ocean value of 1.0 x 10⁻⁴ m²/s, with the transition depth at 1000 m.

However, there is no observational evidence to suggest that the mixing in the ocean is horizontally uniform, and indeed there is significant evidence that it is heterogeneous with spatial variations of several orders of magnitude in its intensity (Polzin et al. 1997; Ganachaud 2003).

More on eddy diffusivity measurements in another article – the parameter has a significant impact on modeling of the ocean in GCMs and there is a lot of current research into this subject.

Eddy Diffusivity and Buoyancy Gradient

Sarmiento et al (1976) measured isotopes near the ocean floor:

Two naturally occurring isotopes can be applied to the determination of the rate of vertical turbulent mixing in the deep sea: 222Rn (half-life 3.824 days) and 228Ra (half-life 5.75 years). In this paper we discuss the results from fourteen 222Rn and two 228Ra profiles obtained as part of the GEOSECS program.

From these results we conclude that the most important factor influencing the vertical eddy diffusivity is the buoyancy gradient [(g/ρ)(∂ρpot/∂z)]. The vertical diffusivity shows an inverse proportionality to the buoyancy gradient.

Their paper is very much about the measurements and calculations of the deeper ocean, but is relevant for anywhere in the ocean, and helps explain why the different values for different regions were obtained by Broecker, as we saw earlier. (Prof. Wallace S. Broecker was a co-author on this paper as well, and has authored/co-authored hundreds of papers on the ocean).

What is the buoyancy gradient and why does it matter?

Cold fluids sink and hot fluids rise. This is because cold substances contract and so are more dense. So in general, in the ocean, the colder water is below and the warmer water above. Probably everyone knows this.

The buoyancy gradient is a measure of how strong this effect is. The change in density with depth determines how resistant the ocean is to being overturned. If the ocean was totally stable no heat would ever penetrate below the mixed layer. But it does. And if the ocean was totally stable then the measurements of 14C from nuclear testing would be zero below the mixed layer.

But it is not surprising that the more stable the ocean – i.e., the stronger the buoyancy gradient – the less heat diffuses down by turbulent motion.

And this is why the estimates by Broecker shown earlier have a much lower value of diffusivity for the tropics than for the poles. In general the poles are where deep convection takes place – lots of cold water sinks, mixing the ocean – and the tropics are where much weaker upwelling takes place – because the ocean surface is strongly heated. This is part of the large scale motion of the ocean, known as the thermohaline circulation. More on this another day.

Now water is largely incompressible, which means that the density gradient is determined only by temperature and salinity. This creates the problem that eddy diffusivity is a value which is not only parameterized, but also dependent on the vertical temperature difference in the ocean.

Heat flow also depends on temperature difference, but with the opposite relationship. This is not something to untangle today. Today we will just see what happens to our simple model when we use the best estimates of vertical eddy diffusivity.

Modeling, Non-Linearity and Climate Sensitivity Measurement Problems

Murphy & Forster agreed in part with Spencer & Braswell about the variation in radiative noise from CERES measurements. I quote at length, because the Murphy & Forster paper is not freely available:

For the parameter N, SB08 use a random daily shortwave flux scaled so that the standard deviation of monthly averages of outgoing radiation (N – λT) is 1.3 W/m².

They base this on the standard deviation of CERES shortwave data between March 2000 and December 2005 for the oceans between 20°N and 20°S.

We have analyzed the same dataset and find that, after the seasonal cycle and slow changes in forcing are removed, the standard deviation of monthly means of the shortwave radiation is 1.24 W/m², close to the 1.3 W/m² specified by SB08. However, longwave (infrared) radiation changes the energy budget just as effectively from the earth as shortwave radiation (reflected sunlight). Cloud systems that might induce random fluctuations in reflected sunlight also change outgoing longwave radiation. In addition, the feedback parameter λ is due to both longwave and shortwave radiation.

Modeled total outgoing radiation should therefore be compared with the observed sum of longwave and shortwave outgoing radiation, not just the shortwave component. The standard deviation of the sum of longwave and shortwave radiation in the same CERES dataset is 0.94 W/m². Even this is an upper limit, since imperfect spatial sampling and instrument noise contribute to the standard deviation.

[Note I change their α (climate feedback) to λ for consistency with previous articles].

And they continue:

We therefore use 0.94 W/m² as an upper limit to the standard deviation of outgoing radiation over the tropical oceans. For comparison, the standard deviation of the global CERES outgoing radiation is about 0.55 W/m².

All of these points seem valid (however, I am still in the process of examining CERES data, and can’t comment on their actual values of standard deviation. Apart from the minor challenge of extracting the data from the netCDF format there is a lot to examine. A lot of data and a lot of issues surrounding data quality).

However, it raised an interesting idea about non-linearity. Readers who remember Part One will know that as radiative noise increases and ocean MLD decreases, the measurement problem gets worse. And as the radiative noise decreases and ocean MLD increases, the measurement problem goes away.

If we average global radiative noise and global MLD, plug these values into a zero-dimensional model and get minimal measurement problem what does this mean?

Due to non-linearity, it tells us nothing.

Averaging the inputs, applying them to a global model (i.e., a zero-dimensional model) and calculating λest (from the regression) gets very different results from applying the inputs separately to each region, averaging the results and calculating λest.

I tested this with a simple model – I created two regions, one 10% of the surface area, the other 90%. In the larger region the MLD was 200m and the radiative noise was zero; in the smaller region the MLD was 20m and the (standard deviation of) radiative noise was varied from 0 – 2. The temperature and radiative flux were converted into an area-weighted time series and the regression produced large deviations from the real value of λ.

A similar run on a global model with an MLD of 180m and radiative noise of 0-0.2 shows an accurate assessment of λ.

This is to be expected of course.
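
For readers who want to see the shape of this test, here is a minimal sketch along the lines just described – not the actual model code, and with illustrative parameter values:

% Two-region slab experiment: shallow noisy region (10% of area) plus
% deep quiet region (90%). Outgoing radiation R = lambda*T - N is
% regressed against area-weighted temperature to give lambda_est.
lambda = 3.0;                       % true feedback parameter, W/m^2.K
rho_cp = 4.2e6;                     % volumetric heat capacity, J/m^3.K
dt = 86400; ndays = 3600;           % daily steps, ten 360-day years
mld = [20 200]; area = [0.1 0.9];   % depths (m) and area fractions
sigmaN = [1.0 0];                   % std dev of daily radiative noise, W/m^2
T = zeros(ndays,2); R = zeros(ndays,2);
for r = 1:2
    c = rho_cp*mld(r); Tr = 0;
    for n = 1:ndays
        N = sigmaN(r)*randn;             % radiative noise (e.g. from clouds)
        Tr = Tr + dt*(N - lambda*Tr)/c;  % slab energy balance
        T(n,r) = Tr;
        R(n,r) = lambda*Tr - N;          % outgoing radiation anomaly
    end
end
Tg = T*area'; Rg = R*area';              % area-weighted "global" series
Tm = mean(reshape(Tg,30,[]))'; Rm = mean(reshape(Rg,30,[]))';  % 30-day means
p = polyfit(Tm, Rm, 1);
fprintf('lambda_est = %.2f (true value 3.0)\n', p(1))

In runs of this sketch λest comes out far from 3 – the shallow, noisy region dominates the regression even though it covers only 10% of the area.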

So with this in mind I tested the new 1D model with different values of ocean depth, eddy diffusivity and radiative noise, and with an AR(1) model for the radiative noise. I used values for the tropical region as this is clearly the area most likely to upset the measurement – shallow MLD, higher radiative noise and weaker eddy diffusivity.

As best as I could determine from de Boyer Montegut’s paper, the average MLD for the 20°N – 20°S region is approximately 30m.

Here are the results using Oeschger’s value of eddy diffusivity for the tropics and the tropical value of radiative noise from MF2010 – varying ocean depth around 30m and the value of the AR(1) model for radiative noise:

Figure 1

For reference, as it’s hard to read off the graph, the value at 30m and φ=0.5 is λest = 2.3.

Using the current CCSM value of eddy diffusivity for the upper ocean:

Figure 2

For reference, the value at 30m and φ=0.5 is λest = 0.2 (compared with the real value of 3.0).

Note that these values are only for one region, not for the whole globe.

Another important point is that I have used the radiative noise value as the standard deviation of daily radiative noise. I have started to dig into CERES data to see whether such a value can be calculated, and also what typical value of autoregressive parameter should be used (and what kind of ARMA model), but this might take some time.
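
For anyone unfamiliar with the terminology, an AR(1) series with autoregressive parameter φ and a chosen standard deviation takes only a couple of lines to generate – a sketch with arbitrary values:

% AR(1) radiative noise: N(t) = phi*N(t-1) + e(t), with the white-noise
% innovations scaled so the stationary std deviation of N equals sigma
phi = 0.5; sigma = 0.94; ndays = 3650;     % illustrative values only
e = sigma*sqrt(1 - phi^2)*randn(ndays,1);  % innovations
N = filter(1, [1 -phi], e);                % the AR(1) series

The larger φ is, the more the noise persists from one day to the next.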

Yet smaller values of eddy diffusivity are possible for smaller regions, according to Jochum (2009). This would likely cause the problems of estimating climate sensitivity to become worse.

Simple Models

Murphy & Forster comment:

Although highly simplified, a single box model of the earth has some pedagogic value. One must remember that the heat capacity c and feedback parameter λ are not really constants, since heat penetrates more deeply into the ocean on long time scales and there are fast and slow climate feedbacks (Knutti et al. 2008).

It is tempting to add a few more boxes to account for land, ocean, different latitudes, and so forth. Adding more boxes to an energy balance model can be problematic because one must ensure that the boxes are connected in a physically consistent way. A good option is to instead consider a global climate model that has many boxes connected in a physically consistent manner.

The point being that no one believes a slab model of the ocean gives really useful results. Spencer & Braswell likewise don't believe that the slab model is in any way an accurate model of the climate.

They used such a model just to demonstrate a possible problem. Murphy & Forster's criticism doesn't seem to have solved the problem of "can we measure climate sensitivity?"

Or at least, it appears easy to show that slightly different enhancements of the simple model demonstrate continued problems in measuring climate sensitivity – due to the impact of radiative noise in the climate system.

Conclusion

I have produced a simple model and apparently demonstrated continued climate sensitivity measurement problems. This is in contrast to Murphy & Forster who took a different approach and found that the problem went away. However, my model has a more realistic approach to moving heat from the mixed layer into the ocean depths than theirs.

My model does have the drawback that the massive army of Science of Doom model testers and quality control champions are all away on their Xmas break. So the model might be incorrectly coded.

It’s also likely that someone else can come along and take a slightly enhanced version of this model and make the problem vanish.

I have used values for MLD and eddy diffusivity that seem to represent real-world values but I have no idea as to the correct values for standard deviation and auto-correlation of daily radiative noise (or appropriate ARMA model). These values have a big impact on the climate sensitivity measurement problem for reasons explained in Part One.

A useful approach to determining the effect of radiative noise on climate sensitivity measurement might be to use a coupled atmosphere-ocean GCM with a known climate sensitivity and an innovative way of removing radiative noise. These kind of experiments are done all the time to isolate one effect or one parameter.

Perhaps someone has already done this specific test?

I see other potential problems in measuring climate sensitivity. Here is one obvious problem – as the temperature of the mixed layer increases with continued increases in radiative forcing the buoyancy gradient increases and the eddy diffusivity reduces. We can calculate radiative forcing due to “greenhouse” gases quite accurately and therefore remove it from the regression analysis (see Spencer & Braswell 2008 for more on this). But we can’t calculate the change in eddy diffusivity and heat loss to the deeper ocean. This adds another “correlated” term that seems impossible to disentangle from the climate sensitivity calculation.

An alternative way of looking at this is that climate sensitivity might not be a constant – as already noted in Part One.

Articles in this Series

Measuring Climate Sensitivity – Part One

Measuring Climate Sensitivity – Part Two – Mixed Layer Depths

References

Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008) – FREE

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

A box diffusion model to study the carbon dioxide exchange in nature, Oeschger et al, Tellus (1975)

Modeling the carbon system, Broecker et al, Radiocarbon (1980) – FREE

Climate response times: dependence on climate sensitivity and ocean mixing, Hansen et al, Science (1985)

The study of mixing in the ocean: A brief history, MC Gregg, Oceanography (1991) – FREE

Spatial Variability of Turbulent Mixing in the Abyssal Ocean, Polzin et al, Science (1997) – FREE

The Impact of Abyssal Mixing Parameterizations in an Ocean General Circulation Model, Steven R. Jayne, Journal of Physical Oceanography (2009)

The relationship between vertical eddy diffusion and buoyancy gradient in the deep sea, Sarmiento et al, Earth & Planetary Science Letters (1976)

Mixing of a tracer in the pycnocline, Ledwell et al, JGR (1998)

Impact of latitudinal variations in vertical diffusivity on climate simulations, Jochum, JGR (2009) – FREE

Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology, de Boyer Montegut et al, JGR (2004)

Notes

Note 1: The 1D version is really:

∂T / ∂t = ∂/∂z (α.∂T/∂z)

due to the fact that α can be a function of z (and definitely is in the case of the ocean).

Although this looks tricky – and it is tricky to find analytical solutions – solving the 1D version numerically is very straightforward and anyone can do it.

In plain English it looks something like:

– Heat flow into cell X = keddy x (temperature of cell X-1 – temperature of cell X) / distance between the cells

– Heat flow out of cell X = keddy x (temperature of cell X – temperature of cell X+1) / distance between the cells

– Change in temperature of cell X = (Heat flow into cell X – Heat flow out of cell X) x time / heat capacity of the cell
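
And here is a minimal Matlab version of that recipe – my own sketch, not the model used for the results above – solving ∂T/∂t = ∂/∂z (α.∂T/∂z) explicitly for a column of water:

% Explicit finite-difference diffusion down a 1-d ocean column.
% Stability requires dt < dz^2/(2*alpha) - easily met here.
nz = 50; dz = 10;                   % 50 layers of 10m
dt = 3600; nstep = 24*365;          % hourly steps for one year
alpha = 1e-4*ones(nz,1);            % eddy diffusivity, m^2/s (illustrative)
T = 4 + 16*exp(-(0:nz-1)'*dz/100);  % initial profile: warm top, cold deep
for n = 1:nstep
    aface = 0.5*(alpha(1:nz-1) + alpha(2:nz));  % diffusivity at interfaces
    F = aface.*(T(1:nz-1) - T(2:nz))/dz;  % heat flow between cells
                                          % (divided by rho*cp, which cancels)
    T(2:nz-1) = T(2:nz-1) + dt*(F(1:nz-2) - F(2:nz-1))/dz;  % in minus out
    T(1)  = T(1)  - dt*F(1)/dz;           % top cell only loses heat downward
    T(nz) = T(nz) + dt*F(nz-1)/dz;        % bottom cell only gains from above
end
plot(T, -(0:nz-1)'*dz), xlabel('T (°C)'), ylabel('depth (m)')

Add a surface boundary condition (solar heating, radiation, latent and sensible heat) to the top cell and this is roughly the structure of the 1D model discussed in this article.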

Note 2: I am in the process of examining CERES data. Apart from the challenge of extracting the data from the netCDF format there is a lot to examine. A lot of data and a lot of issues surrounding data quality.

Read Full Post »

In Measuring Climate Sensitivity – Part One we saw that there can be potential problems in attempting to measure the parameter called “climate sensitivity”.

Using a simple model Spencer & Braswell (2008) had demonstrated that even when the value of “climate sensitivity” is constant and known, measurement of it can be obscured for a number of reasons.

The simple model was a “slab model” of the ocean with a top of atmosphere imbalance in radiation.

Murphy & Forster (2010) criticized Spencer & Braswell for a few reasons including the value chosen for the depth of this ocean mixed layer. As the mixed layer depth increases the climate sensitivity measurement problems are greatly reduced.

First, we will consider the mixed layer in the context of that simple model. Then we will consider what it means in real life.

The Simple Model of Climate Sensitivity

The simple model used by Spencer & Braswell has a “mixed ocean layer” of depth 50m.

Figure 1

In the model the mixed layer is where all of the imbalance in top of atmosphere radiation gets absorbed.

The idea in the simple model is that the energy absorbed from the top of atmosphere gets mixed into the top layer of the ocean very quickly. In reality, as we will see, there isn’t such a thing as one layer but it is a handy approximation.

Murphy & Forster commented:

For the heat capacity parameter c, SB08 use the heat capacity of a 50-m ocean mixed layer. This is too shallow to be realistic.

Because heat slowly penetrates deeper into the ocean, an appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010).

For 80-yr global climate model runs, Gregory (2000) derived an optimum mixed layer depth of 150 m. Watterson (2000) found an initial global heat capacity equivalent to a mixed layer of 200 m and larger values for longer simulations.

Held et al. (2010) found an initial time constant τ = c/α of about four yr in the Geophysical Fluid Dynamics Laboratory global climate model. Schwartz (2007) used historical data to estimate a globally averaged mixed layer depth of 150 m, or 106 m if the earth were only ocean.

The idea is an attempt to keep the simplicity of one mixed layer for the model, but increase the depth of this mixed layer for longer time periods.

There is always a point where models – simplified versions of the real world – start to break down. This might be the case here.

The initial model was of a mixed layer of ocean, all at the same temperature because the layer is well-mixed – and with some random movement of heat between this mixed layer and the ocean depths. In a more realistic scenario, more heat flows into the deeper ocean as the length of time increases.

What Murphy & Forster are proposing is to keep the simple model and “account” for the ever increasing heat flow into the deeper ocean by using a depth of the mixed layer that is dependent on the time period.

If we do this perhaps the model will work, perhaps it won’t. By “work” we mean provide results that tell us something useful about the real world.

So I thought I would introduce some more realism (complexity) into the model and see what happened. This involves a bit of a journey.

Real Life Ocean Mixed Layer

Water is a very bad conductor of heat – as are plastic and other insulators. Good conductors of heat include metals.

However, in the ocean and the atmosphere conduction is not the primary heat transfer mechanism. It isn’t even significant. Instead, in the ocean it is convection – the bulk movement of fluids – that moves heat. Think of it like this – if you move a “parcel” of water, the heat in that parcel moves with it.

Let’s take a look at the temperature profile at the top of the ocean. Here the first graph shows temperature:

Soloviev & Lukas (1997)

Figure 2

Note that the successive plots are not at higher and higher temperatures – they are just artificially separated to make the results easier to see. During the afternoon the sun heats the top of the ocean. As a result we get a temperature gradient where the surface is hotter than a few meters down. At night and early morning the temperature gradient disappears. (No temperature gradient means that the water is all at the same temperature.)

Why is this?

Once the sun sets the ocean surface cools rapidly via radiation and convection to the atmosphere. The result is colder water, which is heavier. Heavier water sinks, so the ocean gets mixed. This same effect takes place on a larger scale for seasonal changes in temperature.

And the top of the ocean is also well mixed due to being stirred by the wind.

A comment from de Boyer Montegut and his coauthors (2004):

A striking and nearly universal feature of the open ocean is the surface mixed layer within which salinity, temperature, and density are almost vertically uniform. This oceanic mixed layer is the manifestation of the vigorous turbulent mixing processes which are active in the upper ocean.

Here is a summary graphic from the excellent Marshall & Plumb:

From Marshall & Plumb (2008)

Figure 3

There’s more on this subject in Does Back-Radiation “Heat” the Ocean? – Part Three.

How Deep is the Ocean Mixed Layer?

This is not a simple question. Partly it is a measurement problem, and partly there isn’t a sharp demarcation between the ocean mixed layer and the deeper ocean. Various researchers have made an effort to map it out.

Here is a global overview, again from Marshall & Plumb:

Figure 4

You can see that the deeper mixed layers occur in the higher latitudes.

Comment from de Boyer Montegut:

The main temporal variabilities of the MLD [mixed layer depth] are directly linked to the many processes occurring in the mixed layer (surface forcing, lateral advection, internal waves, etc), ranging from diurnal [Brainerd and Gregg, 1995] to interannual variability, including seasonal and intraseasonal variability [e.g., Kara et al., 2003a; McCreary et al., 2001]. The spatial variability of the MLD is also very large.

The MLD can be less than 20 m in the summer hemisphere, while reaching more than 500 m in the winter hemisphere in subpolar latitudes [Monterey and Levitus, 1997].

Here is a more complete map by month. Readers probably have many questions about methodology and I recommend reading the free paper:

From de Boyer Montegut et al (2004)

Figure 5 – Click for a larger image

Seeing this map definitely had me wondering about the challenge of measuring climate sensitivity. Spencer & Braswell had used 50m MLD to identify some climate sensitivity measurement problems. Murphy & Forster had reproduced their results with a much deeper MLD to demonstrate that the problems went away.

But what happens if instead we retest the basic model using the actual MLD which varies significantly by month and by latitude?

So instead of “one slab of ocean” at MLD = choose your value, we break up the globe into regions, have different values in each region each month and see what happens to climate sensitivity problems.

By the way, I also attempted to calculate the global annual (area weighted) average of MLD from the maps above, by eye. I also emailed the author of the paper to get some measurement details, but received no response.

My estimate of the data in this paper was a global annual area weighted average of 62 meters.

Trying Simple Models with Varying MLD

I updated the Matlab program from Measuring Climate Sensitivity – Part One. The globe is now broken up into 30º latitude bands, with the potential for a different value of mixed layer depth for each month of the year.

I created a number of different profiles:

Depth Type 0 – constant with month and latitude, as in the original article

Type 1 – using the values from de Boyer’s paper, as best as can be estimated from looking at the monthly maps.

Type 2 – no change each month, with scaling of 60ºN-90ºN = 100x the value for 0ºN – 30ºN, and 30ºN – 60ºN = 10x the value for 0ºN – 30ºN – similarly for the southern hemisphere.

Type 3 – alternating each month between Type 2 and its inverse, i.e., scaling of 0ºN – 30ºN = 100x the value for 60ºN-90ºN and 30ºN – 60ºN = 10x the value for 60ºN-90ºN.

Type 4 – no variation by latitude, but month 1 = 1000x month 4, month 2 = 100x month 4, month 3 = 10x month 4, repeating 3 times per year.

In each case the global annual (area weighted) average = 62m.
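
As an aside, the area weighting for 30º latitude bands is simple to compute – the surface area between two latitudes is proportional to the difference of their sines:

% Fraction of the globe's surface area in each 30-degree latitude band
edges = (-90:30:90)*pi/180;      % band edges in radians, S pole to N pole
w = diff(sin(edges))/2           % = [0.067 0.183 0.25 0.25 0.183 0.067]

The two 0º–30º bands cover half the globe between them, which is why the tropical values dominate any global average.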

Essentially types 2-4 are aimed at creating extreme situations.

Here are some results (review the original article for some of the notation), recalling that the actual climate sensitivity, λ = 3.0:

Figure 6

Figure 7 – as figure 6 without 30-day averaging

Figure 8

Figure 9

Figure 10

Figure 11

Figure 12

What’s the message from these results?

In essence, type 0 (the original) and type 1 (using actual MLDs vs latitude and month from de Boyer’s paper) are quite similar – but not exactly the same.

However, if we start varying the MLD by latitude and month in a more extreme way the results come out very differently – even though the global average MLD is the same in each case.

This demonstrates that the temporal and area variation of MLD can have a significant effect and modeling the ocean as one slab – for the purposes of this enterprise – may be risky.

Non-Linearity

We haven't considered the effect of non-linearity in these simple models. That is, what about interactions between different regions and months? If we created a yet more complex model, where heat flowed between regions dependent on the relative depths of the mixed layers, what would we find?

Losing the Plot?

Now, in case anyone has lost the plot by this stage – and it's possible that I have – don't get confused into thinking that we are evaluating GCMs and gosh aren't they simplistic.. No, GCMs have very sophisticated modeling.

What we have been doing is tracing a path that started with a paper by Spencer & Braswell. This paper used a very simple model to show that with some random daily fluctuations in top of atmosphere radiative flux, perhaps due to clouds, the measurement of climate sensitivity doesn’t match the actual climate sensitivity.

We can do this in a model – prescribe a value and then test whether we can measure it. This is where this simple model came in. It isn’t a GCM.

However, Murphy & Forster came along and said if you use a deeper mixed ocean layer (which they claim is justified) then the measurement of climate sensitivity does more or less match the actual climate sensitivity (they also had comments on the values chosen for radiative flux anomalies, a subject for another day).

What struck me was that the test model needs some significant improvement to be able to assess whether or not climate sensitivity can be measured. And this is with the caveat – if climate sensitivity is a constant.

The Next Phase – More Realistic Ocean Model

As Murphy & Forster have pointed out, the longer the time period, the more heat is “injected” into the deeper ocean from the mixed layer.

So a better model would capture this process directly, rather than just using a deeper mixed layer for a longer time period. Modeling true global ocean convection is an impossible task.

As a recap, conducted heat flow:

q” = k.ΔT/d

where q” = heat flow per unit area, k = conductivity, ΔT = temperature difference, and d = depth of layer

Take a look at Heat Transfer Basics – Part Zero for more on these basics.

For water, k = 0.6 W/m.K. So, as an example, if we have a 10ºC temperature difference across 1 km depth of water, q" = 0.006 W/m². This is tiny. Heat flow via conduction is insignificant. Convection is what moves heat in the ocean.

Many researchers have measured and estimated vertical heat flow in the ocean to come up with a value for vertical eddy diffusivity. This allows us to make some rough estimates of vertical heat flow via convection.

In the next version of the Matlab program (“in press”) the ocean is modeled with different eddy diffusivities below the mixed ocean layer to see what happens to the measurement of climate sensitivity. So far, the model comes up with wildly varying results when the eddy diffusivity is low, i.e., heat cannot easily move into the ocean depths. And it comes up with normal results when the eddy diffusivity is high, i.e., heat moves relatively quickly into the ocean depths.

Due to shortness of time, this problem has not yet been resolved. More in due course.

This article is already long enough, so the next part will cover the estimated values for eddy diffusivity – it's an interesting subject.

Conclusion

Regular readers of this blog understand that navigating to any kind of conclusion takes some time on my part. And that’s when the subject is well understood. I’m finding that the signposts on the journey to measuring climate sensitivity are confusing and hard to read.

And that said, this article hasn’t shed any more light on the measurement of climate sensitivity. Instead, we have reviewed more ways in which measurements of it might be wrong. But not conclusively.

Next up we will take a detour into eddy diffusivity, hoping in the meantime that the Matlab model problems can be resolved. Finally a more accurate model incorporating eddy diffusivity to model vertical heat flow in the ocean will show us whether or not climate sensitivity can be accurately measured.

Possibly.

Articles in this Series

Measuring Climate Sensitivity – Part One

Measuring Climate Sensitivity – Part Three – Eddy Diffusivity

References

Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration, Spencer & Braswell, Journal of Climate (2008)

On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Murphy & Forster, Journal of Climate (2010)

Observation of large diurnal warming events in the near-surface layer of the western equatorial Pacific warm pool, Soloviev & Lukas, Deep Sea Research Part I: Oceanographic Research Papers (1997)

Atmosphere, Ocean and Climate Dynamics: An Introductory Text, Marshall & Plumb, Elsevier Academic Press (2008)

Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology, de Boyer Montegut et al, JGR (2004)

Read Full Post »

In the ensuing discussion on Does Back Radiation “Heat” the Ocean? – Part Four, the subject of the cool skin of the ocean surface came up a number of times.

It’s not a simple subject, but it’s an interesting one so I’m going to plough on with it anyway.

Introduction

The ocean surface is typically something like 0.1°C – 0.6°C cooler than the temperature just below the surface. And this "skin", or ultra-thin region, is less than 1mm thick.

Here’s a diagram I posted in the comments of Does Back Radiation “Heat” the Ocean? – Part Three:

From Kawai & Wada (2007)

Figure 1

There is a lot of interest in this subject because of the question: “When we say ‘sea surface temperature’ what do we actually mean?“.

As many climate scientists note in their papers, the relevant sea surface temperature for heat transfer between ocean and atmosphere is the very surface, the skin temperature.

In figure 1 you can see that during the day the temperature increases up to the surface and then, in the skin layer, reduces again. Note that the vertical axis is a logarithmic scale.

Then at night the temperature below the skin layer is mostly all at the same temperature (isothermal). This is because the surface cools rapidly at night, and therefore becomes cooler than the water below, so sinks. This diurnal mixing can also be seen in some graphs I posted in the comments of Does Back Radiation “Heat” the Ocean? – Part Four.

Before we look at the causes, here are a series of detailed measurements from Near-surface ocean temperature by Ward (2006):

Figure 2

Note: The red text and arrow is mine, to draw attention to the lower skin temperature. The measurements on the right were taken just before midday “local solar time”. I.e., just before the sun was highest in the sky.

And in the measurements below I’ve made it a bit easier to pick out the skin temperature difference with blue text “Skin temp“. The blue value in each graph is what is identified as ΔTc in the schematic above. The time is shown as local solar time.

Figure 3

Figure 4

Figure 5

Figure 6

Figure 7

The measurements of the skin surface temperature were made by M-AERI, a passive infrared radiometric interferometer. The accuracy of the derived SSTs from M-AERI is better than 0.05 K.

Below the skin, the high-resolution temperature measurements were measured by SkinDeEP, an autonomous vertical profiler. This includes the “sub-skin” measurement, from which the sea surface temperature was subtracted to calculate ΔTc (see figure 1).

The Theory

The existence of the temperature gradient is explained by the way heat is transferred: within the bulk waters, heat transfer occurs due to turbulence, but as the surface is approached, viscous forces dominate and molecular processes prevail. Because heat transfer by molecular conduction is less efficient than by turbulence, a strong temperature gradient is established across the boundary layer.

Ward & Minnett (2001)

Away from the interface the temperature gradient is quickly destroyed by turbulent mixing. Thus the cool-skin temperature change is confined to a region of thickness δ, which is referred to as the molecular sublayer.

Fairall et al (1996)

What do they mean?

Here's an insight into what happens at fluid boundaries from a textbook that is freely available online (thanks to Dan Hughes for letting me know about it):

From "A Heat Transfer Textbook", by Prof Lienhard & Prof Lienhard (2008)

From "A Heat Transfer Textbook", by Prof Lienhard & Prof Lienhard (2008)

Figure 8

The idea behind turbulent mixing in fluids is that larger eddies “spawn” smaller eddies, which in turn spawn yet smaller eddies until you are up against an interface for that fluid (or until energy is dissipated by other effects).

In the atmosphere, for example, large scale turbulence moves energy across hundreds of kilometers. A few tens of meters above the ground you might measure eddies of a few hundred meters in size, and in the last meter above the ground, eddies might be measured in centimeters or meters, if they exist at all. And by the time we measure the fluid flow 1mm from the ground there is almost no turbulence.

For some basic background over related terms, check out Heat Transfer Basics – Convection – Part One, with some examples of fluid flowing over flat plates, boundary layers, laminar flow and turbulent flow.

Therefore, very close to a boundary the turbulent effects effectively disappear, and heat transfer is carried out via conduction. Generally, conduction is much less effective at transferring heat than turbulent movement of fluid.

A Note on Very Basic Theory

The less effectively heat can move through a body, the higher the temperature differential needed to “drive” that heat through.

This is described by the equation for conductive heat transfer, which in (relatively) plain English says:

The heat flow in W/m² is proportional to the temperature difference across the body and the “conductivity” of the body, and is inversely proportional to the distance across the body

Now during the day a significant amount of heat moves up through the ocean to the surface. This is the solar radiation absorbed below the surface. Near the surface, where turbulent mixing reduces in effectiveness, we should expect to see a larger temperature gradient.

Taking the example of 1m down, if for some reason heat was not able to move effectively from 1m to the surface, then the absorbed solar radiation would keep heating the 1m depth and its temperature would keep rising. Eventually this temperature gradient would cause greater heat flow.

An example of a flawed model where heat was not able to move effectively was given in Does Back-Radiation “Heat” the Ocean? – Part Two:

A Flawed Model

Note how the 1m & 3m depth keep increasing in temperature. See that article for more explanation.

The Skin Layer in Detail

If the temperature increases closer to the surface, why does it “change direction” in the last millimeter?

In brief, the temperature generally rises in the last few meters as you get closer to the surface because hotter fluids rise. They rise because they are less dense.

So why doesn’t that continue to the very last micron?

The surface is where (almost) all of the energy absorbed by the ocean is transferred to the atmosphere.

  • Radiation from the surface takes place from the top few microns.
  • Latent heat – evaporation of water into water vapor – is taken from the very top layer of the ocean.
  • Sensible heat is moved by conduction from the very surface into the atmosphere.

And in general the ocean is moving heat into the atmosphere, rather than the reverse. The atmosphere is usually a few degrees cooler than the ocean surface.

Because turbulent motion is reduced the closer we get to the boundary with the atmosphere, conduction is needed to transfer heat. And this needs a temperature differential.

I could write it another way – because “needing a temperature differential” isn’t the same as “getting a temperature differential”.

If the heat flow up from below cannot get through to the surface, the energy will keep “piling up” and, therefore, keep increasing the temperature. Eventually the temperature will be high enough to “drive the heat” out to the surface.
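
We can get a feel for the size of the effect with the conduction equation. Suppose the ocean loses around 200 W/m² at the surface (radiation plus latent plus sensible heat – an assumed round number) and this heat has to be conducted through a molecular sublayer of, say, half a millimeter:

% Back-of-envelope estimate of the cool skin (my own round numbers)
q = 200;       % W/m^2 - assumed net heat loss at the ocean surface
d = 0.5e-3;    % m - assumed thickness of the molecular sublayer
k = 0.6;       % W/m.K - conductivity of still water
dT = q*d/k     % = 0.17 K - the same order as the observed 0.1-0.6°C cool skin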

The Simple 1-d Model

We saw a simple 1-d model in Does Back Radiation “Heat” the Ocean? – Part Four.

Just for the purposes of checking the theory relating to skin layers here is what I did to improve on it:

1. Increased the granularity of the model – with depths for each layer of: 100μm, 300μm, 1mm, 5mm, 20mm, 50mm, 200mm, 1m, 10m, 100m (note values are the lower edge of each layer).

2. Reduced the "turbulent conductivity" values as the surface was reached – instead of one "turbulent conductivity" value (used when the layer below was warmer than the layer above), these values were reduced closer to the surface, e.g. for the 100μm layer, kt=10; for the 300μm layer, kt=10; for the 1mm layer, kt=100; for the 5mm layer, kt=1000; for the 20mm layer, kt=100,000. Then the rest were 200,000 = 2×10⁵ – the standard value used in the earlier models.

3. Reduced the time step to 5ms. This is necessary to make the model work and of course does reduce the length of run significantly.
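
In case the tiny time step seems arbitrary – an explicit scheme like this is only stable when the time step is below roughly dz²/2α for the thinnest layer, and with a 100μm top layer that limit is down in the milliseconds (a quick check using the values in point 2, noting the exact limit depends on the details of the discretization):

% Stability limit for the thinnest layer
kt = 10; cp = 4200; rho = 1000;
alpha = kt/(cp*rho);       % equivalent diffusivity, ~2.4e-6 m^2/s
dz = 100e-6;               % 100 micron top layer
dt_max = dz^2/(2*alpha)    % = a couple of milliseconds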

The results for a 30 day run showed the beginnings of a cooler skin. And the starting temperatures for the top layer down to the 20mm layer were the same. The values of kt were not "tuned" to make the model work – I just threw some values in to see what happened.

As a side note for those following the discussion from Part Four, with these changes the ocean temperature also increased in response to DLR increases.

Now I can run it for longer but the real issue is that the model is not anywhere near complex enough.

Further Reading on Complexity

There are some papers for people who want to follow this subject further. This is not a “literature review”, just some papers I found on the journey. The subject is not simple.

Free

Saunders, Peter M. (1967), The Temperature at the Ocean-Air Interface, J. Atmos. Sci.

Tu and Tsuang (2005), Cool-skin simulation by a one-column ocean model, Geophys. Res. Letters

Paywall

McAlister, E. D., and W. McLeish (1969), Heat Transfer in the Top Millimeter of the Ocean, J. Geophys. Res.

Fairall et al, reference below

GA Wick, WJ Emery, LH Kantha & P Schlussel (1996), The behavior of the bulk-skin sea surface temperature difference under varying wind speed and heat flux, Journal of Physical Oceanography

Hartmut Grassl, (1976), The dependence of the measured cool skin of the ocean on wind stress and total heat flux, Boundary Layer Meteorology

Conclusion

The temperature profile of the top mm of the ocean is a challenging subject. Tu & Tsuang say:

Generally speaking, the structure of the viscous layer is known to be related to the molecular viscosity, surface winds, and air-sea flux exchanges. Both Saunders' formulation [Saunders, 1967; Grassl, 1976; Fairall et al., 1996] and the renewal theory [Liu et al., 1979; Wick et al., 1996; Castro et al., 2003; Horrocks et al., 2003] have been developed and applied to study the cool-skin effect.

But the exact factors and processes determining the structure is still not well known.

However, despite the complexity, an understanding of the basics helps to give some insight into why the temperature profile is like it is.

I welcome commenters who can make the subject easier to understand. And also commenters who can explain the more complex elements of this subject.

References

A Heat Transfer Textbook, by Prof Lienhard & Prof Lienhard, Phlogiston Press, 3rd edition (2008)

Cool-skin and warm-layer effects on sea surface temperature, Fairall, Bradley, Godfrey, Wick, Edson & Young, Journal of Geophysical Research (1996)

Near-surface ocean temperature, Ward, Journal of Geophysical Research (2006)

An Autonomous Profiler for Near Surface Temperature Measurements, Ward & Minnett, Accepted for the Proceedings Gas Transfer at Water Surfaces 4th International Symposium (2000)

Read Full Post »

Older Posts »