In Part Five – More on Tuning & the Magic Behind the Scenes and also in the earlier Part Four we looked at the challenge of selecting parameters in climate models. A recent 2017 paper on this topic by Frédéric Hourdin and colleagues is very illuminating. One of the co-authors is Thorsten Mauritsen, the principal author of the 2012 paper we reviewed in Part Four. Another co-author is Jean-Christophe Golaz, the principal author of the 2013 paper we reviewed in Part Five.

The topics are similar but there is some interesting additional detail and commentary. The paper is freely available and, as always, I recommend reading the whole paper.

One of the key points is that climate models need to be specific about their “target” – were they trying to get the model to match recent climatology? Top of atmosphere radiation balance? The last 100 years of temperature trends? If we know that a model was developed with an eye on a particular target, then matching that target doesn’t demonstrate model skill.

Because of the uncertainties in observations and in the model formulation, the possible parameter choices are numerous and will differ from one modeling group to another. These choices should be more often considered in model intercomparison studies. The diversity of tuning choices reflects the state of our current climate understanding, observation, and modeling. It is vital that this diversity be maintained. It is, however, important that groups better communicate their tuning strategy. In particular, when comparing models on a given metric, either for model assessment or for understanding of climate mechanisms, it is essential to know whether some models used this metric as tuning target.

They comment on the paper by Jeffrey Kiehl from 2007 (referenced in The Debate is Over – 99% of Scientists believe Gravity and the Heliocentric Solar System so therefore..) which showed how models with higher sensitivity to CO2 have higher counter-balancing negative forcing from aerosols.

And later in the paper:

The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development.

Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming.

The question of developing toward the twentieth- century warming therefore is an area of vigorous debate within the community..

..The fact that some models are explicitly, or implicitly, tuned to better match the twentieth-century warming, while others may not be, clearly complicates the interpretation of the results of combined model ensembles such as CMIP. The diversity of approaches is unavoidable as individual modeling centers pursue their model development to seek their specific scientific goals.

It is, however, essential that decisions affecting forcing or feedback made during model development be transparently documented.

And so, onto another recent paper by Sumant Nigam and colleagues. They examine the temperature trends by season over the last 100 years and compare them against models. They look only at the northern hemisphere over land, due to the better temperature dataset available (compared with the southern hemisphere).

Here are the observations of the trends for each of the four seasons. I find it fascinating to see the difference between the seasonal trends:

From Nigam et al 2017

Figure 1 – Click to enlarge

Then they compare the observations to some of the models used in IPCC AR5 (from the model intercomparison project, CMIP5) – the top row is observations, and each row below is a different model. When we compare the geographical distribution of the winter-summer trend (right column) we can see that the models don’t do very well:

From Nigam et al 2017

Figure 2 – Click to enlarge

From their conclusion:

The urgent need for shifting the evaluative and diagnostic focus away from the customary annual mean toward the seasonal cycle of secular warming is manifest in the inability of the leading climate models (whose simulations inform the IPCC’s Fifth Assessment Report) to generate realistic and robust (large signal-to-noise ratio) twentieth-century winter and summer SAT trends over the northern continents. The large intra-ensemble SD of century-long SAT trends in some IPCC AR5 models (e.g., GFDL-CM3) moreover raises interesting questions: If this subset of climate models is realistic, especially in generation of ultra-low-frequency variability, is the century-long (1902–2014) linear trend in observed SAT—a one-member ensemble of the climate record—a reliable indicator of the secular warming signal?

I’ve commented a number of times in various articles – people who don’t read climate science papers often have some idea that climate scientists are monolithically opposed to questioning model results or questioning “the orthodoxy”. This is contrary to what you find if you read lots of papers. It might be that press releases that show up in The New York Times, CNN or the BBC (or pick another ideological bellwether) have some kind of monolithic sameness but this just demonstrates that no one interested in finding out anything important (apart from the weather and celebrity news) should ever watch/read media outlets.

They continue:

The relative contribution of both mechanisms to the observed seasonality in century-long SAT trends needs further assessment because of uncertainties in the diagnosis of evapotranspiration and sea level pressure from the century-long observational records. Climate system models—ideal tools for investigation of mechanisms through controlled experimentation—are unfortunately not yet ready given their inability to simulate the seasonality of trends in historical simulations.

Subversive indeed.

Their investigation digs into evapotranspiration – water vapor made available by plants, whose evaporation removes heat from the surface during the summer months.

Conclusion

“All models are wrong but some are useful” – a statement attributed to a modeler from a different profession (statistical process control) and sometimes also quoted by climate modelers.

This is always a good way to think about models. Perhaps the inability of climate models to reproduce seasonal trends is inconsequential – or perhaps it is important. Models fail on many levels. The question is why, and the answers lead to better models.

Climate science is a real science, contrary to the claims of many people who don’t read many climate science papers, because many published papers ask important and difficult questions and critique the current state of the science. That is, falsifiability is being addressed. These questions might not become media headlines, or even make it into the Summary for Policymakers in IPCC reports, but papers with these questions are not outliers.

I found both of these papers very interesting. Hourdin et al because they ask valuable questions about how models are tuned, and Nigam et al because they point out that climate models do a poor job of reproducing an important climate trend (seasonal temperature) which provides an extra level of testing for climate models.

References

Striking Seasonality in the Secular Warming of the Northern Continents: Structure and Mechanisms, Sumant Nigam et al, Journal of Climate (2017)

The Art and Science of Climate Model Tuning, Frédéric Hourdin et al, American Meteorological Society (2017) – free paper


I’ve been digging through some statistics for my own benefit.

When you read or hear a statistic that country X is generating Y% of electricity via renewables it can sound wonderful, but the headline number can conceal or overstate useful progress. A few tips for readers new to the subject:

  • Energy is not electricity. So you need to know whether the figure quoted is energy or electricity. For most developed nations, electricity accounts for something around 40% of total energy.
  • “Renewables” includes two components that are important to separate out:
    • hydroelectric – this is “tapped out” in most developed countries. If the “share of renewables” is say 30%, but hydro is 20% (i.e. 2/3 of the total renewables) then the expandable renewables are only 10%. This can help you see recent progress and extrapolate to possible future progress (different story in developing countries, but there is often a large human cost to creating hydroelectric projects)
    • biomass – if you stop burning coal and you burn wood chip instead this tips the reporting scales from “the work of Satan” to “green and renewable”, even though burning wood chip generates more CO2 emissions per unit of electricity generated. Not all biomass is like this, but as a rule of thumb, put the biomass entry into the “more investigation needed” pile before declaring victory
  • Nameplate is not actual – if you have a gas plant (designed to run all the time) the actual output will be about 90% or more of the nameplate (the maximum output under normal conditions), but if you have a wind farm the actual output across a year will be about 20% of the nameplate in Germany, 30% in Ireland and over 40% in Oklahoma. So if you read that "10GW of wind power" was added to Germany’s generating capacity you need to mentally convert that to about 2GW. Similar story for solar – there is a conversion factor (see the sketch just below this list).
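
To make the “nameplate is not actual” point concrete, here is a minimal sketch using the rough capacity factors quoted above (illustrative values only – real capacity factors vary by site and year):

```python
# Convert nameplate wind capacity to expected average output using the rough
# capacity factors quoted in the text (illustrative, not official statistics).
capacity_factors = {"Germany": 0.20, "Ireland": 0.30, "Oklahoma": 0.40}

def average_output_gw(nameplate_gw, region):
    """Expected average output (GW) for a given nameplate capacity and region."""
    return nameplate_gw * capacity_factors[region]

for region in capacity_factors:
    print(f"10 GW nameplate in {region}: ~{average_output_gw(10, region):.0f} GW average")
```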

If you mentally take account of these points when you hear an update, you will be in the 1% of journalists who could pass the literacy test on the progress of renewables. It’s an elite club.

Once again I’ll state that I’m not trying to knock renewables, I’m trying to promote “literacy”. Instead of hapless cheerleaders, think informed citizens..

So, onto recent data.

I’m using two stalwarts of energy reporting: IEA and BP.

IEA produce data to 2015 and quote useful units like electricity consumed in TWh. This is a unit of energy – a TWh is a billion kWh. You find kWh on your electricity bill.

BP produce data to 2016 – which is better – and break down renewables much better, but quote units of Mtoe – million tonnes of oil equivalent. If you delve into energy industry reports, you often find mixed together in one report: kWh/TWh (energy), GJ (energy), GW (power), tcf (volume of gas), barrels of oil, mmBtu (energy in obscure British units)..

In the case of the BP report it’s not clear to me how to convert from Mtoe to GWh – they do provide a footnote but when I do the conversion I can’t reconcile the numbers using their footnote. No doubt one of our readers has gone down this rabbit hole and can illuminate us all (?). In the meantime, I took the BP numbers in Mtoe and looked up IEA % values for 2016 in TWh and worked out a conversion factor – multiply Mtoe by 0.0045. Then cross-checked with Fraunhofer ISE for Germany. This allows us to see the BP 2016 renewables breakdown in real electricity units rather than in mythical barrels of oil.
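
For what it’s worth, here is a minimal sketch of that reconciliation approach. All the specific numbers are placeholders, not values from the BP or IEA reports, and the 11.63 TWh/Mtoe identity combined with BP’s ~38% efficiency convention is my guess at where a factor of roughly this size comes from:

```python
# Reconciling BP (Mtoe) with IEA (TWh) figures - placeholder numbers only.
bp_wind_mtoe = 18.0      # hypothetical BP figure for a country's wind output, Mtoe
iea_total_twh = 650.0    # hypothetical IEA total electricity for the same country, TWh
iea_wind_share = 0.12    # hypothetical IEA share of wind in that total

# Implied conversion factor: TWh of electricity per Mtoe as BP reports it
factor = (iea_wind_share * iea_total_twh) / bp_wind_mtoe
print(f"Implied factor: {factor:.2f} TWh per Mtoe")

# For comparison: 1 Mtoe = 11.63 TWh of primary (thermal) energy, and BP
# grosses up non-fossil electricity at ~38% efficiency, implying roughly
# 11.63 * 0.38 = ~4.4 TWh of electricity per Mtoe.
print(f"BP-convention estimate: {11.63 * 0.38:.2f} TWh per Mtoe")
```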

Another note – I’m not trying to generate exact figures. Every source has different values. Reconciling them is a big undertaking and very uninteresting work. I’m simply trying to get some perspective on actual renewables progress.

I don’t quote nuclear energy statistics in this article. It’s very low carbon emission, but not exactly “renewable”. The real reason for not including the numbers is that most developed countries are not significantly expanding their nuclear generation, and in Germany’s case are shutting it down. China is a different story, with a big nuclear expansion ongoing.

Germany v US

You would think that Germany, one of the leading lights in renewable energy, would be greatly outperforming the US on CO2 emissions reduction.

  • 2005 – 2015 German CO2 reduction = 0.9% p.a.
  • 2005 – 2015 US CO2 reduction = 1.1% p.a.

Over that time period the German population has stayed the same, while the US population has grown by about 9%, so we can adjust the US reduction to about 2% p.a. on a per capita basis (roughly 1.1% p.a. emissions decline plus 0.9% p.a. population growth).
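
A back-of-envelope check on that adjustment, using the round numbers above:

```python
# Per-capita adjustment of the US CO2 reduction rate (round numbers from the text).
years = 10
emissions_ratio = (1 - 0.011) ** years    # ~1.1% p.a. decline in total emissions
population_ratio = 1.09                   # ~9% population growth over the decade

per_capita_ratio = emissions_ratio / population_ratio
rate = 1 - per_capita_ratio ** (1 / years)
print(f"Per-capita decline: about {rate:.1%} per year")   # ~2% p.a.
```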

Now the US emissions peaked in 2005. You actually don’t need to read a report to find that out because when the US commitment to reducing CO2 emissions was announced in Paris in 2015 the commitment was a reduction “from 2005”. Being cynical about politicians never loses, and sure enough (when checking data in a report) the peak was 2005 – and the reduction from 2005 to 2015 was already about 12%.

Germany’s emissions peaked in 1990, so I believe their commitment is always referenced to 1990. The story I haven’t verified is that after the collapse of the Soviet Union and the re-unification of Germany, lots of dirty heavy industry shut down and this was a big help in emissions reductions.

The US reduction looks to be – in part – due to the embrace of natural gas due to its recent very low cost (gas produces about half the CO2 of coal for the same electricity production). This is a result of the current revolution in “unconventional gas”.

When we look at CO2 emissions per kWh in 2016 the story is also surprising:

  • Germany – 1.3 kg CO2/kWh
  • US – 1.2 kg CO2/kWh

So this tells us that the GHG efficiency of electricity generation is effectively the same in both countries, slightly better in the US.

When we look at total usage (across all electricity generation, including industry) the story is what we might expect:

  • Germany – 19 kWh per person per day
  • US – 35 kWh per person per day

This tells us that the US uses almost double the electricity per person.
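
Both of these statistics are simple ratios once you have national totals. A minimal sketch with made-up round numbers (placeholders, not the actual BP/IEA values):

```python
# Derive emissions intensity and per-capita daily usage from national totals.
# All inputs below are hypothetical placeholders.
total_co2_mt = 800.0      # national CO2 emissions, million tonnes per year
electricity_twh = 650.0   # electricity generated, TWh per year
population_m = 82.0       # population, millions

kg_per_kwh = (total_co2_mt * 1e9) / (electricity_twh * 1e9)    # Mt -> kg, TWh -> kWh
kwh_per_person_day = (electricity_twh * 1e9) / (population_m * 1e6) / 365

print(f"{kg_per_kwh:.2f} kg CO2 per kWh")
print(f"{kwh_per_person_day:.0f} kWh per person per day")
```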

Changes in Renewables

I looked up a few other countries – Denmark, the UK and Spain because they have a big push into renewables; and China to contrast a rapidly developing country. The last column in the table, Total Produced, is total electricity produced from all sources, including fossil fuels and nuclear.

From BP data

The IEA values (not shown) give lower total electricity for each country. The BP figures are electricity produced and IEA figures are electricity consumed. The solar + wind value for Germany in 2016 moves from 18% to 20% of total if I use the lower IEA total.

I also looked up electricity prices in the IEA report and while I have values for 2016, I don’t have comparable values for 2006. I couldn’t find the 2006 or 2007 version of the report. Based on a variety of websites all using different methods, quoting in different currencies and from unverified sources (so not reliable) the average consumer price in Germany has gone from about 19c/kWh to 33c/kWh from 2006-2016 (US$). The US looks almost flat, perhaps from 12 to 12.5c/kWh. UK from 14 to 21c/kWh. The IEA report didn’t give a figure for Denmark.

So Germany produces about 18% of electricity from solar + wind. Its total renewables are 30% if we include biomass, and about 21% if we don’t include it. As I mentioned at the start, biomass sometimes includes burning “renewable” wood chip instead of fossil fuels. Biomass is a (big) subject for another day with numerous problems and I haven’t looked at the breakdown.

The Denmark figure for total electricity is probably quite misleading – see the huge reduction in electricity production from 46 TWh to 30 TWh over 10 years. On wikipedia someone has provided a better breakdown, showing consumption as well and the consumption has dropped by just 4% over that time. Also 2006 appears to be a big outlier in electricity production. Denmark is a country connected to neighboring grids and generating lots of wind energy. So Denmark’s 2006 real figure for wind was about 20% of total consumed (not 14%) and has gone up to 43% over 10 years. On this basis Denmark could be at 80% of electricity generation by wind in 2035.

Confusion

When looking for electricity price changes, here was a random site I came across, Economists at Large:

By June 16 this year electricity generated from solar and wind power accounted for a record 61% of total electricity generated in Germany.

The actual figure for 2016 is about 18%.

If I went looking I’m sure I could find lots of sites, including “reputable” media outlets, with wide ranges of inflated figures. It’s very easy to generate confusion – quote a peak daytime value like this “Germany’s renewable output was …%… on May 28th at 1:15pm” and wait for the recyclers of mush (this includes “reputable” media outlets) to propagate it in a new way. Or quote growth figures – as in how much has been added this year. Or quote capacity added, and rely on the fact that no one understands that 10GW of wind farm only generates about 2GW on average of output in Germany. And so on.

I realize young people may expect media outlets to “fact check” but that is not their job. Their job is to generate headlines and have their stories quoted more widely.

Also, if you pay zero for your electricity because you have solar power you might think that you are generating all of your own electricity. Most of the time you would be wrong. Various governments have guaranteed feed-in tariffs for rooftop solar at well above market price.

Basic energy literacy means understanding the difference between these items.

Conclusion

I was just trying to find the core statistics for my own understanding and was especially interested in Germany.

For Germany, we could look at the 3.5x increase in solar + wind in a decade and say “amazing”. Alternatively, we could note that going from 5% to 18% of total electricity generation in 10 years means that getting to 80% of electricity production will take another 40-50 years at the same rate, and say “disappointing”.

Remember that electricity is only about 40% of energy use in most developed countries. Therefore, if you want to decarbonize the whole economy you also have to boost your electricity supply by 2.5x and switch over heating, transport, etc to electric supply.
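
The arithmetic behind both numbers, as a quick sketch (linear extrapolation is a crude assumption, of course):

```python
# Linear extrapolation behind the "40-50 years" remark (back-of-envelope only).
share_2006, share_2016, target = 0.05, 0.18, 0.80
points_per_decade = share_2016 - share_2006              # 13 percentage points
decades_needed = (target - share_2016) / points_per_decade
print(f"~{decades_needed * 10:.0f} more years at the 2006-2016 rate")   # ~48 years

# Whole-economy multiplier: electricity is ~40% of final energy use, so full
# electrification implies roughly 1/0.4 = 2.5x today's electricity supply.
print(f"Supply multiplier for full electrification: {1 / 0.4:.1f}x")
```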

At the moment there are issues with increasing “non-synchronous” generation beyond a certain point (see V – Grid Stability As Wind Power Penetration Increases). If you read spruiking websites you will find two common suggestions: first, “people said we couldn’t get past 10% and now we’re already at 20%”; and second, “look at Denmark”. If you like happy stories, probably skip the rest of this section..

The most helpful textbook I found on the topic was Renewable Electricity and the Grid: The Challenge of Variability, written by people who are trying to do it. Long story short: integrating wind energy is very easy at the start, and up to about 20% of total supply on average it doesn’t seem to present a problem. Above 20% there are questions and uncertainties. These are electricity generation and grid experts contributing to the various chapters.

The key point is that grid stability can come from who you are connected to and how.

Denmark, while a country, is really just the size of a large city (population 6M) connected to the rest of Europe, and this connection provides their grid stability. Denmark produced 43% of their electricity from wind in 2016, but this is a much lower percentage of the interconnected grid as a whole. The question is not “can one small country connected to nearby large countries produce 80% of electricity from wind?” but instead “can the interconnected grid produce 80% from wind?” The answer to the first question is of course yes. The other countries provide grid stability to Denmark. When all the surrounding countries are producing wind energy at 80% of the total inter-connected grid it will be a different story.

However, this is not some fundamental physics problem, it’s an engineering problem that I’m sure can be solved. I haven’t dug in much beyond the references in Part V (referenced above) so I don’t know what issues and costs are involved.

Other Articles in this Series

Renewable Energy I – Introduction

Renewables II – Solar and Free Lunches – Solar power

Renewables III – US Grid Operators’ Opinions – The grid operators’ concerns

Renewables IV – Wind, Forecast Horizon & Backups – Some more detail about wind power – what do we do when the wind goes on vacation

Renewables V – Grid Stability As Wind Power Penetration Increases

Renewables VI – Report says.. 100% Renewables by 2030 or 2050

Renewables VII – Feasibility and Reality – Geothermal example

Renewables VIII – Transmission Costs And Outsourcing Renewable Generation

Renewables IX – Onshore Wind Costs

Renewables X – Nationalism vs Inter-Nationalism

Renewables XI – Cost of Gas Plants vs Wind Farms

Renewables XII – Windpower as Baseload and SuperGrids

Renewables XIII – One of Wind’s Hidden Costs

Renewables XIV – Minimized Cost of 99.9% Renewable Study

Renewables XV – Offshore Wind Costs

Renewables XVI – JP Morgan advises

Renewables XVII – Demand Management 1

Renewables XVIII – Demand Management & Levelized Cost

Renewables XIX – Behind the Executive Summary and Reality vs Dreams

References

BP Statistical Review of World Energy June 2017

BP Statistical Review of World Energy June 2017 – Renewables Appendices (this is a separate pdf)

IEA Key world energy statistics 2017

Renewable Electricity and the Grid: The Challenge of Variability, Godfrey Boyle, Earthscan (2007)

 

Over in another article, a commenter claims:

..Catastrophic predictions depend on accelerated forcings due to water vapour feedback. This water vapour feedback is simply written into climate models as parameters. It is not derived from any kind simulation of first principles in the General Circulation Model runs (GCMs)..

[Emphasis added]

I’ve seen this article of faith a lot. If you frequent fantasy climate blogs where people learn first principles and modeling basics from comments by other equally well-educated commenters this is the kind of contribution you will be able to make after years of study.

None of us knowed nothing, so we all sat around and teached each other.

Actually, how the atmospheric section of climate models works is pretty simple in principle. The atmosphere is divided up into a set of blocks (a grid) with each block having dimensions something like 200km x 200km x 500m high. The values vary a lot and depend on the resolution of the model; this is just to give you an idea.

Then each block has an E-W wind; a N-S wind; a vertical velocity; temperature; pressure; the concentrations of CO2, water vapor, methane; cloud fractions, and so on.

Then the model “steps forward in time” and uses equations to calculate the new values of each item.

The earth is spinning, and conservation of momentum, heat and mass is applied to each block. The principles of radiation through each block in each direction apply via parameterizations (note 1).

Specifically on water vapor – the change in mass of water vapor in each block is calculated from the amount of water evaporated, the amount of water vapor condensed, and the amount of rainfall taking water out of the block, together with the movement of air via the E-W, N-S and vertical winds. The final amount of water vapor in each time step affects the radiation emitted upwards and downwards.
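
A toy version of that per-block moisture budget, to show the bookkeeping (this is a cartoon of the idea, not how any real GCM is coded):

```python
# Toy single-block water vapor budget for one model time step.
def step_water_vapor(q_kg, evaporation, condensation, rainfall_out, advection_in):
    """Update a block's water vapor mass (kg) over one time step.

    All arguments are kg of water per time step; advection_in is the net
    import of vapor by the E-W, N-S and vertical winds (can be negative).
    """
    return q_kg + evaporation - condensation - rainfall_out + advection_in

# Example block: gains vapor from evaporation and winds, loses some to rain.
q_new = step_water_vapor(q_kg=5.0e9, evaporation=2.0e8, condensation=1.5e8,
                         rainfall_out=0.5e8, advection_in=1.0e8)
print(f"Water vapor after one step: {q_new:.2e} kg")  # this value then feeds the radiation code
```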

It’s more involved and you can read whole books on the subject.

I doubt that anyone who has troubled themselves to read even one paper on climate modeling basics could reach the conclusion so firmly believed in fantasy climate blogs and repeated above. If you never need to provide evidence for your claims..

For this blog we do like to see proof of claims, so please take a read of Description of the NCAR Community Atmosphere Model (CAM 4.0) and just show where this water vapor feedback is written in. Or pick another climate model used by a climate modeling group.

This is the kind of exciting stuff you find in the 200+ pages of an atmospheric model description:

From CAM4 Technical Note

You can also find details of the shortwave and longwave radiation parameterization schemes and how they apply to water vapor.

Here is a quote from The Global Circulation of the Atmosphere (ref below):

Essentially all GCMs yield water vapor feedback consistent with that which would result from holding relative humidity approximately fixed as climate changes. This is an emergent property of the simulated climate system; fixed relative humidity is not in any way built into the model physics, and the models offer ample means by which relative humidity could change.

From Water Vapor Feedback and Global Warming, a paper well worth reading for anyone who wants to understand this key question in climate:

Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback..

Remember, just a few years of study at fantasy climate blogs can save an hour or more of reading papers on atmospheric physics.

References

Description of the NCAR Community Atmosphere Model (CAM 4.0) – free paper

On the Relative Humidity of the Atmosphere, Chapter 6 of The Global Circulation of the Atmosphere, edited by Tapio Schneider & Adam Sobel, Princeton University Press (2007)

Water Vapor Feedback and Global Warming, Held & Soden, Annu. Rev. Energy Environ (2000) – free paper

Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006)

Notes

Note 1: The very accurate calculation of radiation transfer is done via line by line calculations but they are computationally very expensive and so a simpler approximation is used in GCMs. Of course there are many studies comparing parameterizations vs line by line calculations. One example is Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), WD Collins et al, JGR (2006).

Two Basic Foundations

This article will be a placeholder article to filter out a select group of people: the many people who arrive and confidently explain that atmospheric physics is fatally flawed (without the benefit of having read a textbook). They don’t think they are confused; in their minds they are helpfully explaining why the standard theory is wrong. There have been a lot of such people.

Almost none of them ever provides an equation. If on rare occasions they do provide a random equation, they never explain what is wrong with the 65-year-old equation of radiative transfer (explained by Nobel prize winner Subrahmanyan Chandrasekhar, see note 1), which is derived from fundamental physics. Nor do they explain why observation matches the standard theory. For example (and I have lots of others), here is a graph produced nearly 50 years ago (referenced almost 30 years ago) of the observed spectrum at the top of atmosphere vs the calculated spectrum from the standard theory.

Why is it so accurate?

From Atmospheric Radiation, Goody (1989)

If it were me, and I thought the theory was wrong, I would read a textbook and try to explain why the textbook was wrong. But I’m old school and generally expect physics textbooks to be correct, short of some major revolution. Conventionally, when you “prove” textbook theory wrong you are expected to explain why everyone got it wrong before.

There is a simple reason why our many confident visitors never do that. They don’t know anything about the basic theory. Entertaining as that is, and I’ll be the first to admit that it has been highly entertaining, it’s time to prune comments from overconfident and confused visitors.

I am not trying to push away people with questions. If you have questions please ask. This article is just intended to limit the tsunami of comments from visitors with their overconfident non-textbook understanding of physics – that have often dominated comment threads. 

So here are my two questions for the many visitors with huge confidence in their physics knowledge. Dodging isn’t an option. You can say “not correct” and explain your alternative formulation with evidence, but you can’t dodge.

Answer these two questions:

1. Is the equation of radiative transfer correct or not?

Iλ(0) = Iλ(τm)e^(−τm) + ∫₀^τm Bλ(T)e^(−τ) dτ     [16]

The intensity at the top of atmosphere equals.. The surface radiation attenuated by the transmittance of the atmosphere, plus.. The sum of all the contributions of atmospheric radiation – each contribution attenuated by the transmittance from that location to the top of atmosphere

Of course (and I’m sure I don’t even need to spell it out) we need to integrate across all wavelengths, λ, to get the flux value.
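
For readers who like to see the equation do something, here is a toy numerical version at a single wavelength, with a made-up temperature profile and optical depth, and a blackbody surface assumed (real models use band parameterizations – see note 1 – not this brute-force sum):

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(T, lam):
    """Planck spectral radiance B_lambda(T), W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = 15e-6        # 15 um, in the CO2 band (arbitrary choice for illustration)
N = 50             # number of atmospheric layers
tau_m = 2.0        # assumed total optical depth at this wavelength
dtau = tau_m / N
tau = (np.arange(N) + 0.5) * dtau              # optical depth from TOA to each layer midpoint
T = 220.0 + (288.0 - 220.0) * tau / tau_m      # made-up profile: 220 K aloft, 288 K at surface

# Equation [16]: surface emission attenuated by e^(-tau_m), plus emission from
# every layer attenuated by the transmittance from that layer to the top.
I_toa = planck(288.0, lam) * np.exp(-tau_m) + np.sum(planck(T, lam) * np.exp(-tau) * dtau)
print(f"Toy TOA intensity at 15 um: {I_toa:.3e} W m^-2 sr^-1 m^-1")
```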

For the derivation see Understanding Atmospheric Radiation and the “Greenhouse” Effect – Part Six – The Equations. If you don’t agree it is correct then explain why.

[Note that other articles explain the basics. For example – The “Greenhouse” Effect Explained in Simple Terms, which has many links to other in depth articles].

If you don’t understand the equation you don’t understand the core of radiative atmospheric physics.

—-

2. Is this graphic with explanation from an undergraduate heat transfer textbook (Fundamentals of Heat and Mass Transfer, 6th edition, Incropera and DeWitt 2007) correct or not?

From "Fundamentals of Heat and Mass Transfer, 6th edition", Incropera and DeWitt (2007)

From “Fundamentals of Heat and Mass Transfer, 6th edition”, Incropera and DeWitt (2007)

You can see that radiation is emitted from a hot surface and absorbed by a cool surface. And that radiation is emitted from a cool surface and absorbed by a hot surface. More examples of this principle, including equations, in Amazing Things we Find in Textbooks – The Real Second Law of Thermodynamics – scanned pages from six undergraduate heat transfer textbooks (seven textbooks if we include the one added in comments after entertaining commenter Bryan suggested the first six were “cherry-picked” and offered his preferred textbook which had exactly the same equations).

—-

What I will be doing for the subset of new visitors with their amazing and confident insights is to send them to this article and ask for answers. In the past I have never been able to get a single member of this group to commit. The reason why is obvious.

But – if you don’t answer, your comments may never be published.

Once again, this is not designed to stop regular visitors asking questions. Most people interested in climate don’t understand equations, calculus, radiative physics or thermodynamics – and that is totally fine.

Call it censorship if it makes you sleep better at night.

Notes

Note 1 – I believe the theory is older than Chandrasekhar but I don’t have older references. It derives from basic emission (Planck), absorption (Beer–Lambert) and the first law of thermodynamics. Chandrasekhar published this in his 1952 book Radiative Transfer (the link is the 1960 reprint). This isn’t the “argument from authority”; I’m just pointing out that the theory has been long established. Punters are welcome to try to prove it wrong – it’s just that no one ever does.

In recent articles we have looked at rainfall and there is still more to discuss. This article changes tack to look at tropical cyclones, prompted by the recent US landfall of Harvey and Irma along with questions from readers about attribution and the future.

It might be surprising to find the following statement from leading climate scientists (Kevin Walsh and many co-authors in 2015):

At present, there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state.

The subject gets a little involved so let’s dig into a few papers. First from Gabriel Vecchi and some co-authors in 2008 in the journal Science. The paper is very brief and essentially raises one question – has the recent rise in total Atlantic cyclone intensity been a result of increases in absolute sea surface temperature (SST) or relative sea surface temperature:

From Vecchi et al 2008

Figure 1

The top graph (above) shows a correlation of 0.79 between SST and PDI (power dissipation index). The bottom graph shows a correlation of 0.79 between relative SST (local sea surface temperature minus the average tropical sea surface temperature) and PDI.

With more CO2 in the atmosphere from burning fossil fuels we expect a warmer SST in the tropical Atlantic in 2100 than today. But we don’t expect the tropical Atlantic to warm faster than the tropics in general.

If cyclone intensity is dependent on local SST we expect more cyclones, or more powerful cyclones. If cyclone intensity is dependent on relative SST we expect no increase in cyclones. This is because climate models predict warmer SSTs in the future but not warmer Atlantic SSTs than the tropics. The paper also shows a few high resolution models – green symbols – sitting close to the zero change line.
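
To see why the distinction matters for projections, here is a minimal sketch with round, assumed numbers (roughly uniform ~2ºC warming by 2100, my illustration rather than the paper’s figures):

```python
# Absolute vs relative SST as predictors of future cyclone activity.
# All values below are round, assumed numbers for illustration.
atlantic_today, tropics_today = 27.5, 27.0   # deg C
atlantic_2100, tropics_2100 = 29.5, 29.0     # ~2 C warming everywhere (assumed)

absolute_change = atlantic_2100 - atlantic_today
relative_change = (atlantic_2100 - tropics_2100) - (atlantic_today - tropics_today)

print(f"Absolute SST hypothesis sees +{absolute_change:.1f} C -> more/stronger cyclones")
print(f"Relative SST hypothesis sees {relative_change:+.1f} C -> little change")
```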

Now predicting tropical cyclones with GCMs has a fundamental issue – the grid scale of a modern high-resolution GCM is around 100km, but simulating cyclones requires finer resolution because of their relatively small size.

Thomas Knutson and co-authors (including the great Isaac Held) produced a 2007 paper with an interesting method (of course, the idea is not at all new). They input actual meteorological data (i.e. real history from NCEP reanalysis) into a high resolution model which covered just the Atlantic region. Their aim was to see how well this model could reproduce tropical storms. There are some technicalities to the model – the output is constantly “nudged” back towards the actual climatology and out at the boundaries of the model we can’t expect good simulation results. The model resolution is 18km.

The main question addressed here is the following: Assuming one has essentially perfect knowledge of large-scale atmospheric conditions in the Atlantic over time, how well can one then simulate past variations in Atlantic hurricane activity using a dynamical model?

They comment that the cause of the recent (at that time) upswing in hurricane activity “remains unresolved”. (Of course, fast forward to 2016, prior to the recent two large landfall hurricanes, and the overall activity is at a 1970 low. In early 2018, this may be revised again..).

Two interesting graphs emerge. First, an excellent match between model and observations for overall frequency year on year:

From Knutson et al 2007

Figure 2

Second, an inability to predict the most intense hurricanes. The black dots are observations, the red dots are simulations from the model. The vertical axis, a little difficult to read, is SLP, or sea level pressure:

From Knutson et al 2007

Figure 3

These results are a common theme of many papers – inputting the historical climatological data into a model we can get some decent results on year to year variation in tropical cyclones. But models under-predict the most intense cyclones (hurricanes).

Here is Morris Bender and co-authors (including Thomas Knutson, Gabriel Vecchi – a frequent author or co-author in this genre, and of course Isaac Held) from 2010:

Some statistical analyses suggest a link between warmer Atlantic SSTs and increased hurricane activity, although other studies contend that the spatial structure of the SST change may be a more important control on tropical cyclone frequency and intensity. A few studies suggest that greenhouse warming has already produced a substantial rise in Atlantic tropical cyclone activity, but others question that conclusion.

This is a very typical introduction in papers on this topic. I note in passing this is a huge blow to the idea that climate scientists only ever introduce more certainty and alarm on the harm from future CO2 emissions. They don’t. However, it is also true that some climate scientists believe that recent events have been accentuated due to the last century of fossil fuel burning and these perspectives might be reported in the media. I try to ignore the media and that is my recommendation to readers on just about all subjects except essential ones like the weather and celebrity news.

This paper used a weather prediction model starting a few days before each storm to predict the outcome. If you understand the idea behind Knutson 2007 then this is just one step further – a few days prior to the emergence of an intense storm, input the actual climate data into a high resolution model and see how well the high res model predicts the observations. They also used projected future climates from CMIP3 models (note 1).

In the set of graphs below there are three points I want to highlight – and you probably need to click on the graph to enlarge it.

First, in graph B, “Zetac” is the model used by Knutson et al 2007, whereas GFDL is the weather prediction model getting better results in this paper – you can see that observations and the GFDL are pretty close in the maximum wind speed distribution. Second, the climate change predictions in E show that predictions of the future show an overall reduction in frequency of tropical storms, but an increase in the frequency of storms with the highest wind speeds – this is a common theme in papers from this genre. Third, in graph F, the results (from the weather prediction model) fed by different GCMs for future climate show quite different distributions. For example, the UKMO model produces a distribution of future wind speeds that is lower than current values.

From Bender et al 2010

Figure 4 – Click to enlarge

In this figure (S3 from the supplementary data) we see the difference between future projected climatologies and current climatologies for three relevant parameters, for each of the four models shown in graph F in the figure above:

From Bender et al 2010

Figure 5 – Click to enlarge

This illustrates that different projected future climatologies, which all show increased SST in the Atlantic region, generate quite different hurricane intensities. The paper suggests that the reduction in wind shear in the UKMO model produces a lower frequency of higher intensity hurricanes.

Conclusion

This article illustrates that feeding higher resolution models with current data can generate realistic cyclone data in some aspects, but less so in other aspects. As we increase the model resolution we can get even better results – but this is dependent on inputting the correct climate data. As we look towards 2100 the questions are – How realistic is the future climate data? How does that affect projections of hurricane frequencies and intensities?

Articles in this Series

Impacts – I – Introduction

Impacts – II – GHG Emissions Projections: SRES and RCP

Impacts – III – Population in 2100

Impacts – IV – Temperature Projections and Probabilities

Impacts – V – Climate change is already causing worsening storms, floods and droughts

Impacts – VI – Sea Level Rise 1

Impacts – VII – Sea Level 2 – Uncertainty

Impacts – VIII – Sea level 3 – USA

Impacts – IX – Sea Level 4 – Sinking Megacities

Impacts – X – Sea Level Rise 5 – Bangladesh

Impacts XI – Rainfall 1

Impacts – XII – Rainfall 2

Impacts – XIII – Rainfall 3

References

Hurricanes and climate: the US CLIVAR working group on hurricanes, American Meteorological Society, Kevin Walsh et al (2015) – free paper

Whither Hurricane Activity? Gabriel A Vecchi, Kyle L Swanson & Brian J. Soden, Science (2008) – free paper

Simulation of the Recent Multidecadal Increase of Atlantic Hurricane Activity Using an 18-km-Grid Regional Model, Thomas Knutson et al, American Meteorological Society (2007) – free paper

Modeled Impact of Anthropogenic Warming on the Frequency of Intense Atlantic Hurricanes, Morris A Bender et al, Science (2010) – free paper

Notes

Note 1: The scenario is A1B, which is similar to RCP6 – that is, an approximate doubling of CO2 by the end of the century. The simulations came from the CMIP3 suite of model results.

In XII – Rainfall 2 we saw the results of many models on rainfall as GHGs increase. They project wetter tropics, drier subtropics and wetter higher latitude regions. We also saw an expectation that rainfall will increase globally, with something like 2-3% per ºC of warming.

Here is a (too small) graph from Allen & Ingram (2002) showing the model response of rainfall under temperature changes from GHG increases. The dashed line marked “C-C” is the famous (in climate physics) Clausius–Clapeyron relation which, at current temperatures, shows a 7% change in water vapor per ºC of warming. The red triangles are the precipitation changes from model simulations showing about half of that.

From Allen & Ingram (2002)

Figure 1
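
The 7% per ºC figure follows directly from Clausius–Clapeyron. A minimal check using standard constants (my calculation, not the paper’s):

```python
# Fractional change of saturation vapor pressure per kelvin:
# d(ln e_s)/dT = L / (R_v * T^2)   (Clausius-Clapeyron)
L = 2.5e6     # latent heat of vaporization, J/kg
Rv = 461.5    # specific gas constant for water vapor, J/(kg K)

for T in (273.15, 288.15, 303.15):   # 0, 15, 30 deg C
    rate = L / (Rv * T**2)
    print(f"T = {T - 273.15:4.1f} C: {100 * rate:.1f}% per K")
```

Near current surface temperatures this gives roughly 6.5-7% per ºC, as quoted.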

Here is another graph from the same paper showing global mean temperature change (top) and rainfall over land (bottom):

From Allen & Ingram (2002)

Figure 2

The temperature has increased over the last 50 years, and models and observations show that the precipitation has.. oh, it’s not changed. What is going on?

First, the authors explain some important background:

The distribution of moisture in the troposphere (the part of the atmosphere that is strongly coupled to the surface) is complex, but there is one clear and strong control: moisture condenses out of supersaturated air.

This constraint broadly accounts for the humidity of tropospheric air parcels above the boundary layer, because almost all such parcels will have reached saturation at some point in their recent history. Physically, therefore, it has long seemed plausible that the distribution of relative humidity would remain roughly constant under climate change, in which case the Clausius-Clapeyron relation implies that specific humidity would increase roughly exponentially with temperature.

This reasoning is strongest at higher latitudes where air is usually closer to saturation, and where relative humidity is indeed roughly constant through the substantial temperature changes of the seasonal cycle. For lower latitudes it has been argued that the real-world response might be different. But relative humidity seems to change little at low latitudes under a global warming scenario, even in models of very high vertical resolution, suggesting this may be a robust ’emergent constraint’ on which models have already converged.

They continue:

If tropospheric moisture loading is controlled by the constraints of (approximately) unchanged relative humidity and the Clausius-Clapeyron relation, should we expect a corresponding exponential increase in global precipitation and the overall intensity of the hydrological cycle as global temperatures rise?

This is certainly not what is observed in models.

To clarify, the point in the last sentence is that models do show an increase in precipitation, but not at the same rate as the expected increase in specific humidity (see note 1 for new readers).

They describe their figure 2 (our figure 1 above) and explain:

The explanation for these model results is that changes in the overall intensity of the hydrological cycle are controlled not by the availability of moisture, but by the availability of energy: specifically, the ability of the troposphere to radiate away latent heat released by precipitation.

At the simplest level, the energy budgets of the surface and troposphere can be summed up as a net radiative heating of the surface (from solar radiation, partly offset by radiative cooling) and a net radiative cooling of the troposphere to the surface and to space (R) being balanced by an upward latent heat flux (LP, where L is the latent heat of evaporation and P is global-mean precipitation): evaporation cools the surface and precipitation heats the troposphere.

[Emphasis added].

Basics Digression

Picture the atmosphere over a long period of time (like a decade), and for the whole globe. If it hasn’t heated up or cooled down, we know that the energy in must equal the energy out (or if it has changed only marginally, then energy in is almost equal to energy out). This is the first law of thermodynamics – energy is conserved.

What energy comes into the atmosphere?

  1. Solar radiation is partly absorbed by the atmosphere (most is transmitted through and heats the surface of the earth)
  2. Radiation emitted from the earth’s surface (we’ll call this terrestrial radiation) is mostly absorbed by the atmosphere (some is transmitted straight through to space)
  3. Warm air is convected up from the surface
  4. Heat stored in evaporated water vapor (latent heat) is convected up from the surface and the water vapor condenses out, releasing heat into the atmosphere when this happens

How does the atmosphere lose energy?

  1. It radiates downwards to the surface
  2. It radiates out to space

..end of digression

Changing Energy Budget

In a warmer world, if we have more evaporation we have more latent heat transfer from the surface into the troposphere. But the atmosphere has to be able to radiate this heat away. If it can’t, then the atmosphere becomes warmer, and this reduces convection. So with a warmer surface we may have a plentiful potential supply of latent heat (via water vapor) but the atmosphere needs a mechanism to radiate away this heat.

Allen & Ingram put forward a simple conceptual equation:

ΔRc + ΔRT = LΔP

where the change in radiative cooling, ΔR, is split into two components: ΔRc, which is independent of the change in atmospheric temperature, and ΔRT, which depends only on the temperature.

L = latent heat of vaporization of water (a constant); ΔP = change in rainfall (equal to the change in evaporation, since global evaporation is balanced by global rainfall)

LΔP is about 1W/m² per 1% increase in global precipitation.

Now, if we double CO2, then before any temperature changes we decrease the outgoing longwave radiation through the tropopause (the top of the troposphere) by about 3-4W/m² and we increase atmospheric radiation to the surface by about 1W/m².

So doubling CO2, ΔRc = -2 to -3W/m²; prior to a temperature change ΔRT = 0; and so ΔP reduces.

The authors comment that increasing CO2 before any temperature change takes place reduces the intensity of the hydrological cycle and this effect was seen in early modeling experiments using prescribed sea surface temperatures.

Now, of course, the idea of doubling CO2 without any temperature change is just a thought experiment. But it’s an important thought experiment because it lets us isolate different factors.

The authors then consider their factor ΔRT:

The enhanced radiative cooling due to tropospheric warming, ΔRT, is approximately proportional to ΔT: tropospheric temperatures scale with the surface temperature change and warmer air radiates more energy, so ΔRT = kΔT, with k=3W/(m²K)..

All this is saying is that as the surface warms, the atmosphere warms at about the same rate, and the atmosphere then emits more radiation. This is why the model results in our figure 2 above show no trend in rainfall over 50 years, and also match the observations – the constraint on rainfall is the changing radiative balance in the troposphere.
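
Putting the pieces together in a minimal numeric sketch (round numbers from the text; the ΔT values are assumed for illustration, not predictions):

```python
# Conceptual budget from Allen & Ingram: dRc + dRT = L*dP, with dRT = k*dT,
# and ~1 W/m^2 of latent heating per 1% change in global precipitation.
dRc = -2.5            # W/m^2, fast CO2 effect of a doubling (before any warming)
k = 3.0               # W/(m^2 K), extra radiative cooling per kelvin of warming
w_per_percent = 1.0   # W/m^2 per 1% precipitation change

for dT in (0.0, 1.0, 3.0):   # assumed warmings, for illustration
    dP_percent = (dRc + k * dT) / w_per_percent
    print(f"dT = {dT:.0f} K: precipitation change ~ {dP_percent:+.1f}%")
```

At ΔT = 0 the hydrological cycle weakens (the prescribed-SST experiments mentioned above), while at ~3 K of warming the implied +6.5% works out to roughly 2% per ºC – consistent with the 2-3% per ºC model range quoted at the start.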

And so they point out:

Thus, although there is clearly usable information in fig. 3 [our figure 2], it would be physically unjustified to estimate ΔP/ΔT directly from 20th century observations and assume that the same quantity will apply in the future, when the balance between climate drivers will be very different.

There is a lot of other interesting commentary in their paper, although the paper itself is now quite dated (and unfortunately behind a paywall). In essence they discuss the difficulties of modeling precipitation changes, especially for a given region, and are looking for “emergent constraints” from more fundamental physics that might help constrain forecasts.

A forecasting system that rules out some currently conceivable futures as unlikely could be far more useful for long-range planning than a small number of ultra-high-resolution forecasts that simply rule in some (very detailed) futures as possibilities.

This is a very important point when considering impacts.

Conclusion

Increasing the surface temperature by 1ºC is expected to increase the humidity over the ocean by about 7%. This is simply the basic physics of saturation. However, climate models predict an increase in mean rainfall of maybe 2-3% per ºC. The fundamental reason is that the movement of latent heat from the surface to the atmosphere has to be radiated away by the atmosphere, and so the constraint is the ability of the atmosphere to do this. And so the limiting factor in increasing rainfall is not the humidity increase, it is the radiative cooling of the atmosphere.

We also see that despite 50 years of warming, mean rainfall hasn’t changed. Models also predict this. This is believed to be a transient state, for reasons explained in the article.

References

Constraints on future changes in climate and the hydrologic cycle, MR Allen & WJ Ingram, Nature (2002)  – freely available [thanks, Robert]

Notes

1 Relative humidity is measured as a percentage. If the relative humidity = 100% it means the air is saturated with water vapor – it can’t hold any more water vapor. If the relative humidity = 0% it means the air is completely dry. As temperature increases the ability of air to hold water vapor increases non-linearly.

For example, at 0ºC, 1kg of air can carry around 4g of water vapor, at 10ºC that has doubled to 8g, and at 20ºC it has doubled again to 15g (I’m using approximate values).

So now imagine saturated air over the ocean at 20ºC rising up and therefore cooling (it is cooler higher up in the atmosphere). By the time the air parcel has cooled down to 0ºC (this might be anything from 2km to 5km altitude) it is still saturated but is only carrying 4g of water vapor, having condensed out 11g into water droplets.
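
Those approximate values are easy to check with a standard empirical formula for saturation vapor pressure (the Tetens formula – my choice, not the article’s):

```python
import math

def saturation_mixing_ratio(T_celsius, pressure_pa=101325.0):
    """Approximate saturation mixing ratio (g of water vapor per kg of air),
    using the Tetens formula for saturation vapor pressure."""
    e_s = 610.78 * math.exp(17.27 * T_celsius / (T_celsius + 237.3))  # Pa
    return 1000.0 * 0.622 * e_s / (pressure_pa - e_s)

for T in (0, 10, 20):
    print(f"{T:2d} C: ~{saturation_mixing_ratio(T):.0f} g/kg")   # ~4, ~8, ~15
```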

At least 99.9% of physicists believe the theory of gravity, and the heliocentric model of the solar system. The debate is over. There is no doubt that we can send a manned (and woman-ed) mission to Mars.

Some “skeptics” say it can’t be done. They are denying basic science! Gravity is plainly true. So is the heliocentric model. Everyone agrees. There is an overwhelming consensus. So the time for discussion is over. There is no doubt about the Mars mission.

I create this analogy (note 1) for people who don’t understand the relationship between five completely different ideas:

  • the “greenhouse” effect
  • burning fossil fuels adds CO2 to the atmosphere, increasing the “greenhouse” effect
  • climate models
  • crop models
  • economic models

The first two items on the list are fundamental physics and chemistry, and while the proofs are advanced (see The “Greenhouse” Effect Explained in Simple Terms for the first one, for people who want to work through a proof), they are indisputable. Together they create the theory of AGW (anthropogenic global warming). This says that humans are contributing to global warming by burning fossil fuels.

99.9% of people who understand atmospheric physics believe this unassailable idea (note 2).

This means that if we continue with “business as usual” (note 3) and keep using fossil fuels to generate energy, then by 2100 the world will be warmer than today.

How much warmer?

For that we need climate models.

Climate Models

These are models which break the earth’s surface, ocean and atmosphere into a big grid so that we can use physics equations (momentum, heat transfer and others) to calculate future climate (this is numerical modeling on a grid, in the same spirit as finite element analysis). These models include giant fudge-factors that can’t be validated (by giant fudge-factors I mean “sub-grid parameterizations” and unknown parameters, but I’m writing this article for a non-technical audience).

One way to validate models is to model the temperature over the last 100 years. Another way is to produce a current climatology that matches observations. Generally, temperature is the parameter that receives the most attention (note 4).

Some climate models predict that if we double CO2 in the atmosphere (from pre-industrial periods) then surface temperature will be around 4.5ºC warmer. Others that the temperature will be 1.5ºC warmer. And everything in between.

Surely we can just look at which models reproduced the last 100 years temperature anomaly the best and work with those?

From Mauritsen et al 2012

If the model that predicts 1.5ºC in 2100 is close to the past, while the one that predicts 4.5ºC has a big overshoot, we will know that 1.5ºC is a more likely future. Conversely, if the model that predicts 4.5ºC in 2100 is close to the past but the 1.5ºC model woefully under-predicts the last 100 years of warming then we can expect more like 4.5ºC in 2100.

You would think so, but you would be wrong.

All the models get the last 100 years of temperature changes approximately correct. Jeffrey Kiehl produced a paper 10 years ago which analyzed the then current class of models and gently pointed out the reason. Models with large future warming included a high negative effect from aerosols over the last 100 years. Models with small future warming included a small negative effect from aerosols over the last 100 years. So both reproduced the past but with a completely different value of aerosol cooling. You might think we can just find out the actual cooling effect of aerosols around 1950 and then we will know which climate model to believe – but we can’t. We didn’t have satellites to measure the cooling effect of aerosols back then.
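
A toy version of Kiehl’s point, with invented numbers (not values from his paper): two models can reproduce the same 20th-century warming while implying very different futures.

```python
# Kiehl (2007) compensation, cartoon version - all numbers invented.
ghg_forcing_20c = 2.6   # W/m^2, rough 20th-century greenhouse forcing
co2_doubling = 3.7      # W/m^2, canonical forcing for doubled CO2

models = {
    "high-sensitivity": {"lam": 1.0, "aerosol": -1.6},   # lam in K per (W/m^2)
    "low-sensitivity":  {"lam": 0.5, "aerosol": -0.6},
}

for name, m in models.items():
    past = m["lam"] * (ghg_forcing_20c + m["aerosol"])
    ecs = m["lam"] * co2_doubling
    print(f"{name}: 20th-century warming ~{past:.1f} K, ECS ~{ecs:.1f} K")
```

Both print ~1.0 K of historical warming, yet one implies ~3.7 K and the other ~1.9 K for doubled CO2 – you cannot tell them apart from the historical record alone.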

This is the challenge of models with many parameters that we don’t know. When a modeler is trying to reproduce the past, or the present, they pick the values of parameters which make the model match reality as best as they can. This is a necessary first step (note 5).

So how warm will it be in 2100 if we double CO2 in the atmosphere?

Somewhat warmer

Models also predict rainfall, drought and storms. But they aren’t as good at these as they are at temperature. Bray and von Storch survey climate scientists periodically on a number of topics. Here is their response to:

How would you rate the ability of regional climate models to make 50 year projections of convective rain storms/thunder storms? (1 = very poor to 7 = very good)

Similar ratings are obtained for rainfall predictions. The last 50 years has seen no apparent global worsening of storms, droughts and floods, at least according to the IPCC consensus (see Impacts – V – Climate change is already causing worsening storms, floods and droughts).

Sea level is expected to rise between around 0.3m to 0.6m (see Impacts – VI – Sea Level Rise 1 and IX – Sea Level 4 – Sinking Megacities) – this is from AR5 of the IPCC (under scenario RCP6). I mention this because the few people I’ve polled thought that sea level was expected to be 5-10m higher in 2100.

Actual reports with uneventful projections don’t generate headlines.

Crop Models

Crop models build on climate models. Once we know rainfall, drought and temperature we can work out how this impacts crops.

Will we starve to death? Or will there be plentiful food?

Past predictions of disaster haven’t been very accurate, although they are wildly popular with generating media headlines and book sales, as Paul Ehrlich found to his benefit. But that doesn’t mean future predictions of disaster are necessarily wrong.

There are a number of problems with trying to answer the question.

Even if climate models could predict the global temperature, when it comes to a region the size of, say, northern California their accuracy is much lower. Likewise for rainfall. Models which produce similar global temperature changes often have completely different regional precipitation changes. For example, from the IPCC Special Report on Extremes (SREX), p. 154:

At regional scales, there is little consensus in GCM projections regarding the sign of future change in monsoon characteristics, such as circulation and rainfall. For instance, while some models project an intense drying of the Sahel under a global warming scenario, others project an intensification of the rains, and some project more frequent extreme events..

In a warmer world with more CO2 (which helps some plants) and maybe more rainfall, or maybe less, what can we expect of crop yields? It’s not clear. The IPCC AR5, wg II, ch 7, p. 496:

For example, interactions among CO2 fertilization, temperature, soil nutrients, O3, pests, and weeds are not well understood (Soussana et al., 2010) and therefore most crop models do not include all of these effects.

Of course, as the climate changes over the next 80 years, farmers will grow different crops and agricultural scientists will develop new ones. In 1900, almost half the US population worked in farming. Today the figure is 2-3%. Agriculture has changed unimaginably.

In the left half of this graph we can see global crop yield improvements over the last 50 years (the right side shows projections to 2050):

From Ray et al 2013

Economic Models

What will the oil price be in 2020? Economic models give you the answer. Well, they give you an answer. And if you consult lots of models they give you lots of different answers. When the oil price changes a lot, which it does from time to time, all of the models turn out to be wrong. Predicting future prices of commodities is very hard, even when it is of paramount concern for major economies, and even when a company could make vast profits from accurate prediction.

The IPCC AR5, wg II, ch 7, p. 512, had this to say about crop prices in 2050:

Changes in temperature and precipitation, without considering effects of CO2, will contribute to increased global food prices by 2050, with estimated increases ranging from 3 to 84% (medium confidence). Projections that include the effects of CO2 changes, but ignore O3 and pest and disease impacts, indicate that global price increases are about as likely as not, with a range of projected impacts from –30% to +45% by 2050..

..One lesson from recent model intercomparison experiments (Nelson et al., 2014) is that the choice of economic model matters at least as much as the climate or crop model for determining price response to climate change, indicating the critical role of economic uncertainties for projecting the magnitude of price impacts.

The 2001 3rd report (often called TAR), ch 5, p. 238, put it perhaps a little more clearly:

..it should be noted however that hunger estimates are based on the assumptions that food prices will rise with climate change, which is highly uncertain

Economic models are not very good at predicting anything. As Herbert Stein said, summarizing a lifetime in economics:

  • Economists do not know very much
  • Other people, including the politicians who make economic policy, know even less about economics than economists do

Conclusion

Recently a group, Cook et al 2013, reviewed over 10,000 abstracts of climate papers and concluded that 97% of those taking a position endorsed AGW – the proposition that humans are contributing to global warming by burning fossil fuels. I’m sure if the question were posed the right way directly to thousands of climate scientists, the number would be over 99%.

It’s not in dispute.

AGW is a necessary condition for Catastrophic Anthropogenic Global Warming (CAGW). But it is not sufficient by itself.

Likewise we know for sure that gravity is real and the planets orbit the sun. But it doesn’t follow that we can get humans safely to Mars and back. Maybe we can. Understanding gravity and the heliocentric theory is a necessary condition for the mission, but a lot more needs to be demonstrated.

The uncertainties in CAGW are huge.

Economic models that have no predictive skill are built on limited crop models, which are built on climate models, which produce a wide range of possible global temperatures and no consensus on regional rainfall.

Human ingenuity somehow solved the problem of going from 2.5bn people in the middle of the 20th century to more than 7bn people today, and yet the proportion of the global population in abject poverty (note 6) has dropped from over 40% to maybe 15%. This was probably unimaginable 70 years ago.

Perhaps reasonable people can question whether climate change is definitely the greatest threat facing humanity?

Perhaps questioning the predictive power of economic models is not denying science?

Perhaps it is ok to be unsure about the predictive power of climate models that contain sub-grid parameterizations (giant fudge factors) and that collectively provide a wide range of forecasts?

Perhaps people who question the predictions aren’t denying basic (or advanced) science, and haven’t lost their reason or their moral compass?

—-

[Note to commenters, added minutes after this post was written – this article is not intended to restart debate over the “greenhouse” effect, please post your comments in one of the 10s (100s?) of articles that have covered that subject, for example – The “Greenhouse” Effect Explained in Simple Terms – Comments on the reality of the “greenhouse” effect posted here will be deleted. Thanks for understanding.]

References

Twentieth century climate model response and climate sensitivity, Jeffrey Kiehl, Geophysical Research Letters (2007)

Tuning the climate of a global model, Mauritsen et al, Journal of Advances in Modeling Earth Systems (2012)

Yield Trends Are Insufficient to Double Global Crop Production by 2050, Deepak K. Ray et al, PLOS ONE (2013)

Quantifying the consensus on anthropogenic global warming in the scientific literature, Cook et al, Environmental Research Letters (2013)

The Great Escape, Angus Deaton, Princeton University Press (2013)

The various IPCC reports cited are all available at their website

Notes

1. An analogy doesn’t prove anything. It is for illumination.

2. How much we have contributed to the last century’s warming is not clear. The 5th IPCC report (AR5) said it was 95% certain that more than 50% of recent warming was caused by human activity. Yet another chapter in the same report suggested that this was a bogus statistic, and I agree – but that doesn’t mean I think the percentage of warming caused by human activity is lower than 50%. I have no idea. It is difficult to assess, likely impossible. See Natural Variability and Chaos – Three – Attribution & Fingerprints for more.

3. Reports on future climate often come with the statement “under a conservative business as usual scenario” but refer to a speculative and hard-to-believe scenario called RCP8.5 – see Impacts – II – GHG Emissions Projections: SRES and RCP. I think RCP6 is much closer to the world of 2100 if we do little about carbon emissions and the world continues on the kind of development pathways we have seen over the last 60 years. RCP8.5 was a scenario created to match a possible amount of CO2 in the atmosphere and a way we might get there. Calling it “a conservative business as usual case” is a value-judgement with no evidence.

4. More specifically, it is the change in temperature that gets the most attention. This is called the “temperature anomaly”. Many models that do “well” on temperature anomaly actually do quite badly on the actual surface temperature. See Models, On – and Off – the Catwalk – Part Four – Tuning & the Magic Behind the Scenes – many “fit for purpose” models have an absolute surface temperature error comparable to half the difference between today’s climate and the last ice age, even though they reproduce the last 100 years of temperature changes pretty well. That is, they model temperature changes quite well, but not temperature itself (the short sketch after these notes shows why anomalies can hide this).

5. This is a reasonable approach used in modeling (not just climate modeling) – the necessary next step is to try to constrain the unknown parameters and giant fudge factors (sub-grid parameterizations). Climate scientists work very hard on this problem. Many confused people writing blogs think that climate modelers just pick the values they like, produce the model results and go have coffee. This is not the case, as can easily be seen by reviewing lots of papers. The problem is well-understood among climate modelers. But the world is a massive place, detailed past measurements with sufficient accuracy are mostly lacking, and sub-grid parameterization of non-linear processes is a very difficult challenge (turbulent flow, one of the processes being parameterized, remains a mostly unsolved problem in its own right).

6. This is a very imprecise term. I refer readers to the 2015 Nobel Prize winner Angus Deaton and his excellent book, The Great Escape (2013) for more.
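As a footnote to note 4, here is a minimal sketch with hypothetical numbers of why anomalies can hide a large absolute error. Subtracting each record’s own baseline removes any constant offset, so a model several degrees too cold can still match the observed changes exactly:

```python
# Hypothetical illustration: a model 2.5 degC too cold in absolute terms
# still matches the observed temperature *changes* perfectly once both
# records are converted to anomalies relative to their own baselines.
import numpy as np

years = np.arange(1900, 2001)
trend = 0.008 * (years - 1900)   # ~0.8 K of warming over the century

obs_abs = 14.0 + trend           # assumed observed absolute mean, degC
model_abs = 11.5 + trend         # model runs 2.5 degC too cold

# Anomalies relative to each record's own 1951-1980 mean
base = (years >= 1951) & (years <= 1980)
obs_anom = obs_abs - obs_abs[base].mean()
model_anom = model_abs - model_abs[base].mean()

print(f"max absolute-temperature error: {np.abs(model_abs - obs_abs).max():.2f} K")
print(f"max anomaly error:              {np.abs(model_anom - obs_anom).max():.2f} K")
```

The first error is 2.5 K; the second is zero, because the anomaly calculation removed the offset along with any information about the absolute state.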