Worst Case Scenario v. Fat Tail

April 18, 2014, 11:05 pm

“If we omit discussion of tail risk, are we really telling the whole truth?”  -Kerry Emanuel

This post is motivated by Kerry Emanuel’s recent essay entitled Tail Risk vs. Alarmism. Excerpts:

In assessing the event risk component of climate change, we have, I would argue, a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability. This means talking not just about the most probable middle of the distribution, but also the lower probability high-end risk tail, because the outcome function is very high there. 

Do we not have a professional obligation to talk about the whole probability distribution, given the tough consequences at the tail of the distribution? I think we do, in spite of the fact that we open ourselves to the accusation of alarmism and thereby risk reducing our credibility. 

Uncertainty monster simplification

Dr. Judith Curry

In my paper Climate Science and the Uncertainty Monster, I described 5 ways of coping with the monster. Monster Simplification is particularly relevant here:  Monster simplifiers attempt to transform the monster by subjectively quantifying or simplifying the assessment of uncertainty.

The uncertainty monster paper distinguished between statistical uncertainty and scenario uncertainty:

  • Statistical uncertainty is the aspect of uncertainty that is described in statistical terms. An example of statistical uncertainty is measurement uncertainty, which can be due to sampling error or inaccuracy or imprecision in measurements.
  • Scenario uncertainty implies that it is not possible to formulate the probability of occurrence of one particular outcome. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop over time. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood.
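
A minimal sketch of the distinction (illustrative only; the measurement numbers are hypothetical): statistical uncertainty supports a pdf estimated from repeated sampling, while scenario uncertainty yields only a discrete set of possibilities with no likelihood weights attached.

```python
import numpy as np

rng = np.random.default_rng(0)

# Statistical uncertainty: repeated measurements of one quantity support a
# pdf -- here, a bootstrap distribution of the sample mean.
measurements = rng.normal(loc=15.0, scale=0.4, size=50)  # hypothetical readings
boot_means = np.array([rng.choice(measurements, size=50, replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% interval for the mean: [{lo:.2f}, {hi:.2f}]")

# Scenario uncertainty: discrete, plausible-but-unverifiable futures.
# Deliberately a list, not a pdf: no a priori likelihoods are attached.
scenarios = ["low emissions", "medium emissions", "high emissions"]
print("scenarios (unweighted):", scenarios)
```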

Given our uncertainty and ignorance surrounding climate sensitivity, I have discussed the problems with attempting probabilistic estimates of climate sensitivity and with creating a pdf (see the previous post Probabilistic estimates of climate sensitivity). In my opinion, the most significant point in the IPCC AR5 WG1 report is the acknowledgment that a meaningful pdf of climate sensitivity with a central tendency cannot be constructed; hence the report provides only ranges with confidence levels (and avoids identifying a best estimate of 3C, as the AR4 did). The strategy used in the AR5 is appropriate in the context of scenario uncertainty: it identifies bounds for sensitivity and presents an assessment of likelihood (values less than 1C are extremely unlikely, and values greater than 6C are very unlikely).
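
In code form, the AR5-style treatment amounts to publishing calibrated ranges with likelihood labels rather than a density (a sketch; the 'likely' range below is the AR5 assessment, and the two bound statements are the ones quoted above):

```python
# Scenario-uncertainty representation of equilibrium climate sensitivity (ECS):
# calibrated bounds with likelihood labels -- deliberately no pdf, no central
# tendency, no best estimate.
ecs_assessment = {
    "likely": (1.5, 4.5),                 # degrees C; AR5 WG1 likely range
    "extremely unlikely": "ECS < 1.0 C",
    "very unlikely": "ECS > 6.0 C",
}

# A statistical-uncertainty treatment would instead commit to a full density
# with a central tendency -- exactly what the AR5 declined to provide.
```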

So I disagree with this statement by Kerry Emanuel:

We have a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability.

In my opinion, we have a strong professional obligation NOT to simplify the uncertainty by portraying it as a pdf when the situation is characterized by substantial uncertainty that is not statistical in nature. This issue is discussed in a practical way, with regard to climate science, in a paper by Risbey and Kandlikar (2007); see especially Table 5:

[Table 5 from Risbey and Kandlikar (2007), ranking levels of precision for characterizing uncertainty: from a full probability density function (#1) and well-defined bounds (#2) down through an expected sign or trend (#4) to effective ignorance.]

Climate sensitivity is definitely not characterized by #1; rather, it is characterized by #2 or #4. The lower bound is arguably well defined; the upper bound is not. The problem at the upper bound is what concerns Kerry Emanuel; I am arguing that the way to address this is NOT by considering the fat tail, extending out to infinity, of a mythical probability distribution.
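
One way to see why the unbounded fat tail is so treacherous (an illustrative calculation, not from the original post): if the assumed sensitivity distribution has a Pareto-like tail and damages grow at least as fast as the tail decays, the expected loss is not even finite, so the policy calculation is controlled entirely by a tail assumption that observations cannot constrain:

$$P(S > s) \sim s^{-\alpha} \quad\Longrightarrow\quad p(s) \sim \alpha\, s^{-\alpha-1},$$

$$\mathbb{E}[D] = \int^{\infty} D(s)\, p(s)\, ds \sim \alpha \int^{\infty} s^{\beta - \alpha - 1}\, ds = \infty \quad \text{for } D(s) \sim s^{\beta},\ \beta \ge \alpha.$$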

Nassim Nicholas Taleb’s black swan arguments emphasize the non-computability of consequential rare events using scientific methods (owing to the very nature of small probabilities).
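
A toy illustration of this non-computability (all numbers hypothetical): with N samples, an event of probability p near 1/N is missed entirely by a large fraction of studies, and even when it is seen, the relative error of the empirical estimate is of order 100%.

```python
import numpy as np

rng = np.random.default_rng(1)

p_true = 1e-4        # true tail probability -- in practice, the unknown
n_samples = 10_000   # p_true * n_samples = 1 expected occurrence

# Repeat the "study" many times to see how unstable the estimate is.
estimates = rng.binomial(n_samples, p_true, size=1_000) / n_samples

print("mean estimate:", estimates.mean())
print("studies that never see the event:",
      (estimates == 0).mean())        # roughly exp(-1), about 37%
print("relative std of the estimate:",
      estimates.std() / p_true)       # about 1/sqrt(n*p) = 1, i.e. 100%
```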

What’s the worst case?

I have spent considerable effort in identifying possible/plausible worst case scenarios, black swans and dragon kings.

Identifying possible/plausible worst case scenarios is, in my opinion, much more useful for flagging black swans and dragon kings in the climate system than the fat tail approach.

Graphic by Michael Quirke.

The philosophical foundation for thinking about ‘worst case scenarios’ is laid out in the work of Gregor Betz; see especially his paper What’s the Worst Case? The Methodology of Possibilistic Prediction. Excerpts:

Where even probabilistic prediction fails, foreknowledge is (at most) possibilistic in kind; i.e. we know some future events to be possible, and some other events to be impossible.

Gardiner, in defence of the precautionary principle, rightly notes that (i) the application of the precautionary principle demands that a range of realistic possibilities be established, and that (ii) this is required by any principle for decision making under uncertainty whatsoever.

Accepting the limits of probabilistic methods and refusing to make probabilistic forecasts where those limits are exceeded, originates, ultimately, from the virtue of truthfulness, and from the requirements of scientific policy advice in a democratic society.

‘Possibility’, here, means neither logical nor metaphysical possibility, but simply (logical and statistical) consistency with our relevant background knowledge.

A surprise of the first type occurs if a possibility that had not even been articulated becomes true. Hypothesis articulation is, essentially, the business of avoiding surprises (of this 1st type). There is, however, a second type of surprise that does not simply extend the picture we’ve drawn so far, but rather shakes it. Our scientific knowledge is constantly changing, whereas that change is not cumulative: scientific progress also comprises refuting, correcting, and abandoning previous scientific results. Now a readjustment of the background knowledge questions the entire former assessment of possibilistic hypotheses.

It is not clear to me whether there are general principles which can guide rational decisions in such situations at all. This, however, must not serve as an excuse for simplifying the epistemic situation we face! If a policy decision requires a complex normative judgement, then democratically legitimised policy makers have arguably a hard job; it is, nevertheless, their job to balance and weigh the diverse risks of the alternative options. That is not the job of scientific policy advisers who might be tempted to simplify the situation, thereby pre-determining the complex value judgements.
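
A minimal sketch of Betz’s possibilistic framing (the candidate values and the consistency test below are hypothetical): instead of weighting futures by probability, candidate scenarios are checked for consistency with background knowledge, and the “prediction” is simply the set that survives.

```python
# Possibilistic prediction as scenario filtering: keep every candidate that is
# consistent with background knowledge; assign no probabilities to survivors.

candidate_sensitivities = [0.5, 1.5, 3.0, 4.5, 6.5, 10.0]  # degrees C

def consistent_with_background(s: float) -> bool:
    """Stand-in for consistency checks against observations and theory."""
    return 1.0 <= s <= 6.0  # hypothetical bounds judged physically defensible

possible = [s for s in candidate_sensitivities if consistent_with_background(s)]
print("possibilistic prediction:", possible)  # a set of possibilities, not a pdf

# Betz's second type of surprise: the background knowledge itself changes,
# so the filter -- and the whole set of possibilities -- must be redone.
```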

Alarmism

I have written two previous posts addressing the idea that uncertainty strengthens the argument for action.

In my opinion, this argument is a stark and potentially dangerous oversimplification of how to approach decision making about this complex problem.  As Betz points out, there is no simple decision rule for dealing with this kind of deep uncertainty.

Alarmism occurs when possible, unverified worst case scenarios are touted as almost certain to occur.

Concluding remarks

My biggest concern is that by unduly (and almost exclusively) focusing on AGW, we are making a type 1 error (a surprise of the first type, in Betz’s terms): a possibility that has not been articulated might come true. These possibilities (e.g. abrupt climate change) are associated with natural climate variability, and possibly with its interaction with AGW.

Pretending that all this can be characterized by a fat tail derived from estimates of climate sensitivity is highly misleading, in my opinion.

So I agree with Kerry Emanuel that we should think about worst cases (e.g. black swans and dragon kings); I disagree with him regarding how this should be approached scientifically and mathematically. However, undue focus on unverified worst case scenarios as a strategy for building political will for a particular policy option constitutes undesirable alarmism.

 

PUBLIC COMMENT THREAD

  • http://ourchangingclimate.wordpress.com/ Bart Verheggen

    I’m a little baffled to see a scientist argue against “estimating and portraying the entire probability distribution to the best of our ability.”

    Emanuel’s main point was that just focusing on the most likely outcome (i.e. the centre of the pdf) ignores information on low probability/high impact events that may be highly relevant to society. Curry misrepresents his point, I find. It is more correct to say that Emanuel argues *against* simplifying or ignoring uncertainty by arguing against the sole focus on centre values.

    Finally, it is always important to realize that uncertainty is not the same as ignorance. Curry’s argument seems to blur this distinction.

    • http://atmo.tamu.edu/profile/JNielsen-Gammon John Nielsen-Gammon

      Bart (and Dana below) – Curry and Emanuel both agree that unlikely but very-high-impact events need to be articulated. Curry argues that ‘the best of our ability’ doesn’t include the ability to reliably quantify the PDF at this time.

      What do you think about WG1’s decision not to specify a PDF for climate sensitivity? It’s pretty much impossible to do a cost-benefit analysis without that PDF. Either they had irreconcilable differences within the writing committee, or they agreed with Curry that we didn’t know enough to estimate the whole thing.

      I see that decision as a failure attributable to the collision between Frequentist and Bayesian probability approaches. Frequentists are comfortable dealing with PDFs that are computable from a large number of samples and estimates of error. Bayesians can incorporate in a compatible manner both the probabilistic spread associated with a number of independent samples and the uncertainty associated with prior knowledge.
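
      A minimal sketch of the contrast drawn here (all numbers hypothetical): the frequentist interval uses the samples and their sampling error alone, while a conjugate normal Bayesian update folds in prior knowledge by precision weighting.

      ```python
      import numpy as np

      # Hypothetical independent estimates of a quantity (e.g. a sensitivity).
      samples = np.array([2.4, 3.1, 2.8, 3.6, 2.9])
      mean = samples.mean()
      se = samples.std(ddof=1) / np.sqrt(len(samples))

      # Frequentist: interval computed from the samples alone.
      print(f"frequentist: {mean:.2f} +/- {1.96 * se:.2f}")

      # Bayesian: normal prior (prior knowledge) combined with the same data
      # via precision weighting -- the conjugate normal-normal update.
      prior_mean, prior_sd = 3.0, 1.5          # hypothetical prior knowledge
      w_prior, w_data = 1 / prior_sd**2, 1 / se**2
      post_mean = (w_prior * prior_mean + w_data * mean) / (w_prior + w_data)
      post_sd = (w_prior + w_data) ** -0.5
      print(f"bayesian: {post_mean:.2f} +/- {1.96 * post_sd:.2f}")
      ```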

      For the WG1 to not estimate the PDF for climate sensitivity is either a rare example of scientists ethically refusing to overreach while being pressured to do so, or an abdication of responsibility to society. I think Curry would view it the first way. I would view it the second way if I thought the relevant WG1 writers were Bayesians. If they’re frequentists, I’m glad they didn’t try, because they would have gotten it wrong.

  • http://planet3.org/ Michael Tobis

    “undue focus” on anything is a mistake, of course. That’s a tautology, and Dr. Curry adds literally nothing by closing with arguing against something that is undue.

    The issue is what is undue, and the argument Curry makes here, that attention to worst cases is undue, is rather vague and misses the mark.

    She does this by virtue of a rather misplaced attack on Emanuel’s use of “probability distribution functions” or pdfs in his argument that we must attend to worst cases.

    It is true that IPCC has used two different scales in its proclamations this time, and Curry, pointing to Risbey and Kandlikar ’07, seems to be arguing in favor of this approach. That is, some distributions are characterized better than others, and one can thus, conceivably at least, distinguish between uncertainty and confidence. Whether this is a useful distinction in policy making is something worthy of disputation, though clearly it does communicate an important epistemic distinction. Formally, though, these are still probability distributions in a risk management sense.

    A proper understanding of information theory and decision theory indicates that there is always an optimum policy in the light of available information, and this optimum is formally obtained via the use of risk weighting, which in turn is based on something that is structurally the same as a probability distribution function and is conceptually closely related. This is often called a probability distribution function in practice, and in the Bayesian model of probability, all pdfs ultimately have this epistemic status.
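
    A sketch of the risk-weighting point (the damage function and both distributions are hypothetical): two outcome distributions with the same median can imply very different risk-weighted answers once consequences are folded in, which is why the tail, not the most likely outcome, controls the decision.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def loss(s):
        """Hypothetical convex damage function: damages grow as s**3."""
        return s ** 3

    # Thin-tailed vs fat-tailed outcome distributions, both with median 3.0.
    thin = rng.normal(3.0, 0.8, size=1_000_000)
    fat = 3.0 * rng.lognormal(0.0, 0.6, size=1_000_000)

    print("medians:", np.median(thin), np.median(fat))

    # Risk weighting: average the loss over each distribution.
    print("expected loss, thin tail:", loss(thin).mean())
    print("expected loss, fat tail:", loss(fat).mean())
    # The fat tail dominates the risk-weighted answer even though the two
    # "most likely" outcomes look alike.
    ```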

    Thus when Emanuel suggests we attend to plausible worst cases by referring to “fat tailed distributions”, he is using a shorthand familiar to anyone used to thinking statistically. One would hope that for all her focus on uncertainty, by now this would include Dr. Curry.

    By rejecting Emanuel’s formulation in this way, Curry is thus making a semantic distinction. It is a defensible semantic distinction. But it utterly avoids the actual content of Emanuel’s argument, which is that in setting policy, one does not generally focus concern on most likely outcomes but on most salient risks. Climate should obviously be no different.

    Curry’s response to the “uncertainty is not your friend” argument is thus utterly off point. There is no “big win” possible on the low sensitivity side to balance the enormous losses possible on the high sensitivity side. The intellectually coherent approach is that the less confidence one holds in climate science, the greater should be one’s urgency and commitment to implement emissions controls.

    Dr. Curry appears to be implying otherwise by innuendo but has presented no coherent argument to that effect.
