A number of years ago, during George W. Bush’s second term and by sheer happenstance, I ended up playing a round of golf with a Navy SEAL Commander (half the SEALs train here in San Diego). Obviously, much of his job was classified and he was very circumspect in what he shared. However, when I asked where or how I could become better informed about foreign policy, he recommended Ron Suskind’s book, The One Percent Doctrine.
The “one percent doctrine” (also called the “Cheney doctrine”) was established shortly after 9/11 in response to worries that Pakistani scientists were offering nuclear weapons expertise to Al Qaeda. Here’s the money quote from Vice President Dick Cheney: “If there’s a 1 percent chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It’s not about our analysis … It’s about our response.”
Thus in Cheney’s view and per subsequent policy, the war on terror required and empowered the Bush administration to act without the same level of evidence or analysis as might otherwise be necessary. As Suskind describes it: “Even if there’s just a 1 percent chance of the unimaginable coming due, act as if it is a certainty. It’s not about ‘our analysis,’ as Cheney said. It’s about ‘our response.’ … Justified or not, fact-based or not, ‘our response’ is what matters. As to ‘evidence,’ the bar was set so low that the word itself almost didn’t apply.”
In most matters, the standards for action or decision are decidedly higher than one percent. In a civil trial, something is deemed proven if it is established by a preponderance of the evidence — more than 50 percent. Criminal trials require a higher standard — beyond a reasonable doubt. For a conventional scientist running a statistical test of a hypothesis, the accepted threshold is typically a 95 percent confidence level (a 5 percent significance level).
Thus the idea that if and when a threat is deemed at least 1 percent viable, a response is required made for an enormous undertaking and dangerous business indeed. For example, because the likelihood of rolling a 12 with a pair of dice is 1 in 36, almost 3 percent, the Cheney doctrine could support betting the house on rolling boxcars. Since the number of threats at that plausibility level was (and still is) huge, the costs of the response and the risks involved in carrying out the policy were staggering. Bluntly stated, mere suspicion was enough to justify a major military response. Implicit in the SEAL Commander’s recommendation was the idea that the costs of this policy were too high and that we as a nation had lost focus on the major threats we faced by spending so much time and energy on far less likely ones.
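The dice arithmetic above is easy to verify. A quick sketch (standard dice probability, nothing here beyond what the example already states):

```python
from fractions import Fraction

# Enumerate the 36 equally likely outcomes of rolling two fair dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# "Boxcars" is the single outcome (6, 6).
boxcars = [o for o in outcomes if o == (6, 6)]

p = Fraction(len(boxcars), len(outcomes))
print(p)                # 1/36
print(float(p))         # ~0.0278, almost 3 percent
print(float(p) > 0.01)  # True: boxcars clears the one percent bar
```

In other words, a 1-in-36 long shot comfortably exceeds the doctrine’s threshold, which is exactly the point of the betting-the-house example.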
Even so, when we are talking about low-frequency but really high impact events, some precautions are certainly in order. If I am on a plane and there is a one percent chance that someone on board has a bomb, I want it checked out thoroughly and completely before we take off. The costs of doing so are low and the capacity to do so is readily available.
The appropriate policy in a given case, it seems to me, requires a balancing of the impact of the outcome with the likelihood of its occurrence, the cost of prevention and the available “bandwidth” to undertake the task. In my airplane example, I have no quarrel with a delay in order to check out the passengers even if the risk is low. But I would have a problem with going to war, risking thousands of lives and spending billions of dollars without really good reasons and supporting evidence.
In the retirement planning world we are faced with these kinds of questions routinely. Most retirement income plans based upon systematic portfolio withdrawals are designed around assumed failure rates of 5 to 10 percent. Lower failure rates are deemed too restrictive and expensive. But since the consequences of failure are so drastic, many argue (as I have) that such a risk level is too high and guaranteed income solutions should be added to the overall mix. Even so, where systematic withdrawals need not be static — where there is flexibility to make adjustments if and as problems or needs arise — the real level of risk is not nearly as high as advertised.
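Failure rates of the sort described above are typically estimated by simulation. A minimal Monte Carlo sketch follows; the return, volatility, inflation, and withdrawal figures are illustrative assumptions of mine, not values from this article, and a real plan would model returns and spending far more carefully:

```python
import random

def failure_rate(n_sims=10_000, years=30, start=1_000_000,
                 withdrawal=40_000, mean=0.05, stdev=0.12,
                 inflation=0.025, seed=42):
    """Estimate the failure rate of a static systematic-withdrawal plan.

    Returns the fraction of simulated retirements in which the portfolio
    is exhausted before the planning horizon ends. All parameter values
    are illustrative assumptions, not recommendations.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sims):
        balance, spend = start, withdrawal
        for _ in range(years):
            # Withdraw for the year, then apply a random annual return.
            balance = (balance - spend) * (1 + rng.gauss(mean, stdev))
            spend *= 1 + inflation  # inflation-adjusted withdrawal
            if balance <= 0:
                failures += 1
                break
    return failures / n_sims

print(f"Simulated failure rate: {failure_rate():.1%}")
```

Note that this models a static plan; allowing `spend` to be cut when the balance runs low — the flexibility the paragraph above describes — would lower the simulated failure rate, which is exactly the argument that the advertised risk overstates the real one.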
As I have repeatedly noted, the mathematical and probabilistic acuity of humans generally sucks. Serving clients effectively demands that we professionals supply the necessary expertise, analyzing the math fully and well, to help those clients make the difficult and complex decisions they face.