Everybody who gets married expects the marriage to work out. “You don’t ever think you’ll be apart,” Clueless actress Alicia Silverstone explained, after filing for divorce. Otherwise, why get married? As Robert De Niro proclaims in the opening voice-over to Martin Scorsese’s Casino: “When you love someone, you’ve gotta trust them. There’s no other way. You’ve got to give them the key to everything that’s yours. Otherwise, what’s the point?”
We stand in front of family and friends and willingly – eagerly – vow “forever” and mean it. In Casino, De Niro delivers that voice-over as he climbs into his 1981 Cadillac Eldorado Biarritz and a car bomb explodes. But we Americans, who overwhelmingly marry for love, don’t think our cars will blow up. We are utterly convinced that when problems crop up in our lives and relationships, as they inevitably do, we will be able to work things out.
Meanwhile, those of us who have managed to marry happily and well and to stay married for a long time [raises hand] are far too willing to pat ourselves on the back for it. Luck (or grace, depending upon your disposition) is much more a part of the success equation than any of us would care to admit. With the benefit of 20/20 hindsight, Alain de Botton suggests that a good question to ask one’s intended would be, “And how are you crazy?” However, like contestants on The Bachelor, which pitches a quest for true love amidst a harem of attractive women but has produced only one lasting relationship over 22 seasons, we may be willing to acknowledge that things often don’t work out, yet we remain convinced of “forever love” for ourselves. To be fair, the Gershwin brothers are pretty convincing.
We may recognize that divorce is commonplace. But whatever version of the statistical landscape brides and grooms might dutifully be able to recite, we simply don’t think those probabilities are personally applicable. For example, recent research found that study participants thought the average member of the opposite sex has about a 40 percent chance of cheating on his or her partner. But those same participants said their own partner had only a de minimis chance of cheating. When you fall in love, all bets are off. When I fall in love, it will be forever.
Of that we’re sure.
In his fascinating book, On Being Certain, neurologist Robert Burton systematically and convincingly shows that certainty is a mental state, a feeling like anger or pride that can prove useful, but that doesn’t dependably reflect anything like objective truth. One disconcerting finding he describes is that, from a neurocognitive point of view, our feelings of certainty about things we’re right about are largely indistinguishable from our feelings of certainty about things we’re wrong about.
All of which confirms the (usually unspoken) truism about humans – we’re often wrong but never in doubt. We’re as sure of the future of our relationships as we are that 2+2=4. However, mathematics is a closed system. As such, it is subject to deduction (demonstration), which means that we can ascertain the outcome – even when we do very difficult math – correctly and certainly. Deductive reasoning, which happens when one begins with an accepted premise and then moves toward establishing a conclusion based upon the previously “known” information, can offer a definitive conclusion.
Conversely, the universe is an open system, subject only to inductive inference. I cannot be sure that I have seen and analyzed every possible outcome. If I have seen a million swans over my lifetime and all of them were white, I might conclude that all swans are white. But I would be wrong, as black swans reside in Australia.
Accordingly, my conclusions are inferred, not demonstrated. Inductive reasoning, which is an extrapolation from the information we observe in order to arrive at a conclusion about something that we have not observed, cannot offer definitive results. However, it is how science must be done in a universe that is open.
The distinction between deductive and inductive reasoning is not commonly understood. Much of what we think of as brilliant deductive skills are actually brilliant inductive skills, which work out a lot better on television than in real life. Because induction is necessarily the way science works and advances, uncertainty is inevitable, no matter how “wicked smart” any of us are or how omniscient Sherlock Holmes may appear.
Let’s suppose you have a coin you suspect is loaded. You toss it five times and it comes up heads five times. We know deductively that five heads in a row from tossing a fair coin should happen about three percent of the time, less than the five percent threshold usually used for ascertaining statistical significance. However, the inductive inference you were looking to make (the coin is loaded!) is hardly established. Probability and frequency are not the same things. Five heads in a row while tossing a fair coin doesn’t happen a lot, but it does happen. Therefore, more testing is needed even to be reasonably sure of your (inductive) conclusion. As Dr. Johnson remarked to Mrs. Thrale: “It is more from carelessness about truth than from intentional lying, that there is so much falsehood in the world.”
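The coin-toss example above can be sketched in a few lines. The deductive half is exact arithmetic: five heads from a fair coin has probability (1/2)^5 = 1/32, about three percent. The inductive half is a simulation (trial counts and the random seed here are arbitrary choices for illustration):

```python
from fractions import Fraction
import random

# Deductive (demonstrative): the probability is exact and certain.
p_five_heads = Fraction(1, 2) ** 5
print(float(p_five_heads))  # 0.03125, about three percent

# Inductive (inferential): run many five-toss experiments with a
# genuinely fair coin and count how often all five come up heads.
random.seed(42)
trials = 100_000
hits = sum(
    all(random.random() < 0.5 for _ in range(5))
    for _ in range(trials)
)
print(hits / trials)  # close to 0.03125, but never exactly certain
```

Note what the simulation shows: a fair coin produces five straight heads roughly three times per hundred experiments, so observing one such run is weak evidence that the coin is loaded.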
With scientific reasoning, we move from idea to hypothesis to theory (which, contrary to much manifest ignorance on the subject, applies only to things that are exceedingly well established1), with (inductive) “proof,” meaning correlation, consistency and noncontradiction. Such conclusions, no matter how powerful, are always subject to modification or even outright rejection based upon further evidence gathering.
The ultimate key to finding scientific truth (always a small-t) is whether the conclusion actually works. Stating confidence in expected outcomes via the scientific process is easy. But actually getting a good result is much tougher to accomplish. Even so, when a scientific concept works – as when someone hears Samuel F.B. Morse tapping out “What hath God wrought?” in a new code on the other end of a telegraph line – it is only provisionally true, always subject to new evidence or understanding, to something that works better.
We want deductive (demonstrative, definitive) proof. Because we love certainty, it often feels (wrongly) like we have deductive proof. We are desperate for sure-fire, black-and-white, lead-pipe locks. Such sure things are incredibly rare in the real world, despite what salespeople and politicians tell us. In the real world, we usually have to settle for inductive (tentative) conclusions, as inductive logic is anything but guaranteed (see here for a particularly interesting example).
Accordingly, the great value of evidence is not so much that it points toward the correct conclusion (even though it often does), but that it allows us to show that some things are conclusively wrong. Never seeing a black swan in a million observations does not prove that all swans are white. However, seeing a single black swan conclusively demonstrates that all swans are not white. It is a “killer fact.”
The bottom line here is that we want and expect proof positive but must settle for and rely upon proof negative.
The simple fact of the matter is that the negative proof construct does not resonate with us – we intuitively dislike disconfirming evidence. We tend to neglect the limits of induction or ignore potential disconfirmation and jump to overstated conclusions, especially when they are consistent with what we already think. Few academic papers get published establishing that something doesn’t work. Instead, we tend to spend the bulk of our time looking (and data-mining) for an approach that seems to work or even for evidence we can use to support our preconceived notions. We should be spending much more of our time focused upon a search for disconfirming evidence for what we think (there are excellent behavioral reasons for doing so too).
As the great Charlie Munger famously said, “If you can get good at destroying your own wrong ideas, that is a great gift.” But we don’t do that very often or well, as illustrated by a variation of the Wason selection task. Test subjects are shown four cards, marked E, K, 4 and 7, and told that each card has a letter on one side and a number on the other. Which card(s) must be turned over to test the claim that if a card has a vowel on one side, it has an even number on the other?
Most people answer E and 4, but that’s wrong. For the posited statement to be true, the E card must have an even number on the other side of it and the 7 card must have a consonant on the other side. It doesn’t matter what’s on the other side of the 4 card. But most of us turn the 4 card over because we intuitively want confirming evidence. We don’t think to turn over the 7 card because we tend not to look for disconfirming evidence, even when it would be “proof negative” that a given hypothesis is incorrect. In a wide variety of test environments, fewer than 10 percent of people get the right answer to this type of question.
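The falsification logic of the task can be made mechanical. A card needs checking only if its visible face could be part of a counterexample to “vowel implies even”; the function name and structure below are mine, not part of the original task:

```python
# Which cards can falsify "if a card shows a vowel on one side,
# it has an even number on the other"?
VOWELS = set("AEIOU")

def must_flip(face: str) -> bool:
    """Return True if the card's hidden side could break the rule."""
    if face.isalpha():
        # A visible vowel might hide an odd number -> must check.
        # A visible consonant can never falsify the rule.
        return face.upper() in VOWELS
    # A visible even number can never falsify the rule;
    # a visible odd number might hide a vowel -> must check.
    return int(face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # ['E', '7']
```

The 4 card drops out precisely because nothing on its hidden side can contradict the rule, which is the disconfirming-evidence point in miniature.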
I suspect that this cognitive failing is a natural result of our constant search for meaning in an environment where noise is everywhere and signal vanishingly difficult to detect. Correlation does in fact correlate, after all. Moreover, randomness is difficult for us to deal with. We are meaning-makers at every level and in nearly every situation. Yet, as I have noted often and as my masthead proclaims, information is cheap2 while meaning is expensive (and therefore elusive). Accordingly, we tend to short-circuit good process to get to the end result – not so coincidentally the result we wanted all along.
In the investment world, as in science generally, we need to build our processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the evidence allows. Accordingly, we should always be on the lookout for disconfirming evidence — proof negative — even though doing so is counterintuitive pretty much all the time. For investors and advisors, that means a careful focus on broad and deep diversification (to limit the impact of errors), significant reliance on beta (because we will be wrong a lot), fees and costs (because they compound relentlessly whether we’re right or wrong), as well as portfolio ballast and clear execution rules (to try to guard against our tendency to be impulsive), while avoiding single points of failure (particularly for big, non-insurable risks). Old school analysis and judgment still matter, perhaps more than ever since the pile of available data has gotten so large. However, our conclusions need to be consistent with and supported by the data, no matter how bizarre we think the numbers might be.
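Why diversification “limits the impact of errors” can be sketched with a toy simulation. The numbers here are made up (an 8 percent expected return with 30 percent volatility per bet) and the bets are assumed independent, which real markets rarely grant:

```python
import random
import statistics

random.seed(0)

def one_bet() -> float:
    # A single risky bet: +8% expected return, 30% volatility.
    return random.gauss(0.08, 0.30)

def portfolio(n_bets: int) -> float:
    # Equal weight across n independent bets.
    return sum(one_bet() for _ in range(n_bets)) / n_bets

# Compare the dispersion of outcomes for one bet vs. 25 bets.
single = [one_bet() for _ in range(10_000)]
diversified = [portfolio(25) for _ in range(10_000)]

spread_1 = statistics.stdev(single)
spread_25 = statistics.stdev(diversified)
print(round(spread_1, 3), round(spread_25, 3))
```

The expected return is the same either way, but the diversified portfolio’s range of outcomes is far narrower, so any single wrong idea does far less damage; correlated bets would shrink that benefit.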
For advisors, in particular, that means prioritizing the management of client behavior over finding great new ways to manage money. If our job is to “earn big returns” or “beat the market,” the odds are against us. If our job is to “keep clients from screwing up,” we have a fighting chance. That focus provides both a better chance of success and makes more of a difference overall. It’s like Nobel laureate Daniel Kahneman’s opening to his classic book, Thinking, Fast and Slow: “The premise of this book is that it is easier to recognize other people’s mistakes than our own.”
1 Less-than-scrupulous operators use the inherent uncertainties of the scientific process to undercut any knowledge they find inconvenient. That is an enormous problem, obviously. It is no coincidence that many of the key players in the campaign to deny, in precisely that way, that smoking causes cancer are now involved in opposing the science relating to climate change. As Columbia’s Kate Marvel explained: “Too often, we scientists find ourselves asked to ‘debate’ people who believe (simultaneously) that the Earth is cooling, that it’s warming but the warming is natural, that the warming is human-caused but beneficial, and that NASA somehow made it all up in between faking moon landings and covering up alien abductions. These things cannot all be true. Climate denial is like bad science fiction: there’s no internal logic, the characters aren’t compelling, and you can see the scary things coming from miles away.”
2 The size of groups that could work together effectively was long limited by “Dunbar’s Number,” which holds that groups of more than about 150 people require institutional and technological innovations to cooperate effectively. By removing those limitations, modern communications technologies have transformed investing from a lethargic local affair into an instant global one.