The Magnificent Seven is a terrific 1960 movie “western” about seven gunfighters hired to protect a small Mexican village from marauding bandits. A re-make is currently in the works and the original is itself a re-make of Akira Kurosawa’s Japanese classic, Seven Samurai. Meanwhile, Maleficent is the “Mistress of All Evil” in Sleeping Beauty who curses the infant princess to prick her finger on the spindle of a spinning wheel and die before the sun sets on her sixteenth birthday. Today I’m offering up a mash-up from these movies to outline what I’m calling the Maleficent 7 – seven inherent human problems and limitations that impede our ability to make good decisions generally, and especially about money.
1. Connecting Correlation with Causation. In the investment world, every piece of advertising must state that past performance is not indicative of future results. It’s the codification of the principle that correlation does not imply causation. Ongoing correlation gives us a hint that the correlated things might be connected, but it ain’t necessarily so. The higher the number of variables, the less likely it is that the connection is causal – or that it will persist over time. The famous Latin expression of the fallacy we fall prey to in this area is post hoc ergo propter hoc, which means “after this, therefore because of this.” The Big Bang Theory’s Sheldon Cooper (played by Jim Parsons) illustrates its usage below.
We aren’t nearly as good at ferreting out causation as we like to think. The chart below demonstrates the difficulty as well as any.
To bring the issue home a bit in our information-rich age (as I so often say), information is cheap while meaning is expensive. Some connections are pretty obvious. Cake-buying will likely correlate with ice cream-buying. A strong and growing economy will generally correlate with a healthy stock market. Some connections are much less intuitive, but no less predictive. But often the supposed connections are simply random noise. Many a scam artist can backtest some data and come up with a “system” that will make a lot of money (sadly, for the scammer and not for you). The highly popular bestseller The Bible Code provides a good popular example of a data-mining failure that can be readily debunked.
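The backtesting trap is easy to demonstrate with a toy simulation (the numbers are made up for illustration; nothing here models real markets): generate many “strategies” that are pure coin flips, then cherry-pick the winner.

```python
import random

random.seed(42)

# 1,000 "strategies" that are pure coin flips: each trading day a
# strategy randomly gains or loses 1 percent. None has any real edge.
N_STRATEGIES = 1_000
N_DAYS = 252  # roughly one trading year

def random_track_record(days):
    """Final value of $1 run by a strategy with zero skill."""
    value = 1.0
    for _ in range(days):
        value *= 1.01 if random.random() < 0.5 else 0.99
    return value

results = [random_track_record(N_DAYS) for _ in range(N_STRATEGIES)]
best = max(results)

# The best of many no-skill strategies still looks impressive in a
# backtest -- exactly the pattern a "system" seller can exploit.
print(f"best no-skill 'strategy' returned {(best - 1) * 100:+.1f}% in a year")
```

Run it and the top coin-flip “system” sports a handsome one-year return. Shown a single backtest in isolation, you’d have no way to know it was pure noise.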
Correlation isn’t causation. It’s a hint or a possibility, but no sure thing. This entire process is so difficult because we have so much trouble isolating causation. It’s easy to see that bad traffic can cause one’s commute to be longer than normal, but ascertaining causation where there are huge numbers of variables can be astonishingly difficult. Finding a causal chain in the hard sciences can be made easier by creating experiments that limit the variables or even eliminate all other possible variables. That’s simply not possible with the infinite numbers of variables impacting the markets.
Investors are always on the look-out for patterns that will persist and offer opportunities. Most never work. Some work for a time but are copied so much that the advantage disappears. A few (such as value, size, quality and momentum) have worked and continue to work – they persist (in statistical terms). But even then, they could stop working at any time. There’s no guarantee. Correlation does not imply causation. Past performance does not guarantee future results.
2. Confusing Luck and Skill. “Random” means being governed by or involving equal chances for each of the actual or hypothetical members of a population, as well as having been produced or obtained by such a process and therefore unpredictable in detail. In investing, that unpredictability really matters, but so does the “in detail” qualifier. As I have stressed before, there are so many variables involved that investing can’t deal in anything like certainty. We might want or even believe in a sure-thing investment, but the best we can do is play the probabilities. As in The Hunger Games, the idea is for the odds to be ever in your favor.
The concept is pretty easy to illustrate. The probability of drawing any given poker hand is calculated by dividing the number of ways to draw the hand by the total number of five-card hands. Therefore, since there are four different ways to draw a Royal Flush (one for each suit), the probability of doing so is 4/2,598,960, or about 0.000154 percent. That’s remarkable precision in the aggregate, but we still can’t know when one will be drawn (assuming a fair deck and dealer). That’s why it’s said to be unpredictable in detail.
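The hand-count arithmetic checks out in a few lines of Python:

```python
from math import comb

# Total number of distinct five-card poker hands: C(52, 5)
total_hands = comb(52, 5)

# A Royal Flush (10-J-Q-K-A of the same suit) can be drawn one way per suit.
royal_flushes = 4

probability = royal_flushes / total_hands
print(total_hands)                  # 2598960
print(f"{probability * 100:.6f}%")  # 0.000154%
```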
In blackjack, a player who asks for a hit on 18 deserves to lose, but very occasionally draws a three. Or in poker, sometimes a player fills an inside straight. It’s against the probabilities and can’t be predicted (again, assuming a fair deck and dealer) – it’s utterly random – but it happens.
In the markets, examples are also easy to offer. Sildenafil citrate, now sold as Viagra, was merely a hypertension drug when Pfizer discovered certain significant side effects (essentially by accident) and ended up with one of the more profitable drugs ever. Talk about random!
As Nassim Taleb makes clear, we often mistake randomness for agency. In fact, the prevalent self-serving bias depends upon it, at least when the results are desirable. Accordingly, we need to account properly for randomness in every investing situation because, much of the time, it really is random.
As I have noted before, in Major League Baseball, over a 162-game season the best teams win roughly 60 percent of the time. But over shorter stretches, it’s not unusual to see significant streaks. When the expected value for a team over a whole season hovers around 50:50 (or slightly above or below that level), and winning 60 percent of the time makes a team great, there is a lot of randomness in baseball. That idea makes intuitive sense – the difference between ball four and strike three can be tantalizingly small (even if/when the umpire gets the call right); so can the difference between a hit and an out. Luck is a huge factor in investment returns too, irrespective of manager. Indeed, most of the annual variation in one’s investment performance is due to luck, not skill. The fact that “the market” (however defined) gained or lost – say – 10 percent is far more important to one’s investment performance than the specific investments held.
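A toy simulation (made-up numbers, not real standings) shows how streaky even a genuinely great team looks over short stretches:

```python
import random

random.seed(7)

WIN_PROB = 0.60  # a great team's true per-game win probability
SEASON = 162

def longest_losing_streak(results):
    """Length of the longest run of consecutive losses."""
    longest = current = 0
    for won in results:
        current = 0 if won else current + 1
        longest = max(longest, current)
    return longest

# Simulate one season for a team that truly wins 60% of its games.
season = [random.random() < WIN_PROB for _ in range(SEASON)]
wins = sum(season)

print(f"wins: {wins} of {SEASON}")
print(f"longest losing streak: {longest_losing_streak(season)}")
```

Even with a fixed 60 percent chance of winning every single game, a simulated season reliably contains multi-game losing skids that would have talk radio demanding the manager’s head.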
Per Leonard Mlodinow, all of this means that we are readily tricked into thinking that random patterns are meaningful. So we build models that are far more sensitive to our initial assumptions than we realize, we make approximations that are cruder than we realize, we focus on what is easiest to measure rather than on what is really important, we build models that rely too heavily on statistics without enough theoretical understanding, and we unconsciously let biases based on expectation or self-interest affect our analysis.
3. We *love* shiny objects. We *love* shiny objects…
…often to our detriment.
We simply can’t resist what’s beeping and flashing right in front of our noses. According to data collected by the Android app Locket, the average user checks his or her phone over 100 times per day, often when we could be doing something worthwhile (like engaging with other humans). We are easily distracted and make choices accordingly, which readily explains why “impulse items” are at the front of the grocery store while milk is in the back corner. It’s why we so often can’t delay gratification.
We should be focusing on things that are really important far more often than we do. Those “shiny objects” are often to blame.
4. We’re lousy at math. We are all prone to innumeracy, which is “the mathematical counterpart of illiteracy,” according to Douglas Hofstadter. It describes “a person’s inability to make sense of the numbers that run their lives.” We intuitively tend to think that if we start with $1,000 and suffer a 50 percent loss on Day 1 but make 50 percent back on Day 2, we’re back to even. However, were that to happen, our $1,000 would be reduced to a mere $750 (more on the “arithmetic of loss” here). Similarly, a sum of money growing at 8 percent simple interest for ten years is the same as 6 percent (6.054 percent to be exact) compounded over that same period. Most of us have trouble thinking in those terms.
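Both calculations are easy to verify:

```python
# A 50% loss followed by a 50% gain does not get you back to even.
start = 1_000
after_loss = start * (1 - 0.50)       # $500
after_gain = after_loss * (1 + 0.50)  # $750, not $1,000
print(after_gain)  # 750.0

# 8% simple interest for ten years equals ~6.054% compounded annually:
# both turn the principal into 1.8x, but compounding gets there on a
# lower stated rate because gains earn gains.
simple_multiple = 1 + 0.08 * 10  # 1.8x the principal
compound_rate = simple_multiple ** (1 / 10) - 1
print(f"{compound_rate * 100:.3f}%")  # 6.054%
```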
If we broaden our scope from simple math to probability, we fare even worse. For example, most people would consider it an unlikely coincidence if any two people were to share the same birthday in a room with 23 people in it. Since one would need 367 people in a room (due to leap year) to be certain of finding two people with the same birthday, it seems to make sense that there is only about a 6.27 percent chance of that happening with only 23 people in a room (23 divided by 367). However, 50 percent probability is reached with just 23 people in a room and 99 percent probability with only 57 (see more on the “birthday problem” here).
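The exact probability is easy to compute by multiplying out the chance that all the birthdays in the room are distinct (ignoring leap years for simplicity):

```python
def prob_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming every day is equally likely (leap years ignored)."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(f"{prob_shared_birthday(23):.4f}")  # 0.5073 -- past 50% at 23 people
print(f"{prob_shared_birthday(57):.4f}")  # 0.9901 -- past 99% at 57 people
```

The intuition fails because what matters isn’t 23 people against 365 days; it’s the 253 distinct *pairs* of people among those 23, each pair a fresh chance at a match.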
Interestingly, Nobel laureate Daniel Kahneman doesn’t blame our difficulties with probability on innumeracy. Instead, he says, “to compute probabilities you need to keep several possibilities in your mind at once. It’s difficult for most people. Typically, we have a single story with a theme. People have a sense of propensity, that the system is more likely to do one thing than the other, but it’s quite different from the probabilities where you have to think of two possibilities and weigh their relative chances of happening.” Contingencies and (perhaps random) consequences don’t correspond to the way we like to see the world. Pretty much all the time, we are looking backward, creating a pattern to fit events, and constructing a story that explains what happened and what caused it to happen – fitting what we see (or assume we see) into a preconceived narrative.
Most of us recognize the need to make our investment process data-driven at every point. But it’s a difficult principle to implement due to our being so mathematically challenged.
5. We overrate our ability to impact the future. Most of us overrate our own capacities and exaggerate our abilities to shape the future. That flaw, per Daniel Kahneman, is the planning fallacy. The planning fallacy is our tendency to underestimate the time, costs, and risks of future actions and at the same time to overestimate the benefits thereof. It’s at least partly why we underestimate bad results. It’s why we think it won’t take us as long to accomplish something as it does. It’s why projects tend to cost more than we expect. It’s why the results we achieve aren’t as good as we expect.
The planning fallacy projects our fanciful and self-serving renderings forward with the idea that the future can somehow be managed—and perhaps controlled—despite the lack of any actual historical support for the notion. As Adam Gopnik sagely pointed out in The New Yorker, “[w]hat history actually shows is that nothing works out as planned, and that everything has unintentional consequences.” Indeed, “the best argument for reading history is not that it will show us the right thing to do in one case or the other, but rather that it will show us why even doing the right thing rarely works out.”
This overriding problem is why I take three trips to Home Depot on Saturdays and why it takes me all day to finish a household chore I expected to take maybe an hour (which then doesn’t work right or look right). As John Lennon put it, “Life is what happens to you while you’re busy making other plans.” Things rarely turn out the way we expect. We never have everything covered. Life happens.
6. We are beset by cognitive flaws. We are plagued by a long list of cognitive and behavioral biases that impede our ability to make good decisions. I have written about them often — so often that any sort of summary is probably unnecessary. But a few reminders don’t hurt.
Confirmation bias means that instead of the impartial judges of information that we like to think we are, we’re much more like attorneys looking for anything or any argument we think we can exploit without much regard for whether it’s true. It’s why Fox News and MSNBC viewers tend to see each other as some version of stupid, delusional and evil, no matter the situation or circumstances. Compounding the error, nearly everyone confuses the claims they accept as true with their personal identity and worth, giving us a public square flooded with outrage.
Optimism bias means that our subjective confidence in our own judgment is reliably greater than our objective accuracy – we think we’re right far more often than we are. Fully 94 percent of college professors believe they have above-average teaching skills (anyone who has gone to college will surely disagree with that). Since 80 percent of drivers say that their driving skills are above average, I guess none of them drive on the freeway when I do. While 70 percent of high school students claim to have above-average leadership skills, only 2 percent say they are below average, all of them apparently taught by above-average math teachers (see below).
Our self-serving bias pushes us to see the world such that the good stuff that happens is my doing (like the winning coach who says “We had a great week of practice, worked hard and executed today”) while the bad stuff is always someone else’s fault (“It just wasn’t our night” or “We would have won if the refereeing hadn’t been so awful” or “We couldn’t have foreseen that 100-year flood market crisis”). Our loss aversion means that we feel losses between two and two-and-a-half times as strongly as gains. It favors inaction over action and the status quo over any alternative. It’s one reason why football coaches are so frustratingly cautious and “go for it” far less often than the data says they should.
Intuitively, we think the more choices we have the better. However, the sad truth is that too many choices can lead to decision paralysis due to information overload. It’s why participation in 401(k) plans among employees decreases as the number of investable funds offered increases and why New Jersey diner menus (this one, for example) can be so frustrating. It’s also why we so readily rely upon heuristics (rules of thumb) rather than getting down and dirty with the data.
We routinely run in herds — large or small, bullish or bearish. Investment institutions herd even more than individuals in that investments chosen by one institution predict the investment choices of other institutions by a remarkable degree. It’s why market bubbles occur – in tulips, baseball cards, internet stocks and real estate. It’s why the NFL is a copycat league.
As I have emphasized repeatedly, we like to think that we see the world the way it really is. The sad truth is that we tend to see the world the way we really are.
7. Our personal insight is extremely limited. Try this experiment sometime. Ask an impartial observer – someone with no connection to any of the players – to go with you to a youth sporting event involving one of your children or grandchildren. Then go to a game involving no kids or families you know with the same companion. In each case, try to behave as you normally would but also watch and listen carefully, particularly to the conversations, attitudes and actions of the parents.
After both games are over, discuss what you experienced with the observer — second game first. Ask the observer to read this entire entry before the first game and to direct your post-game analysis and conversation accordingly. But you should read no further until after both games are over, nerves are calmed, tempers cooled and the games discussed.
When discussing the second game – where you didn’t know anyone – you will no doubt recall parents waxing eloquent about Johnny or Ann’s oh-so-bright athletic futures. No matter the age of the children, you will hear about inevitable league titles, professional prospects and worst case scenarios involving college scholarships. I can’t begin to tell you how many times I’ve heard the parents of elementary school kids who play club sports justify the expense as a means to obtain a sure-thing scholarship.
Your mileage won’t vary.
If Jimmy fails to score he is injured, sick, unlucky or the victim of a jealous teammate who won’t pass. If Alice isn’t playing regularly, the coach is obviously an idiot. The officiating sucks, nearly everybody comments on it, and several are loud; at least one parent is even pretty nasty about it. If Jean doesn’t make the All-Star team (or the varsity) lawyers will be consulted, potential remedies considered and threats offered. Any evidence suggesting that Jerry is not a future Hall-of-Famer (and remember, there is only a 0.03 percent — not 3 percent — chance of a high school basketball player — much less a youth player — playing professionally, per the NCAA) is thoroughly and utterly rejected (if not refuted). Violence may be threatened.
You will have probably chuckled knowingly at what transpired as it is so perfectly and profoundly predictable.
This test will establish the reality of confirmation bias – the idea that we tend to see only what we want to see, as noted above – beyond any reasonable doubt. It will likely seem to have been both obvious and ridiculous. It’s parents being parents, par for the course. How could they be so silly?
You and the observer probably even commented upon the abject silliness during the game. I hope you were discreet.
Ben Roethlisberger, two-time Super Bowl winner and Pro Bowl quarterback of the Pittsburgh Steelers, didn’t play QB in high school until his senior year because the coach’s son – one year older – was the starter. The coach was no doubt certain that he was making an entirely objective evaluation and doing what was best for the team. He still thinks he was right (of course). That’s confirmation bias. The absurdity surprises none of us.
Next discuss your game. Your – oops, I mean your child’s – game. In the unlikely event that you didn’t cheat and read this entire entry before embarking on the experiment, and if you are scrupulously honest, you will recall that at your child’s game, you were no better than the parents at the second game. You likely would have denied that you were like them, of course. And if you were a jerk, you made excuses as to why that game was unusual (“the officiating was especially bad” or “Alicia’s coach is particularly incompetent”).
Perhaps you were careful not to blame the coach entirely for his (it’s almost always a him, even for girls) inadequacies (maybe he isn’t experienced enough or well enough trained). Perhaps you weren’t as loud as others or only thought what others said out loud. Perhaps you’re really an expert where the previous parents weren’t and thus qualified to make the observations. Perhaps you’re even embarrassed at the realization of what you thought and did.
As noted above, we all suffer from cognitive and behavioral biases that poison any realistic hope we may have for objective analysis, especially about those closest to us and in whom we have the most invested. As the economists say, we have really strong priors (illustrated hysterically if very crudely here). If we are aware, we will frequently recognize it in others – especially the most egregious examples. But we will almost never recognize it in ourselves. That’s because everybody else is expressing opinions while we are stating facts, or so we are convinced. We are all derps who herp far too often. Here’s Sheldon Cooper again.
“Don’t you think if I were wrong I’d know it?”
That reality – that failing – is bias blindness. It’s our inability or unwillingness, even if and when we see it in others, to see the biases that beset us. Bias is everywhere. So is bias blindness, no matter how willing – and even eager – we are to deny it.