The Magnificent Seven is a terrific 1960 western about seven gunfighters hired to protect a small Mexican village from marauding bandits. A re-make is currently in the works, and the “original” is itself a re-make of Akira Kurosawa’s Japanese classic, Seven Samurai. Meanwhile, Maleficent is the “Mistress of All Evil” in Sleeping Beauty who curses the infant princess to prick her finger on the spindle of a spinning wheel and die before the sun sets on her sixteenth birthday. Today I’m offering up a mash-up of these movies to outline what I’m calling the Maleficent 7 – seven inherent human problems and limitations that impede our ability to make good decisions generally, and especially about money.
Over 50 years ago, Edward Lorenz created an algorithmic computer weather model at MIT to try to provide accurate weather forecasts. During the winter of 1961, as recounted by James Gleick in Chaos: Making a New Science, Professor Lorenz was running a series of weather simulations using his computer model when he decided to repeat one of them over a longer time period. To save time (the computer and the model were primitive by today’s standards) he started the new run in the middle, typing in numbers from the first run for the initial conditions, assuming that the model would provide the same results as the prior run and then go on from there. Instead, the two weather trajectories diverged on completely separate paths.
After ruling out computer error, Lorenz realized that he had not entered the initial conditions for the second run exactly. His computer stored numbers to an accuracy of six decimal places but printed the results to three decimal places to save space. Lorenz had entered the rounded-off numbers for his second run starting point. Astonishingly, this tiny discrepancy altered the end results dramatically.
This finding, reached with a highly simplified model and confirmed by further testing, allowed Lorenz to make the otherwise counterintuitive leap to the conclusion that highly complex systems are not ultimately predictable. This phenomenon (“sensitive dependence on initial conditions”) came to be called the “butterfly effect,” from the idea that a butterfly flapping its wings in Brazil can set off a tornado in Texas. Lorenz built upon the work of the late 19th century mathematician Henri Poincaré, who demonstrated that the movements of as few as three heavenly bodies are hopelessly complex to calculate, even though the underlying equations of motion seem simple.
Accordingly and at best, complex systems – from the weather to the markets – allow only for probabilistic forecasts with significant margins for error and often seemingly outlandish and hugely divergent potential outcomes.
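Lorenz’s discovery is easy to reproduce in a few lines. The sketch below uses the logistic map rather than Lorenz’s actual weather model (a simplifying substitution on my part), but it shows the same sensitive dependence: two starting values that differ only past the third decimal place soon follow completely different paths.

```python
# Illustrative sketch (not Lorenz's actual model): the logistic map, a
# textbook example of sensitive dependence on initial conditions. Two
# starting points differing only in the later decimal places diverge
# completely after enough iterations -- the same effect Lorenz stumbled
# on when he re-entered rounded-off numbers.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x) in the chaotic regime (r = 3.9)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

full = logistic_trajectory(0.506127)   # the full six-decimal starting value
rounded = logistic_trajectory(0.506)   # the rounded-off printout value

for step in (0, 10, 30, 50):
    print(step, round(full[step], 4), round(rounded[step], 4))
```

Early in the run the two trajectories are indistinguishable; a few dozen iterations later they bear no resemblance to one another, which is exactly why point forecasts of complex systems degrade so quickly.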
On a fine morning 100 summers ago in Sarajevo, an automobile driver ferrying the Austro-Hungarian Archduke Franz Ferdinand made a wrong turn off the main street into a narrow passageway and came to a stop directly in front of a teenaged activist member of the Serbian terrorist organization Black Hand. Gavrilo Princip drew his pistol and fired twice. The archduke and his wife fell dead. Within hours, World War I was well on its way to seeming inevitable (or at least necessary), all because of a wrong turn. And the idea that history is rational and sheds light on an intelligible story, much less a story of inevitable and inexorable advance, was also shot dead, as dead as the archduke himself.
I hope you’ll read it all.
Investment Belief #4: Randomness must be actively accounted for as part of the investing equation
For at least the course of my lifetime, we Americans – all of whom are said to be “created equal” – have held to a straightforward construct of the American Dream, where it’s always morning in America. In its telling, we are a people of unlimited power, promise and potential. The keys to our success are not status, wealth or connections, but rather ability, ambition, and drive. Anybody can become whatever he or she wants. Life is thus a ladder, there to be climbed by anybody willing to step up.
James Truslow Adams coined this evocative phrase in his 1931 book, The Epic of America. His American Dream is “that dream of a land in which life should be better and richer and fuller for everyone, with opportunity for each according to ability or achievement. It is…a dream…in which each man and each woman shall be able to attain to the fullest stature of which they are innately capable, and be recognized by others for what they are, regardless of the fortuitous circumstances of birth or position.”
Over the last few decades, a darker vision has grown up. Many of those who claim to have “done their part” by going to school, gaining new skills or working hard do not perceive themselves to have received rewards commensurate with their efforts and abilities, leading to great disappointment amid remarkable income inequality. Indeed, since at least the dawn of the 21st century, this Dream has turned more nightmarish and been deemed outside the reach of many. Investments haven’t seemed to live up to their earlier promise either, with two major financial crises since 2000 terrifying an entire generation of potential investors. It’s as if (in the words of the critic Andy Greenwald), by some cruel trick, we have come to realize too late that someone or something has tipped the ladder of success sideways, the rungs casting shadows tall as prison bars.
As a consequence, political activists of all stripes actively point blame and propose solutions. But on a personal level, we are all prone to self-serving bias – our tendency to attribute our successes to our own effort and skill but to attribute less desirable outcomes to bad luck. In point of fact, and irrespective of the political conclusions one draws from the current state of the American Dream, luck (and, if you have a spiritual bent, grace) plays an enormous role in our lives – both good and bad – just as it plays an enormous role in many specific endeavors, from investing to poker to coin-flipping to winning a Nobel Prize. We don’t like to think that much of what happens (and happens to us) is the result of luck – i.e., randomness. We hate the idea of so much that is so important being outside of our control. But how we feel about a given proposition tells us precisely nothing about whether or not it is true, and there is no disputing the facts. Randomness is an important factor in our lives and, despite how counterintuitive and contradictory it may sound, we need to plan accordingly.
I often write about the relative importance of luck and skill in various endeavors, including sports and investing (here, for example) and how the outcomes in such things — heavily influenced by luck — can cause us to miss important aspects of the process involved, which is much more important in the long run (for example, here). This past week’s loss by my San Diego Chargers to the Tennessee Titans provides a terrific example of how these things work.
Titans quarterback Jake Locker is 2-1 after three games and has thrown zero interceptions so far this season. He also led the Titans on a 94-yard drive for a touchdown to beat the Bolts as time expired (against a very soft zone defense — arrrggggg!). However, Locker’s overall statistics this season are virtually identical to last year’s mediocre numbers when the Titans had a blah 6-10 record (as Grantland’s Bill Barnwell has carefully pointed out). Is he much improved or not?
It’s too early to tell for sure, but the following play (courtesy of Bolts from the Blue) offers one good data point and a helpful jumping-off point toward my still quite tentative view that the overall statistics may be a better gauge of where Locker is than his won-loss record and lack of picks so far this season.
Marcus Gilchrist of the Chargers flat-out drops an interception with just seconds left in the game that would have secured the win for my guys. It isn’t on the level of the late-game Marlon McCree post-interception fumble that cost the Chargers a 2007 play-off game to New England (I was in the stands for that one), but it’s still pretty bad. Obviously, the play isn’t Locker’s fault in that he hit Delanie Walker in stride and Walker tipped the football straight to Gilchrist. But think for a bit what this play demonstrates.
If Marcus makes the pick, the outcome (Titans loss) could cause us to conclude that Locker isn’t really progressing. We’d look at his losing record and think that he could only score 13 points at home against the Chargers and couldn’t get it done in the two-minute drill. But since Gilchrist dropped the ball and the Titans went on to win, we may now forget that Locker badly missed a wide open Damian Williams in the end zone just before the game-winning play, didn’t make a great throw on the final play (although it was pretty good) and that Locker was just 2-for-11 on throws that traveled 15 yards or more in the air for the game.
These events provide great examples of how outcomes can disguise crucial elements of the process that — together with a significant amount of randomness — dictates those outcomes. For example, the Gilchrist drop shows how and why players who outperform for a given stretch tend to regress toward the mean. That’s also why, even though the sample size is much too small to be sure, and even granting Locker’s considerable talent and, as a very young quarterback, his better-than-average chance of significant improvement, it seems more probable that Locker is the player we thought he was last year than a budding star, despite some very good outcomes to this point in the season.
The self-serving bias is our tendency to see the good stuff that happens as our doing (“we had a great week of practice, worked really hard and executed on Sunday”) while the bad stuff is rarely our fault (“It just wasn’t our night” or “we simply couldn’t catch a break” or “we would have won if the refereeing hadn’t been so awful”). Thus desirable results are typically due to our skill and hard work — not luck — while lousy results are outside of our control and frequently the offspring of being unlucky.
Two fine recent books undermine this outlook by (rightly) attributing a surprising amount of what happens to us — both good and bad — to luck.
When my kids were teenagers, if something was random, that was a good thing. A really good thing, in fact. Something funny was random. A good party was random. Being more than a bit of a fussbudget, I objected to such usage. I didn’t think it was correct.
But I was wrong.
Hurricane/Superstorm Sandy rocked the eastern seaboard last week to devastating effect. In a significant instance of good planning, markets and schools were closed, states of emergency declared and mandatory evacuations begun well before the storm made landfall. Yet nearly until the storm reached land in New Jersey last Monday, I heard lots of grousing about alleged hysteria and overreaction with respect to the precautions and preparations being undertaken to mitigate potential damage (see below for a prominent example).
Some went so far as to defy evacuation orders, and some people paid for doing so with their lives. Once the storm actually hit and caused serious damage – albeit no longer officially as a hurricane, but as a “superstorm” – the complaining stopped. Fortunately, the governmental disaster preparedness organization seems to have performed well overall. You can read about these events in many venues, including here, here and here.
The pre-crisis grousing and the refusal of so many to evacuate are worth thinking about because of what is thereby revealed about us as humans and the cognitive biases that beset us. I offer three “take-away” thoughts that are broadly applicable as well as specifically applicable to the investment world.
1. We don’t deal well with probabilities. When a weather forecast says that there is a 70 percent chance of sun, we tend to think that the forecaster screwed up if it rains. But that’s not how we should evaluate probabilities. Instead, we should consider how often it rains when the forecast calls for a 70 percent chance of sun. When the forecast is spot-on perfect, it will rain 30 percent of the time when it calls for a 70 percent chance of sun. The odds favor sun, but because complex systems like the weather (and financial markets) encompass so many variables, nothing approaching certainty is possible. We don’t handle that kind of thinking very well (a very current and interesting example in a political context is examined here).
To illustrate the level of complexity I’m talking about, consider that we can construct a linear, one-dimensional chain with 10 different links in 3,628,800 different ways. For 100 different links, the possibilities total roughly 10^158. If those are the possibilities for making a simple chain, imagine the possibilities when we’re talking about complex systems where wild randomness rules.
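The chain-link arithmetic is just factorials: n distinct links can be ordered in n! different ways, and those numbers explode almost immediately.

```python
import math

# n distinct links can be arranged in n! ways.
print(math.factorial(10))              # 10! = 3,628,800 -- the figure above
print(len(str(math.factorial(100))))   # 100! runs to 158 digits
```

Ten links are already beyond casual counting; a hundred links produce a number with more digits than there are atoms in the observable universe has zeros, which is the sense in which complex systems defy exhaustive analysis.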
Perhaps the key argument of Nobel laureate Daniel Kahneman’s brilliant book, Thinking, Fast and Slow, is that without careful and intentional deliberation (and often even then), we suffer from probabilistic irrationality. Remember back in 2009 when New England Patriots coach (and my former New Jersey neighbor) Bill Belichick famously decided to go for a first down on fourth-and-two in Patriots territory rather than punt while up six points late against Peyton Manning and the Indianapolis Colts? When Wes Welker was stopped just short of the first down and the Colts went on to score the winning touchdown, the criticism was overwhelming even though Belichick’s decision gave the Pats a better chance of winning. Those withering attacks simply demonstrate our difficulties with probabilities. Doing what offers the best chance of success in no way guarantees success. As analyst Bill Barnwell, who was agnostic on whether Belichick was right or wrong, wrote: “you can’t judge Belichick’s decision by the fact that it didn’t work” (bold and italics in the original). We can (and should) hope for the best while preparing for the worst.
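The structure of Belichick’s gamble is simple expected-value arithmetic. The probabilities below are illustrative assumptions of mine, not the actual 2009 figures, but they show how going for it can be the percentage play even when it fails:

```python
# Hypothetical sketch of the fourth-and-two arithmetic. All probabilities
# here are assumed for illustration, not taken from actual 2009 analytics.

def win_prob_go_for_it(p_convert, p_win_if_convert, p_win_if_fail):
    """Expected win probability of attempting the conversion."""
    return p_convert * p_win_if_convert + (1 - p_convert) * p_win_if_fail

go = win_prob_go_for_it(p_convert=0.60,        # chance of gaining two yards
                        p_win_if_convert=0.95, # convert -> run out the clock
                        p_win_if_fail=0.55)    # fail -> Manning, short field
punt = 0.70  # assumed chance of stopping Manning after a punt

print(f"go for it: {go:.2f}, punt: {punt:.2f}")
```

Under these assumed numbers the attempt wins 79 percent of the time versus 70 percent for the punt, yet in the roughly one game in five where it fails, the decision looks terrible in hindsight. That asymmetry between sound process and bad outcome is the whole point.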
The world is wildly random. With so many variables, even the best process (when we are able to overcome our probabilistic irrationality) can be undermined at many points, a significant number of which are utterly out of anyone’s control. As Nate Silver reports in his fine new book, The Signal and the Noise, the National Weather Service is extremely good at weather forecasting in a probabilistic sense. When the NWS says there is a 70 percent chance of sun, it’s sunny just about 70 percent of the time. Because we don’t think probabilistically (and crave certainty too), we tend to assume that the forecasts on the days it rains – 30 percent of the time – are wrong. Accordingly, when a probabilistic forecast of a dangerous hurricane is generally inconsistent with our experience (“I didn’t have a problem last time”) and isn’t what we want to hear (think confirmation bias), we can readily focus on the times we remember weather forecasts being “wrong” and discount the threat. As mathematician John Allen Paulos tweeted regarding the trouble that so many seem to have with election probabilities:
Many people’s notion of probability is so impoverished that it admits of only two values: 50-50 and 99%, tossup or essentially certain.
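A quick simulation makes the calibration point concrete: a perfectly calibrated 70 percent sun forecast still gets “rained on,” in the naive sense, on roughly 30 percent of the days it is issued.

```python
import random

# Simulate a perfectly calibrated forecaster: every day carries a true
# 70% chance of sun. Even with perfect calibration, ~30% of those days
# are rainy -- yet we instinctively score each rainy day as a "miss."
random.seed(42)

days = 100_000
sunny = sum(random.random() < 0.70 for _ in range(days))
print(f"sunny on {sunny / days:.1%} of days forecast at 70% sun")
```

The right scorecard for a probabilistic forecaster is the long-run frequency, not any single day.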
In a fascinating research study, economists Emre Soyer and Robin Hogarth showed the results of a regression analysis to a test population of economics professors. When they presented the results in the way most commonly done in economics journals (as a single number accompanied by some error measures), the economists — whose careers are largely predicated upon doing just this sort of analysis! — did an embarrassingly poor job of answering a set of questions about the probabilities of various outcomes. When they presented the results as a scatter graph, the economists got most of the questions right. Yet when they presented the results both ways, the economists got most of the questions wrong again. As Justin Fox emphasizes, there seems to be something about a single-number probability assessment that lures our primitive brains in and leads them astray.
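The Soyer-Hogarth finding can be illustrated with a toy regression (the data below are simulated by me for illustration): the fitted coefficient is a tidy single number, but individual outcomes scatter widely around the line it describes, and only the scatter view conveys that.

```python
import random
import statistics

# Toy regression: a true slope of 2 buried in substantial noise. The single
# fitted coefficient looks precise; the residual spread shows how uncertain
# any individual outcome actually is.
random.seed(0)

xs = [random.uniform(0, 10) for _ in range(1000)]
ys = [2.0 * x + random.gauss(0, 5) for x in xs]  # true slope 2, noisy outcomes

mean_x = statistics.fmean(xs)
mean_y = statistics.fmean(ys)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

# Spread of outcomes around the fitted line.
resid_sd = statistics.stdev(y - mean_y - slope * (x - mean_x)
                            for x, y in zip(xs, ys))
print(f"fitted slope ~ {slope:.2f}, outcomes scatter +/- {2 * resid_sd:.1f} (2 sd)")
```

Reporting only the slope is the journal-table presentation; reporting the slope together with the two-standard-deviation band is the scatter-graph presentation that the economists in the study handled so much better.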
Due to complexity and the wild randomness it entails, the investment world — like weather forecasting — offers nothing like certainty. As every blackjack player recognizes, making the “right” play (probabilistically) does not ensure success. The very best we can hope for is favorable odds and that over a long enough period those odds will play out (and even then only after careful research to establish the odds). That we don’t deal well with probabilities makes a difficult situation far, far worse.
2. We’re prone to recency bias too. Recency bias means that we tend to extrapolate recent events into the future indefinitely. Since the recent experience of residents of the eastern seaboard (Hurricane Irene) wasn’t nearly as bad as expected (despite doing significant damage), that experience was extrapolated to the present by many. When confirmation bias (we tend to see what we want to see) and optimism bias are added to the mix, it’s no wonder so many didn’t evaluate storm risk (and don’t evaluate investment risk) very well.
3. We don’t deal well with low probability, high impact events. In the aggregate, hurricanes are low-frequency but high impact events. As I have explained before, when people calculate the risk of hurricane damage and make decisions about hurricane insurance, they consistently misread their prior experience. This conclusion comes from a paper by Wharton Professor Robert Meyer that describes and reports on a research simulation in which participants were instructed that they were owners of properties in a hurricane-prone coastal area and were given monetary incentives to make smart choices about (a) when and whether to buy insurance against hurricane losses and (b) how much insurance to buy.
Over the course of the study (three simulated hurricane “seasons”), participants would periodically watch a map that showed whether a hurricane was building as well as its strength and course. Until virtually the last second before the storm was shown to reach landfall, the participants could purchase partial insurance ($100 per 10 percent of protection, up to 50 percent) or full coverage ($2,500) on the $50,000 home they were said to own. Participants were advised how much damage each storm was likely to cause and, afterward, the financial consequences of their choices. They had an unlimited budget to buy insurance. Those who made the soundest financial decisions were eligible for a prize.
The focus of the research was to determine whether there are “inherent limits to our ability to learn from experience about the value of protection against low-probability, high-consequence events.” In other words — whether experience can help us deal with tail risk. Sadly, we don’t deal with this type of risk management very well. Moreover, as Nassim Taleb has shown, such risks — while still not anything like frequent — happen much more often than we tend to think (which explains why the 2008-09 financial crisis was deemed so highly unlikely by the vast majority of experts and their models).
The bottom line here is that participants seriously under-protected their homes. The first year, they sustained losses almost three times higher than if they had bought protection rationally. The key problem was a consistent failure to buy protection or enough protection even when serious and imminent risk was obvious (sounds like people refusing to evacuate, doesn’t it?). Moreover, most people reduced the amount of protection they bought whenever they endured no damage in the previous round, even if that lack of damage was specifically the result of having bought insurance.
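A back-of-the-envelope sketch shows what rational protection-buying arithmetic looks like under the simulation’s stated prices ($100 per 10 percent of protection up to 50 percent, $2,500 for full coverage on a $50,000 home). The storm probability and damage figure below are illustrative assumptions of mine, not numbers from Meyer’s paper:

```python
# Expected-cost comparison using the study's stated insurance prices.
# p_storm and damage are illustrative assumptions, not figures from the paper.

def expected_cost(coverage, p_storm=0.20, damage=10_000, home=50_000):
    """Premium plus expected uninsured loss for a given coverage fraction."""
    premium = 2_500 if coverage >= 1.0 else 1_000 * coverage  # $100 per 10%
    uninsured_loss = damage * (1 - coverage)
    return premium + p_storm * uninsured_loss

for c in (0.0, 0.3, 0.5, 1.0):
    print(f"coverage {c:.0%}: expected cost ${expected_cost(c):,.0f}")
```

Under these assumed odds, buying the maximum partial coverage minimizes expected cost, and buying nothing is the most expensive choice of all. The study’s participants, of course, mostly behaved as if the arithmetic ran the other way.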
Experience helped a little. Participants got better at the game as season one progressed, but they slipped back into old habits when season two began. By season three, these simulated homeowners were still suffering about twice as much damage as they should have. As Meyer’s paper reports, these research results are consistent with patterns seen in actual practice. For example, the year after Hurricane Katrina there was a 53% increase in new flood-insurance policies issued nationally. But within two years, cancellations had brought the coverage level down to pre-Katrina levels.
We simply don’t do a very good job dealing with low-probability, high-impact events, even when we have experience with them. Since those in the northeast have so little experience with hurricanes, their discounting of hurricane risk is (again) even more understandable. Given what happened to the vast majority of investment portfolios in 2008-09, the alleged market “professionals” often don’t manage tail risk very well either. That said, when a low-frequency event is treated as a certainty or near-certainty as a matter of policy, that overreaction can be disastrous and the costs too high to bear, as a Navy SEAL Commander here in San Diego once took great pains to explain to me in the context of fighting terrorism.
Taleb goes so far as to assert that we should “ban the use of probability.” I disagree, but we ought to use probabilities with care and be particularly careful about how we convey probability assessments. For example, a potential range of outcomes is better than a single number (as with the scatter graphs noted above). Similarly, an outlook that weighs probabilities together with costs and potential outcomes will also help (this discussion makes a start in that direction). Despite the risks of being perceived as “crying wolf,” we intuitively understand that as the potential negative outcomes grow, lower-likelihood events should generally be treated more seriously, and that the progression is typically non-linear.
In virtually every endeavor, our cognitive biases are a consistent problem and provide a constant challenge. In terms of investing, they can and often do rock us like a hurricane — or at least a superstorm. As Cullen Roche points out, consistent with the research noted above, we can and should learn from our investment errors, cognitive or otherwise. Sadly, we do so far less often than we ought, as last week’s events amply demonstrate.