Beguiled By Narrative

Thomas Hoepker (Magnum Photos)

The photograph above, taken at the Brooklyn waterfront on the afternoon of September 11, 2001 by German photographer Thomas Hoepker, is now one of the iconic images of that horrible day. In fact, the Observer New Review (London) republished it in 2011 as the 9/11 photograph. In Hoepker’s words, he saw “an almost idyllic scene near a restaurant — flowers, cypress trees, a group of young people sitting in the bright sunshine of this splendid late summer day while the dark, thick plume of smoke was rising in the background.” By his reckoning, even though he had paused but for a moment and didn’t speak to anyone in the picture, Hoepker was concerned that the people in the photo “were not stirred” by the events at the World Trade Center — they “didn’t seem to care.” Hoepker published many images from that day, but he withheld this picture for over four years because, in his view, it “did not reflect at all what had transpired on that day.”

In 2006, the image was finally published in David Friend’s book, Watching the World Change. Frank Rich wrote a 9.11 fifth anniversary column in The New York Times, framed by the photo, which he called “shocking.” Continue reading


Missed It By *This* Much

It’s part of the gig. A Wall Street strategist, economist or even a run-of-the-mill investment manager gets a crack on financial television and is asked about his or her forecast for the market. Instead of wisely objecting to the premise of the question, the poor schlemiel answers and, once matters play out, is shown to have been less than prescient (even though the forecast is likely forgotten). Indeed, one forecast that is almost certain to be correct is that market forecasts are almost certain to be wrong.

Get Smart

The delightful old Mel Brooks/Buck Henry spy satire Get Smart, which was on television from 1965-70, included a number of funny catch-phrases uttered by Don Adams as agent Maxwell Smart (played by Steve Carell in the movie). One of them was the following, offered when Max had, yet again and like our fearless forecasters, screwed up.

So, as year-end approaches, let’s take a look at how much “this much” is — how badly various Wall Street market forecasts missed it with their prognostications for the S&P 500 in 2013. Continue reading

9.11 and the Narrative Fallacy

The photograph above, taken by German photographer Thomas Hoepker, is one of the iconic images of 9.11.  The picture was taken at the Brooklyn waterfront on the afternoon of that infamous day twelve years ago.   In Hoepker’s words, he saw “an almost idyllic scene near a restaurant — flowers, cypress trees, a group of young people sitting in the bright sunshine of this splendid late summer day while the dark, thick plume of smoke was rising in the background.”  By his reckoning, even though he had paused but for a moment and didn’t speak to anyone in the picture, Hoepker was concerned that the people in the photo “were not stirred” by the events at the World Trade Center — they “didn’t seem to care.”  Even though he published many images from that day, Hoepker withheld this picture for over four years because, in his view, it “did not reflect at all what had transpired on that day.” Continue reading

“Assume a Spherical Cow”

Anyone who has managed money for more than about a nanosecond recognizes that the idea that the markets are efficient is a myth, especially in times of crisis, real or perceived. Despite claims of scientific objectivity, economics is as prone to human frailty as anything else. As Seth Godin wrote this week: “Your first mistake might be assuming that people are rational.” For example, on my birthday in 2008, the S&P 500 lost over 9 percent, over a trillion dollars in value, on account of (per CNN) nothing more definitive than “recession talk.” That’s hardly evidence of rationality.

The consistently engaging 3 Quarks Daily has a new piece up this week on economics as religion rather than science. It’s hardly a novel concept, but the argument is an interesting one.  However, author Ben Schreckinger missed or ignored some of the best available evidence, which I’ll get to in a bit of a roundabout way.

The above cartoon (from Abstruse Goose) riffs on a classic physics joke that goes something like this:

Milk production at a dairy farm was low, so a farmer wrote to the local university to ask for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, notebooks crammed with data, where the task of writing the report was left to the team leader. Shortly thereafter the physicist returned to the farm, and advised the farmer, “I have the solution, but it only works in the case of spherical cows in a vacuum.”

Thus “spherical cow” has become a metaphor for highly (overly!) simplified scientific models of complex real life phenomena. Economists may be even worse offenders than theoretical physicists. As Hale Stewart wrote yesterday, “complex models that claim to model the entire US economy just aren’t worth the time of day no matter how good the algorithms backing it up.” That economists so frequently suffer from “physics envy” makes for a delightful bit of irony. Yet an even worse offense by economists is their all-too-frequent willingness to elevate a favored ideology ahead of the actual facts. People are routinely driven by their ideologies and their behavioral biases rather than facts and data, of course, but economists claim to be acting as scientists, with a delineated method designed to root out such things. If only. Continue reading

We Always Know Less Than We Think

Barry Ritholtz has another fine “Philosophy Phriday” piece up today on the value of not knowing. His emphasis is on recognizing what you don’t (and perhaps can’t) know so as to minimize error.  He’s right of course.  This idea reminds me of the following classic commercial.

Continue reading

Gaming the Numb3rs

Last night at the Old Globe here in San Diego I got to see one of my favorite plays, Rosencrantz and Guildenstern are Dead, presented as part of the Globe’s 2013 Shakespeare Festival. Doing so brought the following post to mind in that it uses the play as a springboard for discussing probability and investing. I hope you will enjoy it — or enjoy it again.


Tom Stoppard’s Rosencrantz and Guildenstern are Dead  presents Shakespeare’s Hamlet from the bewildered point of view of two of the Bard’s bit players, the comically indistinguishable nobodies who become headliners in Stoppard’s play.  The play opens before our heroes have even joined the action in Shakespeare’s epic. They have been “sent for” and are marking time by flipping coins and getting heads each time (the opening clip from the movie version is shown above).  Guildenstern keeps tossing coins and Rosencrantz keeps pocketing them. Significantly, Guildenstern is less concerned with his losses than in puzzling out what the defiance of the odds says about chance and fate. “A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability.”

The coin tossing streak depicted provides us with a chance to consider these probabilities.  Guildenstern offers, among other explanations, the one mathematicians and investors should favor — “a spectacular vindication of the principle that each individual coin spin individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.”  In other words, past performance is not indicative of future results.

Even so, how unlikely is a streak of this length? Continue reading
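For a flavor of just how unlikely, here is a minimal sketch, assuming fair and independent coins (the streak in Stoppard’s play is commonly counted at 92 heads):

```python
from fractions import Fraction

def streak_probability(n: int) -> Fraction:
    """Probability that n fair, independent coin flips all come up heads."""
    # Each individual flip is 50/50 regardless of history; it is the
    # joint probability of the whole run that shrinks geometrically.
    return Fraction(1, 2) ** n

for n in (10, 50, 92):
    print(n, float(streak_probability(n)))
```

At 92 heads the probability is on the order of 2 × 10^-28, which is why Guildenstern suspects something other than chance is at work.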

That’s So Random

When my kids were teenagers, if something was random, that was a good thing.  A really good thing, in fact.  Something funny was random.  A good party was random. Being more than a bit of a fussbudget, I objected to such usage.  I didn’t think it was correct.

But I was wrong.  Continue reading

The Narrative Opportunity

Regular readers of this site know that I reference and write about what Nassim Taleb calls the narrative fallacy often.  It is our tendency to look backward and create a pattern to fit events and to construct a story that explains what happened along with what caused it to happen.  We all like to think that our decision-making is a rationally based process — that we examine the evidence and only after careful evaluation come to a reasoned conclusion as to what the evidence suggests or shows.

But we don’t. Continue reading

We Suck at Probabilities

I have often noted (for example, here) that we generally suck at math, to our great detriment.  I have also noted that we are especially poor at dealing with probabilities.  If a weather forecaster says that there is an 80 percent chance of rain and it remains sunny, instead of waiting to see if it rains 80 out of 100 times when his or her forecast called for an 80 percent chance of rain, we race to conclude — perhaps based upon that single instance — that the forecaster isn’t any good. Data trumps our lyin’ eyes, but we don’t routinely see it. Continue reading

Rock You Like a Superstorm


Hurricane/Superstorm Sandy rocked the eastern seaboard last week to devastating effect.  In a significant instance of good planning, markets and schools were closed, states of emergency declared and mandatory evacuations begun well before the storm made landfall.  Yet nearly until the storm reached land in New Jersey last Monday, I heard lots of grousing about alleged hysteria and overreaction with respect to the precautions and preparations being undertaken to mitigate potential damage (see below for a prominent example).   

Some went so far as to defy evacuation orders, and some people paid for doing so with their lives.  Once the storm actually hit and caused serious damage – albeit no longer officially as a hurricane, but as a “superstorm” – the complaining stopped.  Fortunately, the governmental disaster preparedness organization seems to have performed well overall.  You can read about these events in many venues, including here, here and here.

The pre-crisis grousing and the refusal of so many to evacuate are worth thinking about because of what is thereby revealed about us as humans and the cognitive biases that beset us.  I offer three “take-away” thoughts that are broadly applicable as well as specifically applicable to the investment world.

1. We don’t deal well with probabilities.  When a weather forecast says that there is a 70 percent chance of sun, we tend to think that the forecaster screwed up if it rains.  But that’s not how we should evaluate probabilities.  Instead, we should consider how often it rains when the forecast calls for a 70 percent chance of sun.  When the forecast is spot-on perfect, it will rain 30 percent of the time when it calls for a 70 percent chance of sun.  The odds favor sun, but because complex systems like the weather (and financial markets) encompass so many variables, nothing approaching certainty is possible.  We don’t handle that kind of thinking very well (a very current and interesting example in a political context is examined here).
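The right scorecard for a probabilistic forecaster is calibration: among all the occasions when the forecast stated a given probability, how often did the event actually occur? A minimal sketch, using hypothetical data:

```python
def calibration(forecasts, outcomes, stated_p):
    """Empirical frequency of the event among occasions when the
    forecaster stated probability `stated_p` (outcome 1 = it happened)."""
    relevant = [o for f, o in zip(forecasts, outcomes) if f == stated_p]
    return sum(relevant) / len(relevant) if relevant else None

# Hypothetical record: ten days with a "70% chance of sun" (30% rain) call.
forecasts = [0.3] * 10
rained    = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # it rained on 3 of the 10 days
print(calibration(forecasts, rained, 0.3))   # 0.3 -- perfectly calibrated
```

On this scorecard, rain on 3 of 10 such days is not evidence of a bad forecaster; it is exactly what a perfect forecaster should produce.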

To illustrate the level of complexity I’m talking about, consider that we can construct a linear, one-dimensional chain with 10 different links in 3,628,800 (that is, 10!) different ways.  For 100 different links, the possibilities total roughly 10^158. If those are the possibilities for making a simple chain, imagine the possibilities when we’re talking about complex systems where wild randomness rules.
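That arithmetic can be checked directly: the orderings of n distinct links number n factorial. A quick sketch:

```python
import math

# 10 distinct links can be arranged in a chain in 10! ways.
print(math.factorial(10))             # 3628800

# 100 distinct links: 100! is a 158-digit number, i.e. on the order of 10**158.
print(len(str(math.factorial(100))))  # 158
```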

Perhaps the key argument of Nobel laureate Daniel Kahneman’s brilliant book, Thinking, Fast and Slow, is that without careful and intentional deliberation (and often even then), we suffer from probabilistic irrationality. Remember back in 2009 when New England Patriots coach (and my former New Jersey neighbor) Bill Belichick famously decided to go for a first down on fourth-and-two in Patriots territory rather than punt while up six points late against Peyton Manning and the Indianapolis Colts?  When Wes Welker was stopped just short of the first down and the Colts went on to score the winning touchdown, the criticism was overwhelming even though Belichick’s decision gave the Pats a better chance of winning. Those withering attacks simply demonstrate our difficulties with probabilities. Doing what offers the best chance of success in no way guarantees success. As analyst Bill Barnwell, who was agnostic on whether Belichick was right or wrong, wrote: “you can’t judge Belichick’s decision by the fact that it didn’t work” (bold and italics in the original). We can (and should) hope for the best while preparing for the worst.
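Barnwell’s point is about expected value: the decision should be judged by the win probabilities going in, not by the outcome. A sketch with purely hypothetical numbers (these are illustrative assumptions, not any analyst’s actual figures):

```python
def win_prob_go(p_convert, p_win_if_convert, p_win_if_fail):
    """Expected win probability of going for it on fourth down."""
    return p_convert * p_win_if_convert + (1 - p_convert) * p_win_if_fail

# Assumed, illustrative inputs: ~60% conversion on fourth-and-two;
# converting nearly ends the game; failing gives Manning a short field.
go   = win_prob_go(0.60, 0.99, 0.55)
punt = 0.72   # assumed chance of surviving a long-field Colts drive

print(go > punt)   # under these assumptions, going for it is the better bet
```

Even when the better bet is taken, it still fails a meaningful fraction of the time; that failure is not evidence the decision was wrong.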

The world is wildly random.  With so many variables, even the best process (when we are able to overcome our probabilistic irrationality) can be undermined at many points, a significant number of which are utterly out of anyone’s control.   As Nate Silver reports in his fine new book, The Signal and the Noise, the National Weather Service is extremely good at weather forecasting in a probabilistic sense. When the NWS says there is a 70 percent chance of sun, it’s sunny just about 70 percent of the time.  Because we don’t think probabilistically (and crave certainty too), we tend to assume that the forecasts on the days it rains – 30 percent of the time – are wrong.  Accordingly, when a probabilistic forecast of a dangerous hurricane is generally inconsistent with our experience (“I didn’t have a problem last time”) and isn’t what we want to hear (think confirmation bias), we can readily focus on the times we remember weather forecasts being “wrong” and discount the threat.  As mathematician John Allen Paulos tweeted regarding the trouble that so many seem to have with election probabilities:

Many people’s notion of probability is so impoverished that it admits of only two values: 50-50 and 99%, tossup or essentially certain.

In a fascinating research study, economists Emre Soyer and Robin Hogarth showed the results of a regression analysis to a test population of economics professors. When they presented the results in the way most commonly done in economics journals (as a single number accompanied by some error measures), the economists — whose careers are largely predicated upon doing just this sort of analysis! — did an embarrassingly poor job of answering a set of questions about the probabilities of various outcomes. When they presented the results as a scatter graph, the economists got most of the questions right. Yet when they presented the results both ways, the economists got most of the questions wrong again. As Justin Fox emphasizes, there seems to be something about a single-number probability assessment that lures our primitive brains in and leads them astray.

Due to complexity and the wild randomness it entails, the investment world — like weather forecasting — offers nothing like certainty.  As every blackjack player recognizes, making the “right” play (probabilistically) does not ensure success.  The very best we can hope for is favorable odds and that over a long enough period those odds will play out (and even then only after careful research to establish the odds).  That we don’t deal well with probabilities makes a difficult situation far, far worse.

2. We’re prone to recency bias too.  We are all prone to recency bias, meaning that we tend to extrapolate recent events into the future indefinitely.  Since the recent experience of residents of the eastern seaboard (Hurricane Irene) wasn’t nearly as bad as expected (despite doing significant damage), that experience was extrapolated to the present by many.  When confirmation bias (we tend to see what we want to see) and optimism bias are added to the mix, it’s no wonder so many didn’t evaluate storm risk (and don’t evaluate investment risk) very well.

3. We don’t deal well with low probability, high impact events.  In the aggregate, hurricanes are low-frequency but high impact events.  As I have explained before, when people calculate the risk of hurricane damage and make decisions about hurricane insurance, they consistently misread their prior experience. This conclusion comes from a paper by Wharton Professor Robert Meyer that describes and reports on a research simulation in which participants were instructed that they were owners of properties in a hurricane-prone coastal area and were given monetary incentives to make smart choices about (a) when and whether to buy insurance against hurricane losses and (b) how much insurance to buy.

Over the course of the study (three simulated hurricane “seasons”), participants would periodically watch a map that showed whether a hurricane was building as well as its strength and course. Until virtually the last second before the storm was shown to reach landfall, the participants could purchase partial insurance ($100 per 10 percent of protection, up to 50 percent) or full coverage ($2,500) on the $50,000 home they were said to own. Participants were advised how much damage each storm was likely to cause and, afterward, the financial consequences of their choices. They had an unlimited budget to buy insurance.  Those who made the soundest financial decisions were eligible for a prize.
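Under the study’s stated pricing ($100 per 10 percent of protection on a $50,000 home), the expected-cost arithmetic facing participants can be sketched as follows; the storm odds and damage figures here are hypothetical assumptions, not the paper’s actual parameters:

```python
def expected_cost(p_storm, expected_damage, coverage_frac):
    """Premium paid plus the uninsured share of expected storm damage."""
    premium = 100 * (coverage_frac * 10)   # $100 per 10% of protection
    return premium + p_storm * expected_damage * (1 - coverage_frac)

# With these assumed inputs, more coverage lowers the expected season cost:
for frac in (0.0, 0.3, 0.5):
    print(frac, expected_cost(p_storm=0.2, expected_damage=20000,
                              coverage_frac=frac))
```

When expected damage dwarfs the premium, rational play is to insure heavily; the study’s participants nonetheless kept cutting coverage after quiet seasons.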

The focus of the research was to determine whether there are “inherent limits to our ability to learn from experience about the value of protection against low-probability, high-consequence events.”  In other words — whether experience can help us deal with tail risk. Sadly, we don’t deal with this type of risk management very well. Moreover, as Nassim Taleb has shown, such risks — while still not anything like frequent — happen much more often than we tend to think (which explains why the 2008-09 financial crisis was deemed so highly unlikely by the vast majority of experts and their models). 

The bottom line here is that participants seriously under-protected their homes. The first year, they sustained losses almost three times higher than if they had bought protection rationally. The key problem was a consistent failure to buy protection or enough protection even when serious and imminent risk was obvious (sounds like people refusing to evacuate, doesn’t it?).  Moreover, most people reduced the amount of protection they bought whenever they endured no damage in the previous round, even if that lack of damage was specifically the result of having bought insurance.

Experience helped a little.  Participants got better at the game as season one progressed, but they slipped back into old habits when season two began. By season three, these simulated homeowners were still suffering about twice as much damage as they should have.  As Meyer’s paper reports, these research results are consistent with patterns seen in actual practice. For example, the year after Hurricane Katrina there was a 53% increase in new flood-insurance policies issued nationally.  But within two years, cancellations had brought the coverage level down to pre-Katrina levels.

We simply don’t do a very good job dealing with low-probability, high-impact events, even when we have experience with them.  Since those in the northeast have so little experience with hurricanes, their discounting of hurricane risk is (again) even more understandable.  Given what happened to the vast majority of investment portfolios in 2008-09, the alleged market “professionals” often don’t manage tail risk very well either. That said, when a low-frequency event is treated as a certainty or near-certainty as a matter of policy, that overreaction can be disastrous and the costs too high to bear, as a Navy SEAL Commander here in San Diego once took great pains to explain to me in the context of fighting terrorism.

Taleb goes so far as to assert that we should “ban the use of probability.”  I disagree, but we ought to use probabilities with care and be particularly careful about how we convey probability assessments.  For example, a potential range of outcomes is better than a single number (as with the scatter graphs noted above).  Similarly, an outlook that shows the weighing of probabilities together with costs and potential outcomes will also help (this discussion makes a start in that direction).  Despite the risks of being perceived as “crying wolf,” we intuitively understand that when and as the potential negative outcomes are greater, lower likelihood events should generally be treated more seriously and that the progression is typically non-linear.   

In virtually every endeavor, our cognitive biases are a consistent problem and provide a constant challenge.  In terms of investing, they can and often do rock us like a hurricane — or at least a superstorm.  As Cullen Roche points out, consistent with the research noted above, we can and should learn from our investment errors, cognitive or otherwise.  Sadly, we do so far less often than we ought, as last week’s events amply demonstrate.