Last night at the Old Globe here in San Diego I got to see one of my favorite plays, Rosencrantz and Guildenstern Are Dead, presented as part of the Globe’s 2013 Shakespeare Festival. Seeing it brought the following post to mind, in that it uses the play as a springboard for discussing probability and investing. I hope you will enjoy it – or enjoy it again.

___________

Tom Stoppard’s Rosencrantz and Guildenstern Are Dead presents Shakespeare’s Hamlet from the bewildered point of view of two of the Bard’s bit players, the comically indistinguishable nobodies who become headliners in Stoppard’s play. The play opens before our heroes have even joined the action in Shakespeare’s epic. They have been “sent for” and are marking time by flipping coins and getting heads each time (the opening clip from the movie version is shown above). Guildenstern keeps tossing coins and Rosencrantz keeps pocketing them. Significantly, Guildenstern is less concerned with his losses than with puzzling out what the defiance of the odds says about chance and fate. “A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability.”

The coin tossing streak depicted provides us with a chance to consider these probabilities. Guildenstern offers, among other explanations, the one mathematicians and investors should favor – “a spectacular vindication of the principle that each individual coin spin individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.” In other words, past performance is not indicative of future results.

Even so, how unlikely is a streak of this length?

The probability that a fair coin, when flipped, will turn up heads is 50 percent. Because the probability of two independent events both happening is the product of their individual probabilities, the probability of the coin turning up heads twice in a row is 25 percent (½ x ½), three times in a row is 12.5 percent (½ x ½ x ½), and so on. Accordingly, if we flip a coin 10 times (one “set” of ten), we would expect a set to end up with 10 heads in a row only once in every 1024 sets ((½)^10 = 1/1024).
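The arithmetic is easy to verify. Here is a minimal Python sketch (the function name `streak_probability` is mine, purely for illustration), using exact fractions so no rounding creeps in:

```python
from fractions import Fraction

def streak_probability(n):
    """Probability that n independent fair-coin flips all come up heads."""
    return Fraction(1, 2) ** n

print(streak_probability(1))   # 1/2
print(streak_probability(2))   # 1/4
print(streak_probability(10))  # 1/1024
```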

Rosencrantz and Guildenstern got heads more than 100 consecutive times. The chance of that happening is (½)^100 ≈ 7.9 x 10^-31. In words, we could expect it to happen once in roughly 1.27 million million million million million (a 127 followed by 28 zeros) sets. By comparison, the universe is about 13.8 billion years old, in which time only about 4 x 10^17 seconds have elapsed. Looked at another way, if every person who ever lived (around 110 billion) had flipped a 100-coin set simultaneously every second since the beginning of the universe, we still should not expect all 100 coins to have come up heads even once (the expected number of such successes works out to only about 0.04).
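Those magnitudes can be sanity-checked in a few lines. The population and age-of-universe figures below are the same rough approximations used above, not precise values:

```python
# Number of equally likely outcomes for one 100-flip set.
sets_needed = 2 ** 100

people_ever = 110e9                               # ~110 billion people who have ever lived
seconds_elapsed = 13.8e9 * 365.25 * 24 * 3600     # ~4.35e17 seconds since the Big Bang
total_sets = people_ever * seconds_elapsed        # one set per person per second

expected_all_heads = total_sets / sets_needed
print(f"{sets_needed:.3g}")          # 1.27e+30
print(f"{expected_all_heads:.2f}")   # 0.04
```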

If anything like that had happened to you (especially in a bet), you’d agree with Nassim Taleb that the probabilities favor a loaded coin. But then again, while 100 straight heads is less probable than 99, which is less probable than 98, and so on, any exact sequence of tosses is just as likely (actually, as unlikely) as 100 heads in a row: (½)^100. We notice the unlikelihood of 100 in a row because of the pattern, and we are pattern-seeking creatures. More “normal” combinations look random and thus expected. We don’t see them as noteworthy. Looked at another way, if one “winner” will be selected from a stadium of 100,000 people, each person has a 1 in 100,000 chance of winning. But we aren’t surprised when someone does win, even though the individual winner is shocked.
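A quick simulation makes that point concrete. With five flips, the “special-looking” sequence HHHHH turns up no more and no less often than an “ordinary-looking” one – both at (½)^5 = 1/32 ≈ 3.1 percent (the seed and the comparison sequence here are my arbitrary choices, for illustration):

```python
import random
from collections import Counter

random.seed(42)

n, trials = 5, 200_000
# Tally every 5-flip sequence observed across many trials.
counts = Counter(
    "".join(random.choice("HT") for _ in range(n)) for _ in range(trials)
)

# Each specific sequence of 5 fair flips occurs with probability 1/32.
for seq in ("HHHHH", "HTHHT"):
    print(seq, counts[seq] / trials)   # both hover around 0.031
```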

The point here is that the highly improbable happens all the time. In fact, much of what happens is highly improbable. This math explains why we shouldn’t be surprised when the market remains “irrational” far longer than seems possible. But we are.

Much of that difficulty arises because we neglect the limits of induction. Science never fully proves anything. It analyzes the available data and, when the force of the data is strong enough, it makes tentative conclusions. But these conclusions are always subject to modification or even outright rejection based upon further evidence gathering. Instead, we crave and claim certainty, even when we have no basis for it.

In his brilliant book, On Being Certain, neurologist Robert Burton systematically and convincingly shows that certainty is a mental state, a feeling like anger or pride that can help guide us, but that doesn’t dependably reflect anything like objective truth. One disconcerting finding he describes is that, from a neurocognitive point of view, our feelings of certainty about things we’re right about are largely indistinguishable from our feelings of certainty about things we’re wrong about (think “narrative fallacy” and “confirmation bias”).

As Columbia’s Rama Cont points out, “[w]hen I first became interested in economics, I was surprised by the deductive, rather than inductive, approach of many economists.” In the hard sciences, researchers tend to observe empirical data and then build a theory to explain their observations, while “many economic studies typically start with a theory and eventually attempt to fit the data to their model.” As noted by Emanuel Derman:

In physics it’s fairly easy to tell the crackpots from the experts by the content of their writings, without having to know their academic pedigrees. In finance it’s not easy at all. Sometimes it looks as though anything goes.

I suspect that these leaps of ideological fancy are a natural result of our constant search for meaning in an environment where noise is everywhere and signal vanishingly difficult to detect. Randomness is difficult for us to deal with. We are meaning-makers at every level and in nearly every situation. Yet, as I have noted before, information is cheap and meaning is expensive. Therefore, we tend to short-circuit good process to get to the end result – typically and not so coincidentally the result we wanted all along.

As noted above, science progresses not via verification (which can only be inferred) but by falsification (which, if established and itself verified, provides relative certainty only as to what is not true). Thank you, Karl Popper. In our business, as in science generally, we need to build our investment processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the facts and data allow. Yet the markets demand action. There is nothing tentative about them. That’s the conundrum we face.

Even after 100 heads in a row, the odds of the next toss being heads remains one-in-two (the “gambler’s fallacy” is committed when one assumes that a departure from what occurs on average or in the long-term will be corrected in the short-term). We look for patterns (“shiny objects”) to convince ourselves that we have found a “secret sauce” that justifies our making big bets on less likely outcomes. In this regard, we are dumber than rats – literally.
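The gambler’s fallacy is easy to test in simulation: condition on a run of heads and look at what the very next flip does. This sketch (the function name and streak length are my choices, for illustration) records only the flips that immediately follow five consecutive heads:

```python
import random

random.seed(0)

def heads_after_streak(streak_len, samples):
    """Among flips that immediately follow a run of `streak_len`
    consecutive heads, return the fraction that come up heads."""
    run = hits = total = 0
    while total < samples:
        flip = random.random() < 0.5       # True means heads
        if run >= streak_len:              # this flip follows a long run
            total += 1
            hits += flip
        run = run + 1 if flip else 0       # extend or reset the streak
    return hits / total

# After five heads in a row, the next flip is still a 50/50 proposition.
print(heads_after_streak(5, 100_000))
```

The printed fraction lands within a whisker of 0.50: the coin has no memory.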

In numerous studies (most prominently those by Edwards and Estes, as reported by Philip Tetlock in Expert Political Judgment), the stated task was predicting which side of a “T-maze” held food for the subject rat. Unbeknownst to both the observers and the rat, the maze was rigged such that the food was placed randomly (no pattern), but 60 percent of the time on one side and 40 percent of the time on the other.

The rat quickly “gets it” and waits at the “60 percent side” every time and is thus correct 60 percent of the time. Human observers kept looking for patterns and chose sides in rough proportion to recent results. As a consequence, the humans were right only 52 percent of the time – they (we!) were much dumber than rats. Overall, we insist on rejecting probabilistic strategies that accept the inevitability of randomness and error.
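The T-maze result is straightforward to reproduce in simulation. The 60/40 split comes from the studies; everything else below (names, trial counts) is my illustrative simplification. “Maximizing” (the rat) always picks the likelier side; “probability matching” (the humans) guesses each side in proportion to its frequency:

```python
import random

random.seed(1)

def run(strategy, trials=100_000, p=0.6):
    """Food appears on the left with probability p; `strategy` picks a side.
    Returns the fraction of correct guesses."""
    correct = 0
    for _ in range(trials):
        food_left = random.random() < p
        guess_left = strategy(p)
        correct += (guess_left == food_left)
    return correct / trials

maximize = lambda p: True                   # the rat: always the 60 percent side
match    = lambda p: random.random() < p    # humans: guess left 60 percent of the time

print(run(maximize))   # hovers around 0.60
print(run(match))      # hovers around 0.52 (0.6*0.6 + 0.4*0.4)
```

Matching the pattern you think you see costs eight percentage points of accuracy against a strategy that simply accepts the randomness.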

As I described yesterday, the great gambler Billy Walters uses a probabilistic sports betting model that is correct roughly 57 percent of the time. He expects and plans for being wrong 43 percent of the time. Since he can’t predict the timing of his successes and failures, he has to be prepared for long losing streaks (although he obviously hopes that none are as long as Guildenstern’s). Common gambling practice had been (and often still is) to make fewer bets – to bet only on those games one is most sure of. But that approach is not thinking probabilistically. Walters makes as many bets as he can within the confines of his model (when he thinks the point spread is off by at least one-and-one-half points).
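The value of making many bets with a modest edge can be sketched with a simulation. This is a deliberate simplification – independent even-money wagers won 57 percent of the time – not Walters’s actual model. The edge per bet is identical in every case; only the number of bets changes:

```python
import random
from statistics import mean, pstdev

random.seed(2)

def season_roi(n_bets, p_win=0.57, seasons=2_000):
    """Simulate many 'seasons' of n_bets even-money wagers, each won with
    probability p_win. Returns mean and stdev of per-bet profit."""
    rois = []
    for _ in range(seasons):
        profit = sum(1.0 if random.random() < p_win else -1.0
                     for _ in range(n_bets))
        rois.append(profit / n_bets)
    return mean(rois), pstdev(rois)

for n in (10, 100, 1000):
    m, s = season_roi(n)
    print(f"{n:5d} bets: mean ROI {m:+.3f}, stdev {s:.3f}")
```

The mean return per bet stays near +0.14 throughout, but the spread of outcomes shrinks roughly with the square root of the number of bets: with only 10 bets, losing seasons are common despite the edge; with 1,000, they all but disappear. That is why betting more, not less, is the probabilistic play.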

For investors, the lessons to be gained here relate to diversification, a carefully delineated and bounded process, clear execution rules, and stick-to-itiveness over the long haul. This doesn’t mean that quants should control everything. Old school analysis and judgment still matter, perhaps more than ever since the pile of available data has gotten so large. But it does mean that our conclusions need to be consistent with and supported by the data, no matter how bizarre the numbers or how long the streak.