Gaming the Numb3rs

Last night at the Old Globe here in San Diego I got to see one of my favorite plays, Rosencrantz and Guildenstern are Dead, presented as part of the Globe’s 2013 Shakespeare Festival. Seeing it brought the following post to mind, since it uses the play as a springboard for discussing probability and investing. I hope you will enjoy it — or enjoy it again.

___________

Tom Stoppard’s Rosencrantz and Guildenstern are Dead presents Shakespeare’s Hamlet from the bewildered point of view of two of the Bard’s bit players, the comically indistinguishable nobodies who become headliners in Stoppard’s play. The play opens before our heroes have even joined the action in Shakespeare’s epic. They have been “sent for” and are marking time by flipping coins and getting heads every time (the scene also opens the film version). Guildenstern keeps tossing coins and Rosencrantz keeps pocketing them. Significantly, Guildenstern is less concerned with his losses than with puzzling out what the defiance of the odds says about chance and fate. “A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability.”

The coin-tossing streak depicted provides us with a chance to consider these probabilities. Guildenstern offers, among other explanations, the one mathematicians and investors should favor: “a spectacular vindication of the principle that each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.” In other words, past performance is not indicative of future results.


Even so, how unlikely is a streak of this length?

The probability that a fair coin, when flipped, will turn up heads is 50 percent (the probability of any two independent events both happening is the product of their individual probabilities). Thus the odds of heads turning up twice in a row are 25 percent (½ x ½), the odds of it turning up three times in a row are 12.5 percent (½ x ½ x ½), and so on. Accordingly, if we flip a coin 10 times (one “set” of ten), we would expect a set to end with 10 heads in a row only about once every 1,024 sets ((½)^10 = 1/1024).
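For anyone who wants to check that arithmetic, here is a minimal sketch in Python (the helper function is mine, made up purely for illustration):

```python
# Minimal sketch: by the multiplication rule above, the probability of
# n consecutive heads from a fair coin is (1/2) ** n.
def prob_all_heads(n: int) -> float:
    """Probability that n independent fair-coin flips all come up heads."""
    return 0.5 ** n

for n in (1, 2, 3, 10):
    p = prob_all_heads(n)
    print(f"{n:2d} heads in a row: p = {p:.6g}  (1 in {1 / p:,.0f})")
```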

Rosencrantz and Guildenstern got heads more than 100 consecutive times. The chances of that happening are (½)^100, or roughly 7.9 x 10^-31. In words, we could expect it to happen about once in 1.27 million million million million million (a 1 followed by 30 zeros) sets. By comparison, the universe is about 13.9 billion years old, in which time only about 4 x 10^17 seconds have elapsed. Looked at another way, if every person who ever lived (around 110 billion) had flipped a 100-coin set simultaneously every second since the beginning of the universe (again, about 13.9 billion years ago), we still shouldn’t expect all 100 coins to have come up heads even once: the expected number of such streaks works out to roughly 0.04.
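And here is a rough back-of-the-envelope version of those universe-scale numbers in Python (the population and age figures are the estimates used above; everything else is just arithmetic):

```python
# Back-of-the-envelope check of the 100-flip arithmetic above. The inputs
# (about 110 billion people, about 13.9 billion years) are the rough
# estimates used in the text.
SECONDS_PER_YEAR = 3.156e7
people = 110e9
age_of_universe_s = 13.9e9 * SECONDS_PER_YEAR   # ~4.4e17 seconds

p_100_heads = 0.5 ** 100                        # ~7.9e-31
sets_flipped = people * age_of_universe_s       # one set per person per second

print(f"P(100 heads):       {p_100_heads:.3g}")
print(f"Sets flipped:       {sets_flipped:.3g}")
print(f"Expected successes: {sets_flipped * p_100_heads:.3g}")  # ~0.04
```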

If anything like that had happened to you (especially in a bet), you’d agree with Nassim Taleb that the probabilities favor a loaded coin. But then again, while 100 straight heads is less probable than 99, which is less probable than 98, and so on, any exact sequence of tosses is exactly as likely (actually, unlikely) as 100 heads in a row: (½)^100. We notice the unlikelihood of 100 in a row because of the pattern, and we are pattern-seeking creatures. More “normal” combinations look random and thus expected. We don’t see them as noteworthy. Looked at another way, if one “winner” will be selected from a stadium of 100,000 people, each person has a 1 in 100,000 chance of winning. But we aren’t surprised when someone does win, even though the individual winner is shocked.
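A quick simulation makes the point concrete. Ten heads in a row and an arbitrary “random-looking” ten-flip sequence are equally rare, about 1 in 1,024 each; the comparison sequence below is my own arbitrary pick, chosen only because it lacks an obvious pattern:

```python
# Count how often each ten-flip target sequence appears among random
# trials; both should show up about once every 1,024 trials.
import random

random.seed(42)
TRIALS = 1_000_000
targets = {"HHHHHHHHHH": 0, "HTTHTHHTTH": 0}

for _ in range(TRIALS):
    seq = "".join(random.choice("HT") for _ in range(10))
    if seq in targets:
        targets[seq] += 1

for seq, hits in targets.items():
    print(f"{seq}: seen {hits} times, about 1 in {TRIALS // max(hits, 1):,}")
```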

The point here is that the highly improbable happens all the time.  In fact, much of what happens is highly improbable.  This math explains why we shouldn’t be surprised when the market remains “irrational” far longer than seems possible.  But we are.

Much of that difficulty arises because we neglect the limits of induction.  Science never fully proves anything.  It analyzes the available data and, when the force of the data is strong enough, it makes tentative conclusions.  But these conclusions are always subject to modification or even outright rejection based upon further evidence gathering.  Instead, we crave and claim certainty, even when we have no basis for it. 

In his brilliant book, On Being Certain, neurologist Robert Burton systematically and convincingly shows that certainty is a mental state, a feeling like anger or pride that can help guide us, but that doesn’t dependably reflect anything like objective truth. One disconcerting finding he describes is that, from a neurocognitive point of view, our feelings of certainty about things we’re right about are largely indistinguishable from our feelings of certainty about things we’re wrong about (think “narrative fallacy” and “confirmation bias”).

As Columbia’s Rama Cont points out, “[w]hen I first became interested in economics, I was surprised by the deductive, rather than inductive, approach of many economists.” In the hard sciences, researchers tend to observe empirical data and then build a theory to explain their observations, while “many economic studies typically start with a theory and eventually attempt to fit the data to their model.”  As noted by Emanuel Derman:

In physics it’s fairly easy to tell the crackpots from the experts by the content of their writings, without having to know their academic pedigrees. In finance it’s not easy at all. Sometimes it looks as though anything goes.

I suspect that these leaps of ideological fancy are a natural result of our constant search for meaning in an environment where noise is everywhere and signal vanishingly difficult to detect.  Randomness is difficult for us to deal with.  We are meaning-makers at every level and in nearly every situation.  Yet, as I have noted before, information is cheap and meaning is expensive.  Therefore, we tend to short-circuit good process to get to the end result – typically and not so coincidentally the result we wanted all along.

As noted above, science progresses not via verification (which can only be inferred) but by falsification (which, if established and itself verified, provides relative certainty only as to what is not true).  Thank you, Karl Popper. In our business, as in science generally, we need to build our investment processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the facts and data allow. Yet the markets demand action.  There is nothing tentative about them. That’s the conundrum we face.

Even after 100 heads in a row, the odds of the next toss coming up heads remain one in two (the “gambler’s fallacy” is committed when one assumes that a departure from what occurs on average or in the long term will be corrected in the short term). We look for patterns (“shiny objects”) to convince ourselves that we have found a “secret sauce” that justifies making big bets on less likely outcomes. In this regard, we are dumber than rats – literally.
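A simple simulation shows why the fallacy fails; the run length K below is an arbitrary choice for illustration:

```python
# Minimal check of the gambler's fallacy: look at every flip that follows
# a run of K straight heads and see how often it comes up heads anyway.
import random

random.seed(0)
K = 5
flips = [random.random() < 0.5 for _ in range(1_000_000)]   # True = heads

followers = [
    flips[i] for i in range(K, len(flips))
    if all(flips[i - K:i])      # the K flips just before were all heads
]
print(f"Flips following {K} straight heads: {len(followers):,}")
print(f"Share heads: {sum(followers) / len(followers):.3f}")  # ~0.500
```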

In numerous studies (most prominently those by Edwards and Estes, as reported by Philip Tetlock in Expert Political Judgment), the stated task was predicting which side of a “T-maze” held food for the subject rat. Unbeknownst to both the observers and the rat, the maze was rigged such that the food was placed randomly (no pattern), but 60 percent of the time on one side and 40 percent of the time on the other.

The rat quickly “gets it,” waits at the “60 percent side” every time, and is thus correct 60 percent of the time. Human observers kept looking for patterns and chose sides in rough proportion to recent results. As a consequence, the humans were right only about 52 percent of the time (matching the 60/40 base rates yields 0.6 x 0.6 + 0.4 x 0.4 = 52 percent): they (we!) were much dumber than rats. Overall, we insist on rejecting probabilistic strategies that accept the inevitability of randomness and error.
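Here is a sketch of the two strategies in code, using the 60/40 split from the study described above (the simulation itself is my construction, not Edwards’s or Estes’s design):

```python
# Probability matching (the humans) versus maximizing (the rat),
# simulated with the 60/40 split from the T-maze study.
import random

random.seed(1)
TRIALS = 1_000_000
rat_correct = human_correct = 0

for _ in range(TRIALS):
    food_on_left = random.random() < 0.6   # food lands left 60% of the time
    rat_correct += food_on_left            # the rat always waits on the left
    human_correct += (random.random() < 0.6) == food_on_left  # human matches rates

print(f"Rat (always picks the 60% side): {rat_correct / TRIALS:.1%}")   # ~60%
print(f"Human (probability matching):    {human_correct / TRIALS:.1%}") # ~52%
```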

As I described yesterday, the great gambler Billy Walters uses a probabilistic sports betting model that is correct roughly 57 percent of the time. He expects and plans for being wrong 43 percent of the time. Since he can’t predict the timing of his successes and failures, he has to be prepared for long losing streaks (although he obviously hopes that none are as long as Guildenstern’s). Common gambling practice had been (and often still is) to make fewer bets – to bet only on those games one is most sure of. But that approach is not thinking probabilistically. Walters makes as many bets as he can within the confines of his model (when he thinks the point spread is off by at least one and one-half points).
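To get a feel for how long those losing streaks can run, here is a rough simulation; the 57 percent win rate is Walters’s figure as reported above, while the 10,000-bet horizon is my own assumption for illustration:

```python
# Rough simulation of the losing streaks even a 57-percent winner must
# sit through over a long betting career.
import random

random.seed(7)
P_WIN, N_BETS = 0.57, 10_000
longest = current = 0

for _ in range(N_BETS):
    if random.random() < P_WIN:
        current = 0                      # a win resets the losing streak
    else:
        current += 1
        longest = max(longest, current)

print(f"Longest losing streak in {N_BETS:,} bets: {longest}")
```

Run it a few times with different seeds and the longest drought typically lands around ten straight losses, which is exactly the kind of stretch that tempts a bettor without a probabilistic mindset to abandon a winning model.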

For investors, the lessons to be gained here relate to diversification, a carefully delineated and bounded process, clear execution rules, and stick-to-itiveness over the long haul.  This doesn’t mean that quants should control everything.  Old school analysis and judgment still matter, perhaps more than ever since the pile of available data has gotten so large.  But it does mean that our conclusions need to be consistent with and supported by the data, no matter how bizarre the numbers or how long the streak.

Even 100 in a row.

Of Data and Certainty

We crave certainty. 

As reported by Harvard’s Daniel Gilbert on the Happy Days blog at nytimes.com, Maastricht University researchers gave (volunteer) subjects a series of 20 electric shocks. Some subjects were told that they would receive an intense shock every time while others were told that they would receive 17 mild shocks and 3 intense ones, but that they wouldn’t know on which of the 20 the intense shocks would come. The study showed that subjects who thought there was a small chance of receiving an intense shock were more afraid — they sweated more profusely, their hearts beat faster — than those who knew for sure that they’d receive an intense shock. Interestingly, that’s because people feel worse when something bad might occur than when something bad will occur — they find uncertainty more painful than the things they’re uncertain about.

Why do people seem to prefer to know the worst rather than merely to suspect it? According to Gilbert, that’s probably because when most of us get bad news we cry for a bit and then get busy making the best of things. We change our behaviors and we change our attitudes. We raise our attentiveness and lower our expectations. We find our bootstraps and pull (pretty hard if necessary). But we can’t come to terms with circumstances whose terms we don’t yet know. An uncertain future leaves us stranded in an unhappy present with nothing to do but wait. 

We all respond positively to increased certainty in our lives (including in financial outcomes) — even after a major shock and when that certainty limits our prospective gain. In these highly uncertain times, increased certainty can be a highly valuable commodity. Unfortunately, our level of certainty – desired though it is – is not well correlated with the facts.

The day after the space shuttle Challenger exploded in 1986, Cornell psychology professor Ulric Neisser (who died last month at 83) had his students write precisely where they’d been when they heard about the disaster. Nearly three years later, he asked them to recount it again. A quarter of the accounts were strikingly different, half were somewhat different, and less than a tenth had all the details correct. Yet all were confident that their most recent accounts were completely accurate. Indeed, many couldn’t be dissuaded even after seeing their original notes.  One of them even asserted, “That’s my handwriting, but that’s not what happened.”

For neurologist Robert Burton, the Neisser study is emblematic of an essential quality of who we are. In his brilliant book, On Being Certain, Burton systematically and convincingly shows that certainty is a mental state, a feeling like anger or pride that can help guide us, but that doesn’t dependably reflect anything like objective truth. One disconcerting finding he describes is that, from a neurocognitive point of view, our feelings of certainty about things we’re right about are largely indistinguishable from our feelings of certainty about things we’re wrong about.

Such unwarranted certainty is consistent with our tendency (discussed earlier this week here and here) to build our ideologies first and then to construct narratives to support those ideologies, with facts and data only sought out to undergird our pre-conceived notions after the fact and subjectively “analyzed” only in that light.  It also suggests why we can be so uncomfortable with the necessarily inductive process of scientific inquiry.  We’d much prefer the certainty of deductive logic.  Sadly, much that claims to be “research” in the financial world is nothing of the sort – it is ideology (or sales literature) in disguise (and not very well disguised at that).  Even so, perceived certainty gives us the confidence we need to make decisions and to establish trust and credibility with others.  It’s an ironic feedback loop of sorts. 

Good science requires the careful and objective collection of data with any interpretations and conclusions drawn therefrom being tentative and provisional, and of course subject to any subsequent findings.  But that’s not what often happens, especially in the financial world.  As Columbia’s Rama Cont points out, “[w]hen I first became interested in economics, I was surprised by the deductive, rather than inductive, approach of many economists.” In the hard sciences, researchers tend to observe empirical data and then build a theory to explain their observations, while “many economic studies typically start with a theory and eventually attempt to fit the data to their model.”  As noted by Emanuel Derman:

In physics it’s fairly easy to tell the crackpots from the experts by the content of their writings, without having to know their academic pedigrees. In finance it’s not easy at all. Sometimes it looks as though anything goes.

I suspect that these leaps of ideological fancy are a natural result of our constant search for meaning in an environment where noise is everywhere and signal vanishingly difficult to detect.  We are meaning-makers at every level and in nearly every situation.  Yet, as I have noted before, information is cheap and meaning is expensive.  Therefore, we tend to short-circuit good process to get to the end result – typically and not so coincidentally the result we wanted all along.

Science progresses not via verification (which can only be inferred) but by falsification (which, if established and itself verified, provides certainty as to what is not true).  Thank you, Karl Popper. In our business, as in science generally, we need to build our investment processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the facts and data allow. Yet the markets demand action.  There is nothing tentative about them. That’s the conundrum we face.

The scientific process cannot offer meaning and can only suggest interpretation. Near the end of her wonderful novel, Housekeeping, Pulitzer Prize winner (for the equally wonderful Gilead) Marilynne Robinson notes that “[f]act explains nothing. On the contrary, it is fact that requires explanation.” This is a telling observation, and one those who are overly enamored with the scientific process are prone to ignore or forget. Science is a fabulous tool – the best we have – but it is merely a tool. It is not a be-all, nor is it an end-all. Derman again: “[d]ata alone doesn’t tell you anything, it carries no message.” Brute fact requires both meaning and context in order to approach anything like truth or understanding. But meaning is increasingly difficult to find in a world, and with respect to markets, that demand definitive answers (or at least definitive decisions) immediately.

I’m certain of it.