Signing Day and the Investment Process

Yesterday – the first Wednesday in February and thus the so-called National Signing Day – was the first day that high school seniors could sign letters of intent to accept an athletic scholarship to play Division I college football in the fall. It’s the culmination of a long recruiting process and crucial to the success of teams and coaches. It can get more than a bit ridiculous.

Some players announced their intentions using live animal props, or worse. One recruit picked Texas over Washington based on a coin flip. At least it wasn’t for the gear, officially anyway. And Snoop Dogg will be shifting his support from USC to cross-town rival UCLA because his son picked the Bruins, where he’ll join P. Diddy’s kid on the team. Cornerback Iman Marshall, a big-time USC signee, has a self-styled “commitment video” that’s particularly absurd.

But the coaches and the media outlets that cover college football recruiting (of which there are an astonishingly high number) take it all very seriously indeed. As the parent of a DI player (at Cal, see above), *I* took it very seriously.

These various publications generally rate the high school players being recruited via a “star system” ranging from two to five stars, with five stars reserved for the top 50 players, four stars for the next 250 (numbers 51-300), three stars for the next 500, and two stars for players who are considered “mid-major” and thus not good enough for the top conferences and teams. Alabama’s current recruiting class is widely reputed to be the nation’s best, for the fifth straight year, averaging out to 4.08 stars. And while it’s not much ado about nothing, it’s much ado about a lot less than you’d think, and in a different way than you probably think. Continue reading

More Probability Suckage

Source: xkcd

I have often noted (see here too) that we generally suck at math, to our great detriment. I have also noted that we are especially poor at dealing with probabilities. If a weather forecaster says that there is an 80 percent chance of rain and it remains sunny, instead of waiting to see if, in the aggregate, it rains 80 percent of the times when his or her forecast called for an 80 percent chance of rain, we race to conclude — perhaps based upon that single instance — that the forecaster isn’t any good. Data trumps our lyin’ eyes, but we don’t routinely see it (and even deny its efficacy).

Further evidence – as if it were needed – in support of my thesis has been offered this week in the reaction to Nate Silver’s projection that Republicans have a very real chance of gaining control of the Senate later this year. This forecast (“a Republican gain of six seats, plus or minus five”) is hardly earth-shattering to anybody who has been paying attention. The configuration of seats up for election favors Republicans and the Democratic President’s approval ratings are dreadful. There isn’t much reason to expect an upswing in Democratic support either, even though (obviously) almost anything could happen over the next few months. Dealing with probabilities necessarily means being wrong sometimes.

Continue reading

Worth Reading

I wrote about probability earlier this week within the context of the unlikely and astonishing conclusion to this past Sunday’s Vikings v. Ravens NFL game. Such probability questions are looked at again here, with the added bonus of showing how a poker hand with similarly unlikely outcomes could have played out. Enjoy.

If Ravens-Vikings Was A Poker Game

The Probability Problem

Investing is a probabilistic enterprise. Since certainty is even rarer than high risk-free returns, we’re left trying to make the best decisions we can based upon the knowledge we have. If we do that extremely well, we might be right most of the time, but still a long way from all of the time. The improbable — the highly unlikely even — happens and happens surprisingly often.

Take yesterday’s NFL action, for example. More specifically, consider the astonishing Vikings v. Ravens game in snowy Baltimore. Continue reading

The Wyatt Earp Effect

My first post for The Big Picture, the wonderful blog from Barry Ritholtz, also of The Washington Post and Bloomberg View, is now up. You may read it here. I hope you will.

The Wyatt Earp Effect

Retirement Planning’s Probability Problem

My latest Research magazine column is now available. Here’s a snippet:

The point here is that the highly improbable happens all the time but is always unexpected. This math explains why we shouldn’t be surprised when the market remains “irrational” far longer than seems possible. But we are. Randomness is difficult for us to deal with. Instead of dealing appropriately with probability, we look for patterns to convince ourselves that the numbers don’t really say what they clearly do. In this regard, we are dumber than rats—literally.

In multiple studies (most prominently those by Edwards and Estes, as reported by Philip Tetlock in his book Expert Political Judgment), subjects were asked to predict which side of a “T-maze” held food for a rat. The maze was rigged such that the food was randomly placed (no pattern), but 60% of the time on one side and 40% on the other. The rat quickly “gets it” and waits at the “60% side” every time and is thus correct 60% of the time. Human observers keep looking for patterns and choose sides in rough proportion to recent results. As a consequence, the humans were right only 52% of the time—they (we!) are much dumber than rats. We routinely misinterpret probabilistic strategies that accept the inevitability of randomness and error.
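The rat-versus-human gap is easy to reproduce in simulation. Here is a minimal sketch (the 60/40 split is from the studies above; the function name and the simple probability-matching model of the human observers are my own illustrative assumptions):

```python
import random

random.seed(42)

def run_trials(n_trials, p_left=0.6):
    """Simulate the rigged T-maze: food appears on the left 60% of the time."""
    food = ["L" if random.random() < p_left else "R" for _ in range(n_trials)]

    # The rat "maximizes": it parks itself at the 60% side every time.
    rat_correct = sum(side == "L" for side in food)

    # Humans "probability match": they guess each side in rough
    # proportion to how often it pays off, hunting for a pattern.
    human_correct = sum(
        ("L" if random.random() < p_left else "R") == side for side in food
    )

    return rat_correct / n_trials, human_correct / n_trials

rat, human = run_trials(100_000)
# rat ≈ 0.60; human ≈ 0.6 × 0.6 + 0.4 × 0.4 = 0.52
```

Matching the base rates instead of always betting the favorite costs the humans roughly eight percentage points, which is exactly the 52% figure reported above.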

If we are going to recommend and implement probabilistic retirement planning strategies, we need to prepare for client and advisor difficulty in dealing with such concepts.

Retirement Planning’s Probability Problem

Gaming the Numb3rs

Last night at the Old Globe here in San Diego I got to see one of my favorite plays, Rosencrantz and Guildenstern are Dead, presented as part of the Globe’s 2013 Shakespeare Festival. Doing so brought the following post to mind in that it uses the play as a springboard for discussing probability and investing. I hope you will enjoy it — or enjoy it again.


Tom Stoppard’s Rosencrantz and Guildenstern are Dead presents Shakespeare’s Hamlet from the bewildered point of view of two of the Bard’s bit players, the comically indistinguishable nobodies who become headliners in Stoppard’s play. The play opens before our heroes have even joined the action in Shakespeare’s epic. They have been “sent for” and are marking time by flipping coins and getting heads each time (the opening clip from the movie version is shown above). Guildenstern keeps tossing coins and Rosencrantz keeps pocketing them. Significantly, Guildenstern is less concerned with his losses than with puzzling out what the defiance of the odds says about chance and fate. “A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability.”

The coin tossing streak depicted provides us with a chance to consider these probabilities.  Guildenstern offers among other explanations the one mathematicians and investors should favor —“a spectacular vindication of the principle that each individual coin spin individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.”  In other words, past performance is not indicative of future results.
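Guildenstern’s point is easy to quantify: because the flips are independent, the probabilities simply multiply. A quick sketch (the 92-head run commonly cited for the play is used here only as an illustration):

```python
from fractions import Fraction

def streak_probability(n):
    """Chance that n fair, independent coin flips all come up heads."""
    return Fraction(1, 2) ** n

streak_probability(2)          # 1/4
float(streak_probability(92))  # astronomically small, yet each flip was 50/50
```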

Even so, how unlikely is a streak of this length? Continue reading

Fixing the Math Suckage

I have repeatedly raged against our human failings with respect to all things mathematical and probabilistic (examples are here, here, here, here, here and here). Therefore, I was delighted to see Danica McKellar (best known for playing Winnie Cooper on The Wonder Years, but also featured on favorite shows such as The West Wing and The Big Bang Theory, as well as being a math whiz from UCLA) at this past weekend’s Los Angeles Times Festival of Books promoting various methods to help us (and particularly young women) improve at math. She was encouraging, engaging and frequently insightful. Danica (or at least Winnie Cooper) is also featured in The New Yorker today. You can check out her books here. I encourage you to do so, especially if you are a young woman or have any young women in your life. The odds are very good that you will be glad you did.

Consider All the Possibilities

I wrote earlier today about our general difficulty in dealing with probabilities in the context of Superstorm Sandy. Here is an interesting exercise to test your abilities in this area broadly and, more particularly, your information processing (from Michael Mauboussin‘s excellent new book, The Success Equation).

Jack is looking at Anne but Anne is looking at George.  Jack is married but George is not.  Is a married person looking at an unmarried person?

  • (a) Yes.
  • (b) No.
  • (c) Cannot be determined.

To see the answer and an explanation for it, select (highlight) the text between the asterisk marks below (I tried to match that text color with the background as closely as I could; the text will “show up” when you do that).

* The answer is (a) but more than 80 percent of people choose (c).  You can reach the correct answer only by considering all the possibilities. If Anne is unmarried, Jack is looking at her and if she is married, she is looking at George.   This is a very helpful exercise for investing and for life in general. Consider all the possibilities. *
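For the skeptical, brute-force enumeration of the hidden answer takes only a few lines (the function name is my own; fair warning that running it spoils the puzzle):

```python
# Jack is married, George is not; Anne's status is the unknown.
def married_looking_at_unmarried():
    looking = [("Jack", "Anne"), ("Anne", "George")]
    for anne_married in (True, False):       # consider all the possibilities
        married = {"Jack": True, "George": False, "Anne": anne_married}
        if not any(married[a] and not married[b] for a, b in looking):
            return False                     # a scenario with no such pair
    return True                              # holds in every scenario

married_looking_at_unmarried()  # → True
```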

Rock You Like a Superstorm


Hurricane/Superstorm Sandy rocked the eastern seaboard last week to devastating effect.  In a significant instance of good planning, markets and schools were closed, states of emergency declared and mandatory evacuations begun well before the storm made landfall.  Yet nearly until the storm reached land in New Jersey last Monday, I heard lots of grousing about alleged hysteria and overreaction with respect to the precautions and preparations being undertaken to mitigate potential damage (see below for a prominent example).   

Some went so far as to defy evacuation orders, and some people paid for doing so with their lives. Once the storm actually hit and caused serious damage – albeit no longer officially as a hurricane, but as a “superstorm” – the complaining stopped. Fortunately, the governmental disaster preparedness organization seems to have performed well overall. You can read about these events in many venues, including here, here and here.

The pre-crisis grousing and the refusal of so many to evacuate are worth thinking about because of what is thereby revealed about us as humans and the cognitive biases that beset us.  I offer three “take-away” thoughts that are broadly applicable as well as specifically applicable to the investment world.

1. We don’t deal well with probabilities.  When a weather forecast says that there is a 70 percent chance of sun, we tend to think that the forecaster screwed up if it rains.  But that’s not how we should evaluate probabilities.  Instead, we should consider how often it rains when the forecast calls for a 70 percent chance of sun.  When the forecast is spot-on perfect, it will rain 30 percent of the time when it calls for a 70 percent chance of sun.  The odds favor sun, but because complex systems like the weather (and financial markets) encompass so many variables, nothing approaching certainty is possible.  We don’t handle that kind of thinking very well (a very current and interesting example in a political context is examined here).
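That evaluation rule — score the forecaster across all of his or her 70 percent calls, not on any single day — can be sketched in a few lines (a hypothetical, perfectly calibrated forecaster is assumed):

```python
import random

random.seed(0)

def rain_rate_on_sunny_calls(n_days=100_000, p_sun=0.7):
    """Over many days where the forecast said '70% chance of sun',
    count how often it actually rained."""
    rainy_days = sum(random.random() >= p_sun for _ in range(n_days))
    return rainy_days / n_days

rain_rate_on_sunny_calls()  # ≈ 0.30 — rain on 30% of those days is *correct*
```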

To illustrate the level of complexity I’m talking about, consider that we can construct a linear, one-dimensional chain with 10 different links in 3,628,800 different ways. For 100 different links, the possibilities total 10^158. If those are the possibilities for making a simple chain, imagine the possibilities when we’re talking about complex systems where wild randomness rules.
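The arithmetic checks out directly: the orderings of n distinct links are the permutations n!, as a couple of lines of Python confirm:

```python
from math import factorial, log10

factorial(10)                 # 3628800 — the 3,628,800 ways noted above
round(log10(factorial(100)))  # 158 — i.e., 100! ≈ 10^158 orderings
```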

Perhaps the key argument of Nobel laureate Daniel Kahneman’s brilliant book, Thinking, Fast and Slow, is that without careful and intentional deliberation (and often even then), we suffer from probabilistic irrationality. Remember back in 2009 when New England Patriots coach (and my former New Jersey neighbor) Bill Belichick famously decided to go for a first down on fourth-and-two in Patriots territory rather than punt while up six points late against Peyton Manning and the Indianapolis Colts?  When Wes Welker was stopped just short of the first down and the Colts went on to score the winning touchdown, the criticism was overwhelming even though Belichick’s decision gave the Pats a better chance of winning. Those withering attacks simply demonstrate our difficulties with probabilities. Doing what offers the best chance of success in no way guarantees success. As analyst Bill Barnwell, who was agnostic on whether Belichick was right or wrong, wrote: “you can’t judge Belichick’s decision by the fact that it didn’t work” (bold and italics in the original). We can (and should) hope for the best while preparing for the worst.

The world is wildly random.  With so many variables, even the best process (when we are able to overcome our probabilistic irrationality) can be undermined at many points, a significant number of which are utterly out of anyone’s control.   As Nate Silver reports in his fine new book, The Signal and the Noise, the National Weather Service is extremely good at weather forecasting in a probabilistic sense. When the NWS says there is a 70 percent chance of sun, it’s sunny just about 70 percent of the time.  Because we don’t think probabilistically (and crave certainty too), we tend to assume that the forecasts on the days it rains – 30 percent of the time – are wrong.  Accordingly, when a probabilistic forecast of a dangerous hurricane is generally inconsistent with our experience (“I didn’t have a problem last time”) and isn’t what we want to hear (think confirmation bias), we can readily focus on the times we remember weather forecasts being “wrong” and discount the threat.  As mathematician John Allen Paulos tweeted regarding the trouble that so many seem to have with election probabilities:

Many people’s notion of probability is so impoverished that it admits of only two values: 50-50 and 99%, tossup or essentially certain.

In a fascinating research study, economists Emre Soyer and Robin Hogarth showed the results of a regression analysis to a test population of economics professors. When they presented the results in the way most commonly done in economics journals (as a single number accompanied by some error measures), the economists — whose careers are largely predicated upon doing just this sort of analysis! — did an embarrassingly poor job of answering a set of questions about the probabilities of various outcomes. When they presented the results as a scatter graph, the economists got most of the questions right. Yet when they presented the results both ways, the economists got most of the questions wrong again. As Justin Fox emphasizes, there seems to be something about a single-number probability assessment that lures our primitive brains in and leads them astray.

Due to complexity and the wild randomness it entails, the investment world — like weather forecasting — offers nothing like certainty.  As every blackjack player recognizes, making the “right” play (probabilistically) does not ensure success.  The very best we can hope for is favorable odds and that over a long enough period those odds will play out (and even then only after careful research to establish the odds).  That we don’t deal well with probabilities makes a difficult situation far, far worse.

2. We’re prone to recency bias too.  We are all prone to recency bias, meaning that we tend to extrapolate recent events into the future indefinitely.  Since the recent experience of residents of the eastern seaboard (Hurricane Irene) wasn’t nearly as bad as expected (despite doing significant damage), that experience was extrapolated to the present by many.  When confirmation bias (we tend to see what we want to see) and optimism bias are added to the mix, it’s no wonder so many didn’t evaluate storm risk (and don’t evaluate investment risk) very well.

3. We don’t deal well with low probability, high impact events.  In the aggregate, hurricanes are low-frequency but high impact events.  As I have explained before, when people calculate the risk of hurricane damage and make decisions about hurricane insurance, they consistently misread their prior experience. This conclusion comes from a paper by Wharton Professor Robert Meyer that describes and reports on a research simulation in which participants were instructed that they were owners of properties in a hurricane-prone coastal area and were given monetary incentives to make smart choices about (a) when and whether to buy insurance against hurricane losses and (b) how much insurance to buy.

Over the course of the study (three simulated hurricane “seasons”), participants would periodically watch a map that showed whether a hurricane was building as well as its strength and course. Until virtually the last second before the storm was shown to reach landfall, the participants could purchase partial insurance ($100 per 10 percent of protection, up to 50 percent) or full coverage ($2,500) on the $50,000 home they were said to own. Participants were advised how much damage each storm was likely to cause and, afterward, the financial consequences of their choices. They had an unlimited budget to buy insurance.  Those who made the soundest financial decisions were eligible for a prize.
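The rational benchmark in a simulation like this is straightforward expected value: buy coverage whenever the expected loss avoided exceeds the premium. A minimal sketch using the study’s pricing ($100 per 10 percent of protection, so $500 for the 50 percent maximum); the storm numbers are purely illustrative:

```python
def expected_value_of_cover(p_hit, expected_damage, premium, coverage_frac):
    """Expected gain from insuring a fraction of the potential loss."""
    expected_loss_avoided = p_hit * expected_damage * coverage_frac
    return expected_loss_avoided - premium

# A storm with a 30% chance of hitting and ~$20,000 in likely damage:
# 50% coverage at a $500 premium avoids 0.3 * 20,000 * 0.5 = $3,000
# of expected loss, so buying is clearly the rational call.
expected_value_of_cover(0.3, 20_000, 500, 0.5)  # → 2500.0
```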

The focus of the research was to determine whether there are “inherent limits to our ability to learn from experience about the value of protection against low-probability, high-consequence events.”  In other words — whether experience can help us deal with tail risk. Sadly, we don’t deal with this type of risk management very well. Moreover, as Nassim Taleb has shown, such risks — while still not anything like frequent — happen much more often than we tend to think (which explains why the 2008-09 financial crisis was deemed so highly unlikely by the vast majority of experts and their models). 

The bottom line here is that participants seriously under-protected their homes. The first year, they sustained losses almost three times higher than if they had bought protection rationally. The key problem was a consistent failure to buy protection or enough protection even when serious and imminent risk was obvious (sounds like people refusing to evacuate, doesn’t it?).  Moreover, most people reduced the amount of protection they bought whenever they endured no damage in the previous round, even if that lack of damage was specifically the result of having bought insurance.

Experience helped a little.  Participants got better at the game as season one progressed, but they slipped back into old habits when season two began. By season three, these simulated homeowners were still suffering about twice as much damage as they should have.  As Meyer’s paper reports, these research results are consistent with patterns seen in actual practice. For example, the year after Hurricane Katrina there was a 53% increase in new flood-insurance policies issued nationally.  But within two years, cancellations had brought the coverage level down to pre-Katrina levels.

We simply don’t do a very good job dealing with low-probability, high-impact events, even when we have experience with them.  Since those in the northeast have so little experience with hurricanes, their discounting of hurricane risk is (again) even more understandable.  Given what happened to the vast majority of investment portfolios in 2008-09, the alleged market “professionals” often don’t manage tail risk very well either. That said, when a low-frequency event is treated as a certainty or near-certainty as a matter of policy, that overreaction can be disastrous and the costs too high to bear, as a Navy SEAL Commander here in San Diego once took great pains to explain to me in the context of fighting terrorism.

Taleb goes so far as to assert that we should “ban the use of probability.”  I disagree, but we ought to use probabilities with care and be particularly careful about how we convey probability assessments.  For example, a potential range of outcomes is better than a single number (as with the scatter graphs noted above).  Similarly, an outlook that shows the weighing of probabilities together with costs and potential outcomes will also help (this discussion makes a start in that direction).  Despite the risks of being perceived as “crying wolf,” we intuitively understand that when and as the potential negative outcomes are greater, lower likelihood events should generally be treated more seriously and that the progression is typically non-linear.   

In virtually every endeavor, our cognitive biases are a consistent problem and provide a constant challenge.  In terms of investing, they can and often do rock us like a hurricane — or at least a superstorm.  As Cullen Roche points out, consistent with the research noted above, we can and should learn from our investment errors, cognitive or otherwise.  Sadly, we do so far less often than we ought, as last week’s events amply demonstrate.