CFA Conference: Post Compendium

I have been live-blogging at the 67th CFA Institute Annual Conference here in Seattle. As before, I have been very gratified at the kind reception these posts have received. This post will act as a compendium so that you may find them all in one convenient location. Links to the posts are provided below, with the most recent listed first.


CFA Conference: Nate Silver

The Signal and The Noise

Moderated by Diane Brady, Bloomberg Businessweek

Nate Silver runs the political website FiveThirtyEight.com, where he publishes a running forecast of current elections and hot-button issues. Formerly published in The New York Times and recently relaunched in partnership with ESPN, FiveThirtyEight.com has made Mr. Silver the public face of statistical analysis and political forecasting. His latest book is titled The Signal and The Noise: Why Most Predictions Fail—But Some Don’t. Before coming to politics, Mr. Silver established his credentials as an analyst of baseball statistics, developing a widely acclaimed system that predicts player performance, career development, and seasonal winners and losers. He has written for ESPN.com, Sports Illustrated, Slate, and The New York Times. Mr. Silver received his BA in economics from the University of Chicago.

  • Overfitting: The reason noise is mistaken for signal
  • Closing the gap between what we know and what we think we know: One giant Bayesian leap—and some small steps—forward (a toy example follows this list)
  • Improving forecasts and models: Think probabilistically, stop and smell the data, know your biases, and add a dose of humility
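To make that "Bayesian leap" concrete, here is a minimal sketch of Bayes’ rule in the forecasting spirit of Silver’s talk. Every number below is hypothetical:

```python
# Toy Bayes' rule update: revising a win probability after a favorable poll.
prior = 0.60            # hypothetical prior probability of winning
p_poll_if_win = 0.75    # assumed chance of seeing this poll given a win
p_poll_if_lose = 0.40   # assumed chance of seeing this poll given a loss

evidence = prior * p_poll_if_win + (1 - prior) * p_poll_if_lose
posterior = prior * p_poll_if_win / evidence
print(f"posterior win probability: {posterior:.3f}")  # ~0.738
```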

Continue reading

More Probability Suckage

Source: xkcd

I have often noted (see here too) that we generally suck at math, to our great detriment. I have also noted that we are especially poor at dealing with probabilities. If a weather forecaster says that there is an 80 percent chance of rain and it remains sunny, instead of waiting to see if, in the aggregate, it rains 80 percent of the times when his or her forecast called for an 80 percent chance of rain, we race to conclude — perhaps based upon that single instance — that the forecaster isn’t any good. Data trumps our lyin’ eyes, but we don’t routinely see it (and even deny its efficacy).
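That aggregate test is easy to make concrete. A minimal sketch, assuming a perfectly calibrated (simulated) forecaster, shows what judging "in the aggregate" looks like:

```python
import random

random.seed(42)

# Whenever this forecaster says "80 percent chance of rain",
# it actually rains with probability 0.8.
forecasts = [0.8] * 1000
outcomes = [random.random() < p for p in forecasts]

# The right test is the aggregate frequency, not any single day.
hit_rate = sum(outcomes) / len(outcomes)
print(f"rained on {hit_rate:.1%} of the '80% chance' days")  # close to 80%
```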

Further evidence – as if it were needed – in support of my thesis has been offered this week in the reaction to Nate Silver’s projection that Republicans have a very real chance of gaining control of the Senate later this year. This forecast (“a Republican gain of six seats, plus or minus five”) is hardly earth-shattering to anybody who has been paying attention. The configuration of seats up for election favors Republicans and the Democratic President’s approval ratings are dreadful. There isn’t much reason to expect an upswing in Democratic support either, even though (obviously) almost anything could happen over the next few months. Dealing with probabilities necessarily means being wrong sometimes.
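To see what such a forecast implies for control of the chamber, here is one hedged reading of "a gain of six seats, plus or minus five." Treating the plus-or-minus as a one-standard-deviation band around a normal distribution is my assumption, not Silver’s stated specification:

```python
from statistics import NormalDist

# Toy reading of the forecast: seat gain ~ Normal(mean=6, sd=5).
seat_gain = NormalDist(mu=6, sigma=5)

# Republicans needed a net gain of six seats for Senate control in 2014.
p_control = 1 - seat_gain.cdf(5.5)  # continuity-corrected P(gain >= 6)
print(f"P(GOP control): {p_control:.0%}")  # ~54% under these assumptions
```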

Continue reading

We Are Less Than Rational

Investment Belief #3: We aren’t nearly as rational as we assume

Traditional economic theory insists that we humans are rational actors making rational decisions amidst uncertainty in order to maximize our marginal utility. Sometimes we even try to believe it. But we aren’t nearly as rational as we tend to assume. We frequently delude ourselves and are readily manipulated – a fact that the advertising industry is eager to exploit.

Watch Mad Men‘s Don Draper (Jon Hamm) use the emotional power of words to sell a couple of Kodak executives on himself and his firm while turning what they perceive to be a technological achievement (the “wheel”) into something much richer and more compelling – the “carousel.”

Those Kodak guys will hire Draper, of course, but their decision-making will hardly be rational. Homo economicus is thus a myth. But, of course, we already knew that. Even young and inexperienced investors can recognize as much after just a brief exposure to real-world markets. The “rational man” is as non-existent as the Loch Ness Monster, Bigfoot and (perhaps) moderate Republicans. Yet the idea that we’re essentially rational creatures is a very seductive myth, especially when we apply the concept to ourselves (few lose money preying on another’s ego). We love to think that we’re rational actors carefully examining and weighing the available evidence in order to reach the best possible conclusions.

Oh that it were so. If we aren’t really careful, we will remain deluded that we see things as they really are. The truth is that we see things the way we really are. I frequently note that investing successfully is very difficult. And so it is. But the reasons why go well beyond the technical aspects of investing. Sometimes it is retaining honesty, lucidity and simplicity – seeing what is really there – that is so hard.

Continue reading

Betting on Investment Skill

In 2006, the TradingMarkets/Playboy 2006 Stock Picking Contest was won by Playboy’s Miss May of 1998, Deanna Brooks. Her portfolio, which bet heavily on oil and gold stocks, gained 46.43 percent on the year, and every stock in it provided double-digit returns. She liked Yamana Gold because “What girl doesn’t like a little bling? I’m hot for gold this year.…” It wasn’t her only nugget of sterling analysis. She also liked Petrobras because “oil is making money” and IBM because computers “aren’t going away.” She wasn’t the only Playmate to find a rich vein of success. A higher percentage of participating Playmates bested the S&P 500’s 2006 returns than active money managers. Think about that for a moment. Over the course of a full year, a bunch of Playmates outperformed a whopping majority of highly trained and experienced professionals with vast resources who spend all day every day trying to beat the market.

It’s easy to say that the Playmates got lucky, and they did. But we’d never expect a guy swimming laps at the YMCA to beat Michael Phelps across the pool, a girl off the street to beat a Grandmaster in chess, or an unschooled janitor to solve an insanely complex math problem amidst a spot of cleaning in the afternoon that the best and the brightest need years to figure out. Not even once.

If something like that actually were to happen, we’d treat it as a marvel (as the movie Good Will Hunting does), not just as a whimsical curiosity to be used for the purposes of garnering a bit of publicity and ogling attractive women.

It’s tempting simply to say that the contest is too small a sample size to be meaningful and move on. Had she stuck with investing, Miss May’s performance would have faltered, and by a lot, probably sooner rather than later, as all investment performance tends to be mean reverting. But we also know that sample size doesn’t mean much when little luck is involved. It doesn’t matter how many times I race Michael Phelps. The chances of my winning will always be vanishingly small — effectively zero.
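Some back-of-the-envelope arithmetic shows why even multi-year "wins" tell us little when luck dominates. This sketch assumes a pure-luck picker with a coin-flip chance against the index each year:

```python
# A pure-luck picker has a 50/50 shot against the index each year.
p_beat_5_straight = 0.5 ** 5
print(f"one picker, 5 straight winning years: {p_beat_5_straight:.1%}")  # 3.1%

# In a crowd of 100 such pickers, someone almost always compiles the streak.
n_pickers = 100
p_someone = 1 - (1 - p_beat_5_straight) ** n_pickers
print(f"at least one streak among 100:        {p_someone:.1%}")  # ~96%
```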

It’s also important to emphasize (as Michael Mauboussin did in his excellent book, The Success Equation, and at The Big Picture Conference recently) the paradox of skill when it comes to investing. As overall skill improves, aggregate performance improves and luck becomes more important to individual outcomes. On account of the growth and development of the investment industry, John Bogle could, without inconsistency, write his senior thesis at Princeton on the successes of active fund management and then go on some years later to found Vanguard and become the primary developer and intellectual forefather of indexing. In other words, the ever-increasing aggregate skill (supplemented by massive computing power) of the investment world has come largely to cancel itself out.
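The paradox of skill can be sketched in a few lines. In this toy model (my construction for illustration, not Mauboussin’s), observed performance is skill plus luck, and as the spread of skill narrows, luck explains more of the differences we actually observe:

```python
def luck_share(skill_sd: float, luck_sd: float = 1.0) -> float:
    # Outcome = skill + luck (independent), so variance shares tell us how
    # much of the observed differences in performance luck accounts for.
    return luck_sd**2 / (skill_sd**2 + luck_sd**2)

# As aggregate skill rises and converges, its dispersion shrinks...
for skill_sd in (2.0, 1.0, 0.5, 0.25):
    print(f"skill spread {skill_sd}: luck explains {luck_share(skill_sd):.0%}")
# ...and luck goes from explaining 20% of outcomes to about 94%.
```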

These explanations are good as far as they go, but they hardly tell the entire story. Lady Luck is crucial to investment outcomes. There is no getting around it. Managing one’s portfolio so as to benefit the most from good luck and (even more importantly) to get hurt the least by bad luck is the key to investment management. Doing so well is a remarkable skill, but not the sort of skill that’s commonly assumed, even (especially!) by professionals.

More to the point, if investment returns depend that heavily on luck and real investment skill is that elusive and rare, what should we do with our (or our clients’) money? For some answers, we turn to the world of…poker? That’s right — poker.

Continue reading

Bracketology

The best two-and-a-half weeks of the sports year start today (I know that “play-in” games were held earlier).  Since this site strives to be “data-driven” (check out the masthead) and since we tend to suck both at math in general and at probabilities, as a public service I offer this video explaining the likelihood that you (or anyone else) will have a perfect bracket this year. 
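For those who prefer arithmetic to video, the perfect-bracket math runs roughly as follows; the 70 percent per-game accuracy is purely an assumption for illustration:

```python
# 63 games after the play-ins; a perfect bracket must get every one right.
coin_flip = 0.5 ** 63
skilled = 0.7 ** 63   # assumed per-game accuracy for a skilled picker
print(f"coin flipper : 1 in {1 / coin_flip:,.0f}")  # 1 in ~9.2 quintillion
print(f"70% per game : 1 in {1 / skilled:,.0f}")    # still 1 in ~5.7 billion
```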

As a further public service, Nate Silver’s bracket analysis and that of the Harvard College Sports Analysis Collective are linked below.

And as one final public service, please enjoy the following too.  I always do.

 

Data Beats Your Lyin’ Eyes

As reported by The New York Times, film critic Pauline Kael expressed shock at Richard Nixon’s landslide victory over George McGovern in 1972. “I live in a rather special world. I only know one person who voted for Nixon. Where they are I don’t know. They’re outside my ken. But sometimes when I’m in a theater I can feel them.” Even after the votes were in and counted, Kael wanted to believe her lyin’ eyes. This year, it’s Republicans who have fallen prey to confirmation bias and rejected the data in favor of preconceived ideological commitments and intuition.

Source: xkcd

Stats wizard Nate Silver has been at the center of a controversy this election season as his data-driven presidential election analysis, outlined at his FiveThirtyEight blog, contradicted the desires of Republicans and pundits who did not want a clear victory for President Obama (albeit for different reasons). Silver created a forecasting model that was uncannily accurate in 2008 (49 of 50 states) and which consistently predicted that President Obama was a clear favorite over Mitt Romney, angering conservatives in the process. When the President won a clear victory last night (the extent of which is still being determined as I write this), Silver’s method and approach were vindicated.

Silver critics such as Politico’s Dylan Byers (“Nate Silver could be a one-term celebrity”), David Brooks of The New York Times (“The pollsters tell us what’s happening now. When they start projecting, they’re getting into silly land”), Morning Joe‘s Joe Scarborough (“Nate Silver says this is a 73.6 percent chance that the president is going to win? Nobody in that campaign thinks they have a 73 percent chance — they think they have a 50.1 percent chance of winning”), The Washington Post’s Michael Gerson (“Silver’s prediction is not an innovation; it is a trend taken to its absurd extreme”) and Politico’s Josh Gerstein (“Isn’t the basic problem with the Nate Silver prediction in question, and the critique, that it puts a percentage on a one-off event?”) have all demonstrated that, consistent with my warnings, we simply do not deal with probability very well. More fundamentally, their data-deficient “analysis” has been weighed and found wanting.

With respect to probability, as Silver warned Byers, one shouldn’t confuse prediction with prophecy. As Zeynep Tufekci proclaimed at Wired in her careful defense of Silver, this “isn’t wizardry,” but “the sound science of complex systems.” Accordingly, “[u]ncertainty is an integral part of it. But that uncertainty shouldn’t suggest that we don’t know anything, that we’re completely in the dark, that everything’s a toss-up.” Here’s the key:

What his model says is that currently, given what we know, if we run a gabazillion modeled elections, Obama wins 80 percent of the time…Since we’ll only have one election on Nov. 6, it’s possible that Obama can lose. But Nate Silver’s (and others’) statistical models remain robust and worth keeping and expanding — regardless of the outcome this Tuesday.

Wa-Bam.  The probabilities were clear.  Governor Romney could have won, but it was unlikely.
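The "gabazillion modeled elections" idea is ordinary Monte Carlo simulation. Here is a deliberately stripped-down sketch; the base totals and swing-state probabilities are invented for illustration and are not Silver’s numbers:

```python
import random

random.seed(538)

# Safe states are lumped into fixed base totals; four swing states get
# assumed win probabilities. All inputs here are made up.
base_dem, base_rep = 247, 222
swing = {"FL": (29, 0.50), "OH": (18, 0.75), "VA": (13, 0.70), "CO": (9, 0.68)}

def simulate_once() -> int:
    ev = base_dem
    for votes, p_dem in swing.values():
        if random.random() < p_dem:
            ev += votes
    return ev

runs = 100_000
p_win = sum(simulate_once() >= 270 for _ in range(runs)) / runs
print(f"P(incumbent wins): {p_win:.1%}")  # roughly 84% with these inputs
```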

With respect to data, Ezra Klein’s Wonkblog at the Washington Post offers a detailed defense of quantitative analysis as well as Silver (more here). Had Silver’s model been wrong, it would have been because the underlying polls — lots of them — were wrong. Silver’s model is a sophisticated form of poll valuation and aggregation together with demographic and voting trend analysis.

As my Above the Market masthead proclaims, I believe in and strive to focus on “data-driven analysis.”  Because Silver’s work is quintessentially that, it was easy for me to rely upon it in making my prediction of 303 electoral votes for the President (Silver predicted 313).  The pundits, however, were all over the map.  Data must override ideology, punditry and feelings whether we’re talking about elections, markets or anything else. Data wins.  If you want to oppose what the data suggests, it can only be done via better data or better analysis of the data.

To be clear, my prediction (like Silver’s) could have been dramatically wrong.  Again, it was based upon data and probabilities rather than certainties.  The electorate could have defied the odds (in much the same way that a longshot can win the Super Bowl). Silver, in his fine new book The Signal and the Noise, urges us to “stop and smell the data — slow down, and consider the imperfections in your thinking.” Those of us who work in the markets should do exactly as he suggests.

I’m a big fan of Peggy Noonan. But here’s her pre-election analysis:

There is no denying the Republicans have the passion now, the enthusiasm. The Democrats do not. Independents are breaking for Romney. And there’s the thing about the yard signs. In Florida a few weeks ago I saw Romney signs, not Obama ones. From Ohio I hear the same. From tony Northwest Washington, D.C., I hear the same.

Is it possible this whole thing is playing out before our eyes and we’re not really noticing because we’re too busy looking at data on paper instead of what’s in front of us? Maybe that’s the real distortion of the polls this year: They left us discounting the world around us.

Her writing is still lovely but her lyin’ eyes were wrong and that form of punditry (and market analysis) is d-e-a-d.

Rock You Like a Superstorm

 

Hurricane/Superstorm Sandy rocked the eastern seaboard last week to devastating effect.  In a significant instance of good planning, markets and schools were closed, states of emergency declared and mandatory evacuations begun well before the storm made landfall.  Yet nearly until the storm reached land in New Jersey last Monday, I heard lots of grousing about alleged hysteria and overreaction with respect to the precautions and preparations being undertaken to mitigate potential damage (see below for a prominent example).   

Some went so far as to defy evacuation orders, and some people paid for doing so with their lives. Once the storm actually hit and caused serious damage – albeit no longer officially as a hurricane, but as a “superstorm” – the complaining stopped. Fortunately, the governmental disaster preparedness organization seems to have performed well overall. You can read about these events in many venues, including here, here and here.

The pre-crisis grousing and the refusal of so many to evacuate are worth thinking about because of what is thereby revealed about us as humans and the cognitive biases that beset us.  I offer three “take-away” thoughts that are broadly applicable as well as specifically applicable to the investment world.

1. We don’t deal well with probabilities.  When a weather forecast says that there is a 70 percent chance of sun, we tend to think that the forecaster screwed up if it rains.  But that’s not how we should evaluate probabilities.  Instead, we should consider how often it rains when the forecast calls for a 70 percent chance of sun.  When the forecast is spot-on perfect, it will rain 30 percent of the time when it calls for a 70 percent chance of sun.  The odds favor sun, but because complex systems like the weather (and financial markets) encompass so many variables, nothing approaching certainty is possible.  We don’t handle that kind of thinking very well (a very current and interesting example in a political context is examined here).

To illustrate the level of complexity I’m talking about, consider that we can construct a linear, one-dimensional chain with 10 different links in 3,628,800 different ways. For 100 different links, the possibilities total 10^158. If those are the possibilities for making a simple chain, imagine the possibilities when we’re talking about complex systems where wild randomness rules.
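The chain arithmetic is simple factorial counting, as two lines confirm:

```python
import math

# Ways to order n distinct links in a one-dimensional chain: n!
print(f"{math.factorial(10):,}")     # 3,628,800
print(f"{math.factorial(100):.3e}")  # ~9.333e+157, on the order of 10^158
```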

Perhaps the key argument of Nobel laureate Daniel Kahneman’s brilliant book, Thinking, Fast and Slow, is that without careful and intentional deliberation (and often even then), we suffer from probabilistic irrationality. Remember back in 2009 when New England Patriots coach (and my former New Jersey neighbor) Bill Belichick famously decided to go for a first down on fourth-and-two in Patriots territory rather than punt while up six points late against Peyton Manning and the Indianapolis Colts? When Wes Welker was stopped just short of the first down and the Colts went on to score the winning touchdown, the criticism was overwhelming even though Belichick’s decision gave the Pats a better chance of winning. Those withering attacks simply demonstrate our difficulties with probabilities. Doing what offers the best chance of success in no way guarantees success. As analyst Bill Barnwell, who was agnostic on whether Belichick was right or wrong, wrote: “you can’t judge Belichick’s decision by the fact that it didn’t work” (bold and italics in the original). We can (and should) hope for the best while preparing for the worst.
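The Belichick debate reduces to an expected-win-probability comparison. The numbers below are made up for illustration (contemporaneous analyses used figures in this general neighborhood); what matters is the structure of the argument:

```python
# Judge the decision by expected win probability, not by the outcome.
p_convert = 0.60      # assumed chance of converting fourth-and-two
wp_if_convert = 1.00  # assumed: converting effectively ends the game
wp_if_fail = 0.55     # assumed chance of winning even after a failed try
wp_if_punt = 0.70     # assumed chance of stopping Manning after a punt

wp_go = p_convert * wp_if_convert + (1 - p_convert) * wp_if_fail
print(f"go for it: {wp_go:.0%} vs punt: {wp_if_punt:.0%}")  # 82% vs 70%
```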

The world is wildly random. With so many variables, even the best process (when we are able to overcome our probabilistic irrationality) can be undermined at many points, a significant number of which are utterly out of anyone’s control. As Nate Silver reports in his fine new book, The Signal and the Noise, the National Weather Service is extremely good at weather forecasting in a probabilistic sense. When the NWS says there is a 70 percent chance of sun, it’s sunny just about 70 percent of the time. Because we don’t think probabilistically (and crave certainty too), we tend to assume that the forecasts on the days it rains – 30 percent of the time – are wrong. Accordingly, when a probabilistic forecast of a dangerous hurricane is generally inconsistent with our experience (“I didn’t have a problem last time”) and isn’t what we want to hear (think confirmation bias), we can readily focus on the times we remember weather forecasts being “wrong” and discount the threat. As mathematician John Allen Paulos tweeted regarding the trouble that so many seem to have with election probabilities:

Many people’s notion of probability is so impoverished that it admits of only two values: 50-50 and 99%, tossup or essentially certain.

In a fascinating research study, economists Emre Soyer and Robin Hogarth showed the results of a regression analysis to a test population of economics professors. When they presented the results in the way most commonly done in economics journals (as a single number accompanied by some error measures), the economists — whose careers are largely predicated upon doing just this sort of analysis! — did an embarrassingly poor job of answering a set of questions about the probabilities of various outcomes. When they presented the results as a scatter graph, the economists got most of the questions right. Yet when they presented the results both ways, the economists got most of the questions wrong again. As Justin Fox emphasizes, there seems to be something about a single-number probability assessment that lures our primitive brains in and leads them astray.
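The Soyer-Hogarth finding is easy to demonstrate for yourself. In this toy regression world (my construction, not their actual materials), the single-number summary tempts overconfidence, while simulating outcomes answers a typical probability question directly:

```python
import random

random.seed(0)

# Toy world: y = 1.0 * x + noise, with a lot of noise. A table reporting
# "coefficient = 1.0" invites overconfident reads; sampling outcomes shows
# how uncertain an individual prediction really is.
beta, noise_sd = 1.0, 2.0
ys = [beta * 1.0 + random.gauss(0, noise_sd) for _ in range(100_000)]
print(f"P(y > 0 | x = 1): {sum(y > 0 for y in ys) / len(ys):.0%}")  # ~69%
```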

Due to complexity and the wild randomness it entails, the investment world — like weather forecasting — offers nothing like certainty. As every blackjack player recognizes, making the “right” play (probabilistically) does not ensure success. The very best we can hope for is favorable odds and that over a long enough period those odds will play out (and even then only after careful research to establish the odds). That we don’t deal well with probabilities makes a difficult situation far, far worse.

2. We’re prone to recency bias too.  We are all prone to recency bias, meaning that we tend to extrapolate recent events into the future indefinitely.  Since the recent experience of residents of the eastern seaboard (Hurricane Irene) wasn’t nearly as bad as expected (despite doing significant damage), that experience was extrapolated to the present by many.  When confirmation bias (we tend to see what we want to see) and optimism bias are added to the mix, it’s no wonder so many didn’t evaluate storm risk (and don’t evaluate investment risk) very well.

3. We don’t deal well with low probability, high impact events.  In the aggregate, hurricanes are low-frequency but high impact events.  As I have explained before, when people calculate the risk of hurricane damage and make decisions about hurricane insurance, they consistently misread their prior experience. This conclusion comes from a paper by Wharton Professor Robert Meyer that describes and reports on a research simulation in which participants were instructed that they were owners of properties in a hurricane-prone coastal area and were given monetary incentives to make smart choices about (a) when and whether to buy insurance against hurricane losses and (b) how much insurance to buy.

Over the course of the study (three simulated hurricane “seasons”), participants would periodically watch a map that showed whether a hurricane was building as well as its strength and course. Until virtually the last second before the storm was shown to reach landfall, the participants could purchase partial insurance ($100 per 10 percent of protection, up to 50 percent) or full coverage ($2,500) on the $50,000 home they were said to own. Participants were advised how much damage each storm was likely to cause and, afterward, the financial consequences of their choices. They had an unlimited budget to buy insurance.  Those who made the soundest financial decisions were eligible for a prize.
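Using the study’s stated prices, a toy expected-cost comparison shows what rational play looks like. The storm probability and expected damage below are my assumptions, not parameters reported in the paper:

```python
# Prices from the study: $100 per 10% of protection (up to 50%), or
# $2,500 for full coverage on the $50,000 home.
p_storm, expected_damage = 0.25, 20_000  # assumed for illustration

def expected_cost(coverage: float, premium: float) -> float:
    # Premium paid up front, plus the expected uninsured share of damage.
    return premium + p_storm * expected_damage * (1 - coverage)

for label, cov, prem in [("no insurance", 0.0, 0),
                         ("50% partial", 0.5, 500),
                         ("full coverage", 1.0, 2_500)]:
    print(f"{label:>13}: expected cost ${expected_cost(cov, prem):,.0f}")
```

Under these made-up inputs, full coverage minimizes expected cost (about $2,500 versus $5,000 uninsured), which is exactly the sort of choice participants failed to make.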

The focus of the research was to determine whether there are “inherent limits to our ability to learn from experience about the value of protection against low-probability, high-consequence events.”  In other words — whether experience can help us deal with tail risk. Sadly, we don’t deal with this type of risk management very well. Moreover, as Nassim Taleb has shown, such risks — while still not anything like frequent — happen much more often than we tend to think (which explains why the 2008-09 financial crisis was deemed so highly unlikely by the vast majority of experts and their models). 

The bottom line here is that participants seriously under-protected their homes. The first year, they sustained losses almost three times higher than if they had bought protection rationally. The key problem was a consistent failure to buy protection or enough protection even when serious and imminent risk was obvious (sounds like people refusing to evacuate, doesn’t it?).  Moreover, most people reduced the amount of protection they bought whenever they endured no damage in the previous round, even if that lack of damage was specifically the result of having bought insurance.

Experience helped a little.  Participants got better at the game as season one progressed, but they slipped back into old habits when season two began. By season three, these simulated homeowners were still suffering about twice as much damage as they should have.  As Meyer’s paper reports, these research results are consistent with patterns seen in actual practice. For example, the year after Hurricane Katrina there was a 53% increase in new flood-insurance policies issued nationally.  But within two years, cancellations had brought the coverage level down to pre-Katrina levels.

We simply don’t do a very good job dealing with low-probability, high-impact events, even when we have experience with them.  Since those in the northeast have so little experience with hurricanes, their discounting of hurricane risk is (again) even more understandable.  Given what happened to the vast majority of investment portfolios in 2008-09, the alleged market “professionals” often don’t manage tail risk very well either. That said, when a low-frequency event is treated as a certainty or near-certainty as a matter of policy, that overreaction can be disastrous and the costs too high to bear, as a Navy SEAL Commander here in San Diego once took great pains to explain to me in the context of fighting terrorism.

Taleb goes so far as to assert that we should “ban the use of probability.” I disagree, but we ought to use probabilities with care and be particularly careful about how we convey probability assessments. For example, a potential range of outcomes is better than a single number (as with the scatter graphs noted above). Similarly, an outlook that weighs probabilities together with costs and potential outcomes will also help (this discussion makes a start in that direction). Despite the risks of being perceived as “crying wolf,” we intuitively understand that as the potential negative outcomes grow, lower-likelihood events should generally be treated more seriously, and that the progression is typically non-linear.
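One classic way to weigh probabilities against costs and outcomes is the "cost-loss" rule from weather decision theory: protect whenever the probability of the event exceeds the ratio of the cost of protecting to the loss if unprotected. A minimal sketch, with illustrative numbers:

```python
def should_protect(p_event: float, cost: float, loss: float) -> bool:
    # Protect when the expected loss avoided (p_event * loss) exceeds the
    # cost of protecting, i.e., when p_event > cost / loss.
    return p_event > cost / loss

# A cheap precaution against a catastrophic loss is justified even at a
# low probability -- the "low-probability, high-impact" case.
print(should_protect(p_event=0.05, cost=1_000, loss=100_000))  # True (0.05 > 0.01)
```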

In virtually every endeavor, our cognitive biases are a consistent problem and provide a constant challenge.  In terms of investing, they can and often do rock us like a hurricane — or at least a superstorm.  As Cullen Roche points out, consistent with the research noted above, we can and should learn from our investment errors, cognitive or otherwise.  Sadly, we do so far less often than we ought, as last week’s events amply demonstrate.

Trying to Pick Winners

We all know that the outcomes in many activities in life combine both skill and luck. Investing is one of these.  Understanding the relative contributions of luck and skill can help us assess past results and, more importantly, anticipate future results.  It might even help our forecasting skills, but we’d probably be wise not to bet on it.

As I have noted before, in Major League Baseball, over a 162-game season the best teams win roughly 60 percent of the time. But over shorter stretches, it’s not unusual to see significant streaks. When the best teams win only about 60 percent of the time (not far above the 50:50 baseline toward which reversion to the mean pulls everyone), there must be a lot of randomness in baseball. That idea makes intuitive sense – the difference between ball four and strike three can be tantalizingly small (even if/when the umpire gets the call right); so can the difference between a hit and an out.
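A little binomial arithmetic makes the randomness concrete: even a genuinely great (true .600) team posts plenty of ugly stretches, and will very occasionally finish at .500 or worse:

```python
from math import comb

p, n = 0.6, 162  # a true .600 team over a full season

# Probability such a team still wins half its games or fewer (81 of 162):
p_at_most_half = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(82))
print(f"season at .500 or worse: {p_at_most_half:.3f}")  # ~0.006

# And it loses any given 5-game stretch outright about 1% of the time.
print(f"swept over 5 games: {0.4 ** 5:.3f}")  # 0.010
```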

To look at it another way, the Tigers were big favorites in last night’s first game of this year’s World Series in large measure due to the pitching match-up. Detroit’s Justin Verlander is widely regarded as the game’s top pitcher and had been dominant throughout the post-season to that point, while Barry Zito is generally thought to be one of the great busts in free agent history. Zito was even left off the World Series roster by the Giants just two years ago. But last night Zito pitched great while Verlander lasted only four difficult innings as the Giants won handily. In the words of the great Casey Stengel, “Who’d-a thunk it?”

Luck (randomness) is a huge factor in investment returns too, irrespective of manager. “Most of the annual variation in [one’s investment] performance is due to luck, not skill,” according to California Institute of Technology professor Bradford Cornell, a view shared by virtually all experts (Nobel Prize winner Daniel Kahneman talks about it in this video, for example). Even more troublesome is our perfectly human tendency to attribute poor results to bad luck and good results to skill.

As a consequence, in all probabilistic fields, the best performers dwell on process. This is true for great value investors, great poker players, and great athletes. A great hitter focuses upon a good approach, his mechanics, being selective and hitting the ball hard. If he does that – maintains a good process – he will make outs sometimes (even when he hits the ball hard) but the hits will take care of themselves.  Maintaining good process is really hard to do psychologically, emotionally, and organizationally.  But it is absolutely imperative for investment success.

In what Kahneman calls the “planning fallacy,” our ability even to forecast the future, much less control it, is far more limited than we want to believe. In his terrific book, Thinking, Fast and Slow, Kahneman describes the “planning fallacy” as a corollary to optimism bias (think Lake Wobegon – where all the children are above average) and self-serving bias (where the good stuff is my doing and the bad stuff is always someone else’s fault). Most of us overrate our own capacities and exaggerate our abilities to shape the future. The planning fallacy is our tendency to underestimate the time, costs, and risks of future actions while at the same time overestimating their benefits. It’s at least partly why we underestimate bad results. It’s why we think it won’t take us as long to accomplish something as it does. It’s why projects tend to cost more than we expect. It’s why the results we achieve aren’t as good as we expect. It’s why I take three trips to Home Depot on Saturdays. We are all susceptible – clients and financial professionals alike.

As Nate Silver’s outstanding new book emphasizes, forecasting is really hard.  There are simply too many variables and too much uncertainty (Donald Rumsfeld’s infamous – but accurate – “unknown unknowns”) for forecasting to be anything like easy.  As I keep repeating, information is cheap; meaning is expensive.  For example (per Leonard Mlodinow), we are tricked into thinking that random patterns are meaningful, we build models that are far more sensitive to our initial assumptions than we realize, we make approximations that are cruder than we realize, we focus on what is easiest to measure rather than on what is really important, we build models that rely too heavily on statistics without enough theoretical understanding, and we unconsciously let biases based on expectation or self-interest affect our analysis.

Accordingly, consider the following.

  • No less an authority than Milton Friedman called Irving Fisher “the greatest economist the United States has ever produced.”   However, in 1929 (just three days before the notorious Wall Street crash) Fisher forecast that “stocks have reached what looks like a permanently high plateau.”
  • Many of you may remember a book published in late 1999 by James Glassman and Kevin Hassett entitled Dow 36,000. Its introduction states as follows: “If you are worried about missing the market’s big move upward, you will discover that it is not too late. Stocks are now in the midst of a one-time-only rise to much higher ground – to the neighborhood of 36,000 on the Dow Jones Industrial Average.”
  • Also back in 2000, Fortune magazine picked a group of ten stocks designed to last the then-forthcoming decade and promoted them as a “buy and forget” portfolio of their best ideas. Unfortunately, anyone who purchased that portfolio would want to forget it. An investment in an equally weighted portfolio of these stocks back then would have suffered a 70% loss over the next decade (the arithmetic is sketched just below).
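As promised, the arithmetic on that Fortune portfolio, assuming simple compounding over the ten years:

```python
# A 70% total loss over a decade, annualized.
total_return = -0.70
annualized = (1 + total_return) ** (1 / 10) - 1
print(f"{annualized:.1%} per year")  # about -11.3% per year, for ten years
```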

For perhaps the most pertinent example of all, Pundit Tracker checked up on this year’s pre-season World Series predictions of 58 pundits from ESPN and Sports Illustrated.  These guys are all paid experts who pontificate for a living.  Yet even though the Tigers and the Giants were among the favorites to win their respective pennants (Vegas handicappers had the Tigers at 3-to-1 odds and the Giants at 7-to-1), not a single one of these “experts” picked the Tigers and Giants to meet in the World Series.  That’s a lot of randomness.
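Converting those Vegas lines into probabilities shows just how unsurprising the collective whiff was. The independence assumption below is a simplification:

```python
# "3-to-1" implies a 1-in-4 chance; "7-to-1" implies 1-in-8.
p_tigers = 1 / (3 + 1)   # 0.25
p_giants = 1 / (7 + 1)   # 0.125
p_both = p_tigers * p_giants
print(f"P(both pennants): {p_both:.1%}")  # ~3.1%

# If 58 pundits had picked pairs in proportion to the odds, we would still
# expect only a couple of them to land on Tigers-Giants.
print(f"expected correct picks among 58: {58 * p_both:.1f}")  # ~1.8
```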

The take-away here is pretty obvious. If your investment approach requires or even includes a relevant forecast of future events, be very careful.  And the more specific the forecast, the more careful you should be.