Because I belong to my supermarket’s “loyalty club,” I get some significant discounts on lots of stuff I buy. Four bucks a pound on steaks. Fifty cents off a two-liter bottle of Diet Coke. Seventy-five cents on a five-pound bag of sugar. All the major supermarket retailers do it. Why is that?
The answer is pretty simple. The data we provide about what we purchase, and when, by “signing in” every time we buy something allows retailers to make those lost revenues back, plus a whole lot more. Supermarkets “mine” the data we provide in order to make more money.
Data-mining is the process of analyzing data from different perspectives and distilling it into useful information. It is a large and growing field within computer science, focused on finding correlations or patterns among dozens of fields in large relational databases and then deciding whether and how to act upon those patterns. It offers great opportunities, but since we humans are very good at seeing patterns, and even at seeing patterns that aren’t really there, the process can be fraught with peril.
Supermarkets have been particularly adept at data-mining the information at their disposal. One Midwest grocery chain discovered that when men bought diapers on Thursdays and Saturdays, they also tended to buy beer. Further analysis showed that these shoppers typically did their full-blown weekly grocery shopping on Saturday but they only bought a few items on Thursdays. The retailer (pretty obviously) concluded that they purchased the beer on Thursdays to have it available for the upcoming weekend. The grocery chain could use this information in various ways to increase revenue. For example, it could move the beer displays closer to the diaper displays and it could make sure beer and diapers were sold at full price on Thursdays.
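The beer-and-diapers story is exactly the kind of association rule that market-basket analysis surfaces. Here is a minimal sketch in Python, using made-up transaction data, of the two standard measures behind such rules: support (how often items appear together) and confidence (how often the consequent shows up given the antecedent):

```python
# Hypothetical market-basket data: each transaction is the set of items bought.
transactions = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"milk", "bread"},
    {"diapers", "milk"},
    {"beer", "chips"},
    {"diapers", "beer", "milk"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Of the baskets containing the antecedent, the fraction also containing the consequent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"diapers", "beer"}, transactions))      # joint support of the rule
print(confidence({"diapers"}, {"beer"}, transactions))  # strength of diapers -> beer
```

A real retailer would mine millions of baskets with a dedicated algorithm such as Apriori, but the arithmetic behind each rule is exactly this.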
This kind of experimentation (acting on a pattern to see whether it holds over time) makes sense in the grocery context because it costs so little and can be readily abandoned when it doesn’t work or stops working. Investing is another matter. When trades don’t work out, money gets lost. There are trading costs and taxes to consider, too. That’s why investment professionals are generally careful to emphasize that correlation does not imply causation. More accurately, we should stress that correlation is not causation. Ongoing correlation gives us a hint that the correlated things are connected, but it ain’t necessarily so. The higher the number of variables, the less likely it is that the connection is a causal one, and one that will persist over time.
The Latin phrase post hoc ergo propter hoc means “after this, therefore because of this” and names the logical fallacy at work here. The Big Bang Theory’s Sheldon Cooper (played by Jim Parsons) memorably illustrates it.
This chart demonstrates it as well as any.
To bring the issue home in our information-rich age (and as I have said before): information is cheap, while meaning is expensive. Some connections are pretty obvious. Cake-buying will likely correlate with ice cream-buying. A strong and growing economy will correlate with a healthy equity market. Some connections are much less intuitive, but no less predictive — like the diapers-and-beer example above. But often the supposed connections are simply random noise. Many a scam artist can backtest some data and come up with a “system” that will make a lot of money (sadly, for the scammer and not for you). The bestseller The Bible Code provides a well-known example of a data-mining failure that can be readily debunked (more here and an interesting PBS interview here).
The risks in making too much of the patterns we see are exacerbated by our behavioral biases. We love stories. They help us to explain, understand and interpret the world around us. They also give us a frame of reference we can use to remember the concepts we take them to represent. Perhaps most significantly, we inherently prefer narrative to data — often to the detriment of our understanding because, unfortunately, our stories are also often steeped in error. If the seeming correlation is supported by a good story, especially if the story supports an already favored narrative (perhaps “sell in May and go away”), so much the better.
Nassim Taleb calls our tendency to create false and/or unsupported stories in an effort to legitimize our pre-conceived notions the “narrative fallacy.” That fallacy threatens our analysis and judgment constantly. Therefore, while we may enjoy the stories and even be aided by them, we should put our faith in the actual data, especially because story and data are so often in conflict. In addition, we need to keep checking and re-checking the data. Keeping one’s analysis and interpretation of the data reasonably objective – since analysis and interpretation are required for data to be actionable – is really, really hard even in the best of circumstances. We want to see agency in the world. We want to understand everything in terms of intent, but sometimes (many times) the “cause” is pure noise and worthless as a teacher.
It is true, as Taleb concedes, that “chance favors the prepared.” However, as Taleb also points out, our successes have as much to do with randomness and noise as with talent, planning and execution. That’s a major reason why we are such bad forecasters. As a consequence, we should be much more focused on failure than on success, consistent with the philosophy of science developed by Karl Popper. If we are always on the look-out for ways we might be wrong, and consistently try to show that we are wrong, we will be much better investors.
Data-mining sometimes does provide us with useful information for trading or investing purposes. High-frequency trading and momentum trading, for example, depend upon complex algorithms created by mining mounds and mounds of data, and these approaches can be and often are highly successful. However, data-mining in the investment world — leading to claims of fancy-sounding “proprietary models” and “black boxes” — can also be a highly questionable endeavor. The markets produce so much data that coincidence alone accounts for many systems that backtest successfully (think domestic oil production and quality rock music). If you plow through enough data long enough, seeming patterns will inevitably emerge. Unfortunately, backtesting successfully is not the same as forecasting successfully. Remember, information is cheap while meaning is expensive.
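The backtest trap is easy to demonstrate. The sketch below (plain standard-library Python, with invented “indicator” and “return” series that are pure noise) mines a thousand random indicators against random returns, keeps the one with the best in-sample fit, and then checks it out of sample:

```python
import random

random.seed(42)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# "Market returns": pure noise, split into a backtest period and a live period.
backtest = [random.gauss(0, 1) for _ in range(100)]
live = [random.gauss(0, 1) for _ in range(100)]

# Mine 1,000 random indicators and keep whichever one fit the backtest best.
best = max(
    ([random.gauss(0, 1) for _ in range(200)] for _ in range(1000)),
    key=lambda sig: abs(corr(sig[:100], backtest)),
)

print(f"in-sample |corr|:     {abs(corr(best[:100], backtest)):.2f}")
print(f"out-of-sample |corr|: {abs(corr(best[100:], live)):.2f}")
```

The winning indicator typically shows an in-sample correlation on the order of 0.3, which looks impressive for pure noise, while the out-of-sample figure collapses back toward zero. That is the whole scam, in fifteen lines.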
Correlation isn’t causation. It’s a hint or a possibility, but no sure thing. This entire process is so difficult because we have so much trouble isolating causation. It’s easy to see that bad traffic can cause one’s commute to be longer than normal, but ascertaining causation where there are huge numbers of variables can be astonishingly difficult. Finding a causal chain in the hard sciences can be made easier by creating experiments that limit the variables or even eliminate all other possible variables. That’s simply not possible in the markets.
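A toy simulation makes the confounding problem concrete. In the sketch below (the variable names are invented purely for illustration), a hidden common cause drives two series that never touch each other, yet they correlate strongly; subtract out the common cause and the “relationship” disappears:

```python
import random

random.seed(0)

# A hidden common cause (say, hot weather) drives both series;
# neither series causes the other.
weather = [random.gauss(0, 1) for _ in range(10_000)]
ice_cream_sales = [w + random.gauss(0, 0.5) for w in weather]
sunburns = [w + random.gauss(0, 0.5) for w in weather]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"corr(ice cream, sunburns): {corr(ice_cream_sales, sunburns):.2f}")

# Holding the confounder fixed (subtracting weather from each series)
# leaves only the independent noise, and the correlation vanishes.
residual_ice = [i - w for i, w in zip(ice_cream_sales, weather)]
residual_burn = [s - w for s, w in zip(sunburns, weather)]
print(f"after removing weather:    {corr(residual_ice, residual_burn):.2f}")
```

In the hard sciences you can run the controlled experiment that holds the weather fixed. In markets, where the confounders are legion and mostly unobservable, you cannot, which is precisely the author’s point.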
Investors are always on the look-out for patterns that will persist and offer opportunities. Most never work. Some work for a time but are copied so much that the advantage disappears. A few (such as value, size and momentum) have worked and continue to work.
Keep looking for the new new thing. Just don’t expect to find it. And even if/when you think you have found something useful, keep trying to falsify it for a while to be really sure – your bottom line will appreciate it.