Recently I wrote a piece on financial services lies here at Above the Market. Lie #10 was “I don’t need help.” Here’s what I wrote about it.
American virologist David Baltimore, who won the Nobel Prize in Physiology or Medicine in 1975 for his work on the genetic mechanisms of viruses, once told me that over the years (and especially while he was president of Caltech) he received many manuscripts claiming to have solved some great scientific problem. Most prominent scientists have drawers full of similar submissions, almost always from people who work alone and outside of the scientific community. Unfortunately, none of these offerings has done anything remotely close to what was claimed, and Dr. Baltimore offered some fascinating insight into why he thinks that’s so. At its best, he noted, good science (like good investing and good thinking) is a collaborative, community effort. On the other hand, “crackpots work alone.” Good collaboration among professionals and with good professionals by consumers improves investment outcomes, usually by a lot. A good professional can offer help with goals and plans, an Investment Policy Statement, asset allocation, risk management, behavioral management, protection from fraud (especially for seniors), and tax, estate and financial planning. We all need more help than we think.
A commenter calling himself (herself?) “pott” responded to that post as follows.
“Crackpots work alone” — a crackpot. https://www.quantamagazine.org/20150402-prime-proof-zhang-interview/ Damn right, mister.
Prior to 2013, Yitang Zhang (the subject of the article linked by pott) was an obscure mathematician. Then, following a decade of arduous work in isolation, he made a major breakthrough toward solving the Twin Prime Conjecture, proving that there are infinitely many pairs of primes separated by no more than 70 million. The conjecture itself had resisted proof for more than 150 years and was widely thought too difficult to crack. It’s tempting simply to note that Nobel laureate David Baltimore is hardly a crackpot (based upon his work and his intellect, obviously, but also because his statement was generally true and well supported by a wealth of personal experience) and that Zhang is simply an exception that proves the rule. The existence (and investment performance!) of the likes of Seth Klarman, Warren Buffett and Howard Marks hardly makes anyone a crackpot who asserts (quite rightly) that beating the market is really, really hard. But making that point and quickly moving on would miss something important. Besides, that wouldn’t require a separate post. So here’s the question at issue.
Is Zhang just an amazing outlier or is math different somehow and thus more conducive to lone geniuses than other areas of inquiry?
In The Psychology of Invention in the Mathematical Field, a mathematician describes how these lone visionaries can solve great problems. “It often seems to me, especially when I am alone, that I find myself in another world. Ideas of numbers seem to live. Suddenly, questions of any kind rise before my eyes with their answers.”
Zhang had a similar epiphany and then carefully completed his proof before painstakingly checking and rechecking his work. He then submitted his paper to Annals of Mathematics, the field’s most prestigious journal. Note what The New Yorker had to say specifically about crackpots working alone in math.
In the Annals archives are unpublished papers claiming to have solved practically every math problem that anyone has ever thought of, and others that don’t really exist. Some are from people who “know a lot of math, then they go insane,” a mathematician told me. Such people often claim that everyone else who has attacked the problem is wrong. Or they announce that they have solved several problems at once, or “they say they have solved a famous problem along with some unified-field theory in physics,” the mathematician said. Journals such as Annals are always skeptical of work from someone they have never heard of.
That description sounds a lot like what Dr. Baltimore said.
But the Zhang problem remains. Annals receives a huge number of papers seeking publication. The review process is typically long and rarely results in publication. But after careful examination via peer review, Zhang’s paper was accepted for publication in an astonishing three weeks. It was that good and that groundbreaking. Henryk Iwaniec of Rutgers, who has also done important work in this area, explains: “Zhang somehow completely understood the situation, even though he was working alone. That’s how he surprised. He just amazingly pushed further some of the arguments in these papers.”
In mathematics, the lone genius is rare, to be sure, but apparently less rare than in other areas. It has some significant history of seemingly intractable problems being solved by lone geniuses working in isolation (e.g., Mochizuki on the ABC Conjecture – perhaps; Wiles and Fermat’s Last Theorem – sort of; Perelman and the Poincaré Conjecture — strangely; or even Newton developing the calculus) as well as many examples of collaboration1 reaching important results. Collaboration is the norm, but important counterexamples (such as Zhang) remain. Explaining why that might be – even tentatively and speculatively – is the issue at hand.
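For concreteness, the problem Zhang attacked is at least easy to state: the Twin Prime Conjecture says there are infinitely many prime pairs that differ by 2, and Zhang proved that some gap of at most 70 million occurs infinitely often. A minimal sketch (mine, not from the article, and nothing like Zhang’s actual methods) that enumerates small twin prime pairs with a basic sieve:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Cross off every multiple of p starting at p*p
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]


def twin_primes(n):
    """Pairs of primes (p, p + 2) with p + 2 <= n."""
    ps = set(primes_up_to(n))
    return [(p, p + 2) for p in sorted(ps) if p + 2 in ps]


print(twin_primes(50))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
```

The conjecture asserts that this list never stops growing as `n` increases; Zhang showed the analogous list with gap 70 million never stops growing, which was the first finite bound of any kind.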
Mathematics is the language we use to communicate with the stars. We have the ability to recognize patterns in the form of simple arithmetic. We also have the ability to construct ever more complex mathematical systems that illuminate the cosmos but which are remote to everyone except the most mathematically astute. It is complex and elegant, even beautiful, but also demonstrably true, even when the demonstration isn’t readily accessible. Thus, as explained by Moon Duchin of Tufts, mainstream mathematics moves forward communally but via objective measurements: “Proofs are right or wrong. The community passes verdict.” The idea is that the standards are objective but it takes an amazing amount of knowledge and experience to ascertain if the objective standards have been met. That objectivity – which is demonstrable by deductive proof – distinguishes math (and logic) from most subjects.
Via deductive reasoning, we derive conclusions from general principles and then apply them to particular cases. Logic and mathematics are the quintessential examples, and thus they can offer definitive solutions. If 2+2=4 is true, it is always true (assuming base 10 notation). Unfortunately, deductive reasoning only works within closed systems.
Inductive reasoning is the opposite: it uses particular cases to build more general conclusions. Hume famously attacked inductive “proofs,” arguing that there is no way to establish that what has been experienced to any point is true universally. Just because every swan I have seen is white, even if I’ve seen lots and lots of swans, that doesn’t prove that all swans are white (and Australia does in fact have black swans). Inductive logic is far from guaranteed (see here for a particularly interesting example); this is the famous problem of induction. Lots of positive occurrences make the outcome more likely (or seem more likely), but a single negative occurrence undercuts the entire hypothesis. This problem persists across the real world. Thus science (as with economics, the markets and most knowledge in general) advances only and always tentatively.
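The asymmetry is easy to make concrete. In this toy sketch (my illustration, not from the post), thousands of white swans only ever support the hypothesis “all swans are white,” while a single black swan refutes it outright:

```python
import random

random.seed(0)

# 10,000 swans, overwhelmingly white; one black swan hides in the data.
swans = ["white"] * 9999 + ["black"]
random.shuffle(swans)

hypothesis_alive = True  # "all swans are white"
confirmations = 0
for swan in swans:
    if swan == "white":
        confirmations += 1        # each white swan merely *supports* the claim
    else:
        hypothesis_alive = False  # one black swan conclusively refutes it
        break

print(confirmations, hypothesis_alive)
```

No matter how large `confirmations` grows before the loop ends, the hypothesis is never proven; one counterexample settles the question permanently, and in the other direction.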
Most fundamentally, closed systems subject to deductive proof advance directly via verification, while every other form of learning advances indirectly, via falsification. Accordingly, when Galileo decided actually to check whether Aristotle’s (highly intuitive) claim that heavier objects fall faster than lighter objects was true, it was revolutionary – both because he checked (the phrase “experimental science” now sounds redundant, due in no small measure to him) and because of what he discovered (that Aristotle was wrong). Significantly, Galileo’s measurements of the rates at which objects fall were direct and checkable (if prone to error) and thus subject to definitive conclusion, while his theories about those measurements – the nature of gravity – were inductive and thus necessarily tentative, subject to amendment or even rejection on the basis of further experiments and testing.
In just this way the French mathematician Urbain Le Verrier “discovered” the planet Neptune – which to that point had never been identified – using math alone. He recognized that the orbit of Uranus was “off” and deduced that another planet of specific size and mass had to be in solar orbit beyond Uranus to perturb its orbit in that way. Le Verrier took his calculations to the astronomer Johann Gottfried Galle at the Berlin Observatory; when Galle looked where Le Verrier told him to look, he found Neptune within one degree of where Le Verrier said it should be.
Because we crave certainty, we want deductive proof, of course, but have to settle for induction most of the time. Science never fully proves anything in the real world because the real world isn’t a closed system. Science analyzes the available data and, when the force of the data is strong enough, it makes tentative (if perhaps powerful) conclusions. Many of these tentative conclusions seem like facts because they are so well supported but they remain necessarily tentative.2 The great value of data is not so much that it points toward the correct conclusion (even though it does), but that it allows us the ability to show that some things are conclusively wrong.
In other words, confirming evidence adds to the inductive case but doesn’t prove anything conclusively. Correlation is not causation and all that. Thus disconfirming evidence is immensely (and far more) valuable. It allows us conclusively to eliminate some ideas, approaches or hypotheses. That said, we don’t like disconfirming evidence and we tend to neglect the limits of induction. Few papers get published establishing that something doesn’t work. Instead, we tend to spend the bulk of our time looking (and data-mining) for an approach that seems to work or even for evidence we can use to support our preconceived notions. We should be spending much more of our time focused upon a search for disconfirming evidence for what we think (there are excellent behavioral reasons for doing so too).
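The data-mining worry above is easy to demonstrate with a toy simulation (my illustration; all names and numbers here are assumptions for the sketch). Generate many “strategies” that are pure coin flips, then pick the best-looking one in-sample; its track record is a false positive manufactured by the search itself:

```python
import random
import statistics

random.seed(42)


def random_returns(n_periods):
    # Pure noise: each period's return is +/-1% with equal probability.
    return [random.choice([-0.01, 0.01]) for _ in range(n_periods)]


# 1,000 coin-flipping "strategies," each with a 100-period track record.
strategies = [random_returns(100) for _ in range(1000)]

# In-sample "winner": the luckiest coin-flipper of the 1,000.
best = max(strategies, key=statistics.mean)
print(f"best in-sample mean return: {statistics.mean(best):+.4f}")

# Out of sample, the same "strategy" is still just a coin flip.
out_of_sample = random_returns(100)
print(f"out-of-sample mean return:  {statistics.mean(out_of_sample):+.4f}")
```

With 1,000 candidates, the in-sample winner is virtually guaranteed to show an impressive positive mean purely by chance, which is exactly why a search for confirming evidence proves so little.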
In his important (to philosophers, anyway) Word and Object, Quine made Neurath’s boat analogy famous. It compares the holistic nature of language – and consequently of scientific inquiry – to the reconstruction of a boat that is already at sea.
We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.
It describes the inductive process really well. In the real world, including the investing world, we need to keep making adjustments on the fly, based upon new data and other evidence. We need to rebuild the boat while at sea.
The world we live in is profoundly complex and is much more difficult for us to navigate than we usually think or assume. According to Nobel laureate Daniel Kahneman, “We systematically underestimate the amount of uncertainty to which we’re exposed, and we are wired to underestimate the amount of uncertainty to which we are exposed.” Accordingly, “we create an illusion of the world that is much more orderly than it actually is.” On the other hand, math is orderly, allowing for definitive conclusions about the math (but not about our interpretations of what the math means about the real world). Thus information is cheap while meaning is expensive.
Our ability to forecast the future, much less control the future, is extremely limited and is far more limited than we want to believe. That’s why the planning fallacy is such a constant and monumental problem. We simply misapprehend (or ignore) the data far too often. Instead, we concoct stories — often wonderful stories — to provide an interpretive framework for our forecasts, expectations and decisions. That framework is necessary for us to “sell” our stories and ourselves.
As ever, we are in a season of competing stories. Bulls have their stories. So do bears. Traders have very differently focused stories. So do long-term investors. Politicians have stories. Even Fed chairs have stories. In the investment world, these stories will come in the form of client letters (often designed to justify performance that wasn’t quite up-to-snuff), projections, forecasts, “best of” lists, Twitter feeds, sound bites, CNBC appearances, podcasts, blog posts, reports and expectations. The data (such as it is) will be handled with great care — comparisons to measures that can be beaten (or nearly so), for example — with thoughtfully wrought stories as explanation (such as “we were right, but too early” or “it was the most likely scenario but…”).
Our success stories — when we’re lucky enough to have them — have lives of their own, too. We love to say things like “as I expected/predicted/forecast…”. Of course, being right isn’t the same as being right for the right reasons — even if and when we can discern causality with some degree of certainty. Because the markets offer so many false positives (seeming verification in an inductive world), we’re usually better off learning from our errors and failures than our successes. Falsification trumps (seeming) verification every time in the real world.
Kahneman again: “we can expect people to be way overconfident, because they have that ability to tell good stories, and because the quality of the stories is what determines their confidence. The extent of that overconfidence is actually quite remarkable.” And when the performance numbers suggest success — whether real or not — we’re going to proclaim it confidently and from the rooftops. Past performance may not be indicative of future results but when past performance is good we’ll always lead with it. It’s very often better to be lucky than good.
Those of us struggling to be honest with ourselves, others and the investment process will be left trying to muddle through, building portfolios and managing money like Neurath’s boat — adapting on the go while trying to keep the whole thing afloat and moving in the right direction, keeping our promises and expectations grounded in the limiting reality of the data. It isn’t always a good recipe for sales success. But it does help me sleep at night.
We would all like progress and thus real success to come more quickly, more cheaply and more comprehensively than reality allows. And if success comes, we desperately tell ourselves it’s because we’re really good and not because we’re really lucky (as opposed to when “challenges” arise — that’s bad luck, as the self-serving bias predicts).
All of which leads me back to Dr. Zhang and whether math’s susceptibility to deductive proof makes it different and amenable to lone geniuses the way inductive reasoning is not. Since math potentially offers definitive right answers, I think (but can’t prove) that a lone genius with sufficient skill and relentless discipline can get further than a lone genius working in areas where more interpretation is required. Information is (comparatively) cheap while meaning is expensive. Even extreme, crazy-hard math is on the information scale rather than the meaning scale. Great interpretation is all meaning and thus involves more sculpting than tracing. It requires great skill, imagination and even a bit of whimsy as well as collaboration as to whether the various interpretive choices are the best (not to say right) ones. So to commenter “pott” I emphasize that Zhang is the exception that proves the rule but he also works in an area that allows for definitive conclusion and is thus much more conducive to breakthroughs from a lone genius. And Zhang is surely a genius.
1 An interesting basis for the collaborative “Polymath Project” is that it goes against the grain in terms of making and admitting errors. As reported by Wired magazine, “One of the bedrock principles of the Polymath approach is that participants should throw any idea out to the crowd immediately, without stopping to ponder whether it is any good. ‘There’s an explicit license to be wrong in public,’ [Scott] Morrison [of the Australian National University] said. ‘It goes against a lot of people’s instincts, but it makes the project much more efficient when we’re more relaxed about saying stupid things.’” It’s consistent with Kahneman’s idea that the best way to protect against error is to call in your smartest and least empathetic friends to tell you how and where you’re wrong.
2 Advocates who are either wildly ignorant or willfully deceitful will often take advantage of the inherent tentativeness of science to try to minimize its power. Thus exceptionally well-supported conclusions are deemed “just a theory” or “not at all certain.” That said, the list of alleged scientific “facts” that turned out not to be is a distressingly long one.