All nine of this year’s American Nobel Prize laureates shared a stage this week. Not surprisingly, economists Robert Shiller and Eugene Fama sniped at each other, if good-naturedly. But it was the winners from the harder sciences who got the best digs in at the economists. Martin Karplus, a chemist from Harvard, queried, “What understanding of the stock market do you really have?” He even went so far as to note that economics – “if one wants to call it a science” – seemed completely unable to explain market movement. There were others, but you get the point.
The obvious takeaway is a common theme — that economics is conflicted, flawed, divided, wildly inconsistent, worthless — a point that has been made by Nassim Taleb, Jeremy Grantham and many others. The idea is that economics desperately wants to be a science, with “physics envy,” but simply isn’t up to the task. And it’s hard to contest the point.
My disagreement is not so much with the main point (even though I think academic economics has a good deal to offer, with Shiller’s work being Exhibit A in my evidence cache), but with its corollary and the casual certainty with which that corollary is expressed. “Hard” scientists increasingly express a belief that their domain is the only bastion of reality available. They see a great divide between things we can know (science) and everything else, which is opinion at best and largely worthless. That the expression of it is so often humorless and irony-challenged makes it all the worse.
Instead, I see a continuum rather than a gulf that cannot be bridged. Where they see scientific certainty on one side and mere unfounded opinion on the other, I see differing levels of confidence and questions about the nature and quality of the evidence presented.
To be clear, I am not arguing some post-modern notion that objective truth doesn’t exist, that some scientific theories are not so well established that they may as well be facts (like the theories of gravity and evolution, for example), that all ideas are equally plausible or equally well supported evidentially, or that some ideas aren’t just plain nuts. But science isn’t certain. Since it is inductive rather than deductive, every conclusion is tentative and subject to change if and when better evidence is produced. Indeed, many “hard” scientific constructs are or have been matters of tremendous dispute. Classical physics gave way to quantum mechanics. Is light a particle, a wave, or both? Punctuated equilibrium? String theory?
Anyone who has managed money for more than a market cycle recognizes that investing successfully is really hard. Decision-making amidst uncertainty and conflicting evidence seems impossible at times. But unlike in the hard sciences, where judgment can be suspended pending better evidence, decisions must be made. We have to make the best decisions we can based upon the best information we have. In our world, even doing nothing is a decision with consequences.
But here’s the point. We are way too sure of ourselves and way too full of ourselves. All of us. Scientists included. Add together confirmation bias (we tend to see what we want to see), motivated reasoning (our tendency to scrutinize ideas more critically when we disagree with them than when we agree; we are also much more likely to recall supporting rather than opposing evidence), optimism bias (we tend to expect things to turn out far better than they typically do), self-serving bias (where the good stuff is my doing and the bad stuff is always someone else’s fault), the planning fallacy (our tendency to overrate our ability to shape the future), and bias blindness (we’re sure that these various foibles don’t apply to us), and the result is excess certainty and hubris.
As a consequence, we tend to think that our own refuse smells delightful and that those who disagree with us aren’t merely wrong, but are rather some combination of delusional, stupid or evil. Even though our ideas and beliefs aren’t as good as we think they are, our opponents aren’t merely wrong, they are intentionally prevaricating. Thus discussions turn into nasty arguments, disagreements turn into ideological scrums and serious disputes become white-hot with rage, invective and ad hominem.
This is not simply a paean to civility, a plea that we all just get along (even though more civility would be a good thing). Despite our excessive certainty, few arguments are slam dunks. But that doesn’t mean that most arguments are in apparent equipoise either. Instead of spending so much time screaming about how bad the other side’s position is or how mad the other side makes us, we’d do well to consider the opposing viewpoint carefully and show why it is wrong. Of course, that would require that we thoughtfully engage with opposing arguments. Add some humility – the idea that we might, gasp, perhaps be wrong – and there might be some basis for hope.
My sense is that the key element here is that most partisans see “their side” as not just true, but obviously true. It’s a by-product of the bias blind spot. We tend to see bias in others but not in ourselves. Therefore, our strongly held positions aren’t really debatable — they’re seen as objectively and obviously true. After all, if we didn’t think our positions were true, we wouldn’t hold them. And (our thinking goes) since they are objectively true, anyone who makes the effort to try should be able to ascertain that truth.
Our opponents are thus without excuse – they’re stupid, delusional or evil. If they disagree with me, they are denying reality. As usual, information is cheap while meaning is both expensive and elusive.
On November 23, 1951, Dartmouth played Princeton in a big football game. It was heatedly contested. Tempers flared and accusations flew in both directions. In the game’s aftermath, a famous 1954 study examined “selective perception” with respect to the game. Quite clearly, each side was “‘seeing’ an entirely different version of the game.” Each side accused the other of both dirty play and irrational response. Thus “it is inaccurate and misleading to say that different people have different ‘attitudes’ concerning the same ‘thing.’ For the ‘thing’ simply is not the same for different people whether the ‘thing’ is a football game, a presidential candidate, Communism, or spinach.”
Accordingly, our Nobel-winning hard scientists had good reason to criticize the state of economics. But they were casting stones as sinners themselves.
We typically want to be objective. We try to be objective. We believe we are objective, even though we readily acknowledge that “others” are not. But we’re too sure of our objectivity and too sure of our conclusions.
By half. At least.