At the most basic level, complexity risk encompasses situations such as the Lehman Brothers collapse, where the management of a major investment bank did not fully understand the risks it was taking or the consequences of those risks, and the Madoff scandal, where private investors did not understand their investments. Situations such as these are a major problem in and for our industry. But complexity risk today goes much deeper still.
Complexity risk has been brought to the fore most recently on account of several well-publicized market blow-ups relating to high-frequency trading. In late September, the Senate Banking Committee held a hearing on the issue and the Securities and Exchange Commission got into the act with a recent panel discussion. Even Wall Street veterans have begun to question whether a market trading at warp speed is a good thing. In this context, we’re not merely talking about a distinction between trading and investing. Instead, we’re rewarding trading to the exclusion of investing. As Roger Lowenstein argues, “[i]f market signals are based on algorithms that become outmoded in a nanosecond, we end up with empty factories and useless investment.”
In exchange for providing the markets with more liquidity than they need, high-frequency trading has created a complexity risk problem of potentially enormous scale by subjecting markets to a much increased likelihood of destabilizing crashes. Moreover, prices may come to reflect (quite literally) the value judgments not of investors, but of high-speed algorithms. As Lowenstein points out, publicly traded companies collectively lost nearly $1 trillion of market value — albeit briefly — in the so-called “flash crash” of May 2010, which the SEC said was triggered by a single firm using algorithms to rapidly sell 75,000 futures contracts.
Lawmakers in several countries are proposing to address this problem by imposing new restrictions on high-speed traders. They are also considering modularity-enhancing options like the creation of shutdown switches that might be able to cordon off damage in a crisis. Lowenstein further argues that the better way to discourage this short-term market myopia is to take a page from anti-tobacco efforts: let high taxes discourage the undesirable behavior. But the overall risks of complexity are broader and deeper still, as the 2008-2009 financial crisis aptly demonstrated.
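To make the modularity idea concrete, here is a minimal sketch of such a shutdown switch, assuming a simple price-move rule. Everything in it (the 5% threshold, the five-minute window, the class and method names) is hypothetical and chosen purely for illustration; the point is that the halt is local to a single instrument, cordoning off damage before it can spread.

```python
from collections import deque

class ShutdownSwitch:
    """Hypothetical per-instrument circuit breaker.

    Halts trading in one instrument when its price moves more than
    max_move within a look-back window. The halt is local, so trouble
    in one instrument is cordoned off rather than allowed to cascade.
    """

    def __init__(self, max_move=0.05, window=300):
        self.max_move = max_move    # e.g., a 5% move triggers a halt
        self.window = window        # look-back window in seconds
        self.ticks = deque()        # (timestamp, price) pairs
        self.halted = False

    def on_tick(self, timestamp, price):
        if self.halted:
            return True             # already halted; reject further trading
        self.ticks.append((timestamp, price))
        # Drop ticks that have aged out of the look-back window.
        while self.ticks and timestamp - self.ticks[0][0] > self.window:
            self.ticks.popleft()
        reference = self.ticks[0][1]
        if abs(price - reference) / reference > self.max_move:
            self.halted = True      # cordon off this instrument only
        return self.halted

switch = ShutdownSwitch()
switch.on_tick(0, 100.00)    # False: normal trading
switch.on_tick(60, 93.00)    # True: a 7% drop in a minute trips the switch
```

Real single-stock circuit breakers, such as the limit up-limit down rules adopted in the wake of the flash crash, are far more elaborate, but the modular principle is the same: contain the failure before it propagates.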
As Caltech system scientist John C. Doyle has established, a wide variety of systems, both natural and man-made, are robust in the face of large changes in environment and system components, yet remain potentially fragile to even small perturbations. Such “robust yet fragile” networks are ubiquitous in our world. They are “robust” in that small shocks do not typically spread very far through the system. Yet they are “fragile” in that the wrong tiny adverse event can nonetheless bring down the entire system.
Such systems are efficiently fine-tuned and thus appear almost boringly robust despite the potential for major perturbations and fluctuations. As a consequence, systemic complexity and fragility are largely hidden, often revealed only by rare catastrophic failures. Modern institutions and technologies facilitate robustness and efficiency, but they also enable catastrophes on a scale unimaginable without them — from network and market crashes to war, epidemics, and global warming.
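A toy simulation can make the robust-yet-fragile signature concrete. The stylized hub-and-spoke network below is entirely hypothetical (real financial networks are vastly messier), but it shows the pattern: the system shrugs off the failure of any peripheral node yet collapses completely when its single, highly connected hub fails.

```python
def build_hub_network(n_spokes=100):
    """A star network: one central hub connected to every spoke."""
    network = {"hub": [f"s{i}" for i in range(n_spokes)]}
    for i in range(n_spokes):
        network[f"s{i}"] = ["hub"]
    return network

def cascade_size(network, first_failure):
    """Count how many nodes fail after an initial failure.

    Simplistic propagation rule: every spoke depends on the hub, so
    the hub's failure takes all spokes down with it, while the failure
    of any single spoke is absorbed without spreading.
    """
    failed = {first_failure}
    if first_failure == "hub":
        failed.update(network["hub"])
    return len(failed)

net = build_hub_network()
print(cascade_size(net, "s17"))   # 1   -> a small shock stays contained
print(cascade_size(net, "hub"))   # 101 -> a tiny event in the wrong place
```

The network is robust to the overwhelming majority of possible shocks and fragile to exactly one, and nothing in its day-to-day behavior advertises which one.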
Chaos, criticality, and related ideas from statistical physics have inspired a completely different view of complexity, one in which typically unpredictable and fragile behaviors “emerge” from simple, usually random interconnections among homogeneous components. Since complexity science demonstrates that financial markets are unpredictable and fragile, the risks to investors and to the markets as a whole are both obvious and enormous.
While complexity brings great benefits, empowering globalization, interconnectedness and technological advance, it also carries unforeseen and sometimes unforeseeable yet potentially catastrophic consequences. Higher and higher levels of complexity mean that we live in an age of inherent and, according to the science, increasing surprise and disruption. The rare (but growing less rare) high-impact, low-frequency disruptions are simply part of systems that are increasingly fragile and susceptible to sudden, spectacular collapse. John Casti’s X-Events even argues that today’s highly advanced and overly complex systems and societies have grown highly vulnerable to extreme events that may ultimately result in the collapse of our civilization. Examples could include a global internet or technological collapse, a transnational economic meltdown or even robot uprisings.
We are thus almost literally (modifying Andrew Zolli’s telling phrase slightly) tap dancing in a minefield — we don’t quite know when our next step will set off a monumental explosion. One’s goal, therefore, must be first to survive and then to thrive in that sort of disruptive and dangerous environment — in other words, to be resilient.
Unfortunately, while today’s complex systems are generally quite good at dealing with anticipated forms of uncertainty and disruption (Donald Rumsfeld’s “known unknowns”), an aptitude that makes them highly efficient, it is the unanticipated “unknown unknowns” that are so vexing and problematic. Real crises happen when and where we least expect them and strike at the heart of a system. Thus the Lehman Brothers collapse wasn’t a problem of being too big to fail, but rather a function of being too central to fail without enormous cascading impacts. Its risk models were wildly inadequate yet considered utterly reliable — a classic unknown unknown.
As the complexity of a system grows, both the sources and the severity of possible disruptions increase. Resilient systems are not perfect or even perfectly efficient. Indeed, regular modest failures are essential to many forms of resilience (adjusting and adapting are crucial to success). In this context, then, efficiency can be a net negative and redundancy a major positive, as the sketch below suggests. Hedges matter. Learning from mistakes is vital.
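A back-of-the-envelope reliability calculation shows why. The failure rates below are made up for illustration, but the pattern is general: a lean system that needs every component working is markedly more fragile than a “wasteful” one that carries spares.

```python
# Hypothetical numbers: each component works 99% of the time, independently.
p_component = 0.99

# Lean, efficient system: 10 components in series; all must work.
p_lean = p_component ** 10
print(f"efficient, no spares: {p_lean:.3f}")        # ~0.904

# Redundant system: each of the 10 slots carries a backup; a slot
# fails only if both its primary and its backup fail.
p_slot = 1 - (1 - p_component) ** 2                 # 0.9999
p_redundant = p_slot ** 10
print(f"redundant, one spare: {p_redundant:.4f}")   # ~0.9990
```

The redundant system cuts the failure rate by roughly two orders of magnitude, at the cost of doubling the parts.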
Meanwhile, the size required for potential ‘triggering events’ decreases in an increasingly complex world. Thus it may take only a tiny event, at the wrong place or at the wrong time, to spark a calamity. While the chances of any one of these possibilities actually happening are individually remote, our general susceptibility to that type of catastrophe is surprisingly real.
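The arithmetic behind that susceptibility is straightforward. Suppose, purely for illustration, that each of 1,000 independent potential triggers has only a one-in-a-thousand chance of firing in a given year:

```python
# Illustrative assumptions: 1,000 independent potential triggers,
# each with a 0.1% annual probability of occurring.
p_single = 0.001
n_triggers = 1_000

# P(at least one fires) = 1 - P(none fire)
p_any = 1 - (1 - p_single) ** n_triggers
print(f"{p_any:.1%}")   # ~63.2%: each trigger remote, some trigger likely
```

Each individual risk stays remote; the aggregate risk does not.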
Thus those who would attempt to manage risk in the aggregate and complexity risk specifically must take these fundamental features of network systems into account. Sadly, this field is much more descriptive than prescriptive. Zolli again: “Resilience is often found in having just the ‘right’ amounts of [certain] properties – being connected, but not too connected; being diverse, but not too diverse; being able to couple with other systems when it helps, but also being able to decouple from them when it hurts. The picture that emerges is one of strategic looseness, an intentional stance of both fluidity (of strategies, structures, and actions) and fixedness (of values and purpose).”
This “Goldilocks” approach to complexity — everything needs to be at some relatively undefined “just right” level — makes complexity risk extremely difficult to manage. There is simply no definitive blueprint for managing such risks. But some patterns are helpful. For example, diversity, modularity (a problem with one component or the elimination of one outlet won’t scuttle the entire system), proximity, redundancy, flexibility and adaptability are all extremely valuable. Within interpersonal systems, diversity, flexibility and mutual trust are vital to resilience and success. So are decentralization and shared control.
Fortunately, diversification is already a well-established virtue in our world, even though its value is often honored only in the breach. In this context, resilient diversity means fluidity of structures, strategies and approaches, but it does not extend to goals, values and core methodologies. An effective risk mitigation and management approach is thus much like playing jazz: one must be able to improvise often and well, but within an established and consistent structure.
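The familiar arithmetic of diversification makes the point. Under the idealized (and admittedly unrealistic) assumption of uncorrelated assets with equal volatility, an equally weighted portfolio’s volatility falls with the square root of the number of holdings:

```python
import math

# Idealized assumptions: uncorrelated assets, each with 20% volatility,
# held in equal weights. Real assets are correlated, so actual benefits
# are smaller, but the direction of the effect is the same.
asset_vol = 0.20
for n in (1, 4, 16, 64):
    portfolio_vol = asset_vol / math.sqrt(n)    # sigma / sqrt(N)
    print(f"{n:>2} assets: {portfolio_vol:.1%} portfolio volatility")
```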
Ultimately, reckoning with risk requires a firm grip on reality – both our inherent optimism and our inherent loss aversion must be tempered. When things are not going well, we are not in a position to deal with the problems at hand until the truth, no matter how ugly, is out on the table (via transparency and trust). In the event of an unforeseen meltdown in a position, portfolio, market, or even a system-wide collapse, how prepared are you? And no matter how unprepared you turn out to be (remember those pesky unknown unknowns), have you thought through how you can get back on track after the calamity?
If you haven’t considered these questions carefully and systematically, I reckon that the chances of your getting into a whole heap of trouble, at some point at least, are surprisingly high.
______________
I dealt with these issues peripherally here. My entire series on risk is available at these links:
- Reckoning with Risk (1) begins to look at how to deal with risk.
- Reckoning with Risk (2) explains and categorizes different elements and types of risk.
- Reckoning with Risk (3) shows our failings at dealing with low-probability, high-impact events.
- Reckoning with Risk (4) looks at what the Yale Endowment experience can teach us about risk.
- Reckoning with Risk (5) explains that professional managers face different risks than those for whom they manage money and that those differences matter.
- Reckoning with Risk (6): 9.11 Edition takes a look at “black swan” risks.
- Reckoning with Risk (7): Widening Your Lens suggests that dealing with risk requires that you actively manage your life.
- Reckoning with Risk (8): Risk Capacity, Appetite, Tolerance and Perception looks at the problem of (apparent) shifts in risk tolerance.
- Reckoning with Risk (9): Money for Nothing reminds us that risk and return generally correlate.
- Reckoning with Risk (10) deals with complexity risk.