Chaos is a friend of mine

Over 50 years ago, Edward Lorenz created an algorithmic computer weather model at MIT to try to provide accurate weather forecasts. During the winter of 1961, as recounted by James Gleick in Chaos: Making a New Science, Professor Lorenz was running a series of weather simulations using his computer model when he decided to repeat one of them over a longer time period. To save time (the computer and the model were primitive by today’s standards) he started the new run in the middle, typing in numbers from the first run for the initial conditions, assuming that the model would provide the same results as the prior run and then go on from there. Instead, the two weather trajectories diverged on completely separate paths.

After ruling out computer error, Lorenz realized that he had not entered the initial conditions for the second run exactly. His computer stored numbers to an accuracy of six decimal places but printed the results to three decimal places to save space. Lorenz had entered the rounded-off numbers as the starting point for his second run. Astonishingly, this tiny discrepancy altered the end results dramatically.

This finding, though it emerged from a highly simplified model, was confirmed by further testing and allowed Lorenz to make the otherwise counterintuitive leap to the conclusion that highly complex systems are not ultimately predictable. This phenomenon of “sensitive dependence” on initial conditions came to be called the “butterfly effect,” after the idea that a butterfly flapping its wings in Brazil can set off a tornado in Texas. Lorenz built upon the work of the late 19th century mathematician Henri Poincaré, who demonstrated that the movements of as few as three heavenly bodies are hopelessly complex to calculate, even though the underlying equations of motion seem simple.
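
The divergence Lorenz stumbled upon is easy to reproduce. Here is a minimal sketch (in Python, using the later three-variable Lorenz system rather than his original twelve-variable weather model, and assuming NumPy and SciPy are available) that starts two runs from initial conditions differing only in the fourth decimal place, roughly the scale of the rounding error Lorenz re-entered by hand, and tracks how far apart they drift.

```python
# A minimal sketch of sensitive dependence on initial conditions, using the
# later three-variable Lorenz system (not Lorenz's original twelve-variable
# weather model). Assumes NumPy and SciPy are available; sigma=10, rho=28
# and beta=8/3 are the standard textbook parameter choices.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """The three coupled Lorenz equations."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Two starting points differing only in the fourth decimal place -- roughly
# the scale of the rounding error Lorenz re-entered by hand.
t_eval = np.linspace(0.0, 40.0, 4001)
run_a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], t_eval=t_eval)
run_b = solve_ivp(lorenz, (0.0, 40.0), [1.0001, 1.0, 1.0], t_eval=t_eval)

# Distance between the two trajectories at each sampled time.
separation = np.linalg.norm(run_a.y - run_b.y, axis=0)
for t in (5, 15, 40):
    print(f"separation at t = {t:>2}: {separation[int(t * 100)]:.6f}")
```

The two runs track each other for a while and then bear no resemblance to one another, which is the whole point: the model is fully deterministic, yet rounding at the third decimal place is enough to destroy any long-range forecast.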

Accordingly, complex systems – from the weather to the markets – allow, at best, only for probabilistic forecasts with significant margins for error and often seemingly outlandish and hugely divergent potential outcomes.

“Assume a Spherical Cow”

Anyone who has managed money for more than about a nanosecond recognizes that the idea that the markets are efficient is a myth, especially in times of crisis, real or perceived. Despite claims of scientific objectivity, economics is as prone to human frailty as anything else. As Seth Godin wrote this week: “Your first mistake might be assuming that people are rational.” For example, on my birthday in 2008, the S&P 500 lost over 9 percent, over a trillion dollars in value, on account of (per CNN) nothing more definitive than “recession talk.” That’s hardly evidence of rationality.

The consistently engaging 3 Quarks Daily has a new piece up this week on economics as religion rather than science. It’s hardly a novel concept, but the argument is an interesting one.  However, author Ben Schreckinger missed or ignored some of the best available evidence, which I’ll get to in a bit of a roundabout way.

The above cartoon (from Abstruse Goose) riffs on a classic physics joke that goes something like this:

Milk production at a dairy farm was low, so a farmer wrote to the local university to ask for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, notebooks crammed with data, where the task of writing the report was left to the team leader. Shortly thereafter the physicist returned to the farm, and advised the farmer, “I have the solution, but it only works in the case of spherical cows in a vacuum.”

Thus “spherical cow” has become a metaphor for highly (overly!) simplified scientific models of complex real life phenomena. Economists may be even worse offenders than theoretical physicists. As Hale Stewart wrote yesterday, “complex models that claim to model the entire US economy just aren’t worth the time of day no matter how good the algorithms backing it up.” That economists so frequently suffer from “physics envy” makes for a delightful bit of irony. Yet an even worse offense by economists is their all-too-frequent willingness to elevate a favored ideology ahead of the actual facts. People are routinely driven by their ideologies and their behavioral biases rather than facts and data, of course, but economists claim to be acting as scientists, with a delineated method designed to root out such things. If only.

The Tuned Market

My two-year-old grandson loves his Legos.  As a kid, I loved my Legos. These colorful bricks seem to call to us to create elaborate and complex structures.  They don’t invite simple, clean lines. Sure, once in a while somebody (never a kid) makes a Lego Fenway Park or something like that, but crazy and complex is the norm (not that a Lego Fenway Park isn’t a different sort of crazy). That’s why they’re so much fun.

We love complexity.  It’s why it is so hard for us to employ Occam’s Razor.  We should of course go with the simpler explanation or approach unless and until something more complex offers greater explanatory power. But we don’t.  We want to include our pet political ideas, convoluted conspiracy theories or favored market narratives.  We are ideological through-and-through, and the more complex the better.

Complexity Risk Management — a lot like Jazz

At the most basic level, complexity risk encompasses situations such as the Lehman Brothers collapse, where the management of a major investment bank did not fully understand the risks it was taking or the consequences of those risks, or the Madoff scandal, where private investors did not understand their investments.  Situations such as these are a major problem in and for our industry.  But complexity risk today goes much deeper still.

Complexity risk has been brought to the fore most recently on account of several well-publicized market blow-ups relating to high-frequency trading. In late September, the Senate banking committee held a hearing on the issue and the Securities and Exchange Commission got into the act with a recent panel discussion.  Even Wall Street veterans have begun to question whether a market trading at warp speed is a good thing. In this context, we’re not merely talking about a distinction between trading and investing.  Instead, we’re rewarding trading to the exclusion of investing.  As Roger Lowenstein argues, “[i]f market signals are based on algorithms that become outmoded in a nanosecond, we end up with empty factories and useless investment.”

In exchange for providing the markets with more liquidity than they need, high-frequency trading has created a complexity risk problem of potentially enormous scale by subjecting markets to the much increased likelihood of more destabilizing crashes.  Moreover, prices may come to reflect (quite literally) the value judgments not of investors, but of high-speed algorithms. As Lowenstein points out, several publicly traded companies lost nearly $1 trillion of market value — albeit briefly — in a so-called “flash crash” in May of 2010 that the SEC said was triggered by a single firm using algorithms rapidly to sell 75,000 futures contracts.

Lawmakers in several countries are proposing to address this problem by imposing new restrictions on high-speed traders.  They are also considering modularity-enhancing options like the creation of shutdown switches that might be able to cordon off damage in a crisis. Lowenstein further argues that the better way to discourage this short-term market myopia is to take a page from anti-tobacco efforts: let high taxes discourage the undesirable behavior.  But the overall risks of complexity are broader and deeper still, as the 2008-2009 financial crisis aptly demonstrated.
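
Those shutdown switches are, in essence, circuit breakers: pause trading automatically when prices move too far, too fast. Here is a bare-bones sketch of the logic in Python; the 10 percent threshold and the five-minute lookback window are arbitrary assumptions for illustration, not any exchange’s actual rule.

```python
# A bare-bones circuit-breaker sketch -- the 10 percent threshold and the
# five-minute lookback window are arbitrary assumptions for illustration,
# not any exchange's actual rule.
from collections import deque

class CircuitBreaker:
    """Halt trading when price falls too far within a rolling window."""

    def __init__(self, max_drop=0.10, window_seconds=300):
        self.max_drop = max_drop
        self.window_seconds = window_seconds
        self.history = deque()  # (timestamp_seconds, price) pairs

    def on_tick(self, timestamp, price):
        """Record a trade and return True if trading should be halted."""
        self.history.append((timestamp, price))
        # Drop prices that have fallen out of the lookback window.
        while self.history and timestamp - self.history[0][0] > self.window_seconds:
            self.history.popleft()
        window_high = max(p for _, p in self.history)
        drop = (window_high - price) / window_high
        return drop >= self.max_drop

breaker = CircuitBreaker()
print(breaker.on_tick(0, 100.0))    # False: no move yet
print(breaker.on_tick(60, 97.0))    # False: down 3 percent
print(breaker.on_tick(120, 89.0))   # True: down 11 percent within the window
```

The design question, as with any modularity device, is where to set the tripwire: too tight and it halts ordinary volatility; too loose and it trips only after the damage is done.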

As Caltech system scientist John C. Doyle has established, a wide variety of systems, both natural and man-made, are robust in the face of large changes in environment and system components, yet remain potentially fragile to even small perturbations. Such “robust yet fragile” networks are ubiquitous in our world. They are “robust” in that small shocks do not typically spread very far in the system.  Because they are “fragile,” however, a tiny but well-placed adverse event can bring down the entire system.

Such systems are efficiently fine-tuned and thus appear almost boringly robust despite the potential for major perturbations and fluctuations. As a consequence, systemic complexity and fragility are largely hidden, often revealed only by rare catastrophic failures.  Modern institutions and technologies facilitate robustness and efficiency, but they also enable catastrophes on a scale unimaginable without them — from network and market crashes to war, epidemics, and global warming.

Chaos, criticality, and related ideas from statistical physics have inspired a completely different view of complexity, one in which behaviors that are typically unpredictable and fragile “emerge” from simple and usually random interconnections among homogeneous components.  Since complexity science demonstrates that financial markets are unpredictable and fragile, the risks to investors and to the markets as a whole are both obvious and enormous.
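
To make the “robust yet fragile” idea concrete, here is a toy cascade simulation in Python. It is emphatically not one of Doyle’s models; the scale-free network, the 50 percent failure threshold and the failure rule are arbitrary assumptions chosen only to illustrate the pattern, and the sketch assumes the networkx library is available.

```python
# A toy "robust yet fragile" cascade, purely for illustration -- this is not
# one of Doyle's models. The scale-free network, the 50 percent failure
# threshold and the failure rule are all arbitrary assumptions. Assumes the
# networkx library is available.
import networkx as nx

def cascade(graph, seed_node, threshold=0.5):
    """Return the set of failed nodes after seeding a failure at one node.

    A surviving node fails once at least `threshold` of its neighbors have
    failed; the process repeats until nothing else changes.
    """
    failed = {seed_node}
    changed = True
    while changed:
        changed = False
        for node in graph.nodes:
            if node in failed:
                continue
            neighbors = list(graph.neighbors(node))
            if not neighbors:
                continue
            failed_share = sum(n in failed for n in neighbors) / len(neighbors)
            if failed_share >= threshold:
                failed.add(node)
                changed = True
    return failed

# A scale-free network: a few highly connected hubs, many sparse nodes.
g = nx.barabasi_albert_graph(200, 2, seed=1)

peripheral = min(g.nodes, key=g.degree)  # a sparsely connected node
hub = max(g.nodes, key=g.degree)         # the most connected node

# In this setup a peripheral failure typically stays local, while a failure
# at a hub can spread across a large share of the network.
print("seeded at a peripheral node:", len(cascade(g, peripheral)), "of 200 failed")
print("seeded at a hub:            ", len(cascade(g, hub)), "of 200 failed")
```

The point is not the particular numbers but the shape of the outcome: the very connectivity that makes the network efficient in normal times is what lets a well-placed failure propagate.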

While there are great benefits to complexity as it empowers globalization, interconnectedness and technological advance, there are unforeseen and sometimes unforeseeable yet potentially catastrophic consequences too.  Higher and higher levels of complexity mean that we live in an age of inherent and, according to the science, increasing surprise and disruption. The rare (but growing less rare) high-impact, low-frequency disruptions are simply part of systems that are increasingly fragile and susceptible to sudden, spectacular collapse.  John Casti’s X-Events even argues that today’s highly advanced and overly complex systems and societies have grown highly vulnerable to extreme events that may ultimately result in the collapse of our civilization.  Examples could include a global internet or technological collapse, transnational economic meltdown or even robot uprisings.

We are thus almost literally (modifying Andrew Zolli‘s telling phrase slightly) tap dancing in a minefield — we don’t quite know when our next step is going to result in a monumental explosion.  One’s goal, therefore, must be first to survive and then to thrive in that sort of disruptive and dangerous environment — in other words, to be resilient.

Unfortunately, while today’s complex systems are generally quite good at dealing with anticipated forms of uncertainty and disruption (Donald Rumsfeld’s “known unknowns”), which makes them highly efficient, it is the unanticipated “unknown unknowns” that are so vexing and problematic.  Real crises happen when and where we least expect them and strike at the heart of a system. Thus the Lehman Brothers collapse wasn’t a problem of being too big to fail, but rather a function of being too central to fail without enormous cascading impacts. Its risk models were wildly inadequate yet considered utterly reliable — a classic unknown unknown.

As the complexity of a system grows, both the sources and the severity of possible disruptions increase.  Resilient systems are not perfect or even perfectly efficient.  Indeed, regular modest failures are essential to many forms of resilience (adjusting and adapting are crucial to success).  In this context, then, efficiency can be a net negative and redundancy a major positive. Hedges matter.  Learning from mistakes is vital.

Meanwhile, the size required for potential ‘triggering events’ decreases in an increasingly complex world.  Thus it may only take a tiny event, at the wrong place or at the wrong time, to spark a calamity.  While the chances of any of these possibilities actually happening are individually remote, our general susceptibility to that type of catastrophe is surprisingly real.

Thus those who would attempt to manage risk in the aggregate and complexity risk specifically must take these fundamental features of network systems into account.  Sadly, this field is much more descriptive than prescriptive.  Zolli again:  “Resilience is often found in having just the ‘right’ amounts of [certain] properties – being connected, but not too connected; being diverse, but not too diverse; being able to couple with other systems when it helps, but also being able to decouple from them when it hurts. The picture that emerges is one of strategic looseness, an intentional stance of both fluidity (of strategies, structures, and actions) and fixedness (of values and purpose).”

This “Goldilocks” approach to complexity — everything needs to be at some relatively undefined “just right” level — makes such risks extremely difficult to manage.  There is simply no definitive blueprint for managing them.  But there are some patterns that are helpful.  For example, diversity, modularity (a problem with one component or the elimination of one outlet won’t scuttle the entire system), proximity, redundancy, flexibility and adaptability are all extremely valuable.  Within interpersonal systems, diversity, flexibility and mutual trust are vital to resilience and success.  So are decentralization and shared control.

Fortunately, diversification is already a well-established virtue in our world, even though its value is often honored only in the breach.   In this context, resilient diversity means fluidity of structures, strategies and approaches but it does not extend to goals, values and core methodologies.  An effective risk mitigation and management approach is thus much like playing jazz.  One must be able to improvise often and well but within an established and consistent structure.

Ultimately, reckoning with risk requires a firm grip on reality – both our inherent optimism and our inherent loss aversion must be tempered.  When things are not going well, until truth is out on the table (via transparency and trust), no matter how ugly, we are not in a position to deal with the problems at hand.  In the event of an unforeseen melt-down in a position, portfolio, market, or even a system-wide collapse, how prepared are you?  And no matter how unprepared you turn out to be (remember those pesky unknown unknowns), have you thought through how you can go about getting back on track after the calamity?

If you haven’t considered these questions carefully and systematically, I reckon that the chances of your getting into a whole heap of trouble, at some point at least, are surprisingly high.

______________

I dealt with these issues peripherally here.  My entire series on risk is available at these links:

Explosive Risk

By now nearly everyone has seen the video of the San Diego 4th of July fireworks debacle.  Here’s one version (taken from about where I was watching).

 

I took two car-loads of friends and family downtown through the traffic, paid $12 per car to park, waded through the crowds (over 500,000 strong) to get a good spot to watch from, only to have to take everyone home disappointed.

The show was supposed to last 18 minutes and be “one of the most logistically complex displays in the world,” according to Garden State Fireworks, the New Jersey company that produced the show.  In business since 1890, Garden State produced hundreds of other shows across the country on July 4. It has staged pyrotechnic displays for such events as the 1988 Winter Olympics, the Statue of Liberty Bicentennial Celebration, Macy’s New York July Fourth Celebration on the Hudson and the Washington, D.C. July Fourth Celebration.  Only ours failed.

“Everyone’s seen their computers crash, everyone’s seen their cell phones drop calls,” August Santore, Garden State’s owner, told NJ.com.  “The only way to correct anything that’s not working properly, you have to live it. In this particular case, it’s something that was unknown. It’s never happened.”

As Donald Rumsfeld would have it, such unknown unknowns provide our greatest difficulties.

A highly technical (and convoluted) explanation released by Garden State attributed the failure to an “anomaly” that caused about 7,000 shells to go off simultaneously over San Diego Bay. A doubling of code commands in the Big Bay Boom fireworks computer system caused all of the show’s fireworks, from four separate barges and five locations over a 14-mile span, to launch within 30 seconds.  Thankfully, no one was hurt.

Garden State’s statement included an explanation of how fireworks shows are produced through code, with a primary launch file and a secondary back-up. The two files are then merged to create a new launch file, and sent to each of the five fireworks locations. Apparently, an “unintentional procedural step” occurred during that process, causing an “anomaly” that doubled the primary firing sequence.

“The command code was initiated, and the ‘new’ file did exactly what it ‘thought’ it was supposed to do,” the report says. “It executed all sequences simultaneously because the new primary file contained two sets of instructions. It executed the file we designed as well as the file that was created in the back-up downloading process.” The statement placed the blame generally upon its “effort to be over-prepared for any disruption in communications.”
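
For the curious, here is a deliberately simplified sketch of how a merge step can quietly double a command file, along with the kind of cheap pre-show check that would catch it. To be clear, this is not Garden State’s software; the firing-file format, the function names and the check itself are all invented for illustration.

```python
# A deliberately simplified sketch of the kind of merge error described
# above -- this is NOT Garden State's actual software. The firing-file
# format, the function names and the sanity check are invented purely for
# illustration.

def merge_with_backup(primary, backup):
    """Merge the designed show with its backup copy.

    Intended behavior: keep a single copy of each cue. The buggy version
    below simply concatenates, so the merged file carries two complete sets
    of firing instructions.
    """
    return sorted(primary + backup)  # bug: every cue is duplicated

def preshow_check(show, expected_cues):
    """A cheap validation step that would flag a doubled command file."""
    if len(show) != expected_cues:
        raise ValueError(f"expected {expected_cues} cues, got {len(show)}")

# A firing file here is just an ordered list of (launch_time_seconds, shell_id).
primary = [(0, "opener"), (30, "volley-1"), (60, "volley-2")]
backup = list(primary)  # the backup is an exact copy of the designed show

merged = merge_with_backup(primary, backup)

try:
    preshow_check(merged, expected_cues=len(primary))
except ValueError as err:
    print("pre-show check failed:", err)  # 6 cues instead of 3
```

The broader lesson is the one the statement itself hints at: the redundancy added to guard against one failure mode (a loss of communications) quietly created a new and unanticipated one.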

In the world of computer specialization, it’s easy to forget how many of our systems depend on complex code that may be extremely difficult to understand.  More broadly, it’s remarkable how complex nearly everything in our society is.  IT security measures have largely focused on sabotage, but as Edward Tenner points out in The Atlantic, there is also a complexity risk, especially as the scope of cloud computing increases — including risks to back-ups in the cloud. The Technology Review blog recently presented the concerns of Professor Bryan Ford at Yale University:

“Non-transparent layering structures…may create unexpected and potentially catastrophic failure correlations, reminiscent of financial industry crashes,” he says.

But the lack of transparency is only part of the story. A more general risk arises when systems are complex because seemingly unrelated parts can become coupled in unexpected ways.

A growing number of complexity theorists are beginning to recognise this problem. The growing consensus is that bizarre and unpredictable behaviour often emerges in systems made up of “networks of networks”.

An obvious example is the flash crashes that now plague many financial markets in which prices plummet dramatically for no apparent reason.

The issue relates to more than complexity, however.  Complexity, optimization, leverage and efficiency all conspire against redundancy, nature’s primary risk management tool. They also can readily lead to hubris rather than intellectual humility, causing us to think we’ve “got everything covered” (think VaR and the 2008-09 financial crisis).  As Nassim Taleb points out, “Nature builds with extra spare parts (two kidneys), and extra capacity in many, many things (say lungs, neural system, arterial apparatus, etc.), while design by humans tend to be spare [and] overoptimized.”

My point is not to denigrate complexity, optimization and the like.  Instead, I merely wish to emphasize that these benefits also come with risks and that we are foolish to the extent that we do not recognize and deal with those risks.  In the broader context, these risks can include market crashes and other catastrophes.  More personally, they can mean the failure of a retirement income portfolio withdrawal plan or other individual catastrophes. In every case, we should at least explore a good-quality back-up plan, insurance of some kind, or both, particularly when we’re dealing with important matters.  And if (when) our plans fail, the resulting explosions can be real and debilitating.