This automated electronic monster seemed to have a life of its own. And Wall Street, which had long reveled in the wonders of technology, marveled at it. Since the eighties, Wall Street companies had been hiring Ph.D. scientists who understood digital technology. Those scientists, called “the quants,” built the software machines that fueled tremendous growth in the financial industry. They applied mathematical models of uncertainty to financial data and to increasingly complex products. They readily wrote new algorithms, built mathematical models to quantify risks, and devised procedures and operations to handle the new complexities. As a result, the markets worked faster and more efficiently. But as the years rolled on, the financial instruments became more byzantine and opaque. Eventually those products, which were designed to manage risk, were actually creating new risks out of thin air through high-tech obfuscation.
Much of this complexity was created in the name of “innovation.” Financial innovation, like technological innovation, had become a good in and of itself. Alan Greenspan had long been a big proponent of innovation in financial markets. In his autobiography, The Age of Turbulence (2007), Greenspan praised “the development of technologies that have enabled financial markets to revolutionize the spreading of risk. Three or four decades ago, markets could only deal with plain vanilla stocks and bonds. Financial derivatives were simple and few. But with the advent of the ability to do around-the-clock business real-time in today’s linked worldwide markets, derivatives, collateralized debt obligations, and other complex products have arisen that can distribute risk across financial products, geography, and time.” (488) According to Greenspan, these financial innovations were all to the good. After all, they contributed to growth, productivity, and increases in market efficiency.
The quants also designed another type of machine, a manufacturing machine, if you will, for creating “innovative” derivatives. And they built a third type of machine: computer models that used scenarios to “demonstrate” how derivatives would perform under certain conditions. In effect, the software models, while complex in themselves, gave a powerful set of easy-to-use tools to Wall Street traders and salespeople. Thus they could start conversations with their customers about very complex derivative products. It didn’t seem to matter that most people on Wall Street didn’t understand them.
In Lecturing Birds on Flying: Can Mathematical Models Destroy Financial Markets? (2010), Pablo Triana, himself a seasoned trader, says these models made it possible to demonstrate with mathematical precision how derivatives would produce returns under given conditions. And many people—both traders and investors—believed in the models. They trusted the numbers that were displayed in all their high-tech glory on the screens. Unfortunately they did not understand the underlying securities, the assumptions built into the models, or the methods by which the models were built.
In fact, the slick and sophisticated models created widespread overconfidence in the forecasts. The traders, the salespeople, and the investors looked at the numerical certainty of the models and were convinced by what they said, ignoring the fact that financial markets are by their nature unpredictable and vulnerable to crises. In some cases, the models, it seemed, just gave bankers justification for taking on more and more risk while at the same time appearing highly sophisticated to the outside world.
This belief in the truth of technology is not uncommon. Alan Greenspan himself expressed a similar kind of blind faith in financial innovation and high-tech complexity when he compared the financial markets to a U.S. Air Force B-2 airplane. Our twenty-first century markets are too big, too complex, and too fast to be governed by twentieth century regulation and supervision, he argues toward the end of his autobiography. The movement of funds is too huge, the entire market system far too complex, the daily transactions far too numerous, to be understood and regulated. And this is OK. After all, a U.S. Air Force pilot does not need to know all the computer-aided adjustments that keep his B-2 in the air. Why should we expect to know how the markets behave?
But the analogy is flawed. There is a great deal of solid scientific knowledge in the B-2. A team of top scientists and engineers worked for years to design, build, and test it. Crews of highly skilled maintenance workers ensure that its systems are all working correctly before each flight. The markets, by contrast, are at bottom a social system, and they do not operate according to such predictable laws.
The nineteenth century didn’t see it that way. At that time, economists adopted certain scientific terms—equilibrium, pressure, momentum—to explain how the economy and financial markets operated, with the underlying assumption that these systems did follow laws similar to the laws of nature upon which sciences like physics and chemistry were based. But we are a long way from that kind of certainty in financial affairs. In the twentieth century, after World War I and the Depression, deep uncertainty started to color our understanding of markets as economists considered the role of human nature and its irrationality in markets.
The last thirty to forty years have brought two other major changes to the markets. Volatility has become a major factor in modern markets. At the same time, and at first somewhat by coincidence, computers came to play a dominant role in those markets. These two changes have developed in tandem and are now combining to create new uncertainties on top of the old ones rooted in human nature. What we face today is the combination of highly volatile markets and computer systems, with all their fallibilities, unintended consequences, and illusion of truth. And no one knows how the two will play off one another in the years to come.