Financial models have been severely and widely criticized in the wake of the recent financial crisis. Many critics, including Pablo Triana in Lecturing Birds on Flying (2009), echo Nassim Taleb’s Black Swan arguments about the importance of rare events in financial markets. Financial models are faulty, the argument runs, because they use historical data, often just the previous two years or so, to predict the future. They also produce errors because they rely on the normal probability distribution; that is, they assign negligible odds to the possibility that asset prices will swing wildly. Oddly enough, however, the very use of automated programs can itself increase volatility, creating a downward spiral that feeds upon itself. Economists have long understood the uncertainties that human behavior builds into markets. Now it seems another significant source of uncertainty is being built into markets as more and more computer programs are used to manage assets.
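To get a feel for how thin the tails of the normal distribution really are, a rough back-of-the-envelope calculation helps. The short Python sketch below is purely illustrative; the assumption of independent, normally distributed daily returns and the 252-trading-day convention are mine, not drawn from any model cited here. It asks how often moves of several standard deviations "should" occur under that assumption. Real markets deliver such swings far more often than these numbers suggest.

    import math

    def normal_tail_probability(k):
        # Two-sided probability that a standard normal variable moves
        # more than k standard deviations: P(|Z| > k) = erfc(k / sqrt(2)).
        return math.erfc(k / math.sqrt(2.0))

    TRADING_DAYS_PER_YEAR = 252  # a common convention, assumed here

    for k in (3, 4, 5):
        p = normal_tail_probability(k)
        years_between = 1.0 / (p * TRADING_DAYS_PER_YEAR)
        print("a %d-sigma daily move: probability %.2e per day, "
              "expected roughly once every %.0f years" % (k, p, years_between))

Under this assumption a five-standard-deviation daily move should turn up about once every several thousand years, which is precisely the sense in which the normal distribution assigns "negligible odds" to wild swings.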
In computer-driven asset management, trading companies write computer algorithms that dictate the selection of investments and positions in the markets. Advocates for this sort of automation like to point out that computers have several advantages over human beings when it comes to selecting and managing assets. For one thing, a computer algorithm takes the emotion out of picking stocks and deciding when to buy and sell. It follows clear rules (albeit written by humans) that eliminate the risks of irrational exuberance. There is no fondness for a certain company or industry sector that can bias decisions, nor is there any tendency toward heightened fear in the face of uncertainties. Repeatability and rationality do appear to rule.
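What such "clear rules" look like varies enormously from firm to firm, but even a toy example makes the point about repeatability. The sketch below is a hypothetical moving-average crossover rule written in Python for illustration only; the window lengths are arbitrary and this is not any real firm's strategy. Given the same price history, it returns the same decision every time, with no room for fondness or fear.

    def moving_average(prices, window):
        # Simple average of the most recent `window` prices.
        return sum(prices[-window:]) / window

    def crossover_signal(prices, short_window=10, long_window=50):
        # A toy rule: buy when the short-term average rises above the
        # long-term average, sell when it falls below, otherwise hold.
        if len(prices) < long_window:
            return "hold"  # not enough history to form a view
        short_avg = moving_average(prices, short_window)
        long_avg = moving_average(prices, long_window)
        if short_avg > long_avg:
            return "buy"
        if short_avg < long_avg:
            return "sell"
        return "hold"

    # The same price history always yields the same decision:
    history = [100 + 0.1 * i for i in range(60)]   # a steadily rising series
    print(crossover_signal(history))               # -> "buy"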
Computers can also work around the clock, seven days a week, without their judgment slipping or falling asleep. And, in a world where massive amounts of information are being generated both rapidly and constantly, computers are capable of handling much more data than human beings can. In short, computers would seem to be the rational antithesis of the kind of messy and unpredictable human behavior that colors traditional decision-making on and off the trading floor.
Paradoxically, it is the very success of these quantitative trading tools that creates the potential for greater instability in the markets. As Pablo Triana points out, when one company is highly successful using a particular algorithm, other Wall Street firms are quick to imitate that success, incorporating similar schemes into their own models. It is much easier, Triana goes on to say, to copy another company’s trading program in great detail than it is, for example, to copy the hunches and insights of traditional trading methods: “Precise mathematical concoctions are much more amenable to exact replication and copying than human emotions.”
This kind of easy replication results in two kinds of trading patterns. The first is that all the professional traders end up holding very similar positions. The second is that they can all start selling off at the same time, an event that can trigger a more general sell-off in the market. This is what occurred in August of 2007. And it seems to have contributed to the downdraft, the “flash crash,” of May 6th of this year. It’s a growing cause for concern because algorithmic trading is rising rapidly. According to the Wall Street Journal, programmed trading is expected to account for 60% of all trades this year, up from just 28% in 2005 (“Fast Traders Face Off with Big Investors over ‘Gaming,’” http://online.wsj.com/article/SB10001424052748703374104575337270344199734.html?mod=WSJ_Markets_LeadStory).
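The feedback loop behind such sell-offs can be made concrete with a toy simulation. In the Python sketch below, the number of traders, the stop-loss thresholds, and the linear price-impact assumption are all invented for illustration; this is not a model of August 2007 or of the flash crash. Many traders hold nearly identical stop-loss rules, a modest initial drop trips the most sensitive of them, and their selling deepens the fall enough to trip the rest.

    import random

    def simulate_cascade(num_traders=100, initial_shock=-0.025,
                         impact_per_seller=0.0005, seed=1):
        # Toy cascade: every trader uses the same stop-loss idea, with
        # thresholds bunched between -2% and -4% because the basic rule
        # has been widely copied. An initial drop triggers the most
        # sensitive traders; their selling (a crude linear price-impact
        # assumption) pushes the price lower and triggers the others.
        random.seed(seed)
        thresholds = [random.uniform(-0.04, -0.02) for _ in range(num_traders)]
        still_holding = set(range(num_traders))
        cumulative_return = initial_shock
        round_number = 0
        while True:
            sellers = [i for i in still_holding
                       if cumulative_return <= thresholds[i]]
            if not sellers:
                break
            still_holding -= set(sellers)
            cumulative_return -= len(sellers) * impact_per_seller
            round_number += 1
            print("round %d: %d traders sell, cumulative return %.1f%%"
                  % (round_number, len(sellers), cumulative_return * 100))
        return cumulative_return

    simulate_cascade()

In this stylized setting an initial drop of a few percent ends several times deeper once the copied rules start firing in unison, which is the sense in which the downward spiral feeds upon itself.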