Digital Athena

Modeling Risks, or the Risks of Models

6/17/2010

In The New York Review of Books for June 24th, Paul Volcker has a cautionary piece about the imbalances, deficits, and risks of our fiscal situation. The essay is called “The Time We Have Is Growing Short.” Five years ago, Volcker writes, he saw the need for fundamental changes but found no resolve to do anything about them, either on the part of the public or of the politicians. At the time, he predicted that reform would come only in response to a crisis. Little did he know how large a crisis lay ahead.

Volcker writes that he did not anticipate the enormity of the crisis in part because “innovations” such as credit default swaps and CDOs had not existed in his day. Nor did the fancy computer models exist that were developed to devise, build, and evaluate the risks of those innovations. Those models assumed that financial markets follow laws similar to those of the hard sciences. Volcker sees this as a big mistake: “One basic flaw running through much of the recent financial innovation,” he writes, “is that the thinking embedded in mathematics and physics could be directly adapted to markets.” But financial markets, he points out, do not behave according to changes in natural forces; rather, they are human systems, prone to herd behavior and wide swings in emotion. They are also subject to political influences and various uncertainties.

The quantification of the financial markets in sophisticated computer modeling was not a dominant part of the financial world that Volcker inhabited at the Federal Reserve from the fifties through the eighties. Yet the seeds of change were certainly there. They were planted in the early seventies, when a roughly simultaneous rise in market volatility, computing power, and financial modeling began to transform the financial industry.

Many factors contributed to the growing volatility of the markets in the early seventies. After the dollar was cut free from the gold standard in 1971, volatility invaded the foreign exchange markets. Oil prices, which had remained stable for decades, exploded. And interest rates and commodity prices saw levels of volatility that would have been unthinkable in the three previous decades. Financial deregulation and inflation contributed to the mix as well.

As Peter Bernstein wrote in his history of risk, Against the Gods (1996), the rising market volatility of the 70s and 80s produced a new demand for risk management. In the face of all this volatility and uncertainty, Wall Street saw its traditional investing strategies as inconsistent and unpredictable. They were the old-fashioned operating methods, as one senior Wells Fargo Bank officer wrote at the time, of a “cottage industry.” Something new, some more “modern” innovation, seemed to be called for.

That innovation arose from two sources, both of which burst upon the scene in 1973, the same year a new exchange for managing risk by buying and selling stock options opened its doors. Innovations in computers and in financial models were jointly destined to create dramatic changes in the financial markets. The extraordinary power of computers greatly expanded the market’s ability to manipulate data and to devise and manage complex strategies and products. Models, for their part, seemed to offer a new and sophisticated way to avoid at least some of the uncertainty investors faced. It was the beginning of the age of modern risk management.

The 1973 series of events incorporated all the major elements of the changes to come:

• In April 1973, the Chicago Board Options Exchange opened. The new exchange provided traders with an established process for trading stock options, including standardized contracts and market-makers who would buy and sell on demand, thus providing liquidity to the market. This was seen as a way to manage the risks involved in the stock market itself.

• The following month, an article appeared in The Journal of Political Economy explaining for the first time the Black-Scholes-Merton method for valuing options. This model, expressed in complex algebra, used four basic elements to calculate the value of an option and, in so doing, included a quantitative method for determining volatility. (A small sketch of the formula appears after this list.)

• At the same time, Texas Instruments introduced its SR-10 handheld electronic calculator. Within months TI was running large ads in The Wall Street Journal pitching new possibilities to traders: “Now you can find the Black-Scholes value using our calculator.”
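To give a sense of what those calculator-wielding traders were computing, here is a minimal sketch of the standard Black-Scholes formula for a European call option, written in Python rather than punched into an SR-10. The formula itself is the textbook one; the sample numbers at the bottom are purely illustrative.

```python
# A minimal sketch of the standard Black-Scholes value of a European call.
# Inputs: spot price S, strike K, time to expiry T in years, risk-free rate r,
# and volatility sigma. The example values below are illustrative only.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes value of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A one-year call on a $100 stock struck at $105, with a 5% rate and 20% volatility.
print(round(black_scholes_call(100, 105, 1.0, 0.05, 0.20), 2))
```

The point is not the particular numbers but how mechanical the calculation is: once the formula was published and the hardware was cheap, pricing an option became a matter of punching in a handful of inputs.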

It didn’t take long for options traders to start using technical expressions from the Black-Scholes-Merton model to calculate the risks of their options. Armed with their new handheld calculators, traders on the floor of the Chicago Board Options Exchange could run the formula to quantify risk and calculate the value of a given stock option. As Bernstein points out, a new era had begun in the world of risk management.

What characterized this new era of risk management? Clearly it had much to do with the power of computers. Clearly it had much to do with complex mathematical models for expressing and predicting risk. And clearly it had much to do with an inordinate belief in the efficacy of those models and in the power of those computers to escape uncertainty by “managing” risk.

But how modern, how advanced, was it all? Toward the end of Against the Gods, Peter Bernstein offers a stark comparison between those who trust in complex calculations today and the ancient Greeks: “Those who live only by the numbers,” he observes, “may find that the computer has simply replaced the oracles to whom people resorted in ancient times for guidance in risk management and decision-making.”

So it seems that, as long as belief in the calculations of computer models prevails in the markets, we are not much better off than those who journeyed to Delphi long ago to worship Apollo and consult the oracle about their fates.

The Machines in the Markets (Part 2)

6/8/2010

I posted some thoughts about the financial crisis last week. Here’s some more about how technology helped create the derivatives markets in all its complexity and obscurity.

Wall Street exists, in principle, to help businesses run well, helping them raise cash and grow profitably. Financial transactions, it is understood, serve economic and social purposes. But as Roger Lowenstein points out in The End of Wall Street (2010), those purposes were undermined and perverted in the years running up to the 2007-2009 financial crisis. Before the 1990s, Wall Street’s profits averaged about 1.2% of GDP. By 2005, that percentage had risen to 3.3%. “The proper end of Wall Street is to oil the nation’s business,” Lowenstein writes. That end “became, in the bubble era, a goal in itself, a machine wired to inhuman perfection.”

That Wall Street machine included the automated securitization programs, which, using computer technology, created “structured” products that contained pools of securities. The Wall Street machine also included the mathematical models for forecasting the performance of those complex products under certain conditions over time.

What computers ended up enabling was a series of derivative financial instruments, each type riskier and further removed from real economic purpose than the last. At each step along the way the banks added more complexity, which in turn created more abstraction and obscurity. Eventually the machine fed upon itself, building structured products that were detached from economic meaning. They offered investors pure side bets on other financial instruments. It is a picture of a machine gone mad in a market that had ceased to understand its own raison d’être.

Complex digital technology also defined who could participate in the derivatives market. These “innovative” structured products were available only to the large investment banks and commercial banks that could afford the technology and infrastructure needed to define and manage such complexities. In fact, none of this would have been possible without sophisticated computers and the quantitative scientists who used them.

Here’s how the series of derivatives developed over the years: what each one is, its economic purpose, its pitfalls, and how it evolved (or devolved) over time.

Mortgage-backed securities

What

These were bonds that contained pools of individual home mortgages. Wall Street banks would buy mostly prime mortgages (at least at the start) from smaller banks and other lending institutions. They would then group them into different levels (called “tranches”) based on the amount of risk involved and the return. The ratings agencies would certify the risks and assign ratings such as triple-A or triple-B.
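A rough sketch of the tranche idea, with made-up numbers: losses on the underlying pool of mortgages are absorbed from the bottom up, so the senior slice gets hit only after the junior slices are wiped out. The tranche names and sizes below are hypothetical, not drawn from any actual deal.

```python
# A simplified, hypothetical loss "waterfall" for a pool of mortgages sliced
# into tranches. Losses are absorbed junior-first; the senior tranche is touched
# only after everything below it is gone.
def tranche_losses(pool_loss: float, tranches: list[tuple[str, float]]) -> dict[str, float]:
    """Allocate a total pool loss across tranches ordered from most junior to most senior."""
    allocated = {}
    remaining = pool_loss
    for name, size in tranches:
        hit = min(remaining, size)
        allocated[name] = hit
        remaining -= hit
    return allocated

# Example: a $100M pool suffering $10M of losses. The junior slices absorb it all,
# which is the logic behind giving the senior slice a triple-A rating.
tranches = [("equity", 5.0), ("triple-B", 10.0), ("triple-A", 85.0)]
print(tranche_losses(10.0, tranches))   # {'equity': 5.0, 'triple-B': 5.0, 'triple-A': 0.0}
```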

Economic purpose

As Wall Street purchased the mortgages, they gave much-needed cash to the smaller institutions, who in turn lent the money to other home buyers, making mortgages more plentiful. Secondly, by pooling the debt into bonds, Wall Street was able to distribute to investors the risks of those initially fixed-rate mortgages.

Pitfalls

The smaller institutions, which sold whole mortgages to Wall Street, effectively risked nothing in lending money to home buyers, paving the way for mortgages that were far riskier because of floating rates, insufficient credit checks on buyers, and other murky practices. Also, Wall Street banks paid the ratings agencies for each ratings project, so they could shop around for the rating they wanted.

How things devolved

More mortgages were subprime, often with low teaser interest rates that disappeared after two years. Interest-only mortgages became popular, along with other practices that practically ensured high rates of default. In addition, banks hid the details of the mortgages from the ratings agencies, making the rating process less than perfect. Banks also knew the criteria the ratings agencies used in their models for rating those bonds, so they could “game” the system.

Credit default swaps (CDSs)

What

First created in the early nineties, these were a form of insurance for investors in mortgage-backed securities. An investor would pay the insurance company (initially AIG) a premium. In turn, the insurance company would agree to pay the bond holder should the mortgages default.

Economic purpose

Investors in mortgage bonds could hedge against the risk of default on the mortgages in a bond.

Pitfalls

Anyone could buy a credit default swap. You didn’t have to own a mortgage bond. In this way the product was easily disconnected from its original economic purpose. (It’s as if you could buy flood insurance on a house on a flood plain without owning the house.) This made the side bets (selling “short”) possible. This also meant that derivative traders could profitably mint and sell unlimited quantities of these products to whoever wanted to “bet” on mortgage defaults.

Secondly, the insurers, initially AIG, misunderstood the risks involved. For some years, they had been “insuring” corporate bonds, which have an extremely low rate of default. They just assumed that mortgage-backed bonds were similar in their risks. They weren’t. As a result, they sold insurance at far too low a price. Thus they might collect $10M in premiums and guarantee $20B in bonds. This made credit default swaps a cheap gamble for those who wanted to bet against the mortgage bonds.
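The scale of that mispricing is easy to see with the post’s own illustrative figures. The default and recovery rates below are hypothetical, chosen only to show how quickly the payouts can swamp the premiums.

```python
# Back-of-the-envelope arithmetic on the insurer's asymmetric bet, using the
# illustrative figures above plus hypothetical default and recovery rates.
premiums_collected = 10_000_000        # $10M in premiums (from the example above)
notional_guaranteed = 20_000_000_000   # $20B of bonds guaranteed (from the example above)

default_rate = 0.05                    # hypothetical: 5% of the notional defaults
recovery_rate = 0.40                   # hypothetical: 40 cents on the dollar recovered
payout = notional_guaranteed * default_rate * (1 - recovery_rate)

print(f"Premiums collected: ${premiums_collected:,.0f}")
print(f"Payout on a 5% default: ${payout:,.0f}")                                # $600,000,000
print(f"Payout as a multiple of premiums: {payout / premiums_collected:.0f}x")  # 60x
```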

How things devolved

As the subprime mortgage industry grew, there were many more investors willing to speculate on its demise. The market for credit default swaps burgeoned in the late 90s and onward.

Collateralized debt obligation (CDOs)

What

These were bonds created by packaging up a hundred or so mortgage bonds (or, to make it yet more complicated, portions of those bonds). Since each mortgage bond might contain 1000 mortgages, a CDO might contain bonds representing hundreds of thousands of mortgages.

Economic purpose

Banks could offer new derivatives with high returns and low risk, and they had spiffy models that seemed to demonstrate this over time. But eventually a CDO became, as Michael Lewis describes it in The Big Short, just a laundering scheme for the debt of lower-middle-class home buyers. It enabled the subprime market to flourish.

Pitfalls

CDOs were so complicated and so abstract that it was just about impossible to understand the risks involved. All investors had to go on were the models and the ratings. Also, it took roughly $50B worth of original mortgages to create a $1B CDO, which made the raw material hard to come by.

How things devolved

Beginning in 2004, the mix in the mortgage-backed bonds began to shift heavily into subprime loans until about 95% of the bonds were subprime (originally only about 2% of mortgage-backed bonds were subprime). Usually the “portions” of the bonds that were packaged into a CDO were the lowest (triple-B) rated mortgages. The models that re-rated the riskiest bonds as triple-A could do so, according to the financial wizards on Wall Street, because the likelihood that many of them would default at the same time was extremely low. Even sophisticated Wall Street investors did not understand how the rules of the game were changing. The ratings agencies didn’t have models for CDOs, so the banks who packaged the derivatives would send their own models to the ratings agency when asking for an official rating. Hence both the models and the ratings were wrong.
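A toy simulation makes that correlation assumption concrete. If the hundred or so triple-B bonds in a CDO default independently, the chance that a large fraction of them fail together is vanishingly small; add even a modest probability of a shared shock, such as a nationwide housing downturn, and that chance jumps. Every number here is hypothetical, chosen only to illustrate the shape of the argument.

```python
# A toy Monte Carlo comparison of independent defaults versus defaults driven by
# a shared shock. All parameters are hypothetical.
import random

def prob_many_defaults(n_bonds=100, p_default=0.10, p_shared_shock=0.0, trials=20_000):
    """Estimate the probability that more than a quarter of the bonds default."""
    bad_trials = 0
    for _ in range(trials):
        if random.random() < p_shared_shock:
            defaults = n_bonds                               # a common shock sinks everything
        else:
            defaults = sum(random.random() < p_default for _ in range(n_bonds))
        if defaults > n_bonds // 4:
            bad_trials += 1
    return bad_trials / trials

random.seed(0)
print("Independent defaults:", prob_many_defaults(p_shared_shock=0.0))    # essentially zero
print("With a shared shock: ", prob_many_defaults(p_shared_shock=0.05))   # roughly 0.05
```

Under independence the models could honestly report that mass default was a five-sigma event; the flaw was the independence assumption itself.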

Synthetic CDOs

What

These synthetic collateralized debt obligations contained credit default swaps on about 100 triple-B-rated bonds.

Economic purpose

A synthetic CDO simply provided a way for investors to bet on the mortgage and housing industry; in fact, it served no real economic purpose.

Pitfalls

The ratings agencies didn’t have a clue as to the value of these complex and obscure derivatives. Neither did investors.

How it devolved

In the process of evaluating a synthetic CDO based on the computer models built by the banks, the ratings agencies rerated to triple-A roughly 80% of the credit default swaps on triple-B-rated bonds. Of course the remaining 20%, which were rated triple-B, were harder to sell. So the banks would repackage a bunch of the credit default swaps on triple-Bs and reprocess them. Each time, 80% of those bonds were rerated triple-A.
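Here is a minimal sketch of that repackaging loop, with an illustrative starting amount. After a handful of passes, nearly all of the original triple-B risk is wearing a triple-A label.

```python
# A sketch of the repackaging loop: on each pass, the unsold triple-B residue is
# rebundled and roughly 80% of it comes back out rated triple-A. Figures are illustrative.
def repackage(initial_triple_b: float, passes: int = 5, rerate_share: float = 0.80) -> float:
    remaining_triple_b = initial_triple_b
    total_rerated_aaa = 0.0
    for i in range(1, passes + 1):
        newly_aaa = remaining_triple_b * rerate_share
        total_rerated_aaa += newly_aaa
        remaining_triple_b -= newly_aaa
        print(f"Pass {i}: ${newly_aaa:,.0f} rerated triple-A, ${remaining_triple_b:,.0f} still triple-B")
    return total_rerated_aaa

# Example: start with $1B of credit default swaps on triple-B-rated bonds.
repackage(1_000_000_000)
```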

The subprime mortgage market was really only a small percentage of the whole mortgage industry. But because there were no real economic limits on the number of credit default swaps that could be written, the losses in the financial sector through both credit default swaps and synthetic CDOs grew much bigger than the subprime market itself, creating a tower of liabilities.

And the rest, as they say, is history.

 