Digital Athena

Big Data: Cold Water from the New York Times

8/21/2013

0 Comments

 
Times investigative reporter James Glanz brings some hard facts and dissenting opinions to bear on the current big claims about Big Data as “the new oil” of our economy. Glanz cites Northwestern economics professor Robert Gordon, who says that comparing the economic impact of Big Data to that of oil in the late nineteenth and early twentieth centuries is simply a silly form of exaggeration: “Gasoline made from oil made possible a transportation revolution as cars replaced horses and as commercial air transportation replaced railroads. If anybody thinks that personal data are comparable to real oil and real vehicles, they don’t appreciate the realities of the last century.”

Nor does the parallel to the rise of the electricity grid carry much weight with some. In terms of numbers, the comparison is certainly tempting: from 2005 to 2012 the volume of data on the internet increased 1,696%. But the revolutions that the unleashing of electricity produced in manufacturing processes, ways of daily living, and transportation have no match in the rise of “Big Data” to date. In fact, during the very years that saw this explosion in Big Data, we have experienced a lackluster economy in which productivity growth, which had been driven largely by automation from the 1970s through the start of the 2000s, has actually slowed, averaging just 1.8% annually from 2005 to 2012.

In part, such outlandish claims are one of the hazards of predicting the future, always a difficult if not impossible task to get right. Yet they may also borrow something from the spirit of the times: We have grown so accustomed to enormous, “revolutionary” sorts of changes in the last two decades that some believe the sheer size of the growth rate in Big Data must signal something equally unprecedented and huge on the horizon. Yet in the end some economists have observed that the new analytics that companies use to mine Big Data just allow those companies to cannibalize their competitors’ customer bases or to make an even stronger case for digital advertising over print and other traditional media. They create incremental or sideways changes, not revolutions in the economy. And still others muse that the current framework for the use of Big Data may be just plain wrong. In the end, they posit, the context in which our futurists have placed Big Data, like cloud computing, may turn out to be simply “a mirage.” Big Data and cloud computing may be incorporated into our economic and business practices in ways we have yet to even envision. I think we’ll just have to wait and see on this one . . .

0 Comments

Myth of the Ultimate Machine Age: The Genie and the Bottle

6/25/2013

0 Comments

 
There’s a semi-apocryphal story about Norbert Wiener, the brilliant, visionary MIT mathematician. It is said that he used to walk the halls of the campus with his eyes closed and a finger on the wall to ensure that he did not lose his way. One day he was traveling what is fondly known as the “Infinite Corridor,” which stretches 825 feet from the main lobby of MIT’s central building west to east through five major buildings housing classrooms and offices. A classroom in session happened to have its door open, and Wiener simply entered, walked completely around the perimeter, and went out the door again as he continued toward his destination, to the silent amazement (and amusement) of the professor and his students.

Recently the New York Times published an excerpt from a long-lost article that Norbert Wiener wrote in 1949. Originally solicited by the Times’s oddball Sunday editor, Lester Markel, it was mysteriously either lost by Markel or abandoned by Wiener, or both. In any event, a researcher recently found the piece among Wiener’s papers in the MIT archives. In it Wiener speculated about “what the ultimate machine age is likely to be.” He expounded on future automated systems well beyond what then existed and on smart computers and smart gauges that would integrate one machine with another across various manufacturing processes.

Although he did not foresee the economic shift in the value of information versus manufacturing, the revolution he did envision was profound and his predictions dire: “These new machines have a great capacity for upsetting the
present basis of industry, and for reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price. If we combine our machine-potentials of a factory with the valuation of human beings on which our present factory system is based, we are in for an industrial revolution of unmitigated cruelty. . . Moreover if we move in the direction of  making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us. In short, it is only a humanity which is capable of awe, which will also be capable of controlling the new potentials which we are  opening for ourselves. We can be humble and live a good life with the aid of the machines, or we can be arrogant and die.”

Would that our writers and our thinkers and our leaders of corporations today, instead of blithely hailing the onslaught of robots and marveling at increased productivity and the brilliance of our technology, had some of the compassion and wisdom that Wiener possessed in 1949.


  

0 Comments

Can Market Regulation Keep Pace with Technology?

12/20/2010

0 Comments

 
After a hiatus to research and write a long article about how technology affects the markets, I’m wondering if the pace of technological change and so-called “financial innovation” may really move too fast to regulate. Alan Greenspan argued this position for years. In his 2007 autobiography, The Age of Turbulence, Greenspan, paraphrasing Adam Smith, said that, with rare exceptions, the markets seemed to adjust smoothly “as if guided by an international invisible hand.” And throughout his career as head of the Federal Reserve, he favored “financial innovation,” praising new financial “products” and technological changes for automating the markets. These innovations boosted productivity and increased general wealth, he argued. He believed that the type of regulation that prevailed for much of the twentieth century was simply too slow and cumbersome to work in such rapidly evolving markets. However, given that his overall view was that markets needn’t be regulated very much at all, many saw his arguments as blatantly biased.

Some Wall Street traders today, on the other hand, continue to echo Greenspan’s views about the power of technology and the irrelevance of regulation, but with far more self-interest and arrogance in evidence. Here’s how one trader responded to SEC Chair Mary Schapiro’s proposals to eliminate the inequities that result from flash trading, in which institutions pay a fee to see orders before they are placed so they can try to profit from them using algorithms that trade in milliseconds. The trader declared: “We move faster, smarter, and understand risks better than other investors.” And he scoffed at efforts to curb Wall Street’s activities: “I’m not concerned. Profits have always flowed to whoever dominates the marketplace, and we have the technological advantage that it costs millions to match.” Those comments were made just last spring.
And expert observers continue to see technology as a powerful but also a potentially destabilizing influence, one that lets some players manipulate the markets to their advantage. Thomas Peterffy, chief executive of Interactive Brokers, one of the largest brokerage companies in the country, expressed his concern this way: “What we have today is a complete mess. Over the last 10 years, technology delivered great benefits, but in the last year or so, it is not so good. There is more room for the various games some people play.” One SEC official also recently commented that while regulators are learning something from watching the aberrant behavior in individual stocks, “so far it is hard to extrapolate too much as to the general trends in the market.”


 

What it seems we are learning is that, with so many complex technologies in play, we just may not know what’s going to go wrong next—or why.

0 Comments

Alan Greenspan’s Love Affair with Technology

7/20/2010

0 Comments

 
Throughout his autobiography, The Age of Turbulence, Alan Greenspan expresses a deep fascination for the ways in which technology has transformed our economy. Among other changes, technology has revolutionized the distribution of risk, he maintains, and has also increased the ability of the markets to absorb shocks. As a result, the economy has a new and very modern flexibility. “Three or four decades ago, markets could deal only with plain vanilla stocks and bonds. Financial derivatives were simple and few,” Greenspan writes. “But with the advent of the ability to do around-the-clock business real-time in today’s linked worldwide markets, derivatives, collateralized debt obligations, and other complex products have arisen that can distribute risk across financial products, geography, and time. . . . With the exceptions of financial spasms [as in 1987 and 1998], . . . markets seem to adjust smoothly from one hour to the next, one day to the next, as if guided by an ‘international invisible hand,’ if I may paraphrase Adam Smith.” Driven by advanced technology, the modern market process improves market efficiency and hence raises productivity. It is a triumph of modern information technologies.

Greenspan’s habit of mind made his enthusiasm for information technology inevitable. The long-time Chairman of the Federal Reserve never met a number he didn’t love. In his lifelong search for new knowledge and insights about the economy, he liked nothing better than absorbing large quantities of economic data. Lack of emotional bias was central to his search. In his twenties he was attracted to logical positivism, a school of thought popular with Manhattan Project scientists. According to logical positivism, knowledge could only be obtained from facts and numbers. Values, ethics, and personal behavior were not logical in nature. Rather, they were shaped by the dominant culture and hence not part of serious thinking on any subject. Greenspan would later emend this view, particularly regarding values, but the idea that facts and numbers were the path to knowledge remained part of his core beliefs.

A course he took in 1951 in mathematical statistics provided him with a scientific basis for his beliefs. Mathematical statistics proposed that the economy could be measured, modeled, and analyzed mathematically. (It was a nascent form of what is known today as econometrics.) Greenspan was immediately attracted to this discipline—and he excelled in it. Here was a forecasting method based on mathematics and empirical facts. Many prominent economists at the time, Greenspan observed, relied on “quasi-scientific intuition” in their forecasting, but he himself was inclined to develop his thinking in a different way: “My early training was to immerse myself in extensive detail in the workings of some small part of the world and infer from that detail the way that segment of the world behaves. That is the process I have applied throughout my career.”

Little wonder that when digital computers began to invade the business world, Greenspan naturally saw them as extremely effective in gathering and ordering vast amounts of data and numbers: it is, after all, what computers do best. In fact, the span of Greenspan’s career did coincide with a revolution in financial markets based on digital computers. Like many others, he saw great progress in the innovations and improved efficiency that technology brought to the markets. As long as technology was contributing to productivity growth and to general wealth, he could see nothing wrong with it. In fact, he often makes assumptions and even illogical arguments, all in the name of technological progress. One example involves his attitude toward increased debt levels for both individuals and businesses. Yes, he admits there has been a long-term increase in leverage. But the appropriate level of leverage, he argues, is a relative value that varies over time. Greenspan further minimizes the ramifications of increased leverage by arguing that people are steadfastly and innately averse to risk. Technology has simply added more flexibility to the system. Thus, Greenspan concludes, the general willingness of investors, businesses, and households to take on more leverage must mean that the additional financial “flexibility” allows for increased leverage without increased risk. “Rising leverage,” he writes blithely, “appears to be the result of massive improvements in technology and infrastructure, not significantly more risk-inclined humans.”

In the end, Greenspan was forced to change his mind about technology after the recent financial crisis. In his testimony to Congress in April of this year, he identified two major ways in which technology had indeed failed the markets and helped precipitate the crisis. First, the models that sophisticated investors used to assess risks were wrong. Those models had no relevant data that would have allowed them to forecast the impact of an event such as the failure of Lehman Brothers. Investors and analysts had relied on pure—and incorrect—conjecture: They decided that they would be able to anticipate such a catastrophic event and retrench to avoid exposure. They were wrong.

Second, financial models for assessing risks, combined with huge computational capacities to create highly complex financial products, had left most of the investment community in the dark. Investors didn’t understand the products or the risks involved. Their only option was to rely on the rating agencies, which were in effect no better at assessing the risks of these products than anyone else. Technology and those brilliant Ph.D.s known as the “quants” had effectively created their own monsters in the form of credit default swaps and collateralized debt obligations, which were far too opaque for even sophisticated investors to understand.

So much for technological innovations and flexibility. In the end it was in part technology that set up the conditions for the worst economic crisis since the Depression. One is left to wonder where that “international invisible hand” has gone now.
0 Comments

Modeling Risks, or the Risks of Models

6/17/2010

0 Comments

 
In The New York Review of Books for June 24th, Paul Volcker has a cautionary piece about the imbalances, deficits, and risks of our fiscal situation. The essay is called “The Time We Have Is Growing Short.” Five years ago, Volcker writes, he saw the need for fundamental changes but no resolve to do anything about them on the part of either the public or the politicians. At that time, he predicted that reform would only come with a crisis. Little did he know how large a crisis was ahead.

Volcker writes that he did not anticipate the enormity of the crisis in part because “innovations” such as credit default swaps and CDOs had not existed in his day. Nor did the fancy computer models exist that were developed to devise, build, and evaluate the risks of those innovations. Those models assumed that financial markets follow laws similar to those of the hard sciences. Volcker sees this as a big mistake: “One basic flaw running through much of the recent financial innovation,” he writes, “is that the thinking embedded in mathematics and physics could be directly adapted to markets.” But financial markets, he points out, do not behave according to changes in natural forces; rather, they are human systems, prone to herd behavior and wide swings in emotion. They are also subject to political influences and various uncertainties.

The quantification of the financial markets in sophisticated computer modeling was not a dominant part of the financial world Volcker inhabited at the Federal Reserve from the fifties through the eighties. Yet the seeds of change were certainly there. They were planted in the early seventies, when a somewhat coincidental rise in market volatility, computation, and financial modeling began to transform the financial industry.

Many factors contributed to the growing volatility of the markets in the early seventies. After the dollar was cut free from the gold standard in 1971, volatility invaded the foreign exchange markets. Oil prices, which had remained stable for decades, exploded. And interest rates and commodity prices saw levels of volatility that would have been unthinkable in the three previous decades. Financial deregulation and inflation contributed to the mix as well.

As Peter Bernstein wrote in his history of risk, Against the Gods (1996), the rising market volatility of the 70s and 80s produced a new demand for risk management. In the face of all this volatility and uncertainty, Wall Street saw its traditional investing strategies as inconsistent and unpredictable. They seemed old-fashioned: the operating methods, as one senior Wells Fargo Bank officer wrote at the time, of a “cottage industry.” It seemed something new—some more "modern" innovation—was being called for.

That innovation arose from two sources, both of which burst upon the scene in 1973, the same year that a new exchange for managing risk, in this case by buying and selling stock options, opened for business. Innovations in computers and in financial models were jointly destined to create dramatic changes in the financial markets. The extraordinary power of computers greatly expanded the market’s ability to manipulate data and to devise and manage complex strategies and products. Models, for their part, seemed to offer a new and highly complex way to escape at least some of the uncertainty investors faced. It was the beginning of the age of modern risk management.

The 1973 series of events incorporated all the major elements of the changes to come:

  • In April 1973, the Chicago Board Options Exchange opened. The new exchange provided traders with an established process for trading stock options, including standardized contracts and market-makers who would buy and sell on demand, thus providing liquidity to the market. This was seen as a way to manage the risks involved in the stock market itself.

  • The following month, an article appeared in The Journal of Political Economy explaining for the first time the Black-Scholes-Merton method for valuing options. This model, expressed in complex algebra, used four basic elements to calculate the value of an option and in so doing included a quantitative method for determining volatility.

  • At the same time, Texas Instruments introduced its SR-10 handheld electronic calculator. Within months TI was running large ads in The Wall Street Journal pitching new possibilities to traders: “Now you can find the Black-Scholes value using our calculator.”

It didn’t take long for options traders to start using technical expressions from the Black-Scholes-Merton model to calculate the risks of their options. Armed with their new handheld calculators, traders on the floor of the Chicago Board Options Exchange could run a formula to quantify risk and automatically calculate the value of a given stock option. As Bernstein points out, a new era had begun in the world of risk management.
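
For readers curious about what those traders were actually punching in, here is a minimal sketch, in Python, of the Black-Scholes value of a European call option. The function name, inputs, and example numbers are my own, chosen purely for illustration, and are not drawn from Bernstein’s account.

from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, t_years, rate, volatility):
    """Value a European call option under the Black-Scholes model.

    spot       -- current price of the underlying stock
    strike     -- exercise price of the option
    t_years    -- time to expiration, in years
    rate       -- annualized risk-free interest rate (e.g. 0.05)
    volatility -- annualized volatility of the stock's returns (e.g. 0.30)
    """
    n = NormalDist()  # standard normal distribution
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * t_years) / (
        volatility * sqrt(t_years))
    d2 = d1 - volatility * sqrt(t_years)
    # Call value = S*N(d1) - K*exp(-rT)*N(d2)
    return spot * n.cdf(d1) - strike * exp(-rate * t_years) * n.cdf(d2)

# Example: a six-month option to buy at $45 a stock now trading at $40,
# with a 5% risk-free rate and 30% volatility. Prints roughly 1.9.
print(round(black_scholes_call(40, 45, 0.5, 0.05, 0.30), 2))

The point is not these particular numbers but that, once the formula existed, the whole calculation fit comfortably into a handheld device.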

What characterized this new era of risk management? Clearly it had much to do with the power of computers. Clearly it had much to do with complex mathematical models for expressing and predicting risk. And clearly it had much to do with an inordinate belief in the efficacy of those models and in the power of those computers to escape uncertainty by “managing” risk.

But how modern, how advanced, was it all? Toward the end of Against the Gods, Peter Bernstein offers a stark comparison between those who trust in complex calculations today and the ancient Greeks: “Those who live only by the numbers,” he observes, “may find that the computer has simply replaced the oracles to whom people resorted in ancient times for guidance in risk management and decision-making.”

So it seems that, as long as belief in the calculations of computer models prevails in the markets, we are not much better off than those who journeyed to Delphi long ago to worship Apollo and consult the oracle about their fates.
0 Comments

The Machines in the Markets (Part 2)

6/8/2010

2 Comments

 
I posted some thoughts about the financial crisis last week. Here’s more about how technology helped create the derivatives market in all its complexity and obscurity.

It’s a well-known fact that Wall Street exists to help make businesses run well, helping them raise cash and grow profitably. Financial transactions, it is understood, serve economic and social purposes. But as Roger Lowenstein points out in The End of Wall Street (2010) those purposes were undermined and perverted in the years running up to the 2007-2009 financial crisis. Before the 1990s Wall Street’s profits averaged about 1.2% of GDP. By 2005, that percentage had risen to 3.3% of GDP.  “The proper end of Wall Street is to oil the nation’s business,” Lowenstein writes. That end “became, in the bubble era, a goal in itself, a machine wired to inhuman perfection.”

That Wall Street machine included the automated securitization programs, which, using computer technology, created “structured” products that contained pools of securities. The Wall Street machine also included the mathematical models for forecasting the performance of those complex products under certain conditions over time.

What computers ended up enabling was a series of derivative financial instruments, each type of which was higher in risk and had less real economic purpose as time went on.  At each step along the way the banks added more complexity, which in turn created more abstraction and obscurity. Eventually the machine fed upon itself, building structured products that were detached from economic meaning. They offered investors pure side bets on other financial instruments. It is a picture of a machine gone mad in a market that had ceased to understand its own raison d’être.

Complex digital technology also defined who could participate in the derivatives market. These “innovative” structured products were available only to the large investment banks and commercial banks that could afford the technology and infrastructure needed to define and manage such complexities. In fact, none of this would have been possible without sophisticated computers and the quantitative scientists who used them.

Here’s how the series of derivatives developed over a period of years, including what they are, their economic purpose, the pitfalls, and how they evolved (or devolved) over time.

Mortgage-backed securities

What

These were bonds that contained pools of individual home mortgages. Wall Street banks would buy mostly prime mortgages (at least at the start) from smaller banks and other lending institutions. They would then group them into different levels (called “tranches”) based on the amount of risk involved and the return. The ratings agencies would certify the risks and assign ratings such as triple-A or triple-B.
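
To picture how the tranches worked, here is a minimal sketch of the loss “waterfall” idea. The tranche names, sizes, and loss amount are invented purely for illustration and are not taken from any actual deal.

def allocate_losses(pool_loss, tranches):
    """Distribute losses in a mortgage pool to tranches from the bottom up.

    tranches -- list of (name, size) pairs ordered from safest (senior)
                to riskiest (junior); sizes are dollar amounts.
    Returns the losses absorbed by each tranche.
    """
    losses = {}
    remaining = pool_loss
    # The junior tranches absorb losses first; the senior triple-A tranche
    # is touched only after the tranches below it are wiped out.
    for name, size in reversed(tranches):
        hit = min(remaining, size)
        losses[name] = hit
        remaining -= hit
    return losses

# A toy $100M pool split into three tranches, hit with $12M of defaults.
structure = [("AAA senior", 80_000_000),
             ("BBB mezzanine", 15_000_000),
             ("equity", 5_000_000)]
print(allocate_losses(12_000_000, structure))
# {'equity': 5000000, 'BBB mezzanine': 7000000, 'AAA senior': 0}

The higher tranches earned lower returns in exchange for that protection, and it was this ordering of risk that the ratings were supposed to certify.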

Economic purpose

As Wall Street purchased the mortgages, they gave much-needed cash to the smaller institutions, who in turn lent the money to other home buyers, making mortgages more plentiful. Secondly, by pooling the debt into bonds, Wall Street was able to distribute to investors the risks of those initially fixed-rate mortgages.

Pitfalls

The smaller institutions, who sold whole mortgages to Wall Street, effectively risked nothing in lending money to home buyers, paving the way for mortgages that were far riskier because of floating rates, insufficient credit checks on buyers, and other murky practices. Also, Wall Street banks paid the rating agencies for each ratings project so they could shop around for the rating they wanted.

How things devolved

More and more mortgages were subprime, often with low teaser interest rates that disappeared after two years. Interest-only mortgages became popular, along with other practices that practically ensured high rates of default. In addition, banks hid the details of the mortgages from the ratings agencies, making the rating process less than perfect. Moreover, banks also knew the criteria the ratings agencies used in their models for rating those bonds, so they could “game” the system.

Credit default swaps (CDSs)

What

First created in the early nineties, these were a form of insurance for investors in mortgage-backed securities. An investor would pay the insurance company (initially AIG) a premium. In turn, the insurance company would agree to pay the bond holder should the mortgages default.

Economic purpose

Investors in mortgage bonds could hedge against the risk of default on the mortgages in a bond.

Pitfalls

Anyone could buy a credit default swap. You didn’t have to own a mortgage bond. In this way the product was easily disconnected from its original economic purpose. (It’s as if you could buy flood insurance on a house on a flood plain without owning the house.) This made the side bets (selling “short”) possible. This also meant that derivative traders could profitably mint and sell unlimited quantities of these products to whoever wanted to “bet” on mortgage defaults.

Secondly, the insurers, initially AIG, misunderstood the risks involved. For some years, they had been “insuring” corporate bonds, which have an extremely low rate of default. They just assumed that mortgage-backed bonds were similar in their risks. They weren’t. As a result, they sold insurance at far too low a price. Thus they might collect $10M in premiums and guarantee $20B in bonds. This made credit default swaps a cheap gamble for those who wanted to bet against the mortgage bonds.
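
A rough back-of-the-envelope calculation shows why that pricing was so dangerous. The default probabilities below are made-up placeholders, not AIG’s actual assumptions; they only illustrate how sensitive the arithmetic is.

def expected_payout(notional, default_probability, loss_given_default=1.0):
    """Expected loss to the insurer on credit default swap protection."""
    return notional * default_probability * loss_given_default

notional = 20_000_000_000   # $20B of bonds "insured"
premiums = 10_000_000       # $10M collected in premiums

# If mortgage bonds defaulted as rarely as the corporate bonds the insurer
# was used to, the premiums look comfortable:
print(expected_payout(notional, 0.0002))   # $4M expected payout vs. $10M collected

# But if a housing downturn pushes losses to even a few percent of the pool,
# the same $10M in premiums stands against a billion-dollar payout:
print(expected_payout(notional, 0.05))     # $1B expected payout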

How things devolved

As the subprime mortgage industry grew, there were many more investors willing to speculate on its demise. The market for credit default swaps burgeoned in the late 90s and onward.

Collateralized debt obligation (CDOs)

What

These were bonds created by packaging up a hundred or so mortgage  bonds (or, to make it yet more complicated, portions of those bonds). Since each mortgage bond might contain 1000 mortgages, a CDO might contain bonds representing hundreds of thousands of mortgages.

Economic purpose

Banks could offer new derivatives with high returns and low risk, and they had spiffy models that seemed to demonstrate this over time. But eventually a CDO became, as Michael Lewis describes it in The Big Short, just a scheme for laundering the debt of lower-middle-class home buyers. It enabled the subprime market to flourish.

Pitfalls

CDOs were so complicated and so abstract that it was just about impossible to understand the risks involved. And all investors had to go on were the models and the ratings. Also, it required $50B worth of original mortgages to create a $1B CDO, making them harder to come by.

How things devolved

Beginning in 2004, the mix in the mortgage-backed bonds began to shift heavily into subprime loans until about 95% of the bonds were subprime (originally only about 2% of mortgage-backed bonds were subprime). Usually the “portions” of the bonds that were packaged into a CDO were the lowest-rated (triple-B) tranches. The models that re-rated these riskiest bonds as triple-A could do so, according to the financial wizards on Wall Street, because the likelihood that many of them would default at the same time was supposedly extremely low. Even sophisticated Wall Street investors did not understand how the rules of the game were changing. The ratings agencies didn’t have models for CDOs, so the banks that packaged the derivatives would send their own models to the ratings agency when asking for an official rating. Hence both the models and the ratings were wrong.
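
The re-rating logic hinged on treating defaults as nearly independent events. The short simulation below, with invented numbers, contrasts that assumption with a pool whose bonds also respond to a shared housing-market factor; each bond has the same 10% chance of default in both cases.

import random

def chance_of_mass_default(num_bonds=100, p_default=0.1, correlation=0.0,
                           trials=20_000):
    """Estimate the chance that more than 30% of the bonds in a pool default.

    correlation = 0 treats each bond as an independent coin flip (roughly the
    optimistic view behind the triple-A re-ratings); correlation > 0 lets a
    shared "bad housing market" event drag many bonds down at once.
    """
    bad_trials = 0
    for _ in range(trials):
        market_crash = random.random() < p_default        # shared systemic event
        defaults = 0
        for _ in range(num_bonds):
            if random.random() < correlation:
                defaults += market_crash                   # bond follows the market
            else:
                defaults += random.random() < p_default    # idiosyncratic risk
        if defaults > 0.3 * num_bonds:
            bad_trials += 1
    return bad_trials / trials

print(chance_of_mass_default(correlation=0.0))  # ~0.0: mass default looks impossible
print(chance_of_mass_default(correlation=0.5))  # ~0.1: the same pool is suddenly fragile

Same bonds, same individual default rates; only the assumption about how they move together changes, and with it the value of the triple-A label.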

Synthetic CDOs

What

These synthetic collateralized debt obligations contained credit default swaps on about 100 triple-B-rated bonds.

Economic purpose

A synthetic CDO simply provided a way for investors to bet on the mortgage and housing industry; in fact, it served no real economic purpose.

Pitfalls

The ratings agencies didn’t have a clue as to the value of these complex and obscure derivatives. Neither did investors.

How it devolved

In evaluating a synthetic CDO with the computer models built by the banks, the rating agencies rerated as triple-A roughly 80% of the credit default swaps on triple-B-rated bonds. Of course the remaining 20%, still rated triple-B, were harder to sell. So the banks would repackage a bunch of the credit default swaps on triple-Bs and reprocess them. Each time, 80% of those bonds were rerated triple-A.
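
Iterating that 80/20 step shows where the process leads. The sketch below simply applies the ratio described above to a notional pool of triple-B exposure; the numbers are schematic, not figures from any particular deal.

def rerate(total_bbb, rounds):
    """Follow triple-B exposure through repeated repackaging.

    Each round, 80% of whatever is packaged is re-rated triple-A and sold;
    the unsold 20% is bundled into the next synthetic CDO and run through
    the same models again.
    """
    triple_a, leftover = 0.0, total_bbb
    for _ in range(rounds):
        triple_a += 0.8 * leftover
        leftover *= 0.2
    return triple_a, leftover

for rounds in (1, 2, 3, 4):
    aaa, bbb = rerate(100.0, rounds)
    print(f"after {rounds} round(s): {aaa:.1f}% triple-A, {bbb:.1f}% still triple-B")
# after 1 round(s): 80.0% triple-A, 20.0% still triple-B
# after 2 round(s): 96.0% triple-A, 4.0% still triple-B
# after 3 round(s): 99.2% triple-A, 0.8% still triple-B
# after 4 round(s): 99.8% triple-A, 0.2% still triple-B

After a few passes, material that began as the riskiest slice of subprime mortgage bonds is almost entirely wearing a triple-A label.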

The subprime mortgage market was really only a small percentage of the whole mortgage industry. But because there were no real economic limitations on the number of credit default swaps, the losses in the financial sector through both credit default swaps and synthetic CDOs grew much bigger than the subprime market itself, creating a tower of liabilities.

And the rest, as they say, is history.

 
2 Comments

The Machines in the Markets

5/31/2010

0 Comments

 
If there is a dominant metaphor in the books and articles about the recent financial crisis, it is that of a machine, an invisible, incomprehensible engine that increased in complexity, scale, and speed until it grew out of control. Michael Lewis entitles his book The Big Short: Inside the Doomsday Machine (2010), making a forbidding allusion to the fated mechanisms that led to the meltdown on Wall Street. In 13 Bankers (2010), economist Simon Johnson refers to the derivatives process as a “securitization machine.” That machine automatically and efficiently created derivatives like credit default swaps. One leading investor who figures prominently in The Big Short, Steve Eisman, called the derivative process the “engine of doom” and spoke of the “madness of the machine.” “It was like an unthinking machine,” he told Michael Lewis, “that could not stop itself.”

This automated electronic monster seemed to have a life of its own. And Wall Street, which had long reveled in the wonders of technology, marveled at it. Since the eighties, Wall Street companies had been hiring Ph.D. scientists who understood digital technology. Those scientists, called “the quants,” built the software machines that fueled tremendous growth in the financial industry. They applied mathematical models of uncertainty to financial data and increasingly complex products. They readily wrote new algorithms, built mathematical models to quantify risks, and devised procedures and operations to handle the new complexities. As a result, the markets worked faster and more efficiently. But as the years rolled on, the financial instruments became more byzantine and opaque. Finally those products, which were designed to manage risk, were actually creating new risks out of thin air through high-tech obfuscation.

Much of this complexity was done in the name of “innovation.” Financial innovation, like technological innovation, had become a good in and of itself. Alan Greenspan has long been a big proponent of innovation in financial markets.  In his autobiography, The Age of Turbulence (2007), Greenspan praised “the development of technologies that have enabled financial markets to revolutionize the spreading of risk. Three or four decades ago, markets could only deal with plain vanilla stocks and bonds. Financial derivatives were simple and few. But with the advent of the ability to do around-the-clock business real-time in today’s linked worldwide markets, derivatives, collateralized debt obligations, and other complex products have arisen that can distribute risk across financial products, geography, and time.” (488) According to Greenspan these financial innovations were all for the good. After all, they contributed to growth, productivity, and increases in market efficiency.

The quants also designed another type of machine, a manufacturing machine, if you will, for creating “innovative” derivatives. And they built a third type of machine: computer models that used scenarios to “demonstrate” how derivatives would perform under certain conditions. In effect, the software models, while complex in themselves, gave a powerful set of easy-to-use tools to Wall Street traders and salespeople. Thus they could start conversations with their customers about very complex derivative products. It didn’t seem to matter that most people on Wall Street didn’t understand them.

In Lecturing Birds on Flying: Can Mathematical Models Destroy Financial Markets? (2010),  Pablo Triana, himself a seasoned trader, says these models made it possible to demonstrate with mathematical precision how derivatives would produce returns under given conditions. And many people—both traders and investors—believed in the models. They trusted the numbers that were displayed in all their high-tech glory on the screens. Unfortunately they did not understand the underlying securities, the assumptions built into the models, or the methods by which the models were built.

In fact, the slick and sophisticated models created widespread overconfidence in the forecasts. The traders, the salespeople, and the investors looked at the numerical certainty of the models and were convinced by what they said, ignoring the fact that financial markets are by their nature unpredictable and vulnerable to crises. In some cases, the models, it seemed, just gave bankers justification for taking on more and more risk while at the same time appearing highly sophisticated to the outside world.

This belief in the truth of technology is not uncommon.  Alan Greenspan himself expressed a similar kind of blind faith in financial innovation and high-tech complexity when he compared the financial markets to a U.S. Air Force B-2 airplane. Our twenty-first century markets are too big, too complex, and too fast to be governed by twentieth century regulation and supervision, he argues toward the end of his autobiography. The movement of funds is too huge, the entire market system far too complex, the daily transactions far too numerous, to be understood and regulated. And this is OK. After all, a U.S. Air Force pilot does not need to know all the computer-aided adjustments that keep his B-2 in the air. Why should we expect to know how the markets behave?

But the analogy breaks down: there’s a lot of solid scientific knowledge in the B-2. A team of top scientists and engineers worked for years to design, build, and test it. Other crews of highly skilled maintenance workers ensure that its systems are all working correctly before each flight. But the markets are at bottom a social system that does not operate according to such predictable laws.

 The nineteenth century didn’t see it that way. At that time, economists adopted certain scientific terms—equilibrium, pressure, momentum— to explain how the economy and financial markets operated, with the underlying assumption that these systems did follow laws similar to the laws of nature upon which sciences like physics and chemistry were based. But we are a long way from that kind of certainty in financial affairs. In the twentieth century, after World War I and the Depression, deep uncertainty started to color our understanding of markets as economists considered the role of human nature and its irrationality in markets.

The last thirty to forty years have seen the rise of two other major changes in the market: Volatility has become a major factor in modern markets. At the same time, and somewhat by coincidence at the beginning, computers came to play a dominant role in those markets. Those two major changes have developed in tandem and now are recombining to create new uncertainties on top of the old ones based on human nature. That combination of computer systems—with all their fallibilities, unintended consequences, and illusion of truth—and highly volatile markets is what we face today. And no one knows how the two will play off one another in the years to come.

0 Comments

    Cynthia's Blog Plan

    I'll aim to post here a few times a month, based on current events and my ongoing research.