Digital Athena

The Machines in the Markets

5/31/2010

If there is a dominant metaphor in the books and articles about the recent financial crisis, it is that of a machine: an invisible, incomprehensible engine that increased in complexity, scale, and speed until it grew out of control. Michael Lewis titled his book The Big Short: Inside the Doomsday Machine (2010), a forbidding allusion to the fated mechanisms that led to the meltdown on Wall Street. In 13 Bankers (2010), economists Simon Johnson and James Kwak refer to the derivatives process as a “securitization machine,” one that automatically and efficiently created derivatives like credit default swaps. One leading investor who figures prominently in The Big Short, Steve Eisman, called the derivative process the “engine of doom” and spoke of the “madness of the machine.” “It was like an unthinking machine,” he told Michael Lewis, “that could not stop itself.”

This automated electronic monster seemed to have a life of its own. And Wall Street, which had long reveled in the wonders of technology, marveled at it. Since the eighties, Wall Street firms had been hiring Ph.D. scientists who understood digital technology. Those scientists, called “the quants,” built the software machines that fueled tremendous growth in the financial industry. They applied mathematical models of uncertainty to financial data and to increasingly complex products. They readily wrote new algorithms, built mathematical models to quantify risks, and devised procedures and operations to handle the new complexities. As a result, the markets worked faster and more efficiently. But as the years rolled on, the financial instruments became more byzantine and opaque. In the end, the products designed to manage risk were creating new risks out of thin air through high-tech obfuscation.

Much of this complexity was created in the name of “innovation.” Financial innovation, like technological innovation, had become a good in and of itself. Alan Greenspan has long been a major proponent of innovation in financial markets. In his autobiography, The Age of Turbulence (2007), Greenspan praised “the development of technologies that have enabled financial markets to revolutionize the spreading of risk. Three or four decades ago, markets could only deal with plain vanilla stocks and bonds. Financial derivatives were simple and few. But with the advent of the ability to do around-the-clock business real-time in today’s linked worldwide markets, derivatives, collateralized debt obligations, and other complex products have arisen that can distribute risk across financial products, geography, and time.” (488) According to Greenspan, these financial innovations were all to the good. After all, they contributed to growth, productivity, and increases in market efficiency.

The quants also designed another type of machine, a manufacturing machine, if you will, for creating “innovative” derivatives. And they built a third type: computer models that used scenarios to “demonstrate” how derivatives would perform under given conditions. In effect, the software models, complex as they were internally, gave Wall Street traders and salespeople a powerful set of easy-to-use tools for starting conversations with their customers about very complex derivative products. It didn’t seem to matter that most people on Wall Street didn’t understand them.

In Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets? (2009), Pablo Triana, himself a seasoned trader, says these models made it possible to demonstrate with mathematical precision how derivatives would produce returns under given conditions. And many people—both traders and investors—believed in the models. They trusted the numbers displayed in all their high-tech glory on the screens. Unfortunately, they did not understand the underlying securities, the assumptions built into the models, or the methods by which the models were built.

In fact, the slick and sophisticated models created widespread overconfidence in their forecasts. Traders, salespeople, and investors looked at the numerical certainty of the models and were convinced by what they said, ignoring the fact that financial markets are by their nature unpredictable and vulnerable to crises. In some cases, it seemed, the models simply gave bankers justification for taking on more and more risk while appearing highly sophisticated to the outside world.
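
To make the point concrete, here is a toy sketch of my own (not a model from any of the books discussed here) of how an assumption of “well-behaved” returns understates rare losses. It compares the one-in-a-hundred-day and one-in-a-thousand-day losses under a normal-distribution model with those of a fat-tailed model calibrated to the same day-to-day volatility; the volatility figure is an arbitrary assumption.

    # A toy sketch, not any bank's actual model: two return distributions
    # with the same everyday volatility but very different rare losses.
    import math
    import random

    random.seed(1)
    N = 200_000        # simulated trading days
    sigma = 0.01       # assumed 1% daily volatility (illustrative)

    # Model A: normally distributed returns, the classic textbook assumption.
    normal = [random.gauss(0, sigma) for _ in range(N)]

    # Model B: fat-tailed returns (Student-t, 3 degrees of freedom),
    # rescaled so its standard deviation matches Model A's.
    def student_t(df):
        chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
        return random.gauss(0, 1) / math.sqrt(chi2 / df)

    scale = sigma / math.sqrt(3.0 / (3.0 - 2.0))  # variance of t(3) is df/(df-2)
    fat = [student_t(3) * scale for _ in range(N)]

    def loss_at(returns, p):
        """The loss exceeded only a fraction p of the time."""
        return sorted(returns)[int(p * len(returns))]

    for p in (0.01, 0.001):
        print(f"1-in-{int(1/p)} day loss:  normal {loss_at(normal, p):+.2%}  "
              f"fat-tailed {loss_at(fat, p):+.2%}")
    # Both models agree on a typical day; the fat-tailed one shows the
    # far larger rare losses that the confident screens never displayed.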

This belief in the truth of technology is not uncommon. Alan Greenspan himself expressed a similar kind of blind faith in financial innovation and high-tech complexity when he compared the financial markets to a U.S. Air Force B-2 bomber. Toward the end of his autobiography, he argues that our twenty-first-century markets are too big, too complex, and too fast to be governed by twentieth-century regulation and supervision. The movement of funds is too huge, the market system far too complex, the daily transactions far too numerous, to be understood and regulated. And that, for Greenspan, is fine: after all, a U.S. Air Force pilot does not need to understand all the computer-aided adjustments that keep his B-2 in the air, so why should we expect to know how the markets behave?

But the analogy breaks down. There is a great deal of solid scientific knowledge behind the B-2: a team of top scientists and engineers worked for years to design, build, and test it, and crews of highly skilled maintenance workers ensure that its systems are working correctly before each flight. The markets, by contrast, are at bottom a social system, and they do not operate according to such predictable laws.

The nineteenth century didn't see it that way. Economists of that era adopted scientific terms—equilibrium, pressure, momentum—to explain how the economy and financial markets operated, on the underlying assumption that these systems followed laws like the laws of nature on which physics and chemistry were built. In the twentieth century, after World War I and the Depression, deep uncertainty began to color our understanding of markets as economists confronted the role of human nature, and its irrationality, in markets. We remain a long way from that nineteenth-century kind of certainty in financial affairs.

The last thirty to forty years have brought two further major changes to the markets. Volatility has become a major factor in modern markets, and at the same time, somewhat by coincidence at first, computers have come to play a dominant role in them. These two changes developed in tandem and are now combining to create new uncertainties on top of the old ones rooted in human nature. That combination of computer systems—with all their fallibilities, unintended consequences, and illusion of truth—and highly volatile markets is what we face today. And no one knows how the two will play off one another in the years to come.


The Coming of Posthumanism, or How to Build a Better God

5/4/2010


What do technologists, especially futurists, really want?

What inspires their dramatic visions of our future?

“We technologists are ceaselessly intrigued by rituals in which we attempt to pretend that people are obsolete.” So opines Jaron Lanier, the father of virtual reality, in his new book, You Are Not a Gadget. Lanier is talking about people like Kevin Kelly, who thinks that, once Google has digitized all books, we won’t need authors anymore. We can just assemble all the fragments into one big book and mix them up however we please. Lanier is also talking about Ray Kurzweil and other proponents of the Singularity, a future time when humans are supposed to merge into a larger consciousness, a consciousness that will encompass both our electronic machines and ourselves in a single digital system of reality.

Kurzweil anticipates a time, which he calculates to be around 2045, when machine intelligence will outpace that of humans. It will then be feasible for human beings to gain more intelligence by merging with machines. In this future, machines will be better than humans at pattern recognition, at problem solving, and even, Kurzweil claims, at emotional and moral intelligence. Humans will use the advantages of machines to transcend the human brain's limitations. Such advantages include superior processing and memory capacity, speed, and a so-called “knowledge-transfer” capability (which is really a fancy term for copying information from one machine to another). At that point, Kurzweil predicts, the distinctions between machines and humans will disappear.

In 1854, Henry David Thoreau was profoundly worried that the products of the industrial revolution (steam engines, railroads, and the like) were radically changing American culture and beginning to dominate so many facets of our lives. Men were becoming “tools of their tools.” Now, one hundred and fifty years later, we have Ray Kurzweil actually looking forward to a time when human intelligence will be truly subservient to machines. Thoreau would be appalled. So why does Kurzweil think this is a good thing? It is not so much that he wants to get rid of people. It's just that the potential power of the machines is so fascinating, indeed so seductive, that he cannot resist the temptation to conjure up a future in which the best of human intelligence can be captured and improved upon in machines.

Naturally there are many objections to such bold predictions: technical, moral, ethical, visceral, even common-sense objections. For the moment, however, I want to set aside such arguments and look at what Kurzweil's futuristic vision is responding to. Broadly speaking, Kurzweil and many others evaluate human brain power by comparing it to computer processors, using the language and technical measurements of computer science to do so. Thus what is quantifiable in the realm of computer hardware and, to a lesser extent, in the human brain becomes the only ground for comparison. Many other aspects of human intelligence—all those messy emotions, for example, or creativity—are left out.

Here are the major lines of comparison as Kurzweil sees them:

The circuitry in the brain is slow

For human beings, simple tasks such as recognizing objects typically take about 150 milliseconds. The process of thinking something over or evaluating something takes even longer. Computers, by comparison, are much faster: typical cycle speeds are measured in millions or even billions of cycles per second.
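
The raw arithmetic behind that comparison is simple. In the sketch below, the 2 GHz clock rate is my own illustrative assumption, not a figure from Kurzweil:

    # Back-of-the-envelope speed comparison (the clock rate is an
    # illustrative assumption).
    recognition_time_s = 0.150    # ~150 ms for a human to recognize an object
    cpu_clock_hz = 2e9            # an assumed 2 GHz processor

    cycles = recognition_time_s * cpu_clock_hz
    print(f"Cycles a 2 GHz chip completes in one human recognition: {cycles:,.0f}")
    # About 300 million cycles per single act of human recognition.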

The brain is massively parallel

Parallel-processing computers are machines that work on multiple portions of a task concurrently in order to speed up the whole. The brain is massively parallel in the sense that its one hundred trillion synapses (connection sites) can potentially be firing simultaneously. While humans currently hold the advantage here, Kurzweil is quick to point out that today's largest supercomputers are nearing the computational capacity of the brain.
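
A rough calculation shows why the gap is closer than the raw synapse count suggests. In the sketch below, the average firing rate and the supercomputer figure are my own illustrative assumptions, not numbers from Kurzweil:

    # Rough parallelism arithmetic; the firing rate and machine size are
    # illustrative assumptions.
    synapses = 1e14               # ~100 trillion connection sites
    firings_per_second = 200      # assumed average events per synapse

    brain_events = synapses * firings_per_second   # ~2e16 events/sec
    supercomputer_ops = 1e15      # an assumed petaflop-class machine (~2010)

    print(f"Brain, in aggregate: ~{brain_events:.0e} synaptic events/sec")
    print(f"Supercomputer:       ~{supercomputer_ops:.0e} operations/sec")
    # On these assumptions the brain leads by only an order of magnitude
    # or so -- the gap Kurzweil expects hardware to close.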

The brain’s memory is limited

Based on the development of expert systems for medicine, it is estimated that humans can master about 100,000 concepts in any given domain. Kurzweil uses his own experience in rules-based and self-organizing pattern-recognition systems to estimate the total capacity of human functional memory at 10^13 (ten trillion) bits. Current estimates project that by 2018 it will be possible to buy that many bits of memory for one thousand dollars.
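
The arithmetic behind those figures is worth a quick sanity check (the 10^13-bit estimate is Kurzweil's; the conversions below are mine):

    # Converting Kurzweil's memory estimate into everyday units.
    functional_memory_bits = 1e13            # Kurzweil's estimate: 10^13 bits
    total_bytes = functional_memory_bits / 8

    print(f"10^13 bits = {total_bytes / 1e12:.2f} terabytes")   # 1.25 TB

    # At the projected 2018 price of $1,000 for 10^13 bits:
    dollars = 1000
    per_gb = dollars / (total_bytes / 1e9)
    print(f"Implied price: ${per_gb:.2f} per gigabyte")         # about $0.80/GB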

Kurzweil catalogs other characteristics using the same brain-versus-computer comparison, but the outline of the argument remains much the same as in the examples above. Computer hardware is already faster than the human brain and has greater capacity, memory, and storage. Where it still lags behind, it will catch up by the year 2020. As for the software, Kurzweil believes that once we have the ability to scan the entire human brain with our powerful new hardware, we can create brain models and come to understand the workings of the brain well enough to begin uploading it to machines.

All these projections assume exponential growth in technology. The source of all this optimism is Moore's Law. Intel co-founder Gordon Moore observed in the mid-seventies that the number of transistors that could be placed on an integrated circuit had been doubling roughly every two years, and he predicted that the doubling would continue at the same rate well into the future. Processing speed would also increase, since the electrons would have less distance to travel.
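
The law itself is a single compounding rule. A minimal sketch, where the 1975 baseline transistor count is my own round number for illustration:

    # Moore's Law as a projection rule; the baseline count is a
    # round-number assumption for illustration.
    def transistors(year, base_year=1975, base_count=10_000, doubling_years=2):
        """Project transistor counts under a strict two-year doubling."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1975, 1985, 1995, 2005):
        print(year, f"{transistors(year):,.0f}")
    # Every decade multiplies the count by 2**5 = 32: the compounding
    # behind the futurists' faith in exponential growth.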

Moore was right, and this rate of progress has captured the imagination of technologists, especially when they think about the future. It leads to the widely held belief that, in every technology, exponential growth is inevitable, indeed unstoppable, and always to be desired. This belief can lead to some pretty weird predictions, such as Kurzweil's speculation that even the speed of light might be increased or somehow circumvented in this never-ending chase for the ultimate in technology.

Perhaps technologists like Kurzweil aren't so much trying to render people unnecessary as they are pursuing a seemingly endless attraction to digital technology and its power. In many ways, the situation is reminiscent of religion in the Middle Ages: in comparison to an all-powerful God, humans saw themselves as far less worthy; the hardship and pain of life on earth, all our sins and weaknesses, would be swept away in an afterlife of bliss and oneness with God.

Nowadays, technology makes humans look slow, inadequate, and prone to error. Many look forward to a time when machine intelligence will surpass human intelligence and when we humans can become one with the digital consciousness, achieving a new form of immortality in the process. We will upload ourselves into the “cloud.” Essentially, we will build a better God. Once this singularity is achieved, “nonbiological intelligence,” which today we call artificial intelligence, will quickly overshadow human intelligence. Kurzweil is looking forward to it. As for the rest of us, the Singularity just seems to foretell a time when we may truly become “tools of our tools.” Or will we simply be obsolete?

