Digital Athena

The Human Brain Is a Computer? The Limits of Metaphor

4/26/2014

Metaphors matter. They matter a lot, shaping not only the way we communicate but also how we think, feel, and even behave. George Lakoff and Mark Johnson explained this well in their now classic work, Metaphors We Live By. Their premier example in that book analyzed how the concept "argument" becomes colored by its close association with the metaphor "war." Thus "argument is war." Here are some of the expressions they found that structure how we think about an argument as a war:

Your claims are indefensible.

He attacked every weak point in my argument.

His criticisms were right on target.

He shot down all my arguments.

Essentially, Lakoff and Johnson contend that metaphors shape the way we experience and understand the concepts they are attached to, so that in the case of argument, for example, we in part understand it, act it out, and talk about it in terms of war. It's not a dance. It's not a writing process. It's a battle.

The widespread use of the metaphor of the computer to describe the workings of the human brain today has a similar effect. By using such an analogy, people are accepting the implication that the human brain is simply a logical device. This leads to statements, and by implication activities, such as the following:

IBM's Blue Brain Project is attempting to reverse-engineer the human brain.

Modern architectural design acknowledges that how buildings are structured influences how people interface.

The position of department head requires an expert multitasker capable of processing multiple projects at any given time.

His behavior does not compute.

Human beings do possess logical functions. But the danger of using the digital computer, which runs algorithms built from simple logical operations like IF, THEN, ELSE, and COPY, as a metaphor for the brain is what it leaves out: messy feelings, ambiguous behaviors, irrational thoughts, and the natural ebb and flow of memories. It also leaves out the influences of our subconscious--and of the rest of our physical, organic bodies--on how we think, act, and make decisions. Thinking of the brain as a computer tells us very little about what it feels like to be a human being, very little about what it feels like to be alive.
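
To make the comparison concrete, here is a minimal sketch of the kind of conditional logic a digital computer actually runs. It is purely illustrative (the stimulus/response names are invented, not drawn from any neuroscience model), but every decision a program makes bottoms out in branches like these:

    # A purely illustrative sketch: the stimulus/response pairs are invented,
    # but a digital computer's "decisions" reduce to branches of this kind.
    def respond(stimulus):
        if stimulus == "threat":
            return "flee"       # IF this, THEN that
        elif stimulus == "food":
            return "approach"   # ELSE IF ...
        else:
            return "ignore"     # ELSE ...

    print(respond("threat"))    # prints "flee", with no feeling attached

Nothing in such a branch has room for the messiness described above; that is precisely the limit of the metaphor.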

In The Myth of the Machine, Lewis Mumford argued that far too much emphasis has been placed on distinguishing humans from animals by our tool-making capacities. He wrote that there was nothing uniquely human in our tool-making; after all, apes use tools. Rather, it was the human mind, "based on the fullest use of all his bodily organs," and that mind's capacity to create language and use symbols that allowed human beings to build the social organizations and civilizations that distinguish us from other animals. It was through symbols and language that humans rose above a purely animal state. The ability to create symbols, to be conscious of life and death, of past and future, of tears and hopes, distinguishes humans from other animals far more than any tool-making capability. "The burial of the body tells us more about man's nature than would the tool that dug the grave."

If we continue to distinguish human beings from other animals along the lines of tool-making, Mumford believed, the trajectory would be quite dire:

"In terms of the currently accepted picture of the relation of man to technics, our age is passing from the primeval state of man, marked by his invention of tools and weapons for the purpose of achieving mastery over the forces of nature, to a radically different condition, in which he will have not only conquered nature, but detached himself as far as possible from the organic habitat."

So we need to be careful about using the metaphor of the computer, our most modern of tools, to describe our minds and what it means to be a human being.


Ray Kurzweil's Mind

1/23/2014

Ray Kurzweil incessantly dreams of the future. And it's a future he describes as a "human-machine civilization." In How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil looks forward to a time when technology will have advanced to the point where it will be possible to gradually replace all the parts of the body and brain with nonbiological ones. And he claims that this will not change people's identities any more than the natural, gradual replacement of the cells in our bodies does now. All this will come about after scientists and engineers, currently working on brain models in many different organizations around the world, succeed in creating a complete model of the human brain. Kurzweil contends that the neocortex functions hierarchically and works by pattern recognition. Therefore, he argues, it is possible to write algorithms that will simulate how the brain actually works. That, in combination with increasing miniaturization, will make the substitution of nonbiological components possible by the 2030s.
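
Kurzweil's proposal rests on hierarchies of pattern recognizers. The toy sketch below is not his algorithm (his account uses far more elaborate statistical machinery); it is only an invented illustration of the general idea that higher-level recognizers fire when enough of their lower-level children have fired:

    # A toy illustration of hierarchical pattern recognition, not Kurzweil's
    # actual model: each recognizer fires when enough of its children fire.
    class Recognizer:
        def __init__(self, name, children, threshold):
            self.name = name
            self.children = children    # raw feature names or lower-level recognizers
            self.threshold = threshold  # fraction of children that must fire

        def fires(self, features):
            hits = sum(
                1 for c in self.children
                if (c in features if isinstance(c, str) else c.fires(features))
            )
            return hits / len(self.children) >= self.threshold

    # Invented example: stroke detectors combine into a letter detector.
    horizontal = Recognizer("horizontal stroke", ["h1", "h2"], 0.5)
    vertical = Recognizer("vertical stroke", ["v1"], 1.0)
    letter_a = Recognizer("letter A", [horizontal, vertical], 1.0)

    print(letter_a.fires({"h1", "v1"}))  # True: both stroke recognizers fired

Whether stacking such recognizers ever adds up to a mind is, of course, exactly what is in dispute.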

That the human brain is akin to a digital computer remains a big and very contentious issue in neuroscience and cognitive psychology circles. In the January issue of Scientific American, Yale professor of psychology John Bargh summarizes some of the latest thinking about this problem. Specifically, he addresses the major role of the unconscious in how people make decisions, how they behave in various situations, and how they perceive themselves and the world around them. There is a complex dynamic between our controlled conscious thought processes and the unconscious, often automatic, processes of which we are not aware. Nobelist Daniel Kahneman explained this phenomenon in Thinking, Fast and Slow: automatic thought processes happen quickly and do not include planning or deliberation.

Even Daniel Dennett, an eminent philosopher and cognitive scientist who has long held that neurons function as simple on-off switches, making them logical switches akin to digital bits, has recently changed his mind about the analogy of the human mind to a computer: "We're beginning to come to grips with the idea," he says in a recent Edge talk, "that your brain is not this well-organized hierarchical control system where everything is in order. . . . In fact, it's much more like anarchy. . . ." Yet even with this concession Dennett is still inclined to use the computer as a metaphor for the human brain. This leads him to make a curious statement, one that actually begs the question: "The vision of the brain as a computer, which I still champion, is changing so fast. The brain's a computer, but it's so different from any computer you're used to. It's not your desktop or your laptop at all."

By his own admission, Dennett's talk is highly speculative: "I'd be thrilled if 20 percent of it was right." What I think he means is that the brain is like a computer that is far more complex than existing machines but that also has intention. The neurons are "selfish"; they are more like agents than computer instructions, which in turn are more like slaves. "You don't have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn't want to do." Computers, on the other hand, are made up of "mindless little robotic slave prisoners." So I'm not sure how helpful it is for Dennett to think of the brain as a computer at all. And Dennett's views on neurons and agents, combined with the more recent thinking about the impact of the unconscious on conscious thought, lead me to conclude that Ray Kurzweil's dream of someday replacing the human brain with robotic switches is just that: a dream.

How Big Is Big Data?

7/12/2013

Big Data. The very concept seems to demand, indeed require, that massive pronouncements and claims of Herculean proportions should follow. Such a concept must inevitably overwhelm previous trends and satisfy even the most unbelievable expectations. But what in truth is the story that proponents of Big Data are (loudly) proclaiming? And is it a fad that's here today, only to be gone tomorrow? Or does it indicate a more deeply embedded belief system, part of a living myth, for our time?

To find an answer to this question, I turned to the latest book on the subject, appropriately entitled Big Data, with one of those absolutely headline-grabbing subtitles that is designed to boggle the mind (and presumably make the casual observer pick up the book and, hopefully, buy it):  A Revolution That Will Transform How We Live, Work, and Think. OK, I thought, so what kind of a transformation are we talking about here?

First let me say that the authors come well credentialed. Viktor Mayer-Schönberger teaches at the Oxford Internet Institute at Oxford University and, we are told, is the author of eight books and countless articles. He is a "widely recognized authority" on big data. His co-author, Kenneth Cukier, hails from the upper echelons of journalism: he's the data editor for The Economist and has written for other prominent publications as well, including Foreign Affairs.

This was a good place to start, I thought, to learn about the story of big data and the kind of changes—oops, I mean transformations—that it was inevitably going to produce in our world. The major transformation the authors predict is that computer systems will soon be replacing, or at the very least augmenting, human judgment in countless areas of our lives. The chief reason is the enormous amount of data that has recently become available. Digital technology now gives us easy and cheap access to vast amounts of information, frequently collecting it passively, invisibly, and automatically.

The result is a major change in the general mindset. People are looking at data to find patterns and correlations rather than setting up hypotheses to prove causality: "The ideal of identifying causal mechanisms is a self-congratulatory illusion; big data overturns this. Yet again we are at a historical impasse where 'god is dead.' That is to say, the certainties that we believed in are once again changing. But this time they are being replaced, ironically, by better evidence."

So there you have it. God is dead, yet again. Only this time the god is the god of the scientific method, of causality. Out with the "why," in with the "what." If Google can identify an outbreak of H1N1 flu and pinpoint areas with significantly high rates of infection, the authors ask, is there any reason to worry about why it is occurring in those places when we already know the what: there is an outbreak of flu, and it is especially heavy in these locations.
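
What the authors are celebrating is, at bottom, a correlation. The sketch below uses invented weekly numbers to show how little machinery the "what" requires: a strong Pearson correlation between flu-related searches and reported cases tells you where to act without saying a word about why.

    # Invented weekly counts, used only to illustrate correlation without causation.
    searches = [120, 340, 560, 800, 950, 700, 400]   # flu-related search volume
    cases = [15, 40, 70, 110, 130, 95, 50]           # reported flu cases

    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    # A value near 1 says the two series move together; it says nothing about why.
    print(round(pearson(searches, cases), 3))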

We have, my friends, slid into the gentle valley of the "Good Enough." Correlation is good enough for now. It's fast, it's cheap, it's here, let's use it. We'll get around to the why later, maybe, if it's not too complicated and expensive to find out. And here are some of the examples the authors use for proof of the good enough of correlations: "After all, Amazon can recommend the ideal book. Google can rank the most relevant website, Facebook knows our likes, and LinkedIn divines whom we know." Such exaggerated attribution of insight and intuition to computer algorithms is so common these days that it's seldom even called out.


That's the transformation, according to the authors, that we have to look forward to. And behind their predictions lies a sense that the movement toward reliance on the results of big data to understand our world is not just inevitable but that the data itself, the vast invisible presence in our modern lives, also contains within itself a power and energy of incalculable value and ever-improving predictive powers. They call it "big-data consciousness": "Seeing the world as information, as oceans of data that can be explored at ever greater breadth and depth, offers us a perspective on reality that we did not have before. It is a mental outlook that may penetrate all areas of life. Today we are a numerate society because we presume that the world is understandable with numbers and math. . . . Tomorrow, subsequent generations may have a 'big-data consciousness'—the presumption that there is a quantitative component to all that we do, and that data is indispensable for society to learn from."

And the heroes of this transformation? They are the people who can wield this data well—who can write the algorithms that will move us beyond our superstitions and preconceptions to new insights into the world in which we live. These are the new Galileos of our day because they will be confronting existing institutions and ways of thinking. In a clever turn of what I like to call "The Grandiose Analogy," the authors compare the use of statistics by Billy Beane of Moneyball fame to Galileo's pioneering observations with a telescope in support of Copernicus's theory that the Earth was not the center of the universe: "Beane was challenging the dogma of the dugout, just as Galileo's heliocentric views had affronted the authority of the Catholic Church." It's another attempt to elevate by association the comparatively banal practice of putting a winning baseball team together on a shoestring to the level of the world-shattering scientific observation that the Earth, and by extension mankind, is not at the center of God's universe after all.

If you can ignore the hyperboles in this book, however--and given the number of them this is no small challenge--you can come to see the reality of what big data actually is and what kinds of contributions its use might make to our lives. The scientific method isn't going away. The march of science to discover and explain its best hypotheses at any given time will continue. In fact, the patterns and correlations unearthed by big-data methods may form the basis for new hypotheses and bring us even closer to understanding the "why" of many things to come.

Nonetheless, within some contexts, big data can produce actionable information. In marketing, for example, Amazon can use the knowledge that people who read Civil War histories may also like a particular subset of mystery writers to boost sales through its customer recommendation algorithms. Google's ability to detect flu outbreaks also produces actionable information: the NIH and other medical institutions can use such findings to make vaccines plentiful in certain areas, produce more vaccines if feasible, prepare hospitals and medical offices for the spike in need, and publish other public health guidance.
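
The recommendation case is worth pausing on, because the underlying technique really is just counting correlations. The sketch below, with an invented order history and far simpler than whatever Amazon actually runs, recommends whatever is most often bought alongside a given title:

    # A simplified, invented example of co-occurrence-based recommendation;
    # real systems are far more elaborate, but the logic is still correlational.
    from collections import Counter

    orders = [
        {"civil war history", "mystery A"},
        {"civil war history", "mystery A", "biography"},
        {"civil war history", "mystery B"},
        {"cookbook", "biography"},
    ]

    def recommend(item, orders, top_n=2):
        co_bought = Counter()
        for basket in orders:
            if item in basket:
                co_bought.update(basket - {item})  # count items bought with `item`
        return [other for other, _ in co_bought.most_common(top_n)]

    print(recommend("civil war history", orders))  # "mystery A" leads; the second slot is a tie

Counting co-purchases is genuinely useful for selling books; the authors' mistake is to dress it up as insight or intuition.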

Still, there are some real problems with heralding the quantification of everything into digitally manipulable form as the answer to myriad issues. The supposition fails to take into account any fundamental issues except the obvious ones involving privacy and surveillance. First of all, there are the insurmountable problems that complex algorithms create. That very complexity produces higher and higher risks of errors in writing and executing the code. The same complexity makes it very difficult to judge whether the results reflect reality. And the very fact that such algorithms may challenge our intuition makes it difficult to validate their results without an understanding of the "why," or even a sense of the assumptions and content of the algorithms themselves.

Statistics can be a powerful tool, but there was also a wonderful book called How To Lie with Statistics that came out nearly sixty years ago and is no doubt still relevant today. The authors of Big Data claim that knowledge and experience may not be so important in the big-data world: "When you are stuffed silly with data, you can tap that instead, and to greater effect. Thus those who can analyze big data may see past superstitions and conventional thinking not because they're smart, but because they have the data." The authors also suggest that a special team of "algorithmists" could oversee all the algorithms to ensure that they do not invade the privacy of individuals or cross other boundaries. I'm afraid Mayer-Schönberger and Cukier really ought to talk to the SEC about Wall Street and its algorithms to see how well that's been working out!

Finally, the proponents of big data want to discount intuition, common sense, experience, knowledge, insight, and even serendipity and ingenuity, never mind wisdom. In their quest to elevate the digitization of everything, they neglect those very qualities, qualities that cannot be digitized. As the remark so often attributed to Einstein reminds us: "Not everything that can be counted counts, and not everything that counts can be counted."


Alan Greenspan’s Love Affair with Technology

7/20/2010

Throughout his autobiography, The Age of Turbulence, Alan Greenspan expresses a deep fascination with the ways in which technology has transformed our economy. Among other changes, technology has revolutionized the distribution of risk, he maintains, and has increased the ability of the markets to absorb shocks. As a result, the economy has a new--and very modern--flexibility. "Three or four decades ago, markets could deal only with plain vanilla stocks and bonds. Financial derivatives were simple and few," Greenspan writes. "But with the advent of the ability to do around-the-clock business real-time in today's linked worldwide markets, derivatives, collateralized debt obligations, and other complex products have arisen that can distribute risk across financial products, geography, and time. . . . With the exceptions of financial spasms [as in 1987 and 1998], . . . markets seem to adjust smoothly from one hour to the next, one day to the next, as if guided by an 'international invisible hand,' if I may paraphrase Adam Smith." Driven by advanced technology, the modern market process improves market efficiency and hence raises productivity. It is, in Greenspan's telling, a triumph of modern information technologies.

Greenspan's habit of mind made his enthusiasm for information technology inevitable. The long-time Chairman of the Federal Reserve never met a number he didn't love. In his lifelong search for new knowledge and insights about the economy, he liked nothing better than absorbing large quantities of economic data, and lack of emotional bias was central to that search. In his twenties he was attracted to logical positivism, a school of thought popular with Manhattan Project scientists. According to logical positivism, knowledge could be obtained only from facts and numbers. Values, ethics, and personal behavior were not logical in nature; rather, they were shaped by the dominant culture and hence not part of serious thinking on any subject. Greenspan would later amend this view, particularly regarding values, but the idea that facts and numbers were the path to knowledge remained part of his core beliefs.

A course he took in 1951 in mathematical statistics provided him with a scientific basis for his beliefs. Mathematical statistics proposed that the economy could be measured, modeled, and analyzed mathematically. (It was a nascent form of what is known today as econometrics.) Greenspan was immediately attracted to this discipline, and he excelled in it. Here was a forecasting method based on mathematics and empirical facts. Many prominent economists at the time, Greenspan observed, relied on "quasi-scientific intuition" in their forecasting, but he himself was inclined to develop his thinking in a different way: "My early training was to immerse myself in extensive detail in the workings of some small part of the world and infer from that detail the way that segment of the world behaves. That is the process I have applied throughout my career."

Little wonder that when digital computers began to invade the business world, Greenspan naturally saw them as extremely effective in gathering and ordering vast amounts of data and numbers: it is, after all, what computers do best. In fact, the span of Greenspan's career did coincide with a revolution in financial markets based on digital computers. Like many others, he saw great progress in the innovations and improved efficiency that technology brought to the markets. As long as technology was contributing to productivity growth and to general wealth, he could see nothing wrong with it. In fact, he often makes assumptions and even illogical arguments in the name of technological progress. One example involves his attitude toward increased debt levels for both individuals and businesses. Yes, he admits, there has been a long-term increase in leverage. But the appropriate level of leverage is a relative value that varies over time. Greenspan further minimizes the ramifications of increased leverage by arguing that people are steadfastly and innately averse to risk; technology has simply added more flexibility to the system. Thus, he concludes, the general willingness of investors, businesses, and households to take on more leverage must mean that the additional financial "flexibility" allows for increased leverage without increased risk. "Rising leverage," Greenspan blithely concluded, "appears to be the result of massive improvements in technology and infrastructure, not significantly more risk-inclined humans."

In the end, Greenspan was forced to change his mind about technology after the recent financial crisis. In his testimony to Congress in April of this year, he identified two major ways in which technology had failed the markets and helped precipitate the crisis. First, the models that sophisticated investors used to assess risks were wrong. Those models had no relevant data that would have allowed them to forecast the impact of an event such as the failure of Lehman Brothers. Investors and analysts had relied on pure--and incorrect--conjecture: they assumed they would be able to anticipate such a catastrophic event and retrench in time to avoid exposure. They were wrong.

Second, the financial models for assessing risks, combined with huge computational capacities for creating highly complex financial products, had left most of the investment community in the dark. Investors didn't understand the products or the risks involved. Their only option was to rely on the rating agencies, which were in effect no better at assessing the risks of these products than anyone else. Technology and those brilliant Ph.D.s known as the "quants" had effectively created their own monsters in the form of credit default swaps and collateralized debt obligations, which were far too opaque for even sophisticated investors to understand.

So much for technological innovations and flexibility. In the end it was in part technology that set up the conditions for the worst economic crisis since the Depression. One is left to wonder where that "international invisible hand" has gone now.

Modeling Risks, or the Risks of Models

6/17/2010

In The New York Review of Books for June 24th, Paul Volcker has a cautionary piece about the imbalances, deficits, and risks of our fiscal situation. The essay is called "The Time We Have Is Growing Short." Five years ago, Volcker writes, he saw the need for fundamental changes but saw no resolve to do anything on the part of either the public or the politicians. At that time, he predicted that the only way reform would occur was through a crisis. Little did he know how large a crisis lay ahead.

Volcker writes that he did not anticipate the enormity of the crisis in part because "innovations" such as credit default swaps and CDOs had not existed in his day. Nor did the fancy computer models exist that were later developed to devise, build, and evaluate the risks of those innovations. Those models assumed that financial markets follow laws similar to those of the hard sciences. Volcker sees this as a big mistake: "One basic flaw running through much of the recent financial innovation," he writes, "is that the thinking embedded in mathematics and physics could be directly adapted to markets." But financial markets, he points out, do not behave according to changes in natural forces; rather, they are human systems, prone to herd behavior and wide swings in emotion. They are also subject to political influences and various uncertainties.

The quantification of the financial markets in sophisticated computer modeling was not a dominant part of the financial world that Volcker inhabited in the Federal Reserve of the fifties through the eighties. Yet the seeds of change were certainly there. They were planted in the early seventies when a somewhat coincidental rise in market volatility, computation, and financial modeling began to transform the financial industry.

Many factors contributed to the growing volatility of the markets in the early seventies. After the dollar was cut free from the gold standard in 1971, volatility invaded the foreign exchange markets. Oil prices, which had remained stable for decades, exploded. And interest rates and commodity prices saw levels of volatility that would have been unthinkable in the three previous decades. Financial deregulation and inflation contributed to the mix as well.

As Peter Bernstein wrote in his history of risk, Against the Gods (1996), the rising market volatility of the seventies and eighties produced a new demand for risk management. In the face of all this volatility and uncertainty, Wall Street came to see its traditional investing strategies as inconsistent and unpredictable. They were old-fashioned--the operating methods, as one senior Wells Fargo Bank officer wrote at the time, of a "cottage industry." It seemed something new--some more "modern" innovation--was being called for.

That innovation arose from two sources, both of which burst upon the scene in 1973, the same year a new exchange for managing risk--in this case by buying and selling stock options--opened for business. Innovations in computers and in financial models were jointly destined to create dramatic changes in the financial markets. The extraordinary power of computers greatly expanded the market's ability to manipulate data and to devise and manage complex strategies and products. Models, for their part, seemed to offer a new and supercomplex way to avoid at least some of the uncertainty investors faced. It was the beginning of the age of modern risk management.

The 1973 series of events incorporated all the major elements of the changes to come:

·         In April 1973, the Chicago Board Options Exchange opened. The new exchange provided traders with an established process for trading stock options, including standardized contracts and market-makers who would buy and sell on demand, thus providing liquidity to the market. This was seen as a way to manage the risks involved in the stock market itself.

·         The following month, an article appeared in The Journal of Political Economy explaining for the first time the Black-Scholes-Merton method for valuing options. This model, expressed in complex algebra, used four basic elements to calculate the value of an option and in so doing included a quantitative method for determining volatility.

·         At the same time, Texas Instruments introduced its SR-10 handheld electronic calculator. Within months TI was running large ads in The Wall Street Journal pitching new possibilities to traders, “Now you can find the Black-Scholes value using our calculator.”

It didn't take long for options traders to start using technical expressions from the Black-Scholes-Merton model to calculate the risks of their options. Armed with their new handheld calculators, traders on the floor of the Chicago Board Options Exchange could run the formula to quantify risk and automatically calculate the value of a given stock option. As Bernstein points out, a new era had begun in the world of risk management.
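
For readers curious what those calculators were computing, here is a sketch of the standard Black-Scholes value for a European call option. The numbers in the example are invented; the standard formulation takes the stock price, the strike price, the time to expiration, the risk-free rate, and the volatility.

    # The standard Black-Scholes formula for a European call option.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Cumulative distribution function of the standard normal.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        # S: stock price, K: strike, T: years to expiration,
        # r: risk-free rate, sigma: annualized volatility.
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # Invented example: stock at $100, strike $105, six months out, 5% rate, 20% volatility.
    print(round(black_scholes_call(100.0, 105.0, 0.5, 0.05, 0.20), 2))

Notice that everything contentious is packed into sigma, the volatility input: the very quantity whose future behavior the models presumed they could estimate.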

What characterized this new era of risk management? Clearly it had much to do with the power of computers. Clearly it had much to do with complex mathematical models for expressing and predicting risk. And clearly it had much to do with an inordinate belief in the efficacy of those models and in the power of those computers to escape uncertainty by “managing” risk.

But how modern, how advanced, was it all? Toward the end of Against the Gods, Peter Bernstein offers a stark comparison between those who trust in complex calculations today and the ancient Greeks: “Those who live only by the numbers,” he observes, “may find that the computer has simply replaced the oracles to whom people resorted in ancient times for guidance in risk management and decision-making.”

So it seems that, as long as belief in the calculations of computer models prevails in the markets, we are not much better off than those who journeyed to Delphi long ago to worship Apollo and consult the oracle about their fates.
