Digital Athena

The Human Brain Is a Computer? The Limits of Metaphor

4/26/2014

Metaphors matter. They matter a lot, shaping not only the way we communicate but also how we think, feel, and even behave. George Lakoff and Mark Johnson explained this well in their now classic work, Metaphors We Live By. Their premier example in that book analyzed how the concept "argument" becomes colored by its close association with the metaphor "war." Thus "argument is war." Here are some of the expressions they found that structure how we think about an argument as a war:

Your claims are indefensible.

He attacked every weak point in my argument.

His criticisms were right on target.

He shot down all my arguments.

Essentially, Lakoff and Johnson contend that metaphors impact the way we experience and understand the associated concepts, so that in the case of argument, for example, we in part understand, act, and talk about it in terms of war. It's not a dance. It's not a writing process. It's a battle.

The widespread use of the computer as a metaphor for the workings of the human brain today has a similar effect. By using such an analogy, people accept the implication that the human brain is simply a logical device. This leads to statements, and by implication activities, such as the following:

IBM's Blue Brain Project is attempting to reverse-engineer the human brain.

Modern architectural design acknowledges that how buildings are structured influences how people interface.

The position of department head requires an expert multitasker capable of processing multiple projects at any given time.

His behavior does not compute.

Human beings do possess logical functions. But the danger of using the digital computer, which runs algorithms built from simple logical operations such as IF, THEN, ELSE, and COPY, as a metaphor for the brain is what it leaves out: messy feelings, ambiguous behaviors, irrational thoughts, and the natural ebb and flow of memories. It also leaves out the influence of our subconscious--and of the rest of our physical, organic bodies--on how we think, act, and make decisions. Thinking of the brain as a computer says very little about what it feels like to be a human being, very little about what it feels like to be alive.
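To see how little that logic can hold, it helps to spell out what such operations actually look like. Here is a minimal sketch in Python; the stimulus, the responses, and the "memory" list are all invented for illustration and claim nothing about real cognition:

```python
# A deliberately simple illustration of IF/THEN/ELSE/COPY logic,
# the kind of operation a digital computer actually executes.
# Everything here is invented for the sketch.

def respond(stimulus, memory):
    if stimulus == "threat":      # IF
        response = "flee"         # THEN
    else:                         # ELSE
        response = "approach"
    memory.append(stimulus)       # COPY: store an exact, unfading duplicate
    return response

memory = []
print(respond("threat", memory))  # -> flee
print(respond("friend", memory))  # -> approach
print(memory)                     # -> ['threat', 'friend']
```

Run it a thousand times and it behaves identically every time: nothing decays, nothing is ambivalent, nothing wells up unbidden. That determinism is exactly what the metaphor quietly imports into our picture of the mind.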

In The Myth of the Machine, Lewis Mumford argued that far too much emphasis has been placed on distinguishing humans from animals by our tool-making capacities. He wrote that there was nothing uniquely human in our tool-making; after all, apes use tools. Rather, it was the human mind, "based on the fullest use of all his bodily organs," and that mind's capacity to create language and use symbols that allowed human beings to build the social organizations and civilizations that distinguish us from other animals. It was through symbols and language that humans rose above a purely animal state. The ability to create symbols, to be conscious of life and death, of past and future, of tears and hopes, distinguishes humans from other animals far more than any tool-making capability. "The burial of the body tells us more about man's nature than would the tool that dug the grave."

If we continue to distinguish human beings from other animals along the lines of tool-making, Mumford believed, the trajectory would be quite dire:

"In terms of the currently accepted picture of the relation of man to technics, our age is passing from the primeval state of man, marked by his invention of tools and weapons for the purpose of achieving mastery over the forces of nature, to a radically different condition, in which he will have not only conquered nature, but detached himself as far as possible from the organic habitat."

So we need to be careful about using the metaphor of the computer, our most modern of tools, to describe our minds and what it means to be a human being.


Ray Kurzweil's Mind

1/23/2014

Ray Kurzweil incessantly dreams of the future. And it's a future he describes as a "human-machine civilization." In How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil looks forward to a time when technology will have advanced to the point where it will be possible to gradually replace all the parts of the body and brain with nonbiological parts. And he claims that this will not change people's identities any more than the natural, gradual replacement of the cells in our bodies does now. All this will come about after scientists and engineers, who are currently working on brain models in many different organizations around the world, succeed in creating a complete model of the human brain. Kurzweil contends that the neocortex functions hierarchically and that it works by pattern recognition. Therefore, he argues, it is possible to write algorithms that will simulate how the brain actually works. That, in combination with increasing miniaturization, will make the substitution of nonbiological components possible by the 2030s.
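For readers who want a concrete image of what "hierarchical pattern recognition" means here, a toy sketch in Python may help. It is emphatically not Kurzweil's model; the stroke "features," the letter encodings, and the word are all invented for illustration. It shows only the compositional idea: recognizers at one level fire on patterns of inputs, and their outputs become the inputs of the level above.

```python
# A toy two-level pattern hierarchy. Level 1 maps made-up stroke
# features to letters; level 2 maps a letter sequence to a word.
# This illustrates composition only, not any real cortical model.

letter_patterns = {
    ("/", "\\", "-"): "A",
    ("|", ")"): "P",
    ("|", "_"): "L",
    ("|", "=", "-"): "E",
}

def recognize_letters(feature_groups):
    """Level 1: each group of stroke features fires a letter unit (or '?')."""
    return tuple(letter_patterns.get(tuple(group), "?") for group in feature_groups)

word_patterns = {("A", "P", "P", "L", "E"): "APPLE"}

def recognize_word(letters):
    """Level 2: a word unit fires when its expected letter pattern appears."""
    return word_patterns.get(letters)

strokes = [["/", "\\", "-"], ["|", ")"], ["|", ")"], ["|", "_"], ["|", "=", "-"]]
print(recognize_word(recognize_letters(strokes)))  # -> APPLE
```

Whether stacking enough such layers, with learning and feedback added, amounts to what a neocortex does is precisely the contentious question taken up below.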

That the human brain is akin to a digital computer remains a big and very contentious issue in neuroscience and cognitive psychology circles. In the January issue of Scientific American, Yale professor of psychology John Bargh summarizes some of the latest thinking about this problem. Specifically, he addresses the major role of the unconscious in how people make decisions, how they behave in various situations, and how they perceive themselves and the world around them. There is a complex dynamic between our controlled conscious thought processes and the unconscious, often automatic, processes of which we are not aware. Nobelist Daniel Kahneman explained this phenomenon in Thinking, Fast and Slow. Automatic thought processes happen quickly and do not include planning or deliberation.

Even Daniel Dennett, an eminent philosopher and cognitive scientist who long held that neurons function as simple on-off switches, analogous to digital bits, has recently changed his mind about the analogy between the human mind and a computer: "We're beginning to come to grips with the idea," he says in a recent Edge talk, "that your brain is not this well-organized hierarchical control system where everything is in order. . . . In fact, it's much more like anarchy. . . ." Yet even with this concession Dennett is still inclined to use the computer as a metaphor for the human brain. This leads him to make a curious statement, one which actually begs the question: "The vision of the brain as a computer, which I still champion, is changing so fast. The brain's a computer, but it's so different from any computer you're used to. It's not your desktop or your laptop at all."

By his own admission, Dennett's talk is highly speculative: "I'd be thrilled if 20 percent of it was right." What I think he means is that the brain is like a computer that is far more complex than existing machines but that it also has intention. The neurons are "selfish," and they are more like agents than computer instructions, which in turn are more like slaves. "You don't have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn't want to do." Computers, on the other hand, are made up of "mindless little robotic slave prisoners." So I'm not sure how helpful it is for Dennett to think of the brain as a computer at all. And Dennett's views on neurons and agents, combined with the more recent thinking about the impact of the unconscious on conscious thought, lead me to conclude that Ray Kurzweil's dream of someday replacing the human brain with robotic switches is just that: a dream.

Myths for Our Time (II): The Internet as Planetary Computer

6/1/2012

“People say that what we’re all seeking is a meaning for life. I don’t think that’s what we’re really seeking. I think what we’re seeking is an experience of being alive, so that our life experiences on the purely physical plane have resonances within our own innermost being and reality, so that we actually feel the rapture of being alive. That’s what it’s all finally about, and that’s what these clues in myths help us to find within ourselves. . . . Myths are clues to the spiritual potentialities of the human life. . . . We need myths [today] that will identify the individual not with his local group but with the planet.” Joseph Campbell, The Power of Myth

For many today, the Internet seems to be a powerful presence. Why does it have such a deep resonance within our imaginations? How has it become so central to our contemporary life? And what does that say about the lives we live and our values? People find the phenomenon of the Web full of possibilities. Many believe that there’s something magical in its very existence and that it offers access to knowledge and powerful modes of communication that are fundamentally different from what we have had in the past. There is also the pervading sense that the Internet is changing us, both individually and communally, in very important ways. 

One way of understanding the role of the Internet in our culture is to consider it as a metaphor and potentially part of a mythology that expresses some essence of what it means to be alive today. Many envision the Internet as an ever-expanding, boundless entity with near-infinite connections both to other people and to sources of knowledge. And this powerful pull of the Internet seems to me to come from its similarities both to our sense of our outer world—that ever-expanding universe of which we are such a minute part—and to our inner world—the endless depth of our own psyche, imagination, and unconscious with its potential links to communal metaphors and myths. Both these worlds, the outer and the inner, are ineffable, boundless, and to a certain extent mysterious, unknowable.

The Internet shares these characteristics and hence seems to offer a similar potential for knowledge, insight, even adventure. It’s cyberspace, after all, a place for journeys. One clicks on an icon (our computers do have their own “iconographies,” just as mythologies do). Microsoft Windows offers users an “Explorer” program to cross the threshold into the vast and unknown space called the Internet. The potential seen in the boundless, gargantuan phenomenon of the Web leads many people to make large claims: Kevin Kelly, founding editor of Wired, calls the Internet a “planetary computer,” a “global computer,” and even a “large-scale sentience” with a distributed and vast intelligence that grows “smarter” by the second as millions of users provide ever more information merely by clicking on a specific website because, in so doing, they indicate their preferences, their interests.

Is there transcendence here, one might ask, the kind of move beyond our ordinary life toward the ultimate mystery of the universe and the source of life itself? Kevin Kelly and others seem to think this is possible, that the Internet holds the promise of ultimate knowledge, of the unknowable: “Currently,” Kelly writes in What Technology Wants, “we are prejudiced against machines, because all the machines we have met so far have been uninteresting. As they gain in sentience, that won’t be true. What technology wants is increasing sentience. This does not mean evolution will move us only toward one universal supermind. Rather in the course of time the technium tends to self-organize into as many varieties of mind as is possible. . . . The universe is so huge, so vast in its available mysteries, that it will require every possible type of mind to comprehend it. The technium’s job is to invent a million or a billion varieties of comprehension.”

Much hinges on what Kelly means by technium, a word he coined because he found that “culture” was too “small” and did not convey for him a sense of “self-propelling momentum.” (I would point out that the word “culture” is also associated with organic growth.) Kelly reaches toward a new kind of mystical sense in defining his technium: it is “the greater, global, massively interconnected system of technology vibrating around us.” And this is not just hardware and software, but all “culture, art, social institutions, and intellectual creations,” along with the impulse (and here he quite anthropomorphizes technology) of essentially “what technology wants,” which is to generate more technology, more inventions, more connections. For Kelly, as for many enthusiasts of the Internet, the “technium” seems to be alive. But is it? And is it truly self-organizing, or is it just some version of Larry Page standing behind a curtain like the Wizard of Oz?

But the real question at the end of the day about what this technology “wants” is this: does it have any place left for humanity and the spiritual potential that Joseph Campbell alludes to when he talks about the myths human beings create out of their own dreams, their own imaginations, their own psyches? Or is this new myth a myth of the machine as an all-knowing and all-powerful deity—in short, a god? Or maybe it’s all just smoke and mirrors, concocting something rather more illusory than elusive.


The Coming of Posthumanism, or How to Build a Better God

5/4/2010

What do technologists, especially futurists, really want?

What inspires their dramatic visions of our future?

“We technologists are ceaselessly intrigued by rituals in which we attempt to pretend that people are obsolete.” So opines Jaron Lanier, the father of virtual reality, in his new book, You Are Not a Gadget. Lanier is talking about people like Kevin Kelly, who thinks that, once Google has digitized all books, we won’t need authors anymore. We can just assemble all the fragments into one big book and mix them up however we please. Lanier is also talking about Ray Kurzweil and other proponents of the Singularity, a future time when humans are supposed to merge into a larger consciousness, a consciousness that will encompass both our electronic machines and ourselves in a single digital system of reality.

Kurzweil anticipates a time, which he calculates to be around 2045, when machine intelligence will outpace that of humans. It will then be feasible for human beings to gain more intelligence by merging with machines. In this future time, machines will be better than humans at pattern recognition, at problem solving, and even, Kurzweil claims, at emotional and moral intelligence. Humans will use the advantages of machines to transcend the human brain’s limitations. Such advantages include superior processing and memory capacity, speed, and a so-called “knowledge-transfer” capability (which is really a fancy term for copying information from one machine to another). At that time, Kurzweil predicts, the distinctions between machines and humans will disappear.

In 1854, Henry David Thoreau was profoundly worried that the products of the industrial revolution, steam engines, railroads, etc., were radically changing American culture and beginning to dominate so many facets of our lives. Men were becoming “tools of their tools.” Now, one hundred and fifty years later, we have Ray Kurzweil actually looking forward to a time when human intelligence will be truly subservient to machines. Thoreau would be appalled. So why does Kurzweil think this is a good thing? It is not so much that he wants to get rid of people. It’s just that the potential power of the machines is so fascinating, indeed so seductive, that he cannot resist the temptation to conjure up a future in which the best of human intelligence can be captured and improved upon in machines.

Naturally there are many objections to such bold predictions—technical, moral, ethical, visceral, even common-sense objections. For the moment, however, I want to set aside such arguments and look at what Kurzweil’s futuristic vision is a response to. Broadly speaking, Kurzweil and many others evaluate human brain power by comparing it to computer processors, using the language and technical measurements of computer science to do so. Thus the only grounds for comparison are what is quantifiable in the realm of computer hardware and, to a lesser extent, in human brains. But many other aspects of human intelligence—all those messy emotions, for example, or creativity—are left out.

Here are the major lines of comparison as Kurzweil sees them:

The circuitry in the brain is slow

For human beings, simple tasks such as recognizing objects typically take about 150 milliseconds. The process of thinking something over or evaluating something takes even longer. Computers, by comparison, are much faster: typical cycle speeds are measured in millions or even billions of cycles per second, already much faster than the human brain.
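The arithmetic behind that comparison is simple enough to write down. The sketch below uses only the 150-millisecond figure from the text plus an illustrative 1 GHz clock speed (an assumption for the sake of the example, not a claim about any particular chip):

```python
# Back-of-the-envelope speed comparison: ~150 ms for human object
# recognition versus one cycle of a processor clocked at 1 GHz.
# The 1 GHz figure is illustrative only.

human_recognition_s = 0.150        # roughly 150 milliseconds
cpu_cycle_s = 1 / 1_000_000_000    # one cycle at 1 GHz

cycles = human_recognition_s / cpu_cycle_s
print(f"{cycles:,.0f} cycles")     # -> 150,000,000
```

In the time it takes a person to recognize a single object, a 1 GHz processor runs through some 150 million cycles. Though counting cycles against milliseconds, of course, already concedes the framing that thinking is a kind of cycling.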

The brain is massively parallel

Parallel processing computers are machines that process multiple portions of tasks concurrently in order to speed up the entire process. The brain is massively parallel in the sense that one hundred trillion synapses (connection sites) can potentially be firing simultaneously. While humans currently hold an advantage here, Kurzweil is quick to point out that today’s largest supercomputers are nearing the computational capacity of the brain.

The brain’s memory is limited

Based on the development of expert systems for medicine, it is estimated that humans can master about 100,000 concepts in any given domain. Kurzweil uses his own experience with rules-based and self-organizing pattern-recognition systems to estimate the total capacity of human functional memory at 10^13 (ten trillion) bits. Current estimates project that by 2018 it will be possible to buy that much memory for one thousand dollars.
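To put that estimate in familiar units, the conversion takes two lines (simple arithmetic on the figure above, nothing more):

```python
# Converting the functional-memory estimate into storage units.
bits = 10**13
terabytes = bits / 8 / 1e12   # 8 bits per byte; 10^12 bytes per TB
print(f"{terabytes:.2f} TB")  # -> 1.25 TB
```

Ten trillion bits comes to about 1.25 terabytes, which is why the thousand-dollar threshold arrives so soon in Kurzweil’s projection.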

Kurzweil catalogs other characteristics using the brain-versus-computer comparison, but the outline of the argument remains much the same as in the examples already given. Computer hardware is already faster and has more capacity, memory, and storage than the human brain. Where computer hardware lags behind the human brain, it will catch up by the year 2020. As for the software, Kurzweil believes that, once we have the ability to scan the entire human brain with our powerful new hardware, we can then create brain models and come to understand the workings of the brain well enough to begin uploading human brains to machines.

All these projections assume exponential growth in technology. The source for all this optimism is Moore’s Law. Gordon Moore, a cofounder of Intel, penned this law in the mid-seventies when he observed that the number of transistors that could be placed on an integrated circuit had been doubling every two years. Moore went on to predict that integrated circuits would continue to double their transistor counts at the same rate well into the future. Processing speed would also increase, since the electrons would have less distance to travel.
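The law itself is just arithmetic, as a short sketch shows. The 1971 starting point (2,300 transistors on the Intel 4004) is a standard historical reference, used here purely for illustration:

```python
# Moore's Law as arithmetic: transistor counts doubling every two years,
# seeded with the Intel 4004's 2,300 transistors (1971).

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
# 1971 2,300
# 1981 73,600
# 1991 2,355,200
# 2001 75,366,400
# 2011 2,411,724,800
```

Doubling every two years means a thousandfold increase every twenty, which is the compounding engine behind all of Kurzweil’s timetables.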

Moore was right, and this rate of progress captured the imagination of technologists, especially when they think about the future. It leads to the widely held belief that, in every technology, exponential growth is inevitable, indeed unstoppable, and always to be desired. This belief can lead to some pretty weird predictions, such as Kurzweil’s speculation that even the speed of light might be increased or somehow be circumvented in this never-ending chase for the ultimate in technology.

Perhaps technologists like Kurzweil aren’t so much trying to render people unnecessary as they are pursuing a seemingly endless attraction to digital technology and its power. In many ways, the situation is reminiscent of religion in the Middle Ages: in comparison to an all-powerful God, humans saw themselves as far less worthy; the hardship and pain of life on earth, all our sins and weaknesses, would be swept away in an afterlife of bliss and being one with God.

Nowadays, technology makes humans look slow, inadequate, and prone to error. Many look forward to a time when machine intelligence will surpass human intelligence and when we humans can become one with the digital consciousness, achieving a new form of immortality in the process. We will upload ourselves into the “cloud.” Essentially, we will build a better God. Once this singularity is achieved, “nonbiological intelligence,” which is currently called artificial intelligence, will quickly overshadow our human intelligence. Kurzweil is looking forward to it. As for the rest of us, the Singularity just seems to foretell a time when we may truly become “tools of our tools.” Or will we just be obsolete?

