What do technologists, especially futurists, really want?
What inspires their dramatic visions of our future?
“We technologists are ceaselessly intrigued by rituals in which we attempt to pretend that people are obsolete.” So opines Jaron Lanier, the father of virtual reality, in his new book, You Are Not a Gadget. Lanier is talking about people like Kevin Kelly, who thinks that, once Google has digitized all books, we won’t need authors anymore. We can just assemble all the fragments into one big book and mix them up however we please. Lanier is also talking about Ray Kurzweil and other proponents of the Singularity, a future time when humans are supposed to merge into a larger consciousness, a consciousness that will encompass both our electronic machines and ourselves in a single digital system of reality.
Kurzweil anticipates a time, which he calculates to be around 2045, when machine intelligence will outpace that of humans. It will then be feasible for human beings to gain more intelligence by merging with machines. In this future, machines will be better than humans at pattern recognition, at problem solving, and even, Kurzweil claims, at emotional and moral intelligence. Humans will use the advantages of machines to transcend the human brain’s limitations. Such advantages include superior processing and memory capacity, speed, and a so-called “knowledge-transfer” capability (which is really a fancy term for copying information from one machine to another). At that point, Kurzweil predicts, the distinctions between machines and humans will disappear.
In 1854, Henry David Thoreau was profoundly worried that the products of the industrial revolution, steam engines, railroads, and the like, were radically changing American culture and beginning to dominate so many facets of our lives. Men were becoming “tools of their tools.” Now, one hundred and fifty years later, we have Ray Kurzweil actually looking forward to a time when humans will be truly subservient to their machines. Thoreau would be appalled. So why does Kurzweil think this is a good thing? It is not so much that he wants to get rid of people. It’s just that the potential power of the machines is so fascinating, indeed so seductive, that he cannot resist the temptation to conjure up a future in which the best of human intelligence can be captured and improved upon in machines.
Naturally there are many objections to such bold predictions: technical, moral, ethical, visceral, even common-sense objections. For the moment, however, I want to set aside such arguments and look at what Kurzweil’s futuristic vision is responding to. Broadly speaking, Kurzweil and many others evaluate human brain power by comparing it to computer processors, using the language and technical measurements of computer science to do so. Thus what is quantifiable in the realm of computer hardware and, to a lesser extent, in human brains becomes the only ground for comparison. Many other aspects of human intelligence, all those messy emotions, for example, or creativity, are left out.
Here are the major lines of comparison as Kurzweil sees them:
The circuitry in the brain is slow
For human beings, simple tasks such as recognizing objects typically take about 150 milliseconds. The process of thinking something over or evaluating something takes even longer. Computers, by comparison, are much faster: typical cycle speeds are measured in millions or even billions of cycles per second, already much faster than the human brain.
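To make that gap concrete, here is a back-of-the-envelope sketch of how many clock cycles a processor completes in the time a human needs to recognize an object. The 2 GHz clock speed is an illustrative assumption, not a figure from the text:

```python
# How many cycles does a processor complete in the ~150 ms it takes
# a human to recognize an object?
HUMAN_RECOGNITION_SECONDS = 0.150  # ~150 ms, per the estimate above
CLOCK_HZ = 2 * 10**9               # 2 GHz, assumed for illustration

cycles = CLOCK_HZ * HUMAN_RECOGNITION_SECONDS
print(f"{cycles:,.0f} cycles")  # prints 300,000,000 cycles
```

Three hundred million cycles per recognition event is the kind of raw disparity Kurzweil’s comparison rests on.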
The brain is massively parallel
Parallel processing computers are machines that process multiple portions of tasks concurrently in order to speed up the entire process. The brain is massively parallel in the sense that one hundred trillion synapses (connection sites) can potentially be firing simultaneously. While humans currently hold an advantage here, Kurzweil is quick to point out that today’s largest supercomputers are nearing the computational capacity of the brain.
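The idea of parallel processing can be sketched in a few lines: a large task is split into chunks that workers handle concurrently, and the partial results are combined. This is a minimal illustration of the decomposition, not a model of the brain:

```python
# Minimal illustration of parallel processing: summing a large range
# by splitting it into chunks handled by concurrent workers.
# (Illustrative only; CPython threads won't speed up CPU-bound work
# because of the GIL, but the task decomposition is the same idea.)
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

N = 1_000_000
chunks = [(i, i + 250_000) for i in range(0, N, 250_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(chunk_sum, chunks))

print(total)  # same result as sum(range(N))
```

Where a computer might run four such workers, the brain’s one hundred trillion synapses can, in principle, all fire at once.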
The brain’s memory is limited
Based on the development of expert systems for medicine, it is estimated that humans can master about 100,000 concepts in any given domain. Kurzweil uses his own experience in rules-based and self-organizing pattern-recognition systems to estimate the total capacity of human functional memory at 10^13 (ten trillion) bits. Current estimates project that by 2018 it will be possible to buy that much memory for one thousand dollars.
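A quick conversion shows how modest that figure looks in familiar storage units. This is a back-of-the-envelope calculation of my own, not Kurzweil’s:

```python
# Back-of-the-envelope: Kurzweil's estimate of human functional memory,
# expressed in familiar storage units.
FUNCTIONAL_MEMORY_BITS = 10**13  # Kurzweil's figure

bytes_total = FUNCTIONAL_MEMORY_BITS / 8  # 8 bits per byte
terabytes = bytes_total / 10**12          # decimal terabytes

print(f"{terabytes:.2f} TB")  # prints 1.25 TB
```

By this accounting, a human lifetime of functional memory fits on roughly a terabyte of storage, which is exactly why such projections sound plausible to technologists.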
Kurzweil catalogs other characteristics using the brain-versus-computer comparison, but the outline of the argument remains much the same as in the examples already given. Computer hardware is already faster than the human brain and has greater capacity, memory, and storage. Where computer hardware lags behind the human brain, it will catch up by the year 2020. As for the software, Kurzweil believes that, once we have the ability to scan the entire human brain with our powerful new hardware, we can then create brain models and come to understand the workings of the brain well enough to begin uploading the human brain to machines.
All these projections assume exponential growth in technology. The source of all this optimism is Moore’s Law. Intel’s Gordon Moore originally formulated this law in the mid-seventies, when he observed that the number of transistors that could be placed on an integrated circuit had been doubling every two years. Moore went on to predict that integrated circuits would continue to double their transistor counts at the same rate well into the future. Processing speed would also increase, since the electrons would have less distance to travel.
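The arithmetic behind that observation is easy to sketch. Assuming a doubling every two years, transistor counts grow by a factor of about a thousand every twenty years (the 1975 starting count below is illustrative, not a precise historical figure):

```python
# Moore's Law as arithmetic: transistor counts doubling every two years.
def transistors(year, base_year=1975, base_count=10_000):
    """Projected transistor count, assuming a doubling every 2 years.
    The base_count is an illustrative round number, not historical data."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1975, 1995, 2015):
    print(year, f"{transistors(year):,.0f}")
```

Ten doublings per twenty years means a 1,024-fold increase per generation of engineers, which is the compounding that makes exponential forecasts so seductive.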
Moore was right, and this rate of progress captured the imagination of technologists, especially those who think about the future. It leads to the widely held belief that, in every technology, exponential growth is inevitable, indeed unstoppable, and always to be desired. This belief can lead to some pretty weird predictions, such as Kurzweil’s speculation that even the speed of light might be increased or somehow circumvented in this never-ending chase for the ultimate in technology.
Perhaps technologists like Kurzweil aren’t so much trying to render people unnecessary as they are pursuing a seemingly endless attraction to digital technology and its power. In many ways, the situation is reminiscent of religion in the Middle Ages: in comparison to an all-powerful God, humans saw themselves as far less worthy; the hardship and pain of life on earth, all our sins and weaknesses, would be swept away in an afterlife of bliss and being one with God.
Nowadays, technology makes humans look slow, inadequate, and prone to error. Many look forward to a time when machine intelligence will surpass human intelligence and we humans can become one with the digital consciousness, achieving a new form of immortality in the process. We will upload ourselves into the “cloud.” Essentially, we will build a better God. Once this Singularity is achieved, “nonbiological intelligence,” which is currently called artificial intelligence, will quickly overshadow our human intelligence. Kurzweil is looking forward to it. As for the rest of us, the Singularity just seems to foretell a time when we may truly become “tools of our tools.” Or will we simply be obsolete?