If recent surveys and current trends are any indication, by the time my niece is 15, she will be checking her Facebook account, watching TV, texting several friends, and doing her homework in a rapid cycle, seven or eight or more hours per day. She will have acquired 365 friends on Facebook and will sleep with her cell phone under her pillow. She will spend a great deal of her time tethered to her machines, alone, “communicating” with others through a truncated set of texting words, abbreviations, and acronyms. The closest she might come to expressing deep emotion on some days will be a string of emoticons. Her time alone will not resemble solitude, in which some contemplation of oneself and one’s life might occur; rather, it will be a muffled isolation within an electronic cocoon.
What draws people to the spell of multitasking? Why is it so valued as a continuous activity today? I think it began with a set of metaphors that started making their way into our language, probably in the 1970s, possibly even earlier. I was first struck by this during a conversation with a businessman conversant with computer programming, who described how he “interfaced” with his client. When I asked him what he meant by “interface,” he told me he meant how people connected, just like the 8- or 12-pronged plugs that connected a computer terminal to a mainframe. By the 1970s, we had begun to speak and think of human thinking in mechanical terms. By the 1980s, the use of computer terminology to describe human thought had become commonplace. We “processed” information. We “transferred” knowledge. We “crunched” the numbers. In short, we began to think of ourselves more as calculators than as people. Multiprocessing seemed a natural next step after that.
With the ubiquity of digital devices today, people have begun to emulate the microprocessors with which they share their lives. They have adopted the rhythm of the multitasker, breaking down large tasks into smaller steps and
processing multiple activities in a nearly simultaneous way. There are many problems with these analogies and the changes in our behavior they foster, but I’ll mention just two. First, we humans are not made to be multitaskers. We
basically can do only one relatively involved task at a time (most of us can walk and chew gum at the same time, but that’s different from activities that require real focus). The second problem involves the whole idea of equating
human activity with computers. It leaves out very large parts of what makes us human in the first place: creativity, self-awareness, morality, and our abilities to love, trust, empathize, grieve, and experience a whole range of
emotions that machines can never understand. All these experiences color our thoughts, one would hope, and make them more deeply human along the way.