
How Will People Interact with Computers in 2020?

“The question persists and indeed grows whether the computer will make it easier or harder for human beings to know who they really are, to identify their real problems, to respond more fully to beauty, to place adequate value on life, and to make their world safer than it now is.” Norman Cousins, “The Poet and the Computer,” 1966

By 2020, technology will support or enhance every action we take. If that is true, what will it mean to be human? What will our relationship with technology be? The practice of designing how humans relate to computers, formally known as human-computer interaction (HCI), got its start in the early 1980s. Back then it focused on “usability,” which has since grown into an established discipline with a large body of research and best practices. Today HCI designers are shifting their focus away from the mechanics of graphical user interfaces, the keyboard, and the mouse. Recently a forum of computer scientists, designers, sociologists, and psychologists met to discuss where designers should direct their attention as they work toward 2020. Out of that forum came “Being Human: Human-Computer Interaction in the Year 2020,” a comprehensive overview of the issues we’ll face as computers and their interfaces become more varied, more sophisticated, and more autonomously interactive and complex.

In the years ahead, interface designers aim to better understand how computers embody, shape, and reflect human values. Computers alter how we live our lives. It’s not just the growing number of computers that makes things different. There have also been fundamental changes in how we live, from how we accomplish everyday tasks such as buying food and paying bills, to more exotic experiences in virtual worlds, and everything in between. Digital computers are not simply a substitute for older technologies. They are creating real changes, often radical and far-reaching ones:

Multiple Interfaces—Some Embedded in Us

No longer just a separate box with a single interface, tomorrow’s computers will have many different interfaces, some recognizable as computers, some not. Paradoxically, the more natural the design of an embedded device, the less obvious the computer is to its users. Sometimes we won’t even realize that we’re “computing.” Already, the boundaries between human beings and computers are changing rapidly, from standard GUI screens to smartphones, e-book readers, and even devices embedded within us, like pacemakers and hearing aids. Manufacturers and designers will also be embedding computers in furniture, rooms, cars, doors, and clothing.

Speculating about the future often raises more questions than it answers and the HCI forum was no exception: If so many everyday objects contain computing devices that are hidden, how will people know when they are being indirectly monitored? How can we sort out the privacy issues when computers are digitizing information about bodily functions? And as computers become more and more part of everyday living, will the current notions of “user” and “interface” become just plain obsolete?

As computers proliferate, they will increasingly work together, often without human intervention. The results, unintended as well as intended, will be highly dynamic and somewhat unpredictable networks. With no good understanding of how this complex infrastructure works, people will probably feel less safe and secure. Or they may simply rely more heavily on their automated environment while understanding only vaguely how things really work. If the designers themselves can’t anticipate the unintended consequences of such networks, who will be responsible when something goes wrong? How will people cope when the Internet breaks down or devices malfunction?

Cleverer Computers Making Decisions

Computers will be making decisions for us. Businesses already build decision-making processes into their ERP systems that, for example, automatically reprioritize manufacturing schedules, ship times, and inventory replenishment. Recommender systems on sites such as Amazon advise buyers about what else they might like to purchase. In Japan, some researchers propose developing robots as companions for the elderly. How will we (or should we) relate to such robotic devices? Will we be happy with robots as companions or pets? Will we trust them to intervene medically if needed? And when they fail, what will the consequences be?
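To make the recommender example concrete, here is a minimal sketch of the “customers who bought this also bought” idea, built on simple co-occurrence counts over a tiny, invented purchase history. The data, function names, and scoring are illustrative assumptions, not Amazon’s actual method.

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories: each inner list is one customer's basket.
baskets = [
    ["camera", "memory card", "tripod"],
    ["camera", "memory card"],
    ["e-reader", "book light"],
    ["camera", "tripod"],
]

# Count how often each pair of items appears in the same basket.
co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_occurrence[(a, b)] += 1

def recommend(item, top_n=3):
    """Suggest the items most often bought together with `item`."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("camera"))  # e.g. ['memory card', 'tripod']
```

Production recommenders layer on weighting, popularity normalization, and personalization, but counting what gets bought together captures the basic intuition behind the advice these systems give.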

Risk of Extreme Connectivity

As the demands on people to interact continuously with various computing devices spiral upward, managing personal time will become a big issue. The line between work and home life is already blurred for many. As these activities become inextricably mixed, people will need time to be “unplugged,” or they’ll risk the intense stress that comes with “extreme connectivity.”

Our Digital Shadows

Like it or not, our digital footprint, those odds and ends that make up the details of our lives, is being archived either explicitly or implicitly. Closed-circuit TV cameras already record many of our public activities. Soon radio-frequency identification (RFID) tags will track our commercial transactions. How can these things be done responsibly and ethically? How can we as a society prevent abuse and misuse of such personal information? Will people have the right to delete information from public records? How should we manage the storage of and access to such large amounts of personal information?

One thing is certain: governments and other institutions will have access to more data about us. Schools, hospitals, and other organizations regularly monitor and analyze our behavior. In the future, researchers say, we are likely to have less control over our growing digital footprints unless designers take care to build in some way for us to control our own data. It will be a tricky balance between the need to capture information on the one hand and the right to privacy on the other.

Creative Engagement and Personal Control over Apps

Participants in the HCI forum also agreed that ubiquitous computing and Web 2.0 have given people more control over their applications and potentially more knowledge. Sophisticated systems are starting to visualize and reason about highly complex problems in new ways, opening up new kinds of research. New computational tools are already being developed across disciplines to better understand phenomena such as the climate and the dynamics of global pandemics.

Mash-ups, for example, can recombine the functionality of Web 2.0 tools with large data sets to analyze and discover new patterns, such as determining the effects of deforestation on different continents. A mash-up may sound simple (and even fun), but deep technical difficulties can ruin everything. The first problem here, and it is a big one, is ensuring that data from different sources is intelligible, usable, and meaningful. Second, once systems are up and running, it is hard to know when these increasingly complex tools and big data sets aren’t working correctly. And then there’s the matter of assumptions: to what extent, researchers wonder, do system designers need to make their underlying assumptions and constraints explicit to users? In complex systems, is this even feasible?
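As a rough illustration of that first problem, the sketch below joins two hypothetical data sets (annual forest cover and annual rainfall, each keyed by continent and year) and flags the rows where the sources disagree about which regions and years exist. The column names, units, and values are invented for the example; they are not drawn from the report.

```python
import pandas as pd

# Two invented sources, standing in for data pulled from different
# organizations with different conventions and coverage.
forest = pd.DataFrame({
    "continent": ["Africa", "Africa", "South America", "South America"],
    "year": [2000, 2005, 2000, 2005],
    "cover_km2": [6.7e6, 6.4e6, 8.9e6, 8.6e6],
})
rainfall = pd.DataFrame({
    "continent": ["Africa", "South America", "South America", "Asia"],
    "year": [2000, 2000, 2005, 2000],
    "rainfall_mm": [1020, 1650, 1600, 1100],
})

# Do the sources even agree on which regions and years exist?
# An outer join with an indicator column makes the mismatches visible
# instead of silently dropping them.
merged = forest.merge(rainfall, on=["continent", "year"],
                      how="outer", indicator=True)

mismatches = merged[merged["_merge"] != "both"]
if not mismatches.empty:
    print("Rows found in only one source:")
    print(mismatches[["continent", "year", "_merge"]])

# Analyze only the rows both sources can account for.
usable = merged[merged["_merge"] == "both"]
print(usable.groupby("continent")[["cover_km2", "rainfall_mm"]].mean())
```

The design choice here is simply to surface disagreements between the sources before any analysis runs; quietly discarding mismatched rows is exactly the kind of hidden assumption the forum worried users would never see.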

Users Will Be Human After All

Users are a rambunctious bunch. Designing new human-computer interfaces has always been a process of trial and error, and users seldom (if ever) adopt a technology in the way its designers intended. Email, for example, was developed decades ago to help coordinate computer time-sharing. At the time, no one could have guessed that it would become the backbone of communications at multinational corporations and the glue that binds families and friends across town and around the world.

In more recent years, designers have continued to be surprised by what users do with new gadgets:

• The iPod can promote urban isolation as users walk the streets and ride public transportation while encased in their own musical world.

• Some people are addicted to using their mobile phones. (Ever wonder why people are talking nonstop on their phones in the grocery store?) Car accidents involving mobile phones are now commonplace, and researchers have demonstrated that people using cell phones are about as competent behind the wheel as those who are legally drunk.

• In planning mobile phones with built-in TVs, designers assumed the devices would need to provide a picture clear enough to show small objects, such as footballs or hockey pucks (some methods for compressing TV signals for transmission eliminate small objects). But in fact people use TV-phones not to watch TV but for other purposes, such as viewing instant replays while at a game or sharing a clip with a friend.

Clearly designers need to test their products by analyzing and evaluating how people actually use them. Perhaps the most reassuring finding of the report is that people will continue to pursue their own goals, fulfill their own desires, and focus on the things that make them human in their relationship with technology: they will use technologies to connect with others, to educate children, to care for those in need, and to grow old safely and comfortably. It looks like we’ll continue to be human after all.


Download "Being Human in 2020" 

"The Future of the Internet III," a Pew Internet and American Life Project, contains individual responses from experts, as well as a summary, of how the Internet will affect social, political, and economic life in 2020. Read abstract.

See also The Long Rant, a review of Lee Siegel's "Against the Machine: Being Human in the Age of the Electronic Mob"