Digital Athena

Does the Internet Have a Future?                                                

Will a Manhattan-type project and reliance on the good will of users save the Internet from its own success?

The Internet is in for some big trouble. Researchers estimate that attacks on the Web doubled every year from 1997 to 2003, when the organization that tracks the numbers said that incidents had become so common and wide-ranging that it was impossible to distinguish one from another to make a reliable count. Between 2005 and 2006 IBM's Internet Security Systems reported a 40% increase in vulnerabilities, and more than half of the resulting attacks, all initiated remotely, gained full access to a computer's programs and data. Google and StopBadware.org, who work together to identify and eliminate incidents on Web servers, reported a hundredfold increase in malware activity from August 2006 to March 2007.


Many professionals are already aware of the extent of the problem: Fully two-thirds of industry experts, analysts, and academics predict that within the next decade there will be serious attacks on the Internet’s infrastructure or the power grid. Why are we so vulnerable to attack? Is there any way to stop it?

In The Future of the Internet and How to Stop It, Jonathan Zittrain undertakes to explain why the Internet is so vulnerable and to examine possible ways of avoiding disaster. He warns against protecting products and communication channels with more dedicated, tethered appliances, a path that he claims can lead to overregulation, lack of innovation, and external corporate control. To avoid these scenarios, he proposes some guidelines for a future in which the PC and the Internet continue as open, general-purpose platforms that encourage innovation, rather than becoming locked down in an effort to prevent lethal attacks.

Generative Technologies

Both the PC and the Internet are generative technologies. Typically such technologies get their start in obscurity, built on free and open contributions from a wide variety of people in many fields. Each technology and the culture it engenders develop in ways that encourage innovation and welcome improvements from many contributors. Thus with the Internet, a group of DARPA scientists and engineers started the project as a way for researchers in different locations to communicate easily. In the case of the PC, the Apple II was a platform that invited people to tinker with it and write their own programs.

Eventually these technologies unexpectedly and quite spectacularly hit the mainstream and, as they did so, they became victims of their own success. The innovative and congenial spirit of the early community left them vulnerable to users who would abuse or subvert the technology, or who simply have no interest in contributing to further innovation. The increase in bad code has made PCs ever more vulnerable to attack, as the proliferation of viruses and worms attests. There are two ways for the Internet to end, Zittrain speculates: the bang and the whimper. The intense proliferation of malware could cause widespread, simultaneous failures of servers and systems, or it could exact "death by a thousand cuts," a slow disabling of networks and computers in a random but nevertheless lethal fashion.

Top-Down Control

Social, political, and business organizations tend to respond to such risks and threats of failure by trading off openness and innovation for security and stability. Zittrain is concerned that PCs may be locked down, becoming more like tethered, sterile appliances controlled remotely by corporations or governments. It's a dystopian view in which systems of control slide inexorably into regulation and censorship. Today's tethered appliances include mobile phones, video game consoles, TiVos, iPods, iPhones, and BlackBerries. They ship with carefully circumscribed functionality, and no one but the vendor can change them. Zittrain further observes that software as a service also has the potential to turn PCs into mere dumb terminals that access websites to run software, stifling innovation even further. Users are left with fewer choices and potentially less privacy as the question of data ownership enters a gray area.

Taking a Page from Wikipedia

Zittrain does have a plan for avoiding either the collapse of the Internet or the equally dark road to control and censorship. His plan, however, is based on the Wikipedia model, which achieves some stability and security without resorting to "lock-down" methods. Zittrain praises Wikipedia not just as an example of fruitful user-generated content but as a testament to the idea that people from vastly different backgrounds can achieve something together. Here's what he sees as Wikipedia's recipe for success: "A light regulatory touch coupled with an openness to flexible public involvement, including a way for members of the public to make changes, good or bad, with immediate effect; a focus on earnest discussion, including reference to neutral dispute resolution policies, and a core of people prepared to model an ethos that others can follow. With any of these pieces missing Wikipedia would likely not have worked."

So can the Internet remain innovative and open, or is it doomed to a future of lockdown, regulation, and control? Zittrain fervently hopes the former is possible. To that end he envisions a loose collaboration of business organizations and individuals working to re-imagine the structure of the Internet and to create robust, easily deployed tools that will protect websites, servers, and PCs alike. He calls for good will and a light regulatory touch, a kind of Manhattan Project for the Internet. Such an endeavor would bring together the smartest technologists to solve the problems of security and privacy without overregulation and government control. The project would also require intellectually and artistically talented people to stimulate users to participate and become more than consumers. Zittrain depends on people of "good will" to contribute to the work.

Why the Manhattan Project Worked—in the 1940s

The problems to be solved—malware, lack of security, poor code—are certainly difficult ones that will take significant and sustained focus from the best minds to resolve. But while Wikipedia attracts a lot of volunteers, who contribute countless hours to building the site, that project also fosters a culture where the kinda good and the sorta right will do—hardly the recommended modus operandi for a Manhattan-type project. And it is unclear that these volunteers are the best minds our society has to offer. Most truly gifted and talented people are already busy doing other things. The Manhattan Project worked, but the government paid the best scientists and ablest administrators who collaborated on it. And it was under the threat of a world at war that those researchers did their brilliant work. Depending on good will and a light regulatory touch may work for Wikipedia and voluntary projects like it; it is unlikely to do the trick in solving the knotty problems that plague the Internet today. Although Mr. Zittrain has ably and fully laid out the hazards the Internet faces, his plan to resolve them, while admirable, is unviable, given the commercial foundation of the Internet itself. The vastness of what many hands have wrought is matched only by the problems its future holds.

More information

The Culture of Wikipedia
The Future of the Internet Official Website

Buy The Future of the Internet at Amazon