Digital Athena

Is Our Digital Future Inevitable or Do We Have Options?

12/10/2012

1 Comment

 
Back to my blog after some professional and personal interruptions. I thought I’d begin again by talking about the way many people so readily embrace the new technologies that stream out of software and hardware companies and into their lives. Most dismiss objections about the changes in our lives, in our relationships—indeed in our brains—that those new technologies may trigger. For better or worse, it’s inevitable, people say. Stopping the changes, or even the rate of change, is impossible now. Many pundits and members of the digerati enjoy not just defining the current trends but also predicting the future, whether it be the next new thing or a broad vision of social change over the upcoming twenty years or more.

But is it all inevitable? I recently came across another take on the issue of inevitability and the impossibility of stopping the relentless march of change over time. In Thomas Mann's Doctor Faustus, the narrator reflects on the consensus in his intellectual circle in Munich that as the 1930s unfolded, Germany was in for "hard and dark times that would scoff at humanity, for an age of great wars and sweeping revolution, presumably leading far back beyond the Christian civilization of the Middle Ages and restoring instead the Dark Ages that preceded its birth and had followed the collapse of antiquity."

Yet Mann's narrator observes that no one objected to those conclusions. No one said this dark version of the future must be changed, must be avoided. No one said: "We must somehow intervene and stop this from happening." Instead they reveled in the cleverness of their insights, in their recognition of the facts and their inevitable results. They said: "'It's coming. It's coming, and once it's here we will find ourselves at the crest of the moment. It is interesting, it is even good—simply because it is what is coming, and to recognize that fact is both achievement and enjoyment enough. It is not up to us to take measures against it as well.'"

It is a predicament well worth remembering, I believe, as we listen to our own technology enthusiasts. Our dark age ahead may not have death camps and atomic bombs, but it has the possibility of being just as pernicious and inhumane. It could well be a time in which, celebrating the wonders of technology, we ignore the best of what it means to be human. We would do well to consider our choices while we still can.


The Non-Stop Now of Social Media: From Wise Crowds to Group Narcissism

4/2/2012

5 Comments

 
Yesterday I attended a conference on social networking at the Boston Museum of Fine Arts and was once again struck by how absolutely overwhelmed and engulfed our modern lives are with digital machines and technology. They are omnipresent. We carry them with us wherever we go. We transact business through them. We communicate with friends and family and even with that larger amorphous network of acquaintances and distant mutual friends and their mutual acquaintances. And if we are honest, most of us will admit that we even take those devices into our bedrooms with us when we retire for the night. It is now so easy for our lives, our minute-to-minute experience of life, to become permeated and mediated by our technologies.

The conference, "Are Social Networks Really Social?" was sponsored jointly by the museum and the Boston Institute for Psychotherapy, so I would guess therapists of various sorts were in the overwhelming majority. But there were more ordinary folk like me who came out of curiosity to hear three speakers on the topic of social media: (1) a psychologist and author who specializes in technology, Sherry Turkle, (2) a novelist, Helen Schulman, who has written about the nature and consequences of all things digital on the lives of ordinary families, and (3) an artist, Rachel Perry Welty, who has explored media like Facebook and Twitter as performance spaces. Over a long and rich afternoon those speakers and the audience pondered how social media—everything from email to smartphones to Facebook—affects both our relationships with others and our own psyches. There was general consternation, even fear, and some sadness too, about how distracted, unfocused, and isolated individuals are becoming in our society.

Many ramifications of such behavior came up: People are less productive, and they're less capable of sustained and complex thinking. Some observed that there's an intolerance, perhaps even an inability, to actually cultivate solitude. Not unrelated is a strong tendency to avoid, even again to fear, having direct conversations with others. And this of course leads to a lack of intimacy as well as empathy. Sherry Turkle worries that many people are actually substituting "connections" for "conversations."

I'm thinking it may even be worse than that, because when people post some thoughts on Facebook, send out a tweet, or text someone, they are often not "connecting" so much as they are "performing." My own experience with Facebook and its current invitation to participate ("What's on your mind?") is that it is a site for self-promotion, or, as Norman Mailer once humbly (or maybe wryly) called a collection of his short works, "Advertisements for Myself." (Mailer was very good at the self-promotion thing, well before Web 2.0.)

The Yale computer scientist David Gelernter recently wrote a diatribe in The Wall Street Journal primarily against the careless, disposable nature of "digital words," and how sloppy and lazy (and idiotic) texting and smiley faces really are. Yes, I agree that it's all regrettable, and one can only hope that everyone will come to their senses eventually. But what is even more interesting is Gelernter's observation that Generation-i is stuck in the "now," neither pondering the past nor planning for the future. It's the permeation and flow of the continuously new. "Merely glance at any digital gadget and you learn immediately what all your friends are doing, seeing, hearing, . . . and (if you care) what's going on now any place on earth. The whole world is screaming at you. Under the circumstances, young people abandon themselves to the present. Group narcissism takes over, as Generation-i falls in love with its own image reflected on the surface of the cybersphere."

Group narcissism. It seems we have far more of this phenomenon than we do of wise crowds and smart mobs. But it's not clinical narcissism, which is a serious personality disorder characterized by dramatic emotional behavior, a deep need for admiration, an inflated sense of self-importance, and an underlying fragile self-esteem. No, this narcissism refers to the simpler classical myth of the beautiful Narcissus, who fell deeply in love with his own image in a pool of water as he drank from it. But he could never embrace or possess the image, so rather than relinquish it he lay down by the side of the pool and was gradually consumed by the flame of his own self-passion.


Facebook Era: Is the End of Its Dominance Near?

3/23/2012

5 Comments

 
It's starting to look like the beginning of the end. Signs are appearing, here and there, that suggest the dominance of Facebook in the public imagination and the widespread obsession in popular culture with all things Facebook may be waning, oddly enough, even before it launches its initial public offering later this spring. A recent New York Times story about a new social website called Pinterest suggests some reasons why.

Pinterest's express goal is similar to Facebook's but with a twist. Pinterest intends to "connect everyone in the world through the 'things' they find interesting." It declares itself a "virtual pinboard," which allows users to place pictures of objects they like and organize them. Others can view them and copy them for their own uses. Interest categories include food and recipes, wedding planning, garden styles, and just about anything else you can think of. Pinterest at once both leverages Facebook and has certain advantages over the Facebook model. It uses a "tie-in" with Facebook, so that each time a user signs up, the Pinterest website automatically "follows" all of that user's Facebook friends who are also members of Pinterest, sending them email notifications of the new enrollee.
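For the technically curious, here is a minimal sketch of the kind of sign-up tie-in described above. The function names and data structures are my own invention for illustration; this is not Pinterest's actual code.

# Hypothetical sketch (Python) of the sign-up tie-in described above.
# All names and structures here are invented for illustration only.
def on_signup(new_user, facebook_friends, pinterest_members, follow, send_email):
    """When a new user joins, automatically follow their Facebook friends who
    are already members, and email those friends about the new enrollee."""
    friends_already_members = set(facebook_friends) & set(pinterest_members)
    for friend in friends_already_members:
        follow(follower=new_user, followed=friend)
        send_email(to=friend,
                   subject=new_user + " just joined",
                   body=new_user + " is now following your pinboards.")
    return friends_already_members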

The new website also distinguishes itself from Facebook by expressly discouraging self-promotion: "If there is a photo or project you're proud of, pin away! However, try not to use Pinterest purely as a tool for self-promotion." Take that, Facebook! Pinterest also may have a market advantage over Facebook. The site attracts a certain type of person, one who likes to collect or curate or scrapbook. The members tend to be hobbyists, and as such they tend to be avid. One investor characterizes them as "voracious" in their use of the website. Ironically, it tends to encourage true sharing in a way Facebook does not. As Susan Etlinger, a technology analyst and consultant, describes the difference: "Facebook used to be about connecting with your friends, but now it's so focused on the individual, with curating your timeline and how you present yourself to the world." Facebook isn't structured to bring together communities of interest.

Furthermore, because Pinterest members tend to be both collectors and people who shop online, they represent an active group of purchasers with vast market potential down the road. Like so many start-ups, Pinterest has to date relied on private investors and venture capital—to the tune of $40 million. But eventually the company will have to figure out how it is going to make money, and many investors believe its member base could represent a real gold mine.
 
Other news out of the South by Southwest technology conference in the past few days indicates that start-ups are distancing themselves from Facebook as they see the social networking giant becoming more profit-oriented. They worry that the rich data Facebook currently provides to its users could become accessible only at a price in the future. Although they for the most part continue to have Facebook as part of their model, more and more start-ups are making sure they build their own apps in ways that allow them to be independent of Facebook. As one entrepreneur puts it, Facebook is "our greatest opportunity and our greatest risk." For more see "Start-Ups Resist Facebook's Pull" in last Wednesday's edition of The Wall Street Journal.

Does Thinking Have a Future?

1/11/2012

2 Comments

 
I recently bought a promising book called The Future of Thinking: Learning Institutions in the Digital Age at the MIT Press Bookstore. Great title, I thought, and it looks like it's written by two smart people, Cathy Davidson from Duke, and David Theo Goldberg from the University of California system. Plus the project was underwritten by the MacArthur Foundation as part of a series for the John D. and Catherine T. MacArthur Foundation Reports on Digital Media and Learning. I enthusiastically plopped a copy into my basket and went on browsing. This’ll be good, I thought.  

The book itself, however, disappointed, and it’s been a little hard to figure out why. The actual book is less about thinking than it is about the learning environment the authors envision for the future.  But that’s OK.  I understand that the structures of our existing siloed educational institutions were conceived of centuries ago. They certainly don’t reflect the way people are learning informally outside of those institutions today. But wait. Do they have to, I asked myself? The authors seem to say yes. Is this the new reality?

Although from time to time the authors claim that they do not advocate using digital tools and technologies just because they are there, they do in fact believe that learning within institutions should better reflect how people interact outside of formal learning environments today, through social networking tools, massively multiplayer online video gaming, virtual learning institutions, interactive collaborations, and open-access public forums. So let's see where that takes us.

Existing traditional educational institutions are failing us, the authors argue. They envision future institutions as  "mobilizing networks." This is all very up-to-date indeed, I thought. Instead of top-down authoritative teaching and learning, the mobilizing network would support peer-to-peer learning and collaborative knowledge production. Digital learning, they emphasize, is participatory learning— which is in part code for not "teaching to the test." That’s fine. Teaching to the test has never worked very well anyway.

But there was still another problem with this book about the future of thinking. There's a way of arguing in it that says: "Here we are. Here are roughly the outlines of the arguments. Here are some drawbacks. But still we must go on with our vision, mustn't we?" And the authors assume an audience that is fully on board with their collaborative thinking. For example, they are concerned about how Web 2.0 as a network of "many-to-many collaborating and customizing together" may evolve in the wrong way as corporations such as Google gain control over more and more personal, institutional, and national information. But never mind: "Yet even though the concept is vague or open to exploitative, monopolistic, or oligopolistic (wow!) practices, Web 2.0 is a convenient way of signaling a new type of institution. It is one where contributions are distributed rather than coming from a single physical location and where ideas are shared outside the normal rules of tenure, credentialing, and professional peer review." Is there any room in their collaborative world for skepticism? For questioning whether loose collaboration-for-all is right for every age group and every discipline at all times?

There's also often a troubling lack of in-depth reasoning behind their advocacy of certain processes in the new forms of learning. Many people read differently these days, the authors argue. By implication our institutions should reflect these new processes, apparently with no analysis of their inherent value. Here's how the authors redefine reading for the digital age: "Even online reading . . . has become collaborative, interactive, nonlinear, and relational, engaging multiple voices. We browse, scan, connect in mid-paragraph if not mid-sentence to related material, look up information relevant or related to what we are reading. Sometimes this mode of relational reading might draw us completely away from the original text, hypertextually streaming us into completely new threads and pathways." It's an interesting description of what often happens online, but does it have anything to do with learning? Is it supposed to in the future?

Collaborative, many-to-multitudes, virtual, peer-to-peer—the authors present a remix, if you will, of some au courant concepts. From Chris Anderson's The Long Tail, they project economic and business theory onto the role of the university: "If we do indeed live on the long tail, . . . then virtual institutions may be the long virtual tail that wags the dog of traditional institutions without which it could not exist." Huh? Other popular ideas, such as those from Clay Shirky's Here Comes Everybody: The Power of Organizing without Organizations and Yochai Benkler's The Wealth of Networks, are added into the mix, their adherents all seemingly part of the ideal audience for this book.

I suspect the repetitive, jingoistic, and sometimes contradictory statements that emerge from the text reflect how the text was generated. You got it: very collaboratively. Not only did the two authors collaborate, but they then posted the draft online and invited comments—for a year. The draft was also presented in three additional public forums. Lastly, the authors worked to incorporate many of the comments and concerns voiced by others. It is a form of writing by committee that can wobble under the weight of the various points of view if not carefully shepherded by one (or even two) good writers with a single strong vision.

It isn't that the book doesn't offer some food for thought on many issues. It does. How do we create multidisciplinary forums and projects within the currently rigid institutions? Is learning these days more a process of learning how to learn than of learning what? Is it less about actually acquiring the information, since the information will always be there to be acquired when needed? And what about credibility on the Internet? How prominent an emphasis do we need to give to teaching students how to discern credible sources of information as a new part of that learning "how" process?

But as for the future of thinking . . . well, it seems there still needs to be more thought put into that. I just don’t know how collaborative it has to be.


Starting to Face Facebook

12/16/2011

2 Comments

 
I'm not going to declare this a full-fledged trend yet, but it seems to be becoming acceptable to question the merits of Facebook. Just this week the New York Times reported that some people are opting out. And Daniel Gulati blogged on the Harvard Business Review site that Facebook is actually making people "miserable." Based on previous comments on his blog and research he did with young business entrepreneurs for a recent book entitled Passion & Purpose, Gulati finds that there are three ways that Facebook is adversely affecting personal and working lives these days:
 (1) It creates constant comparison and competition. Because Facebook tends to promote, if you will, self-promotion, people find themselves comparing their own lives and achievements to the top 1% of their friends'.
(2) Time becomes fragmented. This is of course a problem with our digital mobile lives in general, but Gulati observes that because one can log onto Facebook from multiple devices, people tend to switch back and forth a lot, resulting in the kind of multitasking that lowers productivity and decreases people's ability to focus on a single task for a sufficient period of time.
(3) Ironically Facebook usurps real-life interchanges—face-to-face meetings and phones calls—thereby negatively affecting close relationships.
Gulati thinks that quitting Facebook isn't a realistic choice, but many others, both in the comments on his post and in Jenna Wortham's New York Times article, disagree. Growth figures for Facebook in the US may support this view. The US growth rate for the year ending October 2011 was 10%, down from 56% the previous year. Some of this may reflect reaching a saturation point, but it will be interesting to see how the numbers look in the spring as the company approaches its public offering. The perennial problem at Facebook, according to Ray Valdes, an analyst at Gartner, is keeping the millions of users it already has and making sure they are actively participating in the site. "They are likely more worried about the novelty factor wearing off," observes Mr. Valdes.


Analyzing Social Media: The New Way to Pitch Pepsi

11/18/2011

3 Comments

 
Every day, somewhere in the cloud, Bluefin Labs of Cambridge, MA, is analyzing 200 channels of US television broadcasts around the clock. That's something like an unfathomable 172 terabytes of raw video data (and a goodly bit of drivel, one might add). The company then collects programming information, including channels, broadcast schedules, and keywords to tag each show and ad. For each group of ads, Bluefin workers make the initial product identification, and then their analytics system automatically identifies repeat airings.

But that's not all: Bluefin additionally monitors 300 million public social-media comments per day for the keywords associated with programs and ads. Out of that huge number there are on average about 10 million comments about TV content. About 1.4 million of those are made in what Bluefin considers the relevant context for a particular show or advertisement, which is about three hours before to three hours after the actual broadcast. These comments pop up primarily in tweets, but they also include public Facebook posts and posts on a few other media sites. The company also tracks the online activity of the 9.8 million people who have made online comments about television in the last 90 days.
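For readers who like to see the mechanics, here is a rough sketch of the relevance filter described above: keep only comments that mention a show's or ad's keywords and fall within three hours of air time. The data shapes are my assumptions, not Bluefin's actual pipeline.

from datetime import datetime, timedelta

RELEVANCE_WINDOW = timedelta(hours=3)  # three hours before or after the broadcast

def relevant_comments(comments, keywords, air_time):
    """Keep comments that mention any keyword and were posted within the
    three-hour window on either side of the broadcast.
    comments: list of dicts like {"text": str, "timestamp": datetime}"""
    kw = {k.lower() for k in keywords}
    return [c for c in comments
            if abs(c["timestamp"] - air_time) <= RELEVANCE_WINDOW
            and any(k in c["text"].lower() for k in kw)]

# Example: a tweet 90 minutes after air time that names the show counts as relevant.
sample = [{"text": "Loved tonight's episode!", "timestamp": datetime(2011, 11, 16, 21, 30)}]
print(relevant_comments(sample, ["episode"], datetime(2011, 11, 16, 20, 0)))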

What does all this add up to (no pun intended)? Based on research originally begun at the prestigious MIT Media Lab, Bluefin Labs is building a business out of creating a context in which television advertisers can understand which ads have more success so that they can make better decisions about where to place their ads in the future. Television producers can also tap into viewers' responses and modify content to better please, and perhaps increase, their audiences.

The effort to better understand audience responses through analyses of social media is still in its infancy. Target marketing is the new Holy Grail for marketing executives. And what Bluefin has been able to accomplish is in some ways impressive. It has created a context that shows how the same viewers might respond to different ads when watching different shows. It ultimately provides two measurements as well. The first is "response level," which counts the number of people commenting on any given show or ad. The second is "response share," which indicates the percentage of all responses that a single show or ad has garnered. The company is already attracting some large corporate clients, such as PepsiCo, CBS, Comcast, Fox Sports, and Turner Broadcasting, writes David Talbot in the recent print issue of Technology Review.
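In other words, the two measurements boil down to simple counting. Here is a quick sketch under assumed data shapes; it is my illustration, not Bluefin's code.

from collections import defaultdict

def response_metrics(responses):
    """responses: list of (item, commenter_id) pairs, where item is a show or ad.
    Response level is the number of distinct people commenting on the item;
    response share is the item's percentage of all responses collected."""
    people = defaultdict(set)
    counts = defaultdict(int)
    for item, commenter in responses:
        people[item].add(commenter)
        counts[item] += 1
    total = sum(counts.values())
    return {item: {"response_level": len(people[item]),
                   "response_share": 100.0 * counts[item] / total if total else 0.0}
            for item in counts}

# Example: three responses in all, two of them about the same ad.
print(response_metrics([("ad_A", "viewer1"), ("ad_A", "viewer2"), ("show_B", "viewer1")]))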

That's a remarkable list, and it indicates just how important analytics is becoming for broadcasters and manufacturers. TV advertising is big business, after all: companies spend $72 billion annually on it these days, and that spending dictates the success or failure of TV broadcasting. And Americans watch a lot of television. According to the Nielsen Company, they spend 20% of their waking hours watching television, some of them multitasking by using their laptops or smartphones at the same time.

Still, there are some reasons to be skeptical. For one, as Hill Holliday advertising executive Mike Proulx points out, the connection between social-media reaction and the impact of content is still just a theory. There's no solid evidence to date that proves it is a valid connection. Another major drawback, it seems to me, is that people who actively tweet or post comments on Facebook or elsewhere about television programs are only a subset of the couch-potato population, no doubt skewing younger for one thing. What's more, it's also becoming more and more common for people to record television shows and watch them at random times, making the three-hours-before-and-after window irrelevant for many viewers these days.

So it remains to be seen if all the effort and advanced expertise that is being poured into advertising analytics and social media will generate better advertising, or whether corporations will still be saying ten years from now what the merchant John Wanamaker said over ninety years ago: “Half of the money I spend on advertising is wasted; the problem is I don’t know which half.”


Texts and the Texter

10/25/2011

0 Comments

 
 “We shape our buildings; thereafter they shape us,” Winston Churchill observed about the symbiotic relationship between our architecture and ourselves. The same may be said for how we interact with our technologies.

Take a look at texting. The numbers seem to grow all the time, but as of the Kaiser Foundation study published in January 2010, young people were sending on average 3,000 texts per month and were spending four times as much time texting as they were actually talking on their phones. And texting has influenced communications in several ways:

First of all, because people text on their cellphones, most must use a virtual keyboard on a touchscreen (BlackBerry owners get to use the tiny physical keys, which is slightly more user friendly, I suppose). In either case, the keys are much smaller than the average computer keyboard's keys, so it's easy to make mistakes. Plus, using the virtual keyboard creates another level of awkwardness because you have to shift to a second (and on some cellphones a third) view to access all the characters on the QWERTY keyboard. In addition, texting has the "short message service" limitation of 160 characters.

Then there's the speed at which the communication is sent. Texts are delivered pretty much instantaneously. This leads people to think that they must respond at roughly the same speed. Delaying a response seems, for many, to imply that you're ignoring the person who contacted you.

The combination of an awkward virtual keyboard, the limited length, and the pressure to respond rapidly engenders a kind of shorthand: contracted words (Xlnt for excellent, rite for write), pictograms (b4 for before, @om for atom), initializations (N for no, LOL for laughing out loud, CWOT for complete waste of time), and nonstandard acronyms (anfscd for and now for something completely different, btdt for been there, done that, hhoj for ha, ha, only joking). Notice how the shorthand becomes more and more cryptic, and we haven't even talked about the emoticons—those variations on the ubiquitous smiley face using strings of punctuation.

I know I'm old—way over thirty—but texting seems to me like the new pig Latin—another code designed to communicate secretly and to exclude others. In the case of pig Latin, the aim was to exclude parents. And for some ages the same may be true of today's texting. It's a silent and secret form of communication one can do in one's lap under the dinner table. So essentially the technology of sending written messages via cell phones creates private languages.

Texting can be a convenient way to quickly notify someone, but the effects, especially for younger people, can be more far-reaching and burdensome and hardly convenient. Sherry Turkle met with one sixteen-year-old named Sanjay during her research for her new book Alone Together. He expressed anxiety and frustration around texting. He turned off his phone while he spoke with Turkle for an hour. Turkle writes: "At the end of our conversation, he turns his phone back on. He looks at me ruefully, almost embarrassed. He has received over a hundred text messages as we were speaking. Some are from his girlfriend, who, he says, 'is having a meltdown.' Some are from a group of close friends trying to organize a small concert. He feels a lot of pressure to reply to both situations and begins to pick up his books and laptop so he can find a quiet place to set himself to the task. . . . 'I can't imagine doing this when I get older.' And then, more quietly, 'How long do I have to continue doing this?'" Sounds more like he's facing a prison sentence than enjoying continuous connection . . .


Our Worst Selves, or Toward an Understanding of Anthony Weiner

6/20/2011

0 Comments

 
We are not, apparently, our best selves when we are online, according to Elias Aboujaoude, author of the new book Virtually You: The Dangerous Powers of the e-Personality. In fact, being online seems to bring out the worst in some people, from harboring delusions of grandeur (thinking we have God-like capabilities) to out-of-control gambling (there are currently 1,300 web sites devoted to gambling, and the number of participants is growing at an alarming rate every year). Impulsivity, infantile regression, viciousness, narcissism, and aggression—Aboujaoude sees these as the five major unpleasant traits that may emerge in the e-personality of any of us.

Dr. Aboujaoude is a psychiatrist and the director of both the Obsessive Compulsive Disorder Clinic and the Impulse Control Disorders Clinic at Stanford University School of Medicine. Based on his clinical experience and a good bit of research, Aboujaoude observes that tendencies developed online in one’s e-personality can affect real-life behaviors as well. Along the way he offers many very thoughtful insights into how online behaviors evolve and how they can ultimately change overall personality.

One particularly relevant insight involves how texting and sites like Second Life can impact real-life behaviors:

“The inner child that comes alive online or on the iPhone or Blackberry keypad acquires a ‘voice’ that is playful and orthographically challenged like a kid’s, but calculating and potentially dangerous, like an adult’s.” He says it is this “toxic mix of dark desires in the virtual world and the immature, barely oral phase to which many adults regress online” that contribute to much of the sexual, pornographic, and predatory communications on the Internet.

Aboujaoude also observes that the speed of online communication, coupled with the facelessness of such interactions, inclines people to more quickly enter into discussions of intimate details of their lives with strangers. Online communication can rapidly become “hyperpersonal,” which in turn contributes to real-life sexual “hook-ups” when people first meet—because by the time they meet they feel they know everything about the other person. Anthony Weiner aside, these kinds of insights go a long way to explaining how people we thought to be perfectly sensible can behave quite differently online (and perhaps offline in their real lives as well).

In the end, the author does not advocate that we simply turn off our connections to the Internet and return to life as it was fifteen or so years ago. No matter how serious the consequences of our being online can become, he has no illusions about a return to yesteryear. Rather, he suggests we use caution in our actions online. He believes we need to first know ourselves before we can safely and maturely interact online. Aboujaoude calls for more research into online behavior as well as for new paradigms both for parenting and for educating our children. And he holds onto the hope that we will survive the changes the Internet is making just as we survived the invention and proliferation of the steam engine in the Industrial Revolution. The question remains, however, whether the changes the Internet brings are not more insidious, more pernicious, and more pervasive than those brought about by the inventions of the Industrial Age. No one as yet seems to know. I suppose we'll just have to wait and see.


Is Addiction a Useful Concept for Media Use?

5/26/2011

0 Comments

 
Let's face it, we are all "users." Anyone who accesses a computer software program has been known as a "user" for many years now. No doubt, the term originated somewhere back in the mainframe age, when people had to sit at a terminal, log on, and identify themselves before they could access the mammoth machines, machines that filled whole buildings in the sixties and whose power is now dwarfed by that of your average smartphone. Today, however, many experts from various specialties are starting to denounce the addictive capacity of the latest technologies. They are saying many users are in fact becoming addicts. But how meaningful is it to talk about addiction when referring to people who constantly or continually use computers and their mobile devices to surf the Web, text with their friends, and check for email?

The latest draft of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, a set of standards published by the American Psychiatric Association, posits a new category of mental disorder, called “Behavioral Addictions,” and it suggests, for starters, just one such disorder: gambling. The physiological rationale for the new category is that such behavior has the same clinical pattern as substance addictions, that is, an activity originally undertaken for pleasure becomes compulsive. The addicted person ceases to derive much pleasure from the activity but continues to pursue the pattern despite diminishing gains and increasing cost. Addicts lose control over their behavior. The activity begins to control them. Neurobiologically, experts claim, addictive behaviors follow the same path in the brain, generating the euphoria that dopamine creates, and leading addicts to repeat their behavior in search of new pleasures.

So how do experts define the criteria for “Internet” addiction? To begin with, they’ve identified six criteria that must be present:

Preoccupation—Thinking constantly about previous online activity or anticipating the next one.

Tolerance—Needing longer periods online in order to feel satisfied.

Lack of control—Finding it impossible to cut back or stop.

Withdrawal—Stopping induces restlessness, irritability, other changes in mood.

Unintended overuse—Repeatedly staying online longer than intended.

In addition, the user must experience at least one of three criteria indicating that the online activity is negatively affecting his or her life: (1) losing, or risking the loss of, something important, such as a job, a big opportunity, or a personal relationship; (2) concealing and/or lying about time spent online; and (3) using the activity to escape real-life difficulties.
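Put another way, the screen works like a simple decision rule: every one of the core criteria listed above must be present, plus at least one of the three negative-impact criteria. A tiny sketch of that logic, purely as an illustration and obviously not a diagnostic instrument:

def meets_internet_addiction_screen(core, impact):
    """core: booleans for the criteria listed above (preoccupation, tolerance,
    lack of control, withdrawal, unintended overuse).
    impact: booleans for the three negative-impact criteria.
    All core criteria must be present, plus at least one impact criterion."""
    return all(core) and any(impact)

# Example: every core criterion present, plus concealing time spent online.
print(meets_internet_addiction_screen(core=[True, True, True, True, True],
                                      impact=[False, True, False]))  # True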

Psychologist Sherry Turkle doesn’t like the idea of labeling computer overuse as an addiction because, she claims, it calls for one solution: stopping. And she believes we must learn how to live with our technologies, that we can’t go back. “The idea of addiction, with its one solution that we know we won’t take, makes us feel hopeless. We have to find a way to live with seductive technology and make it work to our purpose,” Turkle writes in her new book Alone Together: Why We Expect More from Technology and Less from Each Other.

It is certainly true that we can't go back—the Internet and cell phones aren't going away, and there will no doubt be even more seductive technologies to come. Still, it is hard to ignore the neurological science that tells us some people are rewiring their brains in ways that make them crave more media use, causing them to lose control of their time and how they spend it. Yet it may also be true that the only people at risk are those already predisposed to develop some sort of compulsive, addictive behavior in any event. But I do have one more nagging thought that just won't go away. It's what one of Sherry Turkle's young research subjects observed about the pull and the power of our modern technology. This sixteen-year-old girl perhaps identifies the real problem with this postmodern life of ours: "Technology is bad because people are not as strong as its pull."


Web 2.0: A Conversation Lost

5/13/2011

0 Comments

 
The art of conversation is so twentieth century. It seems that Web 2.0 has replaced the need for conversing entirely. For those who send hundreds of text messages each day, who constantly check and update their Facebook Walls, even phone calls are passé—they're far too time-consuming, too emotionally demanding, and just plain too complicated. Deval, a senior in high school whom Sherry Turkle cites in her new book Alone Together, observes: "A long conversation with someone you don't want to talk to that badly can be a waste of time." By texting, Deval explains, he only has to deal with direct information and not waste time on conversation fillers. At the same time, however, the high school senior confesses that he doesn't really know how to have a conversation, at least not yet. He thinks he might soon start to talk on the phone as a way to learn how to have an actual conversation: "For later in life, I'll need to learn how to have a conversation, learn how to find common ground so I can have something to talk about, rather than spending my life in awkward silence."

Neurologists and psychologists worry a lot today about the lack of face-to-face and voice-to-voice interaction that Web 2.0 enables. They point out that it is especially important for adolescents to have direct interaction with others because it is during the late teenage years and early twenties that the brain develops the ability to understand how others feel and how one’s actions may affect others around them. The underdeveloped frontal lobes of younger teenagers, explains Dr. Gary Small, Director of the UCLA Memory and Aging Research Center, lead teenagers to seek out situations that provide instant gratification. Younger teenagers tend to be self-absorbed. They also tend to lack mature judgment, are unable to understand danger in certain situations, and have trouble putting things in perspective.

One prevalent habit that impedes the normal development of the frontal lobes to the level of maturity one expects to see in adults by their mid-twenties is multitasking, says Dr. Small. The ability of multiple gadgets to allow young adults (and others) to listen to music, watch TV, email or text, and work on homework at the same time can lead to a superficial understanding of information. And all this technology feeds the desire for novelty and instant gratification, not complex thinking or deep learning. Abstract reasoning also remains undeveloped in such an environment.

High school senior Deval believes he can learn to have conversations by talking on the phone. But mastering the art of conversation is not the same kind of learning as figuring out how to use the latest smartphone. Experts say it takes practice in listening to other people and learning how to read their faces and other gestures to fully understand what another person is feeling and saying. There are deeply intuitive aspects to learning how to fully converse with someone, what Gary Small calls the "empathetic neural circuitry" that is part of mature emotional intelligence. Researchers say it is too early to know whether "Digital Natives," those born after 1980 who have grown up using all kinds of digital devices as a natural part of the rhythm of their lives, will develop empathy at all, and if they do develop it, how it might differ from what empathy means today.

What the experts do know is that hours spent in front of electronic screens can actually atrophy the neural circuitry that people develop to recognize and interpret nonverbal communication. And these skills are a significant part of what makes us human. Their mastery helps define personal and professional success as well. Understanding general body language, reading facial expressions, and making eye contact are all part of the art of empathy. So in this age of superconnectivity, where communications are everywhere and we are always on, we seem to risk losing many of the basic skills that are the hallmarks of effective communication itself.

See also

Alone Together by Sherry Turkle

iBrain: Surviving the Technological Alteration of the Modern Mind by Gary Small, MD, and Gigi Vorgan


