Digital Athena

Is Our Digital Future Inevitable or Do We Have Options?

12/10/2012

Back to my blog after some professional and personal interruptions. I thought I’d begin again by talking about the way many people so readily embrace the new technologies that stream out of software and hardware companies and into their lives. Most dismiss objections about the changes in our lives, in our relationships—indeed in our brains—that those new technologies may trigger. For better or worse, it’s inevitable, people say. Stopping the changes, or even the rate of change, is impossible now. Many pundits and members of the digerati enjoy not just defining the current trends but also predicting the future, whether it be the next new thing or a broad vision of social change over the upcoming twenty years or more.

But is it all inevitable? I recently came across another take on the issue of inevitability and the impossibility of stopping the relentless march of change over time. In Thomas Mann’s Doctor Faustus, the narrator reflects on the consensus in his intellectual circle in Munich that as the 1930s unfolded, Germany was in for “hard and dark times that would scoff at humanity, for an age of great wars and sweeping revolution, presumably leading far back beyond the Christian civilization of the Middle Ages and restoring instead the Dark Ages that preceded its birth and had followed the collapse of antiquity.”

Yet Mann’s narrator observes that no one objected to those conclusions. No one said this dark version of the future must be changed, must be avoided. No one said: “We must somehow intervene and stop this from happening.” Instead they reveled in the cleverness of their insights, in their recognition of the facts and their inevitable results. They said: “’It’s coming. It’s coming, and once it’s here we will find ourselves at the crest of the moment. It is interesting, it is even good—simply because it is what is coming, and to recognize that fact is both achievement and enjoyment enough. It is not up to us to take measures against it as well.’”

It is a predicament well worth remembering, I believe, as we listen to our own technology enthusiasts. Our dark age ahead may not have death camps and atomic bombs, but it has the possibility of being just as pernicious and inhumane. It could well be a time in which, celebrating the wonders of technology, we ignore the essence of what it means to be human. We would do well to consider our choices while we still can.


Mythology for Our Time: The Hero As Multiprocessor

7/18/2012

“I am a multitasker,” my ten-year-old niece declared with a triumphant grin at a recent family get-together. I was horrified, frankly. After all the neuroscientists have been telling us lately about the limitations of our working memory—most people can hold only about seven items in their working memory at any given moment—and about how switching back and forth between tasks actually makes people less efficient, I was appalled to see a member of our younger generation present multitasking as a positive achievement and a model for how to negotiate life.

If recent surveys and current trends are any indication, by the time my niece is 15, she will be checking her Facebook account, watching TV, texting several friends, and doing her homework in a rapid cycle for seven or eight or more hours per day. She will have acquired 365 friends on Facebook and will sleep with her cell phone under her pillow. She will spend a great deal of her time tethered to her machines, alone, “communicating” with others through a truncated set of texting words, abbreviations, and acronyms. The closest she might come on some days to deep emotion will be expressed in a string of emoticons. Her time alone will resemble not solitude, where some contemplation of oneself and one’s life might occur, but rather a muffled isolation within an electronic cocoon.

What draws people to the spell of multitasking? Why is this goal so valued as a continuous activity today? I think it began with a set of metaphors that started making their way into our language, probably in the 1970s, possibly even earlier. I was first struck by this personally during a conversation with a businessman conversant with computer programming, as he described how he “interfaced” with his client. When I asked him what he meant by “interface,” he told me he meant how people connected, just like the 8- or 12-pronged plugs that connected a computer terminal to a mainframe. By the 70s, we had begun to speak and think of human thinking in mechanical terms. By the eighties, the use of computer terminology to describe human thought became commonplace. We “processed” information. We “transferred” knowledge. We “crunched” the numbers. In short, we began to think of ourselves more as calculators than as people. Multiprocessing seemed a natural next step after that.

With the ubiquity of digital devices today, people have begun to emulate the microprocessors with which they share their lives. They have adopted the rhythm of multitasking, breaking down large tasks into smaller steps and processing multiple activities in a nearly simultaneous way. There are many problems with these analogies and the changes in our behavior they foster, but I’ll just mention two. First, we humans are not made to be multitaskers. We basically can do only one relatively involved task at a time (most of us can walk and chew gum at the same time, but that’s different from activities that require real focus). The second problem involves the whole idea of equating human activity with computers. It leaves out very large parts of what makes us human in the first place: creativity, self-awareness, morality, and our abilities to love, trust, empathize, grieve, and experience a whole range of emotions that machines can never understand. All these experiences color our thoughts, one would hope, and make them more deeply human along the way.


Mythology for Our Time III: Using Video Games to Fix Reality

6/26/2012

“The world without spirit is a wasteland. . . . What is the nature of a wasteland? It is a land where everybody is living an inauthentic life, doing as other people do, doing as you’re told, with no courage for your own life.” Joseph Campbell, The Power of Myth

Reality is broken, and video gaming may well provide a way to fix it, according to Jane McGonigal, a game designer and author of Reality Is Broken: Why Games Make Us Better and How They Can Change the World. The subtitle actually sums up the argument of the book. McGonigal argues that playing video games can help people find their core strengths. Essentially she believes that one can use video gaming as positive psychology therapy to learn how to become more optimistic, proactive, engaged, and creative in solving real-world problems. Not surprisingly, the heroes in her book are the video game designers. She believes they can inspire people to give their lives more meaning and lead them to believe they are participating in epic actions, epic lives. She also suggests that people are likely to be more optimistic if they create alternate reality games in real life based on their favorite superhero mythology.

However, it is the subject of the main title, this so-called “brokenness” of reality, that provides a real clue to the mythology of our time. Reality (that is, real life) is disappointing, and in a series of bold statements, McGonigal tells us just how reality is failing us and why games are better. Here’s a sample:

“Compared with games, reality is too easy. Games challenge us with voluntary obstacles and help us put our personal strengths to better use.” Behind this statement is the sad and abiding idea that our real lives are boring, our real work an involuntary burden of unwanted tasks done at someone else’s bidding. “We are wasting our lives,” McGonigal explains.

And again:

“Compared with games, reality is unproductive. Games give us clearer missions and more satisfying, hands-on work.” Reality, it seems, is unstructured and offers few if any opportunities for satisfying work. Again, the work of our everyday lives is inherently tedious, and the goals are often ill-defined and hard to figure out.

One last sample:

“Compared with games, reality is disconnected. Games build strong social bonds and lead to more active social networks.” Real life is isolating, the author says. She cites the demise of extended communities in our everyday lives and refers to Robert Putnam’s landmark work Bowling Alone (2000) about the collapse of organizations and civic participation in the latter part of the twentieth century.

McGonigal argues that video gaming and alternate reality games can be powerful paths to help boost happiness, improve problem-solving and perseverance, and even provide sparks of a sense of community, all of which can be applied to real-world experiences. To be sure, games of all sorts can be fun and give players a change of pace and a respite from the responsibilities of life. But McGonigal goes way beyond the fun part. She is right in saying that we need to take games more seriously, that they are not just an evil force in society offering opportunities for people to waste their time or play incessantly and addictively. But it is questionable whether she is also right in claiming that video games are truly transformational and provide positive experiences that can influence the way people act and think in their real lives away from the video game screen. Her evidence is anecdotal and largely unconvincing.

In the end, it is McGonigal’s perspective that is truly askew. Reality isn’t broken. It’s the relationship between people’s inner lives and their external reality that is out of whack. Life is complex, messy, full of demands, disappointments, inconveniences, and responsibilities. Virtual worlds and games, on the other hand, offer more structure, clearer goals, and hence new ways to feel successful and to communicate. But this does not by any means lead to authentic living. In the mid-1980s, the renowned mythology expert Joseph Campbell observed that many people were leading inauthentic lives. He said that they weren’t connected to their own inner spirit. Nor did they have a sense of the fundamental mystery of life in general. Without a sense of who they really were and their place in the universe, it was not possible to be genuinely engaged with others. And this basis for leading an authentic life, Campbell wrote repeatedly, is what a living myth can provide.

Reality may seem broken for video gamers because life on the screen is so vivid, so complete in its opportunity for vicarious heroism. It is the land of superheroes and super tasks, mythological in the sense that characters and events are larger than life. But these things are not representative of a living mythology, which would inspire inward illumination and outer wonder through its symbols and narratives about modern life. “Myths inspire the realization of the possibility of your perfection, the fullness of your strength, and the bringing of solar light into the world. Slaying monsters [and here Joseph Campbell meant slaying the monsters within the individual] is slaying the dark things,” Campbell told Bill Moyers. “Myths grab you somewhere down inside.” Video games may excite, may amuse, may well elevate one’s mood, but they do not hit you down deep within your spirit. They do not change your life as Campbell defined it when he spoke of living myths.


Texts and the Texter

10/25/2011

 “We shape our buildings; thereafter they shape us,” Winston Churchill observed about the symbiotic relationship between our architecture and ourselves. The same may be said for how we interact with our technologies.

Take a look at texting. The numbers seem to grow all the time, but as of the Kaiser Family Foundation study published in January 2010, young people were sending on average 3,000 texts per month and were spending four times as much time texting as they spent actually talking on their phones. And texting has influenced communications in several ways:

First of all, because people text on their cellphones, most must use a virtual keyboard on a touchscreen (BlackBerry owners get to use tiny physical keys, which is slightly more user-friendly, I suppose). In either case, the keys are much smaller than the average computer keyboard’s keys, so it’s easy to make mistakes. Using the virtual keyboard also creates another level of awkwardness because you have to shift to a second (and on some cellphones a third) view to access all the characters on the QWERTY keyboard. In addition, texting has the “short message service” limit of 160 characters.

Then there’s the speed at which the communication is sent. Texts are delivered pretty much instantaneously. This leads people to think that they must respond at roughly the same speed. Delaying a response seems, for many, to imply that you’re ignoring the person who contacted you.

The combination of an awkward virtual keyboard, the limited length, and the pressure to respond rapidly engenders the kind of shorthand of contracted words (Xlnt for excellent, rite for write), pictograms (b4 for before, @om for atom), initializations (N for no, LOL for laughing out loud, CWOT for complete waste of time), and nonstandard acronyms (anfscd for and now for something completely different, btdt for been there, done that, hhoj for ha, ha, only joking). Notice how the shorthand becomes more and more cryptic, and we haven’t even talked about the emoticons—those variations on the ubiquitous smiley face using strings of punctuation.

I know I’m old—way over thirty—but texting seems to me like the new pig Latin—another code designed to communicate secretly and to exclude others. In the case of pig Latin, the aim was to exclude parents. And for some age groups the same may be true of today’s texting. It’s a silent and secret form of communication one can do in one’s lap under the dinner table. So essentially the technology of sending written messages via cell phones creates private languages.

Texting can be a convenient way to quickly notify someone, but the effects, especially for younger people, can be more far-reaching and burdensome, and hardly convenient. Sherry Turkle met with one sixteen-year-old named Sanjay during her research for her new book Alone Together. He expressed anxiety and frustration around texting. He turned off his phone while he spoke with Turkle for an hour. Turkle writes: “At the end of our conversation, he turns his phone back on. He looks at me ruefully, almost embarrassed. He has received over a hundred text messages as we were speaking. Some are from his girlfriend, who, he says, ‘is having a meltdown.’ Some are from a group of close friends trying to organize a small concert. He feels a lot of pressure to reply to both situations and begins to pick up his books and laptop so he can find a quiet place to set himself to the task. . . . ‘I can’t imagine doing this when I get older.’ And then, more quietly, ‘How long do I have to continue doing this?’” Sounds more like he’s facing a prison sentence than enjoying the joy of continuous connection . . .


Web 2.0: A Conversation Lost

5/13/2011

The art of conversation is so twentieth century. It seems that Web 2.0 has replaced the need for conversing entirely. For those who send hundreds of text messages each day, who constantly check and update their Facebook Walls, even phone calls are passé—they’re far too time-consuming, too emotionally demanding, and just plain too complicated. Deval, a senior in high school whom Sherry Turkle cites in her new book Alone Together, observes: “A long conversation with someone you don’t want to talk to that badly can be a waste of time.” By texting, Deval explains, he only has to deal with direct information and not waste time on conversation fillers. At the same time, however, the high school senior confesses that he doesn’t really know how to have a conversation, at least not yet. He thinks he might soon start to talk on the phone as a way to learn how to have an actual conversation: “For later in life, I’ll need to learn how to have a conversation, learn how to find common ground so I can have something to talk about, rather than spending my life in awkward silence.”

Neurologists and psychologists worry a lot today about the lack of face-to-face and voice-to-voice interaction that Web 2.0 enables. They point out that it is especially important for adolescents to have direct interaction with others because it is during the late teenage years and early twenties that the brain develops the ability to understand how others feel and how one’s actions may affect them. The underdeveloped frontal lobes of younger teenagers, explains Dr. Gary Small, Director of the UCLA Memory and Aging Research Center, lead them to seek out situations that provide instant gratification. Younger teenagers tend to be self-absorbed. They also tend to lack mature judgment, to fail to recognize danger in certain situations, and to have trouble putting things in perspective.

One prevalent habit that impedes the normal development of the frontal lobes to the level of maturity one expects to see in adults by their mid-twenties is multitasking, says Dr. Small. Multiple gadgets allow young adults (and others) to listen to music, watch TV, email or text, and work on homework at the same time, and that can lead to a superficial understanding of information. And all this technology feeds the desire for novelty and instant gratification, not complex thinking or deep learning. Abstract reasoning also remains undeveloped in such an environment.

High school senior Deval believes he can learn to have conversations by talking on the phone. But mastering the art of conversation is not the same kind of learning as figuring out how to use the latest smartphone. Experts say it takes practice in listening to other people and learning how to read their faces and other gestures to fully understand what another person is feeling and saying. There are deeply intuitive aspects to learning how to fully converse with someone, what Gary Small calls the “empathetic neural circuitry” that is part of mature emotional intelligence. Researchers say it is too early to know whether “Digital Natives,” those born after 1980 who have grown up using all kinds of digital devices as a natural part of the rhythm of their lives, will develop empathy at all and, if they do, how it might differ from what empathy means today.

What the experts do know is that more hours spent in front of electronic screens can actually atrophy the neural circuitry that people develop to recognize and interpret nonverbal communication. And these skills are a significant part of what makes us human. Their mastery helps define personal and professional success as well. Understanding general body language, reading facial expressions, and making eye contact are all part of the art of empathy. So in this age of superconnectivity, where communications are everywhere and we are always on, we seem to risk losing many of the basic skills that are the hallmarks of effective communication itself.

See also

Alone Together by Sherry Turkle


iBrain: Surviving the Technological Alteration of the Modern Mind by Gary Small, MD, and Gigi Vorgan




Web 2.0 Connecting: Better Than What?

4/26/2011

For teenagers and early twenty-somethings, many of whom text on average one to three thousand times per month, the whole experience of texting is a conundrum: On the one hand, it offers continual connections with friends; on the other hand, it leaves the texter with a poignant sense of isolation. Says one high schooler in Sherry Turkle’s new book, Alone Together, “Texting feels lonely . . . just typing by oneself all day.” And so it seems with much of Web 2.0 messaging—not just texting, but short-emailing, IM’ing, and Facebook as well. Web 2.0 does have its attractions: Chiefly, it allows people to communicate without the messiness of real-time interactions. The mediation also puts up a screen. On one side, writers can carefully construct their image, choosing when to say something, editing what they actually say, and then making it all look oh-so-casual. On the other side of the screen, receivers don’t fully know what the senders mean—the expressiveness that comes with face-to-face or voice-to-voice communication is missing. And receivers often don’t know how much attention or effort a writer has invested in the message, whether the sender was multitasking, driving, carrying on a conversation, etc. Much of the general context of interpersonal communication is lost.

From the letter to the telegram to the telephone, we progressed in our technology toward better, more direct, and faster means of communicating with one another. But, as newer technologies intervened, starting with the answering machine, followed by voice mail and caller ID, people gained more control. At first this was simply a matter of being able to screen messages and retrieve them remotely.

But then we got email, cell phones, instant messaging, Facebook, tweets, and texting. The result is that we have many and various options for staying in ever closer touch with everybody we know, wherever they are—we don’t even have to know where they are in order to contact them. That’s the good part of the story. However, there’s a whole lot more to this revolution in communication that makes interactions more complicated. Many people have reassigned the bulk of their social lives to the digital realm. Some shun the telephone—not just their landlines but their cell lines as well. “Voice-to-voice” has become passé; “Can you hear me now?” practically irrelevant.

Texting and IM’ing have actually affected how many people compose emails, so that what one communicates becomes spare, even truncated, cryptic, verging on the primitive. And the emoticons, which have become another code for roughly expressing or boldly dictating the tone in which a message is meant to be written and understood, seem to be no more than a half-hearted effort to make up for the failures of the little language that is left.


On Facebook, Friends, and Intimacy

2/13/2011

I’ve known Susan Verran for a couple of decades now. Actually we’re related: She’s my sister-in-law, my husband’s sister. So it was an odd thing to look at her Facebook Wall and read “Susan Verran and Cynthia Rettig are now friends.” Weren’t we friends last year? And the year before? And the year before that? Haven’t we shared family secrets, weddings, births, deaths for many years now? And all of a sudden I find it announced semi-publicly that we are “friends.” That’s how new technologies appropriate (or misappropriate) a common word and give it a new meaning. To “friend” someone on Facebook actually means to subscribe to their data and to let them look at your data, that is, the basic information and ongoing updates about what you are thinking and doing and where you are going that many Facebook users broadcast on a regular basis.

What does this meaning of “friend” amount to? One Washington Post executive, when he first met with Mark Zuckerberg, saw the fundamental insight that Facebook embodies. In late 2004, Chris Ma, a senior manager for investments and acquisitions at the Post, met with Zuckerberg and concluded that at bottom he was a psychologist. The Facebook founder saw that “kids have a deep-seated desire to have certain kinds of social interactions in college and what drives them is their extreme interest in their friends—what they’re doing, what they’re thinking, and where they’re going.” That is the kind of interest that Facebook serves and institutionalizes—this intense adolescent interest in what their peers are doing, who is friends with whom, who’s going to what party on Friday night. For the most part, it’s what used to be called “gossip.”

There’s a problem with encoding in software a set of behaviors favored by teenagers and early twenty-somethings who can become addictively enmeshed in the Facebook culture, visiting the site many times each day, checking up on their friends, adding in their own thoughts or reporting on their own activities, perhaps even with the odd compromising photo. The risk is one of getting stuck in the kind of behavior that most of us outgrow at some point as we aspire to more intimacy, deeper relationships, and personal growth that requires solitude as well as fellowship. And we end up with work and responsibilities that consume lots of time in our days. In short, we end up having lives. We may not end up having 738 friends and a high-tech social graph to show for it. Instead we may have a small circle of friends and family that means something. And we don’t have to “friend” anybody to have it.

