Last Sunday, I found myself sunk into my couch, settled in with a plate of chicken and rice for dinner, a drink in one hand and a remote control in the other. I had a blanket over my legs and my iPhone on the cushion next to me. I was ready to watch my favorite television shows on HBO. Gradually, however, a thought began to inch into my mind. Like a tick, this thought began to discomfit me. I wanted my laptop. I could see it in the other room. It beckoned me. How was I supposed to surf the Internet during my shows without it?
I observe situations like this almost everywhere I go: I notice it on the elevator when I grab for my iPhone to avoid interacting with the strangers around me; I see it at the gym when exercisers flip through television channels as they pump their legs on treadmills; I see it at dinner parties when my friends lower their eyelids like poker players to tap out texts during a lull in conversation. It appears that now, with cell phones in our pockets and Google algorithms primed to feed us answers to our questions, even the tiniest windows of time are being filled with media. And while this media can turn moments of downtime into productive blips, a long-standing debate over the pros and cons of constant connectivity is becoming more heated.
A study from the University of Michigan tested the capacities of two groups of people to concentrate and learn information. Before studying identical information, one group took a stroll in nature and the other group took a stroll in an urban setting. The nature group learned significantly better, leading the researchers to conclude that processing a lot of information, as is characteristic of urban environments, causes the brain to tire. Giving the brain time to recuperate, as it did in the natural environment, therefore enhances its capacity to learn.
Perhaps even more alarming is the assertion of the Harvard-educated scholar Nicholas Carr, who wrote an article called “Is Google Making Us Stupid?” In the article, Carr explains that the rapid functionality of the Internet—with pop-ups, links, email notifications, and a tendency toward shallow consumption of information—is actually rewiring the circuitry of our brains. Carr writes, “My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.”
Carr contends that our ever-malleable brains are constantly being remodeled as nodes are unplugged and new circuits are routed. Because our brains adapt so quickly, the old channels that once served deeper concentration on, say, books or creative work are eroding. This erosion, some scientists suggest, could contribute to attention deficit disorder or depression.
It’s not all bad, though. Camped on the other side of the debate are people like Wired magazine’s Clive Thompson. In his essay “The New Literacy,” Thompson argues that people are reading and writing far more than they did during the ’70s, ’80s, ’90s, and early 2000s, when television was the main medium. Thompson writes, “Before the Internet came along, most Americans never wrote anything, ever, that wasn’t a school assignment. Unless they got a job that required producing text…they’d leave school and virtually never construct a paragraph again.”
Thompson also points out that this new type of writing, geared toward reaching an audience, demands concise and communicative prose, a positive step away from the esoteric pontifications of writers past. In his assessment, this democratizes the language. Where people once used words like esoteric and pontification, they are now tending toward words like lofty and preachy—words the average reader understands. For Thompson, Twitter updates and text messages are a far cry from defilers of the English language. They are the promoters who bring regular people into it.
Taking the argument even further is the writer and NYU professor Clay Shirky, whose idea of “cognitive surplus” posits that the low cost and ease of sharing associated with the Internet have been a boon to creativity. In a TED presentation, Shirky explains that advances in technology have opened up vast amounts of free time for human beings. Until the Internet, this free time was eaten up by passive entertainment like television or video gaming. Now that the Internet is available, Shirky says, a shift has occurred: humans with free time are devoting it to creating.
“The very nature of these new technologies fosters social connection—creating, contributing, sharing,” Shirky said in an interview with Wired magazine. “This lets ordinary citizens, who’ve previously been locked out, pool their free time for activities they like and care about.”
Shirky says the free time we have for creating, or “cognitive surplus,” is poured into everything from the trivial site lolcats.com, where people post funny pictures of cats, to serious political efforts like Ushahidi.com, where people report and map instances of social suffering. Though some sites are less substantial than others, they still spark people’s creativity.
More importantly, when people collaborate on the Internet, new feats of humanity and knowledge become attainable. Shirky cites Wikipedia as an example. The articles, the edits, and the arguments behind the articles represent about 100 million hours of human labor. Wikipedia, the most extensive encyclopedia ever assembled, was built virtually for free through collective cognitive surplus.
So where does this leave us? Are we doomed to be shallow-thinking automatons ruled by Google, or is the Internet helping us dig deeper to reach new potentials of creativity and collective consciousness?
Most are inclined to think it’s a combination of both. These are the typical convulsions of new technology. Plato lamented the invention of the written word, saying it would supplant the need to exercise memory. Yet he couldn’t foresee the vast historical repository, the potential for teaching, and the aid to deeper contemplation that the written word would bring about.
This past summer, a group of neuroscientists from both sides of the argument went on a rafting trip down Glen Canyon, Utah, to study the effects of nature’s respite on the brain. At the end of a lengthy New York Times article, the scientists remained unconvinced either way. They felt relaxed after having been in nature, to be sure. But they also noted that the brain’s adaptability allows our circuits to become exercised and better able to cope with the barrage of information we are constantly immersed in during our urban lives.
In the end, they recommended the Golden Mean: if your thinking gets cluttered, go for a stroll without your earbuds and your iPod.
Maybe instead of getting up for my laptop, I should have given my brain a rest and eaten my Sunday night meal in the sanctity of my own thoughts.