Cold, Unsympathetic Technology

If you’re anything like me, since the moment you got word of the missing Malaysian airplane, you’ve been checking for updates every spare second you get. There were times when watching the news felt more like viewing an early episode of Lost; jets that big don’t just disappear. At least part of the mystery came to an end today when the Malaysian Prime Minister announced that authorities are “assuming beyond a reasonable doubt” that the plane went down somewhere in the Southern Indian Ocean. Upon reading this I wondered about the families of those who were on the plane, much like I have since first learning of the incident. Are they relieved to have this answer, even if it wasn’t the one they’d hoped for? In this case, is bad news really better than no news?

As I browsed articles for information (this one in particular), it didn’t take long before I came across something with immediate shock value. It’s right there in the first sentence: “The families of passengers aboard the missing Malaysia Airlines jet have been sent text messages telling them that the plane has been ‘lost.’”

Text message sent to the families of those on Malaysia Airlines flight 370

Text messages?! Seriously?! These people have been in agony for weeks wondering about the fate of their loved ones, and Malaysia Airlines decides to confirm their worst fears via text message? What kind of callous person could have possibly decided that this was a good idea? Yes, informing the families of 239 passengers is a daunting task, but when it’s a disaster of this magnitude, you’d think they would take the time to do it tactfully. According to one article, most of the family members were still located in Kuala Lumpur waiting for information, and yet they received this text message only minutes before the rest of the world got the news. I couldn’t help but think of some of our discussions in class about how technology has the potential to dehumanize us.

When a loved one dies, the news is typically delivered by word of mouth, whether from a doctor, another medical professional, or a close family member. The person delivering the news can offer sympathy and support and can tailor the message to the listener to make it as gentle as possible. Grief is one of the most basic human emotions, and we rely on sympathetic, responsive human contact to get us through. Witnesses said they heard screaming and crying coming from rooms where the families were gathered. At least one person is reported to have fainted upon hearing the news. Communicating information by SMS, while convenient, is not appropriate for a situation this sensitive. I understand that the business of Malaysia Airlines is transportation, not grief counseling, and that the company is likely much more concerned with bigger media matters, but as human beings they should have known better. It is clear from these families’ shock that they still held hope that the passengers would be found alive. Delivering what is possibly the worst news of a person’s life via text message violates basic social courtesy and, in my opinion, shows a real lack of consideration for others’ feelings. In times like this, people need to be united by the all-too-common experience of losing a loved one. A mass text message is a cold replacement for the warm, sympathetic hug each family member would ideally receive.

“Today, life has become long distance and automated, and it’s not going to work.”

As a bit of an afterthought, while I was still mulling over what I wanted to write, I came across this post on the “Portraits of Boston” Facebook page (for those who don’t know, it’s a photo blog much like Humans of New York). Hearing (or reading, rather) what this man had to say about how disconnected we are as humans solidified for me the problem with the text that was sent to the families of the Malaysia Airlines victims. It furthers our culture of disconnection from one another. We hear about how much technology has brought us together, but the connections it fosters are usually fairly shallow. As the man in the picture said, “people are yearning for deep human connection. When we have it, we identify with the person or people. We better connect to each other or we will become more and more dehumanized.”

Digital Revolution Boosting Literacy

For this post I wanted to focus on a more personal viewpoint on connecting reading with technology. I grew up relishing the concept of a book, from the feel of the pages to the smell of a freshly printed novel. I savored every chance I got to flip the page. Each turn of the page was an accomplishment in my eyes; I was making my way that much closer to the end of the novel. In my mind, a book always had a binding. There was no other way.

It was not until landing a job at my public library that I started to embrace the relationship between books and technology. I was quickly introduced to the most popular section of our library: the audiobook section. Watching patrons of all ages check out audiobooks sparked my curiosity. We even had books on tape to help introduce young children to literacy. People were getting more excited about reading because of this connection with technology.

My own bias kept me from realizing what advantages technology offered young children. Though it is pleasurable to hold a hard copy of a book and mark it with the questions and ideas that arise, technology has brought convenience to adults on the go and has lowered the age at which children begin comprehending and reciting the words on a page.

I wanted to discuss any reservations people might have had when first embracing the concept of using technology to read books. I still lack the companionship of a Kindle or Nook and do not see one in my near future, but this class has definitely warmed and opened my mind to different viewpoints on the advantages we have been blessed with by the new technology that swarms us.

This is a revolutionary concept as “technology has reinvigorated an art form instead of crushing it”.

Technology is sparking people’s interest in books, getting more people to start reading, and building excitement about what they are engaging with. We are starting to see that more people relish listening comprehension in addition to visual comprehension. What are your thoughts on audiobooks, Kindles, or iBooks? Were you accepting of the combination of technology and text? Did it spark a newfound admiration for reading? Did it hinder a previous one?

Off the Grid….(for a day)

I just got very excited about a New Yorker blog post I stumbled across. The writer, Casey N. Cep, is specifically addressing the “National Day of Unplugging,” in which participants spent a day without technology and posted photos of themselves holding signs about what they did with that free time. She dismisses the idea that this movement is truly meaningful, and cites different ways in which technology enhances our lives and why attempts to “escape” it are ultimately unsustainable.

@Pontifex #coolestpopeever

Citing statistics regarding relationships forged online or the Pope’s perspective on the validity of online identities, Cep argues that the concept of a “real” world versus a “virtual” one is an inaccurate binary. She believes that, in its essence, turning off technology is just an extension of the age-old journey to find a “core” and escape “the hustle and bustle of life.” Here’s one quote I found particularly provocative:

“But how quickly the digital age turned into the age of technological anxiety, with our beloved devices becoming something to fear, not enjoy. What sex was for the Puritans, technology has become for us. We’ve focussed our collective anxiety on digital excess, and reconnecting with the ‘real’ world around us represents one effort to control it.”

This guy needs a vacation….

I do understand this sense that technology is out of control and needs to be somehow regulated to curb a feeling of excess. I’ve felt it myself sometimes when I’m sitting with Facebook, Twitter, my email, a homework assignment, some syllabuses, etc., open in different windows and tabs on my laptop; my eyes start to blur, and I begin to daydream about how simple everything would be if only technology would just go away.

Cep offers an alternative solution in her article. Instead of an excess of technology, or rigid abstinence from it, we should consider ways to make technology work for us. How can technology function in ways that aren’t overwhelming and socially isolating? How can people be in front of screens and still be healthy and happy? She says, “[b]ut let’s not mistake such experiments in asceticism for a sustainable way of life. For most of us, the modern world is full of gadgets and electronics, and we’d do better to reflect on how we can live there than to pretend we can live elsewhere.” Taking a break from technology is never forever; people participating in the National Day of Unplugging have no intention of going off the grid. So what’s a sustainable, practical way technology can be improved as a permanent fixture of everyday life?

“I’m going off the grid, man.”

I recently had a conversation with my mom about this. She was saying that while watching TV with me, it feels like we’re doing something together. Like, I’m watching TV, she’s watching TV, we’re watching TV together, almost as if there are three people in a room all having a conversation. However, in the case of laptops or phones, it feels like I’m communicating with the screen while she’s trying to communicate with me. Three participants, but no well-rounded conversation. This may be because of the nature of internet-related activities versus television viewing (the former active, the latter passive), or it could be the ergonomic superiority of one over the other.

If so, how can a computer screen be enhanced or tweaked to become a better fixture in living rooms, dining rooms, etc.? How can it have better manners (excuse my cheesy personification) and not hog or interrupt social interactions? It’s not unreasonable for us to desire these improvements, but it is unreasonable to assume they can’t be made.

Of course, decisions about technology usage are subject to personal preference and need. But, specifically regarding the use of technology in literary studies, I think proactive, optimistic attempts at improvement are certainly more useful than rejection and denial. Keep in mind that if the Pope has 3.81 million followers on Twitter, there’s definitely no turning back from the digital age we live in!


Tim Berners-Lee (inventor of WWW) answers the Internet’s questions

Reddit (the self-proclaimed “front page of the Internet”) is a website where users can create virtual forums to discuss/ask/post/help/collaborate etc. regarding any topic under the sun. Do you like cute puppies? Visit the Aww subreddit. Curious as to why men have beards and women don’t? The geniuses over at Explainlikeimfive will give you a thorough and easy-to-understand answer.

One particularly fascinating subreddit is called “IAMA,” where noteworthy people begin a discussion thread entitled, “I am a [insert impressive thing here], ask me anything.” And by ‘noteworthy,’ I mean the threads range from “I am a 9/11 survivor” to “I am Colin Mochrie” to “I am a black teen adopted by an all-white family.” Anyone with a (free) Reddit account can post a question, and the original poster will respond to the interesting ones.

Today, Tim Berners-Lee, the inventor of the World Wide Web, posted on the IAMA subreddit to celebrate the 25th anniversary of his creation. The title reads, “I am Tim Berners-Lee. I invented the WWW 25 years ago and I am concerned and excited about its future. AMA.” 2,700 comments later, the discussion thread now holds not only interesting content from Berners-Lee himself, but insights on the WWW from all around the wide world.

If you have a few minutes, consider perusing this discussion thread. We’ve spoken about the WWW in class, and we’ve certainly spent a lot of time talking about the evolution and future of technology, so I thought this might be of interest.

I hope everyone found something entertaining to do with all this snow!

Be Careful (as an Academic) what you Say Online

Co-blogger Katie Allen recently discussed the implications of anonymous comments in public online conversation. But what about when a person’s name is instead associated with a conversation never intended to be public?

Yesterday, Peter Schmidt of The Chronicle of Higher Education covered the story of Rachel Slocum, a non-tenured professor at the University of Wisconsin at La Crosse.  Controversy [1] over Slocum’s supposedly partisan email to students, which was about how the government shutdown would prevent them from completing their assignments, eventually earned her a public rebuke from her campus’s chancellor. And all of this was over a message that was intended, at least, for a private audience: her class.

Here’s my (somewhat dry) response to the controversy:

https://twitter.com/gregjp48/status/443434877501718528

You can probably tell that I think professors should not have to hide their political proclivities from their students.  This is because, to quote Queens College Professor of English David Richter, “there is no politically neutral learning” or teaching anyway; even the position that politics have no place in higher education is itself political. [2] Moreover, I agree with Slocum that her chancellor’s actions set a dangerous precedent in the use of social media to deride: “Chancellor Gow’s email,” she writes in her own open letter to the campus, “is a signal to students that the university approves of their efforts to publicly shame professors with whom they disagree.”  This is not to say, of course, that criticism of others’ positions should not happen or did not occur before the advent of social media.  But, as Schmidt suggests, never before has that criticism been able to travel so quickly or have such a monumental impact on someone’s livelihood nearly overnight.

What are your thoughts? Do professors have a responsibility not to “impose” their “partisan” views in their classrooms, lest they be seen as having an agenda? And, in the context of digital literacy in academia, do faculty have a responsibility to set an example of tactful diplomacy in online communication?

Technology continues to be a means by which the classroom is no longer perceived as an “isolated space”–a development that is often for the better, as Gerald Graff argues in his 2008 address to the MLA about a problem in higher education that he has named “Courseocentrism.” [3] But what should academics keep in mind as they tweet or send emails to students and colleagues?


  1. ^ In the midst of this controversy was a message to the La Crosse campus chancellor Joe Gow from an aide to Stephen L. Nass, a Republican State Representative and Chairman of the Wisconsin Assembly’s Committee on Colleges and Universities; this message targeted Slocum’s email to her students, describing it as “clearly partisan in tone.” According to Schmidt, Nass “had a reputation for perennially looking for reasons to cut state spending on public colleges.” Speaking as a student at a public institution, that infuriates me. But that’s a whole ‘nother post.

  2. ^ Richter, David H., ed. Falling into Theory: Conflicting Views on Reading Literature. (Bedford/St. Martin’s, 2000): 22.

  3. ^ Graff, Gerald. “Presidential Address 2008: Courseocentrism.” PMLA 124.3 (2009): 728.

Technology Languages & the Generational “Leg Up”

All this TEI, XML, HTML, NFL, TMZ…what?! It’s got my head spinning! Being a kid born in the early 90s, I grew up learning a significant amount about technology as it was invented and introduced to the public. Kind of like when my Dad recollects waiting for the next Superman comic to be released, or how it was basically a national holiday when The Wizard of Oz was on TV in color once a year, many people my age express the same sentiments about their new video games or cell phones. My generation is readily equipped to understand the way these new technologies function.

I used to consider myself to have a leg up on most people in the technology department. My parents both worked at Kodak for the duration of my childhood and into my adolescence, and our household was always a well-oiled machine of the latest cameras, PCs, and other gadgets and knick-knacks that we got to test out. I grew up learning that the problem can’t necessarily be fixed with “esc” or “ctrl-alt-del” and how to properly troubleshoot, that backing up your system is important and should be done regularly, and that you can never get enough RAM. During my last three years of college, it’s become clear that all those things are basically common knowledge, and that my knowledge, compared to others’, was usually less… a lot less.

Because where does interaction with technology take place? Freakin’ everywhere! In my little world, desktop computing was the end-all be-all. Now there are smartphones, the endless slew of Apple products, eReaders, even touch-screen check-ins at the doctor’s office! My set of skills is horridly outdated when put up against all the other possible sets of skills applicable to all of these other things. It only goes to show… there is currently lots to learn, and there always will be more to learn!

As a student of language, I use the analogy “to speak” quite often. I crack the joke “I don’t speak car” to tell someone who tells me they drive a Chevy Equinox Jeep Mazda CR2750 Honda Cavalier that I have no idea what that means or looks like. So, given the above, do I “speak” technology? I wouldn’t say that I’m fluent, but I would hazard that I have a basic command. If I was “dropped there” and had to ask for directions to the nearest train station, I’d make it.

Ok, ok. So enough with the analogies. I ‘get’ technology. Most people my age ‘get’ technology. As far as ‘speaking’ it, though, analogies aside, we’ve learned that there is an actual language, of sorts, to this stuff. Hypertext Markup Language. Extensible Markup Language. Text encoding. These are all called languages, and they are the languages that our technology speaks! It’s fascinating, really.
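
To make that a little more concrete, here’s a tiny sketch of my own (nothing from class, and the tag names are just the kind of TEI/HTML-style conventions I’m assuming for illustration) showing what it looks like when an ordinary sentence gets “spoken” in markup. I’ve used Python here simply to build and print the encoded text:

import xml.etree.ElementTree as ET

# Wrap a sentence in markup the way an encoder might: a paragraph element
# containing a highlighted (italicized) book title.
p = ET.Element("p")
hi = ET.SubElement(p, "hi", rend="italic")   # <hi rend="italic"> is a common TEI convention
hi.text = "Walden"
hi.tail = " was first published in 1854."

print(ET.tostring(p, encoding="unicode"))
# prints: <p><hi rend="italic">Walden</hi> was first published in 1854.</p>

The sentence itself doesn’t change at all; the angle-bracketed tags are simply the grammar that the software reads.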

What’s most fascinating to me is that I’m capable of using all of this technology without having a good command of the actual technology languages. How’s that! It brings me back to the start of the circle, but with questions. Sure, technology comes as second nature to people my age. But these languages sure don’t… is that generational, or will it always be the case? Are there kids in elementary school who are being educated in and on technology, being taught HTML, etc.? The technology languages, while they feel foreign to me… are they as second nature to the next generation as operating a smartphone is to me?

Anonymous Comments Under Attack

The Beginning of the End of Online Commenting?

In a recent class discussion, we touched on the topics of censorship, relevancy, equality, and anonymity when thinking about who should have a voice in certain online situations and what they should be allowed to share. USA Today published an article earlier this week called “Online commenting: A right to remain anonymous?“, which addresses some of these issues. The piece talks about how Internet culture has been forced to change in light of the way users are behaving; in the second half of 2013, The Huffington Post opted to ban anonymous comments from its site, and Popular Science surprisingly stopped allowing any form of commenting whatsoever. According to the USA Today article, this shift was in an attempt to “breathe civility back into what many see as the Wild West of the Web.”

But Does Banning or Censoring Comments Solve the Problem?

The short answer to this question is, “not really.” The nature of the Web makes it possible to share and comment on just about anything, whether or not owners, writers, and publishers like it. Much as digital versions of literature allow texts to become fluid, enabling two-way “communication,” readers who are eager to join the conversation find commenting to be a useful vehicle. Sites such as Facebook, Twitter, Reddit, and many others allow users to share their opinions on content independent of its original source, and if the commenter is using a screen name that does not connect to their identity, then it’s still relatively anonymous. This post on Reddit discusses why it is unsafe to use one’s actual name online, responding to one user’s question, “When commenting online, why don’t you use your real name?” Though it is clear some people are concerned about Web safety for purposes like identity theft, to me it seems more likely to be an issue of accountability; when a user leaves a comment anonymously, there is a sort of pseudo-invincibility that occurs, because the user knows that the consequences of whatever they’ve said are almost sure to be limited. So perhaps it is still an issue of safety, but a different kind of safety that protestors, trolls, cyberbullies, and just-plain-rude commenters are worried about.

Pros and Cons of Anonymity and the Effect of Removing Comments

Interestingly, around the same time last year when Popular Science and The Huffington Post so controversially changed their commenting policies, an article came out in The New Yorker called “The Psychology of Online Comments.” This piece deals with “cyberpsychology,” and makes reference to a study done in 2004 in which one researcher coins the term “Online Disinhibition Effect”: “The theory is that the moment you shed your identity the usual constraints on your behavior go, too,” says the New Yorker article. (Four years later, in 2008, another study came out entitled Self-disclosure on the Internet: the effects of anonymity of the self and the other. Admittedly, I haven’t read either of these studies closely, but the fact that they [and probably countless others] exist means that researchers are identifying a measurable phenomenon worth looking into and talking about, because as we have discussed in class, technology shapes people, including their thoughts and behaviors.) The article in The New Yorker goes on to reference a study in which this was found: “Anonymity made a perceptible difference: a full fifty-three per cent of anonymous commenters were uncivil, as opposed to twenty-nine per cent of registered, non-anonymous commenters. Anonymity, Santana concluded, encouraged incivility.”

Despite the notable ramifications of anonymity, the article also mentions some positive effects of being anonymous online, such as increased participation, a greater sense of community identity, and boosts in creative thinking and problem solving. Though face-to-face communication has been found to produce greater “satisfaction,” anonymous online communication allows individuals to take greater risks. Additionally, The New Yorker shows that anonymous comments tend to be taken less seriously, and therefore rarely impact the course of a conversation in terms of changing someone’s initial perceptions. This is probably because it is more difficult to affirm the credibility of an anonymous user.

In many of our discussions about Walden, we identify contemporary issues as being “old wine in new bottles,” because we recognize that many common problems have always existed, but simply look different because of how people and technology have evolved; this situation is no different. The following quote, describing one study, comes from the same New Yorker article I’ve been discussing: “The authors found that the nastier the comments, the more polarized readers became about the contents of the article, a phenomenon they dubbed the ‘nasty effect.’ But the nasty effect isn’t new, or unique to the Internet. Psychologists have long worried about the difference between face-to-face communication and more removed ways of talking – the letter, the telegraph, the phone. Without the traditional trappings of personal communication, like non-verbal cues, context, and tone, comments can become overly impersonal and cold.”

When thinking about whether or not banning comments will truly solve the problem, an interesting note to keep in mind is that in doing so, the sense of “shared reality” is lessened, and therefore the interest surrounding that particular content decreases as well. The New Yorker article states, “Take away comments entirely, and you take away some of that shared reality, which is why we often want to share or comment in the first place. We want to believe that others will read and react to our ideas. What the University of Wisconsin-Madison study may ultimately show isn’t the negative power of a comment in itself but, rather, the cumulative effect of a lot of positivity or negativity in one place, a conclusion that is far less revolutionary.” It seems that this sort of “mob mentality” is nothing new; it merely looks different because it’s on a screen. But is it any more or less acceptable this way? Especially relevant to consider is the fact that the members of this cyber mob very well might be hidden behind the shield of anonymity.

But What about My Rights? What about Free Speech?

Coming back now to the USA Today article from this week, one significant debate arising out of the comment bans is whether or not doing so infringes on civil liberties. The article quotes senior staff attorney Matt Zimmerman as saying, “I think (anonymity is) an important legal right that needs to be protected.” Despite this, “Zimmerman acknowledges that there is no legal issue with sites deciding what kind of commenting culture they want to cultivate, and that opportunities for people to contribute anonymously are abundant.” So the right to decide what (if any) types of comments are allowed on a site legally belongs to the people who run the site, a probably obvious point. Of course this opens the door to issues of control and censorship, which is another topic altogether. But what about free speech?

In terms of free speech, limitations exist no matter what the context, and the specific problem with being anonymous is accountability. The Huffington Post defended its decision to remove anonymous comments by saying, “Freedom of expression is given to people who stand up for what they’re saying and who are not hiding behind anonymity.” The Web allows users to wear masks in a way non-digital spaces do not, and if someone is extremely tech-savvy, they can get away with saying almost anything without leaving a trace. For a simple example, digital footprints become muddied when someone uses a public computer in a busy place without any surveillance equipment – and that example doesn’t even begin to scratch the surface of the other methods of covering up one’s identity online. John Wooden once said, “The true test of a man’s character is what he does when no one is watching.” What about when people are watching, but don’t know who the man is? Can the same be said of him in that situation? If so, does a user still have the right to free speech without the ability to be identified or held accountable for his or her actions in cyberspace? I assume legislation will have to become much more specific about this in the future, especially if issues of diffused responsibility get too out of hand. Or do you think they already are?

Reading Without Scanning Lines?

We’ve talked a lot about how digital technology affects our interpretation of texts.  We’ve also talked about how it alters the way in which we read texts by changing the medium through which those texts are delivered or on which they are accessed.

But what about changing the actual makeup of the text itself?  Check out this new reading technology called Spritz.  The technology’s developers aim to improve reading speed by making it completely unnecessary for the reader to move his or her eyes from line to line, or even from word to word; each word appears, instead, in a “redicle” (a play on words between ‘red’ and ‘reticle’) in the reader’s field of vision.  In addition, the app centers each word on what Spritz calls its “Optimal Recognition Point” (ORP), which Spritz claims cuts down on the time it takes for the brain to decipher the word.
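
For anyone curious what that one-word-at-a-time presentation actually feels like, here’s a rough sketch of my own in Python. To be clear, this is not Spritz’s real algorithm (the company doesn’t publish exactly how it places the ORP); I’m just assuming a pivot letter roughly a third of the way into each word and a fixed words-per-minute rate, to illustrate the idea:

import time

def spritz_like(text, wpm=250, pivot_col=15):
    """Show one word at a time, aligned on a rough pivot letter."""
    delay = 60.0 / wpm                           # seconds each word stays on screen
    for word in text.split():
        pivot = max(0, (len(word) - 1) // 3)     # crude stand-in for the ORP
        print(" " * (pivot_col - pivot) + word)  # keep the pivot letter in one column
        time.sleep(delay)

spritz_like("Each word appears by itself so your eyes never have to scan a line.")

Run it in a terminal and the pivot letters stack up in a single column; the real app flashes each word in the same spot and highlights that letter in red.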

Spending thirty seconds or so with the demo is certainly an… eye-opening experience (sorry for that one).  This Elite Daily article encapsulates the technology’s efficiency, and the implications of that efficiency, with its headline: “This Insane App Will Allow you to Read Novels in Under 90 Minutes.”  Sounds like a dream come true for anyone taking multiple 300- or 400-level Geneseo English courses.  It’s easy to see how Spritz could prevent fatigue and distraction while reading, since the technology makes reading less physical work (not to mention, it’s hard for a reader to be distracted by a flashing ad in his or her periphery when the flashing thing is actually the text itself).  Anticipating fears that reading this quickly would prevent readers from actually getting anything out of the text they’re being presented with, Spritz also claims that its technology improves reading comprehension.  The company argues that the time a reader would usually spend scanning a text with his or her eyes is instead spent processing the content that it conveys.

But what is lost by not having words arranged on a page, be that a paper page or a digital one?  As English majors, we are aware that the meaning of a text is shaped as much by its form as by its content.  Under the Spritz system of reading, poems would lose line breaks and any enjambment.  There seems to be no convenient way, either, to return to previous lines, as such enjambment often impels us to do.  And forget about concrete poetry or any other work that relies heavily on graphical codes.  Assuming that the technology is intended for longer pieces of prose that demand (arguably) less attention to sentence-level form, this may not be an issue.  I also wonder, however, how having a constant, electronically set reading pace will affect the reader’s reception of meter in poetry and prose alike.

Thoughts?

It’s Dangerous to go Alone…Video Games as Narrative

I’ve been waiting a while to blog about a topic that I felt could come up a little organically, and while I will admit that I perhaps waited a little too long to start, I thought this story posed a perfect question for this class. Recently, I was having a conversation with my friend about a game that had come out a little while ago, called BioShock Infinite. As we were discussing the game, another friend of ours came over to listen, and then asked us something that inspired a really fascinating discussion: “Haven’t you guys outgrown games yet?”

The friend I was talking to (for privacy I’ll call him Mark) began to argue that games were not a child’s toy anymore, that the days of Super Mario and Donkey Kong being the major games were over. He discussed how games have really made the shift into actual creative storytelling, stories that, for the first time, people can actively write themselves. Games no longer simply tell you to save the world; they now offer you the choice to let it burn or rule it as well. You aren’t simply running through panning two-dimensional backgrounds, completing little missions as they pop up. You’re creating relationships between characters, feeling for the plights of the different citizens who claim to need your help. “In a way,” he said, “for a lot of people, video games are becoming the new way to read a book, except this time they really are the hero.”

Granted, I didn’t exactly agree with games taking the place of books, but games have certainly earned their merit in telling stories, inciting emotions, and teaching the moral lessons that myths, legends, and stories put to paper have taught for hundreds of years. Video games have been taking these lessons one step further, though, because now, instead of just watching the hero make a choice for his own reasons and learning from that why it was the right or wrong choice, games force the player-reader to take agency in the narrative and make the choice themselves, suffering whatever consequences may come.

Now that we, in this class, are discovering how to take reading into the digital age, I thought it would be pertinent to see how storytelling has already taken that step forward. Sure, there are a lot of military games out there, but there are also games about a father’s redemption in the eyes of his child, a soldier who suffers through his own Heart of Darkness in Dubai, two brothers who would travel the world to be with each other, and a Journey, taken in complete silence, yet with more communication and understanding than most games convey. Overall, narrative has already penetrated the digital world, and now it is our job to save the texts that helped inspire these new stories and to help inspire a conversation that pushes us into the future.

A Smattering of Stupid (?) Studies

We can’t always spend a whole class on a single book or author, can we? As an English literature student, obviously I read outside of class (“for fun,” as some would put it) and surf for literature-related material online. Though I think these “supplemental study tools” can only be judged on an individual basis, here are some I think are interesting, weird, scarily cultish… or just plain stupid.

So adorbs

For example, there is the ever-adorable “Writers and Kitties” tumblr, which takes a step back from literature itself and merely appeals to our affection for the authors. Apparently Mark Twain liked to relax while playing pool with a kitty in the corner pocket.

However, if one chooses to follow the trail of Twain’s digital afterlife, it derails quite quickly.

Kitties > famous authors ?…

Here is a 1940s ad from Royal Crown Cola featuring not just Twain, but containing (and focusing on!) his pool-playing kitten. Somewhat reassuring – it seems the digital age’s “nothing is sacred” mentality is nothing new (just as we have discussed that new technology scares are nothing new – paper?! OMG.)

Pure sex appeal.

However, if we continue down the electric pathway of the ghost of Twain, we rapidly descend into madness. The Twitter handle Shirtless Mark Twain takes a simple picture of said phenomenon (the explanation lost in time, it seems – I can’t find the reason for it) and creates a new life for the author: over a hundred tweets capitalizing on the photograph. He usually signs every third tweet or so with a call for “shirts off,” as well as often dissing the physique of fellow authors. Fortunately(?), the account has only a paltry 45 followers. Please do not add to the following – I think serious students may skip this one. However, there may be something here: a short, comic appeal focused on the author himself (or herself).

Moving on to more positive examples, there is the Henry David Thoreau twitter account, which seems to be mostly tweets of T quotations. Here’s one that demonstrates T’s quotability:

“As if you could kill time without injuring eternity.”

It’s interesting how powerful a quotation seems to become when it’s set off by itself like this. Having read Walden twice, I never specifically noticed this particular line before (nor, shamefully(?), can I even remember it at all). Are things like Twitter literature accounts helping literature and students alike? I can’t help but notice the similarities between something like this and our Social Text edition of Walden. This particular tweet had a fine 39 retweets, and it was favorited 41 times. The ThoreauPage has a staggering 25.1k followers. However, missing from many of the tweeted quotations (despite the exposure) was discussion, such as can be found in a more academic setting like the Reader’s Walden. And even if there were a long discussion from different accounts, how much can be conveyed in 140 characters for each individual comment?

As much as one can speculate on the meaning of this quotation by itself, is there missing context that can only be found embedded within its proper passage? Is this additional exposure to literature – Twitter pages, etc. – really beneficial, or even detrimental, if the content is all flash and no substance? Will non-literature students ever willingly follow the forbidding Thoreau, author of the much-hated Walden (at least, that seems to be the opinion of most people in Geneseo whenever I mention the book to others)? (My friend has scathingly accused Geneseo of having a “Thoreau fetish” – we literature students seem to always be on the defensive about our field of academic criticism [see my other blog post on Annotations]. Has anyone else been on the receiving end of a quick explosion of anger toward Thoreau in Geneseo?)

I’m not aware of how one could gauge the academic usefulness of things like the ThoreauPage on Twitter (forget ShirtlessTwain), it probably being too complex to control for in a scientific poll, but here’s something interesting that could perhaps be related, if other studies could be found or done: “Texting can help improve your kid’s writing skills.” This article cites British research that positively correlates “textisms” with students’ writing skills. It seems that texting is itself language and writing practice (whereas before texting, the students might not have been writing at all) – even those “textisms” are beneficial in a way, and better yet, students fortunately know when not to use them in schoolwork.

Hawt? Nawt.

Maybe those Thoreau quotations, even taken out of context (and only the pithy, short statements), are getting students to read who otherwise would not read at all? Perhaps if Thoreau had posed for a shirtless daguerreotype, and if this comic aspect of Thoreau’s personal life were combined with serious quotations, the appeal of ThoreauPage would be even greater and more influential (unfortunately, I think Thoreau’s neckbeard disqualifies him from the label of “pure sex appeal,” as with our friend Mark Twain). But still, putting T on Twitter must be more useful than reserving him for college courses… right?

Here’s a link to the 100 Best Opening Lines from Novels. Here’s a link to the complementary 100 Best Closing Lines, and here’s an entire tumblr page dedicated to that all-spoiling topic: the final sentence. Is digital media focusing on the easy quotables of literature? Or is that a false stereotype?

Speaking of pithiness, did you know the Bible has a SparkNotes page? See here for an interesting survey on who knows (aka reads) the Bible best among various religious groups. Religious (and Christian) or not, it would be a fair opinion that the Bible is the most widely read book in America and worldwide (especially if you’re one who thinks it ought to be read). Here’s another article discussing just how many read the Bible versus how many think it should be read. Hmm… will the biblical god be happy with those who have only SparkNotes’d him? After all, 1 Peter 3:15 makes a call for Christians to know their book and faith, and yet it seems they need a little help. The Bible, after all, is first and foremost a piece of literature with a profound effect on history, and huge numbers of classical works allude to it.

SparkNotes, of course, is the original student-slacking site, and the most well-known. But as much as teachers [seem to] hate it (input, Dr. Schacht?), the general consensus seems to be that it helps students get a grasp on basic plot and theme while reading, providing a shallow but supportive scaffolding for literary endeavors. However, the question, as it has been throughout this post, is whether this tool substitutes for real thought and criticism or adds to its richness, and whether, in its very shallowness, it allows hesitant students to enter otherwise difficult works. I think, as long as students actually use these as the supplemental tools they are, there can’t really be any net harm, only tiny, or even great, gains. Should we students of literature loosen up a little, in a ShirtlessTwain sort of way, in order to make ourselves more accessible and appealing? Is it time for Thoreauvians to call “shirts off, bro!” and lighten the mood a little?

Pessimism floods us again – here is Twitterature, “The World’s Greatest Books in Twenty Tweets or Less.” It is poorly reviewed on Amazon; it seems that good “scaffolding” for works themselves might be beneficial, but oftentimes, as here, trying to be cool and compact leads one to laziness. The Guardian’s review quips, “The classics are so last century.” Ultimately, I think short, compact “guides” to literature may be helpful in (basic) understanding and in motivating one to push past dense, wordy slogs, but we might be careful to gently avoid the bad ones.

Finally, here is Elliott Holt’s short detective story told entirely on Twitter itself. Twitterfiction tweets out 140-character stories. Good quality or not – most authors, when giving advice, say simply to start writing, no matter how bad or how little. Practice for English students is analogous. How many literature-related tweets or Twitterfictions will it take to build up the skill for a 10-page paper?…

I was going to end the post with a few cool, literature-related Twitter accounts, but rather than quickly finding some (having just made my Twitter account for class and followed a few pages) without checking them over for quality, I’ll ask: does anyone know of any useful or just plain interesting Twitter accounts for literature lovers?