It’s Written All Over Your Screen

So this isn’t my most thoughtful or deep post, but I’d like to backtrack a moment to possibly the number one concern I hear from people who are wary about digital texts: reading off of a screen. And it really can be uncomfortable! We’ve all experienced tired eyes and headaches after staring at our laptops too long, and the glare from the sun that forces you inside if you need to use your computer. According to one article, “the issue has become so prevalent in today’s work environment that the condition has been officially labelled by the American Optometric Association as ‘Computer Vision Syndrome.'”

I would also argue that there’s a more philosophical relevance to this concern: the screen is the face of the computer. The field of interaction between user and machine is primarily located on the screen (the tactile experience of typing is of course also relevant, though perhaps less complained about). So, if this interaction is to be comfortable and integrated into everyday life, the screen needs to be user-friendly.

Let’s see what’s being done to improve this experience, focusing on laptop computers (of course, e-readers such as the Nook and Kindle have done more on this front, but those aren’t the devices people are using for hours on end).

Laptops used to be black and white, prone to blurry screens and ghost images. But by about 1991, color LCD screens came into use, which improved visual quality as well as cost for consumers. Nowadays, of course, laptop screens are at a whole new level, with the emphasis being on resolution and the implementation of touch screens. Apple’s MacBook Pro has a new “Retina display” that’s supposed to have incredibly high resolution (the 15″ MacBook Pro with Retina display is literally advertised as “eye-popping”). However, there seems to be little to no push for these screens to be easy on the eyes. Improvements generally focus on bright color and high definition, but that “brighter and bolder” sort of thinking would intuitively seem to me to make things worse.

The MacBook Pro Retina display…does seem to reduce glare!

In fact, there are articles dotting the web about how to avoid eye strain yourself. These range from buying computer glasses to sitting up straighter to taking breaks. Clearly this is a common concern. However, the physical screens themselves are not being made more ergonomic and healthy by engineers. I could only find one company on a mission to reduce eye fatigue through engineering better screens. It is concerned with using direct current, rather than the rapid on-and-off switching (pulse-width modulation) most backlights use for dimming, to reduce perceived flicker. However, this is not a mainstream laptop producer truly implementing revolutionary technology into its products.

So my mom WAS right when she said I should sit up straight….

Does anyone else know more about this, or have special tips/screen appliances they use to reduce eye-strain?

The Fairness of Computer-Based Testing

As many of you have probably experienced, there is a push right now for computer-based testing. Computer-based tests are becoming the norm for standardized tests like the GRE and NYSTCE. As with many things, there are pros and cons to these exams. But the question is: do the pros outweigh the cons? Or do the cons make these tests unfair?

Some of the benefits of computer-based testing include timely feedback, more efficient monitoring and tracking of students’ results, a reduced number of physical resources (as they are replaced by computers), easier storage of results records, and electronic analysis of data that can be used in spreadsheets and statistical packages. Some drawbacks include the costly and time-consuming implementation of the exams, the IT skills required of the assessors and staff administering them, the close monitoring of the software needed because it could fail or malfunction during an exam, the absence of an instructor, the difficulty of preventing cheating, and computer anxiety.

This last disadvantage is a big one for students taking the exams, and it raises the question of fairness. I am not saying that paper exams are entirely fair, as some people are better test takers than others to begin with. However, computer-based tests may give an advantage to students who are used to computers and have stronger computer skills. For example, a student who has not had much experience typing on a keyboard may be at a disadvantage when taking a computer-based test. Or a student who struggles when looking at a screen for too long may not be able to complete a long computer-based examination.

I am interested to see how far computer-based testing will go. Will students soon be taking the SAT as a computer-based test? And will it go so far that using paper and pencil on exams becomes a thing of the past, and all tests in college and high school are taken through computer software? I do see the advantages of computer-based testing, but I think it would be beneficial if students had a choice, especially right now, for students who have not necessarily grown up taking all their exams and assessments with a keyboard and a computer screen in front of them.

http://media.johnwiley.com.au/product_data/excerpt/24/04708619/0470861924.pdf

You Can Lead a Horse to Water

I do a lot of commuting: from Churchville to Geneseo, Churchville to Victor, and Churchville to Greece, all on a regular basis. I recently realized that rather than suffering through Eminem & Rihanna’s “Monster” for the 700th time (don’t get me wrong, I liked this song the first 699 times), a better use of my time in the car is to listen to TED Talks.

Just today, I found two talks that apply to our course in TONS of different ways: Jennifer Golbeck’s “The curly fry conundrum: Why social media ‘likes’ say more than you might think,” and Lawrence Lessig’s (yep, he should sound familiar, and so should some of this video!) “Laws that choke creativity.”


I could write a post about either of these really interesting and relevant-to-340 videos, but I won’t. Instead, I will leave them here for others to “stumble upon,” particularly others who maybe haven’t found anything inspiring to write about in a while. The end of the semester is in sight, and many of my fellow bloggers have only posted once, or worse, not at all; hopefully these videos will help.

Are online skimming habits making us worse serious readers?

It seems appropriate, given the previous post about the ways in which technology helps or hinders our communication, to discuss how these new tools have also impacted the way we interpret the information we’re given. It’s nice to think that we as English majors can transition seamlessly between old and new media outlets, appreciating the feeling (and let’s not forget the smell!) of an actual tangible book while still keeping up to date with the new helpful technologies available to us. But the truth is that getting used to reading in the newer and more common formats, such as on a computer screen or smartphone, really can — and does — influence how we read “real books.”

In an article from Sunday’s Washington Post, Michael S. Rosenwald points out that our reading behavior with more serious texts has come to mimic our online, internet-surfing reading habits. One neuroscientist described this reading as “superficial” and said she worries that it is affecting us when we have to read with more in-depth processing. I’ve certainly noticed this in my own reading habits, and find it endlessly frustrating.

On the internet, we skim. We look for important words that are of interest to us and if we can’t find them, we click on to the next page. I know I’m not alone in this. In our class discussion today someone from the group working on the Walter Harding website talked about including things like a letter from Albert Einstein to give the audience a reason to be interested and stay focused, since the eye is so easily diverted on the internet. It’s true! If we don’t immediately find something that piques our interest, we move on.

When I have important readings for class that are online, I have to close all other tabs and even use the Readability add-on that Dr. Schacht showed us earlier in the semester just to keep myself from getting distracted. It’s like my brain automatically assumes that if I’m reading on a computer monitor it must not be important, so my eyes start looking for “clickables.” To quote from Rosenwald’s article, “The brain is the innocent bystander in this new world. It just reflects how we live.” Clearly our leisurely habits are sneaking into our serious work as well.

I encourage you to think about how you’ve experienced this just over the course of your time reading this blog post. You probably looked at the pictures, clicked the links to other websites (and maybe even other links on those sites), went to another tab to answer a Facebook message, and did countless other things. I did all of that while writing the post too! Most of us are guilty of this habit, and that’s just what it is: a habit. We’re like little squirrels running around on the internet. Our focus is on one page until something more interesting (and not even necessarily better) comes along, at which point we leave our first focus entirely, sometimes struggling to remember how we got there in the first place. On one hand, it’s great that we have so much information readily available to us, and my guess is that there has to be a study out there somewhere regarding benefits of technology on our multitasking abilities! But when we’re so used to being bombarded with all of this, taking the time to slow down and isolate ourselves for a task without so many distractions can be a challenge.

Does Technology Help Us Communicate Better?

 “Are all these improvements in communication really helping us communicate?”

(Sex and the City, Season 4, Episode 6)

It was a typical afternoon. I was home for spring break a few weeks ago when I decided to unceremoniously plop myself down on my couch and flip through the TV channels. As I was lazily deciding which show I should binge on, I happened to stop on Sex and the City right when the main protagonist, Carrie, said the quote above.

Carrie Bradshaw from Sex and the City

Right away (or right after the episode was over…) I knew I had to do a little research. That episode aired in 2001, thirteen years ago. When Carrie said that line, she was debating whether or not to get an email address. She thought that was “too advanced” for her to handle.

What about today, when we can text one another, FaceTime each other, Skype, Facebook chat, Tumblr, and so much more? If someone once thought that email was too “high-tech,” then what about right now? Does technology truly help us communicate any better?

In his book Stop Talking, Start Communicating, Geoffrey Tumlin says, “A tech-centered view of communication encourages us to expect too much from our devices and too little from each other.” Yes, with all of our devices we can communicate more easily and quickly. That’s obvious. But is it any better?

In a great CNN article, “We never talk any more: The problem with text messaging,” Jeffrey Kluger states that “[t]he telephone call is a dying institution. The number of text messages sent monthly in the U.S. exploded from 14 billion in 2000 to 188 billion in 2010.” Wherever one goes, people are always looking down at their phones instead of up. They are immersed in all of the phone’s aspects (mostly texting), and seeing a person actually talking on one is a rare sight nowadays.

We can easily read a message, a text, an email, but we don’t understand the emotion behind it. One can sincerely believe that a message sounds mean, while the author never intended that at all. Without always understanding a person’s tone, how then do we know what they actually are saying?

Geneseo anyone? JUST KIDDING/I do it too…

An easy counterargument to that could be reading a book. How is one supposed to know what the author’s tone is without asking him or her? Yet that is usually a simple thing to figure out. We as English majors do that for everything and anything we read. However, that could also be because a book is longer than a text message and has phrases such as “he said with a vengeance” throughout. I personally don’t know many people who narrate their own text messages.

But one cannot overlook the ways it truly has helped us. In a Huffington Post article, Joel Gagne says, “(School) Districts benefit from embracing, rather than shying away from, technology. Districts can utilize various different technological platforms to engage their community and seek their input. By ensuring there are provocative topics and the need of feedback from the community it will ensure things are interesting. Readers like to know you are really interested in what their opinion is. Using technology can help bring your school community together.” Technology can also help loved ones see pictures from a trip via Facebook rather than having to wait months to meet up in person. It can help people living across the globe talk every single day without much cost. It can spread ideas so rapidly that in the blink of an eye a revolution of sorts is happening. Years ago this was never possible. And yet, today, it is.

Awwww

While I myself believe that all of our “improvements” aren’t making us communicate a whole lot better, that doesn’t mean I don’t find it easier. Instead of calling my mom to tell her something, I text her. If I see a new book out that I think my dad would enjoy, I email him, instead of calling him. It is easier, and it is faster, and I use my cellphone and laptop Every. Single. Day.

And, for better or for worse, I don’t plan on stopping.

WWMS: What Would Marx Say about Digital Commons

Perhaps this is just because I’m currently reading The Manifesto in Humanities, but with all this talk about the consequences for private property in the digital age, I was wondering what Marx would have to say about all of this. The answer I arrive at is vague and pretty unhelpful (like Marx himself on the whole), but I’ll get there in a minute.

Before all this talk of communism, John Locke wrote about the implications of property ownership as early as the 17th century. He writes in his Second Treatise of Government (based on my limited Humn knowledge) that property was originally defined by what you could hunt and gather for yourself without wasting anything. This created a level of equality among people, because amassing enormous wealth would be physically taxing, and people stopped collecting things when their “natural” needs were met. Things like berries and meat spoil quickly, so it would make no sense to hoard them. The appearance of money and its triumph over the barter system changed the way people owned things. Now people could own things unequally and theoretically amass unlimited wealth that lasts and accumulates. This sounds like the capitalist system we have today.

John “Locke”: Because he was exercising his natural right to liberty…

Marx, of course, credits any social development throughout history to economics: essentially, the distribution of property. Engels is really the one who describes the property distribution between the upper-middle bourgeois class and the working proletariat. The workers are stuck in an endless cycle of poverty. Marx writes in his Manifesto that the typical “synthesis” that happens when “haves” and “have-nots” clash will not be possible when the bourgeoisie and proletariat of capitalism inevitably meet their end. Capitalism will finish, and some sort of unimaginable revolution will happen. Communism, or the equal distribution of wealth, is the best way to stop putting band-aids on capitalism and urge on this “revolution.”

Blamed Capitalism before it was mainstream….

So, WWMS about the question of digital commons, or places online such as Digital Thoreau in which anyone with internet access can “own” something? How can anyone truly own something/the rights to something if digital sites are open-access?

I think the important thing to remember is that the nature of property distribution has changed as texts, ideas, images, etc. have moved online. Nowadays, an artist can’t be assured for one second that she’ll receive money for everything she has published; someone somewhere will undoubtedly have found a way to copy-paste or download or screenshot her work. There’s a block in the money-centered, capitalistic flow of trade that people such as Scott Turow, Paul Aiken, and James Shapiro would argue discourages creativity and production.

BUT

This is where things get eerie, because Marx predicts the destruction of the means of production as a way to combat the overproduction of final-stage capitalism. The sheer volume of things produced on the web makes it a perfect example of capitalism in its final stages. There’s overproduction, then unwillingness or inability to pay on the part of consumers, and then a disincentive for producers to continue… producing.

Communal spaces on the web of course sound kind of communistic in that they equalize people as consumers. However, they’re different from the material property and situation that Marx and Engels were so sure determine everything in the world. In fact, the digital commons seems to me more similar to the berries and meat Locke spoke of. Web content doesn’t really have an expiration date, but there’s only so much you can download and read and listen to on a computer or in a day. And the amount that you download on your computer doesn’t determine your wealth or material situation (unlike money). This is arbitrary property that falls not so much under the supply-and-demand chain of capitalism as under the take-what-you-need-but-it-will-take-time-and-effort model of the hunter-gatherer system.

Of course where it differs is that people have to produce online content, whereas deer produce venison for us (thanks, deer). So we still have the problem of production. But, Marx would definitely say that that anxiety is the capitalist in all of us which can’t envision any other way of viewing the world except as a giant factory of creation. However, that still doesn’t help us very much in finding pragmatic ways to encourage production in a communal world without guaranteed payback for your time and effort.

So I think Marx would look at the digital age and the way property has changed in nature and in distribution, shake his head, think of the end of capitalism, smile, and say, “I told you this was coming.”

Parenti, Lessig, and cute animals

Reading Lawrence Lessig’s “Free Culture” reminds me of a book I had to read for a high school global history class: “The Assassination of Julius Caesar: A People’s History of Ancient Rome” by Michael Parenti.

Parenti, a Yale grad and “cultural critic” (Wikipedia’s words), argues in his book that history has really done a number on poor Caesar, who was not, in fact, assassinated because he was abusing power and ignoring the needs of his constituents. A few chapters are eloquent laundry lists of all the great things Caesar did for Rome, like creating the Julian calendar (a variation of which we still use today) and working to relieve poverty among the very plebs he was accused of mistreating; other chapters debunk common misconceptions ‘traditional history’ has fed us. A 2004 book review from Parenti’s website summarizes his thesis: “In The Assassination of Julius Caesar, the distinguished author Michael Parenti subjects these assertions of ‘gentlemen historians’ to a bracing critique, and presents us with a compelling story of popular resistance against entrenched power and wealth. Parenti shows that Caesar was only the last in a line of reformers, dating back across the better part of a century, who were murdered by opulent conservatives.”

Since this post does not lend itself to images, treat yourself to some adorable animal pictures.

His name is Lionel and she rescued him from a slaughterhouse when he was a calf. True story.

I disliked the book from the first few pages because of Parenti’s smug attitude. He seems to think that he is pulling the wool off our eyes and showing us a hidden truth, when in reality, he is simply proposing a theory contrary to the ones in our boilerplate high school textbooks. Responsible readers will identify this bias and take his argument with a grain of salt, but I can easily see a less careful reader thinking that he now understands Ancient Rome better than his friends because he knows ‘the truth.’ The textbooks’ version of why Caesar was assassinated and Parenti’s are both rooted in facts; it’s just that each one gussies up its argument in a different way, puts those facts in a different order, foregrounds different information, and flat-out omits what doesn’t suit the thesis.

I promise, I’m circling back to Lessig now. In reading the introduction and first few chapters of “Free Culture,” I was getting strong Parenti-vibes. Just like Parenti’s, Lessig’s argument is opposed to the one that contemporary culture furnishes us with. Most people believe it’s important to protect intellectual property, whereas Lessig dramatically states, “Ours was a free culture. It is becoming less so” (30). There’s nothing wrong with taking the counter view, but I am skeptical of an argument that stands upon completely disproving another position, rather than generating genuine ideas that may or may not line up with prevailing theories. That sounded pretentious and confusing. I just mean that I sense a little rebellious flair in Lessig’s writing, like he’s excited to tear down the mistakes our culture has made.

Elephants are highly emotional creatures, and are one of the only mammals besides us who mourn their dead.

This guy gets it

Lessig is doing the Socrates thing, where you ask little questions that people agree with (“isn’t it silly to sue Girl Scouts for singing copyrighted songs around a campfire?” “don’t scientists build off each other’s work all the time?”) until you’ve led them to a conclusion miles away from where they started. Think about what he’s saying: protecting intellectual property is not only illogical, but is changing our culture for the worse. Yet every one of us has created something that we are proud of, sometimes even defensively proud of. Can you imagine another person or corporation taking credit for it? As someone who has been plagiarized, I can tell you that it’s more gut-wrenching than you’d think. I do not think it is such an evil thing to get credit for your hard work. Just because some inventing happens in the mind rather than in a workshop, that doesn’t mean we should privilege the protection of one kind over another.

The photographer is named Brian Skerry. He was interviewed about this photo and said that the whale was calm, curious, and had not one iota of aggression as it approached his partner. After this photo, the whale swam on for a while, Skerry and his partner following and snapping pictures. When Skerry had to stop to catch his breath after 20 minutes, he was thrilled to have had such a successful day and assumed that was all he would get. But the whale actually stopped and waited for him. Oh my God I’m tearing up, isn’t that beautiful?!

But I am getting ahead of myself a little bit, because to be honest, I’m not even sure that I understand Lessig’s argument completely. I probably shouldn’t be criticizing him like this until I’ve read the whole book, I admit. From what I’ve gotten through, though, I can say that I find his argument convincing only in small chunks, but kind of incoherent in the big picture. Lessig adores historical anecdotes. Each chapter contains several very interesting stories about how Joe What’shisnose got ripped off by a big corporation or how Jane Blah was only able to create the world’s greatest whatever because she used someone else’s idea. I really liked all of these examples, especially the explanation of Japanese ‘copycat’ comics. The problem was that I had trouble connecting them. Lessig tells us that his book is “about an effect of the Internet beyond the Internet itself: an effect upon how culture is made. […] The Internet has induced an important and unrecognized change in that process” (7) and that his goal is “to understand a hopelessly destructive war inspired by the technologies of the Internet but reaching far beyond its code” (11). Honestly, that’s the kind of thesis that I would circle at the Writing Center and say, “You have a really interesting idea here, but the thesis is supposed to be the roadmap to the rest of your paper. You need to be more specific.” Saying that you want to talk about how the Internet has changed culture and how there is conflict surrounding technology tells me very little about what I as a critical reader am supposed to be looking for.

Over 10,000 pitbulls have been euthanized due to breed discriminatory legislation in cities. Happy, loving family pets like this fella have been persecuted just because of unfair stereotypes. It’s dog racism. But look at him! Just, look!

Yikes, this is getting wordy. My point is that some of Lessig’s anecdotes seem to cast the people who lost their intellectual property in a sympathetic light (like the first story about poor Edwin, who committed suicide over his idea being stolen), while others underscore the importance of overriding property rights if we ever want to advance as a society (the Kodak episode). I’m pretty confident that he is arguing against strict intellectual copyright laws on the Internet, but if I weren’t reading his book in the context of this class, I might be less certain.

He also pulls a Parenti every now and then and throws out a statement in support of his argument that is just totally ridiculous. Lessig honestly thinks that “we, the most powerful democracy in the world, have developed a strong norm against talking about politics” (42)? Really? He backs this up by noting that we are discouraged from discussing politics because it is too controversial and can lead to rudeness, but as a card-carrying American, I can say that the thought of offending someone has never stopped me from saying anything. He cannot really expect to get us on board with the idea that our society stifles political dialogue.

This is Tillie. I have been blessed to call her my best friend for 7 happy years and counting!

All in all, I have not found this reading unpleasant. I like his writing style and, like I said, his anecdotes are very captivating. I just wish he had a little more direction, a little less sass, and a smidge of common sense.

You’re a champ if you stuck it through the whole thing. Hope the animal pictures helped.

Can Creativity be Programmed?

I was roaming the internet a few days ago and I came across this article.

http://news.bbc.co.uk/2/hi/programmes/click_online/9764416.stm

To summarize what this article is discussing: it reports that robots are actually capable of writing books now. At the moment, they are just writing about pre-existing scientific or mathematical theories and laws, and have even occasionally dabbled in love letters. However, the article’s biggest point is the question of whether a robot could actually write a fictional novel, and even win a Pulitzer for it.

Personally, I don’t think that is a valid question at this point.

Robots are still created and manufactured by humans, and their capabilities are clearly laid out by their creators within their computer code. At this point in time, there is no way to instruct something to be creative and innovative; instruction defeats the whole point of creativity. To properly write a work of fiction, you need to arrive at the idea through a combination of experience and imagination, something that machines don’t necessarily have right now. Robots cannot sit down and think about what would be an interesting story to write, because the code for that simply does not exist.
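To make this concrete, here is a minimal sketch (my own illustration, not anything from the article) of how much of today’s machine “writing” actually works under the hood: a Markov chain that recombines words from an existing text. Everything it produces is statistically recycled from its input; nothing in the code corresponds to wanting to tell a story.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Produce 'new' text by randomly recombining the source's word pairs."""
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # the source text never continued past this word
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Feed it a sentence of Thoreau; it can only ever shuffle Thoreau back at us.
source = ("I went to the woods because I wished to live deliberately, "
          "to front only the essential facts of life, and see if I could "
          "not learn what it had to teach.")
print(generate(build_chain(source), start="I"))
```

The output can look surprisingly sentence-like, but the program has no experiences and no imagination to draw on, which is exactly the gap I’m describing.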

However, if this technology ever does exist, I think the question is less whether a machine can write a novel and more whether the ability to create one implies that, at some level, these robots have an element of humanity in them. Does having the ability to be creative make the machine part of the human psyche, and can that ever be achieved? For me, writing has been a way to express thoughts, feelings, and emotions, as well as to take ownership of a world I have built, which inspires me to continue to write and create new stories that can be shared. Can a machine ever find this same level of joy, or will it create and be creative simply because it has to? And what would a robot author mean for the future of storytelling? Will it be another aspect of competition, or a stigma on the literary world? Will it be a boon, or will it only cause a new level of literary elitism?

The ‘Virtual’ Future of Social Media

Facebook has half a billion members, which is a crazy concept to me, especially considering that as recently as half our lives ago (for us kids, anyway), something like this would have been hard to imagine, let alone 20 or 30 years ago. All this connectedness and all these social media platforms raise a lot of questions, and there’s certainly a lot to be said about social media: does it bring people closer together, or does it further remove people from actual interaction? Is it a huge waste of time, or can you do more productive things with it than finding out what your friends are up to? Whichever side you’re on, it is impossible to deny the popularity of and interest in social media, and the influence that these technologies have in our everyday lives is just as ubiquitous. For instance, we have a whole class here at SUNY Geneseo about just this sort of thing, and we’re not the only ones. There is also a pretty consistent stream of articles, on paper and on the internet (such as the one Becca shared from the New Yorker last week), that deal with ideas like these. I think both sides have a point to make, and everything can be good (or bad) depending on the moderation. There’s no doubt that technology, these social media devices included, can do wonderful, amazing things. At the same time, they are changing certain social scripts that people have been used to, and maybe that’s part of the retraction, or at least the skepticism, of the naysayers; after all, change can be scary, as it brings to light a lot of unknown results and sometimes problems.

Anyway, many people think that Facebook may be on the decline, despite its steady increase in membership since its inception. Facebook, which has been purchasing multimedia apps and programs like you read about in order to keep delivering new ways to attract members (and keep old ones from getting bored), recently acquired the virtual reality company Oculus for $2 billion. While the purchase indicates pretty clearly where the company will be taking the site in the near future, it has spoken volumes to many disapproving imaginations, and disappointed gamers. Oculus used to be a company that worked on furthering research into virtual reality video games, but now that Facebook has merged with them, there has been a lot of backlash on the internet and hate toward the company. Some say Facebook is only interested in its total membership number, which is why certain companies and developers have refused to work with it in the past; others accuse Zuckerberg of just finding new ways to hurl ads at people, potentially quite literally now. Zuckerberg says that this is going to mean a beautiful new way of connecting with people, and a totally new kind of way to share experiences (there are those damn words again; share! connect!).

The internet community has accused Facebook of soullessness, big-time capitalism, and invading people’s privacy; the chief concern for the video game subculture, it appears, is that they merely want to be able to enjoy solitude from time to time (completely understandable). And while I understand a lot of these arguments, I can’t say that I’m not curious. I think there’s a lot of potential, but then there’s the part of me that also asks: what is this going to do to personal interaction?
As we mentioned earlier this semester, people used to think the telephone would spell the end of face-to-face communication, and I can’t help but see a similarity in the debate surrounding this most recent social media news bite. Why not, instead of virtually exploring a city or virtually climbing a mountain with someone, actually do those things? Or maybe there’s room for both… I will bite my tongue and wait for time to tell, since this is a very recent (one-week-old) story. But I will lastly include a funny photo I found on Reddit, whose users have been particularly vocal in their disapproval of the merger; it resembles a lot of the commentary one can find on there. After the news broke, the site was flooded with graphics like these, so if you’re interested in hearing a lot of people’s opinions, Reddit is a good sample space. This image is brought to you via a Redditor lightly ‘shopping a classic still from an old Simpsons episode. You guys remember the game FarmVille, right? Well, I’ll say no more, except that this could be the future of Facebook…

English Language Arts Algorithms?

Let’s face it: English majors probably fall victim to ridicule much more often than other college students. Our peers in the math or science departments might ask well-intentioned, yet still annoying questions like, “So, you guys just read all day? Like a book club?” Others likely judge us for being hypersensitive or overemotional. And 9 times out of 10, I get, “Oh, you must be one of those grammar Nazis then.” Yep, my homepage is Purdue OWL, and I’m spending large amounts of my time and money learning how to call you out for your incorrect use of “there/their/they’re.” Our major has even inspired a catchy show tune (see Avenue Q’s “What Do You Do with a B.A. in English?”). Perhaps the most frustrating question of all is, “You’re an English major? So you’re going to be a teacher then?” I actually am in the School of Education, yet this question still bothers me because it seems to insinuate that there is only one possible career goal English majors aspire to: teaching. So why are we constantly having to defend ourselves and our field of study?

Those of us who pursue a degree in English understand. We know that the content and skills we learn apply in so many different ways, and that the type of thinking we are trained to do is valuable in countless careers outside of education. It’s true that some who study English go on to be teachers of language and literature, but many more of us choose to be writers, editors, publishers, journalists, lawyers, public speakers, human resource specialists, and more. We acquire interpersonal skills, analytic and synthetic skills, communication skills, and, perhaps most notably, critical thinking skills; the realm of possibilities available to us is perhaps much greater than that of a person who chooses something highly technical or specialized. And yet the stigma still exists that those of us who study English are all about “the feels.”

It’s definitely true that our area of expertise is considered comparatively subjective. But that’s precisely why we love it. It’s called English Language Arts for a reason. Authors are master artists who use the craft of language to paint a beautiful picture with nothing other than words on a page. We live for those phrases with just the right balance of connotation, edge, and flow. We get sucked into a novel because we become so wrapped up in appreciation for the story, it seemingly takes on a life of its own. When we finish a book, it’s like we’re saying goodbye to a few good friends, and there is often a feeling of emptiness. The phrase “book hangover” is becoming popular: “The inability to start a new book because you’re still living in the last book’s world.” Language is powerful and certainly has the ability to transport us somewhere else for a while, and to me, literature is life breathed into once inanimate pieces of paper.

While reading Stephen Ramsay’s chapter entitled “An Algorithmic Criticism,” I will admit I was slightly skeptical. This man favors a black-and-white approach to viewing literature that I had never experienced until this class. As English majors, we like to latch on to those gray areas, interpreting a text in different ways based on various lenses. I took Literary Criticism at Geneseo as an undergraduate, and I loved being able to find cracks in which I might read between the lines, inserting a feminist, Marxist, structuralist, or psychoanalytic rendering of a given text. And yet Ramsay suggests we begin looking at our beloved literature based on nothing but the cold, hard, quantitative facts. Despite being initially reluctant, I admit that I did begin warming up to the idea of a mathematical tool that might help us read literature more concretely. I envisioned myself becoming a better defender of our art form: “In your face, physics majors. We are totally using algorithms to further our understanding and analysis of this complex theoretical text.” That should force them to take us more seriously, right? Okay then, I can get behind algorithmic criticism, especially since education in our country is currently emphasizing strict text-based evidence and data-driven instruction.
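For a concrete (and admittedly toy) picture of what this kind of criticism can look like, here is a short sketch of my own devising, not an example from Ramsay’s chapter: a few lines of code that compute quantitative features of a passage, such as word counts and vocabulary richness, which a human critic would then have to interpret.

```python
import re
from collections import Counter

def text_features(text):
    """Compute simple quantitative features of a passage."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {
        "tokens": len(words),                          # total words
        "types": len(counts),                          # distinct words
        "type_token_ratio": len(counts) / len(words),  # crude vocabulary richness
        "top_words": counts.most_common(5),            # most frequent words
    }

# Thoreau again: the numbers are easy to get; deciding what they mean is not.
sample = ("The mass of men lead lives of quiet desperation. "
          "What is called resignation is confirmed desperation.")
print(text_features(sample))
```

The algorithm happily counts; the reading between the lines is still up to us.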

BUT THEN. I remember reading an article that was nothing short of the polar opposite of the type of reading that we know and love as English majors. It’s called Robo-readers: the new teachers’ helper in the U.S., and it basically makes me want to cry. This article praises the use of robot graders in the classroom, which are supposedly more efficient, more reliable, and more accurate at grading student compositions than humans are. WHAT? I actually prefer this article – Facing a Robo-Grader? Just Keep Obfuscating Mellifluously – which, in addition to being satirical and rather entertaining, gives a much clearer picture of what these “robo-graders” are. Apparently, they are machines capable of “grading” up to 16,000 essays in 20 seconds. Similarly to the algorithms used in Ramsay’s piece, these robots scan compositions for length, Lexile complexity, vocabulary use, transition words, and other indicators that are somehow representative of “complex thinking.”

In class, we’re constantly talking about how technology is revolutionizing our lives, and educational institutions have been using digital upgrades like Scantrons to help grade exams for years. HOWEVER. I think there is a pretty clear difference between a machine that can count the number of correct answers based on objective measures (filling in the right bubble) and one that grades a student’s essay based on algorithms alone.

The problems with robo-graders are outlined really well in the latter article I’ve referenced, but to give a quick summary: automated readers cannot identify arguably important information, such as, let’s say, TRUTH. This means a student can write an essay getting 100% of the factual information wrong and still receive full credit. Computers also cannot detect nuances in human language such as sarcasm, and they do not understand or appreciate (and therefore cannot give credit for) creative, stylistic choices. E-raters give the best scores to the longest essays, regardless of content. They deduct points for short sentences, short paragraphs, sentence fragments, phrases beginning with “or,” “and,” or “but,” etc. Does this begin to give you an idea of how scary this is for our students? Some argue that kids who are bright enough to outsmart the robo-grader and tailor their writing to get high marks deserve them, because this sophisticated type of thinking is what warrants credit, even if the students cannot write to save their lives. Sorry, what? Lastly, consider this quote from the Times article: “Two former students who are computer science majors [said] that they could design an Android app to generate essays that would receive 6’s from e-Rater. He says the nice thing about that is that smartphones would be able to submit essays directly to computer graders, and humans wouldn’t have to get involved.” Are you afraid yet?
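To see how gameable this is, here is a deliberately naive scorer of my own invention, a caricature of the heuristics described above (essay length, sentence length, transition words), and emphatically not e-Rater’s actual algorithm. Notice that nothing in it checks whether a single claim is true.

```python
import re

# Words a crude grader might treat as evidence of "complex thinking."
TRANSITIONS = {"however", "moreover", "therefore", "furthermore", "consequently"}

def naive_score(essay):
    """Score an essay 0-6 on surface features alone; truth not included."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0
    length_pts = min(len(words) / 100, 3.0)        # longer essays score higher
    avg_len = len(words) / len(sentences)
    sentence_pts = min(avg_len / 10, 2.0)          # long sentences are rewarded
    transition_pts = min(0.5 * sum(w.lower() in TRANSITIONS for w in words), 1.0)
    return round(length_pts + sentence_pts + transition_pts, 1)

# Factually absurd, stylistically "sophisticated," and repeated 40 times:
nonsense = ("However, the moon is made of cheese, and therefore "
            "cheese is clearly made of the moon. ") * 40
print(naive_score(nonsense))  # a high score for an essay that says nothing true
```

A student (or, apparently, an Android app) that learns which levers the machine rewards can max out the score without writing anything worth reading.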

Maybe I’m a typical, sentimental English major. Maybe I’m sounding like an old soul. Or maybe I’m terrified of a world so quantifiable that our students need only learn how to write in order to please the grade-giving robo-reader god. Those of us who study English do so because we recognize literature to be an art form, and because we believe in the power of language to give shape to the world. We understand English as a vehicle through which to make sense of life, and our passion for learning (and, for many of us, teaching) this material stems from our desire to connect with other members of humanity in a meaningful way. I’m not sure any e-Rater would understand this, let alone have the ability to truly judge our work. Maybe in the future robo-grading will become the norm, but no. Not just yet.