WWMS: What Would Marx Say about Digital Commons

Perhaps this is just because I’m currently reading The Manifesto in Humanities, but with all this talk about the consequences for private property in the digital age, I was wondering what Marx would have to say about all of this. The answer I arrive at is vague and pretty unhelpful (like Marx himself on the whole), but I’ll get there in a minute.

Before all this talk of communism, John Locke wrote about the implications of property ownership as early as the 17th century.

John "Locke": Because he was exercising his natural right to liberty…

He writes in his Second Treatise of Government (based on my limited Humn knowledge) that property was originally defined by what you could hunt/gather for yourself without wasting. This created a level of equality among people, because amassing enormous wealth would be physically taxing, and people stopped collecting things when their "natural" needs were met. Things like berries and meat spoil quickly, so it would make no sense to hoard them. The appearance of money and its triumph over the barter system changed the way people owned things. Now people could own things unequally, and theoretically amass unlimited amounts of wealth that lasts and accumulates. This sounds like the capitalist system we have today.

Marx, of course, credits any social development throughout history to economics–essentially, the distribution of property. Engels is really the one who describes the property distribution between the upper-middle bourgeois class and the working proletariat. The workers are stuck in an endless cycle of poverty. Marx writes in his Manifesto that the typical "synthesis" that happens when "haves" and "have-nots" clash will not be possible when the bourgeoisie and proletariat of capitalism inevitably meet their end. Capitalism will end, and some sort of revolution that we can't yet imagine will follow. Communism, or equal distribution of wealth, is the best way to stop putting band-aids on capitalism and hasten this "revolution."

Blamed Capitalism before it was mainstream…

So, WWMS about the question of digital commons, or places online such as Digital Thoreau in which anyone with internet access can “own” something? How can anyone truly own something/the rights to something if digital sites are open-access?

I think the important thing to remember is that the nature of property distribution has changed as texts, ideas, images, etc. have moved online. Nowadays, an artist can't be assured for one second that she'll receive money for everything she has published; someone somewhere will undoubtedly have found a way to copy-paste or download or screenshot her work. There's a block in the money-centered, capitalistic flow of trade that people such as Scott Turow, Paul Aiken, and James Shapiro would argue discourages creativity and production.

BUT

This is where things get eerie, because Marx predicts the destruction of means of production as a way to combat the overproduction of final-stage capitalism. The sheer volume of things produced on the web makes it a perfect example of capitalism in its final stages. There's overproduction, then unwillingness or inability to pay on the part of consumers, and then a disincentive for producers to continue… producing.

Communal spaces on the web of course sound kind of communistic in that they equalize people as consumers. However, they're different from the material property and situation that Marx and Engels were so sure determine everything in the world. In fact, web content seems to me more similar to the berries and meat Locke spoke of. It doesn't really have an expiration date, but there's only so much you can download and read and listen to on a computer or in a day. And the amount you download on your computer doesn't determine your wealth or material situation (unlike money). This is arbitrary property that falls not really under the supply-and-demand chain of capitalism, but more under the take-what-you-need-but-it-will-take-time-and-effort model of the hunter/gatherer system.

Of course, where it differs is that people have to produce online content, whereas deer produce venison for us (thanks, deer). So we still have the problem of production. But Marx would definitely say that that anxiety is the capitalist in all of us, who can't envision any other way of viewing the world except as a giant factory of creation. However, that still doesn't help us very much in finding pragmatic ways to encourage production in a communal world without guaranteed payback for your time and effort.

So I think Marx would look at the digital age and the way property has changed in nature and in distribution, shake his head, think of the end of capitalism, smile, and say, "I told you this was coming."

Algorithmic Criticism and the Humanities

In a characteristically lively and thoughtful post, Katie Allen looks at some articles about computer programs that automate the evaluation of student writing. She eloquently expresses a concern that many in the humanities, myself included, share about the use of machines to perform tasks that have traditionally relied on human judgment. “Those of us who study English do so because we recognize literature to be an art form, and because we believe in the power of language to give shape to the world,” she writes. A computer can run algorithms to analyze a piece of writing for length and variety of sentences, complexity of vocabulary, use of transitions, etc., but it still takes a trained human eye, and a thinking subject behind it capable of putting words in context, to recognize truth and beauty.
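To make that contrast concrete, here is a minimal sketch of the kind of surface metrics such a program might compute. It's my own illustration rather than any real grader's code, and the transition-word list and function name are invented for the example:

```python
# A minimal sketch of algorithmic surface metrics, not any real grader's code.
import re
import statistics

# Invented, tiny transition-word list; real systems use much larger lexicons.
TRANSITIONS = {"however", "therefore", "moreover", "furthermore", "consequently"}

def surface_metrics(text: str) -> dict:
    """Compute crude measures of length, variety, vocabulary, and transitions."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    lengths = [len(s.split()) for s in sentences]
    return {
        "mean_sentence_length": statistics.mean(lengths),
        "sentence_length_variety": statistics.pstdev(lengths),
        # Type-token ratio: unique words over total words, a rough
        # stand-in for "complexity of vocabulary."
        "type_token_ratio": len(set(words)) / len(words),
        "transition_count": sum(w in TRANSITIONS for w in words),
    }

print(surface_metrics("However, the essay was long. Therefore it scored well. It said nothing."))
```

Nothing in those numbers knows whether the essay is true or beautiful, which is exactly the gap Katie is pointing to.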

Yet if we’re right to be skeptical about the capacity of machines to substitute for human judgment, we might ask whether there is some other role that algorithms might play in the work of humanists.

This is the question that Stephen Ramsay asks in his chapter of Reading Machines titled “An Algorithmic Criticism.”

Katie’s post makes Ramsay sound rather like he’s on the side of the robo-graders. She writes that he “favors a black-and-white approach to viewing literature that I have never experienced until this class… . [He] suggests we begin looking at our beloved literature based on nothing but the cold, hard, quantitative facts.”

In fact, though, Katie has an ally in Ramsay. Here is what he says about the difference, not between machines and humans, but more broadly between the aims and methods of science and those of the humanities:

… science differs significantly from the humanities in that it seeks singular answers to the problems under discussion. However far ranging a scientific debate might be, however varied the interpretations offered, the assumption remains that there is a singular answer (or set of answers) to the question at hand. Literary criticism has no such assumption. In the humanities the fecundity of any particular discussion is often judged precisely by the degree to which it offers ramified solutions to the problem at hand. We are not trying to solve [Virginia] Woolf. We are trying to ensure that the discussion of [Woolf’s novel] The Waves continues.

Critics often use the word “pattern” to describe what they’re putting forth, and that word aptly connotes the fundamental nature of the data upon which literary insight relies. The understanding promised by the critical act arises not from a presentation of facts, but from the elaboration of a gestalt, and it rightfully includes the vague reference, the conjectured similitude, the ironic twist, and the dramatic turn. In the spirit of inventio, the critic freely employs the rhetorical tactics of conjecture — not so that a given matter might be definitely settled, but in order that the matter might become richer, deeper, and ever more complicated. The proper response to the conundrum posed by [the literary critic George] Steiner’s “redemptive worldview” is not the scientific imperative toward verification and falsification, but the humanistic propensity toward disagreement and elaboration.

This distinction — which insists, as Katie does, that work in the humanities requires powers and dispositions that machines don’t possess and can’t appreciate (insight, irony) — provides the background for Ramsay’s attempt to sketch out the value of an “algorithmic criticism” for humanists. Science seeks results that can be experimentally “verified” or “falsified.” The humanities seek to keep a certain kind of conversation going.

We might add that science seeks to explain what is given by the world through the discovery of regular laws that govern that world, whereas the humanities seek to explain what it is like to be, and what it means to be, human in that world — as well as what humans themselves have added to it. To perform its job, science must do everything in its power to transcend the limits of human perspective; for the humanities, that perspective is unavoidable. As the philosopher Charles Taylor has put it, humans are “self-interpreting animals” — we are who we are partly in virtue of how we see ourselves. It would be pointless for us to understand what matters to us as humans from some neutral vantage outside the frame of human subjectivity and human concerns — “pointless” in the sense of “futile,” but also in the sense of “beside the point.” Sharpening our view of things from this vantage is precisely what the humanist is trying to do. If you tried to sharpen the view without simultaneously inhabiting it, you would have no way to gauge your own success.

The gray areas that are the inevitable territory of the English major, and in which Katie, as an exemplary English major, is happy to live, are — Ramsay is saying — the result of just this difference between science and the humanities. As a humanist himself, he’s happy there, too. He’s not suggesting that the humanities should take a black-and-white approach to literature. On the contrary, he insists repeatedly that texts contain no “cold, hard facts” because everything we see in them we see from some human viewpoint, from within some frame of reference; in fact, from within multiple, overlapping frames of reference.

Ramsay also warns repeatedly against the mistake of supposing that one could ever follow the methods of science to arrive at “verifiable” and “falsifiable” answers to the questions that literary criticism cares about.

What he does suggest, however, is that precisely because literary critics cast their explanations in terms of “patterns” rather than “laws,” the computer’s ability to execute certain kinds of algorithms and perform certain kinds of counting makes it ideally suited, in certain circumstances, to aid the critic in her or his task. “Patterns” of a certain kind are just what computers are good at turning up.

“Any reading of a text that is not a recapitulation of that text relies on a heuristic of radical transformation,” Ramsay writes. If your interpretation of Hamlet is to be anything other than a mere repetition of the words of Hamlet, it must re-cast Shakespeare’s play in other words. From that moment, it is no longer Hamlet, but from that moment, and not until that moment, understanding Hamlet becomes possible. “The critic who endeavors to put forth a ‘reading’ puts forth not the text, but a new text in which the data has been paraphrased, elaborated, selected, truncated, and transduced.”

There are many ways to do this. Ramsay’s point is merely that computers give us some new ones, and that the “radical transformation” produced by, for example, analyzing linguistic patterns in Woolf’s The Waves may take the conversation about the novel in some heretofore unexpected, and, at least for the moment, fruitful direction, making it richer, deeper, more complicated.
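To give a flavor of what such an algorithmically produced "pattern" might look like, here is a toy sketch of my own; it is not Ramsay's actual procedure, just a crude relative-frequency comparison that surfaces the words one passage uses disproportionately often compared with another:

```python
# A toy pattern-finder: which words does passage A use disproportionately
# often compared with passage B? My illustration, not Ramsay's method.
import re
from collections import Counter

def word_freqs(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def distinctive_words(a: str, b: str, n: int = 5) -> list:
    fa, fb = word_freqs(a), word_freqs(b)
    total_a, total_b = sum(fa.values()), sum(fb.values())
    # Ratio of relative frequencies, add-one smoothed so words absent
    # from B don't cause division by zero.
    score = {w: (fa[w] / total_a) / ((fb[w] + 1) / (total_b + 1)) for w in fa}
    return sorted(score, key=score.get, reverse=True)[:n]

# Hypothetical usage, with two snippets standing in for two longer passages.
print(distinctive_words("the sun rose and the waves broke on the shore",
                        "the birds sang alone in the quiet garden"))
```

A critic could then treat the resulting word list the way Ramsay treats the lists he derives for The Waves: not as a verdict about the novel, but as a provocation for further reading.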

At a time when those of us in the humanities justly feel that what we do is undervalued in the culture at large, while what scientists do is reflexively celebrated (even as it is often poorly understood), there are, I believe, two mistakes we can make.

One is the mistake that Ramsay mentions: trying to make the humanities scientific, in the vain hope that doing so will persuade others to view what we do as important, useful, “practical.” (Katie identifies a version of this mistake in the presumption that robo-grading can provide a more “accurate” — that is, more scientific — assessment of students’ writing skills than humans can.)

But the other mistake would be to take up a defensive posture toward science, to treat the methods and aims of science as so utterly alien, if not hostile, to the humanities that we should guard ourselves against contamination by them and, whenever possible, proclaim from the rooftops our superiority to them. Katie doesn’t do this, but there are some in the humanities who do.

In a recent blog post on The New Anti-Intellectualism, Andrew Piper calls out those humanists who seem to believe that "the world can be neatly partitioned into two kinds of thought, scientific and humanistic, quantitative and qualitative, remaking the history of ideas in the image of C.P. Snow's two cultures." It's wrongheaded, he argues, to suppose that "Quantity is OK as long as it doesn't touch those quintessentially human practices of art, culture, value, and meaning."

Piper worries that “quantification today is tarnished with a host of evils. It is seen as a source of intellectual isolation (when academics use numbers they are alienating themselves from the public); a moral danger (when academics use numbers to understand things that shouldn’t be quantified they threaten to undo what matters most); and finally, quantification is just irrelevant.”

That view of quantification is dangerous and unfortunate, I think, not only because we need quantitative methods to help us make sense of such issues of pressing human concern as wealth inequality and climate change, but also because artists themselves measure sound, syllable, and space to take the measure of humanity and nature.

As Piper points out, “Quantity is part of that drama” of our quest for meaning about matters of human concern, of our deeply human “need to know ‘why.’”

Admin’s note: This post has been updated since its original appearance.

Parenti, Lessig, and cute animals

Reading Lawrence Lessig’s “Free Culture” reminds me of a book I had to read for a high school global history class: “The Assassination of Julius Caesar: A People’s History of Ancient Rome” by Michael Parenti.

Since this post does not lend itself to images, treat yourself to some adorable animal pictures.

Parenti, a Yale grad and "cultural critic" (Wikipedia's words), argues in his book that history has really done a number on poor Caesar, who was not, in fact, assassinated because he was abusing power and ignoring the needs of his constituents. A few chapters are eloquent laundry lists of all the great things Caesar did for Rome, like creating the Julian calendar (a variation of which we still use today) and working to relieve poverty among the very plebs he was accused of mistreating; other chapters debunk common misconceptions 'traditional history' has fed us. A 2004 book review from Parenti's website synopsizes his thesis: "In The Assassination of Julius Caesar, the distinguished author Michael Parenti subjects these assertions of 'gentlemen historians' to a bracing critique, and presents us with a compelling story of popular resistance against entrenched power and wealth. Parenti shows that Caesar was only the last in a line of reformers, dating back across the better part of a century, who were murdered by opulent conservatives."

His name is Lionel and she rescued him from a slaughterhouse when he was a calf. True story.

I disliked the book from the first few pages because of Parenti's smug attitude. He seems to think that he is pulling the wool from our eyes and showing us a hidden truth, when in reality he is simply proposing a theory contrary to the ones in our boilerplate high school textbooks. Responsible readers will identify this bias and take his argument with a grain of salt, but I can easily see a less careful reader thinking that he now understands Ancient Rome better than his friends because he knows 'the truth.' The textbooks' version of why Caesar was assassinated and Parenti's are both rooted in facts; it's just that each one gussies up its argument in a different way, puts those facts in a different order, foregrounds different information, and flat-out omits what doesn't suit the thesis.

I promise, I'm circling back to Lessig now. In reading the introduction and first few chapters of "Free Culture," I was getting strong Parenti vibes.

Elephants are highly emotional creatures, and are one of the only mammals besides us who mourn their dead.

Just like Parenti's, Lessig's argument is opposed to the one that contemporary culture furnishes us with. Most people believe it's important to protect intellectual property, whereas Lessig dramatically states, "Ours was a free culture. It is becoming less so" (30). There's nothing wrong with taking the contrary view, but I am skeptical of an argument that stands upon completely disproving another position rather than generating genuine ideas that may or may not line up with prevailing theories. That sounded pretentious and confusing. I just mean that I sense a little rebellious flair in Lessig's writing, like he's excited to tear down the mistakes our culture has made.

This guy gets it

Lessig is doing the Socrates thing, where you ask little questions that people agree with ("isn't it silly to sue Girl Scouts for singing copyrighted songs around a campfire?" "don't scientists build off each other's work all the time?") until you've led them to a conclusion miles away from where they started. Think about what he's saying: protecting intellectual property is not only illogical but is changing our culture for the worse. Yet every one of us has created something that we are proud of, sometimes even defensively proud of. Can you imagine another person or corporation taking credit for it? As someone who has been plagiarized, I can tell you that it's more gut-wrenching than you'd think. I do not think it is such an evil thing to get credit for your hard work. Just because some inventing happens in the mind rather than in a workshop doesn't mean we should privilege the protection of one kind over another.

The photographer is named Brian Skerry. He was interviewed about this photo and said that the bowhead whale was calm, curious, and had not one iota of aggression as it approached his partner. After this photo, the whale swam on for a while, Skerry and his partner following and snapping pictures. When Skerry had to stop to catch his breath after 20 minutes, he was thrilled to have had such a successful day and assumed that was all he would get. But the whale actually stopped and waited for him. Oh my God, I'm tearing up, isn't that beautiful?!

But I am getting ahead of myself a little bit, because to be honest, I'm not even sure that I understand Lessig's argument completely. I probably shouldn't be criticizing him like this until I've read the whole book, I admit. From what I've gotten through, though, I can say that I find his argument convincing only in small chunks, but kind of incoherent in the big picture. Lessig adores historical anecdotes. Each chapter contains several very interesting stories about how Joe What's-his-nose got ripped off by a big corporation or how Jane Blah was only able to create the world's greatest whatever because she used someone else's idea. I really liked all of these examples, especially the one about Steamboat Willie and the explanation of Japanese 'copycat' comics. The problem was that I had trouble connecting them.

Lessig tells us that his book is "about an effect of the Internet beyond the Internet itself: an effect upon how culture is made. […] The Internet has induced an important and unrecognized change in that process" (7) and that his goal is "to understand a hopelessly destructive war inspired by the technologies of the Internet but reaching far beyond its code" (11). Honestly, that's the kind of thesis that I would circle at the Writing Center and say, "You have a really interesting idea here, but the thesis is supposed to be the roadmap to the rest of your paper. You need to be more specific." Saying that you want to talk about how the Internet has changed culture and how there is conflict surrounding technology tells me very little about what I as a critical reader am supposed to be looking for.

Over 10,000 pit bulls have been euthanized due to breed-discriminatory legislation in cities. Happy, loving family pets like this fella have been persecuted just because of unfair stereotypes. It's dog racism. But look at him! Just, look!

Yikes, this is getting wordy. My point is that some of Lessig's anecdotes seem to cast the people who lost their intellectual property in a sympathetic light (like the first story about poor Edwin, who committed suicide over his idea being stolen), while others underscore the importance of overriding property rights if we ever want to advance as a society (the Kodak episode). I'm pretty confident that he is arguing against strict intellectual copyright laws on the Internet, but if I weren't reading his book in the context of this class, I might be less certain.

He also pulls a Parenti every now and then and throws out a statement in support of his argument that is just totally ridiculous. Does Lessig honestly think that "we, the most powerful democracy in the world, have developed a strong norm against talking about politics" (42)? Really? He backs this up by noting that we are discouraged from discussing politics because it is too controversial and can lead to rudeness, but as a card-carrying American, I can say that the thought of offending someone has never stopped me from saying anything. He can't really expect to get us on board with the idea that our society stifles political dialogue (or even satire).

This is Tillie. I have been blessed to call her my best friend for 7 happy years and counting!

All in all, I have not found this reading unpleasant. I like his writing style and, like I said, his anecdotes are very captivating. I just wish he had a little more direction, a little less sass, and a smidge of common sense.

You’re a champ if you stuck it through the whole thing. Hope the animal pictures helped.

Can Creativity be Programmed?

I was roaming the internet a few days ago and I came across this article.

http://news.bbc.co.uk/2/hi/programmes/click_online/9764416.stm

To summarize: the article reports that computer programs are now actually capable of writing books. At the moment, they are just writing about pre-existing scientific or mathematical theories and laws, and have even occasionally dabbled in love letters. However, the article's biggest point is the question of whether a robot could actually write a fictional novel, and even win a Pulitzer for it.

Personally, I don’t think that is a valid question at this point.

Robots are still created and manufactured by humans, and their capabilities are clearly laid out by their creators within their computer code. At this point in time, there is no way to instruct something to be creative and innovative; instruction kills the whole point of creativity. To be able to properly write a work of fiction, you need to arrive at the idea through a combination of experience and imagination, something that machines don't necessarily have right now. Robots simply cannot sit down and think about what would be an interesting story to write about, because the code for that simply does not exist.
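One way to see this is to look at what machine-generated prose actually amounts to today. The sketch below is entirely my own invention, not any real system's code: the program can emit sentences its author never typed, but only by recombining material the author supplied.

```python
# A trivial "story generator": every output is a recombination of
# human-supplied parts. An invented example, not any real system.
import random

SUBJECTS = ["a detective", "an old sailor", "the last robot on Earth"]
EVENTS = ["found an unsigned letter", "lost a foolish bet", "missed the last train"]
TWISTS = ["and nothing was ever the same", "but no one believed the story"]

def generate_sentence() -> str:
    # "Novel" combinations, yes; novel ideas, no. The machine cannot add
    # a subject, event, or twist its programmer didn't already imagine.
    return f"{random.choice(SUBJECTS)} {random.choice(EVENTS)}, {random.choice(TWISTS)}."

print(generate_sentence())
```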

However, if this technology ever does exist, I think the question is less whether a machine can write a novel and more whether the ability to create one implies that, at some level, these robots have an element of humanity in them. Does having the ability to be creative make the machine part of the human psyche, and can that ever be achieved? For me, writing has been a way to express thoughts, feelings, and emotions, as well as to take ownership of a world that I have built, which inspires me to continue to write and create new stories that can be shared. Can a machine ever find this same level of joy, or will it create and be creative simply because it has to? And what would a robot author mean for the future of storytelling? Will it be another aspect of competition, or a stigma on the literary world? Will it be a boon, or will it only cause a new level of literary elitism?

The ‘Virtual’ Future of Social Media

Facebook has half a billion members, which is a crazy concept to me, especially considering that as recently as half our lives ago (us kids, anyway), something like this would have been hard to imagine, let alone 20 or 30 years ago. All this connectedness and all these social media platforms raise a lot of questions, and there's certainly a lot to be said about social media: does it bring people closer together, or does it further remove people from actual interaction? Is it a huge waste of time, or can you do more productive things with it than finding out what your friends are up to? Whichever side you're on, it is impossible to deny the popularity of and interest in social media, and the influence these technologies have in our everyday lives is just as ubiquitous. For instance, we have a whole class here at SUNY Geneseo about just this sort of thing, and we're not the only ones. There is also a pretty consistent stream of articles, on paper and on the internet, such as the one Becca shared from the New Yorker last week, that deal with ideas like these. I think both sides have a point, and everything can be good (or bad) depending on the moderation. There's no doubt that technology, these social media devices included, can do wonderful, amazing things. They are also simultaneously changing certain social scripts that people have been used to, and maybe that's part of the retraction, or at least skepticism, from the group of naysayers; after all, change can be scary, since it brings to light a lot of unknown results and sometimes problems.

Anyway, many people think that Facebook may be on the decline, despite its steady increase in membership since its inception. Recently Facebook, which has been purchasing multimedia apps and programs like you read about in order to keep delivering new ways to attract members (and keep old ones from getting bored), acquired the virtual reality company Oculus for 2 billion dollars. While the purchase indicates pretty clearly where the company will be taking the site in the near future, it has spoken volumes to many disapproving imaginations, and disappointed gamers. Oculus used to be a company that worked on furthering research into virtual reality video games, but now that Facebook has merged with them, there has been a lot of backlash on the internet and hate toward the company. Some accuse Facebook of being interested only in its total membership number, which is why certain companies and developers have refused to work with it in the past; others accuse Zuckerberg of just finding new ways to hurl ads at people, potentially quite literally now. Zuckerberg says that this is going to mean a beautiful new way of connecting with people, and a totally new kind of way to share experiences (there are those damn words again; share! connect!). The internet community has accused Facebook of soullessness, big-time capitalism, and invading people's privacy; the chief concern for the video game subculture, it appears, is that they merely want to be able to enjoy solitude from time to time (completely understandable). And while I understand a lot of these arguments, I can't say that I'm not curious. I think there's a lot of potential, but then there's the part of me that asks: what is this going to do to personal interaction?

As we mentioned earlier this semester, people used to think the telephone would spell the end of face-to-face communication, and I can't help but see a similarity in the debate surrounding this most recent social media news bite. Why not, instead of virtually exploring a city or virtually climbing a mountain with someone, actually do those things? Or maybe there's room for both… I will bite my tongue and wait for time to tell, since this is a very recent (week-old) story. But I will lastly include a funny photo I found on Reddit, whose users have been particularly vocal in their disapproval of the merger; it resembles a lot of the commentary one could find there. After the news broke, the site was flooded with graphics like these, so if you're interested in hearing a lot of people's opinions, Reddit is a good sample space. This image is brought to you via a Redditor slightly shopping a classic still from an old "Simpsons" episode. You guys remember the game FarmVille, right? Well, I'll say no more, except that this could be the future of Facebook…

English Language Arts Algorithms?

Let's face it: English majors probably fall victim to ridicule much more often than other college students. Our peers in the math or science departments might ask well-intentioned, yet still annoying questions like, "So, you guys just read all day? Like a book club?" Others likely judge us for being hypersensitive or overemotional. And 9 times out of 10, I get, "Oh, you must be one of those grammar Nazis then." Yep, my homepage is Purdue OWL, and I'm spending large amounts of my time and money learning how to call you out for your incorrect use of "there/their/they're." Our major has even inspired a catchy show tune (see Avenue Q's "What Do You Do with a B.A. in English?"). Perhaps the most frustrating question of all is, "You're an English major? So you're going to be a teacher then?" I actually am in the School of Education, yet this question still bothers me because it seems to insinuate that there is only one possible career goal English majors aspire to: teaching. So why are we constantly having to defend ourselves and our field of study?

Those of us who pursue a degree in English understand. We know that the content and skills we learn apply in so many different ways, and that the type of thinking we are trained to do is valuable in countless careers outside of education. It's true that some who study English might go on to be teachers of language and literature, but many more of us choose to be writers, editors, publishers, journalists, lawyers, public speakers, human resource specialists, and more. We acquire interpersonal skills, analytic and synthetic skills, communication skills, and perhaps most notably, critical thinking skills; the realm of possibilities available to us is perhaps much greater than that available to a person who chooses something highly technical or specialized. And yet, the stigma still exists that those of us who study English are all about "the feels."

It’s definitely true that our area of expertise is considered comparatively subjective. But that’s precisely why we love it. It’s called English Language Arts for a reason. Authors are master artists who use the craft of language to paint a beautiful picture with nothing other than words on a page. We live for those phrases with just the right balance of connotation, edge, and flow. We get sucked into a novel because we become so wrapped up in appreciation for the story, it seemingly takes on a life of its own. When we finish a book, it’s like we’re saying goodbye to a few good friends, and there is often a feeling of emptiness. The phrase “book hangover” is becoming popular: “The inability to start a new book because you’re still living in the last book’s world.” Language is powerful and certainly has the ability to transport us somewhere else for a while, and to me, literature is life breathed into once inanimate pieces of paper.

While reading Stephen Ramsay's chapter entitled "An Algorithmic Criticism," I will admit I was slightly skeptical. This man favors a black-and-white approach to viewing literature that I have never experienced until this class. As English majors, we like to latch on to those gray areas, interpreting a text in different ways based on various lenses. I took Literary Criticism at Geneseo as an undergraduate, and I loved being able to find cracks in which I might read between the lines, inserting a feminist, Marxist, structuralist, or psychoanalytic rendering of a given text. And yet Ramsay suggests we begin looking at our beloved literature based on nothing but the cold, hard, quantitative facts. Despite being initially reluctant, I admit that I did begin warming up to the idea of a mathematical tool that might help us read literature more concretely. I envisioned myself becoming a better defender of our art form: "In your face, physics majors. We are totally using algorithms to further our understanding and analysis of this complex theoretical text." That should force them to take us more seriously, right? Okay then, I can get behind algorithmic criticism. Especially since education in our country is currently emphasizing strict, text-based evidence and data-driven instruction.

BUT THEN. I remembered an article that is nothing short of the polar opposite of the type of reading that we know and love as English majors. It's called Robo-readers: the new teachers' helper in the U.S., and it basically makes me want to cry. This article praises the use of robot graders in the classroom, which are supposedly more efficient, more reliable, and more accurate at grading student compositions than humans are. WHAT? I actually prefer this article – Facing a Robo-Grader? Just Keep Obfuscating Mellifluously – which, in addition to being satirical and rather entertaining, gives a much clearer picture of what these "robo-graders" are. Apparently, they are machines capable of "grading" up to 16,000 essays in 20 seconds. Like the algorithms in Ramsay's piece, these robots scan compositions for length, Lexile complexity, vocabulary use, transition words, and other indicators that are somehow representative of "complex thinking."

In class, we're constantly talking about how technology is revolutionizing our lives, and educational institutions have been using digital upgrades like Scantrons to help grade exams for years. HOWEVER. I think the difference is pretty clear between a machine that can count the number of correct answers based on objective measures (filling in the correct bubble) and one that grades a student's essay based on algorithms alone.

The problems with robo-graders are outlined really well in the latter article I've referenced, but to give a quick summary: automated readers cannot identify arguably important information, such as, let's say, TRUTH. This means a student can write an essay getting 100% of the factual information wrong and still receive full credit. Computers also cannot detect nuances in human language such as sarcasm, and they do not understand or appreciate (and therefore cannot give credit for) creative, stylistic choices. E-raters give the best scores to the longest essays, regardless of content. They deduct points for short sentences and paragraphs, sentence fragments, phrases beginning with "or," "and," or "but," etc. Does this begin to give you an idea of how scary this is for our students? Some argue that kids who are bright enough to outsmart the robo-grader and tailor their writing to get high marks deserve them, because this sophisticated type of thinking is what warrants credit, even if the students cannot write to save their lives. Sorry, what? Lastly, consider this quote from the Times article: "Two former students who are computer science majors [said] that they could design an Android app to generate essays that would receive 6's from e-Rater. He says the nice thing about that is that smartphones would be able to submit essays directly to computer graders, and humans wouldn't have to get involved." Are you afraid yet?
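To dramatize how gameable those heuristics are, here is a deliberately naive scorer of my own devising, loosely modeled on the incentives described above (length rewarded; fragments and conjunction openers penalized). It is a caricature, not e-Rater's actual algorithm:

```python
# A deliberately naive essay scorer modeled on the article's description
# of the incentives; a caricature, not e-Rater's real algorithm.
import re

def naive_score(essay: str) -> float:
    sentences = [s.strip() for s in re.split(r"[.!?]+", essay) if s.strip()]
    score = 0.1 * len(essay.split())  # longer essays score higher, content be damned
    for s in sentences:
        tokens = s.split()
        if len(tokens) < 5:                            # penalize fragments/short sentences
            score -= 2
        if tokens[0].lower() in {"and", "but", "or"}:  # penalize conjunction openers
            score -= 1
    return score

# Nothing above checks whether a single claim is true.
print(naive_score("And so. The moon is made of cheese, which obviously and "
                  "mellifluously obfuscates the fundamental paradigms of discourse."))
```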

Maybe I’m a typical, sentimental English major. Maybe I’m sounding like an old soul. Or maybe, I’m terrified of a world so quantifiable, our students need only learn how to write in order to please the grade-giving, robo-reader god. Those of us who study English do so because we recognize literature to be an art form, and because we believe in the power of language to give shape to the world. We understand English as a vehicle from which to make sense of life, and our passion for learning (and many of us for teaching) this material stems from our desire to connect with other members of humanity in a meaningful way. I’m not sure any e-Rater would understand this, let alone have the ability to truly judge our work. Maybe in the future robo-grading will become the norm, but no. Not just yet.

Technology in Special Education Classrooms

I just finished writing a research paper for my Shakespeare class on using Shakespeare in a special education classroom, and much of the research that I came across discussed the benefits of using technology to teach literature to students in special education. I thought this was an interesting topic to share, as I know many students in this class are education majors, and let's face it: regardless of whether you are certified in special education, you WILL have special education students in your classroom!

Using technology to teach literature not only engages students and makes them more willing and excited to read; it can also incorporate a variety of activities that help improve students' literacy. One common way of using technology in a classroom is showing a film or video clips that enhance the lesson you are trying to teach, but there are many other options. Programs like Microsoft Publisher and Windows Movie Maker allow students to create professional-looking projects that they can be excited about producing and take pride in as finished products.

In my research I came across one article by a teacher who documented the results of using different technologies to teach a Shakespeare play in her classroom. In addition to having her students use Microsoft Publisher to make pamphlets featuring a main character in the play, she used digital cameras to take photos and Photoshop to create scenes for a PowerPoint presentation of the play. She was thrilled with the results, saying, "Technology was the vehicle that built their confidence, gave them an understanding of Shakespeare, and ultimately the willingness to take the risk reading the actual work" ("Savoring Shakespeare" 1). With creative teachers and developing technology, there is no telling how much will change in education and how much students can learn. Students in special education are capable of learning to the same extent as any general education student, and technology is a tool that will help them succeed.

The article I mentioned is cited below:

"Savoring Shakespeare." Reading Today 21.2 (2003): 10. Academic Search Alumni Edition. Web. 1 Apr. 2014.

Famous Selfies: What Do They Say About Society?


Unless you live under a rock, there is no doubt that you have seen the "selfie" that comedian Ellen DeGeneres posted on Twitter while hosting the Academy Awards on Sunday, March 2nd. While I was watching the Oscars, I saw the scene unfold: in the middle of the ceremony, DeGeneres suddenly descended from the stage and declared that she was on a quest to snap the most retweeted picture of all time. She kept beckoning other celebrities, such as Brad Pitt and Angelina Jolie, to join in. This culminated in a star-studded picture that garnered over 3.4 million retweets, satisfying her goal of making it the most retweeted Tweet of all time and smashing the record previously held by President Obama when he won the most recent presidential election. Deemed the "Tweet that broke Twitter" by the Los Angeles Times and "the greatest Selfie of all Time" by MTV, the selfie even earned over $3 million for charities. Personally, before this legendary selfie, I had never even seen a Tweet make it to one million retweets, let alone 3.4 million!

The memorable selfie has been duplicated by celebrities as well as the general public. One of the most notable reproductions of the famed photo was tweeted by comedian Jimmy Kimmel about two weeks later. The caption? "@TheEllenShow– No Brad Cooper but 3 Clintons & a Kimmel." Although Kimmel's attempt did not receive half as many retweets as DeGeneres', it illustrates the social trend that DeGeneres created, triggering a myriad of responses from Kimmel along with others such as 50 Cent, the creators of The Simpsons, and a few ambitious people on my Instagram feed.

So, what does the staggering popularity of DeGeneres' selfie say about us as a society? The fact that the selfie amassed so much attention, from leading news stations to the common everyday Tweeter, goes to show how much of an impact social media sites such as Twitter, Facebook, and Instagram have had on our lives. People's feelings about the selfie range from near addiction to pure hatred: an Inuit campaign called "SEALFIE" has been launched in protest against DeGeneres, who used the selfie to donate money to an anti-seal-hunting fund. Personally, I believe that humorous moments such as these can be a healthy reprieve from the stresses of everyday life; that's why I think they get so much attention. One could argue that social media is a detrimental force on society and that its prominence is an example of how future generations are getting "doomed," but that's no fun at all. The negative backlash that this selfie has provoked strikes me as an overreaction to a lighthearted, humorous matter.

In addition, the popularity of the selfie expresses the profound impact that A-list celebrities such as Jennifer Lawrence, Meryl Streep, and Bradley Cooper can have on the community. Moreover, the fact that the selfie has more retweets than those featuring the Clintons and Obama does not necessarily mean that society cares more about celebrities. It simply illustrates that when browsing social media sites, people tend to look for more playful and humorous matters, unlike politics, which can be daunting. The remarkable popularity of DeGeneres' selfie is a representation of the endless possibilities that can be reached via social media.

Is social media pulling people apart? How about relationships?

A new app finally satisfies a desperate need of many men across the nation. It's called BroApp, and its slogan is that it will "Message your girlfriend sweet things so you can spend more time with the bros."

http://broapp.net/

This app seems to be the new frontier in a world where people are increasingly separated by computer screens and cellphones. The creators of the app probably think that their service is a win-win for all: the girls get sweet messages sent to them, and the guys have a little extra free time. However, isn't it the thought that counts? It would seem to me that any girl would object to being sent these stock messages rather than heartfelt messages from her partner.

The app even takes precautions to hide itself from a bro's girlfriend: it uses the phone's GPS to make sure that you aren't WITH your girlfriend when it sends the message, which would give it away immediately. It's amazing how much effort we can put into creating technology so that we don't have to put effort into our relationships.
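BroApp's actual implementation isn't public, so the sketch below is pure guesswork on my part, but the mechanism it describes is simple enough: compare the phone's GPS fix against a saved "girlfriend zone" and hold the scheduled message whenever the two are too close.

```python
# A guess at the geofence check, not BroApp's real code: suppress the
# scheduled message whenever the phone is near a saved location.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

GIRLFRIEND_ZONE = (40.7128, -74.0060)  # hypothetical saved coordinates
SAFE_DISTANCE_KM = 0.2                 # hypothetical "too close" threshold

def should_send(phone_lat: float, phone_lon: float) -> bool:
    # A scheduled sweet nothing arriving mid-date would give the whole game
    # away, so only send when the phone is safely outside the zone.
    return haversine_km(phone_lat, phone_lon, *GIRLFRIEND_ZONE) > SAFE_DISTANCE_KM

print(should_send(40.7130, -74.0062))  # False: too close, hold the message
```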

I’m sure we’ve all seen the pictures online or news stories of families and friends “spending time together” when they are really all playing on their phones individually. Is this the way that human relationships will continue to develop? With less and less real interaction, and more interaction over the internet, I believe that we will forget feelings and nuance in favor of a cold world of simple text.

BroApp is just another app that further disconnects people from each other. Even those we care about we choose not to interact with just so that we can have a few extra seconds of free time. Are these new apps and websites destroying the social structure of the world? Only time will tell.

Traveling with Technology


A week ago I was driving down King Street in Charleston, SC, soaking up the 75-degree weather while glancing at the palm trees.

Unfortunately, Spring Break only lasts a week, and I was forced (literally pushed out of the car) to go to the airport to fly back to the good ol’ 585.

On my flight, I always read the ‘SkyMall’ magazine to see what innovative (and not so innovative) products are being sold.

Here are the two I found the most interesting:

The Portable Wifi Signal Booster

This is a fantastic idea! In a technological era, this product represents ease of access, anywhere, at any time.


In my house back home, the wifi is centrally located in our basement, next to the household computer. My room is two floors above, making the wifi connection sometimes weak. Usually, I'm using the wifi on my television to stream Netflix, on my iPhone to scroll through Tumblr, on my Macbook Air to watch cat YouTube videos, and on my iPad to play Candy Crush (yes, I have an Apple addiction). This product is very innovative and would really help in easily boosting the wifi signal to all of my devices. It is described as a simple process: "The device simply plugs into an AC outlet, connects to a wireless network, and rebroadcasts the signal to provide a faster, more reliable WiFi connection."

Biffy Butler Bidet Sprayer / Digital Accessory Caddy / Toilet Paper Stand

I had to do a double-take when I saw this product. I know that sometimes extra material is needed in the bathroom; normally you picture newspapers and magazines. Well, I mean, you can read newspapers and magazines in iBooks, yes?

It astonishes me that this was actually being sold; on the other hand, I am not surprised at all. For a product to be created and put on the market, there must be an obvious demand for it. Are we living in an era where we cannot go without our technological devices for five minutes?

My train of thought brought me to the topic of being lonely versus being alone. A lot of people are frightened and uncomfortable at the thought of being alone. I don't mean on a deserted island in the middle of the Atlantic Ocean; rather, I mean simply going for a walk, or standing on line at Starbucks, or sitting in the Quad and listening to Michelle Branch (she's currently playing on my iTunes). Humankind has become completely wrapped up in always being with others, literally and through technology. We do not feel the pleasure of being allowed time to ourselves, to think for ourselves. Always bombarded with different social media and access to the thousands of opinions in the world, we do not take the time to form our own thoughts. We simply agree with the thought that sounds most to our liking. Does anyone take a minute to form his own opinion?

Is technology taking away our sense of self and our ability to form our own true opinions?

And so my two hour flight from South Carolina to Rochester came to an end.