And the Copyright Goes to…

After last week's discussions on copyright law and the rules by which copyright is defined, I took a look at an instance that seemed a little curious to me. This isn't a super insightful example, but if it inspires a dialogue, then great; I just thought this was an interesting scenario.

A lot of college students nowadays are familiar with, or at least have heard of in passing, a television cartoon known as Avatar: The Last Airbender. It was a fun cartoon with a cool concept and a cast of very relatable and engaging characters. As is common with such shows, it gathered quite a following, and a huge fan base exists for the show. A few years later, the James Cameron blockbuster Avatar came out, which managed to rake in an absurd amount of money. Now the really unusual thing about this is that one of the reasons Cameron pulled in so much money was that he purchased the rights to the name Avatar, which essentially barred the name from being used in other productions, including the aforementioned television series. The live-action movie made from the cartoon series had to be called just The Last Airbender, and the cartoon's sequel series had to be called The Legend of Korra to avoid paying royalties to Cameron for an unrelated property.

This to me seems ridiculous.

Why should productions made BEFORE a series have to pay someone else for using their original name? Not to mention that the word "avatar" is a term that existed long before CGI blue people borrowed it for their description. Robbing earlier projects of their identities through bought rights goes against the very point of copyright, which is to ensure the artist doesn't get robbed of their due respect.

I believe that to solve this sort of issue, a change to the function of copyright should be made. If a project is published, sent out, and reaches a certain profit quota, then the work should have a copyright applied to it immediately. That way, works that have come before other works of the same name can get their due, rather than having the rights go to a mad bid for ownership. It might be a bit on the unorthodox side, but I know if I created something, I'd want to maintain ownership of its identity, especially if I produced it before a project of the same name. Just a topic for thought.

It’s Written All Over Your Screen

So this isn't my most thoughtful or deep post, but I'd like to backtrack a moment to possibly the number one concern I hear from people who are wary about digital texts: reading off of a screen. And it really can be uncomfortable! We've all experienced tired eyes and headaches after staring at our laptops too long, and the glare from the sun that forces you inside when you need to use your computer. According to one article, "the issue has become so prevalent in today's work environment that the condition has been officially labelled by the American Optometric Association as 'Computer Vision Syndrome.'"

I would also argue that there's a more philosophical relevance to this concern: the screen is the face of the computer. The field of interaction between user and machine is primarily located on the screen (the tactile experience of typing is of course also relevant, but perhaps less complained about). So, if this interaction is to be comfortable and integrated into everyday life, the screen needs to be user-friendly.

Let's see what's being done to improve this experience, focusing on laptop computers (of course, e-readers such as the Nook and Kindle have done more on this front, but those aren't the devices people are using for hours on end).

Laptops used to be black and white, prone to blurry screens and ghost images. But by about 1991, color LCD screens came into use, which improved visual quality as well as cost for consumers. Nowadays, of course, laptop screens are at a whole new level, with the emphasis being on resolution and the implementation of touch screens.

The MacBook Pro Retina Display…does seem to reduce glare!

Apple's MacBook Pro has a new "Retina display" that's supposed to have incredibly high resolution (the MacBook Pro 15″ Retina Display is literally advertised as "eye-popping"). However, there seems to be little to no push for these screens to be easy on the eyes. Improvements generally focus on bright color and high definition, but that "brighter and bolder" sort of thinking would intuitively seem to me to make things worse.

In fact, there are articles dotting the web about how to avoid eye-strain yourself. These range from buying computer glasses to sitting up straighter to taking breaks. Clearly this is a common concern. However, the physical screens themselves are not being made more ergonomic and healthy by engineers. I could only find one company that is on a mission to reduce eye fatigue through engineering better screens.

So my mom WAS right when she said I should sit up straight….

That company is focused on using direct current to reduce perceived flicker. However, it is not a mainstream laptop producer implementing truly revolutionary technology in its products.

Does anyone else know more about this, or have special tips/screen appliances they use to reduce eye-strain?

The Fairness of Computer-Based Testing


As many of you have probably experienced, there is a push right now for computer-based testing. Computer-based tests are becoming the norm for standardized tests like the GRE and NYSTCE. As with many things, there are pros and cons to these exams. But the question is: do the pros outweigh the cons? Or do the cons make these tests unfair?

Some of the benefits of computer-based testing include: timely feedback, more efficient monitoring and tracking of students' results, a reduced number of resources (as they are replaced by computers), easier storage of results records, and electronic analysis of data that can be used in spreadsheets and statistical packages. Some drawbacks of computer-based testing include: costly and time-consuming implementation of the exams, the need for assessors and staff to have IT skills, close monitoring of the software (as there is a chance it could fail or malfunction during an exam), the absence of an instructor, difficulty preventing cheating, and computer anxiety.

This last disadvantage is a big one for students taking the exams, and it brings up the question of fairness. I am not saying that paper exams are entirely fair, as some people are better test takers than others to begin with. However, computer-based tests may give an advantage to students who are used to computers and have stronger computer skills. For example, a student who has not had a lot of experience typing on a keyboard may be at a disadvantage when taking a computer-based test. Or a student who struggles when looking at a computer for too long may not be able to complete a long computer-based examination.

I am interested to see how far computer-based testing will go. Will students soon be taking SAT exams as computer-based tests? And will it go so far that using paper and pencil on exams becomes a thing of the past, and all tests in college and high school are taken through computer software? I do see the advantages of computer-based testing, but I think it would be beneficial if students had a choice, especially right now, for students who have not necessarily grown up taking all exams and assessments with a keyboard and a computer screen in front of them.


http://media.johnwiley.com.au/product_data/excerpt/24/04708619/0470861924.pdf


You Can Lead a Horse to Water

I do a lot of commuting: from Churchville to Geneseo, Churchville to Victor, and Churchville to Greece, all on a regular basis. I recently realized that rather than suffering through Eminem & Rihanna's "Monster" for the 700th time (don't get me wrong, I liked this song the first 699 times), a better use of my time in the car is to listen to TED Talks.

Just today, I found two talks that apply to our course in TONS of different ways: Jennifer Golbeck’s “The curly fry conundrum: Why social media ‘likes’ say more than you might think,” and Lawrence Lessig’s (yep, he should sound familiar, and so should some of this video!) “Laws that choke creativity.”


I could write a post about either of these really interesting and relevant-to-340 videos, but I won’t. Instead, I will leave them here for others to “stumble upon,” particularly others who maybe haven’t found anything inspiring to write about in a while. The end of the semester is in sight, and many of my fellow bloggers have only posted once, or worse, not at all; hopefully these videos will help.

Are online skimming habits making us worse serious readers?

It seems appropriate, given the previous post about the ways in which technology helps or hinders our communication, to discuss how these new tools have also impacted the way we interpret the information we’re given. It’s nice to think that we as English majors can transition seamlessly between old and new media outlets, appreciating the feeling (and let’s not forget the smell!) of an actual tangible book while still keeping up to date with the new helpful technologies available to us. But the truth is that getting used to reading in the newer and more common formats, such as on a computer screen or smartphone, really can — and does — influence how we read “real books.”

In an article from Sunday’s Washington Post, Michael S. Rosenwald points out that our reading behavior with more serious texts has come to mimic our online, internet-surfing reading habits. One neuroscientist described this reading as “superficial” and said she worries that it is affecting us when we have to read with more in-depth processing. I’ve certainly noticed this in my own reading habits, and find it endlessly frustrating.

On the internet, we skim. We look for important words that are of interest to us and if we can’t find them, we click on to the next page. I know I’m not alone in this. In our class discussion today someone from the group working on the Walter Harding website talked about including things like a letter from Albert Einstein to give the audience a reason to be interested and stay focused, since the eye is so easily diverted on the internet. It’s true! If we don’t immediately find something that piques our interest, we move on.

When I have important readings for class that are online, I have to close all other tabs and even use the Readability add-on that Dr. Schacht showed us earlier in the semester just to keep myself from getting distracted. It’s like my brain automatically assumes that if I’m reading on a computer monitor it must not be important, so my eyes start looking for “clickables.” To quote from Rosenwald’s article, “The brain is the innocent bystander in this new world. It just reflects how we live.” Clearly our leisurely habits are sneaking into our serious work as well.

I encourage you to think about how you’ve experienced this just over the course of your time reading this blog post. You probably looked at the pictures, clicked the links to other websites (and maybe even other links on those sites), went to another tab to answer a Facebook message, and countless other things. I did all of that while writing the post too! Most of us are guilty of this habit, and that’s just what it is: a habit. We’re like little squirrels running around on the internet. Our focus is on one page until something more interesting (and not even necessarily better) comes along, at which point we leave our first focus entirely, sometimes struggling to remember how we got there in the first place. On one hand, it’s great that we have so much information readily available to us, and my guess is that there has to be a study out there somewhere regarding benefits of technology on our multitasking abilities! But when we’re so used to being bombarded with all of this, taking the time to slow down and isolate ourselves for a task without so many distractions can be a challenge.

Does Technology Help Us Communicate Better?

 “Are all these improvements in communication really helping us communicate?”

(Sex and the City, Season 4, Episode 6)

It was a typical afternoon. I was home for spring break a few weeks ago, when I decided to unceremoniously plop myself down on my couch and flip through the TV channels. As I was lazily deciding which show I should binge on, I happened to stop on Sex and the City right when the main protagonist, Carrie, said the quote above.

Carrie Bradshaw from Sex and the City

Right away (or right after the episode was over…) I knew I had to do a little research. When that quote was said, it was only 2001. That was thirteen years ago. When Carrie said that, she was debating whether or not to get an email address. She thought that was “too advanced” for her to handle.

What about today? Today, when we can text one another, FaceTime each other, Skype, Facebook chat, Tumblr, and so much more? If someone once thought that email was too "high-tech," then what about right now? Does technology truly help us communicate any better?

In his book, Stop Talking, Start Communicating, Geoffrey Tumlin says that "A tech-centered view of communication encourages us to expect too much from our devices and too little from each other." Yes, with all of our devices we can communicate more easily and faster. That's an obvious thing. But is it any better?

In a great CNN article, "We never talk any more: The problem with text messaging," Jeffrey Kluger states that "The telephone call is a dying institution. The number of text messages sent monthly in the U.S. exploded from 14 billion in 2000 to 188 billion in 2010." People, wherever one goes, are always looking down at their phones, instead of looking up. They are immersed in all of the phone's aspects (mostly texting), and to see a person actually talking on it is a rare sight nowadays.

We can easily read a message, a text, an email, but we don’t understand the emotion behind it. One can sincerely believe that a message sounds mean, while the author never intended that at all. Without always understanding a person’s tone, how then do we know what they actually are saying?

Geneseo anyone? JUST KIDDING/I do it too…

An easy counterargument to that could be reading a book. How is one supposed to know what the author's tone is without asking him or her? Yet that is usually a simple thing to figure out. We as English majors do that for everything and anything we read. However, that also could be because a book is longer than a text message and has phrases such as "he said with a vengeance" throughout. I personally don't know many people who narrate their own text messages.

But one cannot overlook the ways technology truly has helped us. In a Huffington Post article, Joel Gagne says, "(School) Districts benefit from embracing, rather than shying away from, technology. Districts can utilize various different technological platforms to engage their community and seek their input. By ensuring there are provocative topics and the need of feedback from the community it will ensure things are interesting. Readers like to know you are really interested in what their opinion is. Using technology can help bring your school community together." Technology also can help loved ones see pictures from a trip via Facebook, rather than having to wait months to meet up in person. It can help people living across the globe talk every single day without much cost. It can spread ideas so rapidly that in the blink of an eye a revolution of sorts is happening. Years ago this was never possible. And yet, today, it is.

Awwww

While I myself believe that all of our “improvements” aren’t making us communicate a whole lot better, that doesn’t mean I don’t find it easier. Instead of calling my mom to tell her something, I text her. If I see a new book out that I think my dad would enjoy, I email him, instead of calling him. It is easier, and it is faster, and I use my cellphone and laptop Every. Single. Day.

And, for better or for worse, I don’t plan on stopping.

WWMS: What Would Marx Say about Digital Commons

Perhaps this is just because I’m currently reading The Manifesto in Humanities, but with all this talk about the consequences for private property in the digital age, I was wondering what Marx would have to say about all of this. The answer I arrive at is vague and pretty unhelpful (like Marx himself on the whole), but I’ll get there in a minute.

Before all this talk of communism, John Locke wrote about the implications of property ownership as early as the 17th century. He writes in his Second Treatise of Government (based on my limited Humn knowledge) that property was originally defined by what you could hunt or gather for yourself without wasting.

John “Locke”: Because he was exercising his natural right to liberty…

This created a level of equality among people, because amassing enormous wealth would be physically taxing, and people stopped collecting things when their "natural" needs were met. Things like berries and meat spoil quickly, so it would make no sense to hoard them. The appearance of money and its triumph over the barter system changed the way people owned things. Now people could own things unequally, and theoretically amass unlimited amounts of wealth that lasts and accumulates. This sounds like the capitalist system we have today.

 Marx, of course, credits any social development throughout history to economics–essentially, the distribution of property. Engels is really the one who describes the property distribution between the upper-middle bourgeoisie class and the working proletariat. The workers are stuck in an endless cycle of poverty. Marx writes in his Manifesto that the typical system of “synthesis” that happens when “haves” and “have-nots” clash will not be possible when the bourgeoisie and proletariat of capitalism inevitably meet their end. Capitalism will finish, and some sort of revolution that is unimaginable will happen. Communism, or equal distribution of wealth, is the best way to stop putting band-aids on capitalism and urge on this “revolution.”

Blamed Capitalism before it was mainstream….

So, WWMS about the question of digital commons, or places online such as Digital Thoreau in which anyone with internet access can “own” something? How can anyone truly own something/the rights to something if digital sites are open-access?

I think the important thing to remember is that the nature of property distribution has changed as texts, ideas, images, etc. have moved online. Nowadays, an artist can't be assured for one second that she'll receive money for everything she has published; someone somewhere will undoubtedly have found a way to copy-paste or download or screenshot, etc., etc., etc., her work. There's a block in the money-centered, capitalistic flow of trade that people such as Scott Turow, Paul Aiken, and James Shapiro would argue discourages creativity and production.

BUT

This is where things get eerie, because Marx predicts the destruction of the means of production as a way to combat the overproduction of final-stage capitalism. The sheer volume of things produced on the web makes it a perfect example of capitalism in its final stages. There's overproduction, then an unwillingness/inability to pay on the part of consumers, and then a disincentive for producers to continue… producing.

Communal spaces on the web of course sound kind of communistic in that they equalize people as consumers. However, they're different from the material property and situation that Marx and Engels were so sure determine everything in the world. In fact, web property seems to me more similar to the berries and meat Locke spoke of. Web content doesn't really have an expiration date, but there's only so much you can download and read and listen to on a computer or in a day. And the amount that you download on your computer doesn't determine your wealth or material situation (unlike money). This is arbitrary property that falls not really under the supply/demand chain of capitalism, but more under the take-what-you-need-but-it-will-take-time-and-effort model of the hunter/gatherer system.

Of course, where it differs is that people have to produce online content, whereas deer produce venison for us (thanks, deer). So we still have the problem of production. But Marx would definitely say that this anxiety is the capitalist in all of us, who can't envision any other way of viewing the world except as a giant factory of creation. However, that still doesn't help us very much in finding pragmatic ways to encourage production in a communal world without guaranteed payback for your time and effort.

So I think Marx would look at the digital age and the way property has changed in nature and in distribution, shake his head, think of the end of capitalism, smile, and say, "I told you this was coming."

Algorithmic Criticism and the Humanities

In a characteristically lively and thoughtful post, Katie Allen looks at some articles about computer programs that automate the evaluation of student writing. She eloquently expresses a concern that many in the humanities, myself included, share about the use of machines to perform tasks that have traditionally relied on human judgment. “Those of us who study English do so because we recognize literature to be an art form, and because we believe in the power of language to give shape to the world,” she writes. A computer can run algorithms to analyze a piece of writing for length and variety of sentences, complexity of vocabulary, use of transitions, etc., but it still takes a trained human eye, and a thinking subject behind it capable of putting words in context, to recognize truth and beauty.
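To make this concrete, here is a minimal sketch (hypothetical, written only for illustration; not the algorithm of any actual grading product) of the kinds of surface features a program can count: sentence length and variety, vocabulary richness, and transition words. Notice that nothing in it knows what the writing means.

```python
import re
import statistics

# Hypothetical illustration: surface features an automated grader might count.
# Nothing here recognizes context, truth, or beauty.

TRANSITIONS = {"however", "therefore", "moreover", "furthermore", "consequently", "thus"}

def surface_features(text):
    # Naive sentence split on ., !, or ? -- good enough for a sketch.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    lengths = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]
    return {
        "sentence_count": len(sentences),
        "avg_sentence_length": statistics.mean(lengths) if lengths else 0,
        "sentence_length_variety": statistics.stdev(lengths) if len(lengths) > 1 else 0,
        "vocabulary_richness": len(set(words)) / len(words) if words else 0,  # type-token ratio
        "transition_words": sum(w in TRANSITIONS for w in words),
    }

print(surface_features("The whale sang. However, nobody listened; the sea was loud."))
```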

Yet if we’re right to be skeptical about the capacity of machines to substitute for human judgment, we might ask whether there is some other role that algorithms might play in the work of humanists.

This is the question that Stephen Ramsay asks in his chapter of Reading Machines titled “An Algorithmic Criticism.”

Katie’s post makes Ramsay sound rather like he’s on the side of the robo-graders. She writes that he “favors a black-and-white approach to viewing literature that I have never experienced until this class… . [He] suggests we begin looking at our beloved literature based on nothing but the cold, hard, quantitative facts.”

In fact, though, Katie has an ally in Ramsay. Here is what he says about the difference, not between machines and humans, but more broadly between the aims and methods of science and those of the humanities:

… science differs significantly from the humanities in that it seeks singular answers to the problems under discussion. However far ranging a scientific debate might be, however varied the interpretations offered, the assumption remains that there is a singular answer (or set of answers) to the question at hand. Literary criticism has no such assumption. In the humanities the fecundity of any particular discussion is often judged precisely by the degree to which it offers ramified solutions to the problem at hand. We are not trying to solve [Virginia] Woolf. We are trying to ensure that the discussion of [Woolf’s novel] The Waves continues.

Critics often use the word “pattern” to describe what they’re putting forth, and that word aptly connotes the fundamental nature of the data upon which literary insight relies. The understanding promised by the critical act arises not from a presentation of facts, but from the elaboration of a gestalt, and it rightfully includes the vague reference, the conjectured similitude, the ironic twist, and the dramatic turn. In the spirit of inventio, the critic freely employs the rhetorical tactics of conjecture — not so that a given matter might be definitely settled, but in order that the matter might become richer, deeper, and ever more complicated. The proper response to the conundrum posed by [the literary critic George] Steiner’s “redemptive worldview” is not the scientific imperative toward verification and falsification, but the humanistic propensity toward disagreement and elaboration.

This distinction — which insists, as Katie does, that work in the humanities requires powers and dispositions that machines don’t possess and can’t appreciate (insight, irony) — provides the background for Ramsay’s attempt to sketch out the value of an “algorithmic criticism” for humanists. Science seeks results that can be experimentally “verified” or “falsified.” The humanities seek to keep a certain kind of conversation going.

We might add that science seeks to explain what is given by the world through the discovery of regular laws that govern that world, whereas the humanities seek to explain what it is like to be, and what it means to be, human in that world — as well as what humans themselves have added to it. To perform its job, science must do everything in its power to transcend the limits of human perspective; for the humanities, that perspective is unavoidable. As the philosopher Charles Taylor has put it, humans are “self-interpreting animals” — we are who we are partly in virtue of how we see ourselves. It would be pointless for us to understand what matters to us as humans from some neutral vantage outside the frame of human subjectivity and human concerns — “pointless” in the sense of “futile,” but also in the sense of “beside the point.” Sharpening our view of things from this vantage is precisely what the humanist is trying to do. If you tried to sharpen the view without simultaneously inhabiting it, you would have no way to gauge your own success.

The gray areas that are the inevitable territory of the English major, and in which Katie, as an exemplary English major, is happy to live, are — Ramsay is saying — the result of just this difference between science and the humanities. As a humanist himself, he’s happy there, too. He’s not suggesting that the humanities should take a black-and-white approach to literature. On the contrary, he insists repeatedly that texts contain no “cold, hard facts” because everything we see in them we see from some human viewpoint, from within some frame of reference; in fact, from within multiple, overlapping frames of reference.

Ramsay also warns repeatedly against the mistake of supposing that one could ever follow the methods of science to arrive at “verifiable” and “falsifiable” answers to the questions that literary criticism cares about.

What he does suggest, however, is that precisely because literary critics cast their explanations in terms of “patterns” rather than “laws,” the computer’s ability to execute certain kinds of algorithms and perform certain kinds of counting makes it ideally suited, in certain circumstances, to aid the critic in her or his task. “Patterns” of a certain kind are just what computers are good at turning up.

“Any reading of a text that is not a recapitulation of that text relies on a heuristic of radical transformation,” Ramsay writes. If your interpretation of Hamlet is to be anything other than a mere repetition of the words of Hamlet, it must re-cast Shakespeare’s play in other words. From that moment, it is no longer Hamlet, but from that moment, and not until that moment, understanding Hamlet becomes possible. “The critic who endeavors to put forth a ‘reading’ puts forth not the text, but a new text in which the data has been paraphrased, elaborated, selected, truncated, and transduced.”

There are many ways to do this. Ramsay’s point is merely that computers give us some new ones, and that the “radical transformation” produced by, for example, analyzing linguistic patterns in Woolf’s The Waves may take the conversation about the novel in some heretofore unexpected, and, at least for the moment, fruitful direction, making it richer, deeper, more complicated.
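As a rough illustration of what this kind of algorithmic "transformation" can look like in practice (a hypothetical sketch, not Ramsay's actual procedure for The Waves), a few lines of code can surface which words are disproportionately frequent in one body of text relative to another, handing the critic a pattern to argue about:

```python
from collections import Counter
import re

# Hypothetical sketch: words disproportionately frequent in one text versus another.
# The output is not an interpretation; it is raw material for one.

def word_counts(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def distinctive_words(text_a, text_b, top=10):
    a, b = word_counts(text_a), word_counts(text_b)
    total_a, total_b = sum(a.values()), sum(b.values())
    scores = {}
    for word, count in a.items():
        freq_a = count / total_a
        freq_b = (b[word] + 1) / (total_b + 1)  # smoothing so unseen words don't divide by zero
        scores[word] = freq_a / freq_b
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Usage: pass in, say, the monologues of two different speakers in The Waves
# and see which words set one apart from the other.
# print(distinctive_words(speaker_one_text, speaker_two_text))
```

What to make of the resulting list, whether it points somewhere interesting or nowhere at all, remains the critic's job.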

At a time when those of us in the humanities justly feel that what we do is undervalued in the culture at large, while what scientists do is reflexively celebrated (even as it is often poorly understood), there are, I believe, two mistakes we can make.

One is the mistake that Ramsay mentions: trying to make the humanities scientific, in the vain hope that doing so will persuade others to view what we do as important, useful, “practical.” (Katie identifies a version of this mistake in the presumption that robo-grading can provide a more “accurate” — that is, more scientific — assessment of students’ writing skills than humans can.)

But the other mistake would be to take up a defensive posture toward science, to treat the methods and aims of science as so utterly alien, if not hostile, to the humanities that we should guard ourselves against contamination by them and, whenever possible, proclaim from the rooftops our superiority to them. Katie doesn’t do this, but there are some in the humanities who do.

In a recent blogpost on The New Anti-Intellectualism, Andrew Piper calls out those humanists who seem to believe that “the world can be neatly partitioned into two kinds of thought, scientific and humanistic, quantitative and qualitative, remaking the history of ideas in the image of C.P. Snow’s two cultures.” It’s wrongheaded, he argues, to suppose that “Quantity is OK as long as it doesn’t touch those quintessentially human practices of art, culture, value, and meaning.”

Piper worries that “quantification today is tarnished with a host of evils. It is seen as a source of intellectual isolation (when academics use numbers they are alienating themselves from the public); a moral danger (when academics use numbers to understand things that shouldn’t be quantified they threaten to undo what matters most); and finally, quantification is just irrelevant.”

That view of quantification is dangerous and unfortunate, I think, not only because we need quantitative methods to help us make sense of such issues of pressing human concern as climate change, but also because artists themselves measure sound, syllable, and space to take the measure of humanity and nature.

As Piper points out, “Quantity is part of that drama” of our quest for meaning about matters of human concern, of our deeply human “need to know ‘why.’”

Admin’s note: This post has been updated since its original appearance.

Parenti, Lessig, and cute animals

Reading Lawrence Lessig’s “Free Culture” reminds me of a book I had to read for a high school global history class: “The Assassination of Julius Caesar: A People’s History of Ancient Rome” by Michael Parenti.

Parenti, a Yale grad and "cultural critic" (Wikipedia's words), argues in his book that history has really done a number on poor Caesar, who was not, in fact, assassinated because he was abusing power and ignoring the needs of his constituents.

Since this post does not lend itself to images, treat yourself to some adorable animal pictures.

A few chapters are eloquent laundry lists of all the great things Caesar did for Rome, like creating the Julian calendar (a variation of which we still use today) and working to relieve poverty among the very plebs he was accused of mistreating; other chapters debunk common misconceptions 'traditional history' has fed us. A 2004 book review from Parenti's website summarizes his thesis: "In The Assassination of Julius Caesar, the distinguished author Michael Parenti subjects these assertions of 'gentlemen historians' to a bracing critique, and presents us with a compelling story of popular resistance against entrenched power and wealth. Parenti shows that Caesar was only the last in a line of reformers, dating back across the better part of a century, who were murdered by opulent conservatives."

His name is Lionel and she rescued him from a slaughterhouse when he was a calf. True story.

I disliked the book from the first few pages because of Parenti's smug attitude. He seems to think that he is pulling the wool off our eyes and showing us a hidden truth, when in reality he is simply proposing a theory contrary to the ones in our boilerplate high school textbooks. Responsible readers will identify this bias and take his argument with a grain of salt, but I can easily see a less careful reader thinking that he now understands Ancient Rome better than his friends because he knows 'the truth.' The textbooks' version of why Caesar was assassinated and Parenti's are both rooted in facts; it's just that each one gussies up its argument in a different way, puts those facts in a different order, foregrounds different information, and flat-out omits what doesn't suit the thesis.

I promise, I'm circling back to Lessig now. In reading the introduction and first few chapters of "Free Culture," I was getting strong Parenti vibes. Just like Parenti's, Lessig's argument is opposed to the one that contemporary culture furnishes us with.

Elephants are highly emotional creatures, and are one of the only mammals besides us who mourn their dead.

Most people believe it's important to protect intellectual property, whereas Lessig dramatically states, "Ours was a free culture. It is becoming less so" (30). There's nothing wrong with taking the counter view, but I am skeptical of an argument that stands on completely disproving another position rather than generating genuine ideas that may or may not line up with prevailing theories. That sounded pretentious and confusing. I just mean that I sense a little rebellious flair in Lessig's writing, like he's excited to tear down the mistakes our culture has made.

This guy gets it

Lessig is doing the Socrates thing, where you ask little questions that people agree with ("Isn't it silly to sue Girl Scouts for singing copyrighted songs around a campfire?" "Don't scientists build off each other's work all the time?") until you've led them to a conclusion miles away from where they started. Think about what he's saying: protecting intellectual property is not only illogical, but is changing our culture for the worse. Yet every one of us has created something that we are proud of, sometimes even defensively proud of. Can you imagine another person or corporation taking credit for it? As someone who has been plagiarized, I can tell you that it's more gut-wrenching than you'd think. I do not think it is such an evil thing to get credit for your hard work. Just because some inventing happens in the mind rather than in a workshop doesn't mean we should privilege the protection of one kind over another.

The photographer is named Brian Skerry. He was interviewed about this photo and said that the bowhead whale was calm, curious, and had not one iota of aggression as it approached his partner. After this photo, the whale swam on for a while, Skerry and his partner following and snapping pictures. When Skerry had to stop to catch his breath after 20 minutes, he was thrilled to have had such a successful day and assumed that was all he would get. But the whale actually stopped and waited for him. Oh my God I’m tearing up, isn’t that beautiful?!

But I am getting ahead of myself a little bit, because to be honest, I'm not even sure that I understand Lessig's argument completely. I probably shouldn't be criticizing him like this until I've read the whole book, I admit. From what I've gotten through, though, I can say that I find his argument convincing only in small chunks, but kind of incoherent in the big picture. Lessig adores historical anecdotes. Each chapter contains several very interesting stories about how Joe What'shisnose got ripped off by a big corporation or how Jane Blah was only able to create the world's greatest whatever because she used someone else's idea. I really liked all of these examples, especially the explanation of Japanese 'copycat' comics. The problem was that I had trouble connecting them. Lessig tells us that his book is "about an effect of the Internet beyond the Internet itself: an effect upon how culture is made. […] The Internet has induced an important and unrecognized change in that process" (7) and that his goal is "to understand a hopelessly destructive war inspired by the technologies of the Internet but reaching far beyond its code" (11). Honestly, that's the kind of thesis that I would circle at the Writing Center and say, "You have a really interesting idea here, but the thesis is supposed to be the roadmap to the rest of your paper. You need to be more specific." Saying that you want to talk about how the Internet has changed culture and how there is conflict surrounding technology tells me very little about what I as a critical reader am supposed to be looking for.

Over 10,000 pitbulls have been euthanized due to breed discriminatory legislation in cities. Happy, loving family pets like this fella have been persecuted just because of unfair stereotypes. It’s dog racism. But look at him! Just, look!

Yikes, this is getting wordy. My point is that some of Lessig's anecdotes seem to cast the people who lost their intellectual property in a sympathetic light (like the first story about poor Edwin, who committed suicide over his idea being stolen), while others underscore the importance of overriding property rights if we ever want to advance as a society (the Kodak episode). I'm pretty confident that he is arguing against strict copyright laws on the Internet, but if I wasn't reading his book in the context of this class, I might be less certain.

He also pulls a Parenti every now and then and throws out a statement in support of his argument that is just totally ridiculous. Lessig honestly thinks that "we, the most powerful democracy in the world, have developed a strong norm against talking about politics" (42)? Really? He backs this up by noting that we are discouraged from discussing politics because they are too controversial and can lead to rudeness, but as a card-carrying American, I can say that the thought of offending someone has never stopped me from saying anything. He cannot really try to get us on board with the idea that our society stifles political dialogue.

This is Tillie. I have been blessed to call her my best friend for 7 happy years and counting!

All in all, I have not found this reading unpleasant. I like his writing style and, like I said, his anecdotes are very captivating. I just wish he had a little more direction, a little less sass, and a smidge of common sense.

You’re a champ if you stuck it through the whole thing. Hope the animal pictures helped.

Can Creativity be Programmed?

I was roaming the internet a few days ago and I came across this article.

http://news.bbc.co.uk/2/hi/programmes/click_online/9764416.stm

To summarize: the article reveals that robots are now actually capable of writing books. At the moment, they are just writing about pre-existing scientific or mathematical theories and laws, and have even occasionally dabbled in love letters. However, the article's biggest point is the question of whether or not a robot could actually write a fictional novel, and even win a Pulitzer for it.

Personally, I don’t think that is a valid question at this point.

Robots are still created and manufactured by humans, and their capabilities are clearly laid out by their creators within their computer code. At this point in time, there is no way to instruct something to be creative and innovative; being instructed kills the whole point of creativity. To be able to properly write a work of fiction, you need to arrive at the idea through a combination of experience and imagination, something that machines don't necessarily have right now. Robots simply cannot sit down and think about what would be an interesting story to write, because the code for that simply does not exist.
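To illustrate what "laid out within their computer code" means, here is a deliberately trivial, hypothetical sketch: a program that "writes" sentences, but can only ever recombine the pieces its human author handed it. Everything it will ever say is already fixed in these three lists.

```python
import random

# Deliberately trivial, hypothetical example: a "story-writing" program whose
# entire creative range was decided in advance by its human author.

SUBJECTS = ["The robot", "The novelist", "The machine"]
VERBS = ["imagines", "assembles", "recites"]
OBJECTS = ["a love letter", "a familiar theorem", "someone else's plot"]

def write_sentence():
    # Every possible output is a combination of the items above -- nothing more.
    return f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}."

for _ in range(3):
    print(write_sentence())
```

A real text-generating system is vastly more elaborate, but the point stands: the range of what it can produce is set by what its makers put into it.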

However, if this technology ever does come to exist, I think the question is less about whether a machine can write a novel and more about whether the ability to create a novel implies that, at some level, these robots have an element of humanity in them. Does having the ability to be creative make the machine part of the human psyche, and can that ever be achieved? For me, writing has been a way to express thoughts, feelings, and emotions, as well as to take ownership of a world that I have built, which inspires me to continue to write and create new stories that can be shared. Can a machine ever find this same level of joy, or will it create and be creative simply because it has to? And what would a robot author mean for the future of storytelling? Will it be another aspect of competition, or a stigma on the literary world? Will it be a boon, or will it only cause a new level of literary elitism?