Revising Nature into Language: An Analysis of Solitude through Time

Group Members: Hannah Fahy, Hannah Jewell, Kyle Regan, Leila Sassouni, Jaffre Aether

The first major decision our group made was to determine which pages we wanted to review. We chose the first pages of “Solitude,” as the chapter resonated with many of us and contained relevant passages on the grand themes we enjoyed throughout the work, namely the imperfections of language in the face of true sensory experience. Language and nature are set up as two sides of a whole, and the gap between them must be confronted when trying to write about the sublimity of the outdoors and about how we, as social creatures bathed in language, can find an authentic way to access this unremembered sphere. However, it was in working with the manuscript pages that we faced the most obstacles.

The process of combing through the handwritten Walden was an especially difficult task for our group, as one of our group members noted that the version we were looking at (on the spreadsheet Dr. Schacht shared) was out of order. Because of that, it was quite difficult to locate the pages we were working from. This challenge did not stop our work, though, as we were able to develop our analysis through the versioning machine. While not entirely in the spirit of the analysis, the versioning machine was incredibly helpful, as it let us visualize the changes Thoreau was enacting. Moreover, the versioning machine makes the changes explicit with highlights and cross-outs that would not have been as obvious in comparing the actual physical manuscript pages. We struggled, too, in reading cursive, so seeing the work in print was very helpful. Because of the versioning machine, the analysis went smoothly, and connecting Thoreau’s revisions to the grander themes within the work was a painless task. The work of creating the timeline was simple as well. In going through many of the manuscript pages, a group member realized how to find the manuscript images for the “Solitude” chapter: the images repeatedly had page numbers written in blue, and the pages numbered 202 through 218 contained writing from the chapter. Once these page numbers were discovered, it became simpler to find parts of the chapter. The next step was to use Ctrl + F to search for strings of words that were discernible in the manuscript, to check whether they did in fact appear in the chapter.
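
In rough terms, that search step amounts to the sketch below; the file name is a placeholder rather than one of our actual project files, and the phrase is simply an example line from the chapter.

```python
# A rough sketch of our Ctrl + F step: given a phrase we could decipher from a
# manuscript image, check whether it appears in a plain-text copy of "Solitude".
# The file name is a placeholder, not one of our actual project files.

def phrase_in_chapter(phrase: str, chapter_path: str = "solitude.txt") -> bool:
    """Return True if the deciphered phrase appears in the chapter text."""
    with open(chapter_path, encoding="utf-8") as f:
        chapter = f.read().lower()
    # Collapse whitespace so line breaks in the file do not hide a match.
    chapter = " ".join(chapter.split())
    return " ".join(phrase.lower().split()) in chapter


if __name__ == "__main__":
    # A line that does appear in "Solitude," used here only as an example.
    print(phrase_in_chapter("I have a great deal of company in my house"))
```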

As for building the TimelineJS spreadsheet, it was not too difficult to create, since most of us were familiar with the platform from a previous assignment. The template made things simple, as everything we needed to include was clearly marked. Finding the dates, too, was as simple as looking them up. Once again, though, the most difficult aspect of creating the timeline itself was finding the manuscript images, as the page order was not linear in the way the final work was, and legibility was at a minimum (for a group not used to cursive). The search improved considerably when we began to use the Huntington Library’s manuscript, as those files were not heavily pixelated and did not take long to load on each of our computers. Other than that, creating the timeline of “Solitude” was a smooth and fun process.
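
For a sense of what the spreadsheet work looks like, a single TimelineJS row carries something like the following fields, written out here as a Python dict. The field names follow the standard template columns; the values are illustrative rather than rows from our actual timeline.

```python
# A sketch of one row of a TimelineJS spreadsheet, written out as a Python dict.
# The field names follow the standard template columns; the values below are
# illustrative rather than entries from our actual timeline.

sample_row = {
    "Year": 1854,
    "Month": 8,
    "Day": 9,
    "Headline": "Walden is published",
    "Text": "A sentence or two situating this moment in the revision history.",
    "Media": "https://example.org/manuscript-page-202.jpg",  # placeholder image link
    "Media Credit": "The Huntington Library",
    "Media Caption": "Manuscript page from the 'Solitude' chapter",
    "Group": "Solitude",
}

# One such row per event is all the template needs to render the timeline.
print(sample_row["Headline"])
```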

The encoding of the chapter’s first page into TEI format was also relatively painless. One group member who felt comfortable with the TEI framework took on the task, and the process went well. With experience in TEI from the modules on Canvas, as well as the provided template, coding the page was mostly a matter of using the template to figure out how to encode the changed parts of “Solitude” into the file. However, the coding was, again, not done from the manuscript page itself but from the versioning machine, as it took our group some time to find the manuscript page, and we knew we had to keep working while that search was ongoing. This practice did create a sort of logjam: the lines of “Solitude” in the versioning machine did not match up with the lines of “Solitude” on the manuscript page, which meant that the TEI file could not be completed until we found the manuscript page. Once we did, the coding could be completed. The work goes much more smoothly when using the versioning machine and the manuscript page in tandem, as the machine is eminently readable and the manuscript page contains the lines that allow the TEI file to be rightly ordered.
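
As a rough illustration of what the encoding involves, a single revision might be marked up along the lines of the fragment below, written here as a Python string so it can be checked for well-formedness. The sentence is the opening of the chapter, but the particular deletion and addition shown, and the attribute values, are invented for illustration rather than copied from our TEI file.

```python
# A minimal sketch of how a single revision might be marked up in TEI, with
# <del> and <add> recording the manuscript change. The sentence is the opening
# of "Solitude," but the particular deletion/addition and the attribute values
# are invented for illustration, not copied from our TEI file.
import xml.etree.ElementTree as ET

sample_tei = """
<p>This is a delicious evening, when the whole body is one sense,
and imbibes delight through every
<del rend="strikethrough">sense</del>
<add place="above">pore</add>.</p>
"""

# Parsing the fragment confirms the markup is well-formed before it is folded
# into the larger file built from the course template.
element = ET.fromstring(sample_tei)
print(element.tag)  # -> p
```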

The last meaningful piece of the process to address is our group’s communication. For the most part, we communicated through Slack and Zoom. At first, I thought this might make the project more difficult or frustrating, as there would be no set class time to work on it, but it actually went quite smoothly. We were able to talk about the smaller things in the Slack chat, mainly asking questions and sharing pertinent links, and when we needed to discuss larger issues, mainly the project itself and delegating work, we met in a Zoom call. The Zoom calls were effective not only because we could talk and share ideas quickly, but also because they created a time in which we could all gather and work on the project together. Because of that, discussion was only one facet of each call, with the other major facet being a sort of ‘study hall’ in which we knew we would get a significant amount of work done. So overall, communication was not a problem for this project, which I sincerely thought it might be at the outset.

With all that said, this project was interesting to do, as it was the synthesis of everything we had been working on. It was slightly tricky as well, since a good deal of what we used within this project was learned during our period of distance learning, not only the technical tools (TimelineJS and TEI coding) but also the reading and analysis of the revisions Thoreau made to Walden. It still worked out, and I believe the final project is something meaningful, but it inspired a bit more anxiety than normal, as we were not able to test our ideas out with Dr. Schacht in real time. However, working without that kind of safety net was a worthwhile experience, even if the project may have turned out differently because of it. Either way, the group is happy with how the project turned out, as the analysis proved intriguing and our technical skills were sharpened.

Writing with Machine Reading

In working with Voyant Tools, I cannot help but reflect on how machine reading can also make us better (or maybe not better, but different) writers. To start exploring how I feel about writing in conjunction with reading, I want to begin with my personal experience using Voyant, separate from class. My favorite book is Moby Dick by Herman Melville, and a large part of why I like Moby Dick is Melville’s prose. So naturally, after seeing Voyant run through Walden and categorize it by most used words, average sentence length, and so on, I wanted to see the same for Moby Dick. It gave me the same kinds of data it had given for Walden, and I stared at the output blankly for some time, thinking of how this could be useful. Looking back at the Hayles reading, I came across this quote where she says, “On the other hand, machine reading may reveal patterns overlooked in close reading” (20). Close reading allows a deeper view of the thematic elements at play within whatever we are reading, but machine reading could allow us a deeper view into the inner workings of the sentences that construct the aesthetic patina. This is where I see a real affordance in machine reading’s contribution to our ability to write differently. I can analyze Moby Dick using Voyant, find the words that are used most, and begin to use those same words in my writing, effectively starting to build up the same aesthetic mood Melville constructs.
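
For anyone curious what this counting looks like under the hood, the sketch below roughly approximates two of the measures Voyant reports (most frequent words and average sentence length). The file name is a placeholder, and this plain word count does not reproduce Voyant’s stop-word list or exact tokenization.

```python
# A rough approximation of two measures Voyant reports: most frequent words and
# average sentence length. The file name is a placeholder, and this sketch does
# not reproduce Voyant's stop-word list or exact tokenization.
import re
from collections import Counter

def frequent_words_and_avg_sentence_length(path: str, top: int = 10):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_len = len(words) / len(sentences) if sentences else 0.0
    return Counter(words).most_common(top), avg_len

if __name__ == "__main__":
    top_words, avg_len = frequent_words_and_avg_sentence_length("moby_dick.txt")
    print(top_words)
    print(f"average sentence length: {avg_len:.1f} words")
```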

However, there are limits to this kind of use. First, Voyant does not have the capabilities (or I don’t know how to use the program to its fullest extent) to enact the kind of immersion that I want. I cannot see how sentences are structured in intricate detail: how many clauses there are, or how sentence length varies, i.e., whether long sentences are generally followed by short ones or whether short and long sentences are used only in specific contexts following specific words. Moreover, there is a limitation in the program’s ability to read how passive and active voice are deployed. Extending beyond these technological limitations (or again, personal ones), this type of writing seems to me to be either for the student or the satirist. For the student, I think of the story that Hunter S. Thompson typed out The Great Gatsby to learn what great writing feels like. Now, instead of having to merely type out a story, you could view the data from Voyant and imitate the style with your own story. It affords a level of creativity while still feeling like you are under the tutelage of a canonized writer. For the satirist (or postmodernist, maybe), the tool affords a new depth to the writing of the people you want to satirize. But for someone who eventually wants to create a truly original work, the tools of Voyant will become less and less useful.

The last thing I want to touch on is a website that takes up some of the ideas I have been tracking: one that ‘reads’ your writing so that it may compare you to an author (for those interested, the website is iwl.me). The website also brings up Harold Bloom’s The Anxiety of Influence for me, in which his main theory is that for authors to rise above their influences, they must misread or misinterpret their predecessors. By using a website like iwl.me (which is, right now, a poor iteration of what I would like), one could potentially track the aesthetic framework from which one has drawn the most influence. The potential writer could quickly pinpoint who they sound the most like and purposefully subvert their own language, as well as their predecessor’s, so as not only to have different ideas than their influences but to sound different as well.

Machine reading can not just help us understand texts better; it can help us write better as well. It allows us to run our favorite texts, and the authors who influence us the most, through text analysis tools so that we can see how they write and borrow the broad linguistic patterns we like. And with tools that compare our writing to established authors (a use of text analysis that would broadly not be for the student or the satirist), we could find out who we write like and subvert them by seeing how our writing relies on theirs. This may seem contradictory to my view of text analysis as edifying, but it is well known that we have to stand on the shoulders of giants first to create truly great works.

A Language of Expression and Action: Speech Acts and Coding

I ran into the concept of speech-acts last semester, and while I found the thought interesting, I did not find it relevant at the time. But I find the concept revived once more as our class begins writing in Markdown, HTML, and CSS. Here, we have languages that are, as far as I can tell, composed entirely of speech-acts, or of a certain kind of speech-act. Each word and phrase either builds something or enacts an operation on the computer. Since each word and phrase creates action, coding becomes the language of command, of speech-acts, and of properly learning the scripts of command. In connection to the humanities, then, our day-to-day language becomes, in contrast, a language of expression. The question, to me, then becomes: how do these languages interact with each other? And how do we properly theorize a language of action?

The first chapter of James Gleick’s The Information provides the best resource for exploring this thought, as the chapter describes code (Morse code, specifically, but the thought is broader) as a means of ‘bootstrapping’ meaning onto symbols. Gleick reflects that “Morse had bootstrapped his system from a middle symbolic layer, the written alphabet, intermediate between speech and final code” (29). I think this quote is useful because the sentiment about Morse code is analogous to the relationship between code as action and language as expression. Code (in the programming sense) is mediated through language, but unlike Morse code, which aimed for streamlined expression, programming code aims for streamlined action. The issue most naturally raised here is that streamlined expression is more perplexing than streamlined action, because expression converts into action through peer-to-peer relationships, whereas streamlined action is only completely operative in a human-computer relationship. Thus, it might seem that the humanities would have nothing to say about this utilitarian language based around a relationship between a user and a tool. Yet I disagree with this notion, because we have, like Morse, used the alphabet to create a language composed entirely of speech-acts that will be carried out when the right rules are followed. This is the key sentiment: our most effective tool is filtered through everyday language. And there are two important meeting points of the humanities and coding because of that. The first is the intersection between programming and rhetoric, and the second is a language of streamlined command serving as the foundation of our most used space of expression.
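
To make that “middle symbolic layer” concrete before moving on, here is a tiny sketch of the bootstrapping Gleick describes: speech is first fixed in written letters, and only then do the letters map onto dots and dashes. Only a few entries of the Morse table are included, enough to encode one word.

```python
# A tiny illustration of the "middle symbolic layer": speech is first fixed in
# written letters, and only then do the letters map onto dots and dashes. Only
# a few entries of the Morse table are included, enough to encode one word.
MORSE = {
    "w": ".--", "a": ".-", "l": ".-..",
    "d": "-..", "e": ".", "n": "-.",
}

def to_morse(word: str) -> str:
    """Encode a word letter by letter, joining the codes with spaces."""
    return " ".join(MORSE[ch] for ch in word.lower())

print(to_morse("walden"))  # -> .-- .- .-.. -.. . -.
```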

It is this commonality of symbols that paves the way for coding to form a relationship with the humanities, and more specifically with rhetoric. I find rhetoric to be the most ample ground in the humanities because it is the discipline with teleological aims. Rhetoric itself is not precisely a speech act, as the flow of rhetoric often spans many statements and ideas, but its goal can be viewed as inspiring a belief or an action within the listener. In relation to coding, there is a natural link in the way language is used as influence (rhetoric’s domain) and as command (programming’s domain). Naturally, rhetoricians interface with a person or a section of a polis whereas programmers interact with computers, but the complexities of this distinction are outside the scope of my post. I want to focus on the feeling that comes with having access to a language that allows for the consistent and perfect use of speech-acts and commands.

I want to relate this back to Gleick’s recounting of Morse code and the African drummers, for there is a point at which his statement on expression mirrors that of action. Of the drummers, Gleick states that “The extra drumbeats, far from being extraneous, provide context. Every ambiguous word begins in a cloud of possible alternative interpretations; then the unwanted possibilities evaporate” (32). Morse code needed to communicate in as few words as possible to save money, but I think this implicit utilitarian factor culls the capacity for expression. In contrast, the ‘translations’ of the drum beats felt far more poetic and evocative. However, the streamlined language of action (programming) loses none of its power, and in some sense gains more through its straightforwardness. Clearly, there is a difference in the words necessary when attempting to be properly expressive and when attempting to be properly active. And yet this streamlined language of action builds the framework of our most used platform of expression. I cannot remember another time when our most common mode of expression was built on a language of pure action. I cannot help but feel as if that framework resonates upwards and affects the way in which we discourse as well.

I would like to conclude with a short word on the concept of scripts that I mentioned. There is always the thought that human interaction has a ‘script,’ so to speak, or rather, that there is a combination of words I could say to someone that would allow me to ‘access’ or ‘influence’ them effectively. This is a flawed conception, and I believe it blocks attempts at cooperative dialogue. But now our world is run on a set of programs operating through exactly this kind of script. I think this is the enigmatic piece when attempting to theorize the relationship between the humanities and computing. The humanities do not have a language of action, whereas computing does. This point raises a wariness in me, then: this gift of a language of pure action to creatures of pure expression seems perverse. It is possible to write off this entire thought as an over-extrapolation of a user/tool dynamic, but computing (in modern times, I mean, remembering the 1s and 0s underneath) is the first tool to use language so powerfully. I suppose I would like to end with a quote I found in Walden that fits what I am attempting to convey. Thoreau says of our way of living, “It would be well perhaps if we were to spend more of our days and nights without any obstruction between us and the celestial bodies, if the poet did not speak so much from under roof, or the saint dwell there so long. Birds do not sing in caves, nor do doves cherish their innocence in dovecots” (44).