Stephen Ramsay

The Digital Naif

I got into digital humanities for the money.

That will come as a dispiriting revelation to scholars like Alan Liu, who have accused digital humanists of failing to reflect on the ways in which the digital humanities “advances, channels, or resists today’s great postindustrial, neoliberal, corporate, and global flows of information-cum-capital.” [Liu] Yet it is a simple fact of my intellectual life that I got into digital humanities because I was broke.

I was in graduate school during the 1990s and, like all graduate students, had submitted to that voluntary (and highly privileged) poverty that goes along with the pursuit of a Ph.D. I was at the University of Virginia, and a job was posted for a part-time position at the Electronic Text Center. That seemed preferable to my other avocation (as a security guard). What I didn’t realize was that I was walking into the World Wide Web.

The Electronic Text Center—now moribund as an organization, but fully alive in the collection that resulted—isn’t mentioned as often as it might be in the casual and conflicting histories we tell about the rise of dh, but it had a profound effect on my own development as a scholar. Its brief could not have been simpler: create a searchable digital archive of . . . well . . . everything. As with the earlier, and more famous, Project Gutenberg, the “what” of it all seemed far less important than the “why.” The crucial thing was to get as much of the written cultural record of humanity online as quickly as possible.

The Electronic Text Center (“Etext,” as we called it) thought of itself as a more mature and expert version of Project Gutenberg. Michael Hart—the man who had essentially invented the ebook—had chosen editions and recensions willy-nilly. Etext would accession its texts according to the best rules of textual criticism, with particular emphasis on provenance. Both Hart and our group were intending, like Joseph, to open the granaries to those starved for the artifacts of human culture. Anyone with a computer could access anything from Aeschylus to Zweig (and we were confident that that would one day mean everyone). We, with more than a hint of self-satisfaction, were opening the doors a bit more carefully.

It would be hard to overstate the giddy naiveté that permeated our efforts. Hart’s decision to name his project after the inventor of the printing press says it all, or most of it. We were standing at a unique pivot point in world history. Before, you would need to travel to the British Library and present your academic credentials (and your damn good reason) for examining the Beowulf manuscript—a text for which we have precisely one extant copy. Now anyone could read it! One of my earliest memories of contact with this new world is of sitting next to a senior medievalist as we waited—endlessly—for a high-resolution copy of a page of Piers Plowman to appear on a computer screen. Once we had it, we both thrilled to the way in which we could drill down to the pockmarks on the vellum. A bit dumbfounded, I remarked that it was almost as good as the real thing. “Stephen,” he said with resigned pleasure, “it’s better than the real thing.”

How different this bright-eyed world was from the English Department just on the other side of Jefferson’s Academical Village. There, I was being trained in the hermeneutics of suspicion—trained to look for the outworkings of power in human cultural artifacts exactly as Liu would suggest. But here in the digital scriptorium, that prime directive of the contemporary scholar was held in strange abeyance. No one, including me, was attempting to flee the theoretical or the critical. We could easily have made the right noises about suspicion and contingency to anyone who asked. But in practice, we were as wide-eyed as startup founders embarking on a project that would Change Everything.

The actual job of Changing Everything, it must be said, was dry as dust. Essentially, we were engaged in that slightly exalted form of data entry known as SGML tagging—placing angle brackets around paragraphs, dialogue, quotations, and as many other structural features of a text as time would permit. At some point, the job of Assistant Director of the Etext Center opened up, and I committed the ultimate heresy of taking a full-time job while still in graduate school. The email I received from the graduate chair strongly resembled that of an abbot trying to discourage a contemplative monk from pursuing the active life of a wandering mendicant; I would hardly be fulfilled, it implied, by forsaking Derrida for data. But I liked the work, and I had suitably absurd notions about my ability to complete a dissertation quickly while working full time.
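
For the uninitiated, that tagging looked roughly like the following: a made-up fragment in the style of the SGML we used, with element names chosen for illustration rather than drawn from any actual document type definition.

    <!-- Illustrative only; not a fragment of any real Etext Center file. -->
    <div1 type="chapter" n="1">
      <p>The opening paragraph of the chapter, keyed or scanned from a
        carefully chosen print edition ...</p>
      <p><q>A line of dialogue,</q> said the speaker, <q>tagged as a
        quotation inside its paragraph.</q></p>
    </div1>

That, more or less, was the work.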

That new job was mostly administrative. I had to make sure that the work was doled out properly and that everyone was paid, attend to the incoming, as-yet-untagged texts, and generally help to oversee the operation. I loved it, of course. I was, after all, part of a revolution.

Tagging, though, was for the birds; only revolutionary fervor (or the promise of pin money) could compel one to spend hour upon hour determining the beginning and end of paragraphs. The thought occurred to me that perhaps some of this could be automated. And isn’t programming the way you automate things? And isn’t C a programming language?
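
What I had in mind was nothing grander than the sketch below, reconstructed here purely to illustrate the idea (it is not anything I actually wrote at the time, and its heuristic, that a blank line ends a paragraph, is the crudest one imaginable): read plain text on standard input and wrap each paragraph in <p> tags on the way out.

    /* A toy illustration: read plain text on stdin, treat blank lines as
     * paragraph breaks, and wrap each paragraph in <p>...</p> on stdout.
     * A sketch of the idea only, not the Etext Center's actual tooling. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[4096];
        int in_para = 0;

        while (fgets(line, sizeof line, stdin) != NULL) {
            /* A line containing only whitespace ends the current paragraph. */
            int blank = (strspn(line, " \t\r\n") == strlen(line));

            if (blank) {
                if (in_para) {
                    printf("</p>\n");
                    in_para = 0;
                }
            } else {
                if (!in_para) {
                    printf("<p>");
                    in_para = 1;
                }
                fputs(line, stdout);
            }
        }
        if (in_para)
            printf("</p>\n");
        return 0;
    }

Real texts, of course, defeat the blank-line heuristic almost immediately, which is roughly where the hours went.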

I exaggerate only slightly the depth of my ignorance about what it meant to write software. But I couldn’t have been fifty pages into Brian Kernighan and Dennis Ritchie’s classic The C Programming Language before I knew that my whole scholarly direction was about to change (I had been training to be a theater historian). Putting text online was all well and good, but—and I was surely the only one in the world who had had this thought—the real revolution would come once those texts were in machine-readable form. I had barely wrapped my head around while-loops before I realized that it was the text morphed, transformed, and visualized that was going to change my discipline forever. I was smitten.

So smitten that I proceeded to learn half a dozen programming languages in the space of a few years, along with relational databases, design patterns, functional programming, and discrete mathematics (the latter with the help of a brilliant professor who kindly agreed to tutor me). Some of this overzealousness was born of my worry that I would never be taken seriously as a technologist, since my only formal training consisted of English degrees. But these were also the heady days of the dot-com bubble, and having html on your résumé already made you an expert.

I soon discovered, of course, that I was far from the first to imagine something like text analysis. I read every relevant article on the subject going back to Busa. Many, many had gone before me. But something was missing, and it had to do not with the Etext Center’s goals, but with what I was being taught back in the English Department. The overwhelming consensus among my forebears seemed to be that the greatest thing about computers was the way in which they could bring objectivity to the study of the humanities. To my way of thinking, this was a devastating error. Much of my work since (a dissertation, a book, and many papers) has been devoted to countering what I then perceived as “scientism” in the digital humanities.

I hope I will be forgiven this brief foray into the autobiographical; I am loath to write anything so precious as an intellectual memoir, particularly since I am (I hope!) very far from the end of my career. My purpose, though, is only partly to indulge the trope of the wizened adult looking back on the follies of youth. I would rather ask whether our present, for all its apparent maturity, is only slightly less naive than our past.

Every moment in the recent history of dh, indeed, evokes (in me, at least) a twinge of embarrassment. The flurry of attempts to define dh, to say what it is and isn’t, and who’s doing it and who isn’t; the “punk phase” of dh, in which we loudly proclaimed “more hack, less yak”; the more recent defensive postures born of repeated attacks against a field that, for reasons too numerous to mention, seems to some a harbinger of the death of the humanities as such: all of this leaves me only slightly less bemused than I am by my nineteen-year-old self.

It is possible, though, that naiveté is a precondition for learning and growth. Even the most critical, skeptical posture can be a form of naiveté (did we not all, after first reading Foucault, feel as if we had joined the enlightened?). Certainly, it is the naiveté of our students that propels us forward as teachers. When they “discover” Hamlet’s indecision or Woolf’s feminism, we celebrate their achievement (while gently drawing their attention to the giants’ shoulders). We expect better from ourselves as scholars, of course. But is it possible that a field that isn’t to some degree stumbling in and out of absurd generalization, quaint lack of critical self-consciousness, and discovery of what someone else already knew—is it possible that such a field is not fully alive?

The British parliamentarian Enoch Powell once said that all political careers end in failure. My own career is (I must say it again) far from over, but I already suspect that all academic careers end in irrelevance. All our revelations, all our breakthroughs, all our epiphanies will doubtless appear naive to future generations. One or two of us will perhaps be remembered as “great scholars,” but only in the way that Aristotle is remembered as a great zoologist. If we are lucky, they will admire our pluck; their own job will be to destroy us.

In the meantime, our job, it seems to me, is kindly and gently to destroy our past selves even as we create the next iteration of what will later be itself destroyed. If that is cynical, it is in the very ancient sense. Diogenes, after all, had to destroy himself repeatedly before he could exhort his fellow citizens to awareness.

He started out studying to be a banker.
