Faculty Profile

Chad Wellmon

Growing up in rural western North Carolina, Chad Wellmon played baseball, read poetry, and designed elaborate science-fair projects that never seemed to work. At fifteen, his sights set on a career in nuclear physics, he left his hometown and enrolled in one of the nation's few publicly funded boarding schools, the North Carolina School of Science and Mathematics. The school, then as now prestigious and highly competitive, is in Durham, just blocks from Duke University and its library, where the aspiring scientist—and future literature professor—spent countless hours.

At his new school, Wellmon encountered a world of learning crammed into the few square blocks of a onetime hospital campus. He spent mornings reading Western philosophy and literature from Plato to Dickinson, afternoons setting up experiments in the physics lab, and nights seeking assistance with his chemistry lab work while also finding time to help classmates parse sentences from Erasmus and T. S. Eliot. He describes his move to the North Carolina School of Science and Mathematics:

It was as if I had been reading epistles about the sciences and literature sent to the far reaches of the world only to be suddenly dropped into the middle of a world of intellectual excitement—a place where knowledge of all sorts was shared and generated among a motley group of teenagers who didn't know better than to disregard the established boundaries of disciplines, sciences, and traditions.

In keeping with that boundary-jumping spirit, Wellmon went on to design his own degree in political theory at Davidson College and earn a PhD in German studies and intellectual history at the University of California, Berkeley. But those formative years in Durham showed him the diverse forms knowledge could take, the different ways in which the world could be approached, appreciated, and known—from the insight of poetry to the exacting descriptions of physics. He also learned early on that he would never become proficient in them all. Instead, he decided to study how these different ways of knowing came about, what they assumed, and which values they held dear. Ever since, his scholarship has been animated by a desire to understand different ways of knowing the world, others, and ourselves.

After further study in Germany, Wellmon joined the faculty of the University of Virginia in 2007 as an assistant professor. His work at the university has focused on questions concerning the history of knowledge, especially the epistemic values, practices, and institutions underlying scientific knowledge. Yet, whereas traditional historians of science tend to focus on the so-called hard sciences, such as physics and chemistry, Wellmon has expanded that ambit to include the human sciences. He understands the human sciences to center on the systematic inquiry into the activities and creations of humans, and to include all the sciences that study and make up culture. Like all modern sciences, the human sciences, from philology and anthropology to art history and philosophy, generate knowledge by appealing to questions concerning evidence, explanation, and the proper objects of inquiry. Wellmon is particularly interested in the normative and ethical aspects of these sciences and forms of inquiry. What types of persons do these sciences and types of knowledge both presume and form?

In his first book, Becoming Human: Romantic Anthropology and the Embodiment of Freedom (2010), Wellmon considered why the question "What is the human?" became such a central concern for thinkers in the German Enlightenment. Confronted with a growing stock of ethnographic reports about peoples around the world, new knowledge about human biology, and early theories about culture and cultural differences, German intellectuals, including J. G. Herder, Immanuel Kant, Friedrich Schleiermacher, and Wilhelm von Humboldt, began to imagine and sketch a new science of man. From its inception as a discipline in eighteenth-century Germany, anthropology—or this new science of man—was beset by a basic but intractable conundrum: the object of study, the human, was the same as the subject of study, the human. Knower and known were identical. This epistemic puzzle produced deeply ethical anxieties. Questions of fact and value could never be fully isolated. Unexamined moral values often colored empirical judgments, just as isolated empirical observations became the grounds for normative evaluations. This epistemic and ethical dilemma would remain with anthropology throughout its history, and it continues to be a central concern of all human sciences.

While Becoming Human focused on the vicissitudes of the emergent discipline of anthropology, it raised questions of a more basic nature for Wellmon: What constitutes a discipline or science in the first place? When does knowledge about the world go from being ordinary, common knowledge to being uniquely scientific knowledge? These were precisely the questions that animated some of the Enlightenment's greatest thinkers, and the ones that Wellmon takes up in his forthcoming book with Johns Hopkins University Press, Organizing Enlightenment: Information Overload and the Invention of the Modern Research University, the research for which was funded in part by a Charles A. Ryskamp Research Fellowship from the American Council of Learned Societies.

In contemplating these epistemic questions, Wellmon noticed that debate about what constituted a real science, as opposed to merely common, ordinary knowledge, had begun in Germany just as advances in print technologies and increased demand were enabling an increase in the production of new books. Between 1770 and 1800, the production of printed texts in Germany increased by nearly 150 percent. With this in mind, Wellmon reformulated his initial questions about science and authoritative knowledge in broader terms: What counts as real or authoritative knowledge in an age when information is thought to be ubiquitous and easily accessible?

The resonances with our contemporary moment of "big data" and "information overload" were obvious. "If we imagine ourselves to be overwhelmed by a deluge of digital data," Wellmon says, "late-eighteenth-century German readers imagined themselves to be drowning in a 'flood of books' or suffering under a 'plague of books.'" In 2014, as in, say, 1784, people find themselves compelled to decide which sources of knowledge to trust, and which not to, in environments of extraordinarily expanded production and seemingly unlimited access. Wellmon is not sure what practices, institutions, norms, and related technologies will ultimately emerge in our own digital age, but he points out that late-eighteenth-century and early-nineteenth-century German intellectuals embraced the idea of the modern research university and its organizing ethic, disciplinarity. The "credentialized," discipline-based ordering of knowledge embodied in the research university was a new way of coping with a perceived proliferation of knowledge and the attendant crisis in epistemic authority.

Ultimately, Wellmon considers Organizing Enlightenment a sustained reflection on how the present moment and that period of intellectual energy 200 years ago can illuminate each other. The anxieties and aspirations felt by people around 1800, he insists, can help us more fully appreciate our own situation and what is at stake in debates about the future of digital technologies, the university, and the future of knowledge. Likewise, he believes that our contemporary situation can illuminate the history of the research university, helping us to better understand the norms, virtues, and purposes that have animated it for two centuries.

As Wellmon was writing about the history of the university in Organizing Enlightenment, the contemporary university suddenly became the center of a national debate about its future, with detractors and supporters alike declaring it to be in a state of crisis. For Wellmon and his University of Virginia colleagues, these debates hit home during the summer of 2012 with the resignation of Teresa Sullivan as university president. Amid all the chatter and declarations of the contemporary university's imminent disruption by massive open online courses (MOOCs) and other new digital technologies, Wellmon found that the debate over the university's future was largely ignoring its past. Despite the fact that earlier formulations of the mission of the research university had played a key role in its development, the debate seemed to assume that the university was just another technology that, like so many others, would fold under the pressure of the latest digital advances.

It was then that Wellmon joined forces with two colleagues, Louis Menand (Harvard) and Paul Reitter (Ohio State), to develop a new book, The Rise of the Research University: A Sourcebook (University of Chicago Press, forthcoming). This edited volume begins with a simple observation: the nineteenth-century German university, both real and imagined, was especially important for the first American research universities, such as Johns Hopkins and the University of Chicago. However, it remains hard to gain access to many of the critical sources documenting the debates surrounding the genesis of the research university. Most of the classic German writings on the university either have not been translated or were rendered into English long ago, and rather poorly. In addition, some of the most consequential English-language writings on the early American research university have turned out to be surprisingly difficult to track down. In The Rise of the Research University, Wellmon and his colleagues will make easily available a series of seminal writings on the university; moreover, this "sourcebook," as it is described in its subtitle, will make those writings more accessible to students and a general audience by providing them with historical context. The broader aim is to shape the national debate about the future of the research university by providing a framework for understanding its emergence and development over the past two centuries.

During his work on the history of the university and the role technological change had always played in it, Wellmon began to explore broader questions about technology and its relationship to humans. Specifically, he began to question the two dominant narratives of our own digital age: the techno-utopian and the technophobic. For techno-utopians, new digital technologies are salvific. They have the power to set humans free from traditional authority structures and to usher in a wave of radical democracy, as well as to realize that ancient dream of unifying all knowledge. Conversely, technophobes feel that digital technologies, at best, distract and overwhelm us with too much data, and, at worst, degrade our ability to think and pay sustained attention. In so many words, digital technologies make us stupid.

Those who adhere to these dominant narratives, Wellmon argues, make two basic mistakes. First, they imagine our information age to be unprecedented, despite the fact that information explosions and the utopian and apocalyptic visions that accompany them are an old concern. The emergence of every new information technology brings with it new methods and modes for storing and transmitting ever more information, and these technologies deeply affect the ways in which humans interact with the world. Both the unconstrained optimism of the techno-utopians and the unmitigated pessimism of the technophobes echo historical hopes and complaints about information overload. A second mistake of the two narratives is that they both isolate the causal effects of technology. Technologies, whether the printed book or Google, make us neither unboundedly free nor unflaggingly stupid. Positing such a sharp dichotomy between humans and technology unduly simplifies the complex, unpredictable, and deeply historical ways in which humans and technologies interact and shape each other. Simple claims about the effects of technology obscure basic assumptions, for good or bad, about technology as an independent cause that eclipses causes of other kinds. But the effects of technology cannot be so easily isolated and abstracted from their social and historical contexts.

To challenge the techno-utopian and technophobic narratives, Wellmon proposed, in his article "Why Google Isn't Making Us Stupid…or Smart" (which he is now expanding into a book titled Google before Google: Search and the Human from Kant to the Web), to rethink the very terms of technology and its relationship to humans. Instead of thinking of technology as either a distinct tool somehow separate from the human or simply as an extension of the mind, we might consider technologies as embedded in the environment in which people live, work, think, and play. In this sense, Wellmon uses the word technology to refer to the very manner in which we engage the world. To celebrate Google, or any other technology, as inherently liberating, or to condemn it as enslaving, is to ignore its more human scale: we gain access to the world and information through technologies that are designed and constantly altered by human decisions and experiences. Technologies do not exist independently of the persons who use them; neither do we exist independently of the technologies through which we engage the world, others, and ourselves.

In an effort to push his thinking about technology and ethics further, Wellmon recently started a blog called The Infernal Machine: Reflecting on Technology, Ethics, and the Human Person. It is based at the website of The Hedgehog Review, the journal on contemporary culture published by the Institute for Advanced Studies in Culture. "We live in an age of rapid technological change," Wellmon says, arguing that The Infernal Machine is an attempt to examine and consider these changes in terms of the good, the true, and the beautiful. Never content to consider only the immediate moment, Wellmon and others who post at The Infernal Machine draw on the approaches of history, philosophy, anthropology, sociology, media studies, and religious studies to make sense of the historical and contemporary relationships among technology, ethics, and the human person.

Looking forward, Wellmon sees The Infernal Machine as a launch pad for “Technology, Knowledge, and Meaning,” a research program at the Institute for Advanced Studies in Culture set to begin in the fall of 2014. He believes that this program will establish a context for scholars in the humanities and humanistic social sciences to carry out both historical and analytic research dedicated to the understanding of how knowledge and technology work in the world. It will also provide a platform for imagining new paradigms and forms of knowledge and for engaging technology for the future. The questions he will explore are big: What are the limits and possibilities of knowledge today? How do the technologies that enable our knowledge outstrip our moral and ethical capacities and practices? What alternatives to instrumental rationality can we imagine today? In his years of work on the topic of disciplinarity and the research university, Wellmon has learned to appreciate the intellectual focus and clarity that academic disciplines can provide—the way they help drown out the noise of so many sources of knowledge and data. But he is an interdisciplinary thinker at heart, formed as he was by his time at the North Carolina School of Science and Mathematics. Today, when he approaches a new conundrum or a thorny text, he can’t help but imagine what his former lab partners—now cancer researchers, computer engineers, and astrophysicists—would make of it.

BIBLIOGRAPHY

Books

Organizing Enlightenment: Information Overload and the Invention of the Modern Research University. Baltimore: Johns Hopkins University Press, forthcoming.

Google before Google: Search and the Human from Kant to the Web. Amherst: University of Massachusetts Press, forthcoming.

The Rise of the Research University: A Sourcebook, edited with Louis Menand and Paul Reitter. Chicago: University of Chicago Press, forthcoming.

Becoming Human: Romantic Anthropology and the Embodiment of Freedom. Series on Philosophy and Literature. University Park, PA: Penn State University Press, 2010.

Articles and Chapters

"See Page 481," with Brad Pasanek. In The Eighteenth Century, edited by David Gies and Cynthia Wall. Charlottesville: University of Virginia Press, forthcoming.

"The Enlightenment Index," with Brad Pasanek. The Eighteenth Century: Theory and Interpretation, forthcoming.

"Kant on the Discipline of Knowledge and the Unity of Knowledge." In Performing Knowledge in the Long Eighteenth Century, edited by Mary Helen Dupree and Sean Franzel. New York: Walter de Gruyter, forthcoming.

"Knowledge, Virtue, and the Research University." The Hedgehog Review 15, no. 2 (Summer 2013): 79–91.

"Why Google Isn't Making Us Stupid…or Smart." The Hedgehog Review 14, no. 1 (Spring 2012): 64–78.

"Touching Books: Diderot, Novalis and the Encyclopedia of the Future." Representations 114, no. 1 (2011): 65–102.

"Goethe's Morphology of Knowledge, or the Overgrowth of Nomenclature." Goethe Yearbook 17 (2010): 153–77.

"Kant and the Feelings of Reason." Eighteenth-Century Studies 42, no. 4 (2009): 557–80.

"Languages, Cultural Studies and the Future of Foreign Language Education." Modern Language Journal 92, no. 2 (2008): 292–95.

"From Bildung durch Sprache to Language Ecology: The Multilingual Challenge," with Claire Kramsch. Münchener Arbeiten zur Fremdsprachenforschung 22 (2008): 215–25.

"Lyrical Feeling: Novalis' Anthropology of the Senses." Studies in Romanticism 47, no. 4 (2008): 453–78.

"The Problem of Framing in Foreign Language Education: The Case of German," with Claire Kramsch, Tes Howell, and Chantelle Warner. Critical Inquiry in Language Studies 4, nos. 2–3 (2007): 151–78.

"Poesie as Anthropology: Schleiermacher, Colonial History and the Ethics of Ethnography." German Quarterly 79, no. 4 (2006): 423–42.

Reviews

Review of Louis Menand, The Marketplace of Ideas, and Mark Taylor, Crisis on Campus. The Hedgehog Review 13, no. 1 (Spring 2011): 88–91.

Review of Stefani Engelstein, Anxious Anatomy. In The Eighteenth-Century Current Bibliography. New York: AMS Press, 2009.

Review of John B. Lyon, Crafting the Flesh, Crafting the Self. German Studies Review 31, no. 1 (2008).

Review of Tom Gunning, The Films of Fritz Lang: Allegories of Modernity. Medienwissenschaft (Spring 2002).


Selected Articles

Why Google Isn't Making Us Stupid…or Smart

Chad Wellmon

This essay originally appeared in issue 14.1 of The Hedgehog Review, Spring 2012.

Last year The Economist published a special report not on the global financial crisis or the polarization of the American electorate, but on the era of big data. Article after article cited one big number after another to bolster the claim that we live in an age of information superabundance. The data are impressive: 300 billion emails, 200 million tweets, and 2.5 billion text messages course through our digital networks every day, and, if these numbers were not staggering enough, scientists are reportedly awash in even more information. This past January astronomers surveying the sky with the Sloan telescope in New Mexico released over 49.5 terabytes of information—a mass of images and measurements—in one data drop. The Large Hadron Collider at CERN (the European Organization for Nuclear Research), however, produces almost that much information per second. Last year, the world's information base was estimated to be doubling every eleven hours. Just a decade ago, computer professionals spoke of kilobytes and megabytes. Today they talk of the terabyte, the petabyte, the exabyte, the zettabyte, and now the yottabyte, each a thousand times bigger than the last.

Some see this as information abundance, others as information overload. The advent of digital information and with it the era of big data allows geneticists to decode the human genome, humanists to search entire bodies of literature, and businesses to spot economic trends. But it is also creating for many the sense that we are being overwhelmed by information. How are we to manage it all? What are we to make, as Ann Blair asks, of a zettabyte of information—a one with 21 zeros after it?1

From a more embodied, human perspective, these tremendous scales of information are rather meaningless. We do not experience information as pure data, be it a byte or a yottabyte, but as filtered and framed through the keyboards, screens, and touchpads of our digital technologies. However impressive these astronomical scales of information may be, our contemporary awe and increasing worry about all this data obscure the ways in which we actually engage it and the world of which it and we are a part. All of the chatter about information superabundance and overload tends not only to marginalize human persons, but also to render technology just as abstract as a yottabyte. An email is reduced to yet another data point, the Web to an infinite complex of protocols and machinery, Google to a neutral machine for producing information. Our compulsive talk about information overload can isolate and abstract digital technology from society, human persons, and our broader culture. We have become distracted by all the data and inarticulate about our digital technologies.

The more pressing, if more complex, task of our digital age, then, lies not in figuring out what comes after the yottabyte, but in cultivating contact with an increasingly technologically formed world.2 In order to understand how our lives are already deeply formed by technology, we need to consider information not only in the abstract terms of terabytes and zettabytes, but also in more cultural terms. How do the technologies that humans form to engage the world come in turn to form us? What do these technologies that are of our own making and irreducible elements of our own being do to us? The analytical task lies in identifying and embracing forms of human agency particular to our digital age, without reducing technology to a mere mechanical extension of the human, to a mere tool. In short, asking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable.

Two Narratives

The history of this mutual constitution of humans and technology has been obscured as of late by the crystallization of two competing narratives about how we experience all of this information. On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge. The digital world will become a "single liquid fabric of interconnected words and ideas," a form of knowledge without distinctions or differences.3 Unlike other technological innovations, like print, which was limited to the educated elite, the internet is a network of "densely interlinked Web pages, blogs, news articles and Tweets [that] are all visible to anyone and everyone."4 Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us.5 These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological. The dream is of a post-human era.6

On the other hand, less sanguine observers interpret the advent of digitization and big data as portending an age of information overload. We are suffering under a deluge of data. Many worry that the Web's hyperlinks that propel us from page to page, the blogs that reduce long articles to a more consumable line or two, and the tweets that condense thoughts to 140 characters have all created a culture of distraction. The very technologies that help us manage all of this information are undermining our ability to read with any depth or care. The Web, according to some, is a deeply flawed medium that facilitates a less intensive, more superficial form of reading. When we read online, we browse, we scan, we skim. The superabundance of information, such critics charge, is changing not only our reading habits, but also the way we think. As Nicholas Carr puts it, "what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles."7 The constant distractions of the internet—think of all those hyperlinks and new message warnings that flash up on the screen—are degrading our ability "to pay sustained attention," to read in depth, to reflect, to remember. For Carr and many others like him, true knowledge is deep, and its depth is proportional to the intensity of our attentiveness. In our digital world that encourages quantity over quality, Google is making us stupid.

Each of these narratives points to real changes in how technology impacts humans. Both the scale and the acceleration of information production and dissemination in our digital age are unique. Google, like every technology before it, may well be part of broader changes in the ways we think and experience the world. Both narratives, however, make two basic mistakes.

First, they imagine our information age to be unprecedented, but information explosions and the utopian and apocalyptic pronouncements that accompany them are an old concern. The emergence of every new information technology brings with it new methods and modes for storing and transmitting ever more information, and these technologies deeply impact the ways in which humans interact with the world. Both the optimism of technophiles who predict the emergence of a digital "liquid" intelligence and the pessimism of those who fear that Google is "making us stupid" echo historical hopes and complaints about large amounts of information.

Second, both narratives make a key conceptual error by isolating the causal effects of technology. Technologies, whether the printed book or Google, do not make us unboundedly free or unflaggingly stupid. Such a sharp dichotomy between humans and technology simplifies the complex, unpredictable, and thoroughly historical ways in which humans and technologies interact and form each other. Simple claims about the effects of technology obscure basic assumptions, for good or bad, about technology as an independent cause that eclipses causes of other kinds. They assume the effects of technology can be easily isolated and abstracted from their social and historical contexts.

Instead of thinking in such dichotomies or worrying about all of those impending yottabytes, we might consider a perhaps simple but oftentimes overlooked fact: we access, use, and engage information through technologies that help us select, filter, and delimit. Web browsers, hyperlinks, blogs, online newspapers, computational algorithms, RSS feeds, Facebook, and Google help us turn all of those terabytes of data into something more useful and particular, that is, something that can be remade and repurposed by an embodied human person. These now ubiquitous technologies help us filter the essential from the excess and search for the needle in the haystack, and in so doing they have become central mediums for our experience of the world. In this sense, technology is neither an abstract flood of data nor a simple machinelike appendage subordinate to human intentions, but instead the very manner in which humans engage the world.

To celebrate the Web, or any other technology, as inherently edifying or stultifying is to ignore its more human scale: our individual access to this imagined expanse of pure information is made possible by technologies that are constructed, designed, and constantly tweaked by human decisions and experiences. These technologies do not exist independently of the human persons who design and use them. Likewise, to suggest that Google is making us stupid is to ignore the historical fact that over time technologies have had an effect on how we think, but in ways that are much more complex and not at all reducible to simple statements like "Google is making us stupid." Think of it this way: the Web in its entirety—just like those terabytes of information that we imagine weighing down upon us—is inaccessible to the ill-equipped person. Digital technologies make the Web accessible by making it seem much smaller and more manageable than we imagine it to be. The Web does not exist.

In this sense, the history of information overload is instructive less for what it teaches us about the quantity of information than for what it teaches us about how the technologies that we design to engage the world come in turn to shape us. The specific technologies developed to manage information can give us insight into how we organize, produce, and distribute knowledge—that is, the history of information overload is a history of how we know what we know. It is not only the history of data, books, and the tools used to cope with them. It is also a history of ourselves and of the environment within which we make and in turn are made by technologies.

In the following sections, I put our information age in historical context in an effort to demonstrate that technology's impact on the human is both precedented and constitutive of new forms of life, new norms, and new cultures. The concluding sections focus on Google in particular and consider how it is impacting our very notion of what it is to be human in the digital age. Carr and other critics of the ways we have come to interact with our digital technologies have good reason to be concerned, but, as I hope to show, for rather different reasons than they might think. The core issue concerns not particular modes of accommodating new technologies—nifty advice on dealing with email or limiting screen time—but our very conception of the relationship between the human and technology.

Too Many Books

As historian Ann Blair has recently demonstrated, our contemporary worries about information overload resonate with historical complaints about "too many books." Historical analogues afford us insight not only into the history of particular anxieties, but also into the ways humans have always been impacted by their own technologies. These complaints have their biblical antecedents: Ecclesiastes 12:12, "Of making books there is no end"; their classical ones: Seneca, "the abundance of books is a distraction"8; and their early modern ones: Leibniz, the "horrible mass of books keeps growing."9 After the invention of the printing press around 1450 and the attendant drop in book prices—by some estimates as much as 80 percent—these complaints took on new meaning. As the German philosopher and critic Johann Gottfried Herder put it in the late eighteenth century, the printing press "gave wings" to paper.10

Complaints about too many books gained particular urgency over the course of the eighteenth century, when the book market exploded, especially in England, France, and Germany. Whereas today we imagine ourselves to be engulfed by a flood of digital data, late eighteenth-century German readers, for example, imagined themselves to have been infested by a plague of books [Bücherseuche]. Books circulated like contagions through the reading public. These anxieties corresponded to a rapid increase in new print titles in the last third of the eighteenth century, an increase of about 150 percent from 1770 to 1800 alone.

Similar to contemporary worries that Google and Wikipedia are making us stupid, these eighteenth-century complaints about "excess" were not merely descriptive. In 1702 the jurist and philosopher Christian Thomasius laid out some of the normative concerns that would gain increasing traction over the course of the century. He described the writing and business of books as a kind of

Epidemic disease, which hath afflicted Europe for a long time, and is more fit to fill warehouses of booksellers, than the libraries of the Learned. Any one may understand this to be meant of that itching desire to write books, which people are troubled with at this time. Heretofore none but the learned, or at least such as ought to be accounted so, meddled with this subject, but now-a-days there is nothing more common, it extends itself through all professions, so that now almost the very Coblers, and Women who can scarce read, are ambitious to appear in print, and then we may see them carrying their books from door to door, as a Hawker does his comb cases, pins and laces.11

The emergence of a print book market lowered the bar of entry for authors and gradually began to render traditional filters and constraints on the production of books increasingly inadequate. The perception of an excess of books was motivated by a more basic assumption about who should and should not write them. At the end of the century, even book dealers had grown weary of a market that seemed to be growing out of control. In his 1795 screed, Appeal to My Nation: On the Plague of German Books, the German bookseller and publisher Johann Georg Heinzmann lamented that "no nation has printed so much as the Germans."12 For Heinzmann, late eighteenth-century German readers suffered under a "reign of books" in which they were the unwitting pawns of ideas that were not their own.

Giving this broad cultural anxiety a philosophical frame, and beating Carr to the punch by more than two centuries, Immanuel Kant complained that such an overabundance of books encouraged people to "read a lot" and "superficially."13 Extensive reading not only fostered bad reading habits, but also caused a more general pathological condition, Belesenheit [the quality of being well-read], because it exposed readers to the great "waste" [Verderb] of books. It cultivated uncritical thought.

Like contemporary worries about "excess," these complaints were fundamentally normative. They made particular claims not only about what was good or bad about print, but about what constituted "true" knowledge. First, they presumed some unstated yet normative level of information or, in the case of a Bücherseuche, some normative number of books. There are too many books; there is too much data. But compared to what? Second, such laments presumed the normative value of particular practices and technologies for dealing with all of these books and all of this information. Every complaint about excess was followed by a proposal on how to fix the apparent problem. To insist that there are too many books was to insist that there were too many books to be read or dealt with in a particular way, and thus to assume the normative value of one form of reading over another.

Enlightenment Reading Technologies

Not so dissimilar to contemporary readers with their digital tools, eighteenth-century German readers had a range of technologies and methods at their disposal for dealing with the proliferation of print—dictionaries, bibliographies, reviews, notetaking, encyclopedias, marginalia, commonplace books, footnotes. These technologies made the increasing amounts of print more manageable by helping readers to select, summarize, and organize an ever-increasing store of information. The sheer range of technologies demonstrates that humans usually deal with information overload through creative and sometimes surprising solutions that blur the line between humans and technology.

By the late seventeenth and early eighteenth centuries, European readers dealt with the influx of new titles and the lack of funds and time to read them all by creating virtual libraries called bibliotheca. At first these printed texts were simply listings of books that had been published or displayed at book fairs, but over time they began to include short reviews and summaries intended to guide the collector, scholar, and amateur in their choice and reading of books. By providing summaries of individual books, they also allowed eighteenth-century readers to avoid reading entire books.

Eighteenth-century readers also made use of an increasing array of encyclopedias. In contrast to their early modern Latin predecessors, which sought to summarize the most significant branches of established knowledge (designed to present an enkuklios paideia, or common knowledge), these Enlightenment encyclopedias were produced and sold as reference books that disseminated information more widely and efficiently by compiling, selecting, and summarizing more specialized and, above all, new knowledge. They made knowledge more general and common by sifting and constraining its purview.14

Similarly, compilations, which date from at least the early modern period, employed cut-and-paste techniques, rather than summarization, to select, collect, and distribute the best passages from an array of books.15 A related search technology, the biblical concordance—the first dates back to 1247—indexed every word of the Bible and facilitated its use for sermons and, after the Bible's translation into the vernacular, for even broader audiences. Similarly, indexes became increasingly popular and big selling points of printed texts by the sixteenth century.16 All of these technologies facilitated a consultative reading that allowed a text to be accessed in parts instead of being read straight through from beginning to end.17 By the early eighteenth century, there was even a science devoted to organizing and accounting for all of these technologies and books: historia literaria. It produced books about books. The technologies and methods for organizing and managing all of these books and information were embedded into other forms and even other sciences.

All of these devices and technologies provided shortcuts and methods for filtering and searching the mass of printed or scribal texts. They were technologies for managing two perennially precious resources: money (books and manuscripts were expensive) and time (it takes a lot of time to read every word).

While many overwhelmed readers welcomed these techniques and technologies, some, especially by the late eighteenth century, began to complain that they led to a derivative, second-hand form of knowledge. One of Kant's students and a key figure of the German Enlightenment, J. G. Herder, mocked the French for their attempts to deal with such a proliferation of print through encyclopedias:

Now encyclopedias are being made, even Diderot and D'Alembert have lowered themselves to this. And that book that is a triumph for the French is for us the first sign of their decline. They have nothing to write and, thus, produce Abrégés, vocabularies, esprits, encyclopedias—the original works fall away.18

Echoing contemporary concerns about how our reliance on Google and Wikipedia might lead to superficial forms of knowledge, Herder worried that these technologies reduced knowledge to discrete units of information. Journals reduced entire books to a paragraph or blurb; encyclopedias aggregated huge swaths of information into a deceptively simple form; compilations separated readers from the original texts. By the mid-eighteenth century, the word "polymath"—previously used positively to describe a learned person—became synonymous with dilettante, one who merely skimmed, aggregated, and heaped together mounds of information but never knew much at all. In sum, encyclopedias and the like had reduced the Enlightenment project, these critics claimed, to mere information management. At stake was the definition of "true" knowledge. Over the course of the eighteenth century, German thinkers and authors began to make a normative distinction between what they termed Gelehrsamkeit and Wissen, between mere pedantry and true knowledge.

As this brief history of Enlightenment information technologies suggests, to claim that a particular technology has one unique effect, either positive or negative, is to reduce both historically and conceptually the complex causal nexus within which humans and technologies interact and shape each other. Carr's recent and broadly well-received arguments wondering if Google makes us stupid, for example, rely on a historical parallel that he draws with print. He claims that the invention of printing "caused a more intensive" form of reading and, by extrapolation, that print caused a more reflective form of thought—words on a page focused the reader.19 Historically speaking, this is hyperbolic techno-determinism. Carr assumes that technologies simply "determine our situation," independent of human persons, but these very technologies, methods, and media emerge from particular historical situations with their own complex of factors.20 Carr relies on quick allusions to historians of print to bolster his case and inoculate himself from counter-arguments, but the historian of print to whom he appeals, Elizabeth L. Eisenstein, warns that "efforts to summarize changes wrought by printing in any simple or single formula are likely to lead us astray."21

Arguments like Carr's—and I focus on him because he has become the vocal advocate of this view—also tend to ignore the fact that, historically, print facilitated a range of reading habits and styles. Francis Bacon, himself prone to condemning printed books, laid out at least three ways to read books: "Some books are to be tasted, others to be swallowed, and some few to be chewed and digested."22 As a host of scholars have demonstrated of late, different ways of reading co-existed in the print era.23 Extensive or consultative forms of reading—those that Carr might describe as distracted or unfocused—existed in the Enlightenment alongside more intensive forms of reading—those that he might describe as deep, careful, prolonged engagements with particular texts. Eighteenth-century German Pietists read the Bible very closely, but they also consistently consulted Bible concordances and Latin encyclopedias.24 Even the form of intensive reading held up today as a dying practice, novel reading, was often derided in the eighteenth century as weakening the memory and leading to "habitual distraction," as Kant put it.25 It was thought especially dangerous to women, who, according to Kant, were already prone to such lesser forms of thought. In short, print did not cause one particular form of reading; instead, it facilitated a range of ever-newer technologies, methods, and innovations that were deeply interwoven with new forms of human life and new ways of experiencing the world.

The problem with suggestions that Google makes us stupid, smart, or whatever else we might imagine, however, is not just their historical myopia. Such reductions elide the fact that Google and print technology do not operate independently of the humans who design, interact with, and constantly modify them, just as humans do not exist independently of technologies. By focusing on technology's capacity to determine the human (by insisting that Google makes us stupid, that print makes us deeper readers), we risk losing sight of just how deeply our own agency is wrapped up with technology. We forgo a more anthropological perspective from which we can observe "the activity of situated people trying to solve local problems."26 To emphasize a single and direct causal link between technology and a particular form of thought is to isolate technology from the very forms of life with which it is bound up. Considering our anxieties and utopic fantasies about technology or information superabundance in a more historical light is one way to mitigate this tendency and gain some conceptual clarity.
Thus far I have offered some very general historical and conceptual observations about technology and the history of information overload. In the next sections, I focus on one particular historical technology—the footnote—and its afterlife in our contemporary digital world.

The Footnote: From Kant to Google

Today our most common tools for organizing knowledge are algorithms and data structures. We often imagine them to be unprecedented. But Google's search engines take advantage of a rather old technology—that most academic and seemingly useless thing called the footnote. Although Google continues to tweak and improve its search engines, the data that fuel them are hyperlinks, those blue-colored bits of text on the Web that, if clicked, will take you to another page. They are the sinews of the Web, which is simply the totality of all hyperlinks.

The World Wide Web emerged in part from the efforts of a British physicist working at CERN in the early 1990s, Tim Berners-Lee. Frustrated by the confusion that resulted from a proliferation of computers, each with its own codes and formats, he wondered how they could all be connected. He took advantage of the fact that, regardless of the particular code, every computer had documents. He went on to develop HTML, URLs, and HTTP, which could link these documents regardless of the differences among the computers themselves.

It turns out that these digital hyperlinks have a revealing historical and conceptual antecedent in the Enlightenment footnote. The modern hyperlink and the Enlightenment footnote share a logic that is grounded in assumptions about the text-based nature of knowledge. Both assume that documents, the printed texts of the eighteenth century or the digitized ones of the twenty-first century, are the basis of knowledge. And these assumptions have come to dominate not only the way we search the Web, but also the ways we interact with our digital world. The history of the footnote is a curious but perspicuous example, then, of how normative, cultural assumptions and values become embedded in technology.

Footnotes have a long history in biblical commentaries and medieval annotations. Whereas these scriptural commentaries simply "buttressed a text" that derived its ultimate authority from some divine source, Enlightenment footnotes pointed to other Enlightenment texts.27 They highlighted the fact that these texts were precisely not divine or transcendent. They located the work in a particular time and place. The modern footnote anchors a text and grounds its authority not in some transcendent realm, but in the footnotes themselves. Unlike biblical commentaries, modern footnotes "seek to show that the work they support claims authority and solidity from the historical conditions of its creation."28 The Enlightenment's citational logic is fundamentally self-referential and recursive—that is, the criteria for judgment are always given by the system of texts themselves and not by something external, like divine or ecclesial authority. The value and authority of one text is established by the fact that other texts point to it; the more footnotes point to a particular text, the more authoritative that text becomes.

Online newspapers and blogs are central to our public debates, but printed journals were the central medium of the Enlightenment. One of the most famous German journals was the Berlinische Monatsschrift, published between 1783 and 1811. It delivered the most important articles to a broad and increasingly diverse reading public. In its first issue, the editors wrote that the journal sought "news from the entire empire [Reich] of the sciences"—ethnographic reports, biographical reports about interesting people, translations, excerpts from texts from foreign lands. The editors envisioned the journal as a central node in the broader world of information exchange and circulation.

This editorial plan was then carried out according to a citational logic that structured the entire journal. The journal's first essay, "On the Origin of the Fable of the Woman in White," centers on a fable "drawn" from another text of 1723. This citation is followed by another one citing another history, published in 1753, on the origins of the fable. The rest of the essay cites "various language scholars and scholars of antiquity" [Sprach- und Alterthumsforscher] to authorize its own claims. The citations and footnotes that fill the margins and the parenthetical directives that are peppered throughout the main text not only give authority to the broader argument and narrative, but also create a web of interconnected texts. Even Kant's famous essay on the question of Enlightenment, which appeared in the same journal in 1784, begins not with a philosophical argument, but with a footnote directly underneath the title, directing the reader to a footnote from another essay, published in December of 1783, that posed the original question: "What is Enlightenment?" This essay in turn directs readers to yet another article on Enlightenment from September of that year. The traditional understanding of Enlightenment is based on the self-legislation and autonomy of reason, but all of these footnotes suggest that Enlightenment reason was bound up with print technology from the beginning.

One of the central mediums of the Enlightenment, journals, operated according to a citational logic. The authority, relevance, and value of a text was undergirded—both conceptually and visually—by an array of footnotes that pointed to other texts. Like our contemporary hyperlinks, these citations interrupted the flow of reading—marked as they often were by a big asterisk or a "see page 516." Perhaps most importantly, however, all of these footnotes and citations pointed not to a single divinely inspired or authoritative text, but to a much broader network of texts. Footnotes and citations were the pointing sinews that connected and coordinated an abundance of print. By the end of the eighteenth century, there even emerged a term for all of this pointing: the language of books [Büchersprache]. Books were imagined to speak to one another because they constantly pointed to and cited one another. The possibility of knowledge and interaction with the broader world in the Enlightenment rested not only on the pensive, autonomous philosopher, but also within the links from book to book, essay to essay.

Google's Citational Logic

The founders of Google, Larry Page and Sergey Brin, modeled their revolutionary search engine on the citational logic of the footnote and thus transposed many of its assumptions about knowledge and technology into a digital medium. Google "organizes the world's information," as its motto goes, by modeling the hyperlink structure inherent in the document-based Web; that is, it produces search results based on all of the pointing between digital texts that hyperlinks do. Taking advantage of the enormous scaling power afforded by digitization, however, Google takes this citational logic to both a conceptual and practical extreme. Whereas the footnotes in Enlightenment texts were always bound to particular pages, Google uses each hyperlink as a data point for its algorithms and creates a digitized map of all possible links among documents.

Page and Brin started from the insight that the Web "was loosely based on the premise of citation and annotation—after all, what is a link but a citation, and what was the text describing that link but annotation."29 Page himself saw this citational logic as the key to modeling the Web's own structure. Modern academic citation is simply the practice of pointing to other people's work—very much like the footnote. As we saw with Enlightenment journals, a citation not only lists important information about another work, but also confers authority on that work: "the process of citing others confers their rank and authority upon you—a key concept that informs the way Google works."30

With his original Google project, Page wanted to trace all of the links that connected different pages on the Web, not only the outgoing links, but also their backward paths. Page argued that pure computational power could produce a more complete model of the citational structure of the Web—a map of interlinked and interdependent documents made by tracing hyperlinked citations. He intended to exploit what computer scientists refer to as the Web Graph: the set of all nodes, corresponding to static HTML pages, with directed hyperlinks from page A to page B. In early 1998 there were an estimated 150 million nodes joined by 2 billion links.31

Other search engines, however, had had this modeling idea before. Given the proliferation of Web pages and with them hyperlinks, Brin and Page, like all other search engineers, knew they had to scale up "to keep up with the growth of the web."32 By 1994 the World Wide Web Worm (WWWW) had indexed 110,000 pages, but by 1997 the largest search engines claimed to index over 100 million Web documents. As Brin and Page put it in 1998, it was "foreseeable" that by 2000 a comprehensive index would contain over a billion documents. They were not merely intent on indexing pages or modeling all of the links between documents on the Web, however. They were also interested in increasing the "quality of results" that search engines returned. In order for searches to improve, their search engine would focus not just on comprehensiveness, but on the relevance or quality of its results.

The insight that made Google Google was the recognition that all links and all pages are not equal. In designing their link analysis algorithm, PageRank, Brin and Page recognized that the real power of this citational logic rested not just in counting links from all pages equally, but in "normalizing by the number of links on a page."33 The key difference between Google and early digital search technologies (like the WWWW and the early Yahoo) was that it did not simply count or collate citations. Other early search engines were too descriptive, too neutral.
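To make the mechanics concrete, here is a minimal sketch in Python of the power-iteration idea behind this kind of link analysis. The function name, damping factor, and toy graph are illustrative assumptions, not Google's actual implementation; the point is only to show how each page's score is fed back in as input on the next pass, normalized by the number of links on a page.

```python
# A toy illustration of the power-iteration idea behind link analysis
# (an illustrative sketch, not Google's production system): each page's
# score is fed back in as input on the next pass, normalized by the
# number of outgoing links on the page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it points to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start every page equal

    for _ in range(iterations):
        # every page keeps a small baseline score, the (1 - damping) share
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a dead-end page spreads its score evenly
                for other in pages:
                    new_rank[other] += damping * rank[page] / n
            else:  # a link passes on score, normalized by out-degree
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank  # outputs become inputs: the recursive loop
    return rank

# A four-page "web": C is cited by A, B, and D; D is cited by no one.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(toy_web))
```

In this hypothetical graph, page C ends up ranked highest because the most pages point to it, while page D, which nothing points to, retains only the minimal baseline score; scaled up to the actual Web, that is the recursive logic by which a page with no incoming links practically does not exist.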
Brin and Page reasoned that users wanted help not just in collecting but in evaluating all of those millions 18

From its beginnings at Stanford, the PageRank algorithm modeled the normative value of one page over another. It was concerned not simply with questions of completeness or managerial efficiency but with questions of value. It exploited the often-overlooked fact that hyperlinks, like those Enlightenment footnotes, not only connected document to document but also offered an implicit evaluation. The technology of the hyperlink, like the footnote, is not neutral but laden with normative evaluations.

The Algorithmic Self

In conclusion, I would like to forestall a possible concern: that in historicizing information overload, I risk eliding the particularity of our own digital world and dismissing valid concerns, like Carr's, about how we interact with our digital technologies. In highlighting the analogies between Google and Enlightenment print culture, I have attempted to resist the alarmism and utopianism that tend to frame current discussions of our digital culture, first by historicizing these concerns and second by demonstrating that technology needs to be understood in deep, embodied connection with the human. Considered in these terms, the question of whether Google is making us stupid or smart might give way to more complex and productive questions. What, for example, is the idea of the human person underlying Google's efforts to organize the world's information, and what forms of human life does it facilitate?

In order to address such questions, we need to understand that the Web relies on us as much as we rely on it. Every time we click, type in a search term, or update our Facebook status, the Web changes just a bit. "Google might not be making us stupid but we are making it (and Facebook) smarter" because of all the information that we feed them both every day.34 The links that make up the Web are evidence of this. They not only point to other pages but also highlight the contingency of the Web's structure by showing how the Web at any given moment is produced, manipulated, and organized by hundreds of millions of individual users. Links embody the contingency of the Web, its historical and ever-changing structure of which humans are an essential element.

Thinking more in terms of a digital ecology or environment and less in terms of a human-versus-technology dichotomy, we can understand the Web, as James Hendler, Tim Berners-Lee, and colleagues recently put it, not just as an isolated machine "to be engineered for improved performance," but as a "phenomenon with which we interact." They write, "at the micro-scale, the Web is an infrastructure of artificial languages and protocols; it is a piece of engineering. However, it is the interaction of human beings creating, linking, and consuming information that generates the Web's behavior as emergent properties at the macro-scale."35 It is at this level of analysis, where the human and its technologies are inextricable and together form something like a digital ecology, that we can, for example, evaluate a recent claim of one of Google's founders. Discussing the future of the search firm, Page described the "perfect search engine" as one that would "understand exactly what I mean and give me back exactly what I want."36 Such an "understanding," however, is a function of the implicit normativity of the citational logic that Google's search engine shares with the Enlightenment footnote. These technologies never leave our desires and thoughts unmediated and unmanipulated.

But Google's search engines transform the normativity of the citational logic of the footnote in important and particular ways that have come to distinguish the digital age from the print age. Whereas an Enlightenment reader might have been able to connect four or five footnotes without much effort, Google's search engine follows hundreds of millions of links in a fraction of a second. The embodied human can all too easily seem to disappear at such scales. If, as I have done above, the relevance of technology has to be argued for in the Enlightenment, then the inverse is the case for our digital age: the relevance of the embodied human agent has to be argued for today.

On the one hand, individual human persons play a rather insignificant role in Google's operations. When we conduct a search on Google, the process of evaluation is fundamentally different from the form of evaluation tied to the footnote. Because Google's search engine operates at such massive scales, it evaluates and normalizes links (judges which ones are relevant) through a recursive function. PageRank is an iterative algorithm: all outputs become inputs in an endless loop. The value of something on the Web is determined simply by the history of what millions of users have valued—that is, its inputs are always a function of its outputs. It is a highly scaled-up feedback loop. A Google search can only ever retrieve what is already in a document. It can only ever find what is known to the system of linked documents. The system is defined not by a particular object, operator, or node within the system but rather by the history of the algorithm's own operations. If my son's Web page on the construction of his tree house has no incoming links, then his page, practically speaking, does not exist according to PageRank's logic. Google's web crawlers will not find it—or if they do, it will have a very low rank—and thus, because we experience the Web through Google, neither will you. The freedom of the Web—the freedom to link and follow links—is a function of the closed and recursive nature of the system, one that includes by necessarily excluding. Most contemporary search engines, Google chief among them, now share the assumption that a hyperlink is a marker of authority or endorsement. Serendipity is nearly impossible in such a document-centric Web. Questions of value and authority are functions of and subject to the purported wisdom of the digital crowd, which is itself a normalized product of an algorithmic calculation of value and authority.37 The normative "I" that Google assumes, the "I" that Page's perfect search engine would understand, is an algorithmic self. It is a function of a citational logic that has been extended into an algorithmic logic. It is an "I" constructed by a limited and fundamentally contingent Web marked by our own history of searches, our own well-worn paths. What I want at any given moment is forever defined by what I have always wanted or what my demographic others have always wanted.
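The recursive loop described above, in which each round's outputs become the next round's inputs, can be sketched in a few lines. This is the standard published formulation of PageRank in miniature, not a description of Google's production system; the damping factor of 0.85 is the value Brin and Page report in their 1998 paper, and the handling of pages with no outgoing links is simplified here (real implementations redistribute that rank).

```python
# A minimal PageRank sketch, reusing the toy `links` graph from the
# earlier sketch. Outputs become inputs, round after round.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}  # uniform starting guess
    for _ in range(iterations):
        # Every page gets a small baseline, then inherits rank from its citers.
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for source, targets in links.items():
            if not targets:
                continue  # simplified: a page with no out-links passes on nothing
            share = rank[source] / len(targets)  # normalize by out-degree
            for target in targets:
                new_rank[target] += damping * share
        rank = new_rank  # this round's output is the next round's input
    return rank

# The unlinked treehouse page never rises above the small baseline rank,
# so, practically speaking, it barely exists for the system.
print(pagerank(links))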
On the other hand, individual human persons are central agents in Google's operations because they author hyperlinks. Columnists like Paul Krugman and Peggy Noonan make decisions about what to link to and what not to link to in their columns. Similarly, as we click from link to link (or choose not to click), we too make decisions and judgments about the value of a link and thus of the document that hosts it. Because algorithms increase the scale of such operations by processing millions of links, however, they obscure this more human element of the Web. All of those decisions to link from one particular page to the next, to click from one link to the next, involve not just a link-fed algorithm but hundreds of millions of human persons interacting with Google every minute. These are the human interactions that have an impact on the Web at the macro-level, and they are concealed by the promises of the Google search box.

Only at this macro-level of analysis can we make sense of the fact that Google's search algorithms do not operate in absolute mechanical purity, free of outside interference. Only if we understand the Web and our search and filter technologies as elements in a digital ecology can we make sense of the emergent properties of the complex interactions of humans and technology: gaming the Google system through search-optimization strategies, or the decision by Google employees (not algorithms) to ban certain webpages and privilege others (ever notice the relatively recent dominance of Wikipedia pages in Google searches?). The Web is not just a technology but an ecology of human-technology interaction. It is a dynamic culture with its own norms and practices. New technologies, be it the printed encyclopedia or Wikipedia, are not abstract machines that independently render us stupid or smart. As we saw with Enlightenment reading technologies, knowledge emerges out of complex processes of selection, distinction, and judgment—out of the irreducible interactions of humans and technology. We should resist the false promise that the empty box below the Google logo has come to represent—either unmediated access to pure knowledge or a life of distraction and shallow information. It is a ruse. Knowledge is hard won; it is crafted, created, and organized by humans and their technologies. Google's search algorithms are only the most recent in a long history of technologies that humans have developed to organize, evaluate, and engage their world.

Endnotes

1. Ann Blair, "Information Overload, the Early Years," The Boston Globe (28 November 2010).

2. Mark N. Hansen, Embodying Technesis: Technology beyond Writing (Ann Arbor: University of Michigan Press, 2010) 235.
3. Kevin Kelly, "Scan This Book!," The New York Times (14 May 2006).
4. Randall Stross, "World's Largest Social Network: The Open Web," The New York Times (15 May 2010).
5. The most euphoric among them speak of a coming "singularity" when computer intelligence will exceed human intelligence.
6. For a less utopian and more nuanced account of a post-human era, see Friedrich Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young and Michael Wutz (Palo Alto: Stanford University Press, 1999).
7. Nicholas Carr, "Is Google Making Us Stupid?: What the Internet Is Doing to Our Brains," The Atlantic (July–August 2008). See also the expansion of his argument in The Shallows: What the Internet Is Doing to Our Brains (New York: Norton, 2010).
8. Quoted in Ann Blair, Too Much to Know: Managing Scholarly Information before the Modern Age (New Haven: Yale University Press, 2010) 15. The following historical account draws on Blair's work.
9. Quoted in Stuart Brown, "The Seventeenth-Century Intellectual Background," The Cambridge Companion to Leibniz, ed. Nicholas Jolley (New York: Cambridge University Press, 1995) 61 n28.
10. Johann Gottfried Herder, Briefe zur Beförderung der Humanität (Berlin and Weimar: Aufbau-Verlag, 1971) II: 92–93.
11. A review of Christian Thomasius's Observationum selectarum ad rem litterariam spectantium [Select Observations Related to Learning], volume II (Halle, 1702), which was published in the April 1702 edition of the monthly British newspaper History of the Works of the Learned, Or an Impartial Account of Books Lately Printed in all Parts of Europe, as cited in David McKitterick, "Bibliography, Bibliophily and Organization of Knowledge," The Foundations of Knowledge: Papers Presented at Clark Library (Los Angeles: William Andrews Clark Memorial Library, 1985) 202.
12. Johann Georg Heinzmann, Appell an meine Nation: Über die Pest der deutschen Literatur (Bern: 1795) 125.
13. Immanuel Kant, Philosophical Encyclopedia, 29:30, in Kant's Gesammelte Schriften, ed. Königliche Preußische (later Deutsche) Akademie der Wissenschaften (Berlin: Walter de Gruyter, 1902–present).
14. See Richard R. Yeo, Encyclopaedic Visions: Scientific Dictionaries and Enlightenment Culture (Cambridge: Cambridge University Press, 2001).
15. Blair, Too Much to Know, 34.
16. Blair, Too Much to Know, 53.
17. Blair, Too Much to Know, 8.
18. Herder quoted in Ernst Behler, "Friedrich Schlegels Enzyklopädie der literarischen Wissenschaften im Unterschied zu Hegels Enzyklopädie der philosophischen Wissenschaften," Studien zur Romantik und idealistischen Philosophie (Paderborn: Schöningh, 1988) 246.
19. From an interview with Nicholas Carr.
20. Kittler xxxix.
21. Elizabeth L. Eisenstein, The Printing Revolution in Early Modern Europe (New York: Cambridge University Press, 2005) 332.
22. Francis Bacon, "On Studies," Essays with Annotations (Boston: Lee and Shepard, 1884) 482.
23. Much of this work has been done in German-language scholarship. For an English-language overview, see Guglielmo Cavallo and Roger Chartier, eds., A History of Reading in the West (Amherst: University of Massachusetts Press, 1999).
24. See Jonathan Sheehan, The Enlightenment Bible: Translation, Scholarship, Culture (Princeton: Princeton University Press, 2005).
25. Immanuel Kant, Anthropologie, 7:208, in Kant's Gesammelte Schriften, ed. Königliche Preußische (later Deutsche) Akademie der Wissenschaften (Berlin: Walter de Gruyter, 1902–present).
26. Hansen 271n8.
27. Anthony Grafton, The Footnote: A Curious History (Cambridge, MA: Harvard University Press, 1997) 32. Footnotes of this sort go back to at least the seventeenth century. John Selden's History of Tithes (1618) and Johannes Eisenhart's De fide historica (1679), which emphasized the importance of citing sources, reveal the process of knowledge production.
28. Grafton 32.
29. John Battelle, The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture (New York: Portfolio, 2005) 72.
30. Battelle 70.
31. James Gleick, The Information: A History, a Theory, a Flood (New York: Pantheon, 2011) 423.
32. Sergey Brin and Lawrence Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Computer Networks and ISDN Systems 30 (1998): 107–17.
33. Brin and Page.
34. Siva Vaidhyanathan, The Googlization of Everything (And Why We Should Worry) (Berkeley: University of California Press, 2011) 182.
35. James Hendler, et al., "Web Science: An Interdisciplinary Approach to Understanding the Web," Communications of the ACM 51.7 (July 2008): 60–69.
36. Larry Page, as quoted online.
37. Critics of Google's document-centric search technologies have long been promising the advent of a semantic web that would "free" data from a document-based web. Some see social media tools like Facebook and Twitter as offering something similar. For an early vision of what this might look like, see Tim Berners-Lee, James Hendler, and Ora Lassila, "The Semantic Web," Scientific American (17 May 2001): 34–43.

Selected Articles

Knowledge, Virtue, and the Research University
Chad Wellmon

This essay originally appeared in issue 15.2 of The Hedgehog Review, Summer 2013.

Four Contemporary Responses

Recently, a broad literature has chronicled, diagnosed, and attempted to solve what many have referred to as a "crisis" in higher education.1 Some authors tie the purported crisis to an out-of-touch faculty or lackadaisical students, while others blame a conservative or liberal political culture or the public's general distrust of universities. Amidst all of these anxious arguments, however, we can discern four basic types.

The first type is technocratic. These books tend to be sociological, data-driven critiques of the university as an institution. Exemplified by Richard Arum and Josipa Roksa's recent Academically Adrift, they cast a pall over the university by focusing on particular problems: low graduation rates, skewed admissions policies, indifferent faculty, disengaged students, or uncontrollable costs.2 In response, the authors of these studies offer specific, procedural suggestions for solving the university's various problems.

A second type of argument could be called qualified utopian. As one author argues, the university is quickly becoming an antiquated and irrelevant institution that needs "bold" solutions.3 These books see universities facing existential threats and call for intrepid entrepreneurs with the clarity of vision to cast aside outdated assumptions and forge a brand-new institution. They especially point to the unprecedented capacity of new digital technologies to disrupt universities.
A "tsunami" of digital innovation threatens to render the university irrelevant, just as it did the newspaper and music industries.4 But the same new technologies, if embraced, can also reinvent the university for the twenty-first century.

These two types of arguments provide cogent analysis, rousing critiques, and the promise of a different and better institution. The particular problems laid bare by the technocratic accounts are serious and need to be addressed. Likewise, universities do need a coherent response to the rapidly changing technological environment. The proposed solutions are generally piecemeal and could conceivably be carried out within existing institutional frameworks. But defenders of the university need to offer a clear account of why the crisis of the university is actually a crisis. Why should the university not be allowed to dissolve into a different, more efficient, more modern institution—one more technologically enhanced, economically lean, and socially relevant? What is the purpose of (or even need for) a university in the digital age, when there are more efficient means for transmitting knowledge? Particular solutions need to be framed in terms of a comprehensive account of why the university as an institution is worth defending.

Two other types of arguments do offer such an account. They also raise crucial questions about the future of the university, especially its role in democratic societies. The first of these could be referred to as the collegiate argument. Exemplified most recently by Andrew Delbanco's College: What It Was, Is, and Should Be, these books argue that universities ought to reclaim a unique college model that formed students into particular kinds of people.5 These authors defend a tradition of humanist knowledge and its celebration of human experience over the endless accretion of research.6 Knowledge, they contend, is a good in itself and not simply a function of its technical use. Furthermore, the college's basic humanism should inculcate in its students a devotion not only to "personal advancement but [also] to the public good."7

As even Delbanco notes, however, the world of the eighteenth- and nineteenth-century American college, to which many authors appeal, is not our own. The singularity of purpose and moral vision that defined it was a function of a homogeneous (white, male, Protestant) culture that reinforced the very ends of the college. The appeals of authors like Delbanco and Anthony Kronman, however, are not simply nostalgic. They raise difficult questions: What kinds of shared commitments and purposes can contemporary universities embrace? Are sectarian institutions the only universities capable of espousing what Alasdair MacIntyre calls a "unity of vision" defined by shared values?8 Furthermore, the best of these books distinguish the college from the university and implicitly raise the question of how scalable the college model is. How can a large public university provide a college or liberal arts education to 20,000 students? Given finite resources, is a college or liberal arts education a necessarily elitist enterprise? Is this necessarily bad?

There is a certain continuity between these celebrations of a college model and a fourth type of argument, which we could call democratic. Following a tradition that extends from Thomas Jefferson's Rockfish Gap Report through William Rainey Harper's The University and Democracy to Martha Nussbaum's recent Not for Profit, these arguments claim that the university should form students into democratic citizens.9 The college model of individual discovery and formation extends out, so to speak, to the formation of students for citizenship and civic responsibility.
These appeals constitute a particularly American combination of an ancient liberal arts tradition with democratic ideals. Like the collegiate arguments, the democratic arguments demonstrate the complexity of the challenges confronting universities by showing how they bear directly on an imperiled pluralistic democracy. Nussbaum, for example, exhorts universities to cultivate in students "the ability to think about the good of the nation as a whole, not just that of one's own local group."10 Like Delbanco, however, she has less to say about the sources and contours of such a common good. In encouraging universities to inculcate virtues like empathy, imagination, rational argument, and respect, Nussbaum raises a more basic question concerning, as Aristotle put it, the "ultimate good" toward which liberal arts students should be formed.11 Can universities articulate a vision of a common good in such a radically pluralistic society as our own? As Nussbaum makes clear, just because we live in a pluralistic society does not mean that we hold nothing in common. Our democracy needs common ends and certain kinds of citizens. One of the contemporary university's central questions, then, concerns the ethical sources of democratic commitments and belonging from which it might draw.12 What particular notions of belonging and solidarity—elements central to any democratic community—can the university cultivate? Questions about the moral ends of the university are bound up with questions about democracy as an ethical resource.

If these challenges were not fundamental enough, authors like Delbanco point to more basic institutional barriers. While many universities claim that they are committed to educating democratic citizens, many faculty members, writes former Harvard President Derek Bok, "display scant interest in preparing undergraduates to be democratic citizens, a task once regarded as the principle purpose of a liberal arts education and one urgently needed at this moment in the United States."13 Second, and just as importantly, the broader public seems to have lost confidence in the university's capacity for, or interest in, training democratic citizens. If the actions of the titans of finance and the politicians of democracy are any indication of how elite institutions form their graduates, then, reason many critics of the university, universities have either failed to form democratic citizens, or perhaps democratic citizens are not nearly as moral as many had hoped.

These four types of arguments raise crucial questions, but they do not address one of the central ideological features of the modern research university, a feature that often renders attempts to speak of its ethical ends incoherent: the fact that the university has historically conceived of itself as an autonomous or semi-autonomous social institution.14 Nobody has so felicitously defended this account of the university as Stanley Fish, a Milton scholar turned law professor and public spokesman for the university. Beginning with his admonitions against turning the classroom into a site for political advocacy, Fish has steadfastly refused to ground the university in anything but itself.15 Whereas many justify the university through appeals to its economic or social utility—studying literature or history will make students critical thinkers and thus better citizens—Fish argues that the university's only justification is internal. The only way to defend the university is to tell the story of the university on its own terms—that is, the story of an institution that is self-regulating, autonomous, and internally coherent.16
Central to this account is the assumption that the university is self-normed. The university's standards of behavior and success are determined by professional standards of conduct that bear little to no relationship to external notions of what is good or bad. On this account, a university professor is like a plumber. Both belong to a specific professional guild that determines not only the boundaries of the profession (who gets in and who is excluded) but also its standards of success. The value and excellence of a particular practice—repairing a clogged pipe or writing an article—are determined in accord with standards internal to the practice itself. Understood from this highly functional perspective, the university is an autonomous institution, and it would make little sense to appeal to broader ethical norms or notions of the good, because the university's governing norms are assumed to be independent of other cultural notions of the good. Understood in this light, attempts to tie the university to broader notions of the good, public or otherwise, challenge the highly normative claim that the university is a strictly functional institution. Faculty members' oftentimes confused, sometimes incoherent, and always erratic attempts to speak of the university's moral purposes or ethical ends—and their unease with colleagues who do so regularly and confidently—make historical and institutional sense.

The arguments made on behalf of the university more recently ring hollow for many both inside and outside the university because the university has detached itself from any vision of or claims about a common good. The university purports to be its own end. (The obvious exception to this description, of course, is many of the sciences and policy-oriented social sciences that cast themselves as technological instruments for solving social problems.) But this detachment—even if it is primarily ideological—has had consequences. Why should the public care about an institution that has isolated itself? University faculty's reluctance or inability to speak clearly and forcefully about the ethics of knowledge and the university is leading, or already has led, to an erosion of the public's belief that universities are more than mere technologies for transmitting knowledge and credentialing aspiring laborers. The contemporary public, especially in the United States, is losing confidence in the university's ability, capacity, and willingness to safeguard and carry out the deeply ethical project that is the human knowledge project.17

The Moral Sources of the University

Along with the church and the state, universities are among the oldest and most central social institutions of Europe and Western culture. From their beginnings in Bologna, Paris, and Oxford, universities have always been cultural and social institutions that created, evaluated, and authorized knowledge. These activities brought them into complex relationships with a broader culture, be it thirteenth-century Paris or nineteenth-century Berlin. But universities have always had clear sources from which to draw their norms, ends, and virtues, which they could then adopt and adapt to clarify their own particular ends.
Every historical crisis of the university—from the challenge posed to the scholastic universities by humanism in the fifteenth and sixteenth centuries, to the near collapse of the Enlightenment university around 1800 in Germany, to Charles Eliot's transformation of Harvard College into Harvard University at the end of the nineteenth century—has centered on debates about the proper ends of a university and the normative sources that would fund those ends.

The medieval university, for example, was a unitary corporation of students and masters bound together by Christian values and scholastic practices, like the lecture and the disputation.18 Initially without buildings or infrastructure, the first universities in Paris and Oxford appealed to Christian values and to theology as the supreme science to ensure the university's universal authority. As a studium generale, the medieval university welcomed students from, and granted them rights to teach, anywhere in Christendom.19 It consistently laid claim to an authority that transcended local interests and divisions. Undergirding this claim to authority were the legal and financial support of the Church, a universal curriculum and language (Latin), and ethical forms and virtues taken from Christian traditions. Most of the medieval university's basic values and virtues were grounded in generally shared beliefs in, for example, a cosmological order accessible to human reason, humans' fallen nature, the value of speculative or theoretical knowledge, and the authority of tradition. The medieval university scholar was thus characterized by a particular ethos, and the university embodied a desire to understand the rational order of God's creation and "general ethical values like modesty, reverence, and self-criticism."20 Although particular universities adopted, adapted, and even resisted specific Christian theological claims, medieval universities were generally grounded in a Christian theological tradition.

The Enlightenment University

Beginning in the late seventeenth century and continuing through the eighteenth century, many Enlightenment critics both inside and outside the university argued that universities were simply medieval institutions that inculcated nothing but blind submission to the church.21 The confessional, medieval university obscured the true ends of knowledge and science—not contemplation of the divine but rather the practical needs and demands of a broader public and state. Over the course of the eighteenth century, some universities sought to redefine themselves as dedicated to a truth that was practical and socially useful. Universities like the University of Göttingen, established in Germany in 1737, embraced a public or state good, the pursuit of a better future, and a progress judged by material well-being. These Enlightenment universities were guided by different ends than those of the medieval university, which remained predominant throughout Europe until the end of the eighteenth century. The Enlightenment university's manifold purpose was to produce 1) state revenues (through student fees), 2) more efficient members of the burgeoning state bureaucracy, 3) technical solutions to particular problems, and 4) in its Jeffersonian democratic version, democratic citizens.
The Modern Research University

The modern research university emerged in Germany at the beginning of the nineteenth century and in the United States in the latter part of the nineteenth century, with institutions like Johns Hopkins and the University of Chicago. In Germany, the new university model was in part a response to a crisis of the university. Between 1720 and 1800, German university enrollment—once among the highest in Europe—dropped by over 50 percent.22 By the end of the Enlightenment era in Germany, many students had begun to forgo the broad-based, liberal arts and humanist education of the philosophy faculty and enroll directly in one of the three professional faculties: theology, law, or medicine.

There were two basic critiques leveled against the German Enlightenment university around 1800. First, many critics argued that in its relentless pursuit of social relevance, the university risked becoming little more than a glorified trade school. What was the purpose of a medieval institution when auxiliary institutions and schools—like mining academies, medical clinics, and veterinary schools—could more efficiently train students for specific professions? Second, some commentators began to wonder whether the proliferation of print in the last decades of the eighteenth century posed a real threat to the university. As books became cheaper and more widely available, what reason did a young man have to pay to listen to a professor lecture from a text? As the philosopher J. G. Fichte put it, if universities continued to present students "the entire world of books, which already lies printed before everyone's eyes," universities would soon become redundant and irrelevant.23 What was the purpose of a university in an age of print? What kind of authority could it lay claim to if it were to distinguish itself from the medieval university's commitment to theology, the Enlightenment university's commitment to the state, and the increasing authority of an expanding culture of print?

The most important response to the German university's crisis of purpose emerged from an intense debate about the very idea and purpose of the university that raged in Prussia between 1795 and 1810. This debate culminated in the establishment of a new university in Berlin in 1810: the Friedrich Wilhelm University, or the University of Berlin. Under the bureaucratic leadership of the Prussian statesman and scholar Wilhelm von Humboldt, a broad range of Prussian ministry officials, scholars, and philosophers, including the theologian Friedrich Schleiermacher and Fichte, offered a new language through which to reimagine the purpose of the university: Wissenschaft, or specialized, discipline-based science. The university, they argued, was the institution of knowledge. Not the church, not the state but, in the words of Humboldt, "the cultivation of science in the deepest and broadest sense" would be the orienting purpose of the university.24 The university should be the singular institution devoted to cultivating science or scholarship as, in the words of Fichte, a morality, an ethics, a way of life.
Although many of the research university's basic elements had their precedent in the University of Göttingen, Humboldt and his colleagues developed a clear moral language and concept for the new university model.25 They cast specialized science as an ethos, a disposition, and sought to institutionalize it in a university structure, replete with exam committees, elaborate governance structures, seminars, hiring practices, and reorganized libraries. In the contemporary research university, faculty take these practices and the ethos of specialized science for granted. They inhabit an institution that is assumed to give itself its own norms and sustain its own way of life.26

Philology as the Exemplary Discipline

The lasting achievement of the early German research university, then, was to give specialized science an institutional form and thus guarantee its continuity and efficacy. Like any institution, the new research university developed and incorporated structures, practices, and methods to train and form students for a life of science. As Fichte put it, the true scholar pursues science as a comprehensive form of life. Driven by the "genius of industriousness," he devotes his every waking minute to his particular science.

Throughout the nineteenth century, one discipline in particular embodied the logic and practices of specialized science: classical philology (the study of ancient texts and cultures). Not physics, chemistry, or biology but philology was the consummate discipline. For generations of German scholars in every field, the philologist embodied the virtues of modern scholarship: "industriousness, attention to the most minute of details, devotion to method…an ethic of responsibility, exactitude, as well as a commitment and facility to open discussion."27

The primary site of this inculcation into specialized science was the university seminar, the precursor to modern graduate programs. The nineteenth-century university seminar had its origins in the seminaries and theology seminars of early eighteenth-century Pietist Germany, especially in Halle, where seminaries began as institutions for training ministers. Over the course of the eighteenth century, they became institutions for training teachers, and by the early nineteenth century, they had changed once again and become institutions for training scientists and researchers, especially philologists.28

In 1707, August Hermann Francke, a Pietist and professor of ancient languages at the university in Halle, established the first seminar in Halle (seminarium selectum praeceptorium) to train teachers for his Pädagogium, a secondary Latin school established a decade earlier. The seminar trained theology students from the university in ancient languages, modern philology, and biblical scholarship. These students then went on to train teachers in the Pädagogium, where, as "scholaren," they guided younger students toward piety through "edifying discourse and good example." This included evening and morning prayer, the reading of scripture, and extensive linguistic training.29 In contrast to traditional humanist learning, which focused on ancient Greek and Roman texts, Francke's seminar emphasized exercises in ancient languages (Latin, above all) and focused almost exclusively on the study of the Bible.
The epitome of Halle's philological culture, however, was the Collegium Orientale theologicum. Established in 1702 by Francke, the Collegium trained a select group of university students in ancient languages and biblical exegesis. Besides Hebrew, students were also expected to learn Chaldean, Syriac, Samaritan, Arabic, Ethiopic, and rabbinic Hebrew. Like the seminar, the Collegium was devoted almost exclusively to the scholarly study of the Bible. Francke's Pietist seminars embraced the most modern of scholarly techniques and materials: students were required to study six languages and master the methods of high textual criticism and emendation.

The Pietist seminars in Halle combined these objective techniques with a rigorous moral education in order to produce particular subjective experiences.30 As Francke put it, a better, philologically enhanced Luther Bible would help people (scholars and non-scholars alike) experience God's word more intensely. Scholarly methods would help make better Christians. Training in objective, scholarly techniques would produce particular types of ethical subjects. This basic premise would guide the development of seminars from the early eighteenth century through the nineteenth century. The key difference was that the seminars that emerged later in the century eschewed the Christian orientation of their Pietist predecessors, whose ultimate end was the production of a personal relationship with God.

In 1738, a philology seminar was established at the University of Göttingen that from its beginning was devoted not to Christian piety but to the study of antiquity. The Göttingen seminar sought to form not skilled readers of the Bible but skilled readers of ancient Roman and Greek texts. And yet it shared the Pietist seminar's methodological assumption that training in objective techniques would produce particular kinds of ethical subjects. The Göttingen seminar sought to habituate students, as the seminar ordinance put it, into "a high estimation of antiquity in general," such that the ancient writers would be "an eternal monument of human reason and other good virtues."31

In 1787, the German classicist Friedrich August Wolf, a student of the Göttingen seminar, founded a philology seminar at the University of Halle that from its beginning emphasized the cultivation of the student not as a Christian or a humanist but as a philologist, a specialized scholar. Wolf criticized the loosely humanist seminars for their lack of philological rigor. His seminar developed excellence in philological methods by placing small groups of students in intimate contact with a single professor, who would drill them in specific exercises.32 The goal was the formation of philological virtues, above all industriousness and exactitude. And yet in 1806, just as philology was becoming a distinct discipline, Wolf began to worry that the increasingly specialized character of philology might obscure its original ethical aims—the formation of students according to ancient virtues. Technical skill, virtuosity, was replacing virtue.

In Berlin, Humboldt attempted to resolve the tension between the production of ever more technical knowledge and the formation of a particular type of person and character by casting science as its own form of life. He sought to reconcile the tension between technical research and moral formation by embedding academic professionalization—the imperative to publish, to divide intellectual labor according to specialization, to focus on details—in a set of ethical ideals.
Specialized science, he claimed, gave the scholar a moral orientation, a source of meaning and authority, and made him a member of an ethical community: a community of researchers who were contributing to a human knowledge project brick by brick, technical insight after technical insight. The university was his home, church, and nation.

Founded in 1812, the philology seminar was the first seminar to be established at the University of Berlin. Its purpose, according to its founding ordinance, was to educate those who were properly prepared for classical philology [Alterthumswissenschaft] "through a broad range of exercises that led into the depths of science, and through literary support of all kinds, so that through them this study can in the future be maintained, propagated and extended."33 The broader pedagogical goals of the seminar, so central to those in Halle and Göttingen, were subordinated to the more basic task of forming future philologists. The seminar's goal was to close the gap between external goods (broader notions of the good external to the university) and the internal goods of philology as a science. The expectation was that students would eventually grasp and devote themselves to the goods internal to the practice of philology and excel at what philology as a distinct practice required. According to the seminar's logic, becoming an excellent philologist was a good in itself and thus required no further justification. Philology constituted its own way of life.

By tying the logic of science to the institution of the university, science could become a viable form of life replete with its own set of virtues: industriousness, attention to detail, self-discipline and restraint, openness to debate. Above all, science entailed a devotion to something that exceeded the self: science itself. The university would not need the church, the state, ancient Greece, or any other source for its norms—science would serve as the normative source for the university.

Within a decade of its founding, however, the philology seminar left questions about the good life and a common good aside and focused instead on historical reconstructions of particular passages, methodological and technical innovations, and debates within an increasingly restricted circle of specialists. As one German philologist put it in 1820: "we're turning out men who know everything about laying the foundations but forget to build the temple."34 German research universities were producing hyper-specialized scholars who had no idea why they were devoting their lives to one Greek preposition. But they did not need to know, because philology was self-normed. Internally, the success of the philology seminar led to ever-greater fragmentation of the broader discipline, as philologists focused on increasingly specific and technical questions. Externally, philology gradually detached itself from the broader, non-technical culture. Philologists saw no need to justify their activities or commitments to a public who could not understand their work anyway.
The Virtues of Specialization

In a speech in Munich in 1918, the German sociologist Max Weber described with devastating, detached effect what had become of the modern German research university and the specialized science it embodied: science could not bear the weight of the ethical demands that its forefathers in Berlin had placed upon it. Institutionalized science was not a way of life; it was a profession. Like any profession, it had little to say about questions of value, meaning, worth, and the good. Questions about how one should live, claimed Weber, were of no concern to science. The logic of specialization was ineluctable, and attempts to glean moral guidance from science were misplaced, elegiac longings for an enchanted world that had long since disappeared. And yet despite Weber's claims, which continue to haunt the research university, there was a clear moral imperative in what has been referred to as Weber's "value neutrality" [Wertfreiheit]:

To the person who cannot bear the fate of the times like a man, one must say, may he return silently without the usual publicity buildup into the wide and comforting arms of the church…. For me, [such a return] stands higher than academic prophecy [pronouncing values and ends in the classroom], which does not clearly realize that in the lecture rooms of the university no other virtue holds but intellectual virtuousness [Rechtschaffenheit].35

Despite his insistence on the non-normativity of science, Weber's exhortation to act "like a man" contains a deep appeal to virtue. The question of whether one embraces the meaninglessness of science or the comforting enchantments of the church is not simply propositional—a matter of whether the dogmas are true or false—but rather, and more importantly, a reflection of who the scholar is as a person, a statement about the scholar's dispositions. The capacity to exclude systematically broader questions of meaning and value, to focus simply on the "needs of the day"—the university's rules, procedures, and practices—these are the virile virtues of the modern scientist. These are also the virtues that characterized the nineteenth-century philologist: industriousness, attention to detail, and adherence to method. This was the very bedrock of the modern university's moral authority. Despite Weber's claims to the contrary, science did lay claim to value, meaning, and worth. Weber's man of "intellectual integrity" was another formulation of the virtues of the seminar and the normative structure of specialized science.

Conclusion

This brief history of the moral sources of the university demonstrates that the university has always been based on normative commitments: visions and claims about what is worthy, valuable, and good. Until the emergence of the research university, however, universities had generally acknowledged the ethical resources that supported them and gave them life and purpose. The research university, in contrast, has from its beginnings in Berlin obscured its ethical orientation behind the language and logic of specialized science. It has always struggled to articulate its ends as it struggled to reconcile virtuosity with virtue, technical skill with ethical formation. This conflicted institution is the one that we still inhabit, and we have inherited its ethical incoherence.
In this light, we can understand the reticence of many faculty members to invoke the moral character of the university as systematic, historical, and institutional. Contemporary university scholars have by and large been formed in and by an institution that has historically eschewed the language of ethics and purpose. The virtues of the nineteenth-century German philology seminar are the same virtues to which contemporary scholars aspire. But contemporary faculty do not tend to understand them as virtues, as dispositions oriented toward a broader good, as constituting an ethos that gives meaning and purpose. Weber's story about an all-powerful, highly functional modernity continues to hold sway. And yet this totalizing story is insufficient. In their attempts to defend the very idea of a university, faculty members and advocates of the university appeal to vestiges of universities past, an odd admixture of the ethical elements of the medieval, Enlightenment, and modern research universities. The university has always been a normative institution. It has always embodied particular notions of what is valuable and authoritative, however explicitly or implicitly, and faculty will increasingly need to cultivate and marshal this ethical heritage if they want to defend the university as a relevant social force in the twenty-first century.

Second, declinist narratives that long for an institution with, as Clark Kerr put it, a "single soul to call its own" are not productive.36 These stories of institutions lost to modernity, secularization, political correctness, or any number of other social bugaboos distract us from the specific cultural demands and needs of our day. We live in a highly pluralistic society in which there are sundry and competing notions of the good, and we should be under no illusion that universities can or should be oriented toward one notion of the good as defined by one particular tradition. The world of the eighteenth- and nineteenth-century American college is gone. The purported unity of purpose and clear moral education of the American college was possible because of the uniformity of its community. It is neither possible nor desirable to return to nineteenth-century New Haven or thirteenth-century Paris.

Finally, just as it is impossible to return to the nineteenth-century college, so too is it increasingly difficult to defend publicly an institution that stands for nothing but its own operations. How are we to convince the public to support an opaque, self-enclosed institution (especially its humanities and humanistic social sciences)? It is imperative that faculty articulate the value, goods, virtues, and authority that the university contributes to a pluralistic democracy and its citizens.
Even the steadfast defender of professionalization, Stanley Fish, has wondered whether there might come a time when such arguments will no longer be sufficient for the demands of the day.37 In order to refine their arguments and move the institution forward, the university's defenders should more explicitly embrace the norms, virtues, and goods that have made it one of our oldest and most important institutions.
From the medieval university, the contemporary university might embrace virtues like epistemic humility, respect for traditions, and the desire to understand. From the Enlightenment university, it might embrace a commitment to a broader social good. From the modern research university, it might embrace the virtues of a community of researchers: diligence, attention to detail, openness to debate, and the demand for evidence. The university forms particular types of people, but those who hope to defend it need to be more reflective and articulate about how and to what ends it does so.

Any compelling defense of the university as a normative institution, however, also requires a robust defense and embrace of the normative character of the individual people who constitute it. Faculty members and students come to the university with their own desires, hopes, and visions of the good and the human person. In a pluralistic society, the university has to ask systematically how the manifold ends and purposes of the people who inhabit it might fit together, but it can only do that if it acknowledges that its scholars and students are full, embodied people with desires and, oftentimes, incommensurable notions of the good. Universities need to discard the ruse that their faculty can leave their notions of what is valuable and worthy at the campus gate; this ruse is both false and detrimental to the university. One of the most important challenges facing the university today concerns how it can encourage and support a constructive handling of deep differences about the good within the framework of its own institutional commitments and history. How, that is, can the university both form its faculty and students in its own traditions and virtues and encourage them to bring their different traditions and visions of the good to bear on big, common questions?

Endnotes

1. On the perpetual "crisis" in higher education, especially in the humanities, see Frank Donoghue, The Last Professors: The Corporate University and the Fate of the Humanities (New York: Fordham University Press, 2008).
2. Richard Arum and Josipa Roksa, Academically Adrift: Limited Learning on College Campuses (Chicago: University of Chicago Press, 2011). See also Jerome Karabel, The Chosen: The Hidden History of Admission and Exclusion at Harvard, Yale, and Princeton (New York: Mariner, 2005); William G. Bowen, Matthew M. Chingos, and Michael S. McPherson, Crossing the Finish Line: Completing College at America's Public Universities (Princeton: Princeton University Press, 2009).
3. Mark Taylor, Crisis on Campus: A Bold Plan for Reforming Our Colleges and Universities (New York: Knopf, 2010).
4. Stanford President John Hennessy, as quoted in Ken Auletta, "Get Rich U," The New Yorker (30 April 2012).
5. Andrew Delbanco, College: What It Was, Is, and Should Be (Princeton: Princeton University Press, 2012). See also Anthony Kronman, Education's End: Why Our Universities and Colleges Have Given Up on the Meaning of Life (New Haven: Yale University Press, 2007) and Mark William Roche, Why Choose the Liberal Arts (Notre Dame: Notre Dame University Press, 2010).
6. Delbanco 96–101 and Kronman 91–136.
7. Delbanco 175.
8. Alasdair MacIntyre, Three Rival Versions of Moral Enquiry: Encyclopaedia, Genealogy, and Tradition (Notre Dame: Notre Dame University Press, 1990) 222.
9. See also Woodrow Wilson, Princeton for the Nation's Service (Princeton: Gillis, 1903).
10. Martha Nussbaum, Not for Profit: Why Democracy Needs the Humanities (Princeton: Princeton University Press, 2012) 25–26.
11. Nussbaum 26; Aristotle, Nicomachean Ethics, trans. H. Rackham (Cambridge, MA: Harvard University Press, 1968) I.i.4–ii.6.
12. Jürgen Habermas's recent reconsideration of a more traditional secularization thesis has been prompted by similar questions about democracy's ethical resources. See, for example, Jürgen Habermas, et al., An Awareness of What Is Missing: Faith and Reason in a Post-Secular Age, trans. Ciaran Cronin (Cambridge: Polity, 2010).
13. Quoted in Delbanco 149.
14. One of the few recent books to do so—Louis Menand's The Marketplace of Ideas: Reform and Resistance in the American University (New York: Norton, 2010)—all but throws up its hands at the iron cage of the rationalized professional university.
15. See, for example, Stanley Fish, Save the World on Your Own Time (New York: Oxford University Press, 2008).
16. For a taste of Fish's more recent arguments about the university, see "The Crisis of the Humanities Finally Arrives," The New York Times (11 October 2010).
17. Consider, for example, the annual New York Times article on the Modern Language Association's meeting, which always includes a mocking list of the most inane paper titles.
18. See, for example, Walter Rüegg, "Introduction," A History of the University in Europe: Volume I, Universities in the Middle Ages, ed. Hilde de Ridder-Symoens (Cambridge: Cambridge University Press, 1992) 3–34.
19. Jacques Verger, "Patterns," A History of the University in Europe: Volume I, Universities in the Middle Ages, ed. Hilde de Ridder-Symoens (Cambridge: Cambridge University Press, 1992) 41–45.
20. Rüegg 32–33.
21. See, for example, Notker Hammerstein, "Aufklärung und Universitäten in Europa: Divergenzen und Probleme," Universitäten und Aufklärung (Göttingen: Wallstein, 1995) 191–205.
22. See Charles E. McClelland, State, Society, and University in Germany, 1700–1914 (Cambridge: Cambridge University Press, 1980).
23. J. G. Fichte, Deduzierter Plan einer zu Berlin zu errichtenden höheren Lehranstalt, in Gelegentliche Gedanken über Universitäten, ed. Ernst Müller (Leipzig: Reclam, 1990) 59. Fichte is referring here to the historical practice of the university lecture, whereby a professor read directly from a canonical text and interjected his own commentary.
24. Wilhelm von Humboldt, "On the Inner and Outer Organization of Berlin's Institutions of Higher Knowledge," trans. Paul Reitter and Chad Wellmon, The Rise of the Modern Research University: A Sourcebook, ed. Louis Menand, Paul Reitter, and Chad Wellmon (Chicago: University of Chicago Press, forthcoming).
25. For an excellent account of Göttingen's innovations, see William Clark, Academic Charisma and the Origins of the Research University (Chicago: University of Chicago Press, 2006).
26. J. G. Fichte, Ueber das Wesen des Gelehrten und seine Erscheinungen im Gebiete der Freiheit (Berlin: Himburgische Buchhandlung, 1806).
27. Lorraine Daston, "Die Akademien und die Einheit der Wissenschaften: Die Disziplinierung der Disziplinen," Die Königlich Preußische Akademie der Wissenschaften zu Berlin im Kaiserreich, ed. Jürgen Kocka (Berlin: Akademie Verlag, 1999) 83.
28. Kathryn Olesko, Physics as a Calling: Discipline and Practice in the Königsberg Seminar for Physics (Ithaca: Cornell University Press, 1991) 1.
29. August Hermann Francke, Kurtzer Bericht von der gegenwärtigen Verfassung des Paedagogii Regii zu Glaucha vor Halle (Halle: Waysenhaus, 1710).
30. Jonathan Sheehan, The Enlightenment Bible (Princeton: Princeton University Press, 2006) 60–61.
31. From the seminar's statutes (Schulordnung), as quoted in Friedrich Paulsen, Geschichte des gelehrten Unterrichts auf den deutschen Schulen und Universitäten vom Ausgang des Mittelalters bis zur Gegenwart (Leipzig: Veit, 1885) 431.
32. See William Clark, "On the Dialectical Origins of the Research Seminar," History of Science 27 (1989): 111–54.
33. "Seminarordnung," originally published in the Universitätskalender of 1813, reprinted in Die Vorlesungen der Berliner Universität, 791–92.
34. J. H. Voss, quoted in Anthony Grafton, "Polyhistor into Philolog: Notes on the Transformation of German Classical Scholarship, 1780–1850," History of Universities 3 (1983): 159–92, 173.
35. Max Weber, From Max Weber: Essays in Sociology, trans. H. H. Gerth and C. Wright Mills (New York: Oxford University Press, 1946) 45. I have slightly altered this translation to highlight the virtue language that Weber uses in the original.
36. Clark Kerr, The Uses of the University (Cambridge, MA: Harvard University Press, 1963) 45.
37. Stanley Fish, There's No Such Thing as Free Speech: And It's a Good Thing Too (New York: Oxford University Press, 1994) 242.

University of Virginia
Watson Manor
3 University Circle
Charlottesville, VA 22903
(434) 924-7705
www.iasc-culture.org