Adventures in Luddism: Part I
I teach a freshman writing class called Digital Culture and Counterculture, part of whose purpose you might call “consciousness raising.” This meant something once, and I’d like to think it can mean something still. But lately I find that my students don’t quite fit my agenda. The agenda, that is, of teaching subversion.
We start the semester out with Wendell Berry’s “Why I am not going to buy a computer,” penned (literally) in 1987. Berry despises what he calls “technological fundamentalism,” the tendency to assume by virtue of unconscious indoctrination that everything innovative is good. We hear the voices of this fundamentalism everywhere, Berry charges. And it leads to a sickening superciliousness whereby everything old appears outdated and subject to revision. What about sunlight, pen and paper, and the Royal standard typewriter he bought in 1956! Berry cries out. What about the sanctity of existing human relationships (his wife served as his editor) and the glorious tradition of writing by hand? At the end of his essay, Berry offers that “when somebody has used a computer to write work that is demonstrably better than Dante’s, and when this better is demonstrably attributable to the use of a computer, then I will speak of computers with a more respectful tone of voice, though I still will not buy one.”
All to no avail. When his essay was published in Harper’s, it generated several heated responses, which the magazine printed perhaps to highlight the fury of computer proponents even then. Berry is a hypocrite, most charged. Berry should recognize the wonderful new possibilities of digital technologies and stop wasting everyone’s time with his crusty, quasi-Luddite critiques. To the magazine’s great delight, Berry responded and put his finger in the dike. He knows he is a hypocrite. The problem of being “a person of this century,” to use his elegant phrase, is that there is no way not to be a hypocrite. We are all plugged into the energy corporations, Berry admits, and most of us guzzle petroleum products in our homes and on the roads outside them like there’s no tomorrow. (Eventually, perhaps, there won’t be one.) All we can do is choose where to draw the line and stick to it.
Berry drew the line at buying a computer. Yet many of Harper’s readers found this attempt at setting a principled example unsatisfactory. They saw his moral scrupulousness as self-indulgent, and his critique of wanton consumption as out of touch. Berry took special issue with this last charge. The root of technological fundamentalism, he argued, lay in his respondents’ passionate, almost fanatical, defense of the status quo:
At the slightest hint of a threat to their complacency, they repeat, like a chorus of toads, the notes sounded by their leaders in industry. The past was gloomy, drudgery-ridden, servile, meaningless, and slow. The present, thanks only to purchasable products, is meaningful, bright, lively, centralized, and fast. The future, thanks only to more purchasable products, is going to be even better. Thus consumers become salesmen, and the world is made safer for corporations.
When we read this passage in class I like to look around the room and notice my students’ responses. Do they identify with Berry’s critics? Are they moved by the ire that animates his eloquent rebuttal? Typically they seem unmoved, gazing forward at me as if I’m giving a TED Talk. Judging by the papers I receive a few weeks after this opening discussion, they find Berry’s argument unconvincing, partly for good reason. Berry was writing before the Internet and had no idea how significant computers would soon become. On a certain reading, his critique is myopic, unimaginative, and flat-out wrong in light of recent history.
One glaring error students often point to is Berry’s insistence that computers lack any political utility. “I do not see that computers are bringing us one step nearer to anything that does matter to me: peace, economic justice, ecological health, political honesty, family and community stability, good work.” Naturally, college freshmen evaluating this claim in 2013 have plenty of ammunition with which to gun it down. They seem to take great relish in highlighting Berry’s inaccuracies, as if invalidating him validates some unknown voice in the back of their heads which they know must be right.
Very few students take issue with technology in the terms Berry provides; instead they prefer the more up-to-date Douglas Adams and his 1999 essay “How to stop worrying and learn to love the Internet.” Adams himself is great at highlighting the unsightly myopia that tends to afflict writers like Wendell Berry. But his argument essentially turns on lauding all innovations as if they’re equal:
I suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on, but you would think we would learn the way these things work, which is this:
1) everything that’s already in the world when you’re born is just normal;
2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;
3) anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.
Apply this list to movies, rock music, word processors and mobile phones to work out how old you are.
Yes, isn’t that cute. We’re all indebted to the prejudices of our time. Perfectly natural that our parents and grandparents distrust the Internet and still worry about “privacy concerns.” They’ll be dead soon, anyway.
It would be nice if my students could synthesize Berry’s moralism with Adams’ pragmatism and come up with something more durable than either of them did. But most side with the pragmatists’ argument. After all, what choice do they have? None of them could get their schoolwork done without computers. And social life would be unimaginable without all their friends on Facebook. To preserve their sense of self—to preserve their sense of how the world works and how it should work—they have to argue against Wendell Berry; they have to resist his old-fashioned moralism even as they sense him breathing down their necks.
We came to a possible turning point last week when we discussed online dating. I assigned a 2011 New Yorker article by Nick Paumgarten called “Looking for someone: sex, love, and loneliness on the Internet,” thinking it would spur a good conversation. At first they were reticent, as usual. We talked about the positives and negatives of this quintessential hallmark of digital culture, and the big sociological shifts that enabled its formation. According to Paumgarten and biological anthropologist Helen Fisher, the rise of Internet dating rests on three major turning points: 1) the massive influx of women into the workforce, 2) the introduction of the Pill, and 3) rising divorce rates, all of which came to a head in the U.S. after 1945. As Fisher puts it, “Our social and sexual patterns have changed more in the last fifty years than in the last ten thousand.” Consequently, “our courtship rituals are rapidly changing, and we don’t know what to do.”
I hoped the existential implications of this dilemma would be manifest as we surveyed the contemporary dating scene. Match, OkCupid, PlentyOfFish, JDate, eHarmony, Chemistry (Fisher started this one under the auspices, and on the payroll, of Match’s parent company, InterActiveCorp), HowAboutWe, ScientificMatch…the list is nearly endless. All of these sites use different algorithms and presumably cater to different market niches. But the underlying principle is the same. According to Paumgarten, ScientificMatch “attempts to pair people according to their DNA, and claims that this approach leads to a higher rate of female orgasms.” Yet this only takes the approach of tamer (less ambitious?) sites to its outer limits.
What online dating is all about, I tell my students, is the principle of scientific management. We are all familiar with how this works in practice. When we find ourselves in the toothpaste aisle at the grocery store (likely a supermarket), we know that the available brands and accompanying brushes have all been vetted by multiple experts. This same knowledge applies to every consumer product: to cars, televisions, and of course, our personal computers. To live in the modern world, it seems we have to learn to depend on experts and the principle of scientific management. Otherwise we’ll be left behind in a fog of bad smells and other inefficiencies.
But where do we draw the line? At what point do we stop turning our lives over to scientists and their unimpeachably useful index of algorithms?
To dramatize the stakes I like to pose the following scenario (I’ve used it twice now, this semester and last). Imagine that sometime in the not-too-distant future a new online service has been developed. If you choose to use it, this service guarantees you a detailed account of how and when you will meet each of your romantic partners for the rest of your life. Names, dates, descriptions of physical proportions and breakups—everything is there, and upon reading it your fate is sealed. It is up to you whether or not to use this service. But the technology is available. The algorithm has been perfected. Instead of the messy, haphazard process of sorting your way through lived experience, going down this path blindly with this person, going down that path blindly with another, you can have complete certainty. There is no longer any margin of error.
After presenting this scenario in the eeriest tone I can muster, I ask my students by a show of hands how many of them would choose to use such a service. Their answer, at least as late as February 2013, always depends.