What I tell my college students about using ChatGPT for essays.
The writing sounded like the typical 3 a.m. word-vomit of course concepts—the sentences relaying, at best, a superficial understanding of what we had done this semester and the argument only vaguely responsive to the prompt. It was the sort of paper that usually makes me wonder: Did this student even come to class? Did I communicate anything of any value to them at all?
Except there were no obvious tells that this was the product of an all-nighter: no grammar errors, misspellings, or departures into the extraneous examples that seem profound to students late at night but definitely sound like the product of a bong hit in the light of day.
Perhaps, just before the end of the semester, I was seeing my very first student essay written by ChatGPT?
I put some of the text into one of the new A.I. writing detectors. But before I ran the test, the ridiculousness, and maybe hopelessness, of what I was about to do dawned on me: I’d be using a machine to see if another machine had written a student essay. I realized I am now living in a Turing test: I, the human, can no longer be completely sure whether I’m reading the work of another human or a copied-and-pasted answer produced by generative A.I.
Fall semester, I was worrying about essays being bought off the internet—hard to police, but usually so off-topic that students do poorly anyway. Spring semester, the rules of the game have changed entirely: As I finish the term, I must be on guard for papers that a machine has written for my students.
After running the test (10.1 percent human-written, per the program), I put my head down on my kitchen table, overwhelmed by the onslaught of technological change that seems to have proceeded at light speed since January: generative A.I. for text, images, and art (and to a lesser extent, music and video), all further casting doubt on what we can trust and what is real.
For my sanity, I needed to know if both my internal BS detector and the automated GPT detector were right—that the essay was indeed the work of ChatGPT. In an email to the student, I gave them the option of disclosing whether they had used the A.I. tool, promising there would be no grade penalty or ethical repercussions for what was, at best, a B essay—after all, I hadn't explicitly disallowed it in the assignment. They had, and as with most efforts at cheating, it was because they felt tired, stressed, and desperate.
Technically, I had won my first (known) faceoff against a machine. But I didn't feel victorious.
I study and teach about media, politics, and technology, which means that helping people make sense of the disruptive potential of new media technologies for civic life is literally my job.
That has also meant that this semester has been among the most existentially challenging of my 17 years in the classroom—and I taught in D.C. during the 2016 election and the early years of the Trump presidency and on Zoom during the beginning of the pandemic (which taxed every molecule of my ADHD brain).
This year, I found myself not only playing ChatGPT whack-a-mole but also trying to come to terms with what might be the single most significant technological shift since the introduction of the smartphone. Beyond classroom mechanics, I’m finding it more urgent than ever to help my students (and myself) find the language to talk about the changes we’re experiencing and to develop the questions that we need to ask to make sense of it all.
The disruptive potential of generative A.I. consumed me. I wasn't alone, of course: The Atlantic proclaimed the college essay dead; my university created a pop-up class for students and interdisciplinary faculty to explore the ethics of A.I. and convened a set of webinars and meetings to help faculty get their heads around the new leviathan we were suddenly confronting.
Meanwhile, in each of my three classes, I’ve been fixated on teaching about information disorder, or the many ways that our information environment is polluted, from deepfakes to clickbait to hyperpartisan news. And while I could explain the processes and the incentives for creating and consuming misleading content, there were moments when I found myself completely overwhelmed by how quickly, and at what scale, GPT was already causing chaos.
"I’ve got nothing," I told my students in response to the fake Trump arrest photos that had been created by a journalist at a well-respected investigative journalism outlet (who, in his words, "was just mucking about"). We traced the timeline and talked about who might be vulnerable to the misinformation, but reality had provided a teaching moment that felt completely surreal–who knew what would be next? (I teach at a Catholic university, so at least the Pope-in-a-puffer-coat photo provided a bit more levity.)
Still, my students were set on over-mystifying information disorder, the attention economy, and Big Tech more generally, thus giving away their own agency to understand what's happening. "The algorithm" and "A.I." became bogeymen in my classes: all-purpose words that at once spell the end of careers and encapsulate anxiety about everything from graduation and the job hunt to attacks on LGBTQ rights and abortion.
When I hear my students discuss these new tech bogeymen, I’m reminded of the mistakes we’ve made in critiquing the news media. When the words themselves hold so much meaning and so many possible interpretations depending on the speaker, we unnecessarily surrender the precision and nuance we need to diagnose points of intervention and to separate our existential dread from more immediate threats to social justice, the environment, and democracy.
Consider the multiplicity of meanings for "fake news"—from memes online and politicians discrediting fact-based journalism to satire and late-night shows, among others. It becomes almost impossible to tell just who is calling what news fake and even harder to demand accountability.
Further, the collapsing of large categories or industries into single and unified entities overstates their ability to influence the public. Collectively, most Americans will say they don't trust "the media"—imagined as a cabal of shadowy actors manipulating the public through some sort of coordinated attempt at mind control.
But a few follow-up questions will eventually result in people providing exceptions for the media that they do trust, whether that is Fox, the New York Times, or some bizarre conspiracy channel on YouTube. The media is not a monolith—it's shaped by the desires, decisions, and questions of the people who consume it.
In the same way, when it comes to generative A.I., if we throw our hands up and mourn the end of the college essay, the end of lawyering, and even the possible end of humanity, we cede our agency to dictate tech's potential futures to the most powerful voices investing in it.
In class this semester, a student led a presentation that showed off ChatGPT's parlor-trick capacities: It designed a rudimentary website and told us a bad joke—Why did the tomato blush? It saw the salad dressing. The student presenter failed to underscore that ChatGPT could also be wrong, and students left that day muttering, "That's it, we don't have jobs anymore."
Persuading them otherwise has been extremely difficult—but for my students and for the public, the quickest way to feel hopeless in the face of seemingly unstoppable technological change is to decide that it is all-powerful and too complicated for an ordinary person to understand. This hopelessness paralyzes public critique, enabling tech companies to proceed unchecked.
There's a section on the syllabus for my Surviving Social Media class titled "What Hath God Wrought," after the first telegraph message ever sent—an apt question that stands the test of time and one that reflects the inability of our current language and imagination to comprehend the ways that technological advances might shape the future.
In this section of our course, students grapple with the unknowns of cryptocurrency, biohacking, robot love, and how our digital life continues after our mortal life ends. My undergraduates are able to define the questions these advances inspire, assess the current landscape, and identify probable futures at the individual and social level—without being technological savants or computer scientists.
That's what I was trying to get at with the section's title: The fact that Samuel F.B. Morse's first telegraph message from 1844, itself a riff on a Bible passage, can still pose a relevant question should give us some hope that we actually do have the vocabulary to take back agency over a world that increasingly seems closer to Terminator-type annihilation than ever before. (In our final assignment, I specifically instructed students that putting up a PowerPoint image of nuclear holocaust was not an acceptable answer to the prompt about worst-case scenarios.)
What I hope I have shown my students is that when we break down the umbrella terms that make it so impossible to truly capture this moment—"A.I.," "algorithm," "Big Tech"—it becomes possible to see how the same questions and points of departure for previous technological critiques have aptly prepared us for this very moment.
We can begin with some basics: What kind of artificial intelligence are you talking about and what specific function or use worries you? Who stands to make money from this particular fork in the tech? Who has the ability to implement regulation or foster further development?
Or perhaps more simply, we might do well to take into account what the technologist Jaron Lanier recently pointed out in the New Yorker: "The most pragmatic position is to think of A.I. as a tool, not a creature." When we remember this—that we have created these technologies as tools—we also remember that we have the ability to shape their use.
Practically speaking, I’m treating GPT like a calculator: Most of us used calculators in math class and still didn't get perfect grades. After discovering my first ChatGPT essay, I decided that going forward, students can use generative A.I. on assignments, so long as they disclose how and why. I’m hoping this will lead to less banging my head against the kitchen table and, at its best, be its own kind of lesson.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.