Jan 25, 2026
The Real Cost of AI in the Classroom
On Emersonian self-reliance and the machine
Haley Moller
Co-founder & CEO

“Nothing is at last sacred but the integrity of your own mind. Absolve you to yourself, and you shall have the suffrage of the world.”
—Ralph Waldo Emerson, “Self-Reliance”

Over the past several months, I've interviewed more than two dozen teachers and students across U.S. middle schools, high schools, and colleges. I expected to hear about grading crises, AI detectors, and academic-integrity committees, and I did. What I wasn't prepared for was something subtler and, to me, more alarming.
Several teachers told me variations of the same story. A ninth-grade English teacher in California put it bluntly: “My students are terrified of sounding wrong. They won't submit anything unless AI has ‘blessed’ it first.” A college instructor in the Midwest described office hours that had started to feel uncanny. “It's like they need permission to express themselves,” she said.
This isn't about laziness. If anything, it's the opposite. Many of these students are conscientious, ambitious, and anxious to do well. They are doing exactly what school has trained them to do: optimize for approval. But that approval still comes from teachers, not machines. The difference is that students no longer trust their own judgment about what will be acceptable; AI becomes an intermediary—a way to preemptively smooth their words, hedge against being wrong, and increase the odds that a human reader will approve what they submit.
A high-school English teacher with twenty years in the classroom described it as a shift in posture. “Before, students would argue with me about their interpretations,” she said. “Now they ask whether their interpretation is ‘valid.’ That word—valid—comes up constantly. As if meaning were something you had to validate externally.”
Students echoed this feeling, often without being prompted. A sophomore I spoke with admitted he runs nearly everything he writes through AI, even personal messages. “It's not that I can't write,” he said. “It's that if the AI can write it better, why wouldn't I use it? And then after a while you start wondering if what you'd say on your own is ever good enough.”
When every sentence is run through a machine to see whether it sounds “good enough,” the student never has to sit with uncertainty or develop the internal sense of judgment that writing is meant to build. That question (“Is my voice good enough?”) is not merely technical or stylistic. It's existential. And when it's answered externally, over and over again, the voice itself never has a chance to flourish.
Ralph Waldo Emerson urged his readers to “trust thyself: every heart vibrates to that iron string.” By self-reliance, he did not mean rugged individualism so much as a refusal to let external approval overrule one's own perceptions. Public opinion (the “joint-stock company,” as he called it) was already powerful in his day. Now it has a new shareholder in artificial intelligence. When a sixteen-year-old won't send a sentence until a system has labeled it “good,” that iron string slackens. Authority, once lodged in parents, teachers, and peers, migrates into a tool that feels objective, though it merely recombines other people's words. The risk is that students learn to trust the aggregate voice of the crowd more than the fragile, developing voice Emerson believed they were obliged to cultivate.
School has always involved proxies. We grade essays, tests, and projects not because they perfectly capture thinking, but because they are legible and scalable. For decades, students learned (often unconsciously) that success meant producing the right artifact. Artificial intelligence didn't invent that logic, but it has exposed its fragility.
When a machine can generate passable prose on demand, fluency stops being evidence of thought. But instead of prompting a wholesale rethinking of what we value, the immediate response in many classrooms has been defensive—more surveillance, more restrictions, more pressure on the artifact to “prove” its authenticity.
Students feel that pressure. A ninth grader admitted to me that she uses AI to write her assignments “out of necessity” since “everyone uses it.” It begins to resemble the logic of doping in competitive sports. When performance-enhancing drugs become widespread, the choice not to use them stops feeling like a moral stand and starts feeling like a self-handicap. Athletes who want to compete clean are forced to decide whether integrity is worth falling behind. For students, AI plays a similar role. Even those who would prefer to struggle through the work on their own feel compelled to use it just to keep pace, until what started as an optional aid becomes an implicit requirement of the game.
What gets lost in this dynamic is not just originality, but the experience of thinking itself. Writing—especially in the humanities—is not merely a way of displaying conclusions. It's a way of discovering what you think. When students bypass that struggle, they don't just save time; they miss the moment where uncertainty hardens into insight and where a half-formed idea becomes theirs.
Several teachers described trying to push back by banning AI outright. None felt it was working. “They just hide it better,” one said. “And the distrust gets worse—on both sides.” Students feel policed, teachers feel like detectives, and nobody feels as though they're learning.
What struck me, again and again, was how rarely anyone talked about thinking as visible work. The conversation stayed stuck on outputs: Was the essay AI-generated? Was the email polished enough? Did it meet the standard? The process—the false starts, the revisions, the moments of confusion—remained invisible, even though that is where learning actually happens.
One student said something that has stayed with me. “When I use AI, I feel calmer,” she said. “But when I turn it in, I feel less proud. Like it didn't really come from me.”
That trade-off, calm for pride and fluency for ownership, seems to be becoming the norm.
I don't think this shift is inevitable. But it does require changing what school asks students to show. If the only thing that “counts” is the final answer, then of course students will optimize for the best-sounding one. If, instead, what counts is the reasoning that leads there—the questions asked, the evidence weighed, the revisions made—then the incentives shift.
Several teachers I spoke to have begun experimenting with this, often informally. One now asks students to submit brief reflections explaining how their ideas changed while writing. Another holds short, low-stakes oral defenses. “The confidence comes back when they realize they can explain themselves,” she told me. “Even if the writing isn't perfect.”
The deeper issue AI has surfaced is not cheating. It's trust. Do we trust students to think? Do students trust themselves to have something worth saying?
In my conversations with teachers and students, I heard frustration, fatigue, and anxiety. But I also heard something else: relief, when students were given permission to struggle out loud, and excitement when teachers could finally see how students were thinking—not just what they produced.
The real cost of AI in the classroom is not cheating or automation; it is the quiet erosion of confidence in students' own intuitions and words. When every sentence can be instantly improved or replaced, students lose the chance to develop a voice they recognize as their own. I do not believe, however, that this is an inevitable consequence of artificial intelligence itself. It is a consequence of how today's models are designed: to be maximally helpful, and all too willing to do the thinking for you.
The irony is that these tools could be built to do the opposite, by surfacing uncertainty, inviting revision, and strengthening judgment. But the largest AI companies have little incentive to build for that outcome. Their market is productivity and scale, not students learning how to think.
Confidence doesn't come from sounding perfect. It comes from realizing you can think, revise, and defend an idea even when no machine is there to approve it first. That realization is Emerson's self-reliance: the discovery that your own perceptions are worth trusting before you hand them over to anyone—or anything—else.
Emerson's old insight, more than any policy or detector, may be what education in the age of AI most urgently needs to recover.