Dec 1, 2025
Articles
Preserving Personhood Among Machines: The Socra Method
Why Socra centers personhood and process over speed and polish in an AI age.
Haley Moller
CEO

The current wave of AI tools in education rests on the same promise we see in every other industry AI has touched: Do the same things, just faster. Grade faster. Draft faster. Summarize faster. “Personalized learning” often ends up meaning “more efficient content delivery,” not deeper understanding. This promise is tempting in a system constantly squeezed for time, but it misunderstands what schools are for. Education isn’t a logistics problem; it’s a human development problem. The goal is not to reduce the time it takes a student to produce an answer, but to deepen the quality of the thinking that leads to that answer.
AI isn’t going anywhere, and it’s wishful thinking to imagine that banning it will help students learn. We have to teach young people how to use it responsibly: as a tool to strengthen their own thinking and voice, not as a shortcut to instant answers. What’s at stake is more than critical thinking skills; it’s the development of the self‑reliance that underlies all decision‑making.
LLMs themselves embody the confusion. They are optimized to produce fluent, plausible text on demand, which makes them very good at mimicking the surface of intellectual work but poor at cultivating the underlying capacities that make that work meaningful. If you ask an AI to write your paper, you might get passable prose. What you don’t get is the slow, frustrating process of grappling with a passage, revising your interpretation, confronting your own confusion, and eventually arriving at an insight that feels genuinely yours. That process is where education actually happens. As a friend of mine likes to tell his students, “Using AI to write your essays is like putting your brain in a Cuisinart.”
When students use AI primarily as a shortcut, the most immediate casualty is not grades or test scores but confidence. If a machine can always phrase something better, what does that say about your own voice? If every time you’re stuck you jump straight to “just ask the AI,” you slowly lose the habit of wrestling with difficult ideas yourself. Over time, that erosion becomes existential; the sense that you have something to say, that your perspective matters, that your reading of a text is worth laboring over, begins to fade.
This is why the stakes are existential, not just academic. In the humanities especially, learning to read closely and argue carefully is not only about passing exams. It’s about becoming the kind of person who can pay attention, notice nuance, detect manipulation, and articulate what he or she thinks, even when no one provides reassurance. To cultivate that kind of personhood, you cannot outsource thinking to the very tool you’re supposed to be learning to master.
At Socra, we start from a contrarian assumption: Large language models are powerful, but they are not teachers of intellectual work. A calculator doesn’t teach you number sense; it assumes you already have it and helps you move faster once you do. In the same way, LLMs are designed to perform intellectual tasks—summarizing, drafting, rephrasing—not to cultivate the underlying habits of mind. If we treat these machines as tutors, we risk confusing fluent output with genuine understanding. If we treat them as tools inside a well‑designed learning process, however, we can help students see how AI can sharpen their own thinking rather than replace it.
That’s why Socra is built around close reading, not content generation. Instead of asking the AI to produce interpretations, we use it to guide a structured dialogue in which the student must do the intellectual work. The student chooses what to notice in a passage. The system asks, “Why that?” The student offers a tentative claim. The system responds, “Where do you see that in the text?” The back‑and‑forth is meant to slow the student down and push him or her to do the unglamorous work of revision. Crucially, the end product is not an AI‑written essay but a student‑authored claim that he or she can proudly defend.
For teachers, this approach surfaces something that traditional assignments—and most AI tools—leave invisible: the reasoning trail. Polished paragraphs tell you very little about how students got there. Did they notice contradictions in the text? Did they misread a key line but recover later? Did they have an interesting idea that they abandoned too quickly? When AI simply produces answers, it erases this trail entirely. Socra, by contrast, preserves it. Each session yields an annotated passage, a transcript of the student’s thinking, and clear markers of where they struggled or had a breakthrough. This is not efficiency for its own sake; it is efficiency in service of insight into the learner’s mind.
That insight is essential if we want to teach AI use ethically. We cannot just tell students, “Don’t cheat with AI,” and then give them no structured way to practice using it well. We need to design experiences where the only path forward is to think. Socra’s constraints—no AI‑written essays, no direct answers—model a responsible relationship to AI: The machine can ask better questions, keep you honest about evidence, and help you articulate a sharper claim, but it cannot and should not replace your judgment. Students learn, in practice, what it means to let AI be a scaffold rather than a ghostwriter.
Ultimately, the question is not whether AI will enter the classroom. It already has. The question is what kind of humans we are trying to form in its presence. If we optimize only for speed and convenience, we will get a generation very good at prompting machines and very unsure of themselves. If, instead, we insist that education is about quality of attention, depth of understanding, and the development of a personal voice, then we must design AI tools that respect those goals—even when it’s harder, slower, and (yes) less profitable.
Socra exists to take that harder path. We assume that students are capable of real thought, that teachers care about more than outputs, and that AI can be integrated into education without hollowing it out. Our bet is simple: Teach young people how to use AI responsibly, and you don’t just preserve personhood; you strengthen it.