Moral Learning Outcomes
noun
Moral Learning Outcomes name a shift from evaluating what students produce to evaluating how they conduct themselves as thinkers in an age when cognition can be cheaply outsourced. Rather than measuring surface competencies—polished arguments, tidy paragraphs, or competent source integration—Moral Learning Outcomes assess intellectual integrity: the willingness to seek truth rather than confirmation, to engage opposing views fairly, to revise or abandon a thesis when evidence demands it, and to tolerate complexity instead of retreating into binary claims. These outcomes privilege forms of engagement AI cannot convincingly fake—oral defense, personal narrative anchored in lived experience, and transparent decision-making—because they require the full presence of the Total Person. In this framework, writing is not merely a technical skill but a moral practice, and education succeeds not when students sound intelligent, but when they demonstrate judgment, accountability, and the courage to think without hiding behind a machine.
***
My college writing courses, like those at every respectable institution, come packaged with a list of Student Learning Outcomes—the official criteria by which I grade essays and assign final marks. They vary slightly from class to class, but the core remains familiar: sustain a thoughtful argument over an entire essay; engage counterarguments and rebuttals to achieve intellectual rigor; integrate multiple sources to arrive at an informed position; demonstrate logical paragraph structure and competent sentences. In the Pre-AI Age, these outcomes made sense. They assumed that if a student produced an essay exhibiting these traits, the student had actually performed the thinking. In the AI Age, that assumption is no longer defensible. We now have to proceed from the opposite premise: that many students are outsourcing those cognitive tasks to a machine that can simulate rigor without ever practicing it.
If that is true—and it is—then the outcomes themselves must change. To test thinking, we have to demand what AI cannot plausibly supply. This is why I recommend an oral presentation of the essay, not read aloud like a hostage statement, but delivered as a fifteen-minute speech supported by a one-page outline. AI can generate arguments; it cannot stand in a room, hold an audience, respond to presence, and make a persuasive case grounded in credibility (ethos), logic (logos), and shared human feeling (pathos). A speech requires the full human organism. Outsourcing collapses under that weight.
The written essay, meanwhile, is scaffolded in pieces—what I call building blocks—each requiring personal narrative or reflection that must connect explicitly to the argument’s theme. If the class is writing about weight management and free will in the GLP-1 age, students write a 400-word narrative about a real struggle with weight—their own or that of someone close to them—and link that experience to the larger claim. If they are debating whether Frederick Douglass was “self-made,” they reflect on someone they know whose success can be read in two conflicting ways: rugged individualism on one hand, communal support on the other. If they are arguing about whether social media leads to “stupidification,” they must profile someone they know whose online life either deepened their intelligence or turned them into a dopamine-soaked attention addict. These are not confessional stunts. They are cognitive anchors.
It would be naïve to call these assignments AI-proof. At best, they are AI-resistant. But more importantly, the work required to transform those narratives into a coherent essay and then into a live oral defense demands a level of engagement that can be measured reliably. When students stand up and defend their arguments—grounded in lived experience, research, and reflection—they are participating in education as Total Persons, not as prompt engineers.
The Total Person is not a mystical ideal. It is someone who reads widely enough to form an informed view, and who arrives at a thesis through trial, error, and revision rather than starting with a conclusion and cherry-picking evidence to flatter it. That process requires something many instructors hesitate to name: moral integrity. Truth-seeking is not a neutral skill. It is a moral stance in a culture that rewards confirmation, outrage, and self-congratulation. Writing instructors are misfits precisely because we insist that counterarguments matter, that rebuttals must be fair, and that changing one’s mind in the face of evidence is not weakness but discipline.
Which is why, in the AI Age, it makes sense to demote Student Learning Outcomes and elevate Moral Learning Outcomes instead. Did the student explore both sides of an argument with equal seriousness? Were they willing to defend a thesis—and just as willing to abandon it when the evidence demanded it? Did they resist black-and-white thinking in favor of complication and nuance? Could they stand before an audience, fully present, and deliver an argument that integrated ethos, logos, and pathos without hiding behind a machine?
AI has forced instructors to confront what we have been doing all along. Assigning work that can be painlessly outsourced is a pedagogical failure. Developing the Total Person is not. And doing so requires admitting an uncomfortable truth: you cannot teach credible argumentation without teaching moral integrity. The two have always been inseparable. AI has simply made that fact impossible to ignore.
