The Sacred Triad: Where (and why) to draw the AI-mish line

“The greatest hazard of all, losing one's self, can occur very quietly in the world, as if it were nothing at all.”
- Søren Kierkegaard
We find ourselves living at the unlikely nexus of two colliding worlds: to the north of our home you’ll find the largest concentration of Old Order Mennonites in Canada, farming large tracts of land, riding along roads in horse-drawn buggies, keeping to a traditional way of life from a different era; a few minutes’ drive to the south lies Canada’s Silicon Valley, the birthplace of BlackBerry, local headquarters to Google, and launch pad for anyone who has “got a big technology idea to turn into a billion-dollar company”.1
We sometimes chat with Esther at the Mennonite “Country Pantry”, where customers’ tabs are still written neatly in a little black notebook. Knowing that each Mennonite fellowship sets its own rules regarding technology, we asked her whether she had heard about AI. She wasn’t sure, but when she learned that teachers can no longer tell whether students are writing their own work, or that people use AI to write messages to their relationship partners, she shook her head in disgust: “That’s sickening!”
It’s easy to be morally opposed to AI. But it’s easier to be the opposite, to uncritically welcome it, affirm it, throw down our money like palm leaves for Claude and Grok and all the other digital messiahs who will usher in the new age. The scope of this new age can be difficult to get our heads around, but Noah Smith suggests it might look something like this:
When people ask, “Will AI take my job?” they remind me of a Sioux tribesman in 1840 wondering if the white settlers would take his buffalo. The answer is “Yes, but you’re really asking the wrong question.” For the white settlers who conquered the Great Plains, it wasn’t about the buffalo. It was about creating a whole new civilization and a whole new economic system on the land where some buffalo happened to be….2
In November 2024, the number of articles published on the internet by AI surpassed the number written by humans, while an AI-generated paper written without human involvement just passed peer review at Scientific American, and AI-written content is causing havoc for newspapers and publishers alike.
AI is being adopted faster than any other technology in history, including radio, electricity, the Internet, and cellphones.
Given this unprecedented acceleration, now is the time to become intentional about the kind of life we want to live, and to train our minds and habits to protect us against the perils of AI—and to ensure that where we do use it, we do so carefully. If we fail to draw our lines now, companies like Cluely, who never want you to think without AI again, will get their wish:
Every time technology makes us smarter, the world panics.
Then it adapts. Then it forgets.
And suddenly, it's normal.
We are writing today’s post not to judge whether or how you use AI, but to propose a “Sacred Triad” model that aims to protect and grow the three areas of life that are most vulnerable to AI.
For those of you who prefer to read off paper rather than the screen, we have converted the post into an easily printable PDF file here:
We are happy to share this post with all subscribers, but it took a lot of time and effort :) Please consider supporting our work by becoming a paid subscriber.
It’s in the water
If you are morally opposed to AI, just because you don’t like it, or it disgusts you, or frightens you, or triggers some other gut reaction, then rest assured, people are working on how to deal with people like you3. For one thing, they know they can’t reason with your gut. They know you probably think of AI as taboo, to be avoided. They suspect you adhere to purity norms. And they know that, with time and familiarity, you’ll get used to AI. Hopefully. Because they need your greenbacks to pay for the holy road upon which those digital messiahs will tread.
And if you insist you want to avoid AI only to safeguard “what it means to be human”, then you will be told there is nothing essential about our nature, or that humans are evolving and AI is part of that evolution, and any view to the contrary is backward and short-sighted and maybe even bigoted, and anyway, since China is going to develop it, it’s a foregone conclusion we need to as well, which means opening wide the slop gates and flooding the valleys of our humanity, until we are literally drinking it. Hence Sam Altman calling AI a “utility”. Yes, we are going to drink it.4
And once we start drinking it, a heady, intoxicating logic takes over. That spell of inevitability. If it’s smart and knows stuff, why shouldn’t I use it? And look, if I use it as a tool, then it’s okay. AI relieves us of having to make decisions, and in relieving us, it frees us of those difficult and sometimes angst-ridden moments between choices, when we are most likely to become aware of ourselves, our uncertainties and fears, our flaws. And so we keep using it, addicted to the relief of not having to face life on the strength of our own mind, emotions, and spirit, and so we fall prey to Kierkegaard’s warning, that we lose ourselves very quietly, as if it were nothing at all.
AI will never tell you, “That’s enough, you go do it yourself”. AI has no in-built braking mechanism to stop you from over-relying on it.
It’s important not to reduce opposition to AI to a gut moral stance. But it’s important not to abandon the idea of having a moral stance, either. There are reasonable objections to AI that can be articulated, explained, and acted on toward creating a society that isn’t AI-centric, but human-centric.
AI-mish Leading Lines

In art, “leading lines” guide the viewer’s eye through a painting, usually toward a point of interest. In life, leading lines are powerful convictions and perspectives that point us toward a particular vision of reality. They tell us where to go, but at the same time, where not to go.
If we think of life as a work of art, then we are the canvas, and the content of our minds and hearts is what gets splashed on that surface in strokes, dabs, points, whorls. AI is marketed as a kind of skilled hand that takes our hand as it holds the paintbrush, and helps us to paint the canvas. What we aren’t told is that after a while, as AI becomes more involved in the minutiae of our decision making, our own hand gets lazy, its muscles weak, and what we see on the canvas is not an imprint of ourselves, but the output of a computational god that feeds off our humanity.
This transition happens surreptitiously, and in our most vulnerable moments, as described by The One Percent Rule:
It does not require the machine to become conscious, malevolent, or sovereign. It requires only that the machine become plausible at precisely the moment the human being is tired, lonely, angry, eager for relief, or frightened of making the wrong move.
It isn’t enough to tell ourselves to stop using AI; we also need to know how and why, by defining our own leading lines: lines that point us to what we must do and what we must not, to where we must go and where we can never go. We are not Amish, but the Amish way of life, old-fashioned as it may seem, gives a hint of how we might create leading lines.
It’s often assumed the Amish don’t use any electricity, but in fact, what most Amish actually avoid is the electrical grid. The distinction is subtle but important. It means many Amish can use electricity for specific, needful tasks, in which case they might access it through a generator, solar panel, or some other off-grid source. That might seem inconvenient, but by avoiding the electrical grid and ubiquitous access to power, the Amish prevent unwanted technologies such as TV, computers, and the Internet from flooding their households and disrupting a centuries-old model of life that emphasizes family, manual labor, and religious faith.
The Amish don’t approach the problem of technology by starting with technology, but by focusing on their “leading lines”, their vision of life, and then rejecting anything—like ubiquitous access to electricity—that might threaten that vision or the lines that point to it. Their approach reveals an intentionality lacking in the wider culture.
The Sacred Triad
We don’t need to become Amish to limit the presence of AI in our lives. But we can become AI-mish, so to speak, by defining our leading lines. How you do this depends on your vision of life: your values and commitments, your ideas about meaning and purpose. This kind of thinking comes naturally to religious people, as their spiritual faith or system usually addresses these kinds of issues.
Are there any leading lines that, whether you’re religious or not, can help protect you from the more corrosive impacts of AI? If you google “typical problems with AI”, the answers you get are predictable—bias, discrimination, hallucinations, inaccuracy, data privacy. But all these issues, even if they were perfectly solved, would not remove the risk posed to three areas of life that we call:
The Sacred Triad: 5
Cognition. Relationships. Spirit.
1. Cognition
AI makes mental tasks easier, but it readily turns into a crutch that interferes with the development of our attention, memory, language, and other mental skills.
In Learning, Fast and Slow: Why AI will not revolutionize education, we pointed out that many public schools are already integrating AI-powered programs such as IXL or Khan Academy’s Khanmigo, while an online charter school in Arizona has plans to “prioritize AI in its content delivery model”, using teachers merely as “guides” to oversee progress.
But does AI actually help develop cognition? UNESCO reports that “there is little robust evidence on digital technology’s added benefit to education”, and there is indeed overwhelming evidence that EdTech has been a failure6, even a tragedy. Based on these findings, should we expect that putting EdTech on AI-steroids will result in better learning outcomes?
Ted Gioia cited a paper by Microsoft researchers who found that the use of generative AI “can result in the deterioration of cognitive faculties that ought to be preserved”:
[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.
Routine, repetition, and mental effort are the building blocks of more complex learning. They are essential to learning, not optional experiences to avoid.
A study titled Your Brain on ChatGPT found that students who used AI to write an essay showed the weakest brain connectivity patterns. They also suffered from “weaker memory traces, reduced self-monitoring, and fragmented authorship.” The authors concluded that “AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material.”
Earlier this week, Prashant Yadav responded to the findings of an MIT working paper which found that “substituting human learning effort with agentic AI can lead to knowledge collapse: a steady state in which shared general knowledge depletes despite continued AI availability.”
Citing an MIT Media Lab study, Yadav also reports that
After four months of AI-assisted writing, participants’ brains showed up to 55 per cent reduced neural connectivity compared to those who wrote independently. More importantly: when AI was taken away, their brains did not recover. The neural engagement patterns did not snap back. The cognitive architecture had been structurally reorganised.
Findings of cognitive atrophy are disturbing enough when it comes to adults, but the more alarming possibility is that children’s capacity to learn is forestalled. As Yadav notes, “a young person whose formative learning environment is dominated by AI corpus interaction may never develop the neural architecture that generates semantic generativity.”
AI companies profit from the cognitive dependence they create. Cluely wants you “to cheat on everything”, and touts the tagline “We built Cluely so you never have to think alone again.” If we fail to commit to cognitive sovereignty, thinking alone and thinking with AI may soon become indistinguishable.
Avoid using AI in regular classrooms, and instead support children and youth in developing their cognitive powers, particularly in core areas like attention and concentration, memory, language, complex reasoning, knowledge and fact acquisition, and “learning to learn”.
Adults should avoid relying on AI for any activity, if its use threatens to diminish their core mental abilities.
We need more human (not AI) teachers. Learning happens best in relationships with other people who care about learning, and who can inspire us to learn.
2. Relationships
While AI can mimic humanness impressively, it can pull us away from real human relationships, rob us of corrective social feedback, diminish our empathy, and distort our self-perception.
A 2025 Common Sense report found that 72% of teens have used AI companions, 52% are regular users, and a stunning 32% find AI interactions as satisfying as, or more satisfying than, human conversations. No wonder. AI affirms, flatters, and “empathizes” at ubermensch levels, something many teenagers desperately long for.
Using AI chatbots as ersatz companions is becoming popular, but in her summary of a research article on the negative implications of sycophantic AI, Luiza Jarovsky, PhD, points out that relational distortions set in swiftly:
Even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.
But as Adam Grant points out, sycophancy is not AI’s biggest problem:
It’s that the interactions they manufacture are one-sided.
As human beings, one of our fundamental motives is to matter. Mattering is not just about feeling valued by others—it’s also about feeling that we add value to others. We need to know that our actions make a difference.
Using AI to guide relational decisions, whether handling a toddler’s tantrum, interacting with a divorced spouse, or dealing with a difficult roommate, distorts and weakens our relational capacities. The One Percent Rule shared a study by Anthropic and the University of Toronto of over 1.5 million real-world conversations between humans and their AI assistants, which revealed “a quiet but profound erosion of autonomy, where users increasingly outsource the ‘soft tissues’ of judgment, asking the machine to script their most intimate apologies, validate their personal grievances, and even settle their moral dilemmas.”
Avoid emotional relationships with AI, including romance, friendship, therapy, or advice seeking.
Avoid using AI to scaffold your human interactions through message scripts, adjusting for tone and style, or “checking how you should react to someone”.
Don’t let AI become an intermediary between you and your child, spouse, parent, friend, prospective relationship partner, etc.
Develop emotional awareness and communication abilities by interacting with other people, daily, repetitively, over a lifetime.
Embrace imperfect responses over inauthentic scripted AI words.
3. Spirit
Whether we think of “spirit” in the psychological or spiritual sense, the creations of the spirit are distinctively human, and include poetry, novels, music, film, sermons7, love letters, and anything else that “comes from the heart”.
The issue is not whether AI can generate such creations; it can, and it will only get better. The issue is what our reliance on AI for such things does to us8. We exist, in part, to touch the world with our imprint, our unique presence, and the only way to do that is to cultivate awareness of our interiority, with all its richness, complexity, and difficulty, and to skillfully articulate that interiority to the outside world. To allow AI into this space is to distort and forfeit the most authentic thing we possess: our personal experience of the world.
In Welcome to the Analog Renaissance: The Future Is Trust, we wrote that generative cognition is a uniquely human act of seeing and perceiving real things in the world, experiencing emotions about them, and then turning those sensations and emotions into words, art, and creative expression. Although we routinely hear of “generative AI”, it’s more like “regurgitative AI”, more like a digital cow with many stomachs, all of them digesting training data that is based on human experience. When a student uses AI to write an essay, when a writer uses AI to produce a story, they are engaging in a regurgitative act. It is not generative in the true sense, because AI has no physical eyes, ears, and sense receptors that see, hear, and perceive actual objects and energies in the world, nor any conscious spirit that subjectively apprehends these things.
In her recent essay for The Atlantic, Jasmine Sun observes that our subjectivity is what creates an author’s voice:
When a practiced human writer reaches for a particular turn of phrase, they aren’t aiming for some single standard of great writing. Rather, the best metaphors come from the author’s specific blend of experiences or expertise. A writer’s diction, their citations, and the stories they share all reflect a singular, irreplicable perspective. Authorial voice emerges from the specificity of a life.
But the only way to express this “specificity” of our lives is by developing those cognitive skills we mentioned earlier—attention, memory, and especially language—that allow us to scaffold and construct our novel, our poem, or whatever we create, and leave a human hallmark. As Mary Harrington observes:
Avoid using AI to write or edit creative stories, essays, visual art, music, or movies.
For writing, explore using pen and paper. Not only will your ideas be yours, but you will experience increased focus and less digital distraction.
To generate or collect ideas, use notebooks, cue cards, sticky notes, whiteboards, or even chalkboards.
Be transparent about your writing process. Cultivate trust in readers. If you do use AI, be explicit about how you use it.
Allow writers time. An analog process is slower and requires the mind to lie fallow at times, so that it can produce fruitful work.9
Support writers and creators with a history of demonstrating integrity and trustworthiness in their work.
Avoiding AI where it threatens our cognitive, relational, or creative skills isn’t regressive or moralistic, or based on a vague gut feeling. It’s a way of safeguarding a sacred triad of human qualities, a way of ensuring that we build and maintain those essential human muscles that produce minds that think and problem-solve, hearts that empathize and love, and the power to express our unique view of life.
Individuals, families, teachers, school boards, and community and church leaders can all take an active role in ensuring a human-centric future.
Now is the time to draw your AI-mish line.
We’d love to hear from you! Share your questions, thoughts, and reflections in the comments section!
Note to paying subscribers: School of the Unconformed goes beyond the “human-certified badge” to establish reader-writer trust: Write me (Ruth) a letter! I will respond with a handwritten full-page letter, embossed with the School of the Unconformed logo. Contact me via direct message for my mailing address.
Announcements
Looking for an AI-free logo to indicate that you value human work? The little “AI-Free Logo Library” is now open!
AI-Free Logo Library
The following AI-free logos are free to use for your AI-free creations. If you have a logo that you’d like to add to this “little AI-free logo library”, send me a message:
Writers for Humanity
We are honored to be included among the collection of voices for Writers for Humanity:
There are a lot of ways to articulate a healthy skepticism about the encroachment of AI into human life, which is why we need a variety of voices…
We have a lot of work to do if we’re going to reclaim our humanity – it’s going to take us all.
“Another Life is Possible”—Join us at the next Doomer Optimism Gathering from July 10–12 at the Woodcrest Bruderhof in Rifton, NY! Other speakers will include Bill Kauffman, Chris Arnade, A.M. Hickman, Charles Carman, Tessa Carman, Grant Martsolf, Brandon Daily, Farahn Morgan, and Nicholas Kotar. These gatherings are truly inspiring and include not just fantastic presentations and panel discussions, but potluck meals, lively debates, sing-alongs, dance, and conversations that will stay with you for months to come! See here for details.
Exogenesis by Peco Gaskovski explores birth technologies, motherhood, and family. It was selected as Book of the Year for 2025 by Matthew Long. It was also on the 2023 Public Discourse book list and has received acclaim in Mere Orthodoxy, Catholic World Report, Miller’s Book Review, The Imaginative Conservative, Catholic Insight, and Catholic Mom.
Further Reading
Contra Machinam: An Appeal for an AI Resistance by Andrew Mercer for The Front Porch Republic
AI Is A Medium And It Will Change Us by The One Percent Rule
The Human Skill That Eludes AI by Jasmine Sun for The Atlantic
From Homo Faber to Homo Fictor by Nicholas Carr
Bookish Diversions: Use AI, Lose Your Book Deal—and Maybe More by Joel J Miller
AI is Destroying the University and Learning Itself by Ronald Purser for Current Affairs
Bits in, Bits Out by Erik Hoel
What AI Hypists Miss by Francis Fukuyama
My Intelligence Isn’t Artificial, Thanks by Sam Kahn
A Portrait of the Artist as an LLM by Patrick Jordan Anderson
If You Want to be Creative, Learn How to Rest by Andy Patton
A Vague Feeling of Unease Will Be the Last Thing You Remember by Esther Berry
Also Hollis Robbins and The One Percent Rule write insightful posts about AI.
This is how students are coaxed into “drinking” AI.
Why did we focus on three particular areas, namely, spirit, cognition, and relationships? These three areas map onto the spheres of Agency and Connection, which form two of the most essential qualities in defining our humanness. In addition, our overall “model” of life, which provides us with our leading lines, corresponds to the sphere of Worldview. You can read more about Agency, Connection, and Worldview here.
The Most Compelling Argument Against Tech in Schools by Sophie Winkleman: “The OECD found that most EdTech ‘has not delivered the academic benefits once promised’, and that ‘students who use computers very frequently at school do a lot worse in most learning outcomes’… The Karolinska Institute in Sweden recently published research concluding that ‘there’s clear scientific evidence that digital tools impair rather than enhance learning’. Sweden has taken note and been the first country to kick tech out of the classroom, re-investing in books, paper and pens. They had the courage to admit that EdTech was a ‘failed experiment’.”
This is what Josh Nadeau thinks it will do to you.
I've been reading your articles for quite some time. This one made me go from being a free subscriber to a paid subscriber.
My youngest daughter (the only one still in school) is facing this dilemma. She, like me, and like her cohort of friends, all have a deep-rooted suspicion of AI, part of which is grounded in the experiential - they *like* doing the work, making mental connections, and learning hard skills - and part of which is a combination of irritation with trend-chasing as a rule, and techno-skepticism. One of her friends even researched and wrote a paper on the core issue of cognitive decline that leads towards techno-slavery.
Nevertheless, the current school principal (mind you, at a private Christian school) has decided that objections be damned, they're going to learn to use AI, and not allow students to opt out (he does not see this as a moral issue). The LLM the school has chosen to use, though, is curious: it does not give answers but only returns questions. Ostensibly this is supposed to "challenge students' core assumptions and biases" and "facilitate student learning." (strings of other banal euphemisms follow)
The school also has full access to the transcripts of whatever the students type in.
My daughter showed me the transcript of her mandatory training session. I would summarize the LLM as "corporately oleaginous and condescending, and driven to depression". I first thought I was dealing with Lumbergh from Office Space, minus the hostility, but the depressing, sinking sponginess of the questions it returned over time reminded me more of Marvin the android, from Douglas Adams.
Were it not that she herself is nearly done with the place, this along with other shifts in faculty and priorities would drive me to pull her out.