Welcome to the Analog Renaissance: The Future Is Trust
Protecting generative cognition, haystacking truth, classroom conundrum, and the analog-digital hybrid model
For those of you who prefer to read off paper rather than the screen, we have converted the post into an easily printable PDF file.
You must trust and believe in people or life becomes impossible.
Anton Chekhov
Live not by technological lies (by Peco)
In the early 1900s, my grandfather left the mud-brick Balkan village where he was born and took a steamship to America. There, he worked for a few years, then recrossed the Atlantic and returned to the village, where his wife and children were surely overjoyed to see him—and to see that jingling bag of American money he had saved up.
It was a voyage of trust. Trust the ship would not sink as it bobbed across the ocean, trust he would find work in a country where he didn’t speak the language, trust he would come back alive. He was not alone in taking these risks, of course. Many men (it was mostly men back then) went off to make their fortune.
One father went to America, and for several years continued writing to his family and sending back money. Eventually his son went to join him, but when they met, the father revealed that he had married an American woman and started another family. It was, for the son, a violation of trust. He was so shattered that he threw himself off a bridge.
Though we know the risks, we keep trusting. We expect our spouses to be faithful to us. Children expect parents to be truthful. Friends expect other friends to be loyal and honest, and strangers expect, or at least hope, to be treated fairly.
We trust others, and make ourselves trustworthy, and in doing so we stick together, work together. To trust is a deeply human act, and the quintessential force that gives individuals and societies integrity—their moral wholeness and their durability.
Lose trust, and nations and civilizations will crumble, no matter how wealthy or powerful they are.
And no matter how technologically sophisticated.
Enter AI.
When the subject of trust and AI comes up, it often turns into a question of whether we can trust AI to function reliably, safely, without errors or hallucinations. Can AI accurately summarize the most recent scientific papers on the mating habits of the Antarctic penguin? Can it drive a car without making dumb mistakes, like endlessly circling in a parking lot while the dizzy, distressed passenger calls customer service for help?
If we were using AI only for functional tasks, then we might not have to worry about anything other than functional trust. But this is not the future of AI, not as it’s being conceived by the tech leaders of our age. AI will not be a tool, to be used and then locked up in the cabinet when we are done, nor just an obsequious digital assistant who keeps popping out of our screens with cheerful advice. It will be an invasion of chatbots and robots and myriad babbling devices, swarming over every part of society, taking over jobs, teaching our youth, giving dating advice, writing novels, preaching sermons, making movies, and churning out Substack essays faster than we ever could.
And it will challenge our ability to trust each other like no other invention in human history.
Protecting human generative cognition
Can you be certain that I, Peco, wrote these words myself, without AI? How do you know?
At the heart of it is trust. Ruth and I began publishing on Substack before the advent of generative AI, which helped establish our credibility as humans rather than machines (and that is a strange thing to have to say). We have had a few online meetings with readers, which may also have helped to verify our status as flesh-and-blood beings with original ideas.
Like us, many others have gained the trust of their readers, yet as AI becomes more powerful in its ability to produce articles, we face a problem. Many creators do not disclose the role of AI in their work. Even those who do often fail to do it clearly, so readers don’t always realize it, then end up feeling cheated to discover they’ve been reading the eloquent words of a machine posing as human.
In the old days we might say you were conned by a psychopath. Today we call it a technological success.
Generative cognition is the uniquely human act of seeing and perceiving real things in the world, experiencing emotions about them, and then turning those sensations and emotions into words, art, and creative expression. Although we routinely hear of “generative AI”, it’s more like “regurgitative AI”, more like a digital cow with many stomachs, all of them digesting training data that is based on human experience. When a student uses AI to write an essay, when a writer uses AI to produce a story, they are engaging in a regurgitative act. It is not generative in the true sense, because AI has no physical eyes, ears, and sense receptors that see, hear, and perceive actual objects and energies in the world, nor any conscious spirit that subjectively apprehends these things.
AI won’t go away, but we can adopt boundaries around its use. One boundary is generative cognition: the act of producing creative or original ideas. We can choose to avoid using AI to write or edit the sentences in our stories or essays, or to draw our pictures, or make our music or movies—or else, if people insist on doing so, they can clearly indicate where and how AI was used, rather than obscuring or hiding the fact.
Otherwise, we are forced to adopt a suspicious mindset where we are never quite sure if we are engaging with the creative work of an actual person versus a machine (or a person coaching a machine). The work might be technically brilliant, but if we know or suspect a machine is behind it, then we will never shake the feeling that we are being played, manipulated, duped.
Haystacking truth
It might be tempting to think all of this is only a problem in the arts. Oh, those writers, musicians, and painters—they’re just so sensitive! But even professions outside the arts, like the practice of law, are being impacted. A lawyer in Utah was recently sanctioned for filing a legal brief with a “fake precedent” created by AI.
Meanwhile, in education, 86% of students globally are using AI. In science, a majority (65%) of surveyed scientists felt it was acceptable to use AI to generate the text for their scientific papers, with 18% actually doing it but not disclosing it. In media, the Chicago Sun-Times published an AI-generated summer reading list that included a number of non-existent books, while other publications, like the LA Times and Us Weekly, have also used AI to write content—and so have a few of Substack’s top newsletters.
To some, these examples might seem excusable on various grounds, but either way, the more we turn to AI as a substitute for human generative cognition, the more we will come to mistrust the content that is presented to us.
Anybody who cares about truth will find themselves feeling increasingly uncertain, both about the world around them and about their relationships. Are the people I look up to and admire—my friends, my family, my classmates and colleagues—actually who they claim to be? Or does everybody’s success and capability depend on being a savvy digital charlatan, propped up by AI?
And should I be a charlatan too?
The atmosphere of mistrust will engulf the online world (if it hasn’t already). AI has been accused of generating digital pollution, spewing ever-growing streams of low-quality content. But it’s also creating something analogous to a needle in a haystack, where accurate facts, honest reporting, and reasonable opinion become harder to find as the haystack grows ever larger, filled with so much falsity and really, really good deepfake video.
Like this deepfake news video about wildfires in Canada, created by Google's new Veo 3 AI. Will we ever be able to look at another video again without wondering, Did this really happen?
And the deepfakes will go viral, like this video about an exciting new Avengers movie—except the video is AI-generated. But that hasn’t stopped it from getting millions of views, according to Forbes.
How long before millions of people are reacting to fake stories about a new fatal pandemic, a missile strike by China on the American west coast, or some other fresh hell?
A moral implosion
Human beings have a primordial moral sense, an instinct (though not always followed) for truthfulness, for fairness, for promises that should be kept and vows upheld. And when we discover someone has broken faith with us—or we with them—it can damage and destroy relationships.
What happens when large segments of society live by technological lies in their education, their work, their personal lives? There is a risk that we will move away from the ideal of truth, and away from the ideal of meritocracy, of a meaningful connection between effort and achievement, and instead drift toward a “skeptocracy”, where we are chronically skeptical or suspicious of the success of others, doubting if their achievements are based on actual effort or talent.
We know a student whose university professors regularly give tests that can be completed from home. The profs instruct their classes not to use ChatGPT or any other AI, but everybody knows that almost everybody does—which means non-cheating students who want to learn, who might even be among the brightest, can end up with lower test marks.
Of course, the profs are aware that much of the class is cheating, so to correct the artificial inflation of test scores they will sometimes downgrade the marks of the entire class to ensure a more normal distribution of scores. But this again punishes the honest students for their honesty, now by unfairly lowering their hard-earned grades.
More than that, the complicity of many students and profs in accepting the misuse of AI gives rise to the belief that duplicity is okay as long as you don’t get caught. It’s a moral implosion, replacing the cultivation of conscience with opportunism and pragmatism.
Worse, a whole society saturated with AI could encourage duplicity as a normative way of life: cheating your way to good grades in school; cheating on the job; and leaving people with a false impression of who you are.
Yet we have a choice.
We can refuse to live by technological lies, and start building outposts of human trust in the digital wilderness.
Welcome to the Analog Renaissance (by Ruth)
Moving from analog to digital is always a process of throwing things away…Analog is always the source, always the truth. Reality is analog.
The Revenge of Analog by David Sax
Over the last few years a mini digital-rights revolution prompted several cantons in Switzerland to enshrine “digital integrity” in their constitutions, which includes in its definition “the right to an offline life”. In Zurich the constitution even includes “the right not to be judged by a machine” as well as the right “not to be tracked, measured, or analysed.” Over 90% of voters supported the right to an analog life.
While it may seem like wishful thinking that we can avoid AI altogether, we should also not fall for the inevitability narrative. As the Swiss have demonstrated, a strong will among the people can translate into political and legislative action. Not every country has direct democracy like Switzerland’s, but bottom-up change is still possible.
Some of this change will be prompted by the realization that trust simply can no longer be fully established in the digital realm. Some of this change will also emerge out of the recognition that work, education, and creative writing and art are simply better when firmly grounded in the analog.
These companies are not turning analog out of some Mad Men-inspired nostalgia for the way business was once done, or because the people working there are afraid of change. They are the most advanced, progressive corporations in the world. They are not embracing analog because it is cool. They do it because analog is the most effective, productive way to conduct business.
The Revenge of Analog by David Sax
Over a decade ago some companies discovered that doing business analog was not backward, but gave them a competitive advantage. Designers1 discovered that if they started out sketching their ideas with pen on paper, rather than on the computer, they produced work that was “more thought out, and frankly better”. Meetings were shorter and more productive if technology was banned. One CEO even refused to have important meetings “by phone, e-mail, or other digital means unless absolutely necessary.”
Now they have an added advantage: trust.
A company that has established an analog work ethic, face-to-face meetings, and personal rapport with its clients will stand apart in an era of skeptocracy.
As a consumer, consider:
Supporting companies that put humans first, just as you would pay a little extra for organic food or ethically sourced products.
Not supporting companies that state that they are “AI-first”, and telling them so.
Supporting local companies that you can visit in person, making a connection with vendors, paying in cash whenever possible. Although these interactions might be brief, over time they help build the web of human trust.
As a worker, consider:
Building strong face-to-face relationships in your workplace.
Pouring yourself into your work, even if others are taking shortcuts. In the words of one writer, “The realm of banal mediocrity is flooded, but there is a life raft for those who approach their work with a commitment to excellence. The trick to success is that there is no trick: the trick is to be better. Yes, that’s right, quality execution builds a durable, valuable moat around any endeavor—a principle that applies to all types of work in any field and any industry.”
Following the advice Kevin Roose offers in Futureproof:
“…refusing to compete on the machines’ terms, and focusing instead on leaving our own, distinctively human mark on the things we’re creating. No matter what our job is, or how many hours a week we work, we can practice our own version of monozukuri2, knowing that what will make us stand out is not how hard we labor, but how much of ourselves shows up in the final product. In other words, elbow grease is out. Handprints are in.”
The classroom conundrum
“Armed with generative AI, a B student can produce A work while turning into a C student.”
The Myth of Automated Learning by Nicholas Carr
The AI Pandora’s box is turning the educational system into a sham. Returning focus and integrity to the learning environment is imperative if we are to establish trust between teachers and students.
If you are an educator, consider:
Engaging your students in an AI discussion, as one teacher recently did. You might be surprised to find that they would prefer to keep AI out of the classroom for students and teachers alike.
Returning to tried-and-tested methods of traditional teaching, including handwritten exams and verbal examinations, such as those outlined in 5 Ways to Stop AI Cheating. Not only do these methods help resolve trust issues, but they also support students in restoring cognitive focus.
Leading by example. Tell your students how you do and do not use AI in your teaching and explain your reasoning. Establish trust by being transparent.
Take inspiration from My School Banned Smartphones for the Year. Here’s What Happened.
If you are a student:
Approach your teachers/professors and administrators with your concerns regarding AI use. Students who complete work with integrity are at risk of being unfairly discriminated against by new grading schemes that adjust for AI cheating.
Be willing to receive a lower grade for honest work, rather than an inflated grade that is vapid and deceptive.
Be present in class, participate, ask questions, engage with your teachers and fellow students. Let your teachers know that you value their work. These small actions not only help you to learn better, but help to build trust.
Continued self-education helps to inoculate us against AI content (to some degree). Consider:
Reading and collecting physical books. Here are some reading lists to get you started, and here is a Guide to Booklegging and building a history library.
Visiting museums to familiarize yourself with real art3 and/or looking at art books. Here are the books I regularly look over:
The analog-digital hybrid model
I was dismayed when I started to discover that some writers on Substack use AI to fabricate their posts. Some explicitly state their AI use on their About page (although few readers seem to realize this). One writer who helps others find their footing on the platform uses Claude to offer writing guidance for viral Notes. There are even monthly AI subscriptions specifically designed to write posts and Notes for you!4
GPTZero is launching a “certified human badge” in recognition that “authenticity” will increasingly matter to people as online content continues to decrease in trustworthiness. But lo and behold, programs like Quillbot that help bypass AI detection are springing up like mushrooms. Never mind that AI detection programs are utterly unreliable, at times identifying fully human-created content as 100% AI.
The only way we might hope to trust digital content is if it has a transparent analog foundation.
Our analog-digital hybrid model is based on the commitment that anything we produce online is first sourced and developed in the analog world. In addition, we aim to connect with writers and readers in person, via Zoom, or through personal communication to help establish trust.5
As a writer, consider:
Starting your writing with pen and paper. Not only will your ideas be yours, but you will experience increased focus and less digital distraction.
Using notebooks (see Peco’s brainstorming notes for today’s post below), yellow legal pads (me), cue cards, sticky notes, or even whiteboards.
Sharing photos of your drafts with readers and being transparent about your writing process. If you do use AI, be explicit about how you use it.
As a reader, consider:
Giving writers time. An analog process is slower and requires the mind to lie fallow at times, so that it can produce fruitful work.
Supporting writers and creators with a history of demonstrating integrity and trustworthiness in their work.
There are additional ways we plan to go beyond the “certified human badge” to establish reader-writer trust.
I recently came across 600 pages of beautiful linen foolscap paper. It seemed made for our readers! Starting this summer, I am thus inviting paying subscribers to write me a letter. It can simply be a letter to introduce yourself, share ideas or poems, offer reading recommendations, or ask questions.
I will respond with a handwritten full-page letter, embossed with the School of the Unconformed logo6.
In addition, I will invite annual paying subscribers to a 40-min one-on-one Zoom chat. While not as good as a real-life conversation over a hot drink, it will allow us to “meet” each other, ask questions, and simply connect.
A final word about the unique potential of the Substack platform itself. A few years ago Substack became a pioneer by enabling indie writers to have a voice in mainstream culture (and has done so incredibly well, as discussed here).
Now Substack has an opportunity to spearhead an Analog Renaissance. This is a watershed moment where a decision to build trust with its readership can not only differentiate the platform, but also safeguard its readers and writers from AI content by further developing analog-digital connections:
Substack already hosts IRL gatherings for its writers7, which helps to build real-life communities. Host more of them, more often, even in smaller towns!
Include readers in some IRL events, offering a pathway to establish lasting trust and connections between readers and writers.
Create algorithms that reward writers who do not use AI, as well as those who are transparent about the role of AI in their writing.
Facilitate printable PDF versions of posts, or even better, printable anthologies. Make Substack Press a reality!
Let the Substack team know what you think of these suggestions, and add your own!
In closing
We opened this piece with Chekhov’s quote, “You must trust and believe in people or life becomes impossible.” AI is starting to make trust impossible, both in and beyond the digital realm. It enables obfuscation and duplicity as to who is behind any kind of product based on words, images, or video. This might seem trivial when we see it only in isolated cases, but its widening undercurrent is redefining our moral structure by elevating opportunism and pragmatism over honesty and transparency.
Our position is not that all AI is bad, but if we want the benefits of this technology to outweigh the costs, then we must prioritize human originality and human effort in our classrooms, workplaces, and personal lives.
We can do this by rejecting the inevitability narrative about AI and by telling a different story, one that puts humans, not machines, first.
The Analog Renaissance has begun with the very ink that scribed these words.
And this, we promise, you can trust.
We are happy to offer this post to you for free, but it took a lot of time and effort :) If you found this post helpful (or hopeful), please consider supporting our work by becoming a paid subscriber, or simply show your appreciation with a like, restack, or share.
We’d love to hear from you! Please share your questions, thoughts, and reflections in the comments section!
Further reading
What My Students Had To Say About AI
Writing AImish: Drawing a new line
The Myth of Automated Learning by Nicholas Carr
When the Music Stops: Facing AI acceleration in a fragile and complacent world
The Work of AI Writing on Substack
Secretary Jobs in the Age of AI
Drawn to the Real: Tell AI to stick it
Would you like to meet us in person? Come join us, along with Michael Toscano and others, at the Doomer Optimist gathering in Ligonier, PA, on Nov. 7-8! This is a small-scale event and spots are very limited. See here for details.
About the Authors
Peco Gaskovski is the author of Exogenesis and also writes at Pilgrims in the Machine, a newsletter about being human in an age of acceleration. Ruth Gaskovski is a home educator, polyglot, and loves long classic novels. Together they explore navigating the impact of technology in daily life on School of the Unconformed. As Swiss-Canadian dual nationals, they make their home on the borderlands of Mennonite country in Canada.
Designers referred to in this section of the book worked for companies such as Google, Twitter, Dropbox, and Pinterest, among others.
Japanese term for “the art of making things”.
Substack writers I know that I can trust based on having met IRL, via Zoom, or through personal communication: Peter Limberg, Tommy Dixon (he bakes excellent sourdough bread), Dixie Dillon Lane (we’ll lead a pilgrimage together on the Camino in June!), Grant Martsolf (come and join us at the Doomer Optimist gathering in Nov.), Meg Mittelstedt, Caroline Ross, Paul Kingsnorth, Nicholas Kotar, Hadden Turner, Peter Kwasniewski, Griffin Gooch, Freya India, Ted Balaker, The Haeft, Tsh Oxenreider, Brett McKay, Mills Baker, Dominick Baruffi, Jonathon Van Maren, Ben Christenson, Kristin Haakenson, Shannon Hood (she writes beautiful letters!), Lisa Rose, Scott Newstok, Latham Turner, Ivana Greco, Molly Young, Keturah Hickman, Aaron Long, Patrick B. Whalen, Jairaj Sethi, Adam Wilson, Joshua Pauling, Gideon Heugh, Kelsie Hartley…just to name a few.
This embosser was a most wonderful birthday gift idea from my daughter :)
I attended such an event in Toronto last summer where I got to chat with one of the founders (and found out that we went to the same university) and met a fellow writer who turned out to be a kindred spirit.
I love the idea of The Analogue Resistance. I’ve been using the idea of Re-wilding in my digital illustration process, with the aim of bringing back more physically made elements into how I make my art. Although not strictly about AI, the efficiencies gained by digital tools come at a cost—in my case, the loss of a sense of discovery and the satisfaction of seeing a work come together in ways I couldn’t have planned. Digital has made my process less interesting to me.
AI generated content is a stupid answer to a stupid question. Instead of starting with something worth expressing, we start with the pressure to feed the machine with content. If we’re calling what we produce and post “content”, we’ve already lost, and perhaps we might as well just let computers write (and read) it for us.
Great article. Thank you!
Very glad you are homing in on the topic of 'trust' with regard to AI, Peco and Ruth. A great piece.
For me, trust is the biggest and most fundamental issue with it. I don't want to live in a society where I am forced to doubt whether everything I read, every photograph I see, every audio I listen to is made by a human or not, let alone if the mountain (to take an example) in the photo is a real place or not. Such a world is disorientating, ugly, and repressive. Societies where no one can trust anyone are defined by these adjectives (just think of China or North Korea) - AI risks turning our societies into such defined places. One could argue that our digital ecosystems are already far down this path with no return.
Hope you don't mind me putting my Refuges of Authenticity essay in the comments in case anyone wants to read my anti-AI manifesto!
https://overthefield.substack.com/p/a-refuge-of-authenticity