I loved this article. I'm so tired of hearing about how AI is going to make all of us obsolete. I'm a surgeon, and while technology can sometimes be helpful in our profession for making things safer, less invasive, etc., I think it would be a sad day when there was no longer a person to discuss your symptoms with and look you in the eye when delivering difficult news. Some of my greatest learning took place in my high school AP English class, where we gathered our desks in a small circle with an "old school" teacher who led thoughtful discussions about the classics we were reading, like Hamlet or Wuthering Heights. He was tough on our weekly essays, marking them with red ink and lots of comments. I'm so grateful for the two years of AP classes I had with him. He touched countless students' lives with his very "low-tech," "high-relationship," high-standard approach to teaching.
Thanks for adding your reflections Jaime! I think the combination of red ink and "high-relationship" is what produces potent learning outcomes. I attended a very strict high school in Switzerland, and there were lots of corrections and tough grades, but our teachers truly cared about us. Every year we had several weekends (and even one full week) together skiing or hiking, which helped us to get to know each other and reinforced our learning relationship when we returned to school.
For education I’d agree 100%, but not for all the other jobs. Maybe you think AI will not replace jobs because when you think “AI”, you think of Large Language Models, like ChatGPT, and their picture/video equivalents.
It’s questionable to me that anything that could even potentially be considered “AI” actually exists today.
Also, about the surgeon thing, I’d avoid a doctor if possible. My nephew had jaundice (which is rarely serious), and the doctor said they’d keep him in a room for a few days, away from his parents. My brother refused because of the obvious psychological issues with separating a newborn from his mother for the first few days of his life, and the doctor threatened to call CPS—and mind you—a parent is allowed to refuse this treatment in my province, Ontario (which my brother didn’t know, because why would he). Anyway, I trust a Large Language Model more than a Canadian doctor because it won’t coerce me into following its advice.
Even if you think my brother should have given his son the treatment, it still doesn’t negate that the doctor was coercing him into giving up his rights.
That's crazy!! All a newborn with jaundice needs is a little sunshine. That's what my pediatrician recommended when my daughter was born with jaundice. It's common in newborns.
I mean, how bad jaundice is depends on the child, but that’s good to know. If that was an option for my nephew, I’d be disappointed that the doctor didn’t suggest it.
That might also be a flaw of a Large Language Model. Since it can't (or won't) "coerce" you into following its advice, it's totally up to you. The problem is, you're definitely not an expert in every field, so you need someone else who has the expertise—and thus the confidence—to "coerce" you and do what is best given a particular circumstance.
"Avoid a doctor if possible"? Hope you're talking about exercising and keeping yourself healthy, rather than asking an LLM for which pills you should take when you're sick.
Ok obviously not some random LLM. I was referring to an LLM made and trained specifically to give medical advice, probably with the help of doctors.
Me personally, I’d rather not have a doctor coerce me. I’d rather have an LLM tell me everything there is to know while I retain the right to ignore it, without being guilt-tripped into doing something or having the risks exaggerated to me. That would let me speak freely.
I bet a lot of people would be more willing to tell an LLM their issues than a doctor, as it won’t judge them for ignoring it. This will make people more open and honest with LLMs and, on average, get them advice that better suits their problems.
Kind of like how if someone knows a psychologist legally has to report if he thinks you might commit suicide (in Canada), they likely won’t be nearly as honest with him.
I loved this article so much! I'm an academic librarian with a teaching load. I teach research methods and routinely start with the question: what makes good research? Answer: good reading and good writing. I have been told for years and years that some technology will render my work and position obsolete. Here is how I plan to update my course for next academic year: request a non-computer-lab classroom, and require the annotated bibliography to be handwritten first in bluebooks and then typed and submitted for the final (I'll compare the entries and enter two separate grades, weighting the handwritten notebook higher than the final product). It's certainly more work for me, but I am hopeful that it will be worth it in the long run.
How wonderful that you are choosing to go against the stream! Your students will benefit greatly from the extra effort that will be required on their part. Importantly, it shows that you truly care about your work and that is a potent message for students to witness.
Effort is always worth it.
"Learning is more than just about getting information and skills into somebody’s head. It happens in the context of relationships." Yes! Artificial relationships created by AI have no place in a classroom. The underlying message we send to children when we place them in front of an AI chat bot to learn is that they are not worthy of the better version, and actual teacher!
Agreed! Thanks for all the work that you are doing to expose the underlying motivations of the EdTech movement. The huge investments, and profits, of tech firms in this endeavor are something that we did not address here. But I think it is clear that profit rather than learning progress underlies much of this AI hype.
Ruth, they are using the same playbook as the National Education Technology Plan from 2010. All they did was replace "technology" with "AI"!
Right, AI can’t download wisdom to our brains, only information and analysis. Contextualized, embodied learning is needed. Our cognitive memory and grey matter are enhanced when we allow our brains to use our bodies, our sensory apparatus, which is why acquiring a language is far superior to learning a language (and why the brains of multilingual individuals experience delayed onset of certain types of Alzheimer’s). The more neuro-musculature nodes engaged, the better!
We seem to be caught in a vicious cycle. We use a new gimmick in schools that’s meant to fix a problem, the gimmick makes the problem worse, and then we use another gimmick to fix the problem created by the last one.
This has led us to an increasingly mechanized (gimmick-ified) system that is less effective and more dehumanizing. It’s also so saturated with gimmicks that we start blaming the one good thing that continues to sustain formal education: competent teachers who care.
I’m currently witnessing the demise of the Project Based Learning (PBL) gimmick. Our district built whole new campuses dedicated to the dumb idea that making everything a group project would improve learning. This led to lower test scores and more misbehavior, so these campuses have returned to “traditional” instruction.
There’s money to be made with all this stuff, which is why it’ll keep on being a problem. Ironically, we’d save money by just hiring more teachers and training them better.
Thanks for adding your perspective Auguste. We do indeed seem caught in a gimmick cycle, and AI will surely amplify this even more. And I agree that at the core of it lies not only our mechanized view of the world, but the immense profit to be made.
PBL is such a huge waste of time.
Yes, you can’t get rid of relationships without downgrading education, but educrats don’t understand that. They want “efficiency,” and skills regardless of the knowledge behind those skills. On top of that, there’s so much money to be made from selling AI to schools, and money to be saved by giving every kid an AI teacher, that I don’t see how governments and taxpayers don’t leap at it. I hope you’re right and I’m wrong, but I see too many of my peers and students offshoring any difficult thinking to AI, so that they don’t have to work nearly as hard, for me to think otherwise. I’ve made nearly everything handwritten and done in class to try to negate that, but there’s only so much I can do. Kids even know to make intentional writing errors in an AI response so detectors miss it, and plagiarism is hard to prove because of that.
This is brilliant and a perfect summary of the situation. My daughter was until recently in a primary school that was going full ed-tech (Scotland seems to be a test bed for this stuff) and had even introduced VR headsets, to my absolute horror. After returning to school after the lockdowns, I noticed how she was going backwards in almost everything and not learning anything, totally disengaged and bored. The lack of slow-brain deep learning was exactly the problem! Alongside that came the loss of creativity in favour of rapid maths (a disaster when you haven't even got the basics mastered) and Chromebook-based learning. Now she is in a school where tech use is limited and the teaching is creative and interactive, and she is flourishing. Good old-fashioned nurturing teaching - that's all we want for our kids please! Thank you so much for this important bit of writing. I have been raising the alarm bells within my community and the silence is deafening, but I think people are starting to wake up and push back. We have to! x
Thanks for adding your perspective Rosalind! For those who would like to read more about her experience, see her post: https://substack.com/home/post/p-157608444
Derek Muller's talk is well worth sharing with other parents, teachers, and school administrators.
It intrigues me that Charlotte Mason (19th-century educational pioneer who I know you have referenced in other articles) saw the necessity of first defining the child as a full person before crafting an educational approach. We do not educate persons by delegating their souls to the care of machines. We hear often about schools failing children academically, and ed-tech is one solution people propose (against the evidence) to address this problem - but using computers and AI to "scale" education is trying to give children the world at the expense of their souls, a trade-off that was so evidently bad to Jesus of Nazareth that he posed it as a rhetorical question.
I bumbled along for ages trying to learn an instrument from online videos, and good ones at that. Two months of once-a-week private lessons in the room with a teacher and I've made more progress than I had in years online. There's nothing like individualized instruction and feedback.
My former boss used to say that what has no price has no value, and that offering a discount to a client who is unaware of it is like throwing money out the window. It's probably over-simplistic, but an education without effort, or learning only through fun with no failure, will deprive children of learning tenacity and perseverance, and also of legitimately enjoying the satisfaction of having overcome their limits: of having suddenly understood a mathematical problem and, Eureka, a new high is reached, a new world is there to discover. Without cost and effort (and love), we are mere frauds.
I just can’t wrap my head around how so many writers are embracing AI. Of course it makes things easier, but the result is we are no longer actually learning to write…. Thank you for another extraordinary piece. I’ll be chewing on this for a while.
I'm always drawn in by your selection of pictures. I'm an old Brit, too disabled, though I hope temporarily, to make the pilgrimage in Spain. Wishing you all good walking. It goes with good talk.
I taught for a bit and had family members who were teachers. I was heavily critical of much British education in my time, though it improved in the 1960s for a while; it has been a mixed bag since then. I tend to hold off now, given the way things have gone and are currently threatened. Literacy matters; numeracy is still incredibly limited across the whole range of our class structure. More and more, society travels on mental tramlines.
Concern for the sustainability of an AI-driven culture, extravagant with energy and materials and environmentally destructive, extends to worrying for our humanity trained by machine. McGilchrist, I am finding, offers plausible explanations for the training our 'modern' brains have already received. He makes use of clinical and neurological as well as philosophical examples, including use of the branch of natural philosophy we call science.
The uses that the mechanised society has for human beings, as 'it' extends a notionally hybrid intelligence, might not be that inclusive depending on who or what takes charge of priorities. We see enhanced arms races across finance, business, and military, which is not reassuring.
This is one of the most grounded reflections I’ve read on AI and education. The contrast between System 1 and System 2 couldn’t be more relevant right now, especially as so many schools rush to integrate new tools without asking the harder question: are we bypassing the struggle that leads to learning?
My wife is an elementary school librarian who hates what devices do to the kids and teachers alike, though she is who they go to for repairs and problems...
I am in the UK and undertook most of my education before computers were readily available. The height of technology in the university library was the microfiche, so we could find the books we needed. I was taught how to think in order to solve problems, rather than learning by rote. I also did a college course a few years ago and studied Sociology, Psychology and English Literature. My English Lit teacher was brilliant, and I finally understood poetry. Took me a little while, but I got there. Would a computer have been able to teach me that?
May I also recommend the following book:
https://www.amazon.co.uk/Mother-Invention-Ideas-Ignored-Economy/dp/0008430772
Ms Marçal discusses how a computer can learn to play chess but would never be able to play tennis to the standards of Serena Williams.
Thank you so much for this brilliant article. I’m glad I was a tortoise.
This is an excellent, well-balanced article, struggling with one of the key issues of the day. I was blessed to know Danny Kahneman well; I have written about him many times on Substack, and about his brilliant mentoring of young academics. He would always remind me, "We think much less than we think we think." The decline in reflective and critical thinking and reading is an area I am deeply concerned with; I see it every day on campus and in business settings. Your article connects with me deeply. The emphasis on the distinction between System 1 and System 2 thinking is, I believe, the crux of the issue.
I often wonder if a contributing factor to this decline is a lack of productive stress in the formative years of education. Have we, in an attempt to make learning "easier," inadvertently removed the necessary cognitive friction that builds the intellectual muscle for System 2 thinking? Jordan Peterson has a wonderful five-minute video where he is asked by students what one piece of advice he would give, and he responds: 'learn to write' and 'read more.' I show this short talk to all students - master's, bachelor's and postgraduate executives: https://www.youtube.com/watch?v=Kwjw2J6ByJo
Your article's historical perspective on the failed promises of educational technology is a crucial reminder to temper our expectations. However, it's important to differentiate AI, and specifically Large Language Models (LLMs), from their predecessors like TV, radio, and MOOCs. LLMs present a fundamentally different capability.
This brings me to the Bloom 2 sigma problem, a significant finding in education research. In the 1980s, educational psychologist Benjamin Bloom discovered that students who received one-on-one tutoring, tailored to their individual needs, performed two standard deviations better than students in a traditional classroom setting. This is a massive improvement, but one-on-one human tutoring for every student has always been logistically and financially impossible to scale.
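(A quick footnote on the statistics, with my own symbols rather than Bloom's: "2 sigma" is the standardized effect size,

```latex
d = \frac{\mu_{\text{tutored}} - \mu_{\text{classroom}}}{\sigma_{\text{classroom}}} \approx 2,
\qquad \Phi(2) \approx 0.977,
```

which is why the finding is often glossed as the average tutored student performing around the 98th percentile of the conventional classroom.)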
Here is where responsible AI, particularly LLMs, could be a genuine game-changer. These models have the potential to provide personalized, adaptive learning experiences for every student, effectively solving the Bloom 2 sigma problem by offering a scalable version of individualized tutoring.
Of course, the key word is "responsible." I have had the privilege of teaching over 1,000 professors and academics on the responsible implementation of AI in education and how it can benefit students. The goal is not to replace the teacher or eliminate effort, but to augment the learning process. An anecdote from one of my postgraduate courses serves as a cautionary tale. A bank CEO was presenting his work, and when I questioned him on whether he had carefully read what he was presenting, he candidly admitted that he had not and had "become lazy and overly reliant on the AI." This highlights the critical need for an educational framework that emphasizes critical thinking and verification when using these powerful tools.
Ultimately, the responsible use of AI in learning can lead to significant benefits. By automating lower-level cognitive tasks, we can free up students' mental bandwidth to focus on the higher-order thinking skills that are the hallmark of Danny's System 2. AI can be a powerful Socratic partner, a tireless tutor, and a personalized learning guide. The future of education may not be a revolution in the way Edison imagined, but a powerful evolution where technology, used wisely, helps us to become deeper thinkers, not shallower ones.
Thanks for adding your reflections Colin! I can understand why LLMs might seem the hoped-for solution to Bloom’s 2 sigma problem, and I will be very curious whether the effect is replicated with an AI rather than a human interlocutor. My guess would be that the human relationship plays a crucial role, especially in ways that might not be empirically measurable. Will be curious to see future studies on this question.
I agree, the human relationship is critical. This is shown in schools and universities across the world.
With respect to LLMs, it very much depends upon the system used. For example, if a student uses an AI avatar of me and asks me questions in natural language, as if they are having a one-on-one conversation with me (and that avatar relies on an upload of all my notes)... this is different to simply asking a GPT something. The student feels as if they are engaging with me directly, and then in the 'physical' world I can see the students' interactions and learn where they need specific help.
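To make that concrete, here is a minimal sketch of the loop such an avatar might run: retrieve from the uploaded notes, answer, and log the question so the teacher can see where students need help. Everything in it is illustrative; `NotesAvatar` and `generate_reply` are invented names, and the keyword-overlap retrieval is a toy stand-in for a real embedding search and LLM backend.

```python
from collections import Counter
from datetime import datetime


def generate_reply(context: list[str], question: str) -> str:
    # Placeholder for the LLM call that would actually draft the answer;
    # here we simply echo the best-matching note passage.
    return f"From the course notes: {context[0]}" if context else "No matching notes."


class NotesAvatar:
    """Toy 'avatar over my notes' pattern: retrieve, answer, log."""

    def __init__(self, notes: list[str]):
        self.notes = notes                             # the instructor's uploaded notes
        self.question_log: list[tuple[str, str]] = []  # (timestamp, question)

    def _retrieve(self, question: str, k: int = 2) -> list[str]:
        # Naive keyword overlap stands in for real semantic search.
        q_words = set(question.lower().split())
        ranked = sorted(
            self.notes,
            key=lambda note: len(q_words & set(note.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def ask(self, question: str) -> str:
        self.question_log.append((datetime.now().isoformat(), question))
        return generate_reply(self._retrieve(question), question)

    def trouble_spots(self, top: int = 3) -> list[tuple[str, int]]:
        # Teacher-facing view: which terms students keep asking about.
        words = Counter(
            w for _, q in self.question_log for w in q.lower().split() if len(w) > 4
        )
        return words.most_common(top)


avatar = NotesAvatar([
    "Counterpoint: combining independent melodic lines according to rules.",
    "Cadences: perfect, plagal, imperfect and interrupted endings of phrases.",
])
print(avatar.ask("What is an interrupted cadence?"))
print(avatar.trouble_spots())
```

The point of the sketch is the last method: the pedagogical value lies less in the answer itself than in the log of questions, which hands the teacher a map of where the human relationship should be spent.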
How can a teacher scale interpersonal relationships to a large class? Indeed, they can't, and it's good to hear it said. I am a music teacher with about 70 students, mostly in groups of 6-12 at a time, in classes 30-55 minutes long. It occurs to me that these group sizes are similar to a traditional extended family size. I can remember 70 name-to-face-to-ability-level correlations only because they come in these appropriately sized groups. Remembering this personal information about each student gives me a chance to use every child's name repeatedly during every class and to give specific feedback on each child's specific challenges at least some weeks.
While teacher-student relationships are not scalable beyond biological limits, I note that the teacher may have significant influence over the quality of student-student relationships. The structure of the learning environment may determine, for example, whether students' natural competitiveness is expressed in a healthy or unhealthy way, and healthy competition between students is a great driver of quality learning. Most teachers who care are likely already at capacity for personally knowing their students' name-to-face-to-personal-matters correlations, but many students probably aren't at theirs. If the environment (which is, granted, generally pretty entrenched and often barely influenceable by a front-line teacher) would stop incentivising unhealthy competition, students could learn through strong relationships with peers, both older and younger children. Mixed age groups seem a key to this. Kids naturally know how to teach each other some aspects of life, while other aspects they hilariously don't. Basically: spend very little time explaining what went wrong, and then at least twice as much time explaining what should have happened instead (some adults also forget, or never knew, this one...).
Having spent some years considering how to make my teaching efforts more scalable, the other line of effort I've come up with is resource development. Every group of kids and teaching context is unique, so there is a never-ending need for well-tailored teaching resources that meet the needs of some group somewhere, and at best foster good relationships within that group. (I guess the AI could have a role as an effective search engine, helping human teachers find the resources that other human teachers have created and uploaded in the past. Maybe even a role in adapting an existing resource to a local context with some kind of find-and-replace function, sketched below.)
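To illustrate that find-and-replace idea, a toy sketch, assuming shared resources carry simple `$placeholder` fields (the resource text and field names are invented for the example):

```python
from string import Template

# A shared resource with $placeholders marking the locally variable parts.
shared_resource = Template(
    "Rhythm game for $group_size players.\n"
    "Clap the pattern while singing '$local_song'.\n"
    "Suggested for ages $age_range."
)

# A teacher (or an AI assistant proposing the values) localises it:
print(shared_resource.safe_substitute(
    group_size="8",
    local_song="Waltzing Matilda",
    age_range="7-9",
))
```

`safe_substitute` leaves any unfilled placeholder intact, so a partially adapted resource still reads sensibly and can be finished by hand.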
I recently identified a very specific local need and made this resource to fit into it. https://substack.com/home/post/p-161438388
Thanks so much for sharing your experience Megan! Thanks especially for the detailed description of how you approach the student-teacher relationship; the comparison to extended family size was very insightful.