Less certainty, more awareness: AI is showing the way that schools are afraid to take

When a machine says, “I don’t know,” it’s time to reinvent education.

By Dr. Tawhid CHTIOUI, Founding President of aivancity, the leading school for AI and data

Imagine this. You ask a state-of-the-art artificial intelligence—packed with artificial neurons, fed a massive amount of global data, and more connected than your teenager on a Saturday night—a question, and it replies, without batting an eye: “I don’t know.”

Not a loading error. Not a server outage. No: a deliberate act of algorithmic modesty.

On the other end of the line, you’re left speechless. It’s as if ChatGPT had suddenly been struck by an existential crisis.

This scene isn’t science fiction. It’s the result of a very real innovation from Themis AI, a startup that emerged from MIT’s labs, which has just given generative AI an unexpected superpower: doubt. An AI that hesitates. An AI that chooses not to answer. An AI that resembles us… when we’re being honest. A programmed act of lucidity.

For years now, we have trained machines to respond to everything, to talk more than they think, to assert rather than question. We have entrusted them with the mission of enlightening the world… while forbidding them to admit when they can’t see clearly.

In this grand digital farce, teachers have long been the first to be relegated to the role of extras. There has been an effort to replace their voices with synthetic ones, their doubts with automated certainties, and their thoughtful deliberation with instant responsiveness.

But now AI itself is taking a step back. It is discovering that knowledge without consciousness is nothing but the ruin of cognition. It is restoring doubt to its rightful place.

What if education in the 21st century were no longer about answering every question, but about learning to ask the right ones? No longer about piling up knowledge, but about cultivating a skill that is rarer, more subversive, and more fruitful: the ability to say, “I don’t know. And that’s why I’m searching.”

At the heart of this mini-revolution lies a simple yet radical step: teaching artificial intelligence to recognize when it doesn’t know. That’s exactly what Themis AI is doing. Their idea? To add a “layer of awareness of limitations” to large language models—a sort of filter that measures, in real time, the AI’s level of confidence in its own response. And if the doubt is too great, the machine replies: “I don’t know.”
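The principle can be sketched in a few lines of code. This is a minimal illustration, not Themis AI’s actual method: the scoring rule (average token probability) and the threshold are assumptions chosen for clarity.

```python
import math

# Hypothetical sketch of a "confidence layer" wrapped around a language model.
# The scoring rule and threshold below are illustrative assumptions, not the
# technique Themis AI actually uses.

def confidence_score(token_logprobs):
    """Average token probability of a drafted answer: a crude proxy
    for how sure the model is of its own output."""
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)

def answer_or_abstain(draft, token_logprobs, threshold=0.6):
    """Return the drafted answer only if confidence clears the threshold;
    otherwise abstain with an explicit 'I don't know.'"""
    if confidence_score(token_logprobs) >= threshold:
        return draft
    return "I don't know."

# A confident draft passes through; a shaky one is withheld.
print(answer_or_abstain("Paris is the capital of France.", [-0.05, -0.1, -0.02]))
print(answer_or_abstain("The capital is probably Lyon.", [-1.2, -2.3, -0.9]))
```

The point of the sketch is the design choice, not the numbers: the model is not made more knowledgeable, it is simply given permission to withhold an answer it is unsure of.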

A refusal to overcome obstacles? No. A technical feat coupled with an ethical turning point. Because trust isn’t built by perfecting the model’s performance. It’s built by giving it the ability to doubt itself.

By reducing hallucinations in professional use by 64%, Themis AI has not only improved a tool—it has redefined the criteria for the maturity of artificial intelligence.

But the most fascinating thing is that this technological innovation… is merely making up for what schools should have been doing for a long time.

Because, in truth, doubt is not a weakness. It is a method. A mindset. A skill.

And if an AI can learn to doubt, why does our education system continue to value confidence over clarity? To prioritize quick answers over thoughtful questions?

For centuries, a teacher’s authority rested on one decisive advantage: they knew. They possessed the knowledge, controlled the information, and took it upon themselves to correct those who strayed. They were the guardians of the temple of knowledge.

Then came the internet, then Wikipedia, then ChatGPT… And suddenly, that role as the official distributor of knowledge became as obsolete as a cassette player. The “encyclopedia teacher”? Outdated.
The “search engine teacher”? Out of the picture.

But this apparent demotion is actually a tremendous opportunity for reinvention. For while AI is capable of reciting, rephrasing, and even writing with disconcerting flair, it still cannot guide, contextualize, question, nuance, or foster growth.

Yet this is precisely what a 21st-century teacher must do:
no longer simply provide answers, but guide students’ learning journeys.
The teacher becomes a skills architect.

Their role is no longer to know everything, but to help others understand better. It is not to impose a single truth, but to create spaces for exploration. It is not to fear AI, but to use it as an educational mirror, revealing gray areas, blind spots, and unspoken truths. In other words, where AI generates content, the teacher creates meaning.

And in a world saturated with information churned out nonstop, meaning has become far more valuable than raw knowledge. An AI can write a perfect essay. But it cannot explain why this question is worth asking.

So, no, the teacher isn’t dead. He’s just changed his outfit. He’s traded in the scholar’s robe for the hard hat of a skills builder. And his best tool? Doubt…

Had Socrates met ChatGPT, he would probably have started by silencing it.

Then, he would have taught it to ask questions. Not questions meant to elicit answers. Questions meant to cast doubt on the answer. Questions meant to explore the shadows surrounding certainty. Questions like gaps in knowledge, where thought can breathe.

Because Socrates—and we tend to forget this all too quickly—never actually taught anything. He never gave a lecture, never graded an assignment, never assigned a score out of ten. All he did was engage in dialogue, tirelessly. And when pressed to take a stand, he would reply, “All I know is that I know nothing.”

Today, he would fail the agrégation, France’s competitive examination for recruiting teachers…

But his mind is making a strong comeback, and through an unexpected avenue: that of machines.

Themis AI’s artificial humility module does nothing more than establish an algorithmic version of Socratic doubt. It does not seek to make AI more knowledgeable, but more self-aware. Less arrogant. More reliable, precisely because it knows it can be wrong.

What if we taught our students to identify the blind spots in an argument—even when it’s well-presented, well-written, or even signed by ChatGPT? What if we reimagined uncertainty as a learning tool, rather than a weakness to be corrected?

Because, deep down, intelligence isn’t about answering quickly; it’s about being able to stay focused on the question without panicking.

So yes: AI doubts, therefore it thinks. And if it starts to think like Socrates, perhaps it is up to human education to do the same.

In recent days, the headlines have been as sensational as premature obituaries:

  • “Generative AI makes people dumber.”
  • “ChatGPT is eroding our critical thinking.”
  • “Students cheat, and minds go blank.”

The latest round? An MIT study involving 54 participants (aged 18 to 39), divided into three groups (ChatGPT users, Google Search users, and “brain-only” users). Result: the ChatGPT group showed significantly lower brain activity, less creativity, poorer memory, and essays deemed “soulless” and “uniform”[i]. A similar finding points to a decline in critical thinking among those who “trust” AI.

It’s as if artificial intelligence were some kind of cognitive microwave: convenient, fast, but it takes the joy out of cooking. Except there’s a slight problem with that line of thinking: it’s not AI that makes us stupid; it’s how we use it.

Give a three-year-old a drill, and they won’t build you a treehouse. Give ChatGPT to an education system designed for multiple-choice answers, grading rubrics, and essays, and you’ll get… cookie-cutter answers.

But if you give that same AI to minds trained to doubt, to question, and to analyze, then it becomes a formidable intellectual opponent. A training ground. A sparring partner for the mind.

It’s not the tool that makes the level; it’s the level that makes the tool.

So, yes, AI can make you lose your mind.
But only if you’ve been taught to obey, not to explore.
Only if you’ve been led to believe that “having the right answer” is more important than “understanding why that answer makes sense.”

In other words: it’s not artificial intelligence that poses a threat to us. It’s artificial pedagogy.

And if we want to avoid becoming processor-assisted intellectual zombies, perhaps it’s time to reintroduce doubt, humility, and friction into our learning. To stop pretending that there’s always a right answer. To stop valuing speed over depth. To turn mistakes into a method, ignorance into a starting point, and uncertainty into a compass.

What if we reimagined education—not just school, but education in the broadest sense—as a way of navigating uncertainty?

No longer a parade of neatly arranged correct answers, but a laboratory of uncertainty, a playground for testing, formulating, getting lost, and starting over.

Because while AI can generate text, it still cannot generate critical thinking. It lacks intuition, constructive doubt, and that little inner voice that says, “Hmm, really?”
And if schools are to prepare the citizens of tomorrow, they would do well to teach what machines will never be able to do: to think critically about one’s own ideas.

And if we want to nurture minds capable of resisting algorithmic illusion, it will take much more than dedicated teachers: it will also take engaged parents.

Yes, teaching children to question things starts at home. It starts when we stop answering every question with absolute certainty. It starts when we tell a child, “I don’t know, but we can look it up together.” It starts when we value curiosity over certainty, and inquiry over rote memorization.

So, what concrete steps could we take? Here is a brief yet thoroughly sensible plan for teaching children—both at school and at home—how to cope with uncertainty:

  • Learning to say “I don’t know” (and being proud of it)

Turn the admission of ignorance into a rite of passage.

At school, start each lesson with a question that no one can answer right away—not even ChatGPT. Create “I don’t know yet” badges instead of “you got it wrong” grades.

At home, when a child asks a question, resist the urge to explain everything. Sometimes respond with, “That’s a good question… do you want us to look it up together?” Show them that even grown-ups don’t know everything, and that’s okay. It’s actually exciting.

  • Teaching paradoxical thinking

What if two opposing statements could both be… useful?

In school, cultivate the art of saying “yes, but,” “it depends,” and “look at it from a different angle.” Turn doubt into a skill, not a flaw. Help students appreciate nuances, complexities, and gray areas.

At home, stop answering “true or false?” questions with simple “true” or “false” answers. Sometimes, say, “It’s more complicated than that.” Read stories together that don’t have obvious good guys or bad guys. Show that the world isn’t black and white, and that’s what makes it interesting.

  • Rehabilitating error as a method

A mistake isn’t a fault. It’s the rough draft of intelligence.

In school, rather than “correcting” students, let’s analyze their missteps, their flawed assumptions, and their poorly articulated intuitions. Let’s show that thinking is built on missteps, not certainties. Intelligence without error is intelligence without learning.

At home, when a child makes a mistake, avoid saying, “See, you’re not thinking!” and instead say, “That’s interesting… what were you trying to say?” Help them work through their reasoning and correct the mistake on their own, without feeling ashamed. Create an environment where mistakes aren’t seen as failures, but as stepping stones.

  • Putting AI to the test (not the other way around)

In schools, instead of banning ChatGPT, let’s learn to ask it meaningful questions. Let’s learn to spot its hallucinations, assess its reliability, and discuss its limitations. Let’s treat machines as subjects of study, not as oracles.

At home, explore AI with your children, not behind their backs. Ask them, “What do you think?” and then, “And you—do you agree?” Teach them to verify, to question, and to dig deeper. Show them that just because something is well-phrased doesn’t mean it’s true.

  • Training investigators, not encyclopedists

The student of the future doesn’t need to know everything. They must know how to formulate a problem, propose a hypothesis, test an intuition, and dismantle a line of reasoning. In short, they must become an investigator of reality, a craftsman of doubt, and an architect of fluid truth.

In school, replace traditional tests with mini-investigations. Teach students how to research, formulate a hypothesis, and articulate a doubt. Turn students into detectives of the real world, not mere repositories of knowledge.

At home, encourage curiosity. When a child says, “I don’t understand,” don’t give them the answer—instead, offer a hint. Help them look for answers in various sources. Teach them that understanding is a journey, not a box to check off.

Because in a world saturated with pre-packaged certainties, true intellectual luxury lies in not knowing. In searching, for a long time. In doubting, intelligently. And in finally understanding that it’s not the answer that matters. It’s the way we approach it.

What if the greatest educational revolution of our century were not technological, but philosophical? What if the most valuable lesson AI teaches us were not speed, productivity, or performance… but humility?

Because, let’s face it: we’ve focused too much on teaching students how to answer questions and too little on teaching them how to ask them. We’ve created systems where those who doubt are seen as slow, weak, and confused. Where mistakes are punished, uncertainty is stigmatized, and nuance is frowned upon.

But doubt is the lifeblood of intelligence, the necessary pause between information and understanding, the breath before the leap.

And now an AI is reminding us of this. An algorithm has finally dared to say what so many teachers no longer dare to say: “I don’t know.”

AI’s “coming out” isn’t that it thinks—it’s that it doesn’t know everything. And that might just be the smartest thing it’s ever said.

This may be the beginning of something great. Not the end of knowledge, but a rebirth.
A shift from a top-down education system—where knowledge is poured into open minds—to a circular, dialogic, Socratic approach.

So to all of you who teach, train, and mentor: Don’t be afraid to admit what you don’t know.

Be wary of pretending to know. Make doubt a tool, humility a way of life, and uncertainty a means of learning. Not to give up on understanding, but to relearn how to think.

Because when faced with machines that claim to know everything, perhaps the most human, the freest, and the most powerful act is simply… to say: “I don’t know. And that’s why I’m alive.”

Selected by Keyrus as one of the 25 most influential global figures in the field of AI and data (January 2025), Tawhid CHTIOUI is an international expert, speaker, and serial entrepreneur in the field of higher education and training. He is the Founding President of aivancity, the Grande École of Artificial Intelligence and Data. Holder of a PhD in Management Sciences from Paris Dauphine University and a degree from Harvard University’s Leadership Development Program in Higher Education, he has held academic and leadership positions at various business schools in France and internationally. Tawhid CHTIOUI was named a Knight (2016) and later an Officer (2022) of the Order of Academic Palms and has also received several international awards, including the “Top 100 Leaders in Education Award” from the Global Forum on Education & Learning, “The Name in Science & Education Award” from the Socrates Committee Oxford Debate University of the Future, the “Top 10 Most Inspiring People in Education, 2022” award from CIO VIEWS, and the “Trophée de la Pédagogie” 2024 from Eduniversal.

[i] https://www.medialab-factory.com/ia-intelligence-artificielle-rend-plus-bete-ou-augmente-creativite/


