
Less certainty, more awareness: AI is blazing a trail that schools are afraid to follow

When a machine says, “I don’t know,” it’s time to reinvent education.

By Dr. Tawhid CHTIOUI, President and Founder of aivancity, the leading institution for AI and data

Just imagine. You ask a question of a state-of-the-art artificial intelligence system—packed with artificial neurons, fed a massive amount of data, and more connected than your teenager on a Saturday night—and it answers you, without batting an eye: “I don’t know.”

Not a loading error. Not a server failure. No: a deliberate act of algorithmic modesty.

At the other end of the machine, you’re left speechless. It’s as if ChatGPT had suddenly been struck by an existential doubt.

This scene isn’t science fiction. It’s the result of a very real innovation by Themis AI, an MIT startup that has just endowed generative AIs with an unexpected superpower: doubt. An AI that hesitates. An AI that chooses not to answer. An AI that’s just like us… when we’re honest. A programmed act of lucidity.

For years, we’ve been training machines to answer every question, to speak more than they think, to assert more than they question. We’ve entrusted them with the mission of enlightening the world… while forbidding them to admit when they don’t see clearly.

In this brilliant digital comedy, teachers have long been the first to be relegated to the role of extras. Their voices have been replaced by computer-generated voices, their doubts by automated certainties, and their thoughtful deliberation by instant reactivity.

But now AI itself has taken a step back. It has realized that knowledge without consciousness is the downfall of cognition. It is giving doubt its rightful place.

What if 21st-century education no longer meant answering all the questions, but learning to ask the right ones? No longer piling up knowledge, but cultivating a rarer, more subversive, more fertile skill: knowing how to say, “I don’t know. And that’s why I’m looking.”

At the heart of this mini-revolution lies a simple yet radical approach: teaching an artificial intelligence to recognize when it doesn’t know. That’s exactly what Themis AI, the startup founded by MIT researchers, is doing. Their idea? Add a “boundary awareness layer” to large language models. A sort of filter that measures, in real time, the AI’s level of confidence in its own response. And if the doubt is too great, the machine responds: “I don’t know.”
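The principle can be sketched in a few lines of code. This is purely illustrative: the scoring function, the threshold, and the function names below are my own assumptions, not Themis AI’s actual method, which the article does not detail.

```python
# Illustrative sketch of a confidence gate in the spirit of a
# "boundary awareness layer". The threshold value and the way
# confidence is estimated here are hypothetical.

def estimate_confidence(token_probs):
    """Estimate confidence as the mean probability the model
    assigned to the tokens it actually generated."""
    return sum(token_probs) / len(token_probs)

def gated_answer(answer, token_probs, threshold=0.75):
    """Return the model's answer only if its estimated confidence
    clears the threshold; otherwise admit uncertainty."""
    if estimate_confidence(token_probs) < threshold:
        return "I don't know."
    return answer

# A confident generation passes through unchanged...
print(gated_answer("Paris is the capital of France.", [0.98, 0.95, 0.97]))
# ...while a shaky one is replaced by an admission of doubt.
print(gated_answer("The answer is probably 42.", [0.40, 0.55, 0.30]))
```

The design choice worth noticing: the gate sits outside the model. Nothing about the model’s knowledge changes; only its willingness to speak does.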

A refusal to perform? No. A technical feat combined with an ethical turning point. Because trust isn’t built by perfecting the model’s performance. It’s built by giving the model the ability to question itself.

By reducing errors in professional use by 64%, Themis AI has not only improved a tool: it has redefined the criteria for the maturity of artificial intelligence.

But what’s most fascinating is that this technological innovation… is simply catching up to what the school should have done a long time ago.

Because the truth is, doubt isn’t a weakness. It’s a method. A mindset. A skill.

And if an AI can learn to doubt, why does our education system continue to prioritize self-assurance over clarity? To prioritize quick answers over thoughtful questions?

For centuries, the teacher’s authority rested on one decisive advantage: he knew. He possessed the content, controlled the knowledge, and had the authority to correct those who strayed. He was the guardian of the temple of knowledge.

Then came the Internet, then Wikipedia, then ChatGPT… And suddenly, this role of official distributor of knowledge became as obsolete as a cassette player. The encyclopedia teacher? Outdated.
The search-engine professor? Decommissioned.

But this apparent downgrade is in fact a tremendous opportunity for reinvention. For while AI is capable of reciting, rephrasing, and even writing with unsettling brilliance, it still lacks the ability to engage, provide context, question, add nuance, and evolve.

Yet this is precisely what a 21st-century teacher must do:
no longer simply provide answers, but rather guide students along their learning paths.
The teacher becomes an architect of skills.

The teacher’s role is no longer to know everything, but to help people understand better. Not to impose a truth, but to create spaces for exploration. Not to fear AI, but to use it as a pedagogical mirror, revealing areas of uncertainty, blind spots, and things left unsaid. In other words, while AI generates content, the teacher creates meaning.

And in a world saturated with mass-generated information, meaning has become a far more precious commodity than raw knowledge. An AI can write a perfect essay. But it can’t explain why that question is worth asking.

So, no, the professor isn’t dead. He’s just changed his outfit. He’s traded in the smock of the know-it-all for the hard hat of the skills builder. And his best tool? Doubt…

Had Socrates met this AI, he would probably have started by silencing it.

Then he would have taught it to ask questions. Not questions meant to elicit answers. Questions meant to cast doubt on the answer. Questions meant to explore the shadows surrounding certainty. Questions like holes in knowledge, where thought can breathe.

Because Socrates—and it’s easy to forget— never taught anything. He never gave a lecture, never graded a paper, never assigned a score out of ten. All he did was talk, tirelessly. And when pressed, he would reply: “All I know is that I know nothing.”

Today, he would fail the agrégation, France’s competitive teaching exam…

But his spirit has returned with a vengeance, and through an unexpected avenue: that of machines.

Themis AI, this artificial humility module, does nothing more than implement an algorithmic version of Socratic doubt. It does not seek to make AI more knowledgeable, but more lucid. Less arrogant. More reliable, precisely because it knows it can make mistakes.

What if we taught our students to identify the flaws in a line of reasoning, even when it’s well presented, well written, or even generated by ChatGPT? What if we embraced uncertainty as a learning tool, rather than a weakness to be corrected?

Because, in the end, intelligence isn’t the ability to answer quickly; it’s the ability to stay with the question without panicking.

So yes: AI has doubts, so it thinks. And if it starts to think like Socrates, maybe it’s time for human education to do the same.

In recent days, headlines have thundered like premature autopsies.

The latest development? An MIT study of 54 participants (aged 18 to 39), divided into three groups (ChatGPT, Google Search, or “brain-only”). The result: the ChatGPT group showed significantly lower brain activity, less creativity, poorer memory, and essays judged “soulless” and “uniform” [i]. A similar line of findings points to a decline in critical thinking among those who place blind trust in AI.

It would seem that artificial intelligence is a kind of cognitive microwave: practical, fast, but it takes the joy out of cooking. Except there’s a slight problem with this line of reasoning: it’s not AI that makes you stupid; it’s the way you use it.

Give a three-year-old a drill, and he won’t build you a treehouse. Give ChatGPT to an educational system designed for multiple-choice questions, grading scales, and essays, and you’ll get… clones of answers.

But if you give that same AI to minds trained to doubt, question, and deconstruct, then it becomes a formidable intellectual adversary. A training ground. A sparring partner for reasoning.

It’s not the tool that determines the level of thinking; it’s the level of thinking that determines what the tool is worth.

So, yes, AI can cause you to lose brain cells.
But only if you’ve been taught to obey, not to explore.
Only if you’ve been led to believe that “getting the right answer” is more important than “understanding why that answer makes sense.”

In other words: it’s not artificial intelligence that threatens us. It’s artificial pedagogy.

And if we want to avoid becoming processor-assisted intellectual zombies, maybe it’s time to bring doubt, humility, and friction back into our learning. To stop pretending that there’s always a right answer. To stop prioritizing speed over depth. To transform error into a method, ignorance into a starting point, and uncertainty into a compass.

What if we reimagined education—not just schooling, but education in the broadest sense of the term—as a way of navigating uncertainty?

No longer a parade of neatly arranged correct answers, but a laboratory of uncertainty, a playground for testing, experimenting, getting lost, and starting over.

Because while AIs know how to generate text, they still don’t know how to generate critical thinking. They have no intuition, no fertile doubts, no inner voice that says, “Hmm, really?”
And if schools are to prepare tomorrow’s citizens, they would do well to teach what machines will never do: to question one’s own assumptions.

And if we want to prepare minds capable of resisting the algorithmic illusion, we’ll need more than just dedicated teachers: we’ll also need enlightened parents.

Yes, teaching children to embrace doubt starts at home. It begins when we stop answering every question with absolute certainty. When we tell a child, “I don’t know, but we can look it up together.” When we value curiosity over certainty, and inquiry over rote memorization.

So what can we do about it? Here is a simple yet effective program for teaching uncertainty, both at school and at home:

Turning an admission of ignorance into a rite of passage.

At school, start every lesson with a question that no one can answer right away—not even ChatGPT. Create “I don’t know yet” badges instead of “you’re wrong” notes.

At home, when a child asks a question, resist the urge to explain everything. Sometimes respond with, “Good question… do you want to look it up together?” Show them that even adults don’t know everything, and that it’s okay. It’s even exciting.

What if two conflicting statements could both be useful?

At school, cultivate the art of “yes, but,” of “it depends,” and of “look at it from another angle.” Turn doubt into a skill, not a flaw. Cultivate an appreciation for nuance, complexity, and gray areas.

At home, stop answering “Is it true or false?” with a simple “true” or “false.” Sometimes say, “It’s more complicated than that.” Read stories together that don’t have obvious good guys or bad guys. Show that the world isn’t black and white, and that’s what makes it interesting.

Mistakes aren’t faults. They’re the building blocks of intelligence.

At school, rather than “correcting,” let’s analyze discrepancies, failed hypotheses, and poorly formulated intuitions. Let’s show that thinking is built on deviations, not certainties. Intelligence without error is intelligence without learning.

At home, when a child makes a mistake, avoid saying, “See, you’re not thinking!” and instead say, “Interesting… what did you mean?” This helps the child explain their reasoning and correct the mistake on their own, without feeling ashamed. Create an environment where mistakes aren’t seen as failures, but as stepping stones.

At school, instead of banning ChatGPT, let’s learn to ask it meaningful questions. Let’s identify its errors, evaluate its confidence levels, and discuss its limitations. Let’s treat machines as subjects of study, not as oracles.

At home, explore AI with your children, not behind their backs. Ask them, “What do you think?” and then, “Do you agree?” Teach them to question, to be skeptical, and to dig deeper. Show them that just because something is well-formulated doesn’t mean it’s right.

The student of the future doesn’t need to know everything. They need to know how to formulate a problem, propose a hypothesis, test an intuition, and challenge a line of reasoning. In short, they need to become an investigator of reality, a craftsman of doubt, and an architect of fluid truth.

In school, replace knowledge tests with mini-investigations. Teach students how to research, formulate a hypothesis, and articulate a question. Turn students into detectives of reality, not mere repositories of knowledge.

At home, encourage curiosity. When a child says, “I don’t understand,” don’t give them the answer, but give them a hint. Help them explore different sources. Teach them that understanding is a journey, not a box to be checked off.

Because in a world saturated with automated certainties, the true luxury of the mind is not knowing. To search, for a long time. To doubt, intelligently. And to understand, finally, that it’s not the answer that matters. It’s how you approach it.

What if the greatest educational revolution of our century weren’t technological, but philosophical? What if the most valuable thing AI could teach us wasn’t speed, productivity, or performance… but humility?

Yes, we have to admit it: we have taught people to rely too much on answers and to seek too little on their own. We’ve created systems where those who question things are seen as slow, weak, and confused. Where mistakes are punished, uncertainty is stigmatized, and nuance is frowned upon.

But doubt is the breath of intelligence, the necessary pause between information and understanding, the breath before the leap.

And now an AI is reminding us of it. An algorithm finally dares to say what so many teachers no longer dare: “I don’t know.”

The AI’s revelation isn’t that it thinks; it’s that it doesn’t know everything—and that’s perhaps the smartest thing it’s ever said.

Perhaps this is the beginning of something big. Not the end of knowledge, but a rebirth.
A shift from vertical education—where knowledge is poured into open minds—to circular, dialogical, Socratic education.

So to all those who teach, train, and support: don’t be afraid to say “I don’t know.”

Fear pretending to know. Make doubt a tool, humility a way of life, uncertainty a way of learning. Not to give up on understanding. But to relearn how to think.

Because in the face of machines that claim to know everything, the most human, the freest, the most powerful act is perhaps, simply… to say: “I don’t know. And that’s why I’m alive.”

Selected by Keyrus as one of the 25 most influential global figures in AI and data (January 2025), Tawhid CHTIOUI is an international expert, speaker, and serial entrepreneur in higher education and training. He is President and Founder of aivancity, the Grande École of Artificial Intelligence and Data. He holds a PhD in Management Sciences from Paris Dauphine University, completed a leadership development program in higher education at Harvard University, and has held academic and administrative positions at business schools in France and abroad. Tawhid CHTIOUI is a Chevalier (2016) and Officier (2022) of the Ordre des Palmes Académiques. He has also received several international awards, including the “Top 100 Leaders in Education Award” from the Global Forum on Education & Learning, “The Name in Science & Education Award” from the Socrates Committee’s Oxford Debate “University of the Future,” the “Top 10 Most Inspiring People in Education 2022” from CIO VIEWS, and the “Trophée de la Pédagogie” 2024 from Eduniversal.

[i] https://www.medialab-factory.com/ia-intelligence-artificielle-rend-plus-bete-ou-augmente-creativite/