A grandmother's voice, a formidable AI
The voice is soft, slightly hesitant. She talks about her family, asks for details, deliberately loses her train of thought… but she isn't a real elderly person. On the other end of the line, a scammer thinks he's tricked a vulnerable target. He has no idea that he's wasting his time on an artificial intelligence designed to trap him.
That is precisely the goal of Daisy (or dAIsy), the new chatbot launched by O2, the British mobile phone operator. Designed to outsmart scammers trying to defraud vulnerable people, Daisy plays the role of a digital grandmother—slow, endearing, but above all… extremely patient. Here’s how it works.
This type of initiative follows in the footsteps of projects like Jolly Roger Telephone Co. in the United States, which use voice bots that mimic “grandmothers” to make life difficult for telephone scammers.
A creative response to a very real threat: according to the U.S. Federal Trade Commission (FTC), more than 2.5 million complaints about telephone scams were filed in the United States in 2024, resulting in more than $1.1 billion in losses [1].
How does this anti-scam AI work?
The AI relies on a combination of accessible yet well-coordinated technologies:
- a language model designed to understand scammers' speech and generate appropriate responses;
- realistic speech synthesis that mimics the characteristics of an older person's voice (intonation, hesitation, speech rate);
- automated detection of suspicious keywords, such as “bank transfer,” “banking issue,” “family emergency,” etc.
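The keyword-detection step in the list above can be illustrated with a minimal sketch. Everything here is hypothetical (the phrase list, the function name, the matching strategy); the actual O2 and Jolly Roger systems are not public, and real deployments would work on speech-to-text output with fuzzier matching:

```python
# Hypothetical phrases commonly used in phone scams (illustrative only)
SUSPICIOUS_PHRASES = [
    "bank transfer",
    "banking issue",
    "family emergency",
    "gift card",
    "verification code",
]

def flag_suspicious(transcript: str) -> list[str]:
    """Return the suspicious phrases found in a call transcript."""
    text = transcript.lower()
    return [p for p in SUSPICIOUS_PHRASES if p in text]

hits = flag_suspicious("Madam, we need a bank transfer to fix your banking issue.")
# hits contains "bank transfer" and "banking issue"
```

In practice a flag like this would not end the call; it would steer the bot toward its stalling persona, since the whole point is to keep a confirmed scammer talking.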
The bot simulates a real conversation while adopting a confused yet polite tone, which encourages scammers to continue the conversation.
According to internal project data, Jolly Roger's AI systems divert calls for an average of 15 to 25 minutes per call, with some calls lasting over 40 minutes [2].
A form of defensive and creative AI
This strategy falls under a new category of artificial intelligence: proactive defensive AI, which not only acts preemptively (through filtering) but also disrupts malicious activities.
Rather than simply blocking calls, this AI uses the element of time to undermine the operational effectiveness of fraud networks. A scammer tied up with a fake victim cannot contact other targets—a powerful tool against mass attacks.
Tests conducted on decoy numbers showed that these AI systems reduced the number of active calls by 28% in certain automated scam networks [3].
What are the technical and ethical challenges?
Behind this mischievous idea, several important questions arise:
- Voice stereotypes: How can we prevent the “granny voice” from becoming a caricature or a twisted comedic trope?
- Consent and confidentiality: Conversations are sometimes recorded for analysis—but under what legal frameworks?
- Misuse: Could this type of technology be repurposed for malicious purposes (such as disinformation or harassment)?
Even if the purpose is defensive, the tools used rely on the same technologies that power voice deepfakes or manipulative AI. This therefore calls for responsible governance, even when used for humorous or vigilante purposes.
When humor meets cybersecurity
This project demonstrates that it is possible to repurpose artificial intelligence capabilities for creative cybersecurity applications without deploying heavy infrastructure or mobilizing industrial resources.
Instead of relying on surveillance, it draws on humor, subversion, and cunning—time-honored tactics that have been revitalized here by AI.
Beyond their tactical effectiveness, these bots also help raise public awareness about phone scams by explaining common fraudulent tactics and providing real-life examples to study.
Learn more
Be sure to check out this insightful article: Artificial Intelligence and the Future of Defense: The Decisive Battleground for Europe’s Autonomy
It explores how AI is redefining European defense in the face of cyber and hybrid threats, in a context similar to that of targeted phone scams—highlighting AI’s role as a tool for deterrence and citizen protection.
References
1. Federal Trade Commission (FTC). (2024). Consumer Sentinel Network Data Book.
https://www.ftc.gov/
2. Jolly Roger Telephone Co. (2025). Grandma Bots: Stats and Outcomes.
https://jollyrogertelephone.com/
3. MIT CSAIL. (2025). AI Counter-Scam Systems in Telephony: A Case Study.
https://csail.mit.edu/ai-voice-defense

