Technology Under Scrutiny: AI-Based Age Estimation
In recent years, artificial intelligence systems capable of estimating a person's age from their face have become increasingly common in retail settings. These so-called "smart" cameras analyze facial features in real time to estimate an age range, without necessarily identifying the individual. The stated goal is to facilitate age verification in sensitive contexts, such as the sale of alcohol or tobacco and access to gambling.
In France, a pilot program has recently been proposed for tobacco shops. The idea is to equip these establishments with AI-powered cameras to automatically authorize or block the purchase of products prohibited to minors. This project, supported by some industry stakeholders, raises fundamental questions: What data is collected? Is it stored? What rights do the people being filmed have?
The Case of Tobacco Shops: A Controversial Experiment
The pilot project was led by a group of tobacco retailers in collaboration with a French technology company. The proposed system relies on cameras installed in retail locations that can automatically detect a face, extract its morphological features, and then analyze them to estimate a probable age, without recording the image or identifying the individual.
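The privacy claim rests on the flow just described: the image is analyzed on the device, only an age band leaves the pipeline, and the frame itself is discarded. The sketch below illustrates that design in minimal form; the names (`estimate_age_band`, `checkout_age_gate`), the confidence threshold, and the stubbed model output are all hypothetical, standing in for a real face-detection and age-regression model that the source does not specify.

```python
from dataclasses import dataclass

# Hypothetical age bands a checkout system might act on.
UNDER_18, UNCERTAIN, OVER_18 = "under_18", "uncertain", "over_18"

@dataclass
class Decision:
    band: str         # the only output that leaves the device
    confidence: float

def estimate_age_band(frame: bytes) -> Decision:
    """Placeholder for the on-device model: a real system would detect a
    face in the frame and regress an age from its morphological features."""
    estimated_age = 22.0  # stub standing in for the model's prediction
    confidence = 0.80     # stub standing in for the model's self-reported confidence
    if confidence < 0.60:
        return Decision(UNCERTAIN, confidence)
    return Decision(OVER_18 if estimated_age >= 18.0 else UNDER_18, confidence)

def checkout_age_gate(frame: bytes) -> str:
    """Return an action for the point of sale; the raw image never persists."""
    decision = estimate_age_band(frame)
    del frame  # the captured image is discarded; only the age band remains
    if decision.band == OVER_18:
        return "authorize"
    return "require_id_check"  # human fallback for minors or uncertain cases
```

Even in this idealized form, the CNIL's objection applies: the facial analysis step itself constitutes biometric processing, regardless of what is stored afterward.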
This type of system claims to comply with the GDPR, arguing that it does not involve facial recognition but only age analysis for preventive purposes. However, as soon as the project was announced, several digital rights organizations expressed concerns about the widespread adoption of algorithmic video surveillance.
The CNIL’s Position: An Objection Grounded in Law and Ethics
In a statement released in early July 2025, the French Data Protection Authority (CNIL) strongly opposed this pilot program. Its argument is based on three main points:
- Disproportionate biometric processing: even without identification, automated facial analysis constitutes the processing of sensitive data, which is subject to strict rules under European law.
- Risk of sliding toward a surveillance society: the CNIL fears that AI-based monitoring systems will become the norm in everyday settings such as local stores.
- Insufficient safeguards for individual rights: difficulty in obtaining informed consent, inability to verify whether data is processed locally or remotely, and a lack of transparency regarding the algorithms used.
In summary, the institution believes that this type of use does not comply with the data minimization principle enshrined in the GDPR and could set a problematic precedent.
The technical and ethical limitations of these devices
Beyond the legal framework, technical issues arise. AI-based age estimation is subject to numerous biases related to image quality, lighting, the subject's posture, and, above all, the composition of the training datasets. Studies have shown that these systems are less reliable for certain age groups or ethnic groups, with error margins that can sometimes be significant [1].
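Disparities of this kind are typically measured by comparing an error metric, such as mean absolute error, across demographic groups. The sketch below shows that comparison on entirely synthetic data (the sample values and group labels "A" and "B" are invented for illustration; real audits such as Gender Shades rely on large labeled benchmarks).

```python
from statistics import mean

# Synthetic (true_age, predicted_age, group) samples -- illustration only.
samples = [
    (16, 21, "A"), (17, 19, "A"), (25, 24, "A"),
    (16, 17, "B"), (17, 17, "B"), (25, 26, "B"),
]

def mae_by_group(data):
    """Mean absolute error of the age estimates, broken down per group."""
    errors = {}
    for true_age, pred_age, group in data:
        errors.setdefault(group, []).append(abs(true_age - pred_age))
    return {group: mean(errs) for group, errs in errors.items()}

errors = mae_by_group(samples)
# A noticeably larger MAE for one group signals the kind of disparity
# the cited studies report.
```

Note that for age gating, where the errors fall matters as much as their size: overestimating a 16-year-old by five years waves a minor through, while the same error on a 25-year-old is harmless.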
Furthermore, even if faces are not recorded, the mere act of capturing an image and processing it automatically without direct human interaction calls into question the concept of consent. In a tobacco shop, can a minor customer truly understand that they are being subjected to algorithmic processing? And what assurances do they have that their data is not being used for other purposes?
What alternatives are there for verifying age without infringing on civil liberties?
The CNIL does not object to the principle of age verification, but rather to the proposed method. It points out that non-biometric alternatives exist, such as:
- presentation of identification in case of doubt,
- anonymous electronic devices that verify age without facial recognition,
- awareness and training campaigns for merchants.
More broadly, it calls for prioritizing solutions that are proportionate, transparent, and respectful of individual freedoms, in accordance with the European legal framework. AI can play a role, but only on the condition that it does not become a tool for widespread surveillance.
A decision that raises questions about the commercial use of AI
Beyond this specific case, the CNIL’s opinion is part of a broader trend toward greater vigilance regarding the commercial uses of artificial intelligence. The rise of smart cameras, profiling algorithms, and emotional recognition technology is prompting regulators to clarify the boundaries between useful innovation and violations of fundamental rights.
As the European Union finalizes the implementation of the Artificial Intelligence Act (AI Act), this decision sends a strong message: any use of AI in the public or commercial sphere must comply with the principles of purpose limitation, proportionality, and human oversight.
Learn more
Check out our blog for 7 Ethical Principles for Trustworthy Artificial Intelligence, a thoroughly researched article that explores the foundations of trustworthy AI, directly related to the European AI Act and the concerns raised by the CNIL.
References
1. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research. https://proceedings.mlr.press/v81/buolamwini18a.html

