The emergence of generative artificial intelligence, capable of interacting in natural language, is profoundly transforming our relationship with technology. These conversational tools are becoming everyday assistants, whether for work, learning, creating, or staying informed. But as their adoption accelerates, a major ethical question arises: what happens to the data we entrust to them?
The default practice of systematically recording conversations, whether to improve models or personalize responses, raises privacy concerns. In this context, Google recently announced the introduction of a temporary chat mode in Gemini, its AI assistant based on the Gemini 1.5 models. This feature allows users to interact without their messages being stored or used to train the models.
This new tool, though still modest in scope, is a symbolic step toward developing artificial intelligence that better respects privacy.
Privacy and Generative AI: Striking the Right Balance
By default, most current chatbots collect and store the data exchanged with users. This information can be used for several purposes: continuous model improvement, debugging, service personalization, and even behavioral analysis in some cases.
This practice is raising growing concerns, particularly in sensitive sectors (healthcare, education, business), where users do not want their queries analyzed or stored. Recent regulations, such as the GDPR in Europe and the AI Act currently being adopted, impose stronger requirements on developers of AI systems regarding transparency, informed consent, and the right to be forgotten.
How does the temporary chat feature work in Gemini?
This feature, which can be enabled in the Gemini Assistant settings, allows you to start a conversation in which none of the messages are saved in the chat history. Furthermore, the messages you send are not used to improve the models, even in aggregated or anonymized form [1].
This feature is similar to incognito mode, although that term is not officially used. A specific icon (a mask or a padlock, depending on the interface) indicates that the conversation is temporary. Once the session ends, the data is deleted from Gemini’s servers.
Please note that certain advanced features (access to history, context restoration, personalized suggestions) are disabled in this mode, in accordance with the principle of data minimization.
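The behavior described above can be illustrated with a minimal sketch: a session keeps messages in memory only for in-conversation context, then discards everything when it ends. Note that this is purely an illustrative model of the principle of data minimization; the class and method names (`EphemeralSession`, `send`, `end`) are hypothetical and do not reflect Google's actual implementation or any Gemini API.

```python
from dataclasses import dataclass, field

@dataclass
class EphemeralSession:
    """Illustrative model of a temporary chat: messages live in memory only."""
    _messages: list[str] = field(default_factory=list)
    active: bool = True

    def send(self, text: str) -> None:
        if not self.active:
            raise RuntimeError("Session has ended; no new messages accepted.")
        # Kept only for the duration of the session, so replies stay in context.
        self._messages.append(text)

    def context(self) -> list[str]:
        # In-session context is available while the conversation is open...
        return list(self._messages)

    def end(self) -> None:
        # ...but everything is discarded once the session closes: no history,
        # no personalization, nothing retained for training.
        self._messages.clear()
        self.active = False

session = EphemeralSession()
session.send("A sensitive one-time question")
print(session.context())  # context exists during the session
session.end()
print(session.context())  # empty: nothing survives the session
```

The key design point mirrors the trade-off discussed below: because nothing persists past `end()`, features that depend on history (context restoration, personalized suggestions) cannot work in this mode.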
Toward a New Balance Between Service and Privacy
The introduction of this feature reflects changing user habits and expectations. It is designed for those who want to ask a sensitive question, perform a one-time search, or test out an idea without linking it to their Google account.
It also raises the question of the trade-off between personalization and privacy. Indeed, the more a model knows about the user, the more it can provide contextualized and tailored responses. Conversely, a one-time interaction is often perceived as less seamless, because the AI “forgets” everything between requests.
This tension encourages us to view AI as a user-controlled space for dialogue, with adaptive privacy settings, rather than simply as a response machine.
Limitations and prospects of a system that is still incomplete
While this option represents a step toward greater algorithmic transparency, it does not resolve all issues. For example, there is no guarantee that data is completely absent from technical logs or server-side temporary caches, unless an independent audit is conducted.
Furthermore, manually enabling this mode requires constant vigilance from the user, which can reduce its effectiveness in practice. It would make sense for these options to be enabled by default, or at least offered upon first use.
Finally, this feature is currently limited to Gemini in certain regions and on certain browsers, which restricts its reach. However, it is part of a broader trend: that of ephemeral AI, which runs locally or anonymously and allows users to choose whether their activity is traceable.
AI and Privacy: Toward a New Ethical Standard?
The introduction of temporary conversations on Gemini marks a significant shift in the design of artificial intelligence interfaces. Following a phase of optimization driven by the accumulation of massive amounts of data, the industry is now shifting toward models that better respect individual privacy. This trend is still limited, but it could eventually lead to new standards, where users regain control over their interactions with machines. It is likely that within a few years, AI assistants will offer several “conversation modes” by default (personalized, temporary, local, etc.), each with its own level of privacy and functional implications.
Learn more
In the same vein, check out our article: Gemini in the Apple Ecosystem: Toward a Smarter Siri, Made by Google – aivancity blog
There, you’ll learn how Google is seeking to integrate its AI models into third-party environments while rethinking issues of data control and security.
References
1. Google. (2025). How temporary chats work in Gemini.
https://support.google.com/gemini/temporary-mode

