An unexpected alliance between two rival giants
Apple and Google have long embodied an iconic rivalry in the tech world. One advocates for vertical control over the user experience, while the other favors a more open approach focused on online services. But in June 2025, at its WWDC conference, Apple surprised everyone by announcing the integration of Gemini, the generative AI model developed by Google, into certain advanced features of Siri.
This unprecedented partnership marks a strategic shift. As the race to develop artificial intelligence accelerates, Apple has chosen to rely in part on external AI to boost the performance of its voice assistant, while maintaining strict privacy standards. It is an unexpected alliance, but one that reflects the ongoing realignments within the AI ecosystem.
Gemini in Siri: A Hybrid Model Designed for the User
The new architecture announced by Apple is based on a hybrid model:
- For simple queries and device-related tasks, processing remains local, using the Apple Intelligence models built into the devices;
- For more complex tasks (document analysis, text drafting, detailed planning, etc.), users can choose to run them in the cloud using Gemini.
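The routing logic described above can be sketched in a few lines. This is a conceptual illustration only, not Apple's actual implementation: the task categories, function names, and the default-to-local fallback are assumptions for the example.

```python
# Conceptual sketch (not Apple's implementation): routing a request between
# on-device processing and a cloud model. Task categories are illustrative.

LOCAL_TASKS = {"set_timer", "toggle_setting", "play_music", "send_message"}
CLOUD_TASKS = {"document_analysis", "text_drafting", "trip_planning"}

def route_request(task: str, cloud_opt_in: bool) -> str:
    """Return 'on-device' or 'cloud' for a given task type."""
    if task in LOCAL_TASKS:
        return "on-device"   # simple, device-related: stays local
    if task in CLOUD_TASKS and cloud_opt_in:
        return "cloud"       # complex task, and the user has opted in
    return "on-device"       # default: never leave the device

print(route_request("set_timer", cloud_opt_in=True))           # on-device
print(route_request("document_analysis", cloud_opt_in=True))   # cloud
print(route_request("document_analysis", cloud_opt_in=False))  # on-device
```

The key design point is that the cloud path is opt-in and the fallback is always local, matching the user-choice model Apple describes.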
This integration will initially be available to users of the iPhone 16 Pro and Macs equipped with Apple Silicon chips, as part of the Apple Intelligence beta program scheduled for late 2025. The use cases include:
- writing emails and smart summaries,
- searching documents with semantic understanding,
- generating complex, context-aware suggestions (travel, schedules, messages).
Requests processed by Gemini will be anonymized, temporarily stored, and not linked to the user’s Apple ID, in line with the company’s longstanding commitment to privacy.
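One way such anonymization could work is sketched below. This is purely illustrative: the field names (`apple_id`, `device_serial`) and the one-time token mechanism are hypothetical, not Apple's actual schema or protocol.

```python
# Illustrative sketch: stripping account identifiers from a request before
# it is forwarded to a cloud model, and attaching a one-time session token
# so the request cannot be linked back to the user's account.
# Field names are hypothetical, not Apple's actual schema.
import uuid

def anonymize(request: dict) -> dict:
    """Remove account-linked fields and attach an ephemeral token."""
    redacted = {k: v for k, v in request.items()
                if k not in {"apple_id", "device_serial"}}
    redacted["session_token"] = str(uuid.uuid4())  # not tied to the account
    return redacted

req = {"apple_id": "user@example.com", "device_serial": "ABC123",
       "prompt": "Summarize this document"}
out = anonymize(req)
assert "apple_id" not in out and out["prompt"] == req["prompt"]
```

A random per-request token preserves the ability to correlate messages within one exchange while keeping them unlinkable to an account over time.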
Why is Apple working with Gemini?
This decision is based on both technical and strategic objectives:
- Gemini 1.5 Pro and 1.5 Flash have established themselves as high-performance models for conversational and context-aware tasks, with context windows expanded to 1 million tokens, offering better understanding of long prompts [1].
- Although Apple is well ahead in terms of local optimization, it does not yet have a large-scale model versatile enough to compete with GPT-4o or Claude 3 Opus.
- This partnership allows Apple to quickly address certain gaps without delaying its rollout schedule for Apple Intelligence.
It is also a strategic choice aimed at flexibility: rather than developing everything in-house, Apple prefers to adopt a modular approach and select technical partners for certain components.
The Challenges of Controlled Integration
Integrating AI developed by a long-standing competitor naturally raises several questions:
- Dependency: Even though the service is optional, Apple becomes partially dependent on a model it does not control. This raises questions about service continuity, future pricing, and control over updates.
- Privacy: Apple claims that data sent to Gemini is filtered and processed without long-term storage, but some observers are calling for greater transparency regarding the technical processes involved.
- Marketing positioning: The brand will need to manage the perception of a "Google-powered" assistant, which could undermine its image of independence.
Apple nevertheless intends to strictly regulate this integration, giving users an explicit choice and allowing them to completely disable cloud access.
Potential impacts on the AI ecosystem
This partnership marks a significant shift in the approach taken by major platforms. We are moving from a model in which each player develops its own closed-loop assistant to a model of technological modularity, in which multiple AI systems can coexist within the same environment.
For Google, this is also a strategic victory: Gemini, already used in Android, Workspace, and YouTube, is now making its way into iOS, solidifying its position as the go-to general-purpose model.
In the longer term, this trend could:
- promote customized personal assistants built using AI components from various providers;
- strengthen interoperability, with models designed to integrate into a variety of interfaces;
- reshuffle the deck in the digital assistant market, which is once again becoming central to the user experience.
A technological advancement or a strategic shift?
Apple’s decision cannot be reduced to a mere technical optimization. It reflects a profound shift in the brand’s strategic positioning, as it accepts that it can no longer control everything in exchange for an immediate improvement in quality.
It also raises a new question: will the future of virtual assistants be shaped by the convergence of proprietary and open models, local and cloud-based systems, and Apple and non-Apple platforms? By opening the door to Gemini, Apple isn’t just reimagining Siri—it’s ushering in a new era of collaboration among major AI platforms.
Learn more
To learn more about new forms of AI-driven visual creation, particularly in the field of video, read the article: Kling AI 2.0: A Revolution in AI-Powered Video Generation
That article describes the advancements of Kling AI 2.0 in generating videos from complex instructions, a development that complements this analysis of the partnership between Apple and Google.
References
1. Google DeepMind (2025). *Gemini 1.5 Pro Technical Report*. https://deepmind.google/research/gemini-15

