A new frontier for onboard artificial intelligence
What if robots became truly autonomous, without relying on a cloud connection? On June 24, 2025, Google DeepMind unveiled Gemini Robotics On-Device, an embedded version of its Gemini artificial intelligence model, designed to operate directly on robotic machines. This technological advancement marks a strategic step forward in the field of adaptive robotics, with a key focus on local responsiveness, free from network latency.
This launch is part of a broader trend toward the miniaturization and optimization of foundation models, which can be run locally on hardware platforms with limited resources while maintaining a high level of cognitive performance.
What are the applications of AI embedded in robots?
Gemini Robotics On-Device has been designed to power general-purpose robots, enabling them to understand, adapt to, and interact seamlessly with their physical environment. Unlike traditional cloud-dependent robotics systems, this AI operates in real time, even without an internet connection.
Specific use cases include:
- Handling non-rigid or unstable objects, such as pouring water without spilling it or folding clothes;
- Navigation in a dynamic environment, with continuous adaptation to changing obstacles;
- Performing complex tasks in a home setting, such as loading a dishwasher, sorting items, or reorganizing a space;
- Applications in industrial or medical settings where network latency is critical (assistance or inspection robots).
According to DeepMind researchers, Gemini Robotics combines motor planning, visual understanding, spatial reasoning, and closed-loop adaptation—in other words, the robot perceives, understands, and adjusts its actions without supervision.
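The closed-loop behavior described above can be illustrated with a minimal sketch. All names here are hypothetical placeholders, not the actual Gemini Robotics API: the point is only the structure of the loop, in which perception feeds planning, planning feeds actuation, and the result is perceived again.

```python
# Conceptual sketch of a closed perception-action loop.
# Every class and function name here is a hypothetical illustration,
# NOT the actual Gemini Robotics interface.
from dataclasses import dataclass, field

@dataclass
class Observation:
    joint_angles: list = field(default_factory=lambda: [0.0, 0.0])

def perceive():
    """Stand-in for visual understanding: returns the current robot state."""
    return Observation()

def plan(obs, goal):
    """Stand-in for motor planning: computes the correction toward a target.
    A real vision-language-action model would map (observation, goal) to
    motor commands; here we just steer each joint toward a fixed target."""
    target = [1.0, 0.5]
    return [t - a for t, a in zip(target, obs.joint_angles)]

def act(obs, action):
    """Stand-in for actuation: applies half of the planned correction,
    mimicking incremental micro-adjustments rather than one big jump."""
    obs.joint_angles = [a + 0.5 * d for a, d in zip(obs.joint_angles, action)]
    return obs

# Closed loop: perceive -> plan -> act, repeated until convergence.
obs = perceive()
for _ in range(20):
    obs = act(obs, plan(obs, goal="pick up the cup"))

print([round(a, 3) for a in obs.joint_angles])  # converges to [1.0, 0.5]
```

Because each iteration re-plans from the freshly observed state, the loop keeps correcting itself even if an action under- or over-shoots, which is the essence of closed-loop adaptation.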
A technical breakthrough: the fusion of language and movement
One of the key features of this embedded version is its ability to interpret natural language commands and translate them into coordinated physical actions. This is made possible by advanced integration between the Gemini language models and the motor control engines (motion planning).
In a demonstration, a robot equipped with Gemini On-Device was able to carry out an instruction as vague as “clean up this mess” and determine the gestures needed to pick up, sort, and put away objects in an unfamiliar environment.
This fusion of language, vision, and action paves the way for a new generation of versatile robots capable of adapting to unscripted tasks in real-world contexts.
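A rough sketch can make this language-to-action pipeline concrete. The command vocabulary, primitive names, and decomposition logic below are all invented for illustration, since the actual Gemini Robotics interface is not public; the sketch only shows the idea of expanding a vague instruction into an ordered sequence of motor primitives.

```python
# Hypothetical sketch: mapping a natural-language command to motor primitives.
# All names are illustrative assumptions, NOT the Gemini Robotics API.

# A tiny "skill library" of motor primitives the planner can compose.
PRIMITIVES = {
    "pick": lambda obj: f"grasp({obj})",
    "sort": lambda obj: f"classify_and_bin({obj})",
    "place": lambda obj: f"release({obj})",
}

def interpret(command, scene_objects):
    """Stand-in for the language model: expands a vague instruction into
    an ordered list of (primitive, object) steps for each visible object."""
    if "clean up" in command.lower():
        steps = []
        for obj in scene_objects:
            steps += [("pick", obj), ("sort", obj), ("place", obj)]
        return steps
    raise ValueError(f"Unsupported command: {command!r}")

def execute(steps):
    """Stand-in for the motion-planning layer: renders each step as a call."""
    return [PRIMITIVES[name](obj) for name, obj in steps]

plan = interpret("Clean up this mess", scene_objects=["cup", "sock"])
print(execute(plan))
# -> ['grasp(cup)', 'classify_and_bin(cup)', 'release(cup)',
#     'grasp(sock)', 'classify_and_bin(sock)', 'release(sock)']
```

In a real system the hand-written `interpret` rule would be replaced by the model's own grounding of the instruction in the observed scene, which is what lets it handle unfamiliar environments.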
Toward more autonomous, efficient, and safe robotics
The decision to run AI without using the cloud opens up several strategic opportunities:
- Reduced latency, which is crucial for real-time micro-adjustments in motor tasks;
- Enhanced security and confidentiality, as data remains stored locally;
- Robustness under extreme conditions (no network, jamming, energy constraints);
- Reduced energy dependence on the cloud, a key challenge for more sustainable AI.
According to a study by Boston Consulting Group (2024), the global market for embedded robotics will reach $92 billion by 2027, driven by the logistics, healthcare, defense, and personal assistance sectors [1].
Toward democratization: what are the limits?
While Gemini Robotics’ technical performance has been widely praised, its deployment raises critical questions. What safeguards should be put in place to allow robots to make autonomous decisions without human supervision? How can we prevent the fragmentation of applications based on hardware capabilities? What are the ethical standards for AI operating locally?
For now, Gemini On-Device is available only to select partners and in experimental settings. However, its potential for widespread adoption in the coming years could accelerate the transition to ubiquitous, unobtrusive robotics that are seamlessly integrated into our daily lives.
References
1. Boston Consulting Group. (2024). The Rise of Embedded AI in Robotics.
https://www.bcg.com/publications/2024/embedded-ai-robotics

