A New Frontier for Embedded Artificial Intelligence
What if robots became truly autonomous, without relying on a cloud connection? On June 24, 2025, Google DeepMind unveiled Gemini Robotics On-Device, an on-device version of its Gemini artificial intelligence model, designed to run directly on a robot's own hardware. This advancement marks a strategic breakthrough in the field of adaptive robotics, with one key goal: local responsiveness, free of network latency.
This launch is part of a broader trend toward the miniaturization and optimization of foundation models, which can be run locally on hardware platforms with limited resources while maintaining a high level of cognitive performance.
What are the applications of AI embedded in robots?
Gemini Robotics On-Device was designed to power general-purpose robots and enable them to understand, adapt to, and interact seamlessly with their physical environment. Unlike traditional cloud-dependent robotics systems, this AI operates in real time, even without an internet connection.
Among the practical use cases tested:
- Handling non-rigid or unstable objects, such as pouring water without spilling it or folding clothes;
- Navigation in a dynamic environment, with continuous adaptation to moving obstacles;
- Performing complex tasks in the home, such as loading a dishwasher, sorting items, or reorganizing a space;
- Applications in industrial or medical settings where network latency is critical (assistive or inspection robots).
According to researchers at DeepMind, Gemini Robotics combines motor planning, visual understanding, spatial reasoning, and closed-loop adaptation—in other words, the robot perceives, understands, and adjusts its actions without supervision.
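The closed-loop adaptation described above can be illustrated with a minimal sketch: the robot observes, compares the observation to its goal, acts, and observes again. Everything here (the `Observation` class, the proportional controller, the simulated one-dimensional reach task) is invented for illustration and does not reflect DeepMind's actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical sketch of a closed-loop perceive-adjust cycle.
# All names and the 1-D reach task are illustrative, not DeepMind APIs.

@dataclass
class Observation:
    gripper_to_target: float  # remaining distance to the target (simulated)

def perceive(position: float, target: float) -> Observation:
    """Simulated perception step: measure the remaining distance."""
    return Observation(gripper_to_target=target - position)

def control_loop(start: float, target: float, gain: float = 0.5,
                 tolerance: float = 0.01, max_steps: int = 100) -> float:
    """Move toward the target, re-observing after every action (closed loop)."""
    position = start
    for _ in range(max_steps):
        obs = perceive(position, target)          # perceive
        if abs(obs.gripper_to_target) < tolerance:
            break                                  # goal reached within tolerance
        position += gain * obs.gripper_to_target  # adjust proportionally
    return position

final = control_loop(start=0.0, target=1.0)
print(round(final, 3))  # converges close to 1.0
```

The point of the sketch is the loop structure, not the controller: each action is conditioned on a fresh observation, which is what lets the system correct for drift or moving obstacles without external supervision.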
A technical breakthrough: the fusion of language and movement
One of the key features of this embedded version is its ability to interpret natural language commands and translate them into coordinated physical actions. This is made possible by advanced integration between the Gemini language models and the motion planning engines.
During a demonstration, a robot equipped with Gemini On-Device was able to carry out an instruction as vague as “clean up this mess” and determine the necessary actions to pick up, sort, and put away objects in an unfamiliar environment.
This integration of language, vision, and action makes it possible to develop a new generation of versatile robots capable of adapting to unscripted tasks in real-world settings.
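One way to picture this language-to-action translation is as a decomposition step: a vague command is mapped to a sequence of lower-level skill primitives. The skill vocabulary and matching rule below are entirely invented for illustration; Gemini's actual planning interface is not public.

```python
# Toy sketch: decompose a vague instruction into hypothetical action primitives.
# The SKILLS table and phrase matching are invented for illustration only.

SKILLS = {
    "clean up": ["pick_up", "sort", "put_away"],
    "pour": ["grasp_container", "tilt", "stop_at_level"],
}

def plan(instruction: str) -> list[str]:
    """Return the primitive skills whose trigger phrase appears in the command."""
    instruction = instruction.lower()
    for phrase, primitives in SKILLS.items():
        if phrase in instruction:
            return primitives
    return []  # unknown instruction: no plan produced

print(plan("Clean up this mess"))  # → ['pick_up', 'sort', 'put_away']
```

In the real system this mapping is learned rather than looked up in a table, which is what allows instructions and objects never seen during scripting to be handled; the sketch only conveys the shape of the pipeline (language in, action primitives out).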
Toward more autonomous, energy-efficient, and secure robotics
The decision to run AI without using the cloud opens up several significant opportunities:
- Reduced latency, which is crucial for real-time fine-tuning in motor tasks;
- Enhanced security and privacy, as data remains on-premises;
- Reliability under extreme conditions (no network coverage, interference, power constraints);
- Reduced energy dependence on the cloud, a key step toward more sustainable AI.
According to a study by the Boston Consulting Group (2024), the global market for embedded robotics will reach $92 billion by 2027, driven by the logistics, healthcare, defense, and personal care sectors [1].
A trend toward democratization to watch: what are its limits?
While Gemini Robotics’ technical performance has been widely praised, its deployment raises critical issues. What safeguards are needed to regulate the decision-making autonomy of robots operating without human supervision? How can we prevent fragmentation of use based on hardware capabilities? What ethical standards should apply to AI systems operating locally?
For now, Gemini On-Device is limited to select partners and experimental environments. But its potential for widespread adoption in the coming years could accelerate the transition toward ubiquitous, unobtrusive robotics that are seamlessly integrated into our daily lives.
References
1. Boston Consulting Group. (2024). The Rise of Embedded AI in Robotics.
https://www.bcg.com/publications/2024/embedded-ai-robotics

