Technological Advances in AI · AI & Robotics

AI without the cloud: Gemini Robotics is transforming embedded robotics

What if robots became truly autonomous, without relying on a cloud connection? On June 24, 2025, Google DeepMind unveiled Gemini Robotics On-Device, an on-device version of its Gemini artificial intelligence model, designed to operate directly on robotic machines. This technological advancement marks a strategic breakthrough in the field of adaptive robotics, with one key goal: local responsiveness, without network latency.

This launch is part of a broader trend toward the miniaturization and optimization of foundation models, which can be run locally on hardware platforms with limited resources while maintaining a high level of cognitive performance.

Gemini Robotics On-Device was designed to power general-purpose robots and enable them to understand, adapt to, and interact seamlessly with their physical environment. Unlike traditional cloud-dependent robotics systems, this AI operates in real time, even without an internet connection.

Among the practical use cases tested:

  • Handling non-rigid or unstable objects, such as pouring water without spilling it or folding clothes;
  • Navigation in a dynamic environment, with continuous adaptation to moving obstacles;
  • Performing complex tasks in the home, such as loading a dishwasher, sorting items, or reorganizing a space;
  • Applications in industrial or medical settings where network latency is critical (assistive or inspection robots).

According to researchers at DeepMind, Gemini Robotics combines motor planning, visual understanding, spatial reasoning, and closed-loop adaptation—in other words, the robot perceives, understands, and adjusts its actions without supervision.
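The perceive/understand/adjust cycle described above can be sketched as a minimal closed control loop. This is an illustrative assumption, not DeepMind's actual architecture: the function names, the proportional controller, and the numeric values are all invented for the example.

```python
# Hedged sketch of a closed-loop (sense -> plan -> act) cycle, assuming a
# toy 1-D task: move a gripper toward a target at position 0.
# Everything here is illustrative; the real model uses learned policies.

def perceive(position_cm: float) -> float:
    """Sense: return the current error (distance to target, in cm).
    A real robot would fuse camera and joint-encoder readings here."""
    return position_cm  # target is at 0

def plan(error_cm: float, gain: float = 0.5) -> float:
    """Plan: a simple proportional correction — move a fraction of the error."""
    return gain * error_cm

def act(position_cm: float, correction_cm: float) -> float:
    """Act: apply the correction (a real system would command motors)."""
    return position_cm - correction_cm

def control_loop(start_cm: float = 10.0, tolerance_cm: float = 0.1,
                 max_steps: int = 100) -> int:
    """Run the loop until the error is within tolerance; return steps taken."""
    position = start_cm
    for step in range(max_steps):
        error = perceive(position)
        if abs(error) <= tolerance_cm:
            return step
        position = act(position, plan(error))
    return max_steps

print(control_loop())  # converges in 7 steps with these toy parameters
```

The point of running this loop on-device rather than in the cloud is that each sense-plan-act iteration completes without a network round trip, which is what makes fine motor corrections feasible in real time.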

One of the key features of this embedded version is its ability to interpret natural language commands and translate them into coordinated physical actions. This is made possible by advanced integration between the Gemini language models and the motion planning engines.

During a demonstration, a robot equipped with Gemini Robotics On-Device was able to carry out an instruction as vague as “clean up this mess” and determine the necessary actions to pick up, sort, and put away objects in an unfamiliar environment.

This integration of language, vision, and action makes it possible to develop a new generation of versatile robots capable of adapting to unscripted tasks in real-world settings.
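To make the idea concrete, here is a deliberately naive sketch of decomposing a vague command into ordered motor primitives. The skill table, primitive names, and keyword matching are all assumptions invented for illustration; the actual system does this with learned vision-language-action weights, not a lookup table.

```python
# Hedged sketch: mapping a free-form command to a plan of motor primitives.
# The SKILLS table and primitive names are illustrative assumptions only.

# Toy skill library: each high-level intent expands to a primitive sequence.
SKILLS = {
    "clean up": ["pick", "sort", "place"],
    "fold": ["pick", "fold", "place"],
}

def interpret(command: str) -> list[str]:
    """Match a free-form command against known intents; return an action plan,
    or an empty plan if no intent matches."""
    lowered = command.lower()
    for intent, plan in SKILLS.items():
        if intent in lowered:
            return plan
    return []

print(interpret("Clean up this mess"))  # ['pick', 'sort', 'place']
print(interpret("do a backflip"))       # [] — no known skill applies
```

The gap between this sketch and the real system is precisely what the article describes: the demonstrated robot handled an *unfamiliar* environment, which no fixed lookup table can do.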

The decision to run AI without using the cloud opens up several significant opportunities:

  • Reduced latency, which is crucial for real-time fine-tuning in motor tasks;
  • Enhanced security and privacy, as data remains on-premises;
  • Reliability under extreme conditions (no network coverage, interference, power constraints);
  • Reduced energy dependence on the cloud, a key challenge for more sustainable AI.

According to a study by the Boston Consulting Group (2024), the global market for embedded robotics will reach $92 billion by 2027, driven by the logistics, healthcare, defense, and personal care sectors.¹

While Gemini Robotics’ technical performance has been widely praised, its deployment raises critical issues. What safeguards are needed to regulate the decision-making autonomy of robots operating without human supervision? How can we prevent fragmentation of use based on hardware capabilities? What ethical standards should apply to AI systems operating locally?

For now, Gemini Robotics On-Device is limited to select partners and experimental environments. But its potential for widespread adoption in the coming years could accelerate the transition toward ubiquitous, unobtrusive robotics that are seamlessly integrated into our daily lives.

1. Boston Consulting Group. (2024). The Rise of Embedded AI in Robotics.
https://www.bcg.com/publications/2024/embedded-ai-robotics


