
Google DeepMind’s new AI models give robots reasoning, planning, and access to online knowledge.
Motion transfer enables one robot’s skills to work across multiple platforms.
These models could revolutionize industries from manufacturing to home care.
Robotics is entering a new era with the launch of Google DeepMind’s new AI models, Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. These models are designed to help robots think, plan, and act more like humans.
Unlike older systems that could only follow direct commands, these new models allow robots to make decisions, create step-by-step plans, and even use knowledge from the internet to complete complex tasks. This change could transform how robots are used in industries, homes, and public spaces.
DeepMind’s latest AI models are built to go beyond simple instructions. Gemini Robotics 1.5 is a vision-language-action (VLA) model: it takes in what the robot sees along with natural-language instructions and turns them into physical actions, thinking through its approach before it moves. For example, if instructed to “pack items for rainy weather,” the model does not just grab random objects but reasons about what makes sense for the situation.
Gemini Robotics-ER 1.5 adds something even more powerful: embodied reasoning. This means the robot can break a complex job into smaller tasks, call digital tools such as web search, and adjust its plan if something unexpected happens. Instead of getting stuck when things go wrong, the robot can rethink its actions and still reach the goal.
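To make that division of labour concrete, here is a minimal sketch of how an embodied-reasoning planner and a vision-language-action executor could be wired together. The control loop and every name in it (plan_subtasks, execute_step, run_task) are illustrative assumptions for this article, not DeepMind's published interface.

```python
# Illustrative sketch only: "er_model" stands for an embodied-reasoning
# planner and "vla_model" for a vision-language-action executor.
# None of these objects or methods belong to a real SDK.

def plan_subtasks(er_model, goal, scene_image):
    """Ask the reasoning model to break a goal into short physical steps."""
    prompt = f"Break this task into small, concrete steps: {goal}"
    return er_model.generate(prompt=prompt, image=scene_image)  # list of steps

def execute_step(vla_model, robot, step):
    """Hand one step to the action model, which drives the robot's motors."""
    actions = vla_model.generate_actions(instruction=step, image=robot.camera())
    return robot.run(actions)  # False if the step fails partway through

def run_task(er_model, vla_model, robot, goal, max_replans=3):
    """Plan, act, and replan from the current scene whenever a step fails."""
    for _ in range(max_replans + 1):
        steps = plan_subtasks(er_model, goal, robot.camera())
        if all(execute_step(vla_model, robot, step) for step in steps):
            return True  # every step succeeded
        # A step failed: loop back and ask the planner to rethink.
    return False
```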
One of the most exciting aspects of Google DeepMind’s AI models is the ability to connect online knowledge with physical actions. Traditional robots only relied on the data given directly to them. However, with Gemini Robotics, robots can search for answers online and use that knowledge in real life.
For example, when sorting recyclables, a robot could check online rules for a particular city to see how waste should be separated. If the robot is packing for a customer in a cold region, it could look up weather conditions and choose items accordingly. This mix of online information and real-world action makes robots far more useful and adaptable.
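As a rough illustration of that pattern, the snippet below asks a Gemini model to check local recycling rules with the Gemini API's Google Search grounding tool before proposing a sorting plan. The model ID, the city, and the prompt are placeholder assumptions for this example, and the robot-side execution is left out.

```python
# Sketch, not an official recipe: ground a sorting plan in live search
# results before acting. Model ID and prompt are assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed preview model ID
    contents=(
        "Look up the current recycling rules for Zurich, then explain, "
        "bin by bin, where a glass bottle, a pizza box, and a used "
        "battery should go."
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```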
A long-standing challenge in robotics has been that skills learned by one robot often cannot be used by another. A robot arm trained to stack boxes in a factory might not transfer those skills to a different machine with slightly different parts. The new AI models from DeepMind solve this with motion transfer.
Motion transfer allows robots to share learned skills across different platforms. A behavior trained on one robot can be reused on another, reducing the need to start training from scratch every time. This makes the technology more scalable and faster to apply across industries, from factories to healthcare robots and even household assistants.
DeepMind’s new AI models open the door to real-world applications that were previously too complex for robots. In manufacturing, robots can now carry out multi-step assembly tasks, detect mistakes, and correct them without needing human intervention.
In logistics and warehouses, robots can adapt to changing rules and customer demands. For example, if shipping rules change in one region, the robot can check updated information and adjust the sorting process.
At home, service robots can help with daily chores. Instead of just picking up clothes, they could sort laundry by color and fabric, place items in the correct bins, and even explain what they are doing in simple language to ensure trust and safety. In recycling, robots can follow local laws for waste separation with much higher accuracy, reducing contamination and making recycling more efficient.
Early demonstrations have shown robots separating laundry, packing items while considering the weather, and sorting objects with reasoning. These examples highlight how reasoning-driven planning combined with online knowledge leads to more reliable results.
DeepMind has made these models available through the Gemini API in Google AI Studio. Gemini Robotics-ER 1.5 is open to developers, while the Gemini Robotics 1.5 action model is currently limited to select partners, with broader access being introduced gradually. This means researchers, startups, and industries can experiment with the technology and build their own solutions using these models.
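For developers who want to try this, a minimal call through the Gemini API could look like the sketch below: an image of the scene plus a natural-language instruction, returning a text plan. The model ID and file name are assumptions for illustration; Google AI Studio lists the current identifiers.

```python
# Minimal sketch of querying the model through the Gemini API.
# Model ID and image file are placeholders for illustration.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("workbench.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed preview model ID
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "List the objects you can see and give a step-by-step plan for "
        "packing the items a person would need on a rainy day.",
    ],
)
print(response.text)
```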
Wider access to such powerful AI models encourages faster innovation. Developers can build new applications in healthcare, logistics, manufacturing, and home robotics without needing to create everything from scratch. This also creates a growing ecosystem of shared benchmarks, pre-built skills, and safer simulation tools that reduce the risks of real-world testing.
Despite the progress, challenges still exist. Robots still struggle with fine dexterity, such as folding paper or handling fragile objects like glass. They also require better tactile sensing and control at very small scales.
Safety remains one of the biggest concerns. Since these robots are capable of making their own decisions, it is important to ensure their actions are always predictable and safe, especially when working around humans. Another challenge is managing the balance between online information and real-world tasks. If the data online is inaccurate or biased, the robot could make mistakes.
Energy use, processing costs, and the speed of consulting online tools also need improvements to make these systems efficient enough for everyday use.
The release of Google DeepMind’s new AI models shows the future of robotics is not just about machines that move, but about machines that can think. Robots that can reason, plan, and adapt are closer to becoming general-purpose assistants rather than limited tools.
Industries such as manufacturing, healthcare, logistics, and household assistance are expected to be the first to feel the impact. Over time, as challenges in safety and dexterity are solved, robots powered by these AI models could become as common as smartphones or computers today.
The progress also reflects a broader trend in artificial intelligence known as world models. These models combine perception, simulation, and action in one system, making robots more capable of handling messy, unpredictable environments. This direction could define robotics research and industry growth for the next decade.
Final Thoughts
Google DeepMind’s Gemini Robotics 1.5 and Gemini Robotics-ER 1.5 represent a major step forward in robotics. By combining vision, language, reasoning, and online knowledge, these AI models allow robots to think through problems and act intelligently in the real world.
Although challenges remain in fine dexterity, safety, and real-world testing, the foundation has been set for a future where robots are reliable, adaptable, and able to work side by side with humans in industries, homes, and public spaces. DeepMind’s new AI models are not just improvements in code but a fundamental shift in how machines interact with the world.
Q1. What are Google DeepMind’s new AI models for robotics?
They are advanced models called Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, designed to help robots reason, plan, and act using vision, language, and online knowledge.
Q2. How are these AI models different from older robotics systems?
Older systems followed direct commands, but these new models can think ahead, adapt to unexpected changes, and use internet information to guide real-world actions.
Q3. What is motion transfer in robotics?
Motion transfer allows skills learned on one robot to be reused on another, reducing the need to train each robot separately and speeding up deployment.
Q4. Where can developers access DeepMind’s robotics models?
Developers can access parts of these AI models through the Gemini API in Google AI Studio, with broader availability expanding over time.
Q5. What challenges still remain for DeepMind’s robotics AI?
Robots still need improvements in fine dexterity, safety, accuracy of online knowledge use, and efficiency in energy and processing.