How Google built its Gemini robotics models



“We’d trained models to help robots with specific tasks and to understand natural language before, but this was a step change,” Carolina says. “The robot had never seen anything related to basketball, or this specific toy. Yet it understood something complex — ‘slam dunk the ball’ — and performed the action smoothly. On its first try.”

This all-rounder robot was powered by a Gemini Robotics model that is part of a new family of multimodal models for robotics. The models build upon Gemini 2.0 through fine-tuning with robot-specific data, adding physical action to Gemini’s multimodal outputs like text, video and audio. “This milestone lays the foundation for the next generation of robotics that can be helpful across a range of applications,” said Google CEO Sundar Pichai when announcing the new models on X.
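To make the idea of adding physical action as an output modality concrete, here is a minimal, purely hypothetical sketch of what a vision-language-action interface could look like. It is not Google's Gemini Robotics API; the class and field names (Observation, Action, VisionLanguageActionPolicy) are invented for illustration, and the stub simply shows the shape of the input (camera frame plus natural-language instruction) and output (a short chunk of low-level robot actions).

```python
# Hypothetical sketch of a vision-language-action (VLA) interface, not Google's
# actual Gemini Robotics API. A multimodal backbone is assumed to consume a
# camera image plus a natural-language instruction and emit low-level actions.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    image: bytes          # raw camera frame from the robot
    instruction: str      # e.g. "slam dunk the ball"


@dataclass
class Action:
    joint_deltas: List[float]  # per-joint position changes for one control step
    gripper_open: bool         # whether the gripper should be open at this step


class VisionLanguageActionPolicy:
    """Toy stand-in for a multimodal model fine-tuned to output robot actions."""

    def predict(self, obs: Observation) -> List[Action]:
        # A real model would run the multimodal backbone here; this stub just
        # returns a fixed one-step action chunk so the interface is runnable.
        return [Action(joint_deltas=[0.0] * 7, gripper_open=True)]


if __name__ == "__main__":
    policy = VisionLanguageActionPolicy()
    obs = Observation(image=b"", instruction="pick up the toy and dunk it")
    actions = policy.predict(obs)
    print(f"Predicted {len(actions)} action step(s); first: {actions[0]}")
```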

The Gemini Robotics models are highly dexterous, interactive and general, meaning they can drive robots to react to new objects, environments and instructions without further training. Helpful, given the team’s ambitions.

“Our mission is to build embodied AI to power robots that help you with everyday tasks in the real world,” says Carolina, whose fascination with robotics began with childhood sci-fi cartoons, fueled by dreams of automated chores. “Eventually, robots will be just another surface on which we interact with AI, like our phones or computers — agents in the physical world.”



