Teaching Bots Learn by Watching Human Behavior - Ep. 67

Robots following coded instructions to complete a task? Old school. Robots learning to do things by watching how humans do it? That’s the future. Stanford’s Animesh Garg and Marynel Vázquez shared their research in a talk on “Generalizable Autonomy for Robotic Mobility and Manipulation” at the GPU Technology Conference earlier this year. We caught up with them to learn more about generalizable autonomy: the idea that a robot should be able to observe human behavior and learn to imitate it in a way that applies across a variety of tasks and situations. Think of learning to cook by watching YouTube videos, or figuring out how to cross a crowded room by watching others do it.

About the Podcast

Explore how the latest technologies are shaping our world, from groundbreaking discoveries to transformative sustainability efforts. The NVIDIA AI Podcast shines a light on the stories and solutions behind the most innovative changes, helping to inspire and educate listeners. Every week, we bring you another 30-minute interview, building a real-time oral history of AI that has already garnered nearly 6.5 million listens and been acclaimed as one of the best AI and machine learning podcasts. Listen in and get inspired. More information: https://ai-podcast.nvidia.com/