Robots in Clutter: Learning to Understand Environmental Changes
Date and Time
Thursday, March 30, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
David Held
Host
Prof. Thomas Funkhouser
Robots today are confined to operating in relatively simple, controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that arise when robots are placed in the real world. I will show that we can train robots to handle these variations by modeling the causes behind visual appearance changes. If robots can learn how the world changes over time, they can be robust to the kinds of changes that objects often undergo. I will demonstrate this idea in the context of autonomous driving, showing how it can improve performance at every step of the robotic perception pipeline: object segmentation, tracking, and velocity estimation. I will also present recent work on learning to manipulate objects, using a similar framework of learning environmental changes. By learning how the environment can change over time, we can enable robots to operate in the complex, cluttered environments of our daily lives.
David Held is a post-doctoral researcher at U.C. Berkeley working with Pieter Abbeel on deep reinforcement learning for robotics. He recently completed his Ph.D. in Computer Science at Stanford University with Sebastian Thrun and Silvio Savarese, where he developed perception methods for autonomous vehicles. David has also worked as an intern on Google’s self-driving car team. Before Stanford, David was a researcher at the Weizmann Institute, where he worked on building a robotic octopus. He received a B.S. and M.S. in Mechanical Engineering from MIT and an M.S. in Computer Science from Stanford, for which he received the Best Master's Thesis Award from the Computer Science Department.