Recent years have seen impressive progress in robot control and perception, including adept manipulation, aggressive quadrotor maneuvers, dense metric map reconstruction, and real-time object recognition. The grand challenge in robotics today is to capitalize on these advances in order to enable autonomy at a higher level of intelligence. It is compelling to envision teams of autonomous robots in environmental monitoring, precision agriculture, construction and structure inspection, security and surveillance, and search and rescue.
In this talk, I will emphasize that many such applications can be addressed by thinking about how to coordinate robots in order to extract useful information about the environment. More precisely, I will formulate a general active estimation problem that captures the common characteristics of these scenarios. I will show how to manage the complexity of the problem over metric information spaces as planning horizons lengthen and robot teams grow. These results lead to computationally scalable, non-myopic algorithms with quantified performance for problems such as distributed source seeking and active simultaneous localization and mapping (SLAM).
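To make the setting concrete, here is a minimal sketch of an active information acquisition problem; the notation is illustrative rather than the talk's own. Robots with states \(x_t\) evolve under controls \(u_t\), and the goal is to choose controls that maximize the information their future measurements \(z_{1:T}\) carry about an unknown environment state \(y\):

\[
\max_{u_0,\dots,u_{T-1}} \; I\bigl(y \,;\, z_{1:T}\bigr) \quad \text{s.t.} \quad x_{t+1} = f(x_t, u_t), \qquad z_t \sim p(\,\cdot \mid x_t, y\,).
\]

In the linear-Gaussian (metric) case, the mutual information objective reduces to a deterministic function of the estimator's covariance matrices, which is one route to tractable long-horizon, multi-robot planning.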
I will then focus on acquiring information using both metric and semantic observations (e.g., object recognition). In this context, there are several new challenges, such as missed detections, false alarms, and unknown data association. To address them, I will model semantic observations via random sets and will discuss filtering with such models. A major contribution of our approach is a proof that the complexity of the filtering problem is equivalent to that of computing the permanent of a suitable matrix. This enables us to develop and experimentally validate algorithms for semantic localization, mapping, and planning on mobile robots, Google's Project Tango phone, and the KITTI visual odometry dataset.
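For reference, the permanent of an \(n \times n\) matrix \(A\) is

\[
\operatorname{per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i,\sigma(i)},
\]

i.e., the determinant without the alternating signs. Intuitively (a sketch, not the talk's own derivation), summing the semantic observation likelihood over all one-to-one data associations between detections and objects yields exactly such a sum over permutations, with \(A_{i,j}\) an individual detection likelihood. Although computing the permanent exactly is #P-complete, well-studied approximation schemes for nonnegative matrices make the resulting filters practical.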