Princeton Robotics Seminar

Princeton Robotics Seminar: Language as Robot Middleware

Date and Time
Friday, November 11, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Andy Zeng *19, from Google

We'd like to build robots that can help us with just about anything. Central to this is getting robots to build a general-purpose representation of the world from perception, and then use it to inform actions. Should this representation be 2D or 3D? How do we "anchor" it onto a desired latent space? Should it be an implicit representation? Object-centric? Can it be self-supervised? While many options exist out there, I'll talk about one in particular that's becoming my favorite – natural language. This choice is partly motivated by the advent of large language models, but also by recent work in multi-task learning.

In the context of robots, I'll talk about: (i) why we're starting to think that it might actually be a good idea to revisit "language" as a symbolic representation to glue our systems together to do cool things, and (ii) the various "gaps" in grounding language to control that we've discovered while building these systems, and that I think we could really use your help in figuring out.

Bio: Andy Zeng is a Senior Research Scientist at Google Brain working on vision and language for robotics. He received his Bachelor's in Computer Science and Mathematics at UC Berkeley '15, and his PhD in Computer Science at Princeton University '19. Andy is a recipient of several awards, including the Best Paper Award at T-RO '20, Best Systems Paper Awards at RSS '19 and Amazon Robotics '18, and 1st Place (Stow) at the Amazon Picking Challenge '17, and has been a finalist for Best Paper Awards at CoRL '21, CoRL '20, ICRA '20, RSS '19, and IROS '18. His research has been recognized through the Princeton SEAS Award for Excellence '18, NVIDIA Fellowship '18, and Gordon Y.S. Wu Fellowship in Engineering and Wu Prize '16, and his work has been featured in many popular press outlets, including the New York Times, BBC, and Wired. To learn more about Andy's work, please visit https://andyzeng.github.io
 

Princeton Robotics Seminar: Collaborative Robots in the Wild: Challenges and Future Directions from a Human-Centric Perspective

Date and Time
Friday, December 2, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Nadia Figueroa, from University of Pennsylvania

Since the 1960s we have lived with the promise of one day being able to own a robot that would co-exist, collaborate, and cooperate with humans in our everyday lives. This promise has motivated a vast amount of research over the last decades on motion planning, machine learning, perception, and physical human-robot interaction (pHRI). Nevertheless, we have yet to see a truly collaborative robot navigating, manipulating objects and the environment, or physically collaborating with humans and other robots outside of labs, in the human-centric dynamic spaces we inhabit; i.e., "in-the-wild". This bottleneck is due to a robot-centric set of assumptions about how humans interact with and adapt to technology and machines.

In this talk, I will introduce a set of more realistic human-centric assumptions, and I posit that for collaborative robots to be truly adopted in such dynamic, ever-changing environments, they must possess human-like characteristics of reactivity, compliance, safety, efficiency, and transparency. Combining these objectives is challenging, as providing a single optimal solution can be intractable and even infeasible due to problem complexity and contradicting goals. Hence, I will present possible avenues to achieve these requirements. I will show that by adopting a Dynamical System (DS) based approach for motion planning we can achieve reactive, safe, and provably stable robot behaviors while efficiently teaching the robot complex tasks with a handful of demonstrations. Further, I will show that such an approach can be extended to offer task-level reactivity and can be adopted to efficiently and incrementally learn from failures, as humans do. I will also discuss the role of compliance in collaborative robots, the allowance of soft impacts, and the relaxation of the standard definition of safety in pHRI, and how this can be achieved with DS-based and optimization-based approaches. I will then talk about the importance of both end-users and designers having a holistic understanding of their robot's behaviors, capabilities, and limitations, and present an approach that uses Bayesian posterior sampling to achieve this. The talk will end with a discussion of open challenges and future directions to achieve truly collaborative robots in-the-wild.
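For readers unfamiliar with DS-based motion generation, the sketch below (not taken from the talk) illustrates the core idea under a simplifying assumption: the desired velocity is a function of the robot's current state, here a linear system x_dot = -A (x - x_target) with A positive definite, so the motion is provably stable at the target and reacts to perturbations without replanning. All gains, states, and the perturbation are made-up values for illustration only.

```python
import numpy as np

# Minimal illustration of a dynamical-system (DS) motion generator:
# the desired velocity depends only on the *current* state, so the
# robot reacts instantly to perturbations without replanning.
# f(x) = -A (x - x_target) with A positive definite is globally
# asymptotically stable at x_target (Lyapunov V = ||x - x_target||^2).

A = np.diag([2.0, 1.0])          # positive-definite gain matrix (assumed values)
x_target = np.array([0.5, 0.3])  # goal position in the plane (assumed)

def ds_velocity(x):
    """Desired velocity at state x under the stable linear DS."""
    return -A @ (x - x_target)

# Roll out the DS from a start state, simulating an external push mid-motion.
x = np.array([-0.4, 0.8])
dt = 0.01
for step in range(2000):
    x = x + dt * ds_velocity(x)
    if step == 500:
        x += np.array([0.2, -0.1])   # perturbation; the DS simply re-converges
    if np.linalg.norm(x - x_target) < 1e-3:
        break

print(f"reached {x} after {step} steps")
```

Methods in this line of work learn a nonlinear velocity field from a handful of demonstrations while preserving a stability guarantee; the toy linear system above only shows why state-dependent velocity fields are naturally reactive.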

Bio: Nadia Figueroa is the Shalini and Rajeev Misra Presidential Assistant Professor in the Mechanical Engineering and Applied Mechanics (MEAM) Department at the University of Pennsylvania. She holds a secondary appointment in the Computer and Information Science (CIS) department and is a faculty advisor at the General Robotics, Automation, Sensing & Perception (GRASP) laboratory. Before joining the faculty, she was a Postdoctoral Associate in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT), advised by Prof. Julie A. Shah. She completed a Ph.D. (2019) in Robotics, Control and Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne (EPFL), advised by Prof. Aude Billard. Prior to this, she was a Research Assistant at the Engineering Department of New York University Abu Dhabi (NYU-AD) (2012-2013) and at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) (2011-2012). She holds a B.Sc. degree in Mechatronics (2007) from Monterrey Tech (ITESM-Mexico) and an M.Sc. degree in Automation and Robotics (2012) from the Technical University of Dortmund, Germany. Her research focuses on developing collaborative human-aware robotic systems: robots that can safely and efficiently interact with humans and other robots in the human-centric dynamic spaces we inhabit. This involves research at the intersection of machine learning, control theory, artificial intelligence, perception, and psychology, with a physical human-robot interaction perspective.
 

Princeton Robotics Seminar: Generalization in Planning and Learning for Robotic Manipulation

Date and Time
Friday, October 28, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Tomas Lozano-Perez, from MIT

An enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments; not by being sequentially programmed (or taught) to perform one task in one environment at a time, but rather by intelligently choosing appropriate actions for whatever task and environment it is facing. This goal remains a challenge. In this talk, I'll describe recent work in our lab aimed at general-purpose robot manipulation by integrating task-and-motion planning with various forms of model learning. In particular, I'll describe approaches to manipulating objects without prior shape models, to acquiring composable sensorimotor skills, and to exploiting past experience for more efficient planning.

Bio: Tomas Lozano-Perez is a Professor in EECS at MIT and a member of CSAIL. He was a recipient of the 2011 IEEE Robotics Pioneer Award and a co-recipient of the 2021 IEEE Robotics and Automation Technical Field Award. He is a Fellow of the AAAI, ACM, and IEEE.

Princeton Robotics Seminar: Data-Centric ML for Autonomous Driving

Date and Time
Friday, October 14, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Sarah Tang, from Waymo

Waymo is building the "World's Most Experienced Driver". This talk will discuss data challenges that arise when scaling machine learning systems with almost 1500 years' worth of real-life human driving.

Bio: Sarah is passionate about building robots that make intelligent decisions in complex, dynamic environments. She is currently a Staff Software Engineer on the Planner ML team at Waymo, where she is working to make driving safer, smarter, and more scalable with high-capacity models. In previous lives, she was the Tech Lead and Manager of the Motion Planning team at Nuro, where she led development of the decision-making and trajectory optimization stack for major product milestones, including autonomous operation of their last-mile delivery robot without a driver or a chase-car operator in Arizona, California, and Texas. She was recognized in Business Insider's 2021 list of "Rising stars in autonomous vehicles". Sarah got her start in robotics as a member of Princeton's Great Class of 2013.

Princeton Robotics Seminar: Towards Collective A.I.

Date and Time
Friday, September 30, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Radhika Nagpal, from Princeton University

In nature, groups of thousands of individuals cooperate to create complex structures purely through local interactions — from cells that form complex organisms, to social insects like termites and ants that build nests and self-assemble bridges, to the complex and mesmerizing motion of fish schools and bird flocks. What makes these systems so fascinating to scientists and engineers alike is that even though each individual has limited ability, as a collective they achieve tremendous complexity. What would it take to create our own artificial collectives of the scale and complexity that nature achieves? In this talk I will discuss some ongoing projects that use inspiration from biological self-assembly to create robotic systems, e.g., the Kilobot swarm inspired by cells, the Termes and EcitonR robots inspired by the 3D assembly of termites and army ants, and the BlueSwarm project inspired by fish schools. There are many challenges in both building and programming robot swarms, and we use these systems to explore decentralized algorithms, embodied intelligence, and methods for synthesizing complex global behavior. Our theme is the same: can we create simple robots that cooperate to achieve collective complexity?
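As a rough illustration (not from the talk) of how purely local rules can produce global structure, the toy sketch below implements hop-count gradient formation, a classic building block in decentralized swarm self-assembly: each simulated robot reads values only from neighbors within a small communication radius, yet the swarm collectively computes every robot's distance in hops from a seed. Robot count, positions, and radius are arbitrary assumptions.

```python
import numpy as np

# Toy decentralized "gradient formation": each robot repeatedly sets its
# value to one more than the smallest value heard from nearby neighbors.
# After enough rounds, each robot holds its hop distance from the seed,
# even though no robot ever sees the whole swarm.

rng = np.random.default_rng(0)
n_robots = 50
positions = rng.uniform(0.0, 1.0, size=(n_robots, 2))  # random planar swarm
comm_radius = 0.2                                       # local communication range
INF = 10**6

gradient = np.full(n_robots, INF)
gradient[0] = 0  # robot 0 acts as the seed

for _ in range(n_robots):  # enough synchronous rounds for values to propagate
    new_gradient = gradient.copy()
    for i in range(n_robots):
        # each robot listens only to neighbors within comm_radius
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = (dists < comm_radius) & (dists > 0)
        if neighbors.any():
            new_gradient[i] = min(gradient[i], gradient[neighbors].min() + 1)
    gradient = new_gradient

print("hop counts from seed:", gradient)
```

Gradient primitives of this kind are one of the building blocks used in Kilobot-style programmable self-assembly, where they help robots localize themselves relative to a growing structure.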

Biography: Radhika Nagpal is the Norman R. Augustine '57 *59 Professor in Engineering at Princeton University, joint between MAE and COS, where she heads the Self-organizing Swarms & Robotics Lab (SSR). Nagpal is a leading researcher in swarm robotics and self-organized collective intelligence. Projects from her lab include bio-inspired multi-robot systems such as the Kilobot thousand-robot swarm (Science 2014) as well as models of collective intelligence in biology (Nature Comms. 2022). In 2017 Nagpal co-founded ROOT Robotics, an educational robotics company acquired by iRobot. Nagpal was named by Nature magazine as one of the top ten influential scientists and engineers of the year (Nature10 award, Dec 2014) and she is also known for her Scientific American blog article (“The Awesomest 7 Year Postdoc”, 2013) on academic cultural change.

Princeton Robotics Seminar: Learning from Limited Data for Robot Vision in the Wild

Date and Time
Friday, September 16, 2022 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Katie Skinner, from MIT

The Princeton Robotics Seminar series continues in Fall 2022 with all events fully in person. Seminars are held on Fridays from 11:00am to 12:00pm Eastern time in the Computer Science Building, Room 105.

If you have a Princeton email, please fill out the form to subscribe to the robotics-seminar mailing list.
