Princeton Robotics Seminar

Robotics Seminar - Towards Open World Robot Safety

Date and Time
Friday, December 6, 2024 - 11:00am to 12:00pm
Location
Bowen Hall 222
Type
Princeton Robotics Seminar

Andrea Bajcsy
Robot safety is a nuanced concept. We commonly equate safety with collision avoidance, but in complex, real-world environments (i.e., the "open world") it can be much more: for example, a mobile manipulator should understand when it is not confident about a requested task, that areas roped off by caution tape should never be breached, and that objects should be gently pulled from clutter to prevent them from falling. However, designing robots that have such a nuanced safety understanding – and can reliably generate appropriate actions – is an outstanding challenge.

In this talk, I will describe my group's work on systematically uniting modern machine learning models (such as large vision-language models, deep neural trajectory predictors, and latent world models) with classical formulations of safety from the control literature to generalize safe robot decision-making to increasingly open-world interactions. Throughout the talk, I will present experimental instantiations of these ideas in domains like vision-based navigation, autonomous driving, and robotic manipulation.

Bio: Andrea Bajcsy is an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Interactive and Trustworthy Robotics Lab (Intent Lab). She works broadly at the intersection of robotics, machine learning, control theory, and human-AI interaction. Prior to joining CMU, Andrea received her Ph.D. in Electrical Engineering & Computer Science from the University of California, Berkeley in 2022. She is the recipient of the Google Research Scholar Award (2024), the Rising Stars in EECS Award (2021), an Honorable Mention for the T-RO Best Paper Award (2020), and the NSF Graduate Research Fellowship (2016), and has previously worked at NVIDIA Research for Autonomous Driving.

Robotics Seminar - What makes learning to control easy or hard?

Date and Time
Friday, October 25, 2024 - 11:00am to 12:00pm
Location
Bowen Hall 222
Type
Princeton Robotics Seminar
Speaker
Nikolai Matni, from University of Pennsylvania

Nikolai Matni
Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem.  In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning.  We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of the following case studies: (i) characterizing fundamental limits of learning-enabled control, (ii) developing novel robust imitation learning algorithms with finite sample-complexity guarantees, and (if time allows) (iii) leveraging data from diverse but related tasks for efficient multi-task learning for control.  In all cases, we will emphasize the interplay between robust learning, robust control, and robust stability and their consequences on the sample-complexity and generalizability of the resulting learning-based control algorithms.

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group.  He has held positions as a Visiting Faculty Researcher at Google Brain Robotics, NYC, as a postdoctoral scholar in EECS at UC Berkeley, and as a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds a B.A.Sc. and M.A.Sc. in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems.  Nikolai is a recipient of the AFOSR YIP (2024), the NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, and the 2013 IEEE CDC Best Student Paper Award.  He is also a co-author on papers that have won the 2022 IEEE CDC Best Student Paper Award and the 2017 IEEE ACC Best Student Paper Award.

Robotics Seminar - Rethinking digital construction: a collaborative future of humans, machines, and craft

Date and Time
Friday, October 11, 2024 - 11:00am to 12:00pm
Location
Bowen Hall 222
Type
Princeton Robotics Seminar
Speaker
Daniela Mitterberger, from Princeton University

Daniela Mitterberger
The construction industry is experiencing a transformative shift with the integration of digital fabrication technologies. This talk explores the potential for meaningful collaboration between humans and machines in construction workflows, emphasizing the synergy between craft and computational processes. The talk aims to show how emerging technologies can enhance human skills and redefine traditional building techniques by examining research directions such as augmented manual fabrication and augmented human-robot collaboration. These research directions, demonstrated through built case studies, show how digital tools such as extended reality (XR) and collaborative robotics can assist craftspeople, enhancing both precision and creativity while preserving the tactile and sociocultural aspects of construction. The results highlight the potential for new, hybrid construction methods that integrate human agency with machine precision, offering a vision for a more collaborative and sustainable future in construction.

Bio: Daniela Mitterberger is an architect and researcher with a strong interest in new media and the relationship between humans, digital fabrication, and emerging technologies. Mitterberger is an Assistant Professor at Princeton University, where she develops innovative computational methods that enable human-machine collaborative processes through adaptive digital fabrication and extended reality. She is Co-founder and Director of «MAEID [Büro für Architektur und Transmediale Kunst]», a multidisciplinary architecture practice based in Austria. Mitterberger received her doctoral degree (Dr. sc.) from ETH Zurich in 2023 with a thesis on "Adaptive digital fabrication and human-machine collaboration for architecture". In 2023 she worked as a postdoctoral researcher at ETH Zurich within the Design++ initiative (Centre for Augmented Computational Design in AEC), where she was Co-lead of the Immersive Design Lab, a lab for collaborative research and teaching on extended reality and machine learning in architecture and construction.
Previously, she was a lecturer in several international graduate and postgraduate programs, among others at the MSD Melbourne (Australia), UniSA University of Adelaide (Australia), University of Applied Arts Vienna (Austria), Academy of Fine Arts Vienna (Austria), University of Innsbruck (Austria), ETH Zurich (Switzerland), Tongji University (China), and IAAC in Barcelona (Spain). Her work has been recognized with several international awards and has been widely exhibited at various galleries, institutions, and events, including the Venice Biennale 2021, the Princess of Asturias Awards 2021, the Seoul Biennale 2020, Ars Electronica Linz, MAK Vienna, the Melbourne Triennial, the Academy of Fine Arts Vienna, and HdA Graz.

Princeton Robotics Seminar - Enabling Cross-Embodiment Learning

Date and Time
Friday, April 19, 2024 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar

Jeannette Bohg
In this talk, I will investigate the problem of learning manipulation skills across a diverse set of robotic embodiments. Conventionally, manipulation skills are learned separately for every task, environment, and robot. However, in domains like Computer Vision and Natural Language Processing, we have seen that one of the main contributing factors to generalisable models is access to large amounts of diverse data. If we were able to have one robot learn a new task even from data recorded with a different robot, then we could already scale up training data to a much larger degree for each robot embodiment. In this talk, I will present a new, large-scale dataset that was put together across multiple industry and academic research labs to make it possible to explore cross-embodiment learning in the context of robotic manipulation, alongside experimental results that provide an example of effective cross-robot policies. Given this dataset, I will also present multiple alternative ways to learn cross-embodiment policies. These example approaches include (1) UniGrasp, a model that can synthesise grasps for new hands; (2) XIRL, an approach that automatically discovers and learns vision-based reward functions from cross-embodiment demonstration videos; and (3) EquivAct, an approach that leverages equivariance to learn sensorimotor policies that generalise to scenarios that are traditionally out-of-distribution.

Bio: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods towards multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.


PhD students and postdocs can sign up to join Jeannette for lunch. There will also be a Robotics Social at 4:00 PM in the F-Wing Cafe Area - all are welcome to attend!

Princeton Robotics Seminar - Ensuring Robot Safety Through Safety Index Synthesis

Date and Time
Friday, April 5, 2024 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Changliu Liu, from CMU

Changliu Liu
A safety index is a special class of high-order control barrier function. Its purpose is to ensure forward invariance of a user-specified safe set and finite-time convergence to that set. Synthesizing a valid safety index poses significant challenges, particularly when dealing with control limits, uncertainties, and time-varying dynamics. In this talk, I will introduce a variety of approaches that can be used for safety index synthesis, including a rule-based method, an evolutionary optimization-based approach, a constrained reinforcement learning-based approach, an adversarial optimization-based approach, as well as sum-of-squares programming. The parameterization of the safety index can either take an analytical form or be a neural network. I will conclude the talk by highlighting the limitations of existing work and discussing potential future directions, including integrating formal verification into neural safety index synthesis.
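
For readers less familiar with the terminology, a minimal sketch of the forward-invariance requirement behind a safety index, in generic notation of my own choosing (not necessarily the exact formulation used in this line of work), is

\[
\mathcal{S} = \{\, x : \phi(x) \le 0 \,\}, \qquad \exists\, u \in \mathcal{U} \ \text{s.t.} \ \dot{\phi}(x, u) \le -\eta \quad \text{whenever } \phi(x) \ge 0,
\]

where \phi is the safety index, \mathcal{U} is the set of admissible controls, and \eta > 0 is a margin parameter. If such a control always exists, \phi decreases at a guaranteed rate outside \mathcal{S} (finite-time convergence) and cannot increase across its boundary (forward invariance); the synthesis problem discussed in the talk is precisely finding a \phi for which this holds under control limits and uncertainty.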

Bio: Dr. Changliu Liu is an Assistant Professor in the Robotics Institute, School of Computer Science, Carnegie Mellon University (CMU), where she leads the Intelligent Control Lab. Prior to joining CMU, Dr. Liu was a postdoc at the Stanford Intelligent Systems Laboratory. She received her Ph.D. in Engineering, together with master's degrees in Engineering and Mathematics, from the University of California, Berkeley, and her bachelor's degrees in Engineering and Economics from Tsinghua University. Her research interests lie in the design and verification of intelligent systems with applications to manufacturing and transportation. She published the book “Designing Robot Behavior in Human-Robot Interactions” with CRC Press in 2019. She is the founder of the International Neural Network Verification Competition, launched in 2020. Her work has been recognized by the NSF CAREER Award, the Amazon Research Award, the Ford URP Award, the Advanced Robotics for Manufacturing Champion Award, and many best/outstanding paper awards.

Students: Sign up for lunch with the speaker here (12:00 - 1:30 PM).

Princeton Robotics Seminar - Unlocking Agility, Safety, and Resilience for Legged Navigation: Addressing Real-world Challenges in Uncertain Environments

Date and Time
Friday, March 22, 2024 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Ye Zhao, from GA Tech

PhD students and postdocs can sign up to join Ye for lunch here. There will also be a Robotics Social at 4:00 PM in the F-Wing Cafe Area - all are welcome to attend!


Ye Zhao
While legged robots have made remarkable progress in dynamic balancing and locomotion, there remains substantial room for improvement in terms of safe navigation and decision-making capabilities. One major challenge stems from the difficulty of designing safe, resilient, and real-time planning and decision-making frameworks for these complex legged machines navigating unstructured environments. Symbolic planning and distributed trajectory optimization offer promising yet underexplored solutions. This talk will introduce three perspectives on enhancing safety and resilience in task and motion planning (TAMP) for agile legged navigation. First, we'll discuss hierarchically integrated TAMP for dynamic locomotion in environments susceptible to perturbations, focusing on robust recovery behaviors. Next, we'll cover our recent work on safe and socially acceptable legged navigation planning in environments that are partially observable and crowded with humans. Lastly, we'll delve into distributed contact-aware trajectory optimization methods achieving dynamic consensus for agile locomotion behaviors.

Bio: Ye Zhao is an Assistant Professor at The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology. He received his Ph.D. degree in Mechanical Engineering from The University of Texas at Austin in 2016. After that, he was a Postdoctoral Fellow at the Agile Robotics Lab, Harvard University. At Georgia Tech, he leads the Laboratory for Intelligent Decision and Autonomous Robots. His research focuses on planning and decision-making algorithms for highly dynamic and contact-rich robots. He received the George W. Woodruff School Faculty Research Award at Georgia Tech in 2023, the NSF CAREER Award in 2022, and the ONR YIP Award in 2023. He serves as an Associate Editor of T-RO, TMECH, RA-L, and L-CSS. His co-authored work has received multiple paper awards, including the 2021 ICRA Best Automation Paper Award Finalist, the 2023 Best Paper Award at the NeurIPS Workshop on Touch Processing, and the 2016 IEEE-RAS Whole-Body Control Best Paper Award Finalist.

Princeton Robotics Seminar - High-confidence Robot Motion Planning under Uncertainty

Date and Time
Friday, March 8, 2024 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar

Marin Kobilarov
This talk will provide an overview of research activities at the ASCO lab, currently including robot-assisted surgical micro-manipulation and navigation of autonomous vehicles (aerial, underwater, or ground) under state and perception uncertainty. We will specifically focus on motion planning with built-in robustness guarantees, i.e., planning that aims to certify expected performance before actual deployment. The core idea is to employ probably approximately correct (PAC) bounds on performance, which are used as an objective function in control policy optimization. Such robust policies could then provide high-confidence performance guarantees, such as “with 99% chance the robot will reach its goal, while avoiding collisions with 99.9% chance”, and result in improved safety and reliability.
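
As background on the terminology, a PAC-style performance bound typically has the following generic shape (a sketch in notation of my own choosing, not the specific bound developed in this work):

\[
\Pr\Big( J(\pi) \;\le\; \hat{J}_n(\pi) + \varepsilon(n, \delta) \Big) \;\ge\; 1 - \delta,
\]

where J(\pi) is the true expected cost of policy \pi, \hat{J}_n(\pi) is its empirical estimate from n samples, and the slack \varepsilon(n, \delta) shrinks as n grows. Optimizing the certified quantity \hat{J}_n(\pi) + \varepsilon(n, \delta) rather than the empirical cost alone is what allows a policy's deployed performance to be guaranteed with confidence at least 1 - \delta.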

Bio: Marin Kobilarov is an Associate Professor at the Johns Hopkins University and a Principal Engineer at Zoox/Amazon. At JHU he leads the Autonomous Systems, Control and Optimization (ASCO) lab, which develops algorithms and software for planning, learning, and control of autonomous robotic systems. The lab's focus is on computational theory at the intersection of planning and learning, and on the system integration and deployment of robots that can operate safely and efficiently in challenging environments.


PhD students & faculty: If you are interested in meeting with Marin on 3/8, please reach out to nsimon@princeton.edu.

PhD students: If you are interested in joining the speaker for lunch on 3/8 from 12:15 - 1:30 PM, please fill out this form.

Princeton Robotics Seminar - Towards Robotic Construction of Sustainable Structures

Date and Time
Friday, February 23, 2024 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Caitlin Mueller, from MIT

Caitlin Mueller
Architecture and the built environment play an outsized role in the climate crisis, both in their contribution to current emissions and in projections for the future.  Standard design and construction practices are often wasteful, expensive, and exploitative, undermining the mission to create spaces of dignity, comfort, and delight.  Within this context, this talk will present an alternative approach, focused on promoting design and construction methods that produce high-quality, low-carbon, and inexpensive outcomes, empowered by emerging advances in computational design and construction robotics.  In addition to general methods, the talk will present a range of recent projects, including robotic planning and assembly of complex yet highly efficient truss structures, robotics-enabled fabrication of low-cost, low-carbon earthen and concrete structures, and algorithmic design and fabrication approaches for unconventional and circular material use.

Bio: Caitlin Mueller is an Associate Professor at MIT's Department of Architecture and Department of Civil and Environmental Engineering, in the Building Technology Program, where she leads the Digital Structures research group.  She works at the creative interface of architecture, structural engineering, and computation, and focuses on new computational design and digital fabrication methods for innovative, high-performance buildings and structures that empower a more sustainable and equitable future. Mueller holds three degrees from MIT in Architecture, Computation, and Building Technology, and one from Stanford in Structural Engineering.  Her research is funded by federal agencies and industry partners, including the National Science Foundation, FEMA, the MIT Tata Center, the Dar Group, Holcim, Robert McNeel & Associates, and Altair Engineering.  Mueller has won best paper awards from the International Association of Shell and Spatial Structures, the Symposium on Geometry Processing, and the Journal of Mechanical Design, and was awarded the ACADIA Innovative Research Award of Excellence by the Association for Computer Aided Design in Architecture in 2021 and the Diversity Achievement Award from the Association of Collegiate Schools of Architecture in 2022.

Princeton Robotics Seminar - The theory of online control and its application to robotics

Date and Time
Friday, February 9, 2024 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Elad Hazan, from Princeton University

Elad Hazan
In this talk we will discuss an emerging paradigm in differentiable reinforcement learning called “online nonstochastic control”. The new approach applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. Time permitting, we will discuss recent extensions to nonlinear adaptive control and iterative planning, as well as model-free reinforcement learning.
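
As background, online nonstochastic control is typically evaluated by policy regret against a comparator class; a generic form, in notation of my own choosing, is

\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t(x_t^{\pi}, u_t^{\pi}),
\]

where the convex cost functions c_t and the disturbances acting on the dynamics may be chosen adversarially, and the goal is regret that grows sublinearly in the horizon T against a comparator class \Pi such as linear state-feedback or disturbance-action policies.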

This theory was, and continues to be, developed here in Princeton with numerous collaborators, including Naman Agarwal, Brian Bullins, Karan Singh, Max Simchowitz, Xinyi Chen, Ani Majumdar, Sham Kakade, Udaya Ghai, Edgar Minasyan, Paula Gradu, and many others.

Bio: Elad Hazan is a professor of computer science at Princeton University. His research focuses on the design and analysis of algorithms for basic problems in machine learning and optimization. Among his contributions are the co-invention of the AdaGrad algorithm for deep learning and the first sublinear-time algorithms for convex optimization. He is the recipient of the Bell Labs Prize, the IBM Goldberg Best Paper Award (in 2008 and 2012), a European Research Council grant, a Marie Curie fellowship, and the Google Research Award (twice). He served on the steering committee of the Association for Computational Learning and was program chair for COLT 2015. In 2017 he co-founded In8 Inc., focusing on efficient optimization and control; the company was acquired by Google in 2018. He is the co-founder and director of Google AI Princeton.

Princeton Robotics Seminar - Autonomy in the Human World: Developing Robots that Handle the Diversity of Human Lives

Date and Time
Friday, December 1, 2023 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Princeton Robotics Seminar
Speaker
Sonia Chernova, from Georgia Tech

Sonia Chernova
Reliable operation in everyday human environments – homes, offices, and businesses – remains elusive for today’s robotic systems.  A key challenge is diversity, as no two homes or businesses are exactly alike.  However, despite the innumerable unique aspects of any home, there are many commonalities as well, particularly in how objects are placed and used.  These commonalities can be captured in semantic representations and then used to improve the autonomy of robotic systems by, for example, enabling robots to infer missing information in human instructions, efficiently search for objects, or manipulate objects more effectively.  In this talk, I will discuss recent advances in semantic reasoning, particularly focusing on the semantics of everyday objects, household environments, and the development of robotic systems that intelligently interact with their world.

Bio: Sonia Chernova is an Associate Professor in the College of Computing at Georgia Tech.  She directs the Robot Autonomy and Interactive Learning lab, where her research focuses on the development of intelligent and interactive autonomous systems.  Chernova’s contributions span robotics and artificial intelligence, including semantic reasoning, adaptive autonomy, human-robot interaction, and explainable AI.  She also leads the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), whose mission is to develop collaborative AI partners-in-care that support a growing population of older adults, helping them sustain independence, improve quality of life, and increase the effectiveness of care coordination across their care network.
