The development of robots that can accept instructions from and collaborate with human users is greatly aided by the ability to engage in natural-language dialog. Unlike most other dialog scenarios, this requires grounding the semantic analysis of language in perception and action in the world. Although deep learning has greatly improved methods for such grounded language understanding, it is difficult to ensure that the data used to train such models covers all of the concepts a robot might encounter in practice. Therefore, we have developed methods that continue to learn from dialog with users during ordinary use, acquiring additional targeted training data from the responses to intentionally designed clarification and active-learning queries. These methods use reinforcement learning to automatically acquire dialog strategies that support both effective immediate task completion and learning that improves future performance. In experiments both in simulation and on real robots, we have demonstrated that these methods exhibit lifelong learning that improves long-term performance.
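To give a rough flavor of the idea described in the abstract, the sketch below shows a toy grounded-dialog agent that acts immediately when its word-grounding model is confident, and otherwise asks a clarification query whose answer doubles as new labeled training data (the active-learning case). This is not the talk's actual system: all names, the confidence threshold, and the reward values are hypothetical, and a fixed threshold stands in for the ask-vs-act strategy that the talk's methods learn with reinforcement learning.

```python
# Minimal sketch, not the talk's actual system; all names and constants
# below are hypothetical illustrations of the abstract's high-level idea.
import random
from collections import defaultdict

QUERY_COST = 0.2      # small penalty so the agent does not query endlessly
REWARD_SUCCESS = 1.0  # reward for correctly completing the user's command
CONFIDENCE = 0.7      # fixed stand-in for an RL-learned ask-vs-act policy

class GroundedDialogAgent:
    def __init__(self):
        # counts[word][referent] backs a smoothed estimate of p(referent | word)
        self.counts = defaultdict(lambda: defaultdict(int))

    def belief(self, word, referents):
        # Laplace-smoothed distribution over which object the word refers to
        total = sum(self.counts[word][r] for r in referents)
        return {r: (self.counts[word][r] + 1) / (total + len(referents))
                for r in referents}

    def step(self, word, referents, oracle):
        """One dialog turn: act if confident, else ask and learn.

        In this toy setting the same oracle both answers clarification
        queries and scores task success."""
        p = self.belief(word, referents)
        guess = max(p, key=p.get)
        if p[guess] > CONFIDENCE:                 # confident: act immediately
            return REWARD_SUCCESS if guess == oracle(word) else 0.0
        answer = oracle(word)                     # clarification query to the user
        self.counts[word][answer] += 1            # answer becomes training data
        return REWARD_SUCCESS - QUERY_COST        # task completed, minus query cost

if __name__ == "__main__":
    objects = ["red_block", "blue_block", "green_block"]
    meaning = {"crimson": "red_block", "azure": "blue_block"}
    agent = GroundedDialogAgent()
    total = sum(agent.step(random.choice(list(meaning)), objects,
                           lambda w: meaning[w]) for _ in range(30))
    print(f"cumulative reward over 30 turns: {total:.1f}")
```

Early turns pay the query cost to gather labels; once a word's grounding is confident enough, the agent acts directly, so cumulative reward improves over the agent's lifetime, mirroring the lifelong-learning behavior the abstract describes.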
Bio: Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign. He is an author of over 180 published research papers, primarily in the areas of machine learning and natural language processing. He was the President of the International Machine Learning Society from 2008 to 2011, program co-chair for AAAI 2006, general chair for HLT-EMNLP 2005, and co-chair for ICML 1990. He is a Fellow of AAAI, ACM, and ACL, and the recipient of the Classic Paper award from AAAI-19 and best paper awards from AAAI-96, KDD-04, ICML-05, and ACL-07.
This talk will be recorded and live-streamed at https://mediacentrallive.princeton.edu/