Events

Mapping Natural Language Instructions and Observations to Robot Control

Yoav Artzi, Cornell Tech

9/10/2019

Location: Upson 106 Conference Room (next to the lounge)

Time: 3:00 p.m.

Abstract: The problem of mapping natural language instructions to robot actions has largely been studied using modular approaches, where different modules are built or trained for different tasks and then combined in a complex integration process to form a complete system. This approach requires significant engineering effort and the design of complex symbolic representations, both to represent language meaning and to mediate the interaction between the different modules. We propose to trade off these challenges with representation learning, and learn to map directly from natural language instructions and raw sensory observations to robot control in a single model. We design an interpretable model that allows the user to visualize the robot’s plan, and a learning approach that uses simulation and demonstrations to learn without autonomous robot control. We apply our method to a quadcopter drone for the task of following navigation instructions.

This work was done by Valts Blukis, whom I co-advise with Ross Knepper.

Bio: Yoav Artzi is an Assistant Professor in the Department of Computer Science and Cornell Tech at Cornell University. His research focuses on learning expressive models for natural language understanding, most recently in situated interactive scenarios. He received an NSF CAREER award, paper awards at EMNLP 2015, ACL 2017, and NAACL 2018, a Google Focused Research Award, and faculty awards from Google, Facebook, and Workday. Yoav holds a B.Sc. summa cum laude from Tel Aviv University and a Ph.D. from the University of Washington.

Human-guided Task Transfer for Interactive Robots

Tesca Fitzgerald, Georgia Tech

9/3/2019

Location: Upson 106 Conference Room (next to the lounge)

Time: 3:00 p.m.

Abstract:

Adaptability is an essential skill in human cognition, enabling us to draw on our extensive, lifelong experience with various objects and tasks to address novel problems. To date, most robots lack this kind of adaptability, and yet, as our expectations of robots’ interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do.

In this talk, I will describe my approaches to the problem of task transfer: enabling a robot to transfer a known task model to scenarios that differ in the objects used, object configurations, and task constraints. The primary contribution of my work is a series of algorithms for deriving and modeling domain-specific task information from structured interaction with a human teacher. This enables the robot to leverage the teacher’s domain knowledge of the task (such as the contextual use of an object or tool) to address a range of tasks without extensive exploration or retraining. By enabling a robot to ask for help with unfamiliar problems, my work contributes toward a future of adaptive, collaborative robots.


Bio:

Tesca Fitzgerald is a Computer Science PhD candidate in the School of Interactive Computing at the Georgia Institute of Technology. In her PhD, she has been developing algorithms and knowledge representations for robots to learn, adapt, and reuse task knowledge through interaction with a human teacher. In doing so, she applies concepts of social learning and cognition to develop robots that adapt to human environments.

Tesca is co-advised by Dr. Ashok Goel (director of the Design and Intelligence Lab) and Dr. Andrea Thomaz (director of the Socially Intelligent Machines Lab). Before joining Georgia Tech in 2013, she graduated from Portland State University with a B.Sc. in Computer Science. Tesca is an NSF Graduate Research Fellow (2014), a Microsoft Graduate Women Scholar (2014), and an IBM Ph.D. Fellow (2017).