Mapping Natural Language Instructions and Observations to Robot Control

Yoav Artzi, Cornell Tech

9/10/19

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: The problem of mapping natural language instructions to robot actions has been studied largely using modular approaches, where different modules are built or trained for different tasks and are then combined in a complex integration process to form a complete system. This approach requires significant engineering effort and the design of complex symbolic representations, both to represent language meaning and to mediate the interaction between the different modules. We propose to trade off these challenges with representation learning, and to learn to map directly from natural language instructions and raw sensory observations to robot control in a single model. We design an interpretable model that allows the user to visualize the robot’s plan, and a learning approach that uses simulation and demonstrations to learn without autonomous robot control. We apply our method to a quadcopter drone for the task of following navigation instructions.

This work was done by Valts Blukis, who is co-advised by Ross Knepper.

Bio: Yoav Artzi is an Assistant Professor in the Department of Computer Science and Cornell Tech at Cornell University. His research focuses on learning expressive models for natural language understanding, most recently in situated interactive scenarios. He received an NSF CAREER award, paper awards in EMNLP 2015, ACL 2017, and NAACL 2018, a Google Focused Research Award, and faculty awards from Google, Facebook, and Workday. Yoav holds a B.Sc. summa cum laude from Tel Aviv University and a Ph.D. from the University of Washington.