Abstract: To collaborate with humans easily and efficiently, robots must learn to complete tasks specified using natural language. Natural language provides an intuitive interface that lets a layperson interact with a robot without programming it, which would require expertise. Natural language instructions can specify goal conditions or provide the guidance and constraints required to complete a task. Given a natural language command, a robot needs to ground the instruction to a plan that can be executed in the environment. This grounding can be challenging, especially when we expect robots to generalize to novel natural language descriptions and novel task specifications while being given as little prior information as possible. In this talk, I will present a model for grounding instructions to plans. Furthermore, I will present two strategies under this model for language grounding and compare their effectiveness. During the talk, we will explore approaches using deep learning, semantic parsing, predicate logic, and linear temporal logic for task grounding and execution.
Bio: Nakul Gopalan is a graduate student in the H2R lab at Brown University. His interests are in the problems of language grounding for robotics and of abstractions within reinforcement learning and planning. He has an M.Sc. in Computer Science from Brown University (2015) and an M.Sc. in Information and Communication Engineering from T.U. Darmstadt, Germany (2013). He completed a Bachelor of Engineering at R.V. College of Engineering in Bangalore, India (2008). His team recently won the Brown-Hyundai Visionary Challenge for their proposal to use mixed reality and social feedback for human-robot collaboration.