Model-Based Visual Imitation Learning

Franziska Meier, Facebook AI Research


Location: Zoom

Time: 2:40 p.m.

Abstract: How can we teach robots new skills by simply showing them what to do? In this talk I’m going to present our recent work on learning reward functions from visual demonstrations via model-based inverse reinforcement learning. Given the reward function, a robot can then learn the demonstrated task autonomously. More concretely, I will show how we can frame model-based IRL as a bi-level optimization problem, which allows us to learn reward functions by directly minimizing the distance between a demonstrated trajectory and a predicted trajectory. In order to do so from visual demonstrations, a key ingredient is a visual dynamics model that enables the robot to predict the visual trajectory it would observe if it were to execute a policy. I will discuss the opportunities and challenges of this research direction, and will end with an outlook on future work.
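To make the bi-level framing concrete, here is a minimal illustrative sketch (not the talk's actual method): the inner loop plans a trajectory that maximizes a parameterized reward under a known dynamics model, and the outer loop adjusts the reward parameter to minimize the distance between the planned trajectory and the demonstration. The toy linear dynamics, the goal-distance reward, and all names below are assumptions made for illustration only.

```python
import numpy as np

def dynamics(x, u):
    # Toy linear dynamics (an assumption; the talk uses a learned visual model).
    return x + u

def rollout(x0, actions):
    # Predict the trajectory obtained by executing `actions` from state `x0`.
    traj = [x0]
    for u in actions:
        traj.append(dynamics(traj[-1], u))
    return np.array(traj)

def plan(x0, goal, horizon, steps=200, lr=0.02):
    # Inner optimization: gradient descent on the action sequence to maximize
    # the reward r(x) = -||x - goal||^2, i.e. minimize distance to the goal.
    actions = np.zeros((horizon, x0.shape[0]))
    for _ in range(steps):
        traj = rollout(x0, actions)
        # d/du_t of sum_k ||x_k - goal||^2: action u_t affects all later states.
        grads = np.array([2.0 * (traj[t + 1:] - goal).sum(axis=0)
                          for t in range(horizon)])
        actions -= lr * grads
    return actions

# A demonstrated trajectory: a straight line toward the point (3, 4),
# which the learner does not know.
x0 = np.zeros(2)
demo = np.linspace(x0, np.array([3.0, 4.0]), num=6)

# Outer optimization: fit the reward parameter (here, the goal location) by
# shrinking the gap between the demonstrated and the predicted trajectory.
goal = np.zeros(2)
errors = []
for _ in range(50):
    actions = plan(x0, goal, horizon=5)
    pred = rollout(x0, actions)
    errors.append(np.linalg.norm(pred - demo))
    # First-order update: move the goal toward the demonstration wherever the
    # planned trajectory falls short of it (a crude stand-in for differentiating
    # through the inner planner).
    goal += 0.5 * (demo - pred).mean(axis=0)

print(errors[0], errors[-1])  # trajectory mismatch shrinks across outer steps
```

The point of the sketch is the structure, not the specific updates: the reward is never hand-specified; it is recovered because a better reward makes the planner's predicted trajectory look more like the demonstration.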