Safety and Generalization Guarantees for Learning-Based Control of Robots

Anirudha Majumdar, Princeton University

12/15/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Imagine an unmanned aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to succeed on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize well to environments that our robot has not previously encountered? Unfortunately, current state-of-the-art approaches either do not generally provide such guarantees or do so only under very restrictive assumptions. This is a particularly pressing challenge for safety-critical robotic systems with rich sensory inputs (e.g., vision) that employ neural network-based control policies.

In this talk, I will present approaches for learning control policies for robotic systems that provably generalize well with high probability to novel environments. The key technical idea behind our approach is to leverage tools from generalization theory (e.g., PAC-Bayes theory) and the theory of information bottlenecks. We apply our techniques to examples including navigation and grasping in order to demonstrate their potential to provide strong generalization guarantees for robotic systems with complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
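As a rough illustration of the kind of guarantee PAC-Bayes theory provides (a classical McAllester/Maurer-style bound, not necessarily the exact bound used in the work presented), suppose a policy is trained on N environments drawn independently from an unknown distribution, with costs scaled to lie in [0, 1]. Then, with probability at least 1 - \delta over the draw of the training environments, every distribution Q over policies satisfies

\[
  \mathbb{E}_{\pi \sim Q}\big[ C(\pi) \big]
  \;\le\;
  \mathbb{E}_{\pi \sim Q}\big[ \hat{C}(\pi) \big]
  + \sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) + \ln\!\big( 2\sqrt{N}/\delta \big) }{ 2N } },
\]

where C(\pi) is the expected cost of policy \pi on a novel environment, \hat{C}(\pi) is its empirical cost on the N training environments, and P is a "prior" over policies fixed before seeing the training data. The symbols here are generic placeholders rather than the notation used in the talk; the point is that the bound holds for the distribution over environments itself, without restrictive assumptions on the dynamics or the sensor model.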