Krishna Murthy Jatavallabhula, Robotics and Embodied AI Lab (REAL), Mila, Université de Montréal
Abstract: Modern machine learning has ushered in a new wave of excitement in the design of intelligent robots. In particular, gradient-based learning architectures (deep neural networks) have enabled significant strides in robot perception, reasoning, and action. Given all of these advancements, one might wonder whether “classical” techniques for robot perception and state estimation are still relevant in this age. I postulate that a flexible blend of “classical” and “learned” methods is the best path forward for robot intelligence.
“What is the ideal way to combine ‘classical’ techniques with gradient-based learning architectures?” This is the central question that my research strives to answer. I argue that such a blend should be seamless: we must neither disregard the domain-specific inductive biases that inform the design of “classical” robots, nor should we compromise on the representational power that learning-based techniques offer. In particular, I tackle the problem of blending gradient-based learning with visual simultaneous localization and mapping (SLAM), and the new possibilities this opens up. My talk will focus on “gradSLAM”: a fully differentiable dense SLAM system that harnesses the power of computational graphs and automatic differentiation, enabling a new perspective on deep learning for SLAM.
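To give a flavor of the core idea, the sketch below shows a deliberately tiny, hypothetical analogue of what “differentiable SLAM” means: every step of an alignment pipeline is a differentiable operation, so gradients of a downstream loss can flow back into the pose estimate. This is not gradSLAM’s actual implementation (gradSLAM operates on dense RGB-D maps with full rigid-body poses); here we estimate only a 2D translation between two point sets by gradient descent, with the gradient written out analytically.

```python
import numpy as np

def align_translation(src, dst, lr=0.1, steps=200):
    """Estimate a 2D translation t minimizing the mean of
    ||src_i + t - dst_i||^2 by gradient descent.

    A toy analogue of refining a pose inside a differentiable
    SLAM pipeline: because the residual is a differentiable
    function of t, the loss gradient flows back to the pose.
    """
    t = np.zeros(2)
    for _ in range(steps):
        residual = src + t - dst            # differentiable residual per point
        grad = 2.0 * residual.mean(axis=0)  # d(mean squared residual)/dt
        t -= lr * grad                      # gradient step on the "pose"
    return t

# Synthetic data: the target is the source shifted by a known translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 2))
true_t = np.array([0.5, -1.2])
dst = src + true_t

t_hat = align_translation(src, dst)
```

In a real system such as gradSLAM, an automatic-differentiation framework computes gradients like the one written by hand above, so learned components (e.g., a depth or feature network) can be trained end-to-end through the mapping and tracking stages.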