From Semantics to Localization in LiDAR Maps for Autonomous Vehicles

Abhinav Valada, University of Freiburg


Location: Zoom

Time: 2:40 p.m.

Abstract: LiDAR-based scene interpretation and localization play a critical role in enabling autonomous vehicles to safely navigate their environment. The last decade has witnessed unprecedented progress in these tasks by exploiting learning techniques to improve performance and robustness. Despite these advances, the unordered, sparse, and irregular structure of point clouds poses several unique challenges that lead to suboptimal performance when employing standard convolutional neural networks (CNNs). In this talk, I will discuss three efforts targeted at addressing some of these challenges. First, I will present our state-of-the-art approach to LiDAR panoptic segmentation that employs a 2D CNN while explicitly leveraging the unique 3D information provided by point clouds at multiple stages in the network. I will then present our recent work that incorporates a differentiable unbalanced optimal transport algorithm to detect loop closures in LiDAR point clouds and outperforms both existing learning-based and handcrafted methods. Next, to alleviate the need for expensive LiDAR sensors on every robot, I will present the first approach for monocular camera localization in LiDAR maps that effectively generalizes to new environments without any retraining and independently of the camera parameters. Finally, I will conclude the talk with a discussion of opportunities for further scaling up the learning of these tasks.
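As background for the loop-closure work mentioned above, the following is a minimal sketch (not the speaker's implementation) of entropic unbalanced optimal transport computed with Sinkhorn-style scaling iterations, the kind of differentiable matching that can score correspondences between two sets of LiDAR descriptors even when the sets have different total mass. All function names, parameters, and default values here are illustrative assumptions.

```python
import math

def unbalanced_sinkhorn(cost, a, b, eps=0.1, rho=1.0, n_iters=200):
    """Approximate an unbalanced OT plan between mass vectors a and b.

    cost : n x m list of lists, pairwise costs between descriptors
    a, b : source/target masses (need not sum to the same total)
    eps  : entropic regularization strength
    rho  : KL relaxation strength on the marginal constraints
    """
    n, m = len(a), len(b)
    # Gibbs kernel derived from the cost matrix.
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    # The KL relaxation softens the usual Sinkhorn scaling by this exponent;
    # rho -> infinity recovers balanced Sinkhorn updates.
    p = rho / (rho + eps)
    for _ in range(n_iters):
        u = [(a[i] / sum(K[i][j] * v[j] for j in range(m))) ** p
             for i in range(n)]
        v = [(b[j] / sum(K[i][j] * u[i] for i in range(n))) ** p
             for j in range(m)]
    # Entry (i, j) of the plan is the soft match strength between
    # source descriptor i and target descriptor j.
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Toy usage: two source descriptors matched against two target descriptors,
# with low cost on the diagonal (good matches) and high cost off-diagonal.
plan = unbalanced_sinkhorn([[0.0, 1.0], [1.0, 0.0]], [0.5, 0.5], [0.5, 0.5])
```

Because every step is a smooth function of the cost matrix, gradients can flow back into the network that produced the descriptors, which is what makes such a layer usable inside an end-to-end learned pipeline.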