Perception in Action

Silvia Ferrari, Cornell University

12/8/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Autonomous robots equipped with on-board cameras are becoming crucial to both civilian and military applications because of their ability to assist humans in carrying out dangerous yet vital missions. Existing computer vision and perception algorithms have limited real-time applicability in agile, autonomous robots such as micro aerial vehicles, due to their heavy computational requirements and slow reaction times. Event-based cameras have the potential to overcome these limitations, but their real-time implementations to date have been limited to obstacle avoidance. This talk presents an approach that departs from the usual paradigm of treating computer vision and robot control as separate processes and introduces a new class of active perception and motion control algorithms that are closely intertwined. This perception-in-action approach not only accounts for but also exploits the known ego motion of the robot-mounted camera to perform many simultaneous functionalities dynamically, in fast-changing environments, without relying on wearable devices, tags, or external motion capture. Inspired by animal perception and sensory embodiment, our approach enables an agile camera-equipped aerial robot to perceive its surroundings in real time and carry out tasks based on a myriad of visual inputs, known as exteroceptive stimuli, integrated with proprioceptive feedback about the robot state or ego motion. Our tight integration of perception and control results in a perception-in-action paradigm that allows different people to interact with the robot using only natural language and hand gestures, as both robot and people move in unknown environments populated with people, vehicles, and animals, subject to variable winds and natural or artificial illumination.