Speaker: Mustafa Mukadam
Location: 122 Gates Hall and Zoom
Time: 2:40 p.m.-3:30 p.m.
Robot perception sits at a unique crossroads between computer vision and robotics. Robot sensing is egocentric and temporal, and often arises from contact-rich interactions. Fusing these signals into representations that enable downstream tasks in real time is a challenge. In this talk, I will cover some of our recent work on building signed distance fields, human pose and shape estimation, and tactile estimation, which together provide a recipe for approaching perception problems through a robotics lens by making optimization and prior models compatible with deep learning.
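For context, a signed distance field maps each point in space to its distance from the nearest surface, with sign indicating inside versus outside. The following is a minimal generic sketch for an analytic sphere (not the learned or fused representations discussed in the talk); `sphere_sdf` and its parameters are illustrative names:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each 3-D point to a sphere's surface.

    Negative inside the sphere, zero on the surface, positive outside.
    """
    points = np.asarray(points, dtype=float)
    return np.linalg.norm(points - center, axis=-1) - radius

# Query a few points against a unit sphere at the origin.
pts = np.array([[0.0, 0.0, 0.0],   # at the center
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
print(sphere_sdf(pts, center=np.zeros(3), radius=1.0))  # → [-1.  0.  1.]
```

In robotics, such fields are typically built from sensor data rather than analytic shapes, and the distance-to-surface queries support downstream tasks like collision checking and motion planning.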
Mustafa Mukadam is a Research Scientist at Meta AI. His work focuses on fundamental and applied research in robotics and machine learning, and on structured techniques at their intersection toward practical robot learning. Specifically, his research spans problems from perception to planning for navigation and manipulation. He received a Ph.D. from Georgia Tech, where he was part of the Robot Learning Lab and the Institute for Robotics and Intelligent Machines. His work has been covered by media outlets including GeekWire, VentureBeat, and TechCrunch, and his work on motion planning received the 2018 IJRR paper of the year award.