Representations in Robot Manipulation: Learning to Manipulate Cables, Fabrics, Bags, and Liquids

Date: 10/20/2022

Speaker: Daniel Seita

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

The robotics community has seen significant progress in applying machine learning to robot manipulation. However, much of this research focuses on rigid objects rather than highly deformable objects such as ropes, fabrics, bags, and liquids, which pose challenges due to their complex configuration spaces, dynamics, and self-occlusions. To make greater progress in robot manipulation of such diverse deformable objects, I advocate an increased focus on learning and developing appropriate representations for robot manipulation. In this talk, I will show how novel action-centric representations lead to better imitation learning for manipulating diverse deformable objects, and how such representations can be learned from color images, depth images, or point cloud observations. My research demonstrates how novel representations can open an exciting new era for 3D robot manipulation of complex objects.


Bio:

Daniel Seita is a postdoctoral researcher at Carnegie Mellon University, advised by David Held. His research interests lie in machine learning for robot manipulation, with a focus on developing novel observation and action representations to improve manipulation of challenging deformable objects. Daniel holds a PhD in computer science from the University of California, Berkeley, where he was advised by John Canny and Ken Goldberg, and a B.A. in math and computer science from Williams College. His research has been supported by a six-year fellowship from Graduate Fellowships for STEM Diversity and by a two-year Berkeley Fellowship. He received the Honorable Mention for Best Paper award at UAI 2017 and the 2019 Eugene L. Lawler Prize from the Berkeley EECS department, and was selected as an RSS 2022 Pioneer.