Structuring learning for real robots

Date:  2/16/2023

Speaker:  Georgia Chalvatzaki

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: We strive to enable robots to operate in real-world unstructured environments. Robot learning holds the promise of endowing robots with generalizable skills. Nevertheless, current approaches mainly overfit to specific task (and reward) specifications. We show that by exploiting the structure of robotics problems, we can scale robotic performance and introduce algorithmic advances that show promising evidence for further research. In this talk, I will present four recent works in which we couple learning with classical methods in perception, planning, and control, and showcase a wide range of applications that could enable broader scalability of complex robotic systems, such as mobile manipulation robots that learn without supervision and act safely even around humans.

Bio: Georgia Chalvatzaki is an Assistant Professor and research leader of the Intelligent Robotic Systems for Assistance (iROSA) group at TU Darmstadt, Germany. She received her Diploma and Ph.D. in Electrical and Computer Engineering from the National Technical University of Athens, Greece. Her research interests lie at the intersection of classical robotics and machine learning, with the goal of developing behaviors that enable mobile manipulator robots to solve complex tasks in domestic environments with humans in the loop of the interaction process. She holds an Emmy Noether grant for AI Methods from the German Research Foundation (DFG). She is co-chair of the IEEE RAS Technical Committee on Mobile Manipulation, co-chair of the IEEE RAS Women in Engineering committee, and was voted an "AI-Newcomer" for 2021 by the German Informatics Society.

 

Exploring Context for Better Generalization in Reinforcement Learning

Date:  2/2/2023

Speaker:  Amy Zhang

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The benefit of multi-task learning over single-task learning relies on the ability to use relations across tasks to improve performance on any single task. While sharing representations is an important mechanism for transferring information across tasks, its success depends on how well the structure underlying the tasks is captured. In some real-world situations, we have access to metadata, or additional information about a task, that may not provide any new insight in a single-task setup alone but does inform relations across multiple tasks. While this metadata can be useful for improving multi-task learning performance, effectively incorporating it can be an additional challenge. In this talk, we explore various ways to utilize context to improve positive transfer in multi-task and goal-conditioned reinforcement learning.
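As a rough, illustrative sketch of the core idea in the abstract (not code from the talk), the snippet below shows one common way to use task context: a single policy shares its weights across tasks while conditioning on a per-task context vector, such as an embedding of task metadata. All names, dimensions, and the architecture are hypothetical.

    # Illustrative sketch only: a shared policy conditioned on a per-task context vector.
    import torch
    import torch.nn as nn

    class ContextConditionedPolicy(nn.Module):
        def __init__(self, state_dim: int, context_dim: int, action_dim: int, hidden: int = 128):
            super().__init__()
            # Encode task metadata (here just a feature vector) into a task embedding.
            self.context_encoder = nn.Sequential(
                nn.Linear(context_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
            )
            # The policy head consumes the state concatenated with the task embedding.
            self.policy = nn.Sequential(
                nn.Linear(state_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, action_dim)
            )

        def forward(self, state: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
            z = self.context_encoder(context)                    # per-task embedding
            return self.policy(torch.cat([state, z], dim=-1))    # context-aware action logits

    # Usage: two tasks share the same policy weights but receive different context vectors.
    policy = ContextConditionedPolicy(state_dim=8, context_dim=4, action_dim=3)
    state = torch.randn(2, 8)      # batch of states, one from each task
    context = torch.randn(2, 4)    # hypothetical per-task metadata features
    action_logits = policy(state, context)

The design choice being illustrated is that the context embedding, rather than a separate network per task, carries the cross-task structure, so information learned on one task can transfer to related tasks.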

Bio: I am an assistant professor at UT Austin in the Chandra Family Department of Electrical and Computer Engineering. My work focuses on improving generalization in reinforcement learning by bridging theory and practice in learning and utilizing structure in real-world problems. Previously, I was a research scientist at Meta AI and a postdoctoral fellow at UC Berkeley. I obtained my PhD from McGill University and the Mila Institute in 2021, and previously obtained an M.Eng. in EECS and dual B.Sci. degrees in Mathematics and EECS from MIT.

Website: https://amyzhang.github.io/