Exploring Context for Better Generalization in Reinforcement Learning

Date:  2/2/2023

Speaker:  Amy Zhang

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The benefit of multi-task learning over single-task learning relies on the ability to use relations across tasks to improve performance on any single task. While sharing representations is an important mechanism to share information across tasks, its success depends on how well the structure underlying the tasks is captured. In some real-world situations, we have access to metadata, or additional information about a task, that may not provide any new insight in the context of a single task setup alone but inform relations across multiple tasks. While this metadata can be useful for improving multi-task learning performance, effectively incorporating it can be an additional challenge. In this talk, we explore various ways to utilize context to improve positive transfer in multi-task and goal-conditioned reinforcement learning.

Bio: I am an assistant professor at UT Austin in the Chandra Family Department of Electrical and Computer Engineering. My work focuses on improving generalization in reinforcement learning by bridging theory and practice in learning and utilizing structure in real-world problems. Previously I was a research scientist at Meta AI and a postdoctoral fellow at UC Berkeley. I obtained my PhD from McGill University and the Mila Institute in 2021, and previously obtained an M.Eng. in EECS and dual B.Sci. degrees in Mathematics and EECS from MIT.

Website: https://amyzhang.github.io/