Aligning Robot Representations with Humans

Date:  12/1/2022

Speaker:  Andreea Bobu

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:  Robots deployed in the real world will interact with many different humans to perform many different tasks in their lifetime, which makes it difficult (perhaps even impossible) for designers to specify all the aspects that might matter ahead of time. Instead, robots can extract these aspects implicitly when they learn to perform new tasks from their users’ input. The challenge is that this often results in representations that pick up on spurious correlations in the data and fail to capture the human’s representation of what matters for the task, resulting in behaviors that do not generalize to new scenarios. Consequently, the representation, or abstraction, of the tasks the human hopes for the robot to perform may be misaligned with what the robot knows. In my work, I explore ways in which robots can align their representations with those of the humans they interact with so that they can more effectively learn from their input. In this talk, I focus on a divide-and-conquer approach to the robot learning problem: explicitly focus human input on teaching robots good representations before using them for learning downstream tasks. We accomplish this by investigating how robots can reason about the uncertainty in their current representation, explicitly query humans for feature-specific feedback to improve it, and then use task-specific input to learn behaviors on top of the new representation.
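
To make the uncertainty-driven querying idea concrete, here is a minimal, self-contained sketch (all function names, model choices, and thresholds are my own illustrative assumptions, not the speaker's implementation): an ensemble of simple regressors stands in for the robot's learned feature, ensemble disagreement stands in for representation uncertainty, and the robot asks the human to label the state where that disagreement is largest before refitting.

```python
# Minimal sketch (not the speaker's implementation): a robot keeps an
# ensemble of candidate feature functions, measures their disagreement on
# new states, and asks the human for feature-specific labels only where
# the ensemble is uncertain. All names and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble(X, y, n_models=5):
    """Fit several ridge-regression feature estimators on bootstrapped data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        Xb, yb = X[idx], y[idx]
        w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(X.shape[1]), Xb.T @ yb)
        models.append(w)
    return models

def feature_uncertainty(models, x):
    """Disagreement across the ensemble acts as a proxy for representation uncertainty."""
    preds = np.array([x @ w for w in models])
    return preds.std()

# Toy data: robot states and a hidden human feature (e.g., "distance to table").
X = rng.normal(size=(20, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ true_w

models = make_ensemble(X, y)
candidate_states = rng.normal(size=(100, 4))
scores = np.array([feature_uncertainty(models, x) for x in candidate_states])

# Query the human about the most uncertain state, then add the label and refit.
query = candidate_states[np.argmax(scores)]
human_label = query @ true_w          # stand-in for a real human response
X = np.vstack([X, query])
y = np.append(y, human_label)
models = make_ensemble(X, y)
```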

Bio: Andreea Bobu is a Ph.D. candidate at the University of California, Berkeley, in the Electrical Engineering and Computer Science Department, advised by Professor Anca Dragan. Her research focuses on aligning robot and human representations for more seamless interaction between them. In particular, Andreea studies how robots can learn more efficiently from human feedback by explicitly focusing on learning good intermediate human-guided representations before using them for task learning. Prior to her Ph.D., she earned her Bachelor’s degree in Computer Science and Engineering from MIT in 2017. She is the recipient of the Apple AI/ML Ph.D. Fellowship, is an R:SS and HRI Pioneer, won the Best Paper Award at HRI 2020, and has worked at NVIDIA Research.

Website: https://people.eecs.berkeley.edu/~abobu/

Robust perception algorithms for fast and agile navigation

Date:  11/17/2022

Speaker:   Varun Murali

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: In this talk we explore algorithms for robust visual navigation at operational speeds. Visual-inertial navigation at operational speeds is a challenging problem for robotic vehicles. With camera and inertial measurement unit (IMU) pairings ubiquitous in consumer electronics, the two form an ideal sensor suite for applications on the edge, with uses ranging from large-scale search and rescue and autonomous driving to home robots such as robotic vacuum cleaners. In general, the navigation problem for robots can be written in the form of the sense-think-act framework for autonomy. The “sensing” part is typically performed in this context as bearing measurements to visually salient locations in the environment; the “thinking,” or planning, part then uses the estimate of the ego-state from the sensors and produces a (compactly represented) trajectory from the current location to the goal. Finally, the “acting” part, the controller, follows the plan. This division leaves several interesting problems at the intersections of the parts of the framework. For instance, consider the problem of navigating in a relatively unknown environment: if future percepts are not carefully planned, the robot may enter a room with very few visual features, which degrades the quality of state estimation and in turn can result in poor closed-loop performance. To this end, we explore the joint problem of perception and planning in a unified setting and show that this framework results in robust trajectory tracking.
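
As a rough illustration of coupling perception and planning, the toy sketch below scores candidate trajectories by a weighted combination of path length and how many visually salient landmarks they are expected to observe, so that feature-poor routes are penalized. The cost terms, weights, and visibility model are my own assumptions for illustration, not the speaker's formulation.

```python
# Illustrative sketch (a toy formulation, not the speaker's system):
# score candidate trajectories by a weighted sum of path length and a
# perception term that penalizes passing through feature-poor regions.
import numpy as np

rng = np.random.default_rng(1)
landmarks = rng.uniform(-5, 5, size=(50, 2))   # visually salient points

def path_length(traj):
    return np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

def visible_features(traj, radius=2.0):
    """Count landmark observations along the trajectory (a crude visibility model)."""
    dists = np.linalg.norm(traj[:, None, :] - landmarks[None, :, :], axis=-1)
    return np.sum(dists < radius)

def perception_aware_cost(traj, w_percep=0.1):
    # Fewer expected features -> worse state estimate -> higher cost.
    return path_length(traj) - w_percep * visible_features(traj)

# Two candidate paths from start to goal: a straight line and a sinusoidal detour.
start, goal = np.array([-4.0, -4.0]), np.array([4.0, 4.0])
t = np.linspace(0, 1, 30)[:, None]
direct = start + t * (goal - start)
detour = direct + np.stack([np.sin(np.pi * t[:, 0]), np.zeros(30)], axis=1) * 2.0

best = min([direct, detour], key=perception_aware_cost)
```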

Bio: Varun is currently a PhD candidate at MIT working on decision making under uncertainty for agile navigation. Previously, he was a Computer Scientist with the Computer Vision Technology group at SRI International in Princeton, New Jersey, USA working on GPS denied localization algorithms using low cost sensors. Varun received his bachelor’s degree in Electronics and Communications Engineering from the University of Kent at Canterbury, UK. He also received master’s degrees in Electrical and Computer Engineering and Computer Science with a specialization in computational perception and robotics from the Georgia Institute of Technology. He has also held positions at Dynamic Load Monitoring, Southampton, UK and BMW, Munich, Germany. He enjoys research roles and has been involved in different areas of research in robotics and computer vision, including work on joint perception and planning, semantic localization, guaranteed safe navigation and smooth control for wearable robotics.

Intuitive Robot Shared-Control Interfaces via Real-time Motion Planning and Optimization

Date:  11/10/2022

Speaker:   Daniel Rakita

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: My research focuses on making robots intuitive to control and work alongside for as many people as possible, specifically in areas where people are understaffed or overworked such as nursing, homecare, and manufacturing. In this talk, I will overview numerous robot shared-control interfaces I have developed to be intuitive and easy-to-use, even for novice users, by blending users’ inputs with robot autonomy on-the-fly. I will highlight novel motion planning and motion optimization methods that enable these interfaces by quickly synthesizing smooth, feasible, and safe motions that effectively reflect objectives specified by the user and robot autonomy signals in real-time. I will comment on my ongoing and future work that will push the potential of these technical methods and physical robot systems, all striving towards broad and motivating applications such as remote homecare, tele-nursing, and assistive technologies.
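
A minimal sketch of on-the-fly blending of user input and autonomy, under my own assumptions about the objective terms and weights (this is not the speaker's solver): each timestep solves a small optimization that trades off following the user's commanded motion, pulling toward an autonomy goal, and keeping the motion smooth.

```python
# Shared-control sketch (illustrative only): pick the next configuration by
# minimizing a weighted sum of (a) agreement with the user's commanded motion,
# (b) an autonomy objective, and (c) smoothness relative to the previous state.
import numpy as np
from scipy.optimize import minimize

def blended_objective(q_next, q_prev, user_delta, autonomy_goal,
                      w_user=1.0, w_auto=0.5, w_smooth=0.2):
    user_term = np.sum((q_next - (q_prev + user_delta)) ** 2)   # follow the user
    auto_term = np.sum((q_next - autonomy_goal) ** 2)           # pull toward autonomy
    smooth_term = np.sum((q_next - q_prev) ** 2)                # avoid jerky motion
    return w_user * user_term + w_auto * auto_term + w_smooth * smooth_term

q_prev = np.zeros(6)                      # current joint configuration (toy 6-DOF arm)
user_delta = np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.0])
autonomy_goal = np.array([0.2, -0.1, 0.1, 0.0, 0.0, 0.0])

res = minimize(blended_objective, x0=q_prev,
               args=(q_prev, user_delta, autonomy_goal))
q_next = res.x                            # the blended motion target for this timestep
```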

Bio:  Daniel Rakita is an Assistant Professor in the Department of Computer Science at Yale University. His research involves creating motion optimization and planning approaches that allow robot manipulators to move smoothly, safely, and accurately in real-time. Using these motion algorithms as core components, he subsequently develops and evaluates robot systems and interfaces that are intuitive and easy to use, even for novice users. Previously, he received his Ph.D. in Computer Science from the University of Wisconsin-Madison, a Master’s degree in Computer Science from the University of Wisconsin-Madison, and a Bachelor’s degree in Music Performance from the Indiana University Jacobs School of Music. His work has been supported by a Microsoft PhD Fellowship (2019-2021) and a Cisco Graduate Student Fellowship (2021-2022).

Human-centered approaches in assistive robotics

Date:  11/03/2022

Speaker:   Maru Cabrera 

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: There is almost a symbiotic relationship between designing useful collaborative robots, developing methods for effective interactions between humans and robots, and configuring the environment in which these interactions take place. In this talk I aim to cover the general topic of interaction methods using human expression and context, and their potential applications in assistive robotics; the two domains I will elaborate on are surgical applications and service robots at home. I will present some of my work with assistive robotic platforms and applications with different levels of autonomy, considering both the users and the tasks at hand. I will showcase algorithms and technologies that leverage human context to adjust the way a robot executes a handover task. I will also address how this line of research contributes to the HRI field in general and to the broader goals of the AI community.

Bio:  Maru Cabrera is an Assistant Professor in the Rich Miner School of Computer and Information Sciences at UMass Lowell. Before that, she was a postdoctoral researcher at the University of Washington working with Maya Cakmak in the Human-Centered Robotics Lab. She received her PhD from Purdue University, advised by Juan P. Wachs. Her research aims to develop robotic systems that work alongside humans, collaborating on tasks performed in home environments; these systems explore different levels of robot autonomy and multiple modes of human interaction in less structured environments, with an emphasis on inclusive design to assist people with disabilities or older adults aging in place. This approach draws from an interdisciplinary intersection of robotics, artificial intelligence, machine learning, computer vision, assistive technologies, and human-centered design.

From One to Another: Sequential Human Interaction with Multiple Robots

Date:  10/27/2022

Speaker:  Xiang Zhi Tan

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:   As more robots are deployed in the world, human-robot interaction will not be limited to one-to-one interactions between users and robots. Instead, users will likely have to interact with multiple robots and other embodied intelligences, simultaneously or sequentially, throughout their day to receive services and complete different tasks. In this talk, I will describe work, in collaboration with my colleagues, that broadens the knowledge on a crucial aspect of multi-robot human interaction: person transfer, or the act of transferring users between multiple service robots. We first investigated rationales for transfer and important aspects of transferring users. We then explored how person transfers should be designed and implemented in laboratory and field settings. We used a combination of design, behavioral, and technical methods to increase our understanding of this crucial phase and inform developers and designers about appropriate robot behaviors when a human is being transferred from one robot to another.

Bio:  Xiang Zhi Tan, PhD, is a postdoctoral fellow working with Prof. Sonia Chernova in the Robot Autonomy and Interactive Learning (RAIL) Lab at Georgia Institute of Technology. His research focuses on designing algorithms and deploying robotic systems to facilitate a better understanding of how multiple robots can seamlessly interact with people. He received his PhD in Robotics in 2021 from Carnegie Mellon University’s Robotics Institute, where he was advised by Prof. Aaron Steinfeld. He holds a Bachelor of Science degree from University of Wisconsin-Madison and a Master of Science degree from Carnegie Mellon University. Outside of research, he has been trying to figure out how to be ambidextrous.

Website: zhi.fyi

Representations in Robot Manipulation: Learning to Manipulate Cables, Fabrics, Bags, and Liquids

Date:  10/20/2022

Speaker:  Daniel Seita

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:

The robotics community has seen significant progress in applying machine learning for robot manipulation. However, much manipulation research focuses on rigid objects instead of highly deformable objects such as ropes, fabrics, bags, and liquids, which pose challenges due to their complex configuration spaces, dynamics, and self-occlusions. To achieve greater progress in robot manipulation of such diverse deformable objects, I advocate for an increased focus on learning and developing appropriate representations for robot manipulation. In this talk, I show how novel action-centric representations can lead to better imitation learning for manipulation of diverse deformable objects. I will show how such representations can be learned from color images, depth images, or point cloud observational data. My research demonstrates how novel representations can lead to an exciting new era for 3D robot manipulation of complex objects.
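
As one hedged illustration of an action-centric representation (the architecture and names are mine, not the speaker's models): a small fully convolutional network maps a depth image to a per-pixel pick-score map, and the action is simply the highest-scoring pixel; with demonstrations, the network would be trained to score the demonstrated pick pixels highly.

```python
# Illustrative sketch (assumptions mine): an action-centric representation for
# pick-point selection, where a fully convolutional network maps a depth image
# to a per-pixel score map and the action is the argmax pixel.
import torch
import torch.nn as nn

class PickScoreNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # one score per pixel
        )

    def forward(self, depth):                # depth: (B, 1, H, W)
        return self.net(depth)               # scores: (B, 1, H, W)

net = PickScoreNet()
depth = torch.rand(1, 1, 64, 64)             # toy depth image of a crumpled fabric
scores = net(depth)

# The action is the highest-scoring pixel (row, col), later mapped to a 3D pick point.
flat_idx = scores.reshape(-1).argmax()
row, col = divmod(flat_idx.item(), 64)
```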

 

Bio:  

Daniel Seita is a postdoctoral researcher at Carnegie Mellon University advised by David Held. His research interests lie in machine learning for robot manipulation, with a focus on developing novel observation and action representations to improve manipulation of challenging deformable objects. Daniel holds a PhD in computer science from the University of California, Berkeley, advised by John Canny and Ken Goldberg. He received his B.A. in math and computer science from Williams College. Daniel’s research has been supported by a six-year Graduate Fellowship for STEM Diversity and by a two-year Berkeley Fellowship. He is the recipient of the Honorable Mention for Best Paper award at UAI 2017 and the 2019 Eugene L. Lawler Prize from the Berkeley EECS department, and was selected as an RSS 2022 Pioneer.

Towards Robust Human-Robot Interaction: A Quality Diversity Approach

Date:  10/13/2022

Speaker: Stefanos Nikolaidis

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:  The growth of scale and complexity of interactions between humans and robots highlights the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring the diverse scenarios of interaction between humans and robots in simulation can improve understanding of complex human-robot interaction systems and avoid potentially costly failures in real-world settings.

In this talk, I propose formulating the problem of automatic scenario generation in human-robot interaction as a quality diversity problem, where the goal is not to find a single global optimum, but a diverse range of failure scenarios that explore both environments and human actions. I show how standard quality diversity algorithms can discover surprising and unexpected failure cases in the shared autonomy domain. I then discuss the development of a new class of quality diversity algorithms that significantly improve the search of the scenario space and the integration of these algorithms with generative models, which enables the generation of complex and realistic scenarios. Finally, I discuss applications in procedural content generation and human preference learning.
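
For readers unfamiliar with quality diversity, the toy MAP-Elites-style loop below shows the basic mechanics under my own simplified assumptions (the simulator, descriptors, and objective are placeholders, not the lab's algorithms): scenarios are parameter vectors, the objective rewards scenarios in which the robot fails, and an archive keeps the highest-failure scenario per cell of a behavior-descriptor grid, yielding a diverse set of failures rather than a single worst case.

```python
# Toy MAP-Elites loop (a minimal sketch, not the lab's algorithms): the archive
# keeps the worst-performing scenario in each cell of a behavior-descriptor grid.
import numpy as np

rng = np.random.default_rng(2)

def simulate(scenario):
    """Stand-in simulator: returns (failure_score, behavior_descriptors)."""
    failure = np.sin(3 * scenario[0]) + scenario[1] ** 2      # higher means worse for the robot
    descriptors = (scenario[0], scenario[1])                  # e.g., clutter level, human speed
    return failure, descriptors

def cell(descriptors, bins=10):
    return tuple(np.clip((np.array(descriptors) * bins).astype(int), 0, bins - 1))

archive = {}                                                  # cell -> (failure, scenario)
for _ in range(2000):
    if archive and rng.random() < 0.5:
        # Mutate an elite scenario from a random occupied cell.
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        scenario = np.clip(parent + rng.normal(0, 0.05, size=2), 0, 1)
    else:
        scenario = rng.uniform(0, 1, size=2)                  # random new scenario
    failure, desc = simulate(scenario)
    key = cell(desc)
    if key not in archive or failure > archive[key][0]:
        archive[key] = (failure, scenario)

# The archive now holds a diverse set of high-failure scenarios across the grid.
```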

Bio: Stefanos Nikolaidis is an Assistant Professor in Computer Science and the Fluor Early Career Chair at the University of Southern California, where he leads the Interactive and Collaborative Autonomous Robotics Systems (ICAROS) lab. His research draws upon expertise in artificial intelligence, procedural content generation, and quality diversity optimization and leads to end-to-end solutions that enable deployed robotic systems to act robustly when interacting with people in practical, real-world applications. Stefanos completed his PhD at Carnegie Mellon’s Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo, and a BS from the National Technical University of Athens. In 2022, Stefanos was the sole recipient of the Agilent Early Career Professor Award for his work on human-robot collaboration, as well as the recipient of an NSF CAREER award for his work on “Enhancing the Robustness of Human-Robot Interactions via Automatic Scenario Generation.” His research has also been recognized with an oral presentation at the Conference on Neural Information Processing Systems and with best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.

 

Connections between Reinforcement Learning and Representation Learning

Date:  10/6/2022

Speaker: Benjamin Eysenbach

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: In reinforcement learning (RL), it is easier to solve a task if given a good representation. Deep RL promises to simultaneously solve an RL problem and a representation learning problem; it promises simpler methods with fewer objective functions and fewer hyperparameters. However, prior work often finds that these end-to-end approaches tend to be unstable, and instead addresses the representation learning problem with additional machinery (e.g., auxiliary losses, data augmentation). How can we design RL algorithms that directly acquire good representations?

In this talk, I’ll share how we approached this problem in an unusual way: rather than using RL to solve a representation learning problem, we showed how (contrastive) representation learning can be used to solve some RL problems. The key idea will be to treat the value function as a classifier, which distinguishes between good and bad outcomes, similar to how contrastive learning distinguishes between positive and negative examples. By carefully choosing the inputs to a (contrastive) representation learning algorithm, we learn representations that (provably) encode a value function. We use this idea to design a new RL algorithm that is much simpler than prior work while achieving equal or better performance on simulated benchmarks. On the theoretical side, this work uncovers connections between contrastive learning, hindsight relabeling, successor features and reward learning.
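
A minimal sketch of the "value function as classifier" idea, written from my own reading of contrastive RL (network sizes, shapes, and the sampling scheme are illustrative): (state, action) pairs and future states are encoded separately, and an InfoNCE loss trains their inner product to be high when the future state comes from the same trajectory and low otherwise, so the resulting critic behaves like a value estimate.

```python
# Sketch of a contrastive critic (illustrative shapes and networks): the i-th
# future state is the positive example for the i-th (state, action) pair, and
# the other future states in the batch serve as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, action_dim, repr_dim = 8, 2, 16
sa_encoder = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                           nn.Linear(64, repr_dim))
goal_encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                             nn.Linear(64, repr_dim))

def contrastive_critic_loss(states, actions, future_states):
    """InfoNCE over a batch; the logits matrix doubles as the critic's values."""
    phi = sa_encoder(torch.cat([states, actions], dim=-1))     # (B, repr_dim)
    psi = goal_encoder(future_states)                          # (B, repr_dim)
    logits = phi @ psi.T                                       # (B, B) critic values
    labels = torch.arange(len(states))
    return F.cross_entropy(logits, labels)

# Toy batch sampled from replay: (s, a) pairs and future states from the same trajectories.
B = 32
loss = contrastive_critic_loss(torch.randn(B, state_dim),
                               torch.randn(B, action_dim),
                               torch.randn(B, state_dim))
loss.backward()
```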

Bio:  Benjamin Eysenbach is a 5th-year PhD student at Carnegie Mellon University, advised by Ruslan Salakhutdinov and Sergey Levine. His research focuses on algorithms for decision-making (reinforcement learning). Much of this research is about revealing connections between seemingly disparate algorithms and ideas, leading to new algorithms that are typically simpler, carry stronger theoretical guarantees, and work better in practice. Ben is the recipient of the NSF and Hertz graduate fellowships. Prior to his PhD, he was a resident at Google Research and received his B.S. in math from MIT.

Website: http://ben-eysenbach.github.io/

 

Acquiring Motor Skills with Motion Imitation and Reinforcement Learning

Date: 9/29/2022

 Speaker: Xue Bin (Jason) Peng

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:

Humans are capable of performing awe-inspiring feats of agility by drawing from a vast repertoire of diverse and sophisticated motor skills. This dynamism is in sharp contrast to the narrowly specialized and rigid behaviors commonly exhibited by artificial agents in both simulated and real-world domains. How can we create agents that are able to replicate the agility, versatility, and diversity of human motor behaviors? In this talk, we present motion imitation techniques that enable agents to learn large repertoires of highly dynamic and athletic behaviors by mimicking demonstrations. We begin by presenting a motion imitation framework that enables simulated agents to imitate complex behaviors from reference motion clips, ranging from common locomotion skills such as walking and running, to more athletic behaviors such as acrobatics and martial arts. The agents learn to produce robust and life-like behaviors that are nearly indistinguishable in appearance from motions recorded from real-life actors. We then develop adversarial imitation learning techniques that can imitate and compose skills from large motion datasets in order to fulfill high-level task objectives. In addition to developing controllers for simulated agents, our approach can also synthesize controllers for robots operating in the real world. We demonstrate the effectiveness of our approach by developing controllers for a large variety of agile locomotion skills for bipedal and quadrupedal robots.
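
To give a flavor of the motion-imitation objective, here is a toy tracking reward under my own assumed weights and error terms (not the exact formulation from the talk): the simulated character is rewarded for matching the reference clip's joint positions and velocities at each timestep, and reinforcement learning maximizes the sum of these rewards over the episode.

```python
# Illustrative tracking reward in the spirit of motion-imitation methods
# (weights and terms are assumptions): the agent is rewarded for matching the
# reference clip's joint poses and velocities at each timestep.
import numpy as np

def imitation_reward(q_sim, q_ref, qdot_sim, qdot_ref,
                     w_pose=0.7, w_vel=0.3):
    pose_err = np.sum((q_sim - q_ref) ** 2)
    vel_err = np.sum((qdot_sim - qdot_ref) ** 2)
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)

# Example: compare the simulated character's joints against one frame of a mocap clip.
rng = np.random.default_rng(3)
q_ref, qdot_ref = rng.normal(size=12), rng.normal(size=12)
q_sim = q_ref + 0.05 * rng.normal(size=12)       # slightly off the reference
qdot_sim = qdot_ref + 0.1 * rng.normal(size=12)
r = imitation_reward(q_sim, q_ref, qdot_sim, qdot_ref)
```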

Bio: 

Xue Bin (Jason) Peng is an Assistant Professor at Simon Fraser University and a Research Scientist at NVIDIA. He received a Ph.D. from the University of California, Berkeley, supervised by Prof. Sergey Levine and Prof. Pieter Abbeel, and an M.Sc. from the University of British Columbia under the supervision of Michiel van de Panne. His work focuses on developing techniques that enable simulated and real-world agents to reproduce the motor capabilities of humans and other animals. He was the recipient of the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award, the RSS 2020 Best Paper Award, and the SCA 2017 Best Student Paper Award.

 

Learning Preferences for Interactive Autonomy

Date: 9/22/2022

 Speaker:  Erdem Biyik

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:

In human-robot interaction, or more generally in multi-agent systems, we often have decentralized agents that need to perform a task together. In such settings, it is crucial to have the ability to anticipate the actions of other agents; without this ability, agents are often doomed to perform very poorly. Humans are usually good at this, mostly because we maintain good estimates of what other agents are trying to do. We want to give robots the same ability through reward learning and partner modeling. In this talk, I will discuss active learning approaches to this problem and how we can leverage preference data to learn objectives. I will show how preferences can help reward learning in settings where demonstration data may fail, and how partner modeling enables decentralized agents to cooperate efficiently.
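
As a minimal illustration of learning objectives from preference data (a toy Bradley-Terry formulation with my own assumed features and update rule, not the speaker's exact algorithms): each query compares two trajectories' feature vectors, the human's noisy answer is modeled with a logistic likelihood, and gradient ascent on that likelihood recovers the direction of the hidden reward weights.

```python
# Toy preference-based reward learning sketch: a Bradley-Terry model over
# trajectory features, fit by gradient ascent on the preference log-likelihood.
import numpy as np

rng = np.random.default_rng(4)
true_w = np.array([1.0, -0.5, 0.25])          # hidden human reward weights

def preference_prob(w, feat_a, feat_b):
    """P(human prefers trajectory A over B) under a Bradley-Terry model."""
    return 1.0 / (1.0 + np.exp(-(w @ (feat_a - feat_b))))

# Simulated queries: pairs of trajectory feature vectors and noisy human answers.
w = np.zeros(3)
lr = 0.5
for _ in range(200):
    feat_a, feat_b = rng.normal(size=3), rng.normal(size=3)
    answer = rng.random() < preference_prob(true_w, feat_a, feat_b)  # noisy human
    # Gradient ascent on the log-likelihood of the observed preference.
    p = preference_prob(w, feat_a, feat_b)
    grad = (float(answer) - p) * (feat_a - feat_b)
    w += lr * grad

# w now approximates the direction of true_w (up to scale).
```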

 

Bio: 

Erdem Bıyık is a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. He received his B.Sc. degree from Bilkent University, Turkey, in 2017, and his Ph.D. degree from Stanford University in 2022. His research interests lie at the intersection of robotics, artificial intelligence, machine learning, and game theory. He is interested in enabling robots to actively learn from various forms of human feedback and in designing robot policies to improve the efficiency of multi-agent systems in both cooperative and competitive settings. He also worked at Google as a research intern in 2021, where he adapted his active robot learning algorithms to recommender systems. He will join the University of Southern California as an assistant professor in 2023.