Human-centered approaches in assistive robotics

Date:  11/03/2022

Speaker: Maru Cabrera

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: There is almost a symbiotic relationship between designing useful collaborative robots, developing methods for effective interactions between humans and robots, and configuring the environment in which these interactions take place. In this talk I aim to cover the general topic of interaction methods using human expression and context, and their potential applications in assistive robotics; the two domains I will elaborate on are surgical applications and service robots at home. I will present some of my work with assistive robotic platforms and applications with different levels of autonomy, considering both the users and the tasks at hand. I will showcase algorithms and technologies that leverage human context to adjust the way a robot executes a handover task. I will also address how this line of research contributes to the HRI field in general, and to the broader goals of the AI community.

Bio: Maru Cabrera is an Assistant Professor in the Rich Miner School of Computer and Information Sciences at UMass Lowell. Before that, she was a postdoctoral researcher at the University of Washington, working with Maya Cakmak in the Human-Centered Robotics Lab. She received her PhD from Purdue University, advised by Juan P. Wachs. Her research aims to develop robotic systems that work alongside humans, collaborating in tasks performed in home environments; these systems explore different levels of robot autonomy and multiple ways for human interaction in less structured environments, with an emphasis on inclusive design to assist people with disabilities or older adults aging in place. This approach draws on an interdisciplinary intersection of robotics, artificial intelligence, machine learning, computer vision, assistive technologies, and human-centered design.

Representations in Robot Manipulation: Learning to Manipulate Cables, Fabrics, Bags, and Liquids

Date:  10/20/2022

Speaker:  Daniel Seita

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

The robotics community has seen significant progress in applying machine learning for robot manipulation. However, much manipulation research focuses on rigid objects rather than highly deformable objects such as ropes, fabrics, bags, and liquids, which pose challenges due to their complex configuration spaces, dynamics, and self-occlusions. To achieve greater progress in robot manipulation of such diverse deformable objects, I advocate for an increased focus on learning and developing appropriate representations for robot manipulation. In this talk, I will show how novel action-centric representations can lead to better imitation learning for manipulation of diverse deformable objects, and how such representations can be learned from color images, depth images, or point cloud observations. My research demonstrates how novel representations can open an exciting new era for 3D robot manipulation of complex objects.
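
The abstract does not specify an architecture, but one way to make "action-centric representations for imitation learning" concrete is a fully convolutional network that scores every pixel of a depth image as a candidate pick point and is trained on demonstrated picks. The sketch below is an illustrative assumption, not the speaker's model; the network, image size, and data are hypothetical.

```python
# Illustrative sketch only: a per-pixel pick-point classifier trained by imitation.
import torch
import torch.nn as nn

class PickPointNet(nn.Module):
    """Fully convolutional net: one pick-point logit per pixel of a depth image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),              # one logit per pixel
        )

    def forward(self, depth):                 # depth: (B, 1, H, W)
        return self.net(depth)                # logits: (B, 1, H, W)

model = PickPointNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical batch: depth images plus demonstrated pick pixels (row, col).
depth = torch.randn(8, 1, 64, 64)
pick_rc = torch.randint(0, 64, (8, 2))
targets = pick_rc[:, 0] * 64 + pick_rc[:, 1]  # flatten (row, col) into a class index

logits = model(depth).flatten(1)              # (B, H*W): a distribution over pick pixels
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()
opt.step()
print("imitation loss on one batch:", float(loss))
```

Treating the action as a spatial map over the observation, rather than a low-dimensional pose regressed from a global feature, is one commonly cited reason such representations generalize across object configurations.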

 

Bio:  

Daniel Seita is a postdoctoral researcher at Carnegie Mellon University, advised by David Held. His research interests lie in machine learning for robot manipulation, with a focus on developing novel observation and action representations to improve manipulation of challenging deformable objects. Daniel holds a PhD in computer science from the University of California, Berkeley, advised by John Canny and Ken Goldberg. He received his B.A. in math and computer science from Williams College. Daniel’s research has been supported by a six-year Graduate Fellowship for STEM Diversity and by a two-year Berkeley Fellowship. He is the recipient of an Honorable Mention for Best Paper award at UAI 2017 and the 2019 Eugene L. Lawler Prize from the Berkeley EECS department, and was selected as an RSS 2022 Pioneer.

Learning Preferences for Interactive Autonomy

Date: 9/22/2022

Speaker: Erdem Biyik

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

In human-robot interaction, or more generally in multi-agent systems, we often have decentralized agents that need to perform a task together. In such settings, it is crucial to be able to anticipate the actions of other agents; without this ability, the agents are often doomed to perform very poorly. Humans are usually good at this, largely because we can form good estimates of what other agents are trying to do. We want to give robots this ability through reward learning and partner modeling. In this talk, I will describe active learning approaches to this problem and how we can leverage preference data to learn objectives. I will show how preferences can help reward learning in settings where demonstration data may fail, and how partner modeling enables decentralized agents to cooperate efficiently.
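
As a concrete illustration of learning objectives from preference data, the sketch below fits a reward model from pairwise comparisons using a Bradley-Terry style logistic model, a standard formulation in preference-based reward learning. It is a minimal toy example with a linear reward and simulated answers; the feature dimensions and data are assumptions, not the speaker's implementation.

```python
# Toy sketch: learning a linear reward from pairwise trajectory preferences.
import torch

torch.manual_seed(0)

# Hypothetical data: each trajectory is summarized by a feature vector, and each
# query records whether trajectory A was preferred over trajectory B.
n_pairs, n_features = 200, 4
true_w = torch.tensor([1.0, -0.5, 2.0, 0.0])        # "ground-truth" reward weights
feats_a = torch.randn(n_pairs, n_features)
feats_b = torch.randn(n_pairs, n_features)
# Simulated human answers: prefer A with probability sigmoid(R(A) - R(B)).
prefer_a = torch.bernoulli(torch.sigmoid(feats_a @ true_w - feats_b @ true_w))

w = torch.zeros(n_features, requires_grad=True)      # learned reward weights
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    logits = feats_a @ w - feats_b @ w               # predicted reward difference
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)
    loss.backward()
    opt.step()

print("recovered reward weights:", w.detach())
```

An active-learning variant would additionally choose which pair (A, B) to ask about next, for example the pair whose answer the current reward model is most uncertain about, rather than sampling queries at random.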

 

Bio: 

Erdem Bıyık is a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. He received his B.Sc. from Bilkent University, Turkey, in 2017 and his Ph.D. from Stanford University in 2022. His research interests lie at the intersection of robotics, artificial intelligence, machine learning, and game theory. He is interested in enabling robots to actively learn from various forms of human feedback and in designing robot policies that improve the efficiency of multi-agent systems in both cooperative and competitive settings. He was a research intern at Google in 2021, where he adapted his active robot learning algorithms to recommender systems. He will join the University of Southern California as an assistant professor in 2023.

Learning to Address Novel Situations Through Human-Robot Collaboration

Date: 9/15/2022

Speaker: Tesca Fitzgerald

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

As our expectations for robots’ adaptive capacities grow, it will be increasingly important for them to reason about the novel objects, tasks, and interactions inherent to everyday life. Rather than attempt to pre-train a robot for all potential task variations it may encounter, we can develop more capable and robust robots by assuming they will inevitably encounter situations that they are initially unprepared to address. My work enables a robot to address these novel situations by learning from a human teacher’s domain knowledge of the task, such as the contextual use of an object or tool. Meeting this challenge requires robots to be flexible not only to novelty, but to different forms of novelty and their varying effects on the robot’s task completion. In this talk, I will focus on (1) the implications of novelty, and its various causes, on the robot’s learning goals, (2) methods for structuring its interaction with the human teacher in order to meet those learning goals, and (3) modeling and learning from interaction-derived training data to address novelty. 

 

Bio: 

Dr. Tesca Fitzgerald is an Assistant Professor in the Department of Computer Science at Yale University. Her research centers on interactive robot learning, with the aim of developing robots that are adaptive, robust, and collaborative when faced with novel situations. Before joining Yale, Dr. Fitzgerald was a Postdoctoral Fellow at Carnegie Mellon University, received her PhD in Computer Science at Georgia Tech, and completed her B.Sc. at Portland State University. She is an NSF Graduate Research Fellow (2014), Microsoft Graduate Women Scholar (2014), and IBM Ph.D. Fellow (2017).

Small Multi-Robot Agriculture System (SMRAS)

Date: 9/8/2022

Speaker: Petar Durdevic

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

Agriculture is an important part of our society, enabling us to produce enough food to feed an ever-growing population. One challenge is that some farming tasks are very labor intensive, and in some parts of the world this labor force is becoming scarce. Our mission is to develop robots that can fill the growing gap left by labor shortages in farming. In addition, we focus on weeding, with the goal of reducing the use of pesticides. Our focus is on visual control of robots, as cameras offer high information density relative to their cost. In this talk I will introduce the project and discuss the design of the robot. Since visual control is a big part of our system, I will also discuss the integration of deep learning and control techniques, and analyze the effect of timing interactions between controllers and deep neural networks.
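
To make the point about timing interactions concrete, here is a toy simulation (an illustrative assumption, not the SMRAS code) of a one-dimensional visual servoing loop in which the position estimate comes from a detector that lags the true state by a fixed number of control steps. Increasing the latency degrades tracking even though the controller itself is unchanged; the gains, time step, and latencies are made up for illustration.

```python
# Toy sketch: effect of perception latency on a proportional visual servoing loop.
import numpy as np

def simulate(latency_steps, kp=0.8, dt=0.05, n_steps=200):
    pos, target = 0.0, 1.0
    history = [pos]                       # past positions; the detector reports an old one
    for _ in range(n_steps):
        delayed_idx = max(0, len(history) - 1 - latency_steps)
        measured = history[delayed_idx]   # detection delayed by `latency_steps` steps
        vel_cmd = kp * (target - measured)
        pos = pos + vel_cmd * dt          # simple integrator plant
        history.append(pos)
    return np.array(history)

for latency in (0, 5, 20):
    traj = simulate(latency)
    print(f"latency={latency:2d} steps -> final tracking error {abs(1.0 - traj[-1]):.4f}")
```

Reducing the detector's inference time, or compensating for a known delay inside the controller, are the kinds of trade-offs such a timing analysis is meant to expose.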

 

Bio: 

Petar Durdevic is an Associate Professor in the Energy Department at Aalborg University, Denmark, and has investigated the application of advanced control, deep learning, and reinforcement learning in control systems since his PhD studies. He has developed several robotic systems with visual servo navigation. For the past five years he has worked extensively on inspection and condition monitoring of offshore energy systems, focusing predominantly on wind turbines. He set up the Robotics Lab at AAU-E and leads the Offshore Drones and Robots research group. He is also a board member of the research center Aalborg Robotics at AAU, which promotes research, innovation, education, and dissemination within robotics.