Acquiring Motor Skills with Motion Imitation and Reinforcement Learning

Date: 9/29/2022

Speaker: Xue Bin (Jason) Peng

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

Humans are capable of performing awe-inspiring feats of agility by drawing from a vast repertoire of diverse and sophisticated motor skills. This dynamism is in sharp contrast to the narrowly specialized and rigid behaviors commonly exhibited by artificial agents in both simulated and real-world domains. How can we create agents that are able to replicate the agility, versatility, and diversity of human motor behaviors? In this talk, we present motion imitation techniques that enable agents to learn large repertoires of highly dynamic and athletic behaviors by mimicking demonstrations. We begin by presenting a motion imitation framework that enables simulated agents to imitate complex behaviors from reference motion clips, ranging from common locomotion skills such as walking and running, to more athletic behaviors such as acrobatics and martial arts. The agents learn to produce robust and life-like behaviors that are nearly indistinguishable in appearance from motions recorded from real-life actors. We then develop adversarial imitation learning techniques that can imitate and compose skills from large motion datasets in order to fulfill high-level task objectives. In addition to developing controllers for simulated agents, our approach can also synthesize controllers for robots operating in the real world. We demonstrate the effectiveness of our approach by developing controllers for a large variety of agile locomotion skills for bipedal and quadrupedal robots.

Bio: 

Xue Bin (Jason) Peng is an Assistant Professor at Simon Fraser University and a Research Scientist at NVIDIA. He received a Ph.D. from the University of California, Berkeley, supervised by Prof. Sergey Levine and Prof. Pieter Abbeel, and an M.Sc. from the University of British Columbia under the supervision of Prof. Michiel van de Panne. His work focuses on developing techniques that enable simulated and real-world agents to reproduce the motor capabilities of humans and other animals. He is the recipient of the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award, the RSS 2020 Best Paper Award, and the SCA 2017 Best Student Paper Award.


Learning Preferences for Interactive Autonomy

Date: 9/22/2022

Speaker: Erdem Bıyık

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

In human-robot interaction, or more generally in multi-agent systems, we often have decentralized agents that need to perform a task together. In such settings, it is crucial to be able to anticipate the actions of other agents; without this ability, agents often perform very poorly. Humans are usually good at this, largely because we can form good estimates of what other agents are trying to do. We want to give robots this ability through reward learning and partner modeling. In this talk, I will discuss active learning approaches to this problem and how we can leverage preference data to learn objectives. I will show how preferences can help reward learning in settings where demonstration data may fail, and how partner modeling enables decentralized agents to cooperate efficiently.


Bio: 

Erdem Bıyık is a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. He received his B.Sc. degree from Bilkent University, Turkey, in 2017, and his Ph.D. degree from Stanford University in 2022. His research interests lie at the intersection of robotics, artificial intelligence, machine learning, and game theory. He is interested in enabling robots to actively learn from various forms of human feedback and in designing robot policies that improve the efficiency of multi-agent systems in both cooperative and competitive settings. He also worked at Google as a research intern in 2021, where he adapted his active robot learning algorithms to recommender systems. He will join the University of Southern California as an assistant professor in 2023.

Learning to Address Novel Situations Through Human-Robot Collaboration

Date: 9/15/2022

Speaker: Tesca Fitzgerald

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

As our expectations for robots’ adaptive capacities grow, it will be increasingly important for them to reason about the novel objects, tasks, and interactions inherent to everyday life. Rather than attempt to pre-train a robot for all potential task variations it may encounter, we can develop more capable and robust robots by assuming they will inevitably encounter situations that they are initially unprepared to address. My work enables a robot to address these novel situations by learning from a human teacher’s domain knowledge of the task, such as the contextual use of an object or tool. Meeting this challenge requires robots to be flexible not only to novelty, but to different forms of novelty and their varying effects on the robot’s task completion. In this talk, I will focus on (1) the implications of novelty, and its various causes, on the robot’s learning goals, (2) methods for structuring its interaction with the human teacher in order to meet those learning goals, and (3) modeling and learning from interaction-derived training data to address novelty. 


Bio: 

Dr. Tesca Fitzgerald is an Assistant Professor in the Department of Computer Science at Yale University. Her research is centered around interactive robot learning, with the aim of developing robots that are adaptive, robust, and collaborative when faced with novel situations. Before joining Yale, Dr. Fitzgerald was a Postdoctoral Fellow at Carnegie Mellon University, received her Ph.D. in Computer Science at Georgia Tech, and completed her B.Sc. at Portland State University. She is an NSF Graduate Research Fellow (2014), Microsoft Graduate Women Scholar (2014), and IBM Ph.D. Fellow (2017).

Small Multi-Robot Agriculture System (SMRAS)

Date: 9/8/2022

Speaker: Petar Durdevic

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

Agriculture is an important part of our society, enabling us to produce enough food to feed an ever-growing population. One challenge is that some farming tasks are very labor intensive, and in some parts of the world this labor force is becoming scarce. Our mission is to develop robots that can fill the growing labor gap in farming. In addition, we focus on weeding, with the goal of reducing the use of pesticides. Our focus is on visual control of robots, as cameras have a high information density relative to their cost. In this talk I will introduce the project and discuss the design of the robot. Since visual control is a central part of our system, I will also discuss the integration of deep learning and control techniques, and analyze the effects of the timing interactions between controllers and deep neural networks.


Bio: 

Petar Durdevic is an Associate Professor in the Energy Department at Aalborg University, Denmark, and has investigated the application of advanced control, deep learning, and reinforcement learning in control systems since his Ph.D. studies. He has developed several robotic systems with visual servo navigation. For the past five years he has worked extensively on inspection and condition monitoring of offshore energy systems, focusing predominantly on wind turbines. He set up the Robotics Lab at AAU-E and leads the Offshore Drones and Robots research group. He is also a board member of the Aalborg Robotics research center at AAU, which promotes research, innovation, education, and dissemination within robotics.

Taking off: autonomy for insect-scale robots

Date: 9/1/2022

Speaker: Farrell Helbling

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

Countless science fiction works have set our expectations for small, mobile, autonomous robots for use in a broad range of applications. The ability to move through highly dynamic and complex environments can expand capabilities in search-and-rescue operations and safety inspection tasks. These robots can also form a diverse collective that provides more flexibility than a single multifunctional robot. Advances in multi-scale manufacturing and the proliferation of small electronic devices have paved the way to realizing this vision with centimeter-scale robots. However, significant challenges remain in making these highly articulated mechanical devices fully autonomous due to their severe mass and power constraints. My research takes a holistic approach to navigating the inherent tradeoffs of each component in terms of its size, mass, power, and computation requirements. In this talk I will present strategies for creating an autonomous vehicle, the RoboBee: an insect-scale flapping-wing robot with unprecedented mass, power, and computation constraints. I will present my analysis of the control and power requirements for this vehicle, as well as results on the integration of onboard sensors. I will also discuss recent results that represent the culmination of nearly two decades of effort to create a power-autonomous insect-scale vehicle. Lastly, I will outline how this design strategy can be readily applied to other micro- and bio-inspired autonomous robots.

Bio: 

Farrell Helbling is an assistant professor in Electrical and Computer Engineering at Cornell University, where she focuses on the systems-level design of insect-scale vehicles. Her graduate and postdoctoral work at the Harvard Microrobotics Lab focused on the Harvard RoboBee, an insect-scale flapping-wing robot, and HAMR, a bio-inspired crawling robot. Her research looks at the integration of the control system, sensors, and power electronics within the strict weight and power constraints of these vehicles. Her work on the first autonomous flight of a centimeter-scale vehicle was recently featured on the cover of Nature. She is a 2018 Rising Star in EECS, the recipient of an NSF Graduate Research Fellowship, and a co-author of the IROS 2015 Best Student Paper for an insect-scale, hybrid aerial-aquatic vehicle. Her work on the RoboBee project can be seen at the Boston Museum of Science, the World Economic Forum, the London Science Museum, and the Smithsonian, as well as in the popular press (The New York Times, PBS NewsHour, Science Friday, and the BBC). She is interested in the codesign of mechanical and electrical systems for mass-, power-, and computation-constrained robots.

Welcome to the Fall 2022 Robotics Seminar!

Speakers: Tapomayukh Bhattacharjee and Sanjiban Choudhury

Date: 8/25/2022

Location: 122 Gates Hall

Time: 2:40 p.m.

Hey everyone! Welcome back for the semester. The first seminar is just an informal meet and greet. We will cover the logistics of what to expect from this semester’s seminar/class as well as give an introduction to Cornell Robotics as a community. The Robotics Graduate Student Organization will also cover some of what is to come for graduate students. If you’re new to the Cornell Robotics community, be sure to come for this week’s seminar! We will also have snacks!