Acquiring Motor Skills with Motion Imitation and Reinforcement Learning

Date: 9/29/2022

Speaker: Xue Bin (Jason) Peng

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

Humans are capable of performing awe-inspiring feats of agility by drawing from a vast repertoire of diverse and sophisticated motor skills. This dynamism is in sharp contrast to the narrowly specialized and rigid behaviors commonly exhibited by artificial agents in both simulated and real-world domains. How can we create agents that are able to replicate the agility, versatility, and diversity of human motor behaviors? In this talk, we present motion imitation techniques that enable agents to learn large repertoires of highly dynamic and athletic behaviors by mimicking demonstrations. We begin by presenting a motion imitation framework that enables simulated agents to imitate complex behaviors from reference motion clips, ranging from common locomotion skills such as walking and running, to more athletic behaviors such as acrobatics and martial arts. The agents learn to produce robust and life-like behaviors that are nearly indistinguishable in appearance from motions recorded from real-life actors. We then develop adversarial imitation learning techniques that can imitate and compose skills from large motion datasets in order to fulfill high-level task objectives. In addition to developing controllers for simulated agents, our approach can also synthesize controllers for robots operating in the real world. We demonstrate the effectiveness of our approach by developing controllers for a large variety of agile locomotion skills for bipedal and quadrupedal robots.
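
To make the motion-imitation setup concrete, the sketch below shows one common way a per-timestep imitation reward can be defined for reinforcement learning: the agent is rewarded for matching the joint angles and root velocity of a reference motion clip. The function name, weights, and error scales are illustrative assumptions, not the specific formulation used in the talk.

```python
import numpy as np

def imitation_reward(agent_joints, ref_joints, agent_root_vel, ref_root_vel,
                     w_pose=0.7, w_vel=0.3):
    """Illustrative per-timestep imitation reward for motion tracking.

    The agent is rewarded for staying close to the reference clip's joint
    angles and root velocity at the current timestep. The exponential form
    keeps each term bounded in (0, 1].
    """
    pose_err = np.sum((np.asarray(agent_joints) - np.asarray(ref_joints)) ** 2)
    vel_err = np.sum((np.asarray(agent_root_vel) - np.asarray(ref_root_vel)) ** 2)
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)
```

In practice a reward of this kind is typically combined with a task objective and optimized with a standard policy-gradient algorithm such as PPO, with the reference pose advanced along the clip at each timestep.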

Bio: 

Xue Bin (Jason) Peng is an Assistant Professor at Simon Fraser University and a Research Scientist at NVIDIA. He received a Ph.D. from the University of California, Berkeley, supervised by Prof. Sergey Levine and Prof. Pieter Abbeel, and an M.Sc. from the University of British Columbia under the supervision of Prof. Michiel van de Panne. His work focuses on developing techniques that enable simulated and real-world agents to reproduce the motor capabilities of humans and other animals. He is the recipient of the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award, the RSS 2020 Best Paper Award, and the SCA 2017 Best Student Paper Award.

 

Learning Preferences for Interactive Autonomy

Date: 9/22/2022

Speaker: Erdem Bıyık

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

In human-robot interaction, or more generally in multi-agent systems, we often have decentralized agents that need to perform a task together. In such settings, it is crucial to be able to anticipate the actions of other agents; without this ability, agents often perform very poorly. Humans are usually good at this, largely because we maintain good estimates of what other agents are trying to do. We want to give robots the same ability through reward learning and partner modeling. In this talk, I will present active learning approaches to this problem and show how we can leverage preference data to learn objectives. I will show how preferences can help reward learning in settings where demonstration data may fail, and how partner modeling enables decentralized agents to cooperate efficiently.
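
As a rough illustration of how preference data can be used to learn an objective, the sketch below fits a linear reward over trajectory features to pairwise comparisons using a Bradley-Terry style likelihood. The feature representation, function names, and the linear reward form are assumptions made for the example, not necessarily the models used in the talk.

```python
import numpy as np

def preference_nll(theta, feats_a, feats_b, prefs):
    """Negative log-likelihood of pairwise preferences under a Bradley-Terry
    model with a linear reward r(xi) = theta . phi(xi).

    feats_a, feats_b: (N, d) feature vectors of the two trajectories in each query.
    prefs: (N,) array with 1 if trajectory A was preferred, 0 if B was preferred.
    """
    r_a = feats_a @ theta                      # reward assigned to trajectory A
    r_b = feats_b @ theta                      # reward assigned to trajectory B
    p_a = 1.0 / (1.0 + np.exp(-(r_a - r_b)))   # P(A preferred over B)
    eps = 1e-8                                 # avoid log(0)
    return -np.mean(prefs * np.log(p_a + eps) + (1 - prefs) * np.log(1 - p_a + eps))
```

Minimizing this loss (for example, with gradient descent) yields a reward estimate; active-learning variants then choose the next pair of trajectories to show the human so that the answer is expected to be maximally informative about the reward parameters.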

 

Bio: 

Erdem Bıyık is a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. He received his B.Sc. from Bilkent University, Turkey, in 2017 and his Ph.D. from Stanford University in 2022. His research interests lie at the intersection of robotics, artificial intelligence, machine learning, and game theory. He is interested in enabling robots to actively learn from various forms of human feedback and in designing robot policies that improve the efficiency of multi-agent systems in both cooperative and competitive settings. He worked at Google as a research intern in 2021, where he adapted his active robot learning algorithms to recommender systems. He will join the University of Southern California as an assistant professor in 2023.


Learning to Address Novel Situations Through Human-Robot Collaboration

Date: 9/15/2022

Speaker: Tesca Fitzgerald

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

As our expectations for robots’ adaptive capacities grow, it will be increasingly important for them to reason about the novel objects, tasks, and interactions inherent to everyday life. Rather than attempt to pre-train a robot for all potential task variations it may encounter, we can develop more capable and robust robots by assuming they will inevitably encounter situations that they are initially unprepared to address. My work enables a robot to address these novel situations by learning from a human teacher’s domain knowledge of the task, such as the contextual use of an object or tool. Meeting this challenge requires robots to be flexible not only to novelty, but to different forms of novelty and their varying effects on the robot’s task completion. In this talk, I will focus on (1) the implications of novelty, and its various causes, on the robot’s learning goals, (2) methods for structuring its interaction with the human teacher in order to meet those learning goals, and (3) modeling and learning from interaction-derived training data to address novelty. 

 

Bio: 

Dr. Tesca Fitzgerald is an Assistant Professor in the Department of Computer Science at Yale University. Her research is centered on interactive robot learning, with the aim of developing robots that are adaptive, robust, and collaborative when faced with novel situations. Before joining Yale, Dr. Fitzgerald was a Postdoctoral Fellow at Carnegie Mellon University, received her Ph.D. in Computer Science at Georgia Tech, and completed her B.Sc. at Portland State University. She is an NSF Graduate Research Fellow (2014), Microsoft Graduate Women Scholar (2014), and IBM Ph.D. Fellow (2017).

Small Multi-Robot Agriculture System (SMRAS)

Date: 9/8/2022

Speaker: Petar Durdevic

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

Agriculture is an important part of our society, enabling us to produce enough food to feed an ever-growing population. One challenge is that some farming tasks are very labor intensive, and in some parts of the world this labor force is becoming scarce. Our mission is to develop robots that can fill the growing gap left by labor shortages in farming. We also focus on weeding, with the goal of reducing the use of pesticides. Our focus is on visual control of robots, as cameras offer a high information density relative to their cost. In this talk I will introduce the project and discuss the design of the robot. Since visual control is a large part of our system, I will also discuss the integration of deep learning and control techniques, and analyze the effect of timing interactions between controllers and deep neural networks.

 

Bio: 

Petar Durdevic is an Associate Professor in the Energy Department at Aalborg University, Denmark, and has investigated the application of advanced control, deep learning, and reinforcement learning in control systems since his Ph.D. studies. He has developed several robotic systems with visual servo navigation. For the past five years he has worked extensively on inspection and condition monitoring of offshore energy systems, focusing predominantly on wind turbines. He set up the Robotics lab at AAU-E and leads the Offshore Drones and Robots research group. He is also a board member of the research center Aalborg Robotics at AAU, which promotes research, innovation, education, and dissemination within robotics.


Taking off: autonomy for insect-scale robots

Date: 9/1/2022

Speaker: Farrell Helbling

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Countless science fiction works have set our expectations for small, mobile, autonomous robots for use in a broad range of applications. The ability to move through highly dynamic and complex environments can expand capabilities in search and rescue operations and safety inspection tasks. These robots can also form a diverse collective to provide more flexibility than a single multifunctional robot. Advances in multi-scale manufacturing and the proliferation of small electronic devices have paved the way to realizing this vision with centimeter-scale robots. However, there remain significant challenges in making these highly articulated mechanical devices fully autonomous due to their severe mass and power constraints. My research takes a holistic approach to navigating the inherent tradeoffs in each component in terms of size, mass, power, and computation requirements. In this talk I will present strategies for creating an autonomous vehicle, the RoboBee – an insect-scale flapping-wing robot with unprecedented mass, power, and computation constraints. I will present my work on the analysis of control and power requirements for this vehicle, as well as results on the integration of onboard sensors. I will also discuss recent results that culminate nearly two decades of effort to create a power-autonomous insect-scale vehicle. Lastly, I will outline how this design strategy can be readily applied to other micro- and bio-inspired autonomous robots.

Bio: Farrell Helbling is an assistant professor in Electrical and Computer Engineering at Cornell University, where she focuses on the systems-level design of insect-scale vehicles. Her graduate and post-doctoral work at the Harvard Microrobotics Lab focused on the Harvard RoboBee, an insect-scale flapping-wing robot, and HAMR, a bio-inspired crawling robot. Her research looks at the integration of the control system, sensors, and power electronics within the strict weight and power constraints of these vehicles. Her work on the first autonomous flight of a centimeter-scale vehicle was recently featured on the cover of Nature. She is a 2018 Rising Star in EECS, the recipient of a NSF Graduate Research Fellowship, and co-author on the IROS 2015 Best Student Paper for an insect-scale, hybrid aerial-aquatic vehicle. Her work on the RoboBee project can be seen at the Boston Museum of Science, World Economic Forum, London Science Museum, and the Smithsonian, as well as in the popular press (The New York Times, PBS NewsHour, Science Friday, and the BBC). She is interested in the codesign of mechanical and electrical systems for mass-, power-, and computation-constrained robots.


Welcome to the Fall 2022 Robotics Seminar!

Tapomayukh Bhattacharjee and Sanjiban Choudhury

Date: 8/25/2022

Location: 122 Gates Hall

Time: 2:40 p.m.

Hey everyone! Welcome back for the semester. The first seminar is just an informal meet and greet. We will cover the logistics of what to expect from this semester’s seminar/class as well as give an introduction to Cornell Robotics as a community. The Robotics Graduate Student Organization will also cover some of what is to come for graduate students. If you’re new to the Cornell Robotics community, be sure to come for this week’s seminar! We will also have snacks!

Human-centered Robotics: How to bridge the gap between humans and robots?

Date: 5/5/2022


Speaker: Daehyung Park

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: There are now successful stand-alone and coexisting robotic systems in human environments. Yet robots are not intelligent enough to collaborate directly with humans, particularly with potential non-expert users. In this talk, I will discuss how to develop highly capable robotic teammates by bridging the knowledge gap between humans and robots. In particular, I will show how our cognitive architecture with learned knowledge models can produce three core capabilities: natural language grounding, transferable skill learning, and robust task planning and execution. I will also show how to provide highly scalable and reliable assistance when situated in novel environments.

Bio: 

Daehyung Park is an assistant professor at the School of Computing, KAIST, Korea, leading the Robust Intelligence and Robotics Laboratory (RIRO Lab). His research lies at the intersection of mobile manipulation, artificial intelligence, and human-robot interaction to advance collaborative robot technologies.
Prior to joining KAIST, he was a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. He received a Ph.D. in Robotics from the Georgia Institute of Technology, an M.S. from the University of Southern California, and a B.S. from Osaka University. Prior to starting his Ph.D., he was a robotics researcher at Samsung Electronics from 2008 to 2012. He is a recipient of a 2022 Google Research Scholar Award.


Making Soft Robotics Less Hard: Towards a Unified Modeling, Design, and Control Framework

Date: 4/28/2022


Speaker: Daniel Bruder

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Soft robots are able to safely interact with delicate objects, absorb impacts without damage, and adapt to the shape of their environment, making them ideal for applications that require safe robot-human interaction. However, despite their potential advantages, their use in real-world applications has been limited due to the difficulty involved in modeling and controlling soft robotic systems. In this talk, I’ll describe two modeling approaches aimed at overcoming the limitations of previous methods. The first is a physics-based approach for fluid-driven actuators that offers predictions in terms of tunable geometrical parameters, making it a valuable tool in the design of soft fluid-driven robotic systems. The second is a data-driven approach that leverages Koopman operator theory to construct models that are linear, which enables the utilization of linear control techniques for nonlinear dynamical systems like soft robots. Using this Koopman-based approach, a pneumatically actuated soft arm was able to autonomously perform manipulation tasks such as trajectory following and pick-and-place with a variable payload without undergoing any task-specific training. In the future, these approaches could offer a paradigm for designing and controlling all soft robotic systems, leading to their more widespread adoption in real-world applications.
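
To illustrate the Koopman-based idea, the sketch below lifts measured states through a nonlinear dictionary and fits a linear model z_{t+1} ≈ A z_t + B u_t by least squares, in the spirit of extended dynamic mode decomposition. The dictionary, dimensions, and function names are illustrative assumptions, not the specific model identified for the soft arm in the talk.

```python
import numpy as np

def lift(x):
    """Hypothetical lifting dictionary: the state, its squares, and a constant."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, x ** 2, [1.0]])

def fit_lifted_linear_model(X, U, Y):
    """Fit z_{t+1} ~ A z_t + B u_t by least squares, where z = lift(x).

    X, Y: (N, n) arrays of current and next states; U: (N, m) array of inputs.
    Returns A (nz, nz) and B (nz, m) for the lifted linear model.
    """
    Z = np.array([lift(x) for x in X])          # lifted current states, (N, nz)
    Zp = np.array([lift(y) for y in Y])         # lifted next states, (N, nz)
    W = np.hstack([Z, U])                       # regressors [z_t, u_t], (N, nz + m)
    C, *_ = np.linalg.lstsq(W, Zp, rcond=None)  # solves W @ C ~ Zp
    nz = Z.shape[1]
    A = C[:nz].T
    B = C[nz:].T
    return A, B
```

Because the identified model is linear in the lifted state, standard linear tools such as LQR or linear MPC can then be applied to plan and control the underlying nonlinear system.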

Bio: 

Daniel Bruder received a B.S. degree in engineering sciences from Harvard University in 2013, and a Ph.D. degree in mechanical engineering from the University of Michigan in 2020. He is currently a postdoctoral fellow in the Harvard Microrobotics Lab supervised by Prof. Robert Wood. He is a recipient of the NSF Graduate Research Fellowship and the Richard and Eleanor Towner Prize for Outstanding Ph.D. Research. His research interests include the design, modeling, and control of robotic systems, especially soft robots.


Project Punyo: The challenges and opportunities when softness and tactile sensing meet

Date: 4/21/2022


Speaker: Naveen Kuppuswamy

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Manipulation in cluttered environments like homes requires stable grasps, precise placement, sensitivity to and robustness against unexpected contact, and the ability to manipulate a wide range of objects. Tactile-driven manipulation that exploits softness can be an effective strategy for these hard challenges. In this talk, I will first present the highly compliant TRI ‘Soft-bubble’ sensor/gripper and demonstrate, across a variety of manipulation tasks, the utility of combining highly perceptive sensing with variable passive compliance. I will then outline Project Punyo: our vision for a soft, tactile-sensing, bimanual whole-body manipulation platform, and present some recent results on whole-body rich-contact strategies for manipulating large domestic objects.

Bio: Naveen Kuppuswamy is a Senior Research Scientist and Tactile Perception and Control Lead in the Dexterous Manipulation department at the Toyota Research Institute. He holds a Bachelor of Engineering from Anna University, Chennai, India, an M.S. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, and a Ph.D. in Artificial Intelligence from the University of Zurich, Switzerland. He has also spent time as a Postdoctoral Researcher at the Italian Institute of Technology, Genoa, Italy, and as a Visiting Scientist with the Robotics and Perception Group at the University of Zurich. Naveen has several years of academic and industry experience working on tactile sensing, soft robotics, and robot control on a wide variety of platforms, and has authored several publications in leading peer-reviewed journals and conferences. His research has been recognized through multiple publication and grant awards. He is also keenly interested in STEM education for under-represented communities around the world. Naveen is deeply passionate about using robots to assist people and improve the quality of life of those in need.


Design and Perception of Wearable Multi-Contact Haptic Devices for Social Communication

Date: 4/14/2022

Speaker: Cara Nunez


Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract

During social interactions, people use auditory, visual, and haptic (touch) cues to convey their thoughts, emotions, and intentions. Current technology allows humans to convey high-quality visual and auditory information but has limited ability to convey haptic expressions remotely. As people interact more through digital means than in person, it becomes important to be able to communicate emotions effectively through those means as well. As online communication becomes more prevalent, systems that convey haptic signals could allow for improved distant socializing and empathetic remote human-human interaction.

Due to hardware constraints and limitations in our knowledge regarding human haptic perception, it is difficult to create haptic devices that completely capture the complexity of human touch. Wearable haptic devices allow users to receive haptic feedback without being tethered to a set location and while performing other tasks, but have stricter hardware constraints regarding size, weight, comfort, and power consumption. In this talk, I will present how I address these challenges through a cyclic process of (1) developing novel designs, models, and control strategies for wearable haptic devices, (2) evaluating human haptic perception using these devices, and (3) using prior results and methods to further advance design methodologies and understanding of human haptic perception.

Bio: Cara M. Nunez is a Postdoctoral Research Fellow in the Biorobotics Laboratory, Microrobotics Laboratory, and Move Lab at the Harvard John A. Paulson School of Engineering and Applied Sciences. She is also a Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University and will begin as an Assistant Professor in July 2023. She received a Ph.D. in Bioengineering and an M.S. in Mechanical Engineering from Stanford University, working in the Collaborative Haptics and Robotics in Medicine Lab, in 2021 and 2018, respectively. She was a visiting researcher in the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in 2019-2020. She received a B.S. in Biomedical Engineering and a B.A. in Spanish as part of the International Engineering Program from the University of Rhode Island in 2016. She was a recipient of the National Science Foundation Graduate Research Fellowship, the Deutscher Akademischer Austauschdienst Graduate Research Fellowship, the Stanford Centennial Teaching Assistant Award, and the Stanford Community Impact Award, and served as the Student Activities Committee Chair for the IEEE Robotics and Automation Society from 2020-2022. Her research interests include haptics and robotics, with a specific focus on haptic perception, cutaneous force feedback techniques, and wearable devices, for medical applications, human-robot interaction, virtual reality, and STEM education.