Learning How to Plan for Multi-Step Manipulation in Collaborative Robotics

Claudia Pérez D’Arpino, MIT


Abstract: The use of robots for complex manipulation tasks is currently challenged by the limited ability of robots to construct a rich representation of the activity at both the motion and task levels in ways that are both functional and apt for human-supervised execution. For instance, the operator of a remote robot would benefit from planning assistance, as opposed to the currently used method of joint-by-joint direct teleoperation. In manufacturing, robots are increasingly expected to execute manipulation tasks in a shared workspace with humans, which requires the robot to predict human actions and plan around these predictions. In both cases, it is beneficial to deploy systems that are capable of learning skills from observed demonstrations, as this would enable the application of robotics by users without programming skills. However, previous work on learning from demonstrations is limited in the range of tasks that can be learned and in generalization across different skills and different robots. In this talk, I present C-LEARN, a method of learning from demonstrations that supports the use of hard geometric constraints for planning multi-step functional manipulation tasks with multiple end effectors in quasi-static settings, and show the advantages of using the method in a shared autonomy framework.

Speaker Bio: Claudia Pérez D’Arpino is a PhD Candidate in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, advised by Prof. Julie A. Shah in the Interactive Robotics Group since 2012. She received her degree in Electronics Engineering (2008) and her Master's in Mechatronics (2010) from the Simon Bolivar University in Caracas, Venezuela, where she served as Assistant Professor in the Electronics and Circuits Department (2010-2012) with a focus on Robotics. She participated in the DARPA Robotics Challenge with Team MIT (2012-2015). Her research at CSAIL combines machine learning and planning techniques to empower humans through the use of robotics and AI. Her PhD research centers on enabling robots to learn and create strategies for multi-step manipulation tasks by observing demonstrations, and on developing efficient methods for robots to employ these skills in collaboration with humans, either in shared-workspace collaboration, such as assembly in manufacturing, or in remote robot control under shared autonomy, such as emergency response scenarios.

Web: http://people.csail.mit.edu/cdarpino/

Autonomous and Intelligent Robots in Unstructured Field Environments

Dr. Girish Chowdhary, UIUC, Co-Founder EarthSense Inc.


Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will present some key theoretical and algorithmic advances in adaptive control, reinforcement learning, collaborative autonomy, and robot-based analytics that my group is working on to bring this future much nearer. I will discuss my group’s theoretical and practical work on the challenges of making autonomous, persistent, and collaborative field robotics a reality. I will discuss new algorithms that are laying the foundation for robust long-duration autonomy in harsh, changing, and uncertain environments, including deep learning for robot embedded vision, deep adversarial reinforcement learning for large state-action spaces, and transfer learning across deep reinforcement learning domains. I will also describe the new breed of lightweight, compact, and highly autonomous field robots that my group is creating and deploying in fields across the US. I will show several videos of the TerraSentia robot, which popular media, including the Chicago Tribune, the MIT Technology Review, Discovery Canada, and leading technology blogs, have hailed as opening the doors to an exciting revolution in agricultural robotics. I will also discuss several technological and socio-economic challenges of making autonomous field-robotic applications with small robots a reality, including opportunities in high-throughput phenotyping, mechanical weeding, and robots for defense applications.

Speaker Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign, and the director of the Distributed Autonomous Systems laboratory at UIUC. He holds a PhD (2010) from the Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor in Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award and the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial award. He is the co-founder of EarthSense Inc., working to make ultralight agricultural robotics a reality.

Design and Analysis of a Wearable Robotic Forearm

Vignesh Vatsal, Cornell University


Abstract: Human augmentations that can enhance a user’s capabilities in terms of strength, power, safety, and task efficiency have been a persistent area of research. Historically, most efforts in this field have focused on prostheses and exoskeletons, which serve either to replace and rehabilitate lost capabilities or to enhance existing ones by adhering to human limb structures. More recently, we are witnessing devices that add capabilities beyond those found in nature, such as additional limbs and fingers. However, most of these devices have been designed for specific tasks and applications, at far ends of a spectrum of power, size, and weight. Additionally, they are not considered to be agents for collaborative activities, with interaction modes typically involving teleoperation or demonstration-based programmed motions. We envision a more general-purpose wearable robot, on the scale of a human forearm, that enhances the reach of a user and acts as a truly collaborative autonomous agent. We aim to connect the fields of wearable robot design, control systems, and computational human-robot interaction (HRI). We report on an iterative process for user-centered design of the robot, followed by an analysis of its kinematics, dynamics, and biomechanics. The collaboration aspect involves collecting data from human-human teleoperation studies to build models for human intention recognition and robot behavior generation in joint human-robot tasks.

Where Will Our Cars Take Us? The History, Challenges, and Potential Impact of Self-Driving Cars

Mark Campbell, Cornell University


Abstract: Autonomous, self-driving cars have the potential to impact society in many ways, including taxi/bus service, shipping and delivery, and commuting to/from work. This talk will give an overview of the history, technological work to date and challenges, and potential future impact of self-driving cars. A key challenge is the ability to perceive the environment from the car's sensors, i.e., how can a car convert pixels from a camera into knowledge of a scene with cars, cyclists, and pedestrians? Perception in self-driving cars is particularly challenging, given the fast viewpoint changes and close proximity of other objects. This perceived information is typically uncertain and constantly being updated, yet it must also be used for important decisions by the car, ranging from a simple lane change to stopping and queuing at a traffic light. Videos, examples, and insights will be given from Cornell’s autonomous car, as well as from key players such as Google/Waymo and car companies.

Can you teach me?: Leveraging and Managing Interaction to Enable Concept Grounding

Kalesha Bullard, Georgia Tech


Abstract: When a robotic agent is given a recipe for a task, it must perceptually ground each entity and concept within the recipe (e.g., items, locations) in order to perform the task. Assuming no prior knowledge, this is particularly challenging in newly situated or dynamic environments, where the robot has limited representative training data. This research examines the problem of enabling a social robotic agent to leverage interaction with a human partner for learning to efficiently ground task-relevant concepts in its situated environment. Our prior work has investigated Learning from Demonstration approaches for the acquisition of (1) training instances as examples of task-relevant concepts and (2) informative features for appropriately representing and discriminating between task-relevant concepts. In ongoing work, we examine the design of algorithms that enable the social robot learner to autonomously manage the interaction with its human partner, towards actively gathering both instance and feature information for learning the concept groundings. This is motivated by the way humans learn: by combining multiple types of information rather than focusing on only one. In this talk, I present insights and findings from our initial work on learning from demonstration for grounding of task-relevant concepts, and from ongoing work on interaction management to improve the learning of grounded concepts.

Bio: Kalesha Bullard is a PhD candidate in Computer Science at the Georgia Institute of Technology. Her thesis research lies at the intersection of Human-Robot Interaction and Machine Learning: enabling a social robot to learn groundings for task-relevant concepts by leveraging and managing interaction with a human teacher. She is co-advised by Sonia Chernova, associate professor in the School of Interactive Computing at Georgia Tech, and Andrea L. Thomaz, associate professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. Before coming to Georgia Tech, Kalesha received her undergraduate degree in Mathematics Education from The University of Georgia and subsequently participated in the Teach For America national service corps as a high school mathematics teacher. Over the course of her research career, Kalesha has served as a Program Committee co-chair for three different workshops and symposia, completed research internships at IBM Watson and the NASA Jet Propulsion Laboratory, and was awarded an NSF Graduate Research Fellowship and a Google Generation Scholarship. Kalesha’s broader personal research vision is to enable social robots with the cognitive reasoning abilities and social intelligence necessary to engage in meaningful dialogue with their human partners over long-term time horizons. Towards that end, she is particularly interested in grounded and embodied dialogue, whereby the agent can communicate autonomously, intuitively, and expressively.