Constraints and Planning for Forceful Robotic Manipulation

Date:  5/4/23

Speaker:  Rachel Holladay, EECS PhD Student at MIT

Location:  Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:  In this talk I’ll primarily focus on enabling robots to perform multi-step forceful manipulation tasks, such as twisting a nut on a bolt or pulling a nail with a hammer claw, which requires reasoning over interlocking force and motion constraints across discrete and continuous choices. I categorize forceful manipulation as tasks where exerting substantial forces is necessary to complete the task. While all actions with contact involve forces, I focus on tasks where generating and transmitting forces is a limiting factor that must be reasoned over and planned for. I’ll first formalize constraints for forceful manipulation tasks where the goal is to exert force, often through a tool, on an object or the environment. These constraints define a task and motion planning problem that we solve to search for both the sequence of discrete actions, or strategy, and the continuous parameters of those actions. I’ll also briefly discuss our active work on multi-step manipulation that involves highly uncertain, briefly dynamic actions.
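The following minimal sketch (not Holladay’s formulation; the two-finger friction model, the numbers, and the function names are illustrative assumptions) shows the flavor of a force-transmission constraint a planner might check when deciding how to grasp a nut that must be twisted:

```python
# Illustrative sketch (not the speaker's implementation): checking whether a
# parallel-jaw grasp can transmit the torque needed to twist a nut.
# All numbers and the simple friction model below are assumptions.

def max_transmittable_torque(grip_force_n: float,
                             friction_coeff: float,
                             contact_radius_m: float) -> float:
    """Crude torsional friction limit for a two-finger grasp: each finger can
    resist up to mu * F_n of tangential force, acting at roughly the contact
    patch radius."""
    return 2.0 * friction_coeff * grip_force_n * contact_radius_m


def grasp_satisfies_force_constraint(required_torque_nm: float,
                                     grip_force_n: float,
                                     friction_coeff: float = 0.6,
                                     contact_radius_m: float = 0.01) -> bool:
    """A planner could prune grasp or strategy choices that violate this check."""
    return max_transmittable_torque(grip_force_n, friction_coeff,
                                    contact_radius_m) >= required_torque_nm


if __name__ == "__main__":
    # Example: twisting a stiff nut that needs ~2 Nm with a 40 N grip.
    print(grasp_satisfies_force_constraint(required_torque_nm=2.0,
                                           grip_force_n=40.0))  # False -> replan
```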

Bio:  Rachel Holladay is an EECS PhD student at MIT, where she is a member of the LIS (Learning and Intelligent Systems) Group and the MCube Lab (Manipulation and Mechanisms at MIT). She is interested in developing algorithms for dexterous and long-horizon robotic manipulation and planning. In particular, her doctoral research focuses on enabling robots to complete multi-step manipulation tasks that require reasoning over forces and contact mechanics. She received her Bachelor’s degree in Computer Science and Robotics from Carnegie Mellon.

Website: http://people.csail.mit.edu/rholladay/

 

Learning from Limited Data for Robot Vision in the Wild

Date:  4/27/23

Speaker:  Assistant Professor Dr. Katherine (Katie) Skinner, University of Michigan

Location:  Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Field robotics refers to the deployment of robots and autonomous systems in unstructured or dynamic environments across air, land, sea, and space. Robust sensing and perception can enable these systems to perform tasks such as long-term environmental monitoring, mapping of unexplored terrain, and safe operation in remote or hazardous environments. In recent years, deep learning has led to impressive advances in robot perception. However, state-of-the-art methods still rely on gathering large datasets with hand-annotated labels for network training. For many applications across field robotics, dynamic environmental conditions or operational challenges hinder efforts to collect and manually label large training sets that are representative of all possible environmental conditions a robot might encounter. This limits the performance and generalizability of existing learning-based approaches for robot vision in field applications.

In this talk, I will discuss unique challenges for robot perception in dynamic, unstructured, and remote environments often encountered in field robotics applications. I will present my recent research to overcome these challenges to advance perceptual capabilities of robotic systems across sea, land, and space. Lastly, I will share my insight on opportunities to integrate learning-based approaches into field robotic systems for practical deployment.

 

Bio:  Dr. Katherine (Katie) Skinner is an Assistant Professor in the Department of Robotics at the University of Michigan. Prior to this appointment, she was a Postdoctoral Fellow in the Daniel Guggenheim School of Aerospace Engineering and the School of Earth and Atmospheric Sciences at Georgia Institute of Technology. She received an M.S. and Ph.D. from the Robotics Institute at the University of Michigan, and a B.S.E. in Mechanical and Aerospace Engineering with a Certificate in Applications of Computing from Princeton University.

Website: https://robotics.umich.edu/profile/katherine-skinner/

 

Towards robots that navigate seamlessly next to people

Date:  4/20/23

Speaker:  Assistant Professor Christoforos Mavrogiannis, University of Michigan

Location:  Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Robots have the potential to enhance human productivity by taking over tedious and laborious tasks across important domains like fulfilment, manufacturing, and healthcare. These domains are highly dynamic and unstructured, requiring robots to operate close to users who are occupied with demanding and possibly safety-critical tasks. This level of complexity is challenging for existing systems, which largely treat users as moving obstacles. Such systems often fail to adapt to the dynamic context, producing behaviors that disrupt human activity and hinder productivity. In this talk, I will share insights from my work on robot navigation in crowds, highlighting how mathematical abstractions grounded in our understanding of pedestrian navigation may empower simple models and interpretable architectures to produce safe, efficient, and positively perceived robot motion in close-interaction settings. I will close with field-deployment challenges, emphasizing the importance of handling autonomy failures and scaling performance across diverse environments.
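As one generic example of the kind of simple, interpretable pedestrian abstraction the abstract alludes to, the sketch below implements a basic social-force-style update; this is not Mavrogiannis’s method, and the model, gains, and function names are illustrative assumptions:

```python
# Generic illustration of a lightweight, interpretable pedestrian-interaction
# model (a social-force-style update). Not the speaker's approach; all
# parameters below are made-up defaults.
import numpy as np

def social_force_step(pos, vel, goal, neighbors, dt=0.1,
                      desired_speed=1.2, relax_time=0.5,
                      repulse_gain=2.0, repulse_range=0.4):
    """One Euler step of a point agent attracted to its goal and
    repelled from nearby pedestrians."""
    to_goal = goal - pos
    desired_vel = desired_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    force = (desired_vel - vel) / relax_time
    for n in neighbors:
        offset = pos - n
        dist = np.linalg.norm(offset) + 1e-9
        force += repulse_gain * np.exp(-dist / repulse_range) * offset / dist
    vel = vel + dt * force
    return pos + dt * vel, vel

# Example: an agent heading to (5, 0) while passing a pedestrian at (2, 0.3).
p, v = np.array([0.0, 0.0]), np.array([0.0, 0.0])
for _ in range(50):
    p, v = social_force_step(p, v, goal=np.array([5.0, 0.0]),
                             neighbors=[np.array([2.0, 0.3])])
print(p)  # the agent has detoured slightly around the pedestrian
```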

Bio: Christoforos Mavrogiannis is an incoming Assistant Professor of Robotics at the University of Michigan and a postdoc at the University of Washington, working on human-robot collaboration and multiagent systems. He has been recognized as an outstanding young scientist by the Heidelberg Laureate Forum, a best-paper finalist at the HRI conference, and a Pioneer at the HRI and RSS conferences. He has been a Hackaday Prize finalist and a winner of the Robotdalen International Innovation Award for his open-source initiative OpenBionics, and currently serves as a mentor for MuSHR, the open-source racecar project of the University of Washington. Christoforos holds a Ph.D. from Cornell University and a Diploma from the National Technical University of Athens.

Website: https://robotics.umich.edu/profile/christoforos-mavrogiannis/

 

Life-long and Robust Learning from Robotic Fleets

Date:  4/13/23

Speaker: Prof. Sandeep Chinchali, The University of Texas at Austin

Location:  Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Today’s robotic fleets collect terabytes of rich video and LiDAR data that can be used to continually re-train machine learning (ML) models in the cloud. While these fleets should ideally upload all their data to train robust ML models, this is often infeasible due to prohibitive network bandwidth, data labeling, and cloud costs. In this talk, I will present my group’s papers at CoRL 2022 that aim to learn robust perception models from geo-distributed robotic fleets. First, I will present a cooperative data sampling strategy for autonomous vehicles (AVs) to collect a diverse ML training dataset in the cloud. Since the AVs have a shared objective but minimal information about each other’s local data distributions, we can naturally cast cooperative data collection as a mathematical game. I will theoretically characterize the convergence and communication benefits of game-theoretic data sampling and show state-of-the-art performance on standard AV datasets. Then, I will transition to our work on synthesizing robust perception models tailored to robotic control tasks. The key insight is that today’s methods to train robust perception models are largely task-agnostic – they augment a dataset using random image transformations or adversarial examples targeted at a vision model in isolation. However, I will show that accounting for the structure of an ultimate robotic task, such as differentiable model predictive control, can improve the generalization of perception models. Finally, I will conclude by tying these threads together into a broader vision of robust, continual learning from networked robotic fleets.
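To make the cooperative data-sampling idea concrete, here is a toy sketch of diversity-driven upload selection for a fleet; the clustering scheme, rarity score, and bandwidth budget are illustrative assumptions, not the algorithm from the CoRL 2022 papers:

```python
# Toy sketch of diversity-driven fleet data sampling: each vehicle uploads the
# local samples that fall in the cloud dataset's least-covered clusters.
# The embedding/clustering scheme and budget are illustrative assumptions.
import numpy as np

def select_uploads(local_embeddings, cloud_counts, centroids, budget):
    """Pick up to `budget` local samples whose nearest cluster is currently
    under-represented in the shared cloud dataset."""
    dists = np.linalg.norm(local_embeddings[:, None, :] - centroids[None], axis=-1)
    nearest = dists.argmin(axis=1)                   # cluster id per local sample
    rarity = 1.0 / (1.0 + cloud_counts[nearest])     # rarer cluster -> higher score
    return np.argsort(-rarity)[:budget]              # indices of samples to upload

# Example with random embeddings and three clusters, one badly under-covered.
rng = np.random.default_rng(0)
local = rng.normal(size=(100, 8))
cents = rng.normal(size=(3, 8))
counts = np.array([500, 20, 5])                      # cluster 2 is under-represented
print(select_uploads(local, counts, cents, budget=10))
```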

Bio: Sandeep Chinchali is an assistant professor in UT Austin’s ECE department. He completed his PhD in computer science at Stanford and his undergrad at Caltech, where he researched at NASA JPL. Previously, he was the first principal data scientist at Uhana, a Stanford startup working on data-driven optimization of cellular networks, now acquired by VMware. Sandeep’s research on cloud robotics, edge computing, and 5G was recognized with the Outstanding Paper Award at MLSys 2022 and was a finalist for Best Systems Paper at Robotics: Science and Systems 2019. His group is funded by companies such as Lockheed Martin, Honda, Viavi, Cisco, and Intel and actively collaborates with local Austin startups.

 

 

The TrimBot2020 gardening robot

Date:  4/6/23

Speaker: Professor Robert B. Fisher

Location: 122 Gates Hall or Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The TrimBot2020 gardening robot was developed as a prototype in the EC-funded TrimBot2020 research project. The device was designed as a mobile, largely autonomous robot for pruning bushes and rose plants. As an outdoor robot, it had to deal with changing lighting, targets moving in the wind, navigation problems, and natural plants with limited shape models. But the robot could successfully prune. This talk will overview the technologies enabling the robot. Prof. Fisher will also present some work on aerial classification of forests needing thinning (or not).

Bio:  Prof. Robert B. Fisher FIAPR, FBMVA received a BS (Mathematics, California Institute of Technology, 1974), MS (Computer Science, Stanford, 1978) and a PhD (Edinburgh, 1987). Since then, Bob has been an academic at Edinburgh University, including serving as College Dean of Research. He has chaired the Education Committee and the Industrial Liaison Committee of the Int. Association for Pattern Recognition, of which he is currently the Treasurer. His research covers topics mainly in high-level computer vision and 3D and 3D video analysis, focusing on reconstructing geometric models from existing examples, which contributed to a spin-off company, Dimensional Imaging. The research has led to 5 authored books and 300+ peer-reviewed scientific articles or book chapters. He has developed several on-line computer vision resources, with over 1 million hits. Most recently, he has been the coordinator of EC projects 1) acquiring and analysing video data of 1.4 billion fish from about 20 camera-years of undersea video of tropical coral reefs and 2) developing a gardening robot (hedge-trimming and rose pruning). He is a Fellow of the Int. Association for Pattern Recognition (2008) and the British Machine Vision Association (2010).

 

 

Developing and Deploying Platforms for Real-World Impact: FlowIO Platform

Date:  3/30/23

Speaker:  Ali Shtarbanov

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The fields of Human-Computer Interaction (HCI), Haptics, and Robotics are currently undergoing a paradigm shift from rigid materials toward more compliant, soft, and actuated materials, giving rise to areas often referred to as soft robotics or programmable materials. However, there is a significant lack of tools and development platforms in this field, which makes prototyping difficult and inaccessible to most creators. In this talk, I will present the FlowIO Platform and many of the projects it has enabled over the past two years. FlowIO is a fully integrated general-purpose solution for control, actuation, and sensing of soft programmable materials – enabling researchers, artists, and makers to unleash their creativity and to realize their ideas quickly and easily. It has been deployed in 12 countries and has enabled numerous art projects, research papers, and master’s theses around the world. I will also present a generalized framework of the essential technological and non-technological characteristics that any development platform must offer – in order to be suitable for diverse users and to achieve mass adoption. I will address questions such as: What does it really take to create and deploy development platforms for achieving real-world impact? Why do we need platforms and how can they democratize emerging fields and accelerate innovation? Why are tools the enabler of progress, and how do they shape our world? Why do most platform attempts fail and only very few succeed in terms of impact and widespread adoption?

Bio: Ali Shtarbanov, a final-year Ph.D. student at the MIT Media Lab, is on a mission to make prototyping and innovation in emerging fields more rapid and accessible for everyone through the design and deployment of novel development platforms that are highly versatile, general-purpose, and simple to use. Ali is a Bulgarian-American system designer, engineer, and HCI researcher best known as the inventor of the FlowIO Platform and the founder of the SoftRobotics.IO community ecosystem. His research areas include modular systems design, interactive interfaces, soft robotics, haptics, and community building. Ali’s work has been published at leading academic venues (CHI, UIST, SIGGRAPH, IROS, ASCEND) and has won multiple first-place awards at some of the world’s largest design, engineering, and research competitions, including the Hackaday Grand Prize, TechBriefs Grand Prize, ACM Student Research Competition, Core77, iF Design, Fast Company, and iDA. Prior to his PhD studies, Ali earned bachelor’s degrees in Physics and Electrical Engineering from Lehigh University with highest honors and a Master’s degree in Media Arts and Sciences from the MIT Media Lab with a focus on haptic feedback interfaces.

 

 

Multi-sensory programs for physical understanding – Modeling and Inference

Date:  3/23/23

Speaker:  Krishna Murthy

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Modern machine learning has unlocked a new level of embodied perception and reasoning abilities by leveraging internet-scale training data. However, such systems fail in unpredictable and unintuitive ways when deployed in real-world applications. These advances have underplayed many classical techniques developed over the past few decades. I postulate that a flexible blend of classical and learned methods is the most promising path to developing flexible, interpretable, and actionable models of the world: a necessity for intelligent embodied agents.

My research intertwines classical and learning-based techniques to bring the best of both worlds, by building multi-sensory models of the 3D world. In this talk, I will share some recent efforts (by me and collaborators) on building world models and inference techniques geared towards spatial and physical understanding. In particular, I will talk about two themes:

  1. leveraging differentiable programs for physical understanding in a dynamic world (a small illustrative sketch follows this list)
  2. integrating features from large learned models for open-set and multimodal perception
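As a minimal stand-in for the first theme, the sketch below fits an unknown drag coefficient so that a simulated trajectory matches an observed landing point. A real differentiable program would use automatic differentiation; the dynamics, parameters, and function names here are illustrative assumptions, not anything from the talk:

```python
# Minimal stand-in for a differentiable physics program: infer an unknown drag
# coefficient from an observed landing point. A finite-difference gradient
# keeps the sketch dependency-free; autodiff would replace it in practice.

def landing_x(drag, v0=(4.0, 5.0), dt=0.01, g=9.81):
    """Simulate a point mass with linear drag until it returns to y = 0."""
    x, y, vx, vy = 0.0, 0.0, v0[0], v0[1]
    while True:
        vx -= dt * drag * vx
        vy -= dt * (g + drag * vy)
        x, y = x + dt * vx, y + dt * vy
        if y <= 0.0 and vy < 0.0:
            return x

def fit_drag(observed_x, drag=0.1, lr=0.05, eps=1e-4, steps=200):
    """Gradient descent on squared landing-point error w.r.t. the drag term."""
    for _ in range(steps):
        err = landing_x(drag) - observed_x
        grad = err * (landing_x(drag + eps) - landing_x(drag - eps)) / eps
        drag -= lr * grad
    return drag

print(fit_drag(observed_x=3.2))  # recovers a drag value consistent with the data
```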

Bio:   Krishna Murthy is a postdoc at MIT with Josh Tenenbaum and Antonio Torralba. His research focuses on building multi-sensory world models to help embodied agents perceive, reason about, and act in the world around them. He has organized multiple workshops at ICLR, NeurIPS, and ICCV on themes spanning differentiable programming, physical reasoning, 3D vision and graphics, and ML research dissemination.

His research has been recognized with graduate fellowship awards from NVIDIA and Google (2021), a best paper award from Robotics and Automation Letters (2019), and induction into the RSS Pioneers cohort (2020).

Website: https://krrish94.github.io/

 

Safe Control from Value Functions: Blending Control Barrier Functions and Hamilton-Jacobi Reachability Analysis

Date:  3/16/23

Speaker:  Sylvia Herbert

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:  Value functions have been used extensively for generating safe control policies for robots and other nonlinear systems. The output of the function provides the current “safety level” of the system, and its gradient informs the allowable control inputs to maintain safety. Two common approaches for value functions are control barrier functions (CBFs) and Hamilton-Jacobi (HJ) reachability value functions. Each method has its own advantages and challenges. HJ reachability analysis is a constructive and general method that suffers from computational complexity. CBFs are typically much simpler, but are challenging to find, often resulting in conservative or invalid hand-tuned or data-driven approximations. In this talk I will discuss our work in exploring the connections between these two approaches in order to blend the theory and tools from each. I’ll introduce the “control barrier-value function,” and show how we can refine CBF approximations to recover the maximum safe set and corresponding control policy for a system.
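For context on how a CBF acts as a safety filter, here is a textbook-style sketch for a single-integrator system; this is not the control barrier-value function construction from the talk, and the barrier, gain, and system are illustrative assumptions:

```python
# Minimal single-integrator CBF safety-filter sketch (textbook-style).
# The barrier h, the gain alpha, and the scenario below are assumptions.
import numpy as np

def cbf_filter(x, u_des, obstacle, radius, alpha=1.0):
    """Project a desired velocity command onto the CBF constraint
    grad_h(x) . u >= -alpha * h(x) for a single integrator x_dot = u,
    with h(x) = ||x - obstacle||^2 - radius^2 (h >= 0 is the safe set)."""
    h = np.dot(x - obstacle, x - obstacle) - radius**2   # current "safety level"
    grad_h = 2.0 * (x - obstacle)
    slack = np.dot(grad_h, u_des) + alpha * h
    if slack >= 0.0:
        return u_des                                     # desired command already safe
    # Closed-form solution of the QP: min ||u - u_des||^2 s.t. grad_h . u >= -alpha * h
    return u_des - slack * grad_h / np.dot(grad_h, grad_h)

# Example: a command straight toward an obstacle gets scaled back at the boundary
# of the constraint.
x = np.array([0.0, 0.0])
u = cbf_filter(x, u_des=np.array([1.0, 0.0]),
               obstacle=np.array([1.5, 0.0]), radius=1.0)
print(u)  # roughly [0.42, 0.], satisfying grad_h . u = -alpha * h
```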

Bio:   Sylvia Herbert started as an Assistant Professor in Mechanical and Aerospace Engineering at UC San Diego in 2021. She runs the Safe Autonomous Systems Lab within the Contextual Robotics Institute.

Previously she was a PhD student with Prof. Claire Tomlin at UC Berkeley.  She is the recipient of the ONR Young Investigator Award, NSF GRFP, a UC Berkeley Outstanding Graduate Student Instructor Award, and the UC Berkeley Demetri Angelakos Memorial Achievement Award for Altruism.

 

 

Scaling Sim2Real Learning For Robotic Rearrangement

Date:  3/9/23

Speaker:  Adithyavairavan Murali

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:  Rearrangement is a fundamental task in robotic manipulation that, when solved, will help us achieve the dream of robot butlers working seamlessly in human spaces like homes, factories, and hospitals. In this talk I’ll present some recent work on 3D synthetic content generation and new approaches for neural motion planning. Training models on this large-scale simulated data allows us to generalize directly to rearrangement in the real world from just raw camera observations as input, without training on any real data.

Bio:  Adithya Murali is a scientist on the NVIDIA Robotics research team. He received his PhD from the Robotics Institute, Carnegie Mellon University, where he was supported by the Uber Presidential Fellowship. During his PhD, he also spent time at Meta AI Research, where he led the development of the pyrobot.org and low-cost robot projects. His work has been a Best Paper finalist at ICRA 2015 and 2020 and has been covered by WIRED, the New York Times, and other outlets. His general interests are in robotic manipulation, 3D vision, synthetic content generation, and learning.

 

Enabling Humans and Robots to Predict the Other’s Behavior from Small Datasets

Date:  3/2/2023

Speaker:  Vaibhav Unhelkar 

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: We are steadily moving towards a future where humans work with robotic assistants, robot teammates, and even robotic tutors. Towards realizing this future, it is essential to train both robots and humans to work with each other. My research develops computational foundations for enabling this human-robot training. This talk will begin with the problem of training robots to work with humans. To address this problem, I will summarize recent imitation learning techniques – FAMM and BTIL – that explicitly model partial observability of human behavior. Coupled with POMDP solvers, these techniques enable robots to predict and adapt to human behavior during collaborative task execution. Second, I will summarize AI Teacher: an explainable AI framework for training humans to work with robots. By leveraging humans’ natural ability to model others (Theory of Mind), the AI Teacher framework reduces the number of interactions it takes for humans to arrive at predictive models of robot behavior. The talk will conclude with implications of these techniques for human-robot collaboration.
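As a loose illustration of one agent building a predictive model of the other from a handful of interactions, the toy sketch below maintains a Bayesian belief over a small set of candidate behavior models; this is not FAMM, BTIL, or the AI Teacher framework, and the models, states, and actions are made up for illustration:

```python
# Toy model-of-the-other sketch: maintain a belief over candidate behavior
# models and update it from observed actions. Purely illustrative.

# Each hypothetical model gives P(action | state); states and actions are symbolic.
MODELS = {
    "cautious":  {("near_human", "slow_down"): 0.9, ("near_human", "keep_speed"): 0.1},
    "efficient": {("near_human", "slow_down"): 0.3, ("near_human", "keep_speed"): 0.7},
}

def update_belief(belief, state, action):
    """One Bayes step: belief[m] is proportional to belief[m] * P(action | state, m)."""
    posterior = {m: belief[m] * MODELS[m].get((state, action), 1e-6) for m in belief}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

belief = {"cautious": 0.5, "efficient": 0.5}
for obs in [("near_human", "slow_down"), ("near_human", "slow_down")]:
    belief = update_belief(belief, *obs)
print(belief)  # probability mass shifts toward the "cautious" model
```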

Bio:  Vaibhav Unhelkar is an Assistant Professor of Computer Science at Rice University, where he leads the Human-Centered AI and Robotics (HCAIR) research group. Unhelkar has developed algorithms to enable fluent human-robot collaboration and, with industry collaborators, deployed robots among humans. Ongoing research in his group includes the development of algorithms and systems to model human behavior, train human-robot teams, and improve the transparency of AI systems. Unhelkar received his doctorate in Autonomous Systems at MIT (2020) and completed his undergraduate education at IIT Bombay (2012). He serves as an Associate Editor for IEEE Robotics and Automation Letters and is the recipient of the AAMAS 2022 Best Program Committee Member Award. Before joining Rice, Unhelkar worked as a robotics researcher at Google X, the Moonshot Factory.