Seminars

Join the Robotics Listserv

To subscribe to event updates, send an email to robotics-l-request@cornell.edu with “join” in the subject line.

Safety and Generalization Guarantees for Learning-Based Control of Robots

Anirudha Majumdar, Princeton University

12/15/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Imagine an unmanned aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to succeed on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize well to environments that our robot has not previously encountered? Unfortunately, current state-of-the-art approaches either do not generally provide such guarantees or do so only under very restrictive assumptions. This is a particularly pressing challenge for safety-critical robotic systems with rich sensory inputs (e.g., vision) that employ neural network-based control policies.

In this talk, I will present approaches for learning control policies for robotic systems that provably generalize well with high probability to novel environments. The key technical idea behind our approach is to leverage tools from generalization theory (e.g., PAC-Bayes theory) and the theory of information bottlenecks. We apply our techniques to examples including navigation and grasping in order to demonstrate the potential to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
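For background, PAC-Bayes bounds of the kind referenced in the abstract take roughly the following standard (McAllester-style) form; this is a generic statement for context, not the talk's specific result, and the notation is ours:

```latex
% With probability at least 1 - \delta over a draw of N training
% environments, for every posterior distribution Q over policies:
\mathbb{E}_{\pi \sim Q}\big[ C(\pi) \big]
  \;\le\;
\mathbb{E}_{\pi \sim Q}\big[ \hat{C}_N(\pi) \big]
  \;+\;
\sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{N}}{\delta}}{2N}}
% where C(\pi) is the expected cost of policy \pi on a novel
% environment, \hat{C}_N(\pi) its empirical cost on the N training
% environments, and P a fixed prior over policies chosen before
% seeing the data.
```

The bound guarantees generalization to unseen environments in terms of quantities computable from the training set alone, which is what makes it attractive for certifying learned control policies.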

Perception in Action

Silvia Ferrari, Cornell University

12/8/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Autonomous robots equipped with on-board cameras are becoming crucial to both civilian and military applications because of their ability to assist humans in carrying out dangerous yet vital missions. Existing computer vision and perception algorithms have limited real-time applicability in agile and autonomous robots, such as micro aerial vehicles, due to their heavy computational requirements and slow reaction times. Event-based cameras have the potential to overcome these limitations, but their real-time implementations to date have been limited to obstacle avoidance. This talk presents an approach that departs from the usual paradigm of treating computer vision and robot control as separate processes and introduces a new class of active perception and motion control algorithms that are closely intertwined. This perception-in-action approach not only accounts for but also exploits the known ego motion of the robot-mounted camera to perform many simultaneous functionalities dynamically, in fast-changing environments, without relying on wearable devices, tags, or external motion capture. Inspired by animal perception and sensory embodiment, our approach enables an agile camera-equipped aerial robot to perceive its surroundings in real time and carry out tasks based on a myriad of visual information, known as exteroceptive stimuli, integrated with proprioceptive feedback about the robot state or ego motion. This tight integration of perception and control results in a perception-in-action paradigm that allows different people to interact with the robot using only natural language and hand gestures, as both move in unknown environments populated with people, vehicles, and animals, subject to variable winds and natural or artificial illumination.

Learning Communication for Decentralized Coordination in Multi-Agent Systems

Amanda Prorok, University of Cambridge

12/1/2020

Location: Zoom

Time: 12:00 p.m.

Abstract: Effective communication is key to successful, decentralized, multi-agent coordination. Yet, it is far from obvious what information is crucial to the task at hand, and how and when it must be shared among agents. In this talk, I discuss our recent work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. In my first case-study, I show how we use GNNs to find a decentralized solution to the multi-agent path finding problem, which is known to be NP-hard. I demonstrate how our GNN-based policy is able to achieve near-optimal performance, at a fraction of the real-time computational cost. Secondly, I show how GNN-based reinforcement learning can be leveraged to learn inter-agent communication policies. In this case-study, I demonstrate how non-shared optimization objectives can lead to adversarial communication strategies. Finally, I address the challenge of learning policies for autonomous agents operating in a shared physical workspace, where the absence of collisions cannot be guaranteed. I conclude the talk by presenting a multi-vehicle mixed reality framework that facilitates the process of safely learning multi-agent navigation behaviors.
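The GNN-based policies discussed above rest on local message passing: each agent updates its state using only information from its graph neighbors, which is what makes the resulting policy decentralized. A minimal NumPy sketch of one such layer is below; the function name, mean aggregation, and tanh nonlinearity are our illustrative choices, not the talk's specific architecture.

```python
import numpy as np

def gnn_layer(X, A, W_self, W_neigh):
    """One graph-convolution layer: each agent i updates its feature
    vector using only its own state and the mean of its neighbors',
    so the computation is decentralized by construction.

    X: (n_agents, d) agent features; A: (n_agents, n_agents) 0/1 adjacency.
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # avoid divide-by-zero
    neigh_mean = (A @ X) / deg                         # aggregate neighbor features
    return np.tanh(X @ W_self + neigh_mean @ W_neigh)  # combine and squash

# Toy example: 3 agents on a line graph (0-1-2), 2-dim features.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
W_self = rng.standard_normal((2, 2))
W_neigh = rng.standard_normal((2, 2))
H = gnn_layer(X, A, W_self, W_neigh)   # (3, 2) updated agent features
```

Because agent 0 and agent 2 are not connected, a single layer never mixes their features; stacking layers widens each agent's receptive field one hop at a time, mirroring multi-hop communication.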

Towards Compositional Generalization in Robot Learning

Danfei Xu, Stanford University

11/24/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: As robot hardware becomes more capable, we will want robots to assist us with a wide range of long-horizon tasks in open-world environments, such as cooking in a messy kitchen. This requires robots to generalize to new tasks and situations they have never seen before. Despite substantial progress, most of today’s data-driven robot learning systems are limited to optimizing for a single environment and task. On the other hand, long-horizon tasks are composable by nature: short primitives such as grasp-mug and open-drawer constitute long manipulation sequences; a composite goal such as cook-meal can be broken down into simpler subgoals such as preparing individual ingredients. However, there are multitudes of challenges in extracting these structures from the unstructured world, organizing them into coherent task structures, and composing them to solve new tasks. In this talk, I will present some of my Ph.D. work on developing compositional representations and structured learning algorithms that enable robots to generalize across long-horizon manipulation tasks.

An “Additional View” on Human-Robot Interaction and Autonomy in Robot-Assisted Surgery

Alaa Eldin Abdelaal, University of British Columbia

11/17/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Robot-assisted surgery (RAS) has gained momentum over the last few decades with nearly 1,200,000 RAS procedures performed in 2019 alone using the da Vinci Surgical System, the most widely used surgical robotics platform. The current state-of-the-art surgical robotic systems use only a single endoscope to view the surgical field. In this talk, we present a novel design of an additional “pickup” camera that can be integrated into the da Vinci Surgical System. We then explore the benefits of our design for human-robot interaction (HRI) and autonomy in RAS. On the HRI side, we show how this “pickup” camera improves depth perception as well as how its additional view can lead to better surgical training. On the autonomy side, we show how automating the motion of this camera provides better visualization of the surgical scene. Finally, we show how this automation work inspires the design of novel execution models of the automation of surgical subtasks, leading to superhuman performance.

Robot Learning in the Wild

Lerrel Pinto, NYU

11/3/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: While robotics has made tremendous progress over the last few decades, most success stories are still limited to carefully engineered and precisely modeled environments. Interestingly, one of the most significant successes in the last decade of AI has been the use of Machine Learning (ML) to generalize and robustly handle diverse situations. So why don’t we just apply current learning algorithms to robots? The biggest reason is a complicated relationship between data and robotics. In other fields of AI, such as computer vision, we were able to collect diverse, real-world, large-scale data with plentiful supervision. These three key ingredients, which fueled the success of deep learning in other fields, are the key bottlenecks in robotics: we do not have millions of training examples for robots; it is unclear how to supervise robots; and, most importantly, simulation and lab data are neither real-world nor diverse. My research has focused on rethinking the relationship between data and robotics to fuel the success of robot learning. Specifically, in this talk, I will discuss three aspects of data that will bring us closer to generalizable robotics: (a) the size of the data we can collect, (b) the amount of supervisory signal we can extract, and (c) the diversity of the data we can get from robots.

Robots in education – how robots can help students learn math, science, and engineering

Harshal Chhaya and Ayesha Mayhugh, TI

10/27/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Robots are a fun and engaging way to learn a variety of subjects and concepts – from middle school math to autonomous driving using sensors. In this talk, we will discuss two of TI’s educational robotic products – TI-Rover and TI-RSLK – and how they are being used to teach students across all grades. We will share lessons we learned along the way. We will also share some of the engineering trade-offs we had to make in the design for these robots.

Information Theoretic Regret Bounds for Online Nonlinear Control

Wen Sun, Cornell University

10/21/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: This work studies the problem of sequential control in an unknown, nonlinear dynamical system, where we model the underlying system dynamics as an unknown function in a known Reproducing Kernel Hilbert Space. This framework yields a general setting that permits discrete and continuous control inputs as well as non-smooth, non-differentiable dynamics. Our main result, the Lower Confidence-based Continuous Control algorithm, enjoys a near-optimal O(√T) regret bound against the optimal controller in episodic settings, where T is the number of episodes. The bound has no explicit dependence on the dimension of the system dynamics, which could be infinite, but instead depends only on information-theoretic quantities. We empirically demonstrate its application to a number of nonlinear control tasks and show the benefit of exploration for learning model dynamics.

Joint work with Sham Kakade, Akshay Krishnamurthy, Kendall Lowrey, Motoya Ohnishi. https://arxiv.org/pdf/2006.12466.pdf
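For context, the episodic regret bounded in the abstract above is the standard notion of cumulative suboptimality against the best controller; the notation below is ours, not necessarily the paper's:

```latex
% Regret of the learner's policies \pi_1, \dots, \pi_T over T episodes,
% against the optimal controller \pi^\ast:
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \Big( J(\pi^\ast) - J(\pi_t) \Big)
% where J(\pi) denotes the expected episodic reward of policy \pi.
% An O(\sqrt{T}) bound implies the average per-episode suboptimality,
% \mathrm{Regret}(T)/T, vanishes as T grows, i.e., the learner's
% performance approaches that of the optimal controller.
```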

Challenges & Opportunities in Maritime Robotics

Matthew Bays, Naval Surface Warfare Center, Panama City Division (NSWC PCD)

10/13/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: Interest in unmanned systems has increased considerably within the maritime domain, and specifically the U.S. Navy, over the last several decades. However, the littoral (shallow water) and undersea environments pose unique challenges, creating a need for more autonomous, more reliable, and more modular unmanned systems than are often found in other domains. In this talk, we will provide an overview of the particular challenges the U.S. Navy is attempting to solve or mitigate within the littoral environment and the solutions currently in development. These challenges include the unique communication constraints of the underwater domain, the difficult maritime sensing environment, and the reliability needs of undersea systems.

RoboGami

10/6/2020

Location: Zoom

Time: 2:55 p.m.

Abstract: GSGIC (Graduate Students for Gender Inclusion in Computing) + RGSO (Robotics Graduate Student Organization/Robotics Seminar) invite you to join us for an afternoon of origami to build community and build paper decorations for your home/office.

If you requested materials, you should be receiving them in the mail. If you did not previously RSVP, feel free to join the event with your own paper anyway!