Join the Robotics Listserv

To subscribe to event updates, send an email to the listserv with “join” in the subject line.


Building Distributed Robot Teams that Search, Track, and Explore: Systems in Theory and in a Dark and Muddy Limestone Mine

Micah Corah, NASA Jet Propulsion Laboratory


Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Processes of observing unknown and uncertain objects and environments are pervasive in robotics applications spanning autonomous mapping, tracking, and inspection. Autonomy is especially important in applications such as search and rescue, where mobile robots must penetrate rubble beyond the communication range of human operators.

This talk will be split into two parts: the first will focus on methods for informative planning and active perception for one or more robots, and the second will discuss the DARPA Subterranean Challenge and my experience competing with team CoSTAR.

Autonomous perception tasks such as mapping a building or tracking targets often produce optimization problems that are difficult to solve (NP-hard) and yet highly structured. Exploiting this structure can greatly simplify these problems, for example by enabling efficient and accurate objective evaluation or efficient distributed algorithms with strong suboptimality guarantees. The methods we will discuss enable individual robots and teams to navigate and observe unknown environments at high speeds while quickly and collectively adapting to information from new observations.
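The abstract does not name the structure being exploited, but one classic example in multi-robot informative planning is submodularity of coverage-style objectives, for which greedy selection carries a constant-factor suboptimality guarantee. The toy viewpoints and cell sets below are invented for illustration, not from the talk:

```python
from itertools import combinations

# Hypothetical viewpoints, each observing a set of map cells.
viewpoints = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {5, 6},
    "D": {6, 7, 8},
    "E": {1, 8},
}

def coverage(selected):
    """Number of distinct cells observed by the selected viewpoints."""
    if not selected:
        return 0
    return len(set().union(*(viewpoints[v] for v in selected)))

def greedy(k):
    """Pick k viewpoints, each maximizing the marginal coverage gain."""
    chosen = []
    for _ in range(k):
        best = max(set(viewpoints) - set(chosen),
                   key=lambda v: coverage(chosen + [v]))
        chosen.append(best)
    return chosen

k = 2
greedy_val = coverage(greedy(k))
opt_val = max(coverage(list(s)) for s in combinations(viewpoints, k))
print(greedy_val, opt_val)  # greedy is within (1 - 1/e) of optimal
```

For this coverage objective the greedy value is guaranteed to be at least (1 - 1/e) of the brute-force optimum; on this tiny instance greedy actually matches it.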

Still, there are significant gaps between laboratory (and theoretic) settings and the field. The multi-robot systems deployed in the DARPA Subterranean Challenge Finals represent incredible advances in the ability of teams of robots to operate autonomously and intelligently in harsh and unstructured underground environments. Yet, the development of these systems is focused primarily on reliability and redundancy, and simple methods that work well enough in practice often prevail over seemingly advanced methods. This talk will focus on the performance of these multi-robot aerial and ground systems in the competition and speculate on what to expect in the near future.

Bio: Micah is a postdoc at the NASA Jet Propulsion Laboratory with Dr. Ali Agha, where he competed with team CoSTAR in the DARPA Subterranean Challenge. Previously, Micah completed a Ph.D. in Robotics at Carnegie Mellon University, advised by Prof. Nathan Michael, focusing on distributed perception planning and multi-robot exploration. Micah is deeply interested in problems related to navigation, perception planning, and control for mobile robots, especially aerial robot teams.


Soft Structures: The Backbone of Soft Robots

Gina Olson, Carnegie Mellon University


Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Soft robots use geometric and material deformation to absorb impacts, mimic natural motions, mechanically adapt to motion or unevenness, and store and reuse energy. By virtue of these traits, soft robots offer potential for robots that grasp robustly, adapt to unstructured environments, and work safely alongside, or are even worn by, humans. However, compliance breaks many of the assumptions underpinning traditional approaches to robot design, dynamics, control, sensing, and planning, and new or modified approaches are required. During this talk, I will introduce the concept of soft robots as soft structures, with capabilities and behaviors derived from the type and organization of their active and passive elements. I will present my current and prior work on the development and analysis of soft robotic structures, with a particular focus on the mechanics of soft arms. I will briefly discuss ongoing work on a modular soft architecture actuated by nitinol wire.

Bio: Dr. Gina Olson is a postdoctoral research scientist working in Prof. Carmel Majidi’s Soft Machines Lab at Carnegie Mellon University. She earned her doctorate in Robotics and Mechanical Engineering at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, where she was advised by Dr. Yiğit Mengüç and Prof. Julie A. Adams. Her current research interests are the development and study of the soft and compliant structures within soft robots, and her past research interests lie in the area of deployable space structures for small satellites. She previously worked as a Technical Lead Engineer at Meggitt Polymers and Composites, where she led the development and certification of fire seals for aircraft engines.


Design, Modeling, and Control of Micro-Scale and Meso-Scale Tendon-Driven Surgical Robots

Yash Chitalia, Harvard Medical School and Boston Children’s Hospital


Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Manual manipulation of passive surgical tools is time-consuming, with uncertain results. Steerable robotic micro-catheters and miniature endoscopes are essential to the operating room of the future. This talk introduces the design of a micro-scale (outer diameter: 0.4 mm) COaxially Aligned STeerable (COAST) guidewire/catheter robot for cardiovascular surgeries. This robot demonstrates variable and independently controlled bending length and curvature of the distal end, allowing for follow-the-leader motion. The design, kinematics, statics models, and a controller for this robot are presented. The robot's ability to accurately navigate anatomical bifurcations and tortuous angles is also demonstrated in phantom vascular structures. This talk also introduces the design, analysis, and control of a meso-scale (outer diameter: 1.93 mm) two-degree-of-freedom robotic bipolar electrocautery tool for the treatment of pediatric hydrocephalus. A static model and a disturbance-observer-based controller are developed for this tool to provide precise force control and compensate for joint hysteresis.
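As background for the kinematic models mentioned above: tendon-driven continuum robots are commonly described with a constant-curvature arc model. The sketch below is a generic textbook model under that assumption, not the COAST robot's actual kinematics:

```python
import numpy as np

def tip_position(kappa, length):
    """Planar tip position of a constant-curvature bending segment.

    A segment of arc length `length` bent with curvature `kappa`
    places its tip (in the bending plane) at
    x = (1 - cos(kappa*L)) / kappa, z = sin(kappa*L) / kappa.
    """
    if abs(kappa) < 1e-9:  # straight segment: tip lies on the axis
        return np.array([0.0, length])
    return np.array([(1 - np.cos(kappa * length)) / kappa,
                     np.sin(kappa * length) / kappa])

# A straight 10 mm segment points along its axis...
straight = tip_position(0.0, 10.0)
# ...while bending it through 90 degrees (kappa * L = pi/2)
# moves the tip to (2L/pi, 2L/pi).
quarter = tip_position(np.pi / 20, 10.0)
print(straight, quarter)
```

In tendon-driven designs, curvature is in turn driven by tendon displacement; the mapping above covers only the arc-to-tip portion of the chain.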

Bio: Yash Chitalia is a Research Fellow in Cardiac Surgery at the Harvard Medical School and Boston Children’s Hospital, where he works in the Pediatric Cardiac Bioengineering Lab. His research revolves around the design, modeling, and control of minimally invasive surgical robots. This talk details his doctoral research in the Medical Robotics and Automation (RoboMed) laboratory at the Georgia Institute of Technology.


Lightning Talks

Jonathan Chang, David Goedicke, Natalie Friedman, Travers Rhodes, PhD students, Cornell Tech


Location: 122 Gates Hall

Time: 2:40 p.m.


Jonathan Chang: Mitigating Covariate Shift in Imitation Learning

Covariate shift is a core issue in Imitation Learning (IL). Traditional IL methods like behavior cloning (BC) (Pomerleau, 1989), while simple, suffer from covariate shift, learning a policy that can make arbitrary mistakes in parts of the state space not covered by the expert dataset. This leads to compounding errors in the agent’s performance (Ross and Bagnell, 2010), hurting generalization in practice.
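To make this failure mode concrete, here is a toy behavior-cloning sketch. The 1-D "expert" policy, the linear policy class, and all numbers are assumptions of this example, not from the talk:

```python
import numpy as np

# Hypothetical expert: a = sin(s), with demonstrations only
# covering states s in [-1, 1].
rng = np.random.default_rng(0)
expert_states = rng.uniform(-1.0, 1.0, size=500)
expert_actions = np.sin(expert_states)

# Behavior cloning: supervised regression on the demonstrations,
# here with a linear policy class a_hat = w*s + b.
w, b = np.polyfit(expert_states, expert_actions, deg=1)

def policy(s):
    return w * s + b

# In-distribution, the clone tracks the expert closely...
id_error = abs(policy(0.5) - np.sin(0.5))

# ...but in states the expert never visited, its mistakes are
# arbitrary -- and during rollouts such mistakes push the agent
# even further from the expert's state distribution (compounding).
ood_error = abs(policy(3.0) - np.sin(3.0))

print(f"in-distribution error:     {id_error:.3f}")
print(f"out-of-distribution error: {ood_error:.3f}")
```

The out-of-distribution error is orders of magnitude larger than the in-distribution one, which is exactly the gap that covariate-shift-aware methods try to close.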

In this talk, I will present our recent work studying offline Imitation Learning (IL), where an agent learns to imitate an expert demonstrator without additional online environment interactions. Instead, the learner is presented with a static offline dataset of state-action-next-state transition triples from a potentially less proficient behavior policy. We introduce Model-based IL from Offline data (MILO): an algorithmic framework that utilizes the static dataset to solve the offline IL problem efficiently and mitigate this covariate shift phenomenon.

Natalie Friedman: The Functions of Clothes for Robots

Most robots are unclothed. I believe that robot clothes present an underutilized opportunity for the field of designing interactive systems. Clothes can help robots become better robots, by helping them be useful in a new, wider array of contexts, or better adapt and function in the contexts they are already in. To make clothes for robots, I am learning how to drape fabric onto robots from Kari Love, a Broadway costumer. In this lightning talk I will share our process, including swatching and draping on a Kinova Gen 3 and Blossom.

David Goedicke: Imagining Future Automations with VR

I build specialized virtual reality simulators that allow us to assess specific interactions between people and machines. Many of these simulators focus on autonomous vehicles; recent projects have begun integrating ROS2 into the simulations to test and validate programmed robotic behaviors in VR before deploying them on any robot.

Travers Rhodes: Local Disentanglement in Variational Auto-Encoders Using Jacobian L1 Regularization

There have been many recent advances in representation learning; however, unsupervised representation learning can still struggle with model identification issues. Variational Auto-Encoders (VAEs) and their extensions such as Beta-VAEs have been shown to locally align latent variables with PCA directions, which can help to improve model disentanglement under some conditions. We propose adding an L1 loss (sparsity cost) to the VAE’s generative Jacobian during training to encourage local latent variable alignment with independent factors of variation in the data. I’ll present qualitative and quantitative results that show our added L1 cost encourages local axis alignment of the latent representation with individual factors of variation.
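The form of the proposed regularizer can be sketched with a toy decoder and a finite-difference Jacobian. This is illustrative only (the decoder weights are made up, and the actual method would apply the penalty during VAE training via automatic differentiation):

```python
import numpy as np

# Toy decoder g: R^2 -> R^3 standing in for a VAE's generative network.
W = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.5, -0.5]])

def decoder(z):
    return np.tanh(W @ z)

def jacobian_fd(f, z, eps=1e-5):
    """Finite-difference Jacobian of f at z (column i = d f / d z_i)."""
    base = f(z)
    cols = []
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        cols.append((f(z + dz) - base) / eps)
    return np.stack(cols, axis=1)

def l1_jacobian_penalty(f, z):
    """Sparsity cost sum_ij |d g_j / d z_i| on the decoder Jacobian.

    Added to the VAE loss, this encourages each latent variable to
    affect only a few output dimensions, i.e. local axis alignment
    with individual factors of variation."""
    return np.abs(jacobian_fd(f, z)).sum()

z = np.array([0.1, -0.2])
penalty = l1_jacobian_penalty(decoder, z)
print(f"L1 Jacobian penalty at z: {penalty:.4f}")
```

In a real training loop the penalty would be a differentiable term weighted against the reconstruction and KL losses; the finite-difference version here only shows what quantity is being penalized.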

What we talk about when we talk about tooling

Cornell Robotics Grad Students


Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Join RGSO for our first homegrown seminar: a discussion on tooling. A few of our very own students are ready to talk about their workflows and tips and tricks for how they get stuff done, whether it's programming or research reviews. Come ready to listen, learn, and, if you have a cool workflow, share it with others.

Certifiable Outlier-Robust Geometric Perception: Robots that See through the Clutter with Confidence

Heng Yang, Massachusetts Institute of Technology


Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Geometric perception is the task of estimating geometric models from sensor measurements and priors. The ubiquitous presence of outliers (measurements that carry little or no information about the models to be estimated) makes it theoretically intractable to perform estimation with guaranteed optimality. Despite this theoretical intractability, safety-critical robotic applications still demand trustworthiness and performance guarantees from perception algorithms. In this talk, I present certifiable outlier-robust geometric perception, a new paradigm for designing tractable algorithms that enjoy rigorous performance guarantees: they commonly return an optimal estimate with a certificate of optimality, but declare failure and provide a measure of suboptimality on worst-case instances. In particular, I present three algorithms in the certifiable perception toolbox: (i) a pruner that uses graph theory to filter out gross outliers and boost robustness against over 95% outliers; (ii) an estimator that leverages graduated non-convexity to compute the optimal estimate with high probability of success; and (iii) a certifier that employs sparse semidefinite programming (SDP) relaxation and a novel SDP solver to endow the estimator with an optimality certificate or escape local minima otherwise. I showcase certifiable outlier-robust perception on real robotic applications such as scan matching, satellite pose estimation, and vehicle pose and shape estimation.
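The graduated non-convexity (GNC) idea behind the estimator can be sketched on a toy 1-D robust estimation problem. The robust cost (Geman-McClure), inlier threshold, annealing schedule, and data below are generic choices for illustration, not the talk's actual algorithm:

```python
import numpy as np

# Toy data: 14 inliers around the true value 2.0, plus 6 gross outliers.
rng = np.random.default_rng(1)
inliers = 2.0 + 0.05 * rng.standard_normal(14)
outliers = rng.uniform(10.0, 20.0, size=6)
data = np.concatenate([inliers, outliers])

c = 0.5                    # inlier threshold of the robust cost
x = data.mean()            # initial estimate, corrupted by outliers
# Start with a heavily relaxed (near-convex) surrogate cost:
mu = 2.0 * np.max((data - x) ** 2) / c**2

while mu > 1.0:
    r2 = (data - x) ** 2
    # Geman-McClure weights under the mu-relaxed surrogate:
    # near-uniform when mu is large, sharply outlier-rejecting as mu -> 1.
    w = (mu * c**2 / (r2 + mu * c**2)) ** 2
    x = np.sum(w * data) / np.sum(w)  # weighted least-squares update
    mu /= 1.4                          # anneal toward the true robust cost

print(f"robust estimate: {x:.3f} (true value 2.0)")
```

The annealing lets the solver track a sequence of easier problems instead of attacking the non-convex robust cost directly; the certifier described in the talk would then be the component that checks whether the result is actually optimal.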

Bio: Heng Yang is a Ph.D. candidate in the Department of Mechanical Engineering and the Laboratory for Information & Decision Systems at the Massachusetts Institute of Technology, working with Prof. Luca Carlone. His research interests include large-scale convex optimization, semidefinite relaxation, robust estimation, and machine learning, applied to robotics and trustworthy autonomy. His work includes developing certifiable outlier-robust machine perception algorithms, large-scale semidefinite programming solvers, and self-supervised geometric perception frameworks. Heng Yang is a recipient of the Best Paper Award in Robot Vision at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a Best Paper Award Honorable Mention from the 2020 IEEE Robotics and Automation Letters (RA-L), and a Best Paper Award Finalist at the 2021 Robotics: Science and Systems (RSS) conference. He is a Class of 2021 RSS Pioneer.


Formalizing the Structure of Multiagent Domains for Autonomous Robot Navigation in Human Spaces

Christoforos Mavrogiannis, University of Washington


Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Pedestrian scenes pose great challenges for robots due to the lack of formal rules regulating traffic, the lack of explicit coordination among agents, and the high dimensionality of the underlying space of outcomes. However, humans navigate with ease and comfort through a variety of complex multiagent environments, such as busy train stations, crowded malls or academic buildings. Human effectiveness in such domains can be largely attributed to cooperation, which introduces structure to multiagent behavior. In this talk, I will discuss how we can formalize this structure through the use of representations from low-dimensional topology. I will describe how these representations can be used to build prediction and planning algorithms for socially compliant robot navigation in pedestrian domains and show how their machinery may transfer to additional challenging environments such as uncontrolled street intersections.

Bio: Christoforos (Chris) Mavrogiannis is a postdoctoral research associate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Prof. Siddhartha Srinivasa. His interests lie at the intersection of motion planning, multiagent systems, and human-robot interaction. He is particularly interested in the design and evaluation of algorithms for multiagent domains in human environments. To this end, he employs tools from motion planning and machine learning, and often seeks insights from (algebraic) topology and the social sciences. Chris has been a best-paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), and was selected as a Pioneer at the HRI and RSS conferences. He has also led open-source initiatives (OpenBionics, MuSHR), for which he has been a finalist for the Hackaday Prize and a winner of the Robotdalen International Innovation Award. Chris holds M.S. and Ph.D. degrees from Cornell University, and a Diploma in mechanical engineering from the National Technical University of Athens.

Welcome to the Fall 2021 Robotics Seminar!

Tapomayukh Bhattacharjee and Claire Liang


Location: 122 Gates Hall

Time: 2:40 p.m.

Hey everyone! Welcome back for the semester. Robotics seminar is starting a new era and is (officially) a class again. The first seminar will cover the logistics of what to expect from this semester’s seminar/class as well as serve as an introduction to Cornell Robotics as a community. We will be announcing some new resources available (such as the new Robot Library) and taking feedback for what everyone would like to see in the future. The Robotics Graduate Student Organization will also cover some of what is to come for graduate students. If you’re new to the Cornell Robotics community, be sure to come for this week’s seminar!

P.S. Unfortunately, since Cornell is at a yellow COVID level, we will not have snacks for the foreseeable future.