Design, Modeling, and Control of Micro-Scale and Meso-Scale Tendon-Driven Surgical Robots

Yash Chitalia, Harvard Medical School and Boston Children’s Hospital

10/7/2021

Location: 122 Gates Hall

Time: 2:40p.m.

Abstract: Manual manipulation of passive surgical tools is time-consuming, with uncertain results. Steerable robotic micro-catheters and miniature endoscopes are essential to the operating room of the future. This talk introduces the design of a micro-scale (outer diameter: 0.4 mm) COaxially Aligned STeerable (COAST) guidewire/catheter robot for cardiovascular surgeries. The robot demonstrates variable and independently controlled bending length and curvature at the distal end, allowing for follow-the-leader motion. The design, kinematics, statics models, and a controller for this robot are presented, and the robot's ability to accurately navigate anatomical bifurcations and tortuous angles is demonstrated in phantom vascular structures. The talk also introduces the design, analysis, and control of a meso-scale (outer diameter: 1.93 mm) two-degree-of-freedom robotic bipolar electrocautery tool for the treatment of pediatric hydrocephalus. A static model and a disturbance-observer-based controller are developed for this tool to provide precise force control and compensate for joint hysteresis.
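As a rough illustration of the disturbance-observer idea, the toy simulation below uses a first-order plant with a constant disturbance standing in for the tool's joint; the model, gains, and function names are my own assumptions, not from the talk:

```python
def simulate_dob(ref=1.0, d_true=2.0, a=-1.0, b=1.0, dt=0.01, steps=5000,
                 kp=5.0, alpha=0.1):
    """Toy plant x' = a*x + b*(u + d) with an unknown constant disturbance d.
    A disturbance observer backs out the disturbance implied by the nominal
    model and the controller cancels it: u = kp*(ref - x) - d_hat."""
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = kp * (ref - x) - d_hat              # proportional control + cancellation
        x_next = x + dt * (a * x + b * (u + d_true))
        # disturbance implied by the measured state change under the nominal model
        d_obs = ((x_next - x) / dt - a * x) / b - u
        d_hat += alpha * (d_obs - d_hat)        # low-pass filter the estimate
        x = x_next
    return x, d_hat

x, d_hat = simulate_dob()
print(round(d_hat, 3))  # the observer converges to the true disturbance
```

Note that the observer cancels the disturbance entirely, while the pure proportional term still leaves the usual steady-state offset (x settles at kp·ref/(kp − a), not at ref); a real controller would add integral action or model-based feedforward on top.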

Bio: Yash Chitalia is a Research Fellow in Cardiac Surgery at the Harvard Medical School and Boston Children’s Hospital, where he works in the Pediatric Cardiac Bioengineering Lab. His research revolves around the design, modeling, and control of minimally invasive surgical robots. This talk details his doctoral research in the Medical Robotics and Automation (RoboMed) laboratory at the Georgia Institute of Technology.

 

Lightning Talks

Jonathan Chang, David Goedicke, Natalie Friedman, Travers Rhodes, PhD students, Cornell Tech

9/30/2021

Location: 122 Gates Hall

Time: 2:40p.m.

Abstracts:

Jonathan Chang: Mitigating Covariate Shift in Imitation Learning

Covariate shift is a core issue in Imitation Learning (IL). Traditional IL methods like behavior cloning (BC) (Pomerleau, 1989), while simple, suffer from covariate shift: the learned policy can make arbitrary mistakes in parts of the state space not covered by the expert dataset. This leads to compounding errors in the agent's performance (Ross and Bagnell, 2010), hurting generalization in practice.
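The compounding-error phenomenon can be seen in a tiny simulation (my own toy model, not from the paper): suppose the cloned policy errs with probability eps on expert-covered states and, having no recovery data, errs on every subsequent step once it leaves them; average mistakes then grow superlinearly in the horizon.

```python
import random

def rollout_mistakes(horizon, eps, rng):
    """Count mistakes in one episode: on expert-covered states the cloned
    policy errs with prob eps; once off-distribution it errs every step."""
    mistakes, on_distribution = 0, True
    for _ in range(horizon):
        if on_distribution:
            if rng.random() < eps:
                mistakes += 1
                on_distribution = False  # one slip leaves the expert's state distribution
        else:
            mistakes += 1                # compounding: no recovery data
    return mistakes

def avg_mistakes(horizon, eps=0.01, episodes=5000, seed=0):
    rng = random.Random(seed)
    return sum(rollout_mistakes(horizon, eps, rng) for _ in range(episodes)) / episodes

for T in (10, 50, 100):
    print(T, round(avg_mistakes(T), 2))
```

Doubling the horizon more than doubles the average number of mistakes, which is the quadratic-in-horizon behavior Ross and Bagnell formalize.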

In this talk, I will present our recent work on offline Imitation Learning (IL), where an agent learns to imitate an expert demonstrator without additional online environment interactions. Instead, the learner is given a static offline dataset of state-action-next-state transition triples from a potentially less proficient behavior policy. We introduce Model-based IL from Offline data (MILO): an algorithmic framework that uses this static dataset to solve the offline IL problem efficiently and mitigate the covariate shift phenomenon.

Natalie Friedman: The Functions of Clothes For Robots

Most robots are unclothed. I believe that robot clothes present an underutilized opportunity for the field of designing interactive systems. Clothes can help robots become better robots by making them useful in a new, wider array of contexts, or by helping them adapt and function in the contexts they are already in. To make clothes for robots, I am learning how to drape fabric onto robots from Kari Love, a Broadway costumer. In this lightning talk I will share our process, including swatching and draping on a Kinova Gen 3 and Blossom.

David Goedicke: Imagining Future Automations with VR

I build specialized Virtual Reality simulators that let us assess specific interactions between people and machines. Many of these simulators concern autonomous vehicles; recent projects have begun integrating ROS2 into the simulations to test and validate programmed robotic behaviors in VR before deploying them on any robot.

Travers Rhodes: Local Disentanglement in Variational Auto-Encoders Using Jacobian L1 Regularization

There have been many recent advances in representation learning; however, unsupervised representation learning can still struggle with model identification issues. Variational Auto-Encoders (VAEs) and their extensions such as Beta-VAEs have been shown to locally align latent variables with PCA directions, which can help to improve model disentanglement under some conditions. We propose adding an L1 loss (sparsity cost) to the VAE’s generative Jacobian during training to encourage local latent variable alignment with independent factors of variation in the data. I’ll present qualitative and quantitative results that show our added L1 cost encourages local axis alignment of the latent representation with individual factors of variation.
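As a sketch of the added cost, the snippet below uses a toy tanh decoder and a finite-difference Jacobian; the real method would differentiate the trained generator, and all names and shapes here are illustrative assumptions:

```python
import numpy as np

def decoder(z, W):
    """Toy decoder standing in for the VAE generator g(z)."""
    return np.tanh(W @ z)

def jacobian_l1(z, W, h=1e-5):
    """L1 norm of the decoder Jacobian dg/dz at z, by finite differences."""
    base = decoder(z, W)
    total = 0.0
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = h
        col = (decoder(z + dz, W) - base) / h  # i-th Jacobian column
        total += np.abs(col).sum()
    return total

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))
z = rng.normal(size=3)
recon_loss = 0.0   # placeholder for the usual VAE ELBO terms
lam = 0.1          # sparsity weight (hyperparameter)
loss = recon_loss + lam * jacobian_l1(z, W)
print(round(loss, 4))
```

Penalizing the L1 norm of the Jacobian columns pushes each latent dimension to affect only a sparse subset of outputs, which is what encourages the local axis alignment described in the abstract.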

What we talk about when we talk about tooling

Cornell Robotics Grad Students

9/23/2021

Location: 122 Gates Hall

Time: 2:40p.m.

Abstract: Join RGSO for our first homegrown seminar: a discussion on tooling. We have a few of our very own students ready to talk about their workflows and tips-and-tricks for how they get stuff done, whether it’s programming or research reviews. Come ready to listen, learn, and, if you have a cool workflow to communicate to others, share.

Certifiable Outlier-Robust Geometric Perception: Robots that See through the Clutter with Confidence

Heng Yang, Massachusetts Institute of Technology

9/16/2021

Location: 122 Gates Hall

Time: 2:40p.m.

Abstract: Geometric perception is the task of estimating geometric models from sensor measurements and priors. The ubiquitous presence of outliers (measurements that carry little or no information about the models to be estimated) makes it theoretically intractable to perform estimation with guaranteed optimality. Despite this intractability, safety-critical robotic applications still demand trustworthiness and performance guarantees from perception algorithms. In this talk, I present certifiable outlier-robust geometric perception, a new paradigm for designing tractable algorithms that enjoy rigorous performance guarantees: they commonly return an optimal estimate with a certificate of optimality, but declare failure and provide a measure of suboptimality on worst-case instances. In particular, I present three algorithms in the certifiable perception toolbox: (i) a pruner that uses graph theory to filter out gross outliers and boost robustness against over 95% outliers; (ii) an estimator that leverages graduated non-convexity to compute the optimal estimate with high probability of success; and (iii) a certifier that employs a sparse semidefinite programming (SDP) relaxation and a novel SDP solver to endow the estimator with an optimality certificate, or to escape local minima otherwise. I showcase certifiable outlier-robust perception in real robotic applications such as scan matching, satellite pose estimation, and vehicle pose and shape estimation.
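Graduated non-convexity, the engine behind algorithm (ii), can be sketched on a one-dimensional robust estimation problem; this is my own toy instance with the Geman-McClure cost, and the parameter names are illustrative:

```python
import numpy as np

def gnc_gm_mean(measurements, c=0.5, mu_init=1e4, gamma=1.4, iters=100):
    """Robust scalar location estimate via graduated non-convexity (GNC)
    over the Geman-McClure cost: alternate a weighted least-squares solve
    with a closed-form weight update while annealing mu toward 1."""
    x = float(np.mean(measurements))    # start from the non-robust estimate
    mu = mu_init
    for _ in range(iters):
        r2 = (measurements - x) ** 2
        w = (mu * c**2 / (r2 + mu * c**2)) ** 2    # GNC-GM weights in [0, 1]
        x = float(np.sum(w * measurements) / np.sum(w))
        mu = max(mu / gamma, 1.0)                  # anneal: convex -> original cost
    return x

data = np.array([0.0] * 10 + [100.0])   # ten inliers, one gross outlier
print(round(gnc_gm_mean(data), 4))
```

Starting from a surrogate cost that is almost convex (large mu) and gradually restoring the original non-convex robust cost is what lets the estimator avoid the poor local minima that a direct robust solve would fall into.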

Bio: Heng Yang is a Ph.D. candidate in the Department of Mechanical Engineering and the Laboratory for Information & Decision Systems at the Massachusetts Institute of Technology, working with Prof. Luca Carlone. His research interests include large-scale convex optimization, semidefinite relaxation, robust estimation, and machine learning, applied to robotics and trustworthy autonomy. His work includes developing certifiable outlier-robust machine perception algorithms, large-scale semidefinite programming solvers, and self-supervised geometric perception frameworks. Heng Yang is a recipient of the Best Paper Award in Robot Vision at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a Best Paper Award Honorable Mention from the 2020 IEEE Robotics and Automation Letters (RA-L), and a Best Paper Award Finalist at the 2021 Robotics: Science and Systems (RSS) conference. He is a Class of 2021 RSS Pioneer.

 

Formalizing the Structure of Multiagent Domains for Autonomous Robot Navigation in Human Spaces

Christoforos Mavrogiannis, University of Washington

9/9/2021

Location: 122 Gates Hall

Time: 2:40p.m.

Abstract: Pedestrian scenes pose great challenges for robots due to the lack of formal rules regulating traffic, the lack of explicit coordination among agents, and the high dimensionality of the underlying space of outcomes. However, humans navigate with ease and comfort through a variety of complex multiagent environments, such as busy train stations, crowded malls or academic buildings. Human effectiveness in such domains can be largely attributed to cooperation, which introduces structure to multiagent behavior. In this talk, I will discuss how we can formalize this structure through the use of representations from low-dimensional topology. I will describe how these representations can be used to build prediction and planning algorithms for socially compliant robot navigation in pedestrian domains and show how their machinery may transfer to additional challenging environments such as uncontrolled street intersections.

Bio: Christoforos (Chris) Mavrogiannis is a postdoctoral research associate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Prof. Siddhartha Srinivasa. His interests lie at the intersection of motion planning, multiagent systems, and human-robot interaction. He is particularly interested in the design and evaluation of algorithms for multiagent domains in human environments. To this end, he employs tools from motion planning and machine learning, and often seeks insights from (algebraic) topology and social sciences. Chris has been a best-paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), and selected as a Pioneer at the HRI and RSS conferences. He has also led open-source initiatives (Openbionics, MuSHR), for which he has been a finalist for the Hackaday Prize and a winner of the Robotdalen International Innovation Award. Chris holds M.S. and Ph.D. degrees from Cornell University, and a Diploma in mechanical engineering from the National Technical University of Athens.

Welcome to the Fall 2021 Robotics Seminar!

Tapomayukh Bhattacharjee and Claire Liang

9/2/2021

Location: 122 Gates Hall

Time: 2:40p.m.

Hey everyone! Welcome back for the semester. Robotics seminar is starting a new era and is (officially) a class again. The first seminar will cover the logistics of what to expect from this semester’s seminar/class as well as serve as an introduction to Cornell Robotics as a community. We will be announcing some new resources available (such as the new Robot Library) and taking feedback for what everyone would like to see in the future. The Robotics Graduate Student Organization will also cover some of what is to come for graduate students. If you’re new to the Cornell Robotics community, be sure to come for this week’s seminar!

P.S. Unfortunately, since Cornell is at a yellow COVID level, we will not have snacks for the foreseeable future.

Integrating Robots and Ecology in Pollen-Limited Crops

3/10/2020

Location: Upson 106 Conference Room Next to the Lounge

Time: 2:55p.m.

Abstract: Although the majority of the human diet stems from staple grains, approximately 75% of agricultural crops need some amount of pollination. Global reliance on crop pollination is expanding, yet farmers in many parts of the world suffer increasingly unpredictable yields stemming from dwindling populations of wild pollinators and unsustainable losses of managed bees. We are launching a research project that aims to integrate robotic and bio-hybrid agents into the crop ecosystem to enable observation, estimation, and optimization of yield in pollen-limited crops. The core research challenge is to enable scalable and robust coordination of ubiquitous swarms composed of intelligent entities with varying degrees of capability, controllability, and cost. The project will revolve around a scheduling framework that assimilates and dispatches information and tasks at different levels of granularity, permitting robust progress by diverse agents in the face of failures and dynamic environments; we are also examining autonomous robots (drones and rovers) for managed pollination.

Academic Paper Writing Clinic: Principles and Practice

Guy Hoffman, Cornell University

2/18/2020

Location: Upson 106 Conference Room Next to the Lounge

Time: 2:45p.m.

Abstract: How does one write a good academic paper? What makes some papers easier to read than others? Are there techniques that can easily be applied to improve your paper? How do you overcome “blank-page syndrome”? In this workshop, I will share some of the lessons I have learned over years of writing academic and non-academic texts. I will analyze published papers and, if there is interest, propose strategies for students’ existing papers-in-process. Please send examples of your own writing that you would like us to discuss at least 48 hours before the seminar.

Teaser: Here are two of Donella Meadows’s [https://en.wikipedia.org/wiki/Donella_Meadows] tips for writing an op-ed column:

1. Be clear, not fancy: Use everyday language. Be specific, not abstract. Offer easily imaginable examples. Be sure your words make pictures in people’s heads. Be sure the pictures are the ones you intend.

2. Use most of your column for evidence: Tell stories, give statistics, show the impact of the problem or the solution on the real world. People can form their own conclusions if you give them the evidence. Don’t take much space for grand, abstract conclusions; let the reader form the conclusions.

Enabling Local-To-Global Behaviors Through a Scalable, Deformable Collective

12/10/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: Modular self-reconfigurable robots are typically composed of homogeneous units executing a set of programmed interactions with their neighbors based on a deterministic rule set. Some stochastic modular robotic systems instead exploit their physical design to arrive at a desired state, offering great potential for scalability. We propose combining an innovative hardware design with a control algorithm that allows intermodule interactions with some inherent randomness while ensuring successful attachment through permanent magnets. We present the FOAMbots: a scalable, planar, modular robot composed of inflatable units capable of onboard processing, actuation, sensing, and communication. Each module contains a poro-elastic foam that provides structural integrity while allowing fluid to flow through its volume. Pairs of permanent magnets along the modules’ perimeters enable attachment to adjacent modules, and low-cost, stretchable strain sensors allow modules to communicate and sense their surroundings. This presentation will introduce the hardware, present characterizations of the modular robot’s mechanical and locomotion properties, and discuss the algorithms currently being implemented to achieve local-to-global changes in the collective’s mechanical properties.

Heterogeneous Team of Robots: Sampling in aquatic environments

Alberto Quattrini Li, Dartmouth College

12/3/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: How can robots effectively explore, monitor, and sample large-scale aquatic environments? This talk presents a recent interdisciplinary project funded by the National Science Foundation on monitoring cyanobacterial blooms in lakes with a team of heterogeneous robots. I will present a sample of solutions that involve the development and deployment of aquatic robotic systems for data collection. First, I show our efficient multirobot algorithms for a team of Autonomous Surface Vehicles governed by Dubins vehicle dynamics to cover large areas of interest. Field trials with a custom-modified motorized kayak are presented, providing insights for improvements.
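Coverage planning of this kind typically starts from the classic boustrophedon ("lawnmower") pattern, which a Dubins planner then smooths with minimum-turning-radius arcs. A minimal waypoint generator (my illustration, not the project's code):

```python
def lawnmower_waypoints(width, height, spacing):
    """Boustrophedon (lawnmower) waypoints covering a width x height
    rectangle, with parallel transects `spacing` apart. Alternating the
    transect direction keeps consecutive endpoints adjacent, so a Dubins
    planner only has to insert short bounded-curvature turns between them."""
    waypoints, y, left_to_right = [], 0.0, True
    while y <= height + 1e-9:                 # tolerance for float accumulation
        row = [(0.0, y), (width, y)]
        waypoints.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
        y += spacing
    return waypoints

print(lawnmower_waypoints(100.0, 20.0, 10.0))
```

The transect spacing would be chosen from the sensor footprint of the vehicle, and the turn segments from its minimum turning radius.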

Second, I discuss the use of a heterogeneous team of robots that exploits their complementary capabilities to reduce operational cost and increase mission time for environmental monitoring and water sampling. Using machine learning techniques to model the distribution of the observed phenomena, we developed adaptive exploration and sampling strategies that account for reduction in uncertainty. Experimental results from several field experiments, together with some lessons learned, will be presented.
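One common instantiation of such uncertainty-aware sampling (my sketch; the project's actual model and criterion may differ) is a Gaussian-process posterior-variance rule: fit a GP to the samples collected so far and send the robot to the candidate location where predictive variance is highest.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D location arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def next_sample_location(x_obs, candidates, noise=1e-4):
    """Greedy uncertainty reduction: return the candidate location with the
    largest GP posterior variance  k(x,x) - k_* (K + noise*I)^{-1} k_*^T."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf(candidates, x_obs)                       # shape (m, n)
    reduction = np.einsum('ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
    var = 1.0 - reduction                                 # prior variance is 1
    return candidates[np.argmax(var)]

x_obs = np.array([0.0, 1.0, 2.0])            # locations already sampled
candidates = np.array([0.5, 1.5, 3.0, 6.0])  # where the robot could go next
print(next_sample_location(x_obs, candidates))
```

The rule naturally sends vehicles toward poorly observed regions, and the same posterior can be shared across a heterogeneous team so each robot picks the highest-variance location it can reach cheaply.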

The talk will conclude with a discussion of open problems that must still be addressed to make multirobot systems robust enough for environmental applications, and of current work (such as ensuring high-quality data and building recovery mechanisms) towards the long-term goal of a ubiquitous collaborative multiagent/multirobot system for accomplishing large-scale real-world tasks.

Bio: Alberto Quattrini Li is an assistant professor in the Department of Computer Science at Dartmouth College and co-director of the Dartmouth Reality and Robotics Lab. He was a postdoctoral fellow and research assistant professor in the Autonomous Field Robotics Laboratory (AFRL), led by Professor Ioannis Rekleitis, at the University of South Carolina from 2015 to 2018. During 2014, he was a visiting Ph.D. student in the Robotic Sensor Networks Lab, directed by Professor Volkan Isler, at the Department of Computer Science and Engineering, University of Minnesota. He received an M.Sc. (2011) and a Ph.D. (2015) in Computer Science and Engineering from Politecnico di Milano, working with Professor Francesco Amigoni. His main research interests (currently funded by the National Science Foundation) include autonomous mobile robotics and active perception applied to the aquatic domain, dealing with problems that span from multirobot exploration and coverage to multisensor-fusion-based state estimation. He has worked with many ground and marine robots, including Autonomous Surface Vehicles and Autonomous Underwater Vehicles.