Seminars

These seminars are made possible through sponsorship by Moog.

 

Spring 2023

Thursdays, 2:30-3:30 PM EST

Location: Virtually on Zoom

Zoom Link (Passcode: 359735)

Past seminars


 

Join the Robotics Listserv

To subscribe to event updates, send an email to robotics-l-request@cornell.edu with “join” in the subject line.


 

The TrimBot2020 gardening robot

Date:  4/6/23

Speaker: Professor Robert B. Fisher

Location: 122 Gates Hall or Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The TrimBot2020 gardening robot was developed as a prototype in the EC-funded TrimBot2020 research project. The device was designed as a mobile, largely autonomous robot for pruning bushes and rose plants. As an outdoor robot, it had to deal with changing lighting, targets moving in the wind, navigation problems, and natural plants with limited shape models. But the robot could successfully prune. This talk will overview the technologies enabling the robot. Prof. Fisher will also present some work on aerial classification of forests needing thinning (or not).

Bio: Prof. Robert B. Fisher FIAPR, FBMVA received a BS (Mathematics, California Institute of Technology, 1974), an MS (Computer Science, Stanford, 1978), and a PhD (Edinburgh, 1987). Since then, Bob has been an academic at Edinburgh University, including serving as College Dean of Research. He has chaired the Education Committee and the Industrial Liaison Committee of the Int. Association for Pattern Recognition, of which he is currently Treasurer. His research covers topics mainly in high-level computer vision and 3D and 3D video analysis, focusing on reconstructing geometric models from existing examples, which contributed to a spin-off company, Dimensional Imaging. The research has led to 5 authored books and 300+ peer-reviewed scientific articles or book chapters. He has developed several online computer vision resources, with over 1 million hits. Most recently, he has been the coordinator of EC projects 1) acquiring and analysing video data of 1.4 billion fish from about 20 camera-years of undersea video of tropical coral reefs and 2) developing a gardening robot (hedge-trimming and rose pruning). He is a Fellow of the Int. Association for Pattern Recognition (2008) and the British Machine Vision Association (2010).

 


Developing and Deploying Platforms for Real-World Impact: FlowIO Platform

Date:  3/30/23

Speaker:  Ali Shtarbanov

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The fields of Human-Computer Interaction (HCI), Haptics, and Robotics are currently undergoing a paradigm shift from rigid materials toward more compliant, soft, and actuated materials, giving rise to areas often referred to as soft robotics or programmable materials. However, there is a significant lack of tools and development platforms in this field, which makes prototyping difficult and inaccessible to most creators. In this talk, I will present the FlowIO Platform and many of the projects it has enabled over the past two years. FlowIO is a fully integrated general-purpose solution for control, actuation, and sensing of soft programmable materials – enabling researchers, artists, and makers to unleash their creativity and to realize their ideas quickly and easily. It has been deployed in 12 countries and has enabled numerous art projects, research papers, and master's theses around the world. I will also present a generalized framework of the essential technological and non-technological characteristics that any development platform must offer in order to be suitable for diverse users and to achieve mass adoption. I will address questions such as: What does it really take to create and deploy development platforms for achieving real-world impact? Why do we need platforms, and how can they democratize emerging fields and accelerate innovation? Why are tools the enabler of progress, and how do they shape our world? Why do most platform attempts fail while only very few succeed in terms of impact and widespread adoption?

Bio: Ali Shtarbanov, a final-year Ph.D. student at the MIT Media Lab, is on a mission to make prototyping and innovation in emerging fields more rapid and accessible for everyone through the design and deployment of novel development platforms that are highly versatile, general purpose, and simple to use. Ali is a Bulgarian-American system designer, engineer, and HCI researcher best known as the inventor of the FlowIO Platform and the founder of the SoftRobotics.IO community ecosystem. His research areas include modular systems design, interactive interfaces, soft robotics, haptics, and community building. Ali's work has been published at leading academic venues (CHI, UIST, SIGGRAPH, IROS, ASCEND) and has won multiple 1st-place awards at some of the world's largest design, engineering, and research competitions, including the Hackaday Grand Prize, TechBriefs Grand Prize, ACM Student Research Competition, Core77, IfDesign, FastCompany, and iDA. Prior to his PhD studies, Ali earned bachelor's degrees in Physics and Electrical Engineering from Lehigh University with highest honors and a master's degree in Media Arts and Sciences from the MIT Media Lab with a focus on haptic feedback interfaces.

 


Multi-sensory programs for physical understanding – Modeling and Inference

Date:  3/23/23

Speaker:  Krishna Murthy

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Modern machine learning has unlocked a new level of embodied perception and reasoning abilities by leveraging internet-scale training data. However, such systems fail in unpredictable and unintuitive ways when deployed in real-world applications. These advances have underplayed many classical techniques developed over the past few decades. I postulate that a flexible blend of classical and learned methods is the most promising path to developing flexible, interpretable, and actionable models of the world: a necessity for intelligent embodied agents.

My research intertwines classical and learning-based techniques to bring the best of both worlds, by building multi-sensory models of the 3D world. In this talk, I will share some recent efforts (by me and collaborators) on building world models and inference techniques geared towards spatial and physical understanding. In particular, I will talk about two themes:

  1. leveraging differentiable programs for physical understanding in a dynamic world
  2. integrating features from large learned models for open-set and multimodal perception

Bio: Krishna Murthy is a postdoc at MIT with Josh Tenenbaum and Antonio Torralba. His research focuses on building multi-sensory world models to help embodied agents perceive, reason about, and act in the world around them. He has organized multiple workshops at ICLR, NeurIPS, and ICCV on themes spanning differentiable programming, physical reasoning, 3D vision and graphics, and ML research dissemination.

His research has been recognized with graduate fellowship awards from NVIDIA and Google (2021); a best paper award from Robotics and Automation Letters (2019); and induction into the RSS Pioneers cohort (2020).

Website: https://krrish94.github.io/

 

Safe Control from Value Functions: Blending Control Barrier Functions and Hamilton-Jacobi Reachability Analysis

Date:  3/16/23

Speaker:  Sylvia Herbert

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Value functions have been used extensively for generating safe control policies for robots and other nonlinear systems. The output of the function provides the current “safety level” of the system, and its gradient informs the allowable control inputs to maintain safety. Two common approaches for value functions are control barrier functions (CBFs) and Hamilton-Jacobi (HJ) reachability value functions. Each method has its own advantages and challenges. HJ reachability analysis is a constructive and general method but suffers from computational complexity. CBFs are typically much simpler, but are challenging to find, often resulting in conservative or invalid hand-tuned or data-driven approximations. In this talk I will discuss our work exploring the connections between these two approaches in order to blend the theory and tools from each. I’ll introduce the “control barrier-value function” and show how we can refine CBF approximations to recover the maximum safe set and corresponding control policy for a system.
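
For readers new to these tools, a rough textbook-style sketch (illustrative notation only, not specific to the speaker’s results) shows how the gradient constrains the inputs: for control-affine dynamics \(\dot{x} = f(x) + g(x)u\) and a function \(h\) whose zero-superlevel set \(\{x : h(x) \ge 0\}\) is the safe set, \(h\) is a control barrier function if

\[ \sup_{u \in \mathcal{U}} \nabla h(x)^{\top}\bigl(f(x) + g(x)\,u\bigr) \;\ge\; -\alpha\bigl(h(x)\bigr) \]

for some extended class-\(\mathcal{K}\) function \(\alpha\). Any input satisfying this inequality at the current state keeps the system inside the safe set; HJ reachability obtains a value function with an analogous property by solving a Hamilton-Jacobi partial differential equation.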

Bio:   Sylvia Herbert started as an Assistant Professor in Mechanical and Aerospace Engineering at UC San Diego in 2021. She runs the Safe Autonomous Systems Lab within the Contextual Robotics Institute.

Previously she was a PhD student with Prof. Claire Tomlin at UC Berkeley.  She is the recipient of the ONR Young Investigator Award, NSF GRFP, a UC Berkeley Outstanding Graduate Student Instructor Award, and the UC Berkeley Demetri Angelakos Memorial Achievement Award for Altruism.

 


Scaling Sim2Real Learning For Robotic Rearrangement

Date:  3/9/23

Speaker:  Adithyavairavan Murali

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Rearrangement is a fundamental task in robotic manipulation which, when solved, will help us achieve the dream of robot butlers working seamlessly in human spaces like homes, factories, and hospitals. In this talk I’ll present some recent work in 3D synthetic content generation and new approaches for neural motion planning. Training models on this large-scale simulated data allows us to generalize directly to rearrangement in the real world from just raw camera observations as input, without training on any real data.

Bio: Adithya Murali is a scientist on the NVIDIA Robotics research team. He received his PhD at the Robotics Institute, Carnegie Mellon University, where he was supported by the Uber Presidential Fellowship. During his PhD, he also spent time at Meta AI Research, where he led the development of the pyrobot.org and low-cost robot projects. His work has been a Best Paper finalist at ICRA 2015 and 2020 and has been covered by WIRED, the New York Times, etc. His general interests are in robotic manipulation, 3D vision, synthetic content generation, and learning.

 

Enabling Humans and Robots to Predict the Other’s Behavior from Small Datasets

Date:  3/2/2023

Speaker:  Vaibhav Unhelkar 

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: We are steadily moving towards a future where humans work with robotic assistants, robot teammates, and even robotic tutors. Towards realizing this future, it is essential to train both robots and humans to work with each other. My research develops computational foundations for enabling this human-robot training. This talk will begin with the problem of training robots to work with humans. To address this problem, I will summarize recent imitation learning techniques – FAMM and BTIL – that explicitly model partial observability of human behavior. Coupled with POMDP solvers, these techniques enable robots to predict and adapt to human behavior during collaborative task execution. Second, I will summarize AI Teacher: an explainable AI framework for training humans to work with robots. By leveraging humans’ natural ability to model others (Theory of Mind), the AI Teacher framework reduces the number of interactions it takes for humans to arrive at predictive models of robot behavior. The talk will conclude with implications of these techniques for human-robot collaboration.

Bio: Vaibhav Unhelkar is an Assistant Professor of Computer Science at Rice University, where he leads the Human-Centered AI and Robotics (HCAIR) research group. Unhelkar has developed algorithms to enable fluent human-robot collaboration and, with industry collaborators, deployed robots among humans. Ongoing research in his group includes the development of algorithms and systems to model human behavior, train human-robot teams, and improve the transparency of AI systems. Unhelkar received his doctorate in Autonomous Systems at MIT (2020) and completed his undergraduate education at IIT Bombay (2012). He serves as an Associate Editor for IEEE Robotics and Automation Letters and is the recipient of the AAMAS 2022 Best Program Committee Member Award. Before joining Rice, Unhelkar worked as a robotics researcher at Google X, the Moonshot Factory.

 

Structuring learning for real robots

Date:  2/16/2023

Speaker:  Georgia Chalvatzaki

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: We strive to enable robots to operate in real-world unstructured environments. Robot learning holds the promise of endowing robots with generalizable skills. Nevertheless, current approaches mainly overfit to specific task (and reward) specifications. We show that by exploiting the structure of robotics problems, we can scale robotic performance and introduce algorithmic advances that show promising evidence for further research. In this talk, I will present four recent works in which we couple learning and classical methods in perception, planning, and control, and showcase a wide range of applications that could enable broader scalability of complex robotic systems, such as mobile manipulation robots that can learn unsupervised and act safely even around humans.

Bio: Georgia Chalvatzaki is an Assistant Professor and research leader of the Intelligent Robotic Systems for Assistance (iROSA) group at TU Darmstadt, Germany. She received her Diploma and Ph.D. in Electrical and Computer Engineering at the National Technical University of Athens, Greece. Her research interests lie at the intersection of classical robotics and machine learning, developing behaviors that enable mobile manipulator robots to solve complex tasks in domestic environments with the human in the loop of the interaction process. She holds an Emmy Noether grant for AI Methods from the German Research Foundation. She is co-chair of the IEEE RAS Technical Committee on Mobile Manipulation, co-chair of the IEEE RAS Women in Engineering Committee, and was voted “AI-Newcomer” for 2021 by the German Information Society.

 

Exploring Context for Better Generalization in Reinforcement Learning

Date:  2/2/2023

Speaker:  Amy Zhang

Location: Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The benefit of multi-task learning over single-task learning relies on the ability to use relations across tasks to improve performance on any single task. While sharing representations is an important mechanism to share information across tasks, its success depends on how well the structure underlying the tasks is captured. In some real-world situations, we have access to metadata, or additional information about a task, that may not provide any new insight in the context of a single task setup alone but inform relations across multiple tasks. While this metadata can be useful for improving multi-task learning performance, effectively incorporating it can be an additional challenge. In this talk, we explore various ways to utilize context to improve positive transfer in multi-task and goal-conditioned reinforcement learning.

Bio: I am an assistant professor at UT Austin in the Chandra Family Department of Electrical and Computer Engineering. My work focuses on improving generalization in reinforcement learning by bridging theory and practice in learning and utilizing structure in real-world problems. Previously, I was a research scientist at Meta AI and a postdoctoral fellow at UC Berkeley. I obtained my PhD from McGill University and the Mila Institute in 2021, and previously obtained an M.Eng. in EECS and dual B.Sci. degrees in Mathematics and EECS from MIT.

Website: https://amyzhang.github.io/