Seminars

Spring 2022

Thursdays, 2:40-3:30 PM EST
Location: Gates Hall 122 and Virtually on Zoom

Zoom link (Passcode: 346159)

Past seminars


Join the Robotics Listserv

To subscribe to event updates, send an email to robotics-l-request@cornell.edu with “join” in the subject line.


Human-centered Robotics: How to bridge the gap between humans and robots?

Date: 5/5/2022

Head shot of Daehyung Park

Speaker: Daehyung Park

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: There are now successful stand-alone or coexisting robotic systems in human environments. Yet robots are not intelligent enough to collaborate directly with humans, particularly with potential non-expert users. In this talk, I will discuss how to develop highly capable robotic teammates by bridging the knowledge gap between humans and robots. In particular, I will show how our cognitive architecture with learned knowledge models can produce three core capabilities: natural language grounding, transferable skill learning, and robust task planning-and-execution. I will also show how to provide highly scalable and reliable assistance when situated in novel environments.

Bio: 

Daehyung Park is an assistant professor at the School of Computing, KAIST, Korea, leading the Robust Intelligence and Robotics Laboratory (RIRO Lab). His research lies at the intersection of mobile manipulation, artificial intelligence, and human-robot interaction to advance collaborative robot technologies.
Prior to joining KAIST, he was a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. He received a Ph.D. in Robotics from the Georgia Institute of Technology, an M.S. from the University of Southern California, and a B.S. from Osaka University. Before his Ph.D., he served as a robotics researcher at Samsung Electronics from 2008 to 2012. He is a recipient of a 2022 Google Research Scholar Award.

Making Soft Robotics Less Hard: Towards a Unified Modeling, Design, and Control Framework

Date: 4/28/2022

Head shot of Daniel Bruder

Speaker: Daniel Bruder

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Soft robots are able to safely interact with delicate objects, absorb impacts without damage, and adapt to the shape of their environment, making them ideal for applications that require safe robot-human interaction. However, despite their potential advantages, their use in real-world applications has been limited due to the difficulty involved in modeling and controlling soft robotic systems. In this talk, I’ll describe two modeling approaches aimed at overcoming the limitations of previous methods. The first is a physics-based approach for fluid-driven actuators that offers predictions in terms of tunable geometrical parameters, making it a valuable tool in the design of soft fluid-driven robotic systems. The second is a data-driven approach that leverages Koopman operator theory to construct models that are linear, which enables the utilization of linear control techniques for nonlinear dynamical systems like soft robots. Using this Koopman-based approach, a pneumatically actuated soft arm was able to autonomously perform manipulation tasks such as trajectory following and pick-and-place with a variable payload without undergoing any task-specific training. In the future, these approaches could offer a paradigm for designing and controlling all soft robotic systems, leading to their more widespread adoption in real-world applications.
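
For attendees unfamiliar with Koopman-based modeling, the idea of trading nonlinearity for dimensionality can be sketched in a few lines. The following toy Extended Dynamic Mode Decomposition (EDMD) example is illustrative only, not code from the speaker's work; the dynamics, the monomial basis, and all names are invented for this sketch.

```python
import numpy as np

def lift(x):
    """Lift a 2-D state into a higher-dimensional space of observables.
    The monomial basis chosen here is purely illustrative."""
    x1, x2 = x
    return np.array([x1, x2, x1**2, x1 * x2, x2**2, 1.0])

def fit_koopman(X, Y):
    """Fit a linear operator K by least squares so that lift(y) ~= K @ lift(x)
    for consecutive state pairs (x, y) -- the core of EDMD."""
    Phi_X = np.array([lift(x) for x in X]).T   # (n_observables, n_samples)
    Phi_Y = np.array([lift(y) for y in Y]).T
    return Phi_Y @ np.linalg.pinv(Phi_X)

# Toy nonlinear system: x1' = 0.9*x1,  x2' = 0.8*x2 + 0.1*x1**2
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.array([[0.9 * x1, 0.8 * x2 + 0.1 * x1**2] for x1, x2 in X])

K = fit_koopman(X, Y)

# The lifted dynamics are linear, so one-step prediction is a matrix product.
x0 = np.array([0.5, -0.3])
pred = (K @ lift(x0))[:2]          # first two observables are the state itself
true = np.array([0.9 * 0.5, 0.8 * -0.3 + 0.1 * 0.25])
print(np.allclose(pred, true, atol=1e-6))
```

Because the lifted dynamics are linear, standard linear control tools (e.g., LQR or MPC) can then be applied to the lifted state, which is the appeal of this approach for soft-robot control.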

Bio: 

Daniel Bruder received a B.S. degree in engineering sciences from Harvard University in 2013, and a Ph.D. degree in mechanical engineering from the University of Michigan in 2020. He is currently a postdoctoral fellow in the Harvard Microrobotics Lab supervised by Prof. Robert Wood. He is a recipient of the NSF Graduate Research Fellowship and the Richard and Eleanor Towner Prize for Outstanding Ph.D. Research. His research interests include the design, modeling, and control of robotic systems, especially soft robots.

Project Punyo: The challenges and opportunities when softness and tactile sensing meet

Date: 4/21/2022

Head shot of Naveen Kuppuswamy

Speaker: Naveen Kuppuswamy

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Manipulation in cluttered environments like homes requires stable grasps, precise placement, sensitivity to and robustness against unexpected contact, and the ability to manipulate a wide range of objects. Tactile-driven manipulation that exploits softness can be an effective mitigation strategy for these hard challenges. In this talk, I will first present the highly compliant TRI ‘Soft-bubble’ sensor/gripper and demonstrate, across a variety of manipulation tasks, the utility of combining highly perceptive sensing with variable passive compliance. I will then outline Project Punyo, our vision for a soft, tactile-sensing, bimanual whole-body manipulation platform, and present some recent results in achieving whole-body rich-contact strategies for manipulating large domestic objects.

Bio: Naveen Kuppuswamy is a Senior Research Scientist and Tactile Perception and Control Lead in the Dexterous Manipulation department of the Toyota Research Institute. He holds a Bachelor of Engineering from Anna University, Chennai, India, an M.S. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, and a Ph.D. in Artificial Intelligence from the University of Zurich, Switzerland. He has also spent time as a Postdoctoral Researcher at the Italian Institute of Technology, Genoa, Italy, and as a Visiting Scientist with the Robotics and Perception Group at the University of Zurich. Naveen has several years of academic and industry experience working on themes of tactile sensing, soft robotics, and robot control on a wide variety of platforms, and has authored several publications in leading peer-reviewed journals and conferences. His research has been recognized through multiple publication and grant awards. He is also keenly interested in STEM education for under-represented communities around the world. Naveen is deeply passionate about using robots to assist people and improve the quality of life of those in need.

Design and Perception of Wearable Multi-Contact Haptic Devices for Social Communication

Date: 4/14/2022

Speaker: Cara Nunez

Head shot of Cara Nunez

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:

During social interactions, people use auditory, visual, and haptic (touch) cues to convey their thoughts, emotions, and intentions. Current technology allows humans to convey high-quality visual and auditory information but has limited ability to convey haptic expressions remotely. However, as people interact more through digital means rather than in person, it becomes important to be able to communicate emotions effectively through digital means as well. As online communication becomes more prevalent, systems that convey haptic signals could allow for improved distant socializing and empathetic remote human-human interaction.

Due to hardware constraints and limitations in our knowledge regarding human haptic perception, it is difficult to create haptic devices that completely capture the complexity of human touch. Wearable haptic devices allow users to receive haptic feedback without being tethered to a set location and while performing other tasks, but have stricter hardware constraints regarding size, weight, comfort, and power consumption. In this talk, I will present how I address these challenges through a cyclic process of (1) developing novel designs, models, and control strategies for wearable haptic devices, (2) evaluating human haptic perception using these devices, and (3) using prior results and methods to further advance design methodologies and understanding of human haptic perception.

Bio: Cara M. Nunez is a Postdoctoral Research Fellow within the Biorobotics Laboratory, Microrobotics Laboratory, and Move Lab at the Harvard John A. Paulson School of Engineering and Applied Sciences. She is also a Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University and will begin as an Assistant Professor in July 2023. She received a Ph.D. in Bioengineering and an M.S. in Mechanical Engineering from Stanford University working in the Collaborative Haptics and Robotics in Medicine Lab in 2021 and 2018, respectively. She was a visiting researcher in the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in 2019-2020. She received a B.S. in Biomedical Engineering and a B.A. in Spanish as a part of the International Engineering Program from the University of Rhode Island in 2016. She was a recipient of the National Science Foundation Graduate Research Fellowship, the Deutscher Akademischer Austauschdienst Graduate Research Fellowship, the Stanford Centennial Teaching Assistant Award, and the Stanford Community Impact Award, and served as the Student Activities Committee Chair for the IEEE Robotics and Automation Society from 2020-2022. Her research interests include haptics and robotics, with a specific focus on haptic perception, cutaneous force feedback techniques, and wearable devices, for medical applications, human-robot interaction, virtual reality, and STEM education.

Learning Autonomous Navigation by Inferring Semantic Context and Logical Structure

Date: 3/31/2022

Speaker: Tianyu Wang

Head shot of Tianyu Wang

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Autonomous systems operating in unstructured, partially observed, and changing real-world environments need a semantic and logical understanding of the task. Designing a cost function that encodes complex rules by hand is infeasible, so it is desirable to learn policies from expert demonstrations. However, it remains challenging to infer the proper cost function from rich semantic information while following the underlying task structure in demonstrations. In this talk, I will first give an overview of an inverse reinforcement learning method that learns to navigate from semantic observations. The efficacy of our method is demonstrated on an autonomous driving task in the CARLA simulator. Second, I will present an automata-based approach to learning the sequential task logic from demonstrations.
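
For context, the coupling between a learned semantic cost and a planner can be illustrated with a tiny grid-world sketch. This is not the speaker's method; it is a generic perceptron-style inverse-RL update (in the spirit of maximum-margin planning) on an invented 5x5 semantic map, where per-class costs are adjusted until the planner reproduces the demonstration.

```python
import heapq
import numpy as np

def dijkstra(grid_cost, start, goal):
    """Cheapest 4-connected path on a grid of per-cell traversal costs."""
    h, w = grid_cost.shape
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < h and 0 <= v[1] < w:
                nd = d + grid_cost[v]
                if nd < dist.get(v, np.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

def class_counts(path, semantics, n_classes):
    counts = np.zeros(n_classes)
    for cell in path:
        counts[semantics[cell]] += 1
    return counts

# Semantic map: class 0 = road, class 1 = grass. The demo stays on the road.
semantics = np.zeros((5, 5), dtype=int)
semantics[1:4, 1:4] = 1                      # grass patch in the middle
demo = [(0, c) for c in range(5)] + [(r, 4) for r in range(1, 5)]

w = np.array([1.0, 0.5])                     # grass mistakenly looks cheap at first
for _ in range(10):
    plan = dijkstra(w[semantics], (0, 0), (4, 4))
    # Perceptron-style update: raise the cost of classes the planner overuses.
    w += 0.1 * (class_counts(plan, semantics, 2) - class_counts(demo, semantics, 2))
    w = np.maximum(w, 0.01)                  # keep costs positive

plan = dijkstra(w[semantics], (0, 0), (4, 4))
print(all(semantics[cell] == 0 for cell in plan))  # learned to avoid grass
```

After a few updates the grass class becomes more expensive than road, and the planner reproduces the demonstrated behavior without any hand-written rule.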

Bio: Tianyu Wang is a Ph.D. candidate in Electrical and Computer Engineering at UC San Diego. His research interests include reinforcement learning, inverse reinforcement learning and autonomous driving. He received an M.S. degree in Electrical and Computer Engineering from UC San Diego and a B.S. degree in Physics from Haverford College.

Interactive Learning for Robust Autonomy

Date: 3/24/22

Speaker: Igor Gilitschenski

Location: 122 Gates Hall and Zoom

Head shot of Igor Gilitschenski

Time: 2:40 p.m.-3:30 p.m.

Abstract:

In recent years, we have seen an exploding interest in the real-world deployment of autonomous systems, such as autonomous drones or vehicles. This interest was sparked by major advances in robot perception, planning, and control. However, robust operation in the “wild” remains a challenging goal. Correctly handling the broad variety of real-world conditions requires both a better understanding of the learning process and a more robust deployment of autonomous robots. In this talk, I will discuss several of our recent works in that space: first, the challenges associated with severe weather conditions; second, approaches for reducing real-world data requirements for safe navigation; and finally, methods for enabling safe learning for control in interactive settings.

Bio: Igor Gilitschenski is an Assistant Professor of Computer Science at the University of Toronto where he leads the Toronto Intelligent Systems Lab. He is also a (part-time) Research Scientist at the Toyota Research Institute. Prior to that, Dr. Gilitschenski was a Research Scientist at MIT’s Computer Science and Artificial Intelligence Lab and the Distributed Robotics Lab (DRL), where he was the technical lead of DRL’s autonomous driving research team. He joined MIT from the Autonomous Systems Lab of ETH Zurich, where he worked on robotic perception, particularly localization and mapping. Dr. Gilitschenski obtained his doctorate in Computer Science from the Karlsruhe Institute of Technology and a Diploma in Mathematics from the University of Stuttgart. His research interests involve developing novel robotic perception and decision-making methods for challenging dynamic environments. He is the recipient of several best paper awards, including at the American Control Conference, the International Conference on Information Fusion, and Robotics and Automation Letters.

Revisiting Robot Perception with Tools Old and New

Date: 3/17/22

Speaker: Mustafa Mukadam

Location: 122 Gates Hall and Zoom

Head shot of Mustafa Mukadam

Time: 2:40 p.m.-3:30 p.m.

Abstract:

Robot perception sits at a unique crossroads between computer vision and robotics. The nature of sensing is egocentric and temporal, and can involve contact-rich interactions. Fusing these signals into representations that enable downstream tasks in real time is a challenge. In this talk, I will cover some of our recent work in building signed distance fields, human pose and shape estimation, and tactile estimation that provides a recipe for thinking about perception problems with a robotics lens by making optimization and prior models compatible with deep learning.

Bio:

Mustafa Mukadam is a Research Scientist at Meta AI. His work focuses on fundamental and applied research in robotics and machine learning, and on structured techniques at their intersection toward practical robot learning. Specifically, his research spans problems from perception to planning for navigation and manipulation. He received a Ph.D. from Georgia Tech, where he was part of the Robot Learning Lab and the Institute for Robotics and Intelligent Machines. His work has been covered by media outlets like GeekWire, VentureBeat, and TechCrunch, and his work on motion planning received the 2018 IJRR Paper of the Year award.

Interactive Imitation Learning: Planning Alongside Humans

Date: 3/3/22

Speaker: Sanjiban Choudhury

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract:

Advances in machine learning have fueled progress towards deploying real-world robots from assembly lines to self-driving. However, if robots are to truly work alongside humans in the wild, they need to solve fundamental challenges that go beyond collecting large-scale datasets. Robots must continually improve and learn online to adapt to individual human preferences. How do we design robots that both understand and learn from natural human interactions?

In this talk, I will dive into two core challenges. First, I will discuss learning from natural human interactions where we look at the recurring problem of feedback-driven covariate shift. We will tackle this problem from a unified framework of distribution matching. Second, I will discuss learning to predict human intent where we look at the chicken-or-egg problem of planning with learned forecasts. I will present a graph neural network approach that tractably reasons over latent intents of multiple actors in the scene. Finally, we will demonstrate how these methods come together to result in a self-driving product deployed at scale.
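
The feedback-driven covariate shift mentioned above is what DAgger-style interactive imitation learning addresses, and the loop is compact enough to sketch. The toy environment, expert, and 1-nearest-neighbour learner below are invented for illustration; they are not the speaker's system.

```python
import random

# Toy setting: states are integers 0..10, the goal is state 5.
# The expert moves toward the goal; the learner imitates from aggregated data.
def expert(s):
    return 1 if s < 5 else (-1 if s > 5 else 0)

def learner_action(dataset, s):
    """1-nearest-neighbour policy over the aggregated (state, action) pairs."""
    if not dataset:
        return random.choice([-1, 0, 1])
    nearest = min(dataset, key=lambda sa: abs(sa[0] - s))
    return nearest[1]

def rollout(policy, steps=10, start=0):
    s, visited = start, []
    for _ in range(steps):
        visited.append(s)
        s = max(0, min(10, s + policy(s)))
    return visited

random.seed(0)
dataset = []
for _ in range(5):  # DAgger iterations
    # Roll out the *learner's* current policy, so we visit the states it
    # actually reaches (this is what addresses covariate shift)...
    states = rollout(lambda s: learner_action(dataset, s),
                     start=random.randint(0, 10))
    # ...but label those states with the *expert's* actions, and aggregate.
    dataset += [(s, expert(s)) for s in states]

# The learner now agrees with the expert on every state it was trained on.
print(all(learner_action(dataset, s) == expert(s) for s, _ in dataset))
```

The key contrast with behavior cloning is in the rollout line: training states come from the learner's own distribution, while labels still come from the expert.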

Bio: Sanjiban Choudhury is a Research Scientist at Aurora Innovation and soon-to-be Assistant Professor at Cornell University. His research goal is to enable robots to work seamlessly alongside human partners in the wild. To this end, his work focuses on imitation learning, decision making, and human-robot interaction. He obtained his Ph.D. in Robotics from Carnegie Mellon University and was a Postdoctoral Fellow at the University of Washington. His research received a Best Paper award at ICAPS 2019, was a Best Paper finalist at IJRR 2018 and AHS 2014, and won the 2018 Howard Hughes award. He is a Siebel Scholar, class of 2013.

Designing Emotionally Intelligent Social Robots for Applications Involving Children

Date: 2/24/22

Speaker: De’aira Bryant

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Social robots are robots designed to interact and communicate directly with humans. Yet many current robots operate in restrictive social environments. In order for these machines to operate effectively in the real world, they must be capable of understanding the many factors that contribute to human social interaction. One such factor is emotional intelligence. Emotional intelligence (EI) allows one to consider the emotional state of another in order to motivate, plan, and achieve one's goals. This presentation will first highlight current techniques in artificial intelligence that incorporate aspects of EI in human-robot interactions. This ability is especially important for applications involving children, who are often still learning social skills. However, many approaches in artificial EI have not critically considered children in their target populations. The latter portion of this presentation will feature current research projects that ethically and responsibly design EI for robots capable of interacting with children.

Rethinking Representations for Robotics

Date: 2/17/2022

Speaker: Lerrel Pinto

Headshot of Lerrel Pinto

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Even with the substantial progress we have seen in robot learning, we are nowhere near general-purpose robots that can operate in the real world we live in. There are two fundamental reasons for this. First, robots need to build concise representations from high-dimensional sensory observations, often without access to explicit sources of supervision. Second, unlike standard supervised learning, they need to solve long-horizon decision-making problems. In this talk, I’ll propose a recipe for general-purpose robot learning that combines ideas from self-supervision for representation learning with ideas from RL, adaptation, and imitation for decision making.

About the Speaker: Lerrel Pinto is an Assistant Professor of Computer Science at NYU. His research interests focus on machine learning and computer vision for robots. He received a PhD degree from CMU in 2019; prior to that he received an MS degree from CMU in 2016, and a B.Tech in Mechanical Engineering from IIT-Guwahati. His work on large-scale robot learning received the Best Student Paper award at ICRA 2016 and was a Best Paper finalist at IROS 2019. Several of his works have been featured in popular media such as The Wall Street Journal, TechCrunch, MIT Tech Review, Wired, and BuzzFeed, among others. His recent work can be found at www.lerrelpinto.com.

Planning for Human-Robot Systems under Augmented Partial Observability

Date: 2/10/2022

Speaker: Shiqi Zhang

Shiqi Zhang Head Shot

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The real world is partially observable to both people and robots. To estimate the world state, a robot needs a perception model to interpret sensory data. How does a robot plan its behaviors without such perception models? I will present our recent research on learning algorithms that help robots perceive and plan in stochastic worlds. With humans in the loop, robot planning becomes more difficult, because people and robots need to estimate not only the world state but also each other’s state. The second half of my talk will be about frameworks for human-robot communication and collaboration. I will share our work on leveraging AR/VR visualization strategies for transparent human-robot teaming toward effective collaboration.
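
As a concrete anchor for what "estimating the world state" means under partial observability, here is a minimal discrete Bayes filter. The two-state door example and the noise probabilities are invented for illustration and are not from the speaker's work.

```python
import numpy as np

# Minimal discrete Bayes filter: a robot cannot see whether a door is open,
# but a noisy detector reports "open" / "closed".
states = ["open", "closed"]
belief = np.array([0.5, 0.5])           # uniform prior over the hidden state

# Perception model: P(observation | state). Rows: true state; cols: reading.
P_obs = np.array([[0.8, 0.2],           # door open   -> reads "open" 80% of the time
                  [0.3, 0.7]])          # door closed -> reads "open" 30% of the time

def update(belief, obs_idx):
    """Bayes rule: posterior is proportional to likelihood * prior."""
    posterior = P_obs[:, obs_idx] * belief
    return posterior / posterior.sum()

belief = update(belief, 0)              # detector says "open"
belief = update(belief, 0)              # ...and says "open" again
print(belief.round(3))                  # → [0.877 0.123]
```

Two consistent noisy readings push the belief strongly toward "open"; a full POMDP planner then chooses actions against this belief rather than against a single guessed state.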

About the Speaker: Dr. Shiqi Zhang is an Assistant Professor with the Department of Computer Science, the State University of New York (SUNY) at Binghamton. Before that, he was an Assistant Professor at Cleveland State University after working as a Postdoc at UT Austin. He received his Ph.D. in Computer Science (2013) from Texas Tech University, and received his M.S. and B.S. degrees from Harbin Institute of Technology. He is leading an NSF NRI project on knowledge-based robot decision making. He received the Best Robotics Paper Award from AAMAS in 2018, a Ford URP Award from 2019-2022, and an OPPO Faculty Research Award in 2020.


REGROUP: A Robot-Centric Group Detection and Tracking System

Date: 2/3/2022

Speaker: Angelique Taylor

Location: 122 Gates Hall

Time: 2:40 p.m.-3:30 p.m.

Abstract: To help the field of Human-Robot Interaction (HRI) transition from dyadic to group interaction with robots, new methods are needed for robots to sense and understand human team behavior. We introduce the Robot-Centric Group Detection and Tracking System (REGROUP), a new method that enables robots to detect and track groups of people from an ego-centric perspective using a crowd-aware, tracking-by-detection approach. Our system employs a novel technique that leverages person re-identification deep learning features to address the group data association problem. REGROUP is robust to real-world vision challenges such as occlusion, camera ego-motion, shadow, and varying illumination, and it runs in real time on real-world data. We show that REGROUP outperformed three group detection methods by up to 40% in terms of precision and up to 18% in terms of recall, and that its group tracking method outperformed three state-of-the-art methods by up to 66% in terms of tracking accuracy and 20% in terms of tracking precision. We plan to publicly release our system to support HRI teaming research and development. We hope this work will enable the development of robots that can more effectively locate and perceive their teammates, particularly in uncertain, unstructured environments.
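
To make the data-association idea concrete: in tracking-by-detection, appearance embeddings from a re-identification network are matched to existing tracks, commonly with the Hungarian algorithm over a cosine-distance cost. The embeddings and threshold below are toy values invented for illustration, not REGROUP's actual pipeline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_dist(a, b):
    """Cosine distance matrix between two sets of appearance embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

# Toy appearance embeddings (in practice produced by a re-ID network).
tracks = np.array([[1.0, 0.0, 0.0],       # track 0
                   [0.0, 1.0, 0.0]])      # track 1
detections = np.array([[0.1, 0.9, 0.0],   # looks like track 1
                       [0.9, 0.1, 0.1]])  # looks like track 0

cost = cosine_dist(tracks, detections)
rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
matches = {int(t): int(d) for t, d in zip(rows, cols) if cost[t, d] < 0.5}
print(matches)  # → {0: 1, 1: 0}
```

The threshold discards assignments that are too dissimilar to be the same person; unmatched detections would then spawn new tracks.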

About the Speaker: Angelique Taylor is a Visiting Research Scientist at Meta Reality Labs Research. She received her Ph.D. in Computer Science and Engineering at UC San Diego. Her research lies at the intersection of computer vision, robotics, and health informatics. She develops systems that enable robots to interact and work with groups of people in safety-critical environments. At Meta, Dr. Taylor is working on augmented/virtual reality (AR/VR) systems that deploy AI algorithms to help multiple people coordinate to achieve a common goal on collaborative tasks. She has received the NSF GRFP, the Microsoft Dissertation Award, the Google Anita Borg Memorial Fellowship, the Arthur J. Schmitt Presidential Fellowship, a GEM Fellowship, and an award from the National Center for Women in Information Technology (NCWIT). More information on her research can be found at angeliquemtaylor.com.

Designing Emotionally-Intelligent Agents that Move, Express, and Feel Like Us!

Speaker: Aniket Bera

Headshot of speaker Aniket Bera

Date: 1/27/2022

Location: 122 Gates Hall

Time: 2:40 p.m.-3:30 p.m.

Abstract:

Human behavior modeling is vital for many virtual/augmented reality systems as well as for human-robot interaction. As the world increasingly uses digital and virtual platforms for everyday communication and interaction, there is a heightened need to create human-like virtual avatars and agents endowed with social and emotional intelligence. Interactions between humans and virtual agents are used in different areas including VR, games and storytelling, computer-aided design, social robotics, and healthcare. At the same time, recent advances in robotic perception technologies are gradually enabling humans and human-like robots to co-exist, co-work, and share spaces in different environments. Knowing the perceived affective states and social-psychological constructs (such as behavior, emotions, psychology, motivations, and beliefs) of humans in such scenarios allows the agents (virtual humans or social robots) to make more informed decisions and interact in a socially intelligent manner.

In this talk, I will give an overview of our recent work on simulating intelligent, interactive, and immersive human-like agents that can also learn about, understand, and respond to the world around them using a combination of emotive gestures, gaits, and expressions. Finally, I will talk about our many ongoing projects that use these AI-driven intelligent virtual agents (IVAs), including intelligent digital humans for urban simulation, crowd simulation, mental health and therapy applications, and social robotics.


About the speaker:

Aniket Bera is an Assistant Research Professor in the Department of Computer Science at the University of Maryland. His core research interests are in Affective Computing, Computer Graphics (AR/VR, Augmented Intelligence, Multi-Agent Simulation), Autonomous Agents, Cognitive Modeling, and planning for intelligent characters. His work has won multiple best paper awards at top VR/AR conferences. He has previously worked in many research labs, including Disney Research and Intel Labs. Aniket’s research has been featured on CBS, WIRED, Forbes, FastCompany, etc. Find out more about Aniket here: https://cs.umd.edu/~ab.