Taking off: autonomy for insect-scale robots

Date: 9/1/2022

Speaker: Farrell Helbling

Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: Countless science fiction works have set our expectations for small, mobile, autonomous robots for use in a broad range of applications. The ability to move through highly dynamic and complex environments can expand capabilities in search and rescue operations and safety inspection tasks. These robots can also form a diverse collective that provides more flexibility than a single multifunctional robot. Advances in multi-scale manufacturing and the proliferation of small electronic devices have paved the way to realizing this vision with centimeter-scale robots. However, there remain significant challenges in making these highly articulated mechanical devices fully autonomous because of their severe mass and power constraints. My research takes a holistic approach to navigating the inherent tradeoffs in each component in terms of its size, mass, power, and computation requirements. In this talk I will present strategies for creating an autonomous vehicle, the RoboBee, an insect-scale flapping-wing robot with unprecedented mass, power, and computation constraints. I will present my work on the analysis of the control and power requirements for this vehicle, as well as results on the integration of onboard sensors. I will also discuss recent results that represent the culmination of nearly two decades of effort to create a power-autonomous insect-scale vehicle. Lastly, I will outline how this design strategy can be readily applied to other micro- and bio-inspired autonomous robots.

Bio: Farrell Helbling is an assistant professor in Electrical and Computer Engineering at Cornell University, where she focuses on the systems-level design of insect-scale vehicles. Her graduate and postdoctoral work at the Harvard Microrobotics Lab focused on the Harvard RoboBee, an insect-scale flapping-wing robot, and HAMR, a bio-inspired crawling robot. Her research looks at the integration of the control system, sensors, and power electronics within the strict weight and power constraints of these vehicles. Her work on the first autonomous flight of a centimeter-scale vehicle was recently featured on the cover of Nature. She is a 2018 Rising Star in EECS, the recipient of an NSF Graduate Research Fellowship, and a co-author of the IROS 2015 Best Student Paper on an insect-scale, hybrid aerial-aquatic vehicle. Her work on the RoboBee project can be seen at the Boston Museum of Science, the World Economic Forum, the London Science Museum, and the Smithsonian, as well as in the popular press (The New York Times, PBS NewsHour, Science Friday, and the BBC). She is interested in the codesign of mechanical and electrical systems for mass-, power-, and computation-constrained robots.


Welcome to the Fall 2022 Robotics Seminar!

Tapomayukh Bhattacharjee and Sanjiban Choudhury

8/25/2022

Location: 122 Gates Hall

Time: 2:40 p.m.

Hey everyone! Welcome back for the semester. The first seminar is just an informal meet and greet. We will cover the logistics of what to expect from this semester’s seminar/class as well as give an introduction to Cornell Robotics as a community. The Robotics Graduate Student Organization will also cover some of what is to come for graduate students. If you’re new to the Cornell Robotics community, be sure to come for this week’s seminar! We will also have snacks!

Humanizing the Robot as a Medium for Communication

Michael Suguitan

12/6/2021

Location: 310 Gates Hall

Time: 11 a.m.

Abstract: Robots will not soon be entering our lives, particularly in social capacities, largely due to the difficulty of humanizing robots. Humanizing social robots is a long-held goal of human-robot interaction research and often involves a combination of two objectives: anthropomorphizing the mind through human-like intelligence and anthropomorphizing the body through lifelike humanoid features. We propose avoiding both of these intractable objectives and instead, drawing on Masahiro Mori’s concept of the “uncanny valley,” humanizing the robot by making three phases of its development (design, movement, and telepresence) accessible, using the Blossom robot as a case study. By making Blossom accessible and involving users in its hardware, software, and embodied telecommunication capabilities, we make the robot more familiar and, thus, more human. We hope that this work inspires more human-robot interaction research that emphasizes robot-mediated communication for human-human interaction.

Reactive Task and Motion Planning in Unknown Environments

Vasileios Vasilopoulos

12/2/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Unlike the problem of safe task and motion planning in a completely known environment, the setting where the obstacles in a robot’s workspace are not initially known and are incrementally revealed online has so far received little theoretical interest, with existing algorithms usually demanding constant replanning in the presence of unanticipated conditions. In this talk, I will present a hierarchical framework for task and motion planning in the setting of mobile manipulation, which exploits recent developments in semantic SLAM and object pose and triangular mesh extraction using convolutional neural net architectures. Under specific sufficient conditions, formal results accompanying the (online) lower-level vector field motion planner guarantee collision avoidance and convergence to fixed or slowly moving targets, for both a single robot and a robot gripping and manipulating objects. Using this reactive motion planner as a module for high-level task planning, I will discuss how we can efficiently solve geometric rearrangement tasks with legged robots or satisfy complicated temporal logic specifications involving gripping and manipulating objects of interest, in previously unexplored workspaces cluttered with non-convex obstacles.

Bio: Vasileios is a Postdoctoral Associate at MIT CSAIL, working with Prof. Nicholas Roy. His research focuses on reactive task and motion planning in partially known or completely unknown environments. He is particularly interested in developing algorithms that make autonomous robots capable of interacting with the physical environment around them and solving tasks that require autonomous mobile manipulation. To this end, he frequently employs tools from motion planning, topology, and perception. He obtained a Ph.D. in Mechanical Engineering from the University of Pennsylvania, advised by Dan Koditschek. He also holds an M.S.E. from the University of Pennsylvania and a Diploma from the National Technical University of Athens, both in Mechanical Engineering.

Learning Object-centric Representations for Robot Manipulation Tasks

Karthik Desingh

11/18/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: A crucial question for complex multi-step robotic tasks is how to represent relationships between entities in the world, particularly as they pertain to preconditions for the various skills the robot might employ. In goal-directed sequential manipulation tasks with long-horizon planning, it is common to use a state estimator followed by a task and motion planner or another model-based system. A variety of powerful approaches exist for explicitly estimating the state of objects in the world. However, it is challenging to generalize these approaches to an arbitrary collection of objects. In addition, objects are often in contact in manipulation scenarios, where explicit state estimation struggles to generalize to unseen objects. In this talk, I will present our recent work, in which we take an important step towards a manipulation framework that generalizes few-shot to unseen tasks with unseen objects. Specifically, we propose a neural network that extracts implicit object embeddings directly from raw RGB images. Trained on large amounts of simulated robotic manipulation data, the object-centric embeddings produced by our network can be used to predict spatial relationships between the entities in the scene and thus inform a task and motion planner with relevant implicit state information for goal-directed sequential manipulation tasks.

Bio: Karthik Desingh is a Postdoctoral Scholar at the University of Washington (UW), working with Professor Dieter Fox. Before joining UW, he received his Ph.D. in Computer Science and Engineering from the University of Michigan, working with Professor Chad Jenkins. During his Ph.D., he was closely associated with the Robotics Institute and Michigan AI. He earned his B.E. in Electronics and Communication Engineering at Osmania University, India, and his M.S. in Computer Science at IIIT-Hyderabad and Brown University. His research lies at the intersection of robotics, computer vision, and machine learning, and focuses primarily on providing robots with the perceptual capabilities, based on deep learning and probabilistic techniques, to perform goal-directed tasks in unstructured environments.

Can Cars Gesture? Expressive Autonomous Vehicles

Paul Schmitt

11/11/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Imagine this. You’re walking down a street in a busy city and, as you’re about to cross the road, you see a vehicle approaching. Something gives you pause. You look closer, and you realize the driver’s seat is empty. There’s no one behind the wheel, and the car appears to be driving itself. What would you do? How would you feel? Is it safe to cross? Now what if the vehicle was able to express its intent to you, in a way that was almost familiar?

We exposed 60 pedestrians to a variety of AV intention expressions using exaggerated sound, light, and sculpted motion within a virtual intersection environment. We are excited to share our takeaways from applying HRI concepts to a non-anthropomorphic robot.

At Motional, our goal is to make driverless vehicles a safe, reliable, and accessible reality. Central to our mission is ensuring that people understand how our vehicles fit into their communities and feel safe in their presence.

Bio: Paul Schmitt believes robots can be more and do more.

Paul is passionate about automated vehicles and what they can mean for our lives, our families, our neighborhoods…for us as a society. Paul knows that when an automated system is introduced into society, it can be confusing at best. So his mission is shaping AVs to ensure that we, as a society, understand AVs’ intentions and feel safe in their presence.

    • At Motional, Paul is Automated Vehicle Stack Chief Architect and Expressive Robotics Research Lead.
      • Medium Blog Post: Building Trust in Driverless Vehicles
      • Building Trust Video
    • Paul volunteers at MassRobotics, where he is proud to work with the automated vehicle community to promote and advance the development and testing of automated vehicle technology in New England. In this capacity, Paul has participated in AV policy discussions at the White House twice.
    • At iRobot Paul was Systems Engineering Lead for several exciting connected Roombas that are now in millions of homes around the globe.
    • At Volvo Research, Paul was the North America Intelligent Vehicle and Automation Manager and oversaw a $2M USDOT-funded truck platooning research project. Paul was a guest speaker at the 2013 Automated Vehicle Symposium.
    • At Ford, Paul was Active Safety Systems Engineer and recipient of the prestigious Henry Ford Technology Award and the Ford Technical Excellence Executive Award.

Paul’s research thesis at Georgia Tech was in mobile robot path planning. Paul has ten patents to date.



Adaptive Attention: Bringing Active Vision into the Camera

Sanjeev Koppal, University of Florida

11/4/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Most cameras today capture images without considering scene content. In contrast, animal eyes have fast mechanical movements that control how the scene is imaged in detail by the fovea, where visual acuity is highest. The prevalence and wide variety of active vision in biological imaging make it clear that this is an effective visual design strategy. In this talk, I will cover our recent work on creating both new camera designs and novel vision algorithms to enable adaptive and selective active vision and imaging inside cameras and sensors.

Bio: Sanjeev Koppal is an Associate Professor in the University of Florida’s Electrical and Computer Engineering Department. He also holds a UF Term Professor Award for 2021-24. Sanjeev is the Director of the FOCUS Lab at UF. Prior to joining UF, he was a researcher at the Texas Instruments Imaging R&D lab. Sanjeev obtained his Master’s and Ph.D. degrees from the Robotics Institute at Carnegie Mellon University. After CMU, he was a postdoctoral research associate in the School of Engineering and Applied Sciences at Harvard University. He received his B.S. degree from the University of Southern California in 2003 as a Trustee Scholar. He is a co-author of best student paper award winners at ECCV 2016 and NEMS 2018, and work from his FOCUS Lab was a CVPR 2019 best-paper finalist. Sanjeev won an NSF CAREER award in 2020 and is an IEEE Senior Member. His interests span computer vision, computational photography and optics, novel cameras and sensors, 3D reconstruction, physics-based vision, and active illumination.


Towards precise generalization of robot skills: accurate pick-and-place of novel objects

Maria Bauza Villalonga, MIT

10/28/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Reliable robots must understand their environment and act on it with precision. Practical robots should also be able to achieve wide generalization; i.e., a single robot should be capable of solving multiple tasks. For instance, we would like to have, but still lack, a robot that can reliably assemble most IKEA furniture, instead of having one robot tailored to each piece of furniture. Towards this goal, in this talk I will present an approach to robotic pick-and-place that provides robots with both high precision and generalization skills. The proposed approach uses only simulation to learn probabilistic models of grasping, planning, and localization that transfer with high accuracy to the actual robotic system. In real experiments, we show that our dual-arm robot is capable of executing task-aware picks of new objects, using visuo-tactile sensing to localize them, and performing dexterous placements that involve in-hand regrasps and tight placing requirements with less than 1 mm of tolerance. Overall, our proposed approach can handle new objects and placing configurations, providing the robot with precise generalization skills.

Bio: Maria Bauza Villalonga is a PhD student in Robotics at the Massachusetts Institute of Technology, working with Professor Alberto Rodriguez. Before that, she received Bachelor’s degrees in Mathematics and Physics from CFIS, an excellence center at the Polytechnic University of Catalonia. Her research focuses on achieving precise robotic generalization by learning probabilistic models of the world that allow robots to reuse their skills across multiple tasks with high success.
Maria has received several fellowships, including Facebook, NVIDIA, and La Caixa fellowships. Her research has received awards such as Best Paper Finalist in Service Robotics at ICRA 2021, the Best Cognitive Paper Award at IROS 2018, and Best Paper Award Finalist at IROS 2016. She was also part of the MIT-Princeton Team in the Amazon Robotics Challenge, winning the stowing task in 2017 and receiving the 2018 Amazon Best Systems Paper Award in Manipulation.


Building Distributed Robot Teams that Search, Track, and Explore: Systems in Theory and in a Dark and Muddy Limestone Mine

Micah Corah, NASA Jet Propulsion Laboratory

10/21/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Processes of observing unknown and uncertain objects and environments are pervasive in robotics applications spanning autonomous mapping, tracking, and inspection. Further, autonomy is especially important in applications such as search and rescue, where autonomous operation is critical to enabling mobile robots to penetrate rubble beyond the communication range of human operators.

This talk will be split into two parts: The first part will focus on methods for informative planning and active perception for one or more robots, and the latter will discuss the DARPA Subterranean Challenge and my experience competing with team CoSTAR.

Autonomous perception tasks such as mapping a building or tracking targets often produce optimization problems that are difficult to solve (NP-hard) and yet highly structured. Taking advantage of this structure can greatly simplify these problems, for example by providing efficient and accurate objective evaluation or efficient distributed algorithms with strong suboptimality guarantees. The methods we will discuss enable individual robots and teams to navigate and observe unknown environments at high speeds while quickly and collectively adapting to information from new observations.

Still, there are significant gaps between laboratory (and theoretical) settings and the field. The multi-robot systems deployed in the DARPA Subterranean Challenge Finals represent incredible advances in the ability of teams of robots to operate autonomously and intelligently in harsh and unstructured underground environments. Yet the development of these systems is focused primarily on reliability and redundancy, and simple methods that work well enough in practice often prevail over seemingly advanced ones. This part of the talk will focus on the performance of these multi-robot aerial and ground systems in the competition and speculate on what to expect in the near future.

Bio: Micah is a postdoc at the NASA Jet Propulsion Laboratory with Dr. Ali Agha, where he competed with team CoSTAR in the DARPA Subterranean Challenge. Previously, Micah completed a Ph.D. in Robotics at Carnegie Mellon University, advised by Prof. Nathan Michael, focusing on distributed perception planning and multi-robot exploration. Micah is deeply interested in problems related to navigation, perception planning, and control for mobile robots, especially aerial robot teams.


Soft Structures: The Backbone of Soft Robots

Gina Olson, Carnegie Mellon University

10/14/2021

Location: 122 Gates Hall

Time: 2:40 p.m.

Abstract: Soft robots use geometric and material deformation to absorb impacts, mimic natural motions, mechanically adapt to motion or unevenness, and store and reuse energy. By virtue of these traits, soft robots offer the potential for robots that grasp robustly, adapt to unstructured environments, and work safely alongside, or are even worn by, humans. However, compliance breaks many of the assumptions underpinning traditional approaches to robot design, dynamics, control, sensing, and planning, and new or modified approaches are required. During this talk, I will introduce the concept of soft robots as soft structures, with capabilities and behaviors derived from the type and organization of their active and passive elements. I will present my current and prior work on the development and analysis of soft robotic structures, with a particular focus on the mechanics of soft arms. I will also briefly discuss ongoing work on a modular soft architecture actuated by nitinol wire.

Bio: Dr. Gina Olson is a postdoctoral research scientist in Prof. Carmel Majidi’s Soft Machines Lab at Carnegie Mellon University. She earned her doctorate in Robotics and Mechanical Engineering at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, where she was advised by Dr. Yiğit Mengüç and Prof. Julie A. Adams. Her current research interests are the development and study of the soft and compliant structures within soft robots; her past research focused on deployable space structures for small satellites. She previously worked as a Technical Lead Engineer at Meggitt Polymers and Composites, where she led the development and certification of fire seals for aircraft engines.