New complexity results and performance-guaranteed algorithms for multirobot navigation of communication-restricted environments

Jacopo Banfi, Cornell University

4/9/19

Deploying a team of mobile robots can provide a valid alternative to employing human operators to carry out different kinds of information-gathering tasks, such as environmental monitoring, exploration, and patrolling. Frequently, the proposed coordination mechanisms work under the assumption that communication between robots is possible between any two locations of the environment. However, real operational conditions may require deploying robots equipped only with local, limited-range communication modules. In this talk, I will first present a general graph-based framework for planning multirobot missions subject to different kinds of communication constraints. Then, I will focus on a few selected problems taken from the literature that can be framed in this planning framework (such as computing a set of joint paths ensuring global connectivity at selected times), and present either new complexity results or performance-guaranteed algorithms to compute good-quality solutions to these problems in reasonable time.
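
As a rough illustration of the kind of connectivity constraint mentioned above (not the planner or framework from the talk), the sketch below checks whether a set of robot positions forms a single connected communication graph under a simple disc model; the function name and the disc-model assumption are illustrative choices, not from the abstract.

```python
from collections import deque

def communication_graph_connected(positions, comm_range):
    """Return True if robots at `positions` (a list of (x, y) tuples) form one
    connected communication graph under a simple disc model: two robots can
    exchange messages iff their distance is at most `comm_range`."""
    n = len(positions)
    if n == 0:
        return True

    def linked(i, j):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2 <= comm_range ** 2

    # Breadth-first search from robot 0 over the communication graph.
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if j not in seen and linked(i, j):
                seen.add(j)
                queue.append(j)
    return len(seen) == n

# Example: three robots in a line, each within range of its neighbour.
print(communication_graph_connected([(0, 0), (4, 0), (8, 0)], comm_range=5))  # True
```

A planner enforcing "global connectivity at selected times" would have to satisfy a check like this at each designated waypoint along the joint paths.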

Simulation-based control: a case study

Andy Ruina & Matt Sheen, Cornell University

4/16/19

  1. Simulations are imperfect. So there is a question about how to use simulations for control. Certainly, the better the simulation, the easier the control problem, so there is a need for better simulators. We are working on that.
  2. But we live in the world we live in. And have the imperfect simulators we have. How to live with that? We have chosen a model problem: the game of QWOP. In this model system, the QWOP game is a model of reality, and our various simulations of the game are models of our models of reality. Kind of meta. How well can we do at controlling QWOP using imperfect models of QWOP? And how do we do that? This seminar is about our successes at this model of modeling. To get the most from this seminar, spend 10 minutes playing QWOP before the seminar. Google QWOP on your phone or computer. In short, it’s not so easy. And our synthetic play is also only so good so far.

Leveraging Vision for 3D Perception in Robotics

Wei-Lun (Harry) Chao & Brian Wang, Cornell University

4/23/19

Abstract: Many robotics applications require accurate 3D perception, for example, an autonomous car determining the positions of other cars on the road, or an industrial manipulator robot recognizing an object it is supposed to pick up. Recent advances driven by deep neural networks have led to remarkable performance on 2D image processing tasks. However, the properties of 3D sensor data make it challenging to realize similar performance in 3D. In this talk, we present two recent works that leverage successes in 2D vision for 3D perception tasks.

We first present Label Diffusion LiDAR Segmentation (LDLS), an algorithm for point-level object recognition in 3D LiDAR point clouds. LDLS uses information from aligned camera images to avoid any need for training on labeled 3D data. Our method applies a pre-trained 2D image segmentation model on a camera image, then diffuses information from the image into a LiDAR point cloud using a semi-supervised graph learning algorithm. Any object classes that are recognized by the 2D image segmentation model can also be detected in LiDAR, allowing LDLS to recognize a far greater variety of objects than possible in previous works.
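
For readers unfamiliar with label diffusion, here is a hedged, minimal numpy sketch of the general idea described above: seed per-point labels by projecting LiDAR points into a segmented camera image, then propagate them over a nearest-neighbour graph among the 3D points. The function names, the brute-force kNN graph, and the simple averaging update are illustrative simplifications, not LDLS's exact graph construction.

```python
import numpy as np

def project_to_image(points_cam, K):
    """Project Nx3 points (already in the camera frame, z > 0) onto the image plane."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def ldls_style_diffusion(points_cam, seg_mask, K, k=10, n_iters=50, alpha=0.9):
    """Toy stand-in for LDLS: seed per-point class scores from a 2D segmentation
    mask (HxW array of class ids from any pre-trained 2D model), then smooth
    them over a k-nearest-neighbour graph built among the 3D points."""
    n_pts = len(points_cam)
    n_classes = int(seg_mask.max()) + 1
    uv = np.round(project_to_image(points_cam, K)).astype(int)
    h, w = seg_mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    # One-hot seed labels for points that project inside the image.
    seeds = np.zeros((n_pts, n_classes))
    seeds[np.flatnonzero(valid), seg_mask[uv[valid, 1], uv[valid, 0]]] = 1.0

    # Brute-force kNN graph over the 3D points (fine for a small example).
    d2 = ((points_cam[:, None, :] - points_cam[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]

    # Iterative label propagation: mix the neighbour average with the image seeds.
    scores = seeds.copy()
    for _ in range(n_iters):
        scores = alpha * scores[nbrs].mean(axis=1) + (1 - alpha) * seeds
    return scores.argmax(axis=1)  # per-point class id
```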

We then present a novel vision-based 3D object detection algorithm, which can either bypass the need for an expensive LiDAR sensor or serve as an auxiliary system to LiDAR-based detectors. The key insight is to apply stereo depth estimation to pairs of 2D images and back-project the depths into a 3D point cloud, which we call pseudo-LiDAR. With pseudo-LiDAR, we can essentially apply any existing LiDAR-based algorithm for 3D object detection, leading to a 300% performance improvement over the previous state-of-the-art vision-based detector.
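
The back-projection step at the heart of the pseudo-LiDAR idea can be summarized in a few lines. The sketch below assumes a pinhole camera with known intrinsics and a dense depth map from any stereo matcher; the function name is illustrative, not from the paper.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (e.g. predicted from a stereo pair) into an
    Nx3 point cloud in the camera frame -- the 'pseudo-LiDAR' representation.
    fx, fy, cx, cy are pinhole intrinsics; pixels with depth <= 0 are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```

The resulting array can then be fed to an off-the-shelf LiDAR-based 3D detector in place of real LiDAR returns.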

Bio: Wei-Lun (Harry) Chao is a Postdoctoral Associate in Computer Science at Cornell University working with Kilian Q. Weinberger and Mark Campbell. His research interests are in machine learning and its applications to computer vision, natural language processing, artificial intelligence, and healthcare. His recent work has focused on robust autonomous driving. He received a Ph.D. degree in Computer Science from the University of Southern California. He will be joining the Ohio State University as an assistant professor in Computer Science and Engineering in Fall 2019.

Brian Wang is a third-year MAE PhD student at Cornell, in Mark Campbell’s research group. His research interests include vision- and LiDAR-based perception, probabilistic tracking and estimation, autonomous driving, and human-robot interaction.

Implicit Communication of Actionable Information in Human-AI teams

Claire Liang, Cornell University

4/30/19

Humans expect their collaborators to look beyond the explicit interpretation of their words. Implicature is a common form of implicit communication that arises in natural language discourse when an utterance leverages context to imply information beyond what the words literally convey. Whereas computational methods have been proposed for interpreting and using different forms of implicature, its role in human and artificial agent collaboration has not yet been explored in a concrete domain. The results of this paper provide insights into how artificial agents should be structured to facilitate natural and efficient communication of actionable information with humans. We investigated implicature by implementing two strategies for playing Hanabi, a cooperative card game that relies heavily on communication of actionable implicit information to achieve a shared goal. In a user study with 904 completed games and 246 completed surveys, human players randomly paired with an implicature AI are 71% more likely to think their partner is human than players paired with a non-implicature AI. These teams demonstrated game performance similar to other state-of-the-art approaches.

Design and Control of the DONUts: A Scalable, Self-Reconfigurable Modular Robot with Compliant Modules

Nialah Wilson & Steven Ceron, Cornell University

5/7/19

Modular self-reconfigurable robots are composed of active modules capable of rearranging their connection topology to adapt to dynamic environments, changing task settings, and partial failures. The research challenges are considerable: they span hardware design with inexpensive, simple fabrication and maintenance, durable mechanical parts, and efficient power management, as well as planning and control for scalable autonomy. We present a new planar, modular robot with advantageous scaling properties in both fabrication and operation. Modules consist primarily of a flexible printed circuit board wrapped into a loop, which eases assembly and introduces compliance for safe interaction with external objects and other modules, as well as the configuration of large-scale lattice structures beyond what the manufacturing tolerances would otherwise allow. We further present ongoing work on coordination schemes that leverage these features for basic autonomous behaviors, including distributed shape estimation and gradient tracking in cluttered environments. This work brings us a step closer to the ambition of robust autonomous robots capable of exploring cluttered, unpredictable environments.

Additive Manufacturing of Soft Robots

Shuo Li, Cornell University

8/28/18

This talk will present multidisciplinary work spanning material composites and robotics. We have created new types of actuators, sensors, displays, and additive manufacturing techniques for soft robots and haptic interfaces. For example, we now use stretchable optical waveguides as sensors for high accuracy, repeatability, and material compatibility with soft actuators. For displaying information, we have created stretchable, elastomeric light-emitting displays as well as texture-morphing skins for soft robots. We have created a new type of soft actuator based on molding of foams, new chemical routes for stereolithography printing of silicone- and hydrogel-elastomer-based soft robots, and implemented deep learning in stretchable membranes for interpreting touch. All of these technologies depend on the iterative and complex feedback between material and mechanical design. I will describe this process, the present state of the art, and future opportunities for science in the space of additive manufacturing of elastomeric robots.

Scaling up autonomous flight

Adam Bry, Skydio

9/5/18

NOTE: Special time and location: 5pm on Wednesday in Upson 106

Abstract: Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely-deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the research we’re doing at Skydio, along with the challenges involved in building a robust robotics software system that needs to work at scale.

Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio he helped start Project Wing at Google[x], where he worked on the flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics. He was named to the MIT Tech Review 35 list in 2016.

Some Thoughts on Model Reduction for Robotics

Andy Ruina, Cornell University

9/11/18

These are unpublished thoughts, actually more questions than thoughts, and not all that well informed. So audience feedback is welcome, especially from people who know how to formulate machine learning problems (I already know, sort of, how to formulate MatSheen learning problems).

One posing of many robotics control problems is as a general problem in 'motor control' (a biological term, I think). Assume one has a machine and the best model (something one can compute simulations with) one can actually get of the machine, its environment, its sensors, and its computation abilities. One also has some sense of the uncertainty in various aspects of these. The general motor control problem is this: given a history of sensor readings and requested goals (commands), and all of the givens above, what computation should be done to determine the motor commands so as to best achieve the goals? "Best" means most accurately and most reliably, by whatever measures one chooses.

If one poses this as an optimization problem over the space of all controllers (all mappings from command and sensor histories to the set of commands), it is too big a problem, even if coarsely discretized. Hence, everyone applies all manner of assumed simplifications before attempting to make a controller. The question here is this: can one pose an optimization problem for the best simplification? Can one pose it in a way such that finding an approximate solution would be useful? In bipedal robots there are various classes of simplified models used by various people to attempt to control their robots. Might there be a rational way to choose between them, or to find better ones? As abstract as this all sounds, perhaps thinking about such things could help us make better walking-robot controllers.
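
As one hedged way to make the "too big a problem" concrete, the block below writes the optimization over all controllers in symbols; the notation (pi, Pi, J, f, g, h_t) is illustrative, not from the talk.

```latex
% A minimal sketch of the "optimization over all controllers" described above.
% \pi maps the history h_t of commands c and sensor readings y to a motor
% command u_t; J scores how accurately and reliably the goals are achieved.
\begin{aligned}
  \pi^{*} \;=\; \arg\min_{\pi \in \Pi}\; \mathbb{E}\!\left[\, J(x_{0:T}, u_{0:T}) \,\right],
  \qquad u_t = \pi(h_t), \quad h_t = (c_{0:t},\, y_{0:t}), \\[4pt]
  \text{subject to} \quad x_{t+1} = f(x_t, u_t, w_t), \qquad y_t = g(x_t, v_t).
\end{aligned}
```

In this notation, model reduction amounts to shrinking the controller class Pi or replacing the model f with a simpler surrogate; the open question raised above is whether that choice can itself be posed, and approximately solved, as an optimization.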

Big-data machine learning meets small-data robotics

Group Discussion

9/18/18

Abstract: Machine learning techniques have transformed many fields, including computer vision and natural language processing, where plentiful data can be cheaply and easily collected and curated.  Training data in robotics is expensive to collect and difficult to curate or annotate.  Furthermore, robotics cannot be formulated as simply a prediction problem in the way that vision and NLP can often be.  Robots must close the loop, meaning that we ask our learning techniques to consider the effect of possible decisions on future predictions.  Despite exciting progress in some relatively controlled (toy) domains, we still lack good approaches to adapting modern machine learning techniques to the robotics problem.  How can we overcome these hurdles?  Please come prepared to discuss.  Here are some potential discussion topics:

  1. Are robot farms like the one at Google a good approach?  Google has dozens of robots picking and placing blocks 24/7 to collect big training data in service of training traditional models.
  2. Since simulation allows the cheap and easy generation of big training data, many researchers are attempting domain transfer from simulation to the real robot.  Should we be attempting to make simulators photo-realistic with perfect physics?  Alternatively, should we instead vary simulator parameters to train a more general model?
  3. How can learned models adapt to unpredictable and unstructured environments such as people’s homes?  When you buy a Rosie the Robot, is it going to need to spend a week exploring the house, picking up everything, and tripping over the cat to train its models?
  4. If we train mobile robots to automatically explore and interact with the world in order to gather training data at relatively low cost, the data will be biased by choices made in building that autonomy.  Similar to other recent examples in which AI algorithms adopt human biases, what are the risks inherent in biased robot training data?
  5. What role does old-fashioned robotics play?  We have long learned to build state estimators, planners, and controllers by hand.  Given that these work pretty well, should we be building learning methods around them?  Or should they be thrown out and the problems solved from scratch with end-to-end deep learning methods?
  6. What is the connection between machine learning and hardware design?  Can a robot design co-evolve with its algorithms during training?  Doing so would require us to encode design specifications much more precisely than has been done in the past, but so much of design practice resists specification due to its complexity.  Specifically, can design be turned into a fully-differentiable neural network structure?

Please bring your own questions for the group to discuss, too!