Simulation-Based Control: A Case Study

Andy Ruina & Matt Sheen, Cornell University

4/16/19

  1. Simulations are imperfect. So there is a question about how to use simulations for control. Certainly, the better the simulation, the easier the control, so there is a need for better simulators. We are working on that.
  2. But we live in the world we live in, and have the imperfect simulators we have. How to live with that? We have chosen a model problem: the game of QWOP. In this model system, the QWOP game is a model of reality, and our various simulations of the game are models of our models of reality. Kind of meta. How well can we do at controlling QWOP using imperfect models of QWOP? And how do we do that? This seminar is about our successes at this model of modeling. To get the most from this seminar, spend 10 minutes playing QWOP before the seminar. Google QWOP on your phone or computer. In short, it’s not so easy. And so far, our synthetic play is also only so good.

Leveraging Vision for 3D Perception in Robotics

Wei-Lun (Harry) Chao & Brian Wang, Cornell University

4/23/19

Abstract: Many robotics applications require accurate 3D perception; for example, an autonomous car determining the positions of other cars on the road, or an industrial manipulator robot recognizing an object it is supposed to pick up. Recent advancements driven by deep neural networks have led to remarkable performance on 2D image processing tasks. However, the properties of 3D sensor data make it challenging to realize similar performance in 3D. In this talk, we present two recent works that leverage successes in 2D vision for 3D perception tasks.

We first present Label Diffusion LiDAR Segmentation (LDLS), an algorithm for point-level object recognition in 3D LiDAR point clouds. LDLS uses information from aligned camera images to avoid any need for training on labeled 3D data. Our method applies a pre-trained 2D image segmentation model on a camera image, then diffuses information from the image into a LiDAR point cloud using a semi-supervised graph learning algorithm. Any object classes that are recognized by the 2D image segmentation model can also be detected in LiDAR, allowing LDLS to recognize a far greater variety of objects than possible in previous works.
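The diffusion step described above can be illustrated with a generic semi-supervised label-propagation sketch. This is in the spirit of LDLS, not the exact algorithm from the paper: the function name, the affinity matrix construction, and the clamped-seed iteration scheme here are all illustrative assumptions.

```python
import numpy as np

def diffuse_labels(W, seed_labels, n_iters=50):
    """Diffuse class labels over a graph (a generic label-propagation
    sketch, not the exact LDLS algorithm).

    W           : (N, N) nonnegative affinity matrix over graph nodes
                  (e.g. LiDAR points connected to 2D image pixels).
    seed_labels : (N, C) one-hot rows for labeled nodes (pixels labeled
                  by the 2D segmentation model), zero rows for
                  unlabeled nodes (LiDAR points).
    """
    # Row-normalize the affinity matrix into a transition matrix.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    is_seed = seed_labels.sum(axis=1) > 0
    F = seed_labels.astype(float).copy()
    for _ in range(n_iters):
        F = P @ F                           # diffuse label scores along edges
        F[is_seed] = seed_labels[is_seed]   # clamp the labeled seed nodes
    return F.argmax(axis=1)                 # predicted class per node
```

After enough iterations, each unlabeled node inherits the label that dominates among the seeds it is most strongly connected to, which is how 2D segmentation labels can flow onto 3D points without any 3D training data.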

We then present a novel vision-based 3D object detection algorithm, which can bypass the expensive LiDAR signal or serve as an auxiliary system to LiDAR-based detectors. The key insight is to apply stereo depth estimation to pairs of 2D images and back-project the depths into a 3D point cloud, which we call pseudo-LiDAR. With pseudo-LiDAR, we can essentially apply any existing LiDAR-based algorithm for 3D object detection, leading to a 300% performance improvement over the previous state-of-the-art vision-based detector.
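The back-projection step above can be sketched with the standard pinhole camera model: each pixel (u, v) with estimated depth z maps to a 3D point via the camera intrinsics. This is a minimal illustration, assuming a single dense depth map and made-up intrinsics (fx, fy, cx, cy); it is not the paper's implementation.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, in meters) into an
    (H*W, 3) point cloud using the pinhole camera model.
    fx, fy are focal lengths in pixels; cx, cy is the principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # right, in camera coordinates
    y = (v - cy) * z / fy   # down, in camera coordinates
    # Stack into rows of [x, y, z], one row per pixel.
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Once the depths are in this point-cloud form, they can be fed to a LiDAR-based 3D detector as if they had come from a laser scanner, which is the core idea behind pseudo-LiDAR.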

Bio: Wei-Lun (Harry) Chao is a Postdoctoral Associate in Computer Science at Cornell University working with Kilian Q. Weinberger and Mark Campbell. His research interests are in machine learning and its applications to computer vision, natural language processing, artificial intelligence, and healthcare. His recent work has focused on robust autonomous driving. He received a Ph.D. degree in Computer Science from the University of Southern California. He will be joining the Ohio State University as an assistant professor in Computer Science and Engineering in Fall 2019.

Brian Wang is a third-year MAE PhD student at Cornell, in Mark Campbell’s research group. His research interests include vision- and LiDAR-based perception, probabilistic tracking and estimation, autonomous driving, and human-robot interaction.

Implicit Communication of Actionable Information in Human-AI Teams

Claire Liang, Cornell University

4/30/19

Humans expect their collaborators to look beyond the explicit interpretation of their words. Implicature is a common form of implicit communication that arises in natural language discourse when an utterance leverages context to imply information beyond what the words literally convey. Whereas computational methods have been proposed for interpreting and using different forms of implicature, its role in human and artificial agent collaboration has not yet been explored in a concrete domain. The results of this paper provide insights into how artificial agents should be structured to facilitate natural and efficient communication of actionable information with humans. We investigated implicature by implementing two strategies for playing Hanabi, a cooperative card game that relies heavily on communication of actionable implicit information to achieve a shared goal. In a user study with 904 completed games and 246 completed surveys, human players randomly paired with an implicature AI were 71% more likely to think their partner was human than players paired with a non-implicature AI. These teams demonstrated game performance similar to other state-of-the-art approaches.

Design and Control of the DONUts: A Scalable, Self-Reconfigurable Modular Robot with Compliant Modules

Nialah Wilson, Steven Ceron

5/7/19

Modular self-reconfigurable robots are composed of active modules capable of rearranging their connection topology to adapt to dynamic environments, changing task settings, and partial failures. The research challenges are considerable, covering hardware design with inexpensive, simple fabrication and maintenance, durable mechanical parts, and efficient power management, as well as planning and control for scalable autonomy. We present a new planar, modular robot with advantageous scaling properties in both fabrication and operation. Modules consist primarily of a flexible printed circuit board wrapped in a loop, which eases assembly and introduces compliance for safe interaction with external objects and other modules, as well as configuration of large-scale lattice structures beyond what the manufacturing tolerances would otherwise allow. We further present ongoing work on coordination schemes that leverage these features for basic autonomous behaviors, including distributed shape estimation and gradient tracking in cluttered environments. This work brings us a step closer to the ambition of robust autonomous robots capable of exploring cluttered, unpredictable environments.