Robotics Seminar Spring 2019

Beyond Mere Human-Robot Interaction

Malte Jung, Cornell University

1/22/19

Human-robot interaction research to date has been dominated by laboratory studies, largely examining a single human interacting with a single robot. This research has helped establish a fundamental understanding of human-robot interaction, how specific design choices affect interactions with robots, and how novel mechanisms or computational tools can be used to improve HRI. The predominant focus of this growing body of work, however, stands in stark contrast to the complex social contexts in which robots are increasingly placed. As a result, we have a limited understanding of how groups of people will interact with robots and how robots will affect how people interact with each other in groups. I will provide an overview of recent research performed at the Robots in Groups lab, which addresses questions about human-robot collaboration with groups of people.

A Discussion About Issues of Ethics in Robotics

Ross Knepper, Cornell University

1/29/19

Every engineer has a duty to be aware of the ethical implications of their work.  How could their technologies be used or misused?  What are their impacts on society?  Robotics technologies have the potential to transform society, with impacts on the economy, social relationships, caregiving, jobs and work, safety, and much more.  Please come prepared with questions and thoughts about the consequences of robots for society and the world.

What is the best way to validate robotics research?

Ross Knepper, Cornell University

2/5/19

What makes robotics robotics?  What does it take to validate our robots?  There is a natural tension between building real robots and benchmarking robot algorithms.  Real robot tests do not easily scale to large numbers, meaning that it is hard to take advantage of tools and techniques used by other fields (deep learning, statistical power).  On the other hand, simulations make many approximations and simplifying assumptions that mean algorithms designed in simulation may achieve lackluster performance on real robot hardware.  A standard formula in robotics papers is “proof by video”, which reviewers may give more weight than it deserves.  A new development in the robotics field is a growing interest from computer vision researchers.  They bring with them a culture of standardized benchmarks, large-scale datasets, and deep learning techniques.  They deploy robots to navigate within and even interact with the real world, and they are developing new datasets and benchmarks for use in robotics problems.  We will discuss how vision is changing robotics research as well as how robotics is changing vision research.  How will results be evaluated in the future within these neighboring cultures?

Spatial Maps of Dynamics, Long-Term Human Motion Prediction and the Next Best Smelling Robots

Achim J. Lilienthal, Örebro University

2/12/19

Abstract: In this presentation I will first briefly introduce the Mobile Robotics and Olfaction Lab at Örebro University, Sweden. Grounded in a basic research interest in perception systems, we study, as the name suggests, topics in mobile robotics and mobile robot olfaction (gas-sensitive robots). Following this division, I will present recent work addressing the creation and use of spatial Maps of Dynamics (MoDs) and long-term human motion prediction (mobile robotics), as well as recent developments including bout-guided gas source localization and robot-assisted gas tomography (mobile robot olfaction).

Bio: Prof. Achim J. Lilienthal is head of the Mobile Robotics and Olfaction Lab at Örebro University, Sweden. His research interests are mobile robot olfaction, rich 3D perception, navigation of autonomous transport robots, human-robot interaction, and mathematics education research. Achim Lilienthal obtained his Ph.D. in computer science from Tübingen University, Germany, and his M.Sc. in Physics from the University of Konstanz, Germany. His Ph.D. thesis addressed gas distribution mapping and gas source localisation with mobile robots; his M.Sc. thesis was concerned with structure analysis of (C60)n+ clusters using gas phase ion chromatography.

Hybrid Body Craft: Convergence of Function, Culture, and Aesthetics on the Skin Surface

Cindy Hsin-Liu Kao, Cornell University

2/19/19

Sensor device miniaturization and breakthroughs in novel materials are allowing for the placement of technology increasingly close to our physical bodies. However, unlike all other media, the human body is not simply another surface for enhancement – it is the substance of life, one that encompasses the complexity of individual and social identity. The human body is inseparable from the cultural, the social, and the political, yet technologies for placement on the body have often been developed separately from these considerations, with an emphasis on engineering breakthroughs. My work investigates opportunities for cultural interventions in the development of technologies that move beyond wearable clothing and accessories, and that are purposefully designed to be placed directly on the skin surface. How can we design emerging on-body interfaces to reflect existing cultural practices of decorating the body, with the intent to expand the agency of self-expression? I examine this question through the development of a series of research artifacts, and the contextualization of a design space for culturally sensitive design.

Body Craft is defined as existing cultural, historical, and fashion-driven practices and rituals associated with body decoration, ornamentation, and modification. As its name implies, Hybrid Body Craft (HBC) is an attempt to hybridize technology with body craft materials, form factors, and application rituals, with the intention of integrating existing cultural practices with new technological functions that have no prior relationships with the human body. With this grounding, HBC seeks to support the generation of future technologized customs in which technology is integrated into culturally meaningful body adornments.

In this talk, I will introduce six example artifacts which encompass the integration of technologies such as on-body robotics, flexible electronics, and bio-compatible materials into existing Body Craft customs. These artifacts contribute novel, culturally inspired form factors, and introduce unprecedented interaction modalities for on-body technologies. A design space is created in which to examine shifts in the communicative qualities of these Body Crafts due to the integration of technology, as well as new forms of self-expression that have emerged. The Hybrid Body Craft research practice contributes a culturally sensitive lens to the design of on-body technologies. The intention is to expand their lifetimes and purposes beyond mere novelty and into the realms of cultural customs and traditions.

February Break

No Seminar

2/26/19

Group Discussion about the Future of Robotics

Ross Knepper, Cornell University & Dylan Shell, Texas A&M University

3/5/19

Robotics research is at a tipping point.  Until now, robotics has largely taken a frontier mentality, akin to American westward expansion in the nineteenth century.  Manifest Destiny was the belief that Americans were destined to conquer the continent from coast to coast.  Settlers packed up their belongings and moved westward to build a homestead and plant their own personal flag on 160 acres of land.  Similarly, flag-planting has long characterized much of robotics research, with many systems built to showcase firsts in the field (e.g., the first flat-pack furniture assembly robot).  We have arrived at a tipping point now because industry has decided to make major investments in robotics engineering.  Academia's flag-planting papers of the past serve the needs of industry poorly.

In this seminar, we will hold a group discussion about the future of robotics research.  We will begin by discussing the following questions.

  1. Robotics research is traditionally splintered by the flag-planting mentality.  There is little incentive to replicate results, and there are many small research problems.  Does the end of the frontier necessitate that we work on fewer, bigger problems?  How can we all do a better job of making our results relevant and applicable to one another?
  2. Industry is better than academia at engineering.  Is a shift towards a more scientific approach to robotics research inevitable?  What are the consequences of a scientific outlook?
  3. How can the work we do in academia be made more relevant to the needs of industry while continuing to do what universities do best?  Is this what is fueling the specialized robotics degree programs that are currently proliferating?
  4. Turner’s Frontier Thesis postulates that the fundamental American character is a consequence of the frontier movement.  Does the frontier movement in robotics portend a similarly distinct character for post-frontier robotics research?  If so, what are the specific consequences?

Drinking and Driving

Sunghwan (Sunny) Jung, Cornell University

3/12/19

Fluids are vital to all life forms, and organisms have presumably adapted their behaviors or features in response to mechanical forces to achieve better performance. In this talk, I will discuss two biological problems in which animals exploit mechanics principles. First, we investigated how animals transport water into the mouth using an inertia-driven (lapping) mechanism. Dogs accelerate the tongue upward (up to 4 g) to create a larger water column while drinking, whereas cats use a tongue motion with relatively small acceleration. We found that, in order to maximize the water intake per lap, both cats and dogs close the jaw at the column break-up time governed by either unsteady or steady inertia. In the context of animal drinking, I will also talk about how bats drink water on the wing from a mechanics point of view, and describe ongoing design work to develop a bat-inspired vehicle to monitor water quality along rivers or lakes. Second, we studied how birds with long slender necks plunge-dive and survive the impact. Physical experiments with an elastic beam as a model for the neck attached to a skull-like cone revealed the limits for the stability of the neck during a plunge-dive. We found that the small angle of the bird’s beak and the strong muscles in the neck predominantly reduce the likelihood of injury during high-speed plunge-dives. As an exercise in bio-inspired engineering, we designed a bird-inspired projectile to explore underwater environments without propulsive mechanisms.

Robots in Our Midst

Wendy Ju, Cornell University

3/19/19

The advent of autonomous technologies is both exciting and alarming. Ironically, the success or failure of such systems will very much depend on how they interact with people: the need for strong communication, interface, and interaction design grows larger rather than smaller in the age of autonomy. In my Future Autonomy Research Lab, we are looking at how people will interact with robots and vehicles in the future. We are particularly concerned with the joint performance of tasks, recognizing human states, and opportunities for learning and adaptation. By using simulation techniques, we can prototype and test interactions to understand how best to design our future.

Model for Grounding Instructions to Plans

Nakul Gopalan, Brown University

3/26/19

Abstract: In order to easily and efficiently collaborate with humans, robots must learn to complete tasks specified using natural language.  Natural language provides an intuitive interface for a layperson to interact with a robot without the person needing to program it, which might require expertise.  Natural language instructions can easily specify goal conditions or provide the guidance and constraints required to complete a task.  Given a natural language command, a robot needs to ground the instruction to a plan that can be executed in the environment.  This grounding can be challenging, especially when we expect robots to generalize to novel natural language descriptions and novel task specifications while being given as little prior information as possible.  In this talk, I will present a model for grounding instructions to plans.  Furthermore, I will present two strategies under this model for language grounding and compare their effectiveness.  During the talk, we will explore approaches using deep learning, semantic parsing, predicate logic, and linear temporal logic for task grounding and execution.
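
As a toy illustration of what grounding a command to a plan specification can mean, here is a minimal Python sketch that maps simple commands to LTL-style goal formulas through a hand-written lexicon. The lexicon, propositions, and formula templates are invented for illustration; the talk's actual model learns such mappings rather than hard-coding them.

    # Hypothetical sketch: ground a natural-language command to an LTL-style
    # goal formula via a fixed lexicon. Everything here is illustrative.
    LEXICON = {
        "red room": "in_red_room",
        "blue room": "in_blue_room",
        "the block": "holding_block",
    }

    def ground(command: str) -> str:
        """Return an LTL-style formula for simple 'go to X' / 'avoid X' commands."""
        command = command.lower()
        for phrase, prop in LEXICON.items():
            if phrase in command:
                if "avoid" in command or "never" in command:
                    return "G(!" + prop + ")"   # globally, never satisfy prop
                return "F(" + prop + ")"        # eventually satisfy prop
        raise ValueError("cannot ground: " + command)

    print(ground("Go to the red room"))         # F(in_red_room)
    print(ground("Never enter the blue room"))  # G(!in_blue_room)

A planner can then search for an execution satisfying the formula; generalizing beyond such a fixed lexicon to novel descriptions is exactly where the learned approaches discussed in the talk come in.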

Bio: Nakul Gopalan is a graduate student in the H2R lab at Brown University. His interests are in the problems of language grounding for robotics, and abstractions within reinforcement learning and planning. He has an M.Sc. in Computer Science from Brown University (2015) and an M.Sc. in Information and Communication Engineering from T.U. Darmstadt (2013) in Germany.  He completed a Bachelor of Engineering from R.V. College of Engineering in Bangalore, India (2008). His team recently won the Brown-Hyundai Visionary Challenge for their proposal to use Mixed Reality and Social Feedback for human-robot collaboration.

Spring Break

No Seminar

4/2/19

New complexity results and performance-guaranteed algorithms for multirobot navigation of communication-restricted environments

Jacopo Banfi, Cornell University

4/9/19

Deploying a team of mobile robots can provide a valid alternative to employing human operators to carry out different kinds of information-gathering tasks, such as environmental monitoring, exploration, and patrolling. Frequently, the proposed coordination mechanisms work under the assumption that communication between robots is possible between any two locations of the environment. However, real operational conditions may require deploying robots equipped only with local, limited-range communication modules. In this talk, I will first present a general graph-based framework for planning multirobot missions subject to different kinds of communication constraints. Then, I will focus on a few selected problems from the literature that can be framed within this planning framework (such as computing a set of joint paths ensuring global connectivity at selected times), and present either new complexity results or performance-guaranteed algorithms that compute good-quality solutions to these problems in reasonable time.
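
To give a concrete flavor of the kind of constraint involved, here is a minimal sketch, assuming a disk communication model of range R; the function names and the model itself are illustrative stand-ins, not taken from the talk.

    # Minimal sketch: is a team configuration globally connected under a
    # disk communication model? A planner can impose such a check on the
    # robots' joint positions at selected times along their paths.
    from collections import deque
    import math

    def is_connected(positions, comm_range):
        """BFS over the communication graph induced by limited-range links."""
        n = len(positions)
        if n <= 1:
            return True
        seen, frontier = {0}, deque([0])
        while frontier:
            i = frontier.popleft()
            for j in range(n):
                if j not in seen and math.dist(positions[i], positions[j]) <= comm_range:
                    seen.add(j)
                    frontier.append(j)
        return len(seen) == n

    print(is_connected([(0, 0), (1, 0), (2, 0)], comm_range=1.0))    # True: a chain
    print(is_connected([(0, 0), (1, 0), (2.5, 0)], comm_range=1.0))  # False: one robot cut off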

Simulation-based control: a case study

Andy Ruina & Matt Sheen, Cornell University

4/16/19

  1. Simulations are imperfect.  So there is a question about how to use simulations for control.  Certainly, the better the simulation, the easier the control problem, so there is a need for better simulators.  We are working on that.
  2. But we live in the world we live in, and have the imperfect simulators we have.  How to live with that?  We have chosen a model problem: the game of QWOP.  In this model system, the QWOP game is a model of reality, and our various simulations of the game are models of our models of reality.  Kind of meta.  How well can we do at controlling QWOP using imperfect models of QWOP?  And how do we do that?  This seminar is about our successes at this model of modeling.  (A toy sketch of the plan-with-an-imperfect-model idea appears after this list.)  To get the most from this seminar, spend 10 minutes playing QWOP before the seminar.  Google QWOP on your phone or computer.  In short, it’s not so easy.  And our synthetic play is also only so good so far.
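
Here is the toy sketch promised above: random-shooting model-predictive control on a one-dimensional stand-in problem. The "reality" and "model" dynamics are invented, and this is not the QWOP controller itself; it only illustrates planning in an imperfect model and replanning against reality.

    # Plan in an imperfect model, act in "reality", replan every step.
    import random

    def real_step(x, u):            # stand-in for reality (the QWOP game)
        return x + 0.1 * u + 0.01   # includes drift the model doesn't know about

    def model_step(x, u):           # imperfect simulator: misses the drift
        return x + 0.1 * u

    def plan_first_action(x0, horizon=10, samples=200):
        """Random shooting: sample action sequences, roll them out in the
        model, and return the first action of the best sequence."""
        best_u, best_cost = 0.0, float("inf")
        for _ in range(samples):
            us = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
            x = x0
            for u in us:
                x = model_step(x, u)
            cost = abs(x - 1.0)     # objective: drive the state to 1.0
            if cost < best_cost:
                best_cost, best_u = cost, us[0]
        return best_u

    x = 0.0
    for _ in range(50):             # replanning absorbs the model error
        x = real_step(x, plan_first_action(x))
    print("final state: %.3f" % x)  # near 1.0 despite the imperfect model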

Leveraging Vision for 3D Perception in Robotics

Wei-Lun (Harry) Chao & Brian Wang, Cornell University

4/23/19

Abstract: Many robotics applications require accurate 3D perception: for example, an autonomous car determining the positions of other cars on the road, or an industrial manipulator robot recognizing an object it is supposed to pick up. Recent advancements driven by deep neural networks have led to remarkable performance on 2D image processing tasks. However, the properties of 3D sensor data make it challenging to realize similar performance in 3D. In this talk, we present two recent works that leverage successes in 2D vision for 3D perception tasks.

We first present Label Diffusion LiDAR Segmentation (LDLS), an algorithm for point-level object recognition in 3D LiDAR point clouds. LDLS uses information from aligned camera images to avoid any need for training on labeled 3D data. Our method applies a pre-trained 2D image segmentation model to a camera image, then diffuses information from the image into a LiDAR point cloud using a semi-supervised graph learning algorithm. Any object class that is recognized by the 2D image segmentation model can also be detected in LiDAR, allowing LDLS to recognize a far greater variety of objects than was possible in previous works.
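
As a rough sketch of the diffusion idea (the graph construction and weighting below are simplified stand-ins, not the published LDLS algorithm): seed the points whose image projections fall inside a 2D mask, then repeatedly average label scores over a k-nearest-neighbor graph of the cloud while clamping the seeds.

    # Simplified label propagation over a kNN graph of a point cloud.
    import numpy as np

    def diffuse_labels(points, seed_scores, k=5, iters=50):
        """points: (N, 3); seed_scores: (N, C) with one-hot rows for seeded
        points and zero rows otherwise. Returns per-point class scores."""
        d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        nbrs = np.argsort(d, axis=1)[:, 1:k + 1]      # k nearest neighbors
        seeded = seed_scores.sum(axis=1) > 0
        scores = seed_scores.astype(float).copy()
        for _ in range(iters):
            scores = scores[nbrs].mean(axis=1)        # diffuse over neighbors
            scores[seeded] = seed_scores[seeded]      # clamp the seeded points
        return scores

    # Toy example: two well-separated clusters, one seed point in each class.
    pts = np.vstack([np.random.randn(20, 3), np.random.randn(20, 3) + 8.0])
    seeds = np.zeros((40, 2))
    seeds[0, 0] = 1.0    # a point in cluster 1, seeded as class 0
    seeds[20, 1] = 1.0   # a point in cluster 2, seeded as class 1
    labels = diffuse_labels(pts, seeds).argmax(axis=1)
    print(labels[:20].sum(), labels[20:].sum())       # ~0 and ~20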

We then present a novel vision-based 3D object detection algorithm, which can either bypass the expensive LiDAR sensor or serve as an auxiliary system to LiDAR-based detectors. The key insight is to apply stereo depth estimation to pairs of 2D images and back-project the depths into a 3D point cloud, which we call pseudo-LiDAR. With pseudo-LiDAR, we can essentially apply any existing LiDAR-based algorithm for 3D object detection, leading to a 300% performance improvement over the previous state-of-the-art vision-based detector.
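
The back-projection step itself is just the pinhole camera model. A minimal sketch follows; the intrinsics are made-up placeholders, and the real pipeline first estimates the depth map from a stereo pair.

    # Back-project a dense depth map into a 3D "pseudo-LiDAR" point cloud.
    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """depth: (H, W) metric depths. Returns (H*W, 3) points in the camera
        frame (x right, y down, z forward), via the pinhole model."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    depth = np.full((4, 6), 10.0)   # toy flat depth map, 10 m everywhere
    cloud = depth_to_point_cloud(depth, fx=700.0, fy=700.0, cx=3.0, cy=2.0)
    print(cloud.shape)              # (24, 3): ready for any LiDAR-based detector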

Bio: Wei-Lun (Harry) Chao is a Postdoctoral Associate in Computer Science at Cornell University working with Kilian Q. Weinberger and Mark Campbell. His research interests are in machine learning and its applications to computer vision, natural language processing, artificial intelligence, and healthcare. His recent work has focused on robust autonomous driving. He received a Ph.D. degree in Computer Science from the University of Southern California. He will be joining the Ohio State University as an assistant professor in Computer Science and Engineering in Fall 2019.

Brian Wang is a third-year MAE PhD student at Cornell, in Mark Campbell’s research group. His research interests include vision- and LiDAR-based perception, probabilistic tracking and estimation, autonomous driving, and human-robot interaction.

Implicit Communication of Actionable Information in Human-AI teams

Claire Liang, Cornell University 

4/30/19

Humans expect their collaborators to look beyond the explicit interpretation of their words. Implicature is a common form of implicit communication that arises in natural language discourse when an utterance leverages context to imply information beyond what the words literally convey. Whereas computational methods have been proposed for interpreting and using different forms of implicature, its role in human and artificial agent collaboration has not yet been explored in a concrete domain. The results of this work provide insights into how artificial agents should be structured to facilitate natural and efficient communication of actionable information with humans. We investigated implicature by implementing two strategies for playing Hanabi, a cooperative card game that relies heavily on communication of actionable implicit information to achieve a shared goal. In a user study with 904 completed games and 246 completed surveys, human players randomly paired with an implicature AI were 71% more likely to think their partner was human than players paired with a non-implicature AI. These teams demonstrated game performance similar to other state-of-the-art approaches.
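
To make the distinction concrete, here is a toy illustration (not the study's agent): the literal content of a Hanabi color hint is just "these cards are red", while a common playing convention carries the implicature "play the newest touched card".

    # Toy contrast between literal and implicature readings of a Hanabi hint.
    def literal_reading(hand, hint_color):
        """Literal meaning: which card slots the hint touches."""
        return [i for i, (color, rank) in enumerate(hand) if color == hint_color]

    def implicature_reading(hand, hint_color):
        """Conventional implicature: the newest touched card is playable."""
        touched = literal_reading(hand, hint_color)
        return max(touched) if touched else None   # highest index = newest draw

    hand = [("red", 3), ("blue", 1), ("red", 1)]   # index 2 is the newest card
    print(literal_reading(hand, "red"))            # [0, 2]: what was said
    print(implicature_reading(hand, "red"))        # 2: what was meant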

Design and Control of the DONUts: A Scalable, Self-Reconfigurable Modular Robot with Compliant Modules

Nialah Wilson & Steven Ceron, Cornell University

5/7/19

Modular self-reconfigurable robots are composed of active modules capable of rearranging their connection topology to adapt to dynamic environments, changing task settings, and partial failures. The research challenges are considerable, spanning hardware design (inexpensive and simple fabrication and maintenance, durable mechanical parts, and efficient power management) as well as planning and control for scalable autonomy. We present a new planar, modular robot with advantageous scaling properties in both fabrication and operation. Modules consist primarily of a flexible printed circuit board wrapped into a loop, which eases assembly and introduces compliance for safe interaction with external objects and other modules, as well as for the configuration of large-scale lattice structures beyond what the manufacturing tolerances would otherwise allow. We further present ongoing work on coordination schemes that leverage these features for basic autonomous behaviors, including distributed shape estimation and gradient tracking in cluttered environments; a toy sketch of the latter appears below. This work brings us a step closer to the ambition of robust autonomous robots capable of exploring cluttered, unpredictable environments.
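
As a toy sketch of the gradient-tracking behavior mentioned above (the scalar field, lattice motion, and greedy rule are all invented for illustration; the actual modules coordinate through local sensing and communication):

    # Toy gradient tracking: each module samples a scalar field at its
    # neighboring lattice cells and greedily steps toward the highest value.
    import random

    def field(p):                                  # invented scalar field
        x, y = p
        return -((x - 5) ** 2 + (y - 5) ** 2)      # peak at (5, 5)

    modules = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(5)]
    for _ in range(30):
        modules = [
            max([(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)], key=field)
            for (x, y) in modules
        ]
    print(modules)                                 # all modules end near (5, 5)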

The schedule is maintained by Corey Torres (ct635@cornell.edu) and Ross Knepper (rak@cs.cornell.edu). To be added to the mailing list, please follow the e-list instructions for joining a mailing list. The name of the mailing list is robotics-l. If you have any questions, please email ct635@cornell.edu.