Robotics Seminar Fall 2018

Additive Manufacturing of Soft Robots

Shuo Li, Cornell University

8/28/18

This talk will present multidisciplinary work spanning material composites and robotics. We have created new types of actuators, sensors, displays, and additive manufacturing techniques for soft robots and haptic interfaces. For example, we now use stretchable optical waveguides as sensors for high accuracy, repeatability, and material compatibility with soft actuators. For displaying information, we have created stretchable, elastomeric light-emitting displays as well as texture-morphing skins for soft robots. We have also created a new type of soft actuator based on molding of foams, developed new chemical routes for stereolithography printing of silicone and hydrogel elastomer based soft robots, and implemented deep learning in stretchable membranes for interpreting touch. All of these technologies depend on the iterative and complex feedback between material and mechanical design. I will describe this process, the present state of the art, and future opportunities for science in the space of additive manufacturing of elastomeric robots.

Scaling up autonomous flight

Adam Bry, Skydio

9/5/18

NOTE: Special time and location: 5pm on Wednesday in Upson 106

Abstract: Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the research we’re doing at Skydio, along with the challenges involved in building a robust robotics software system that needs to work at scale.

Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio, he helped start Project Wing at Google[x], where he worked on the flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics. He was named to the MIT Tech Review 35 list in 2016.

Architectural Robotics: Ecosystems of Bits, Bytes and Biology

9/4/18 Bonus Seminar this week!

4-5 p.m. in 203 Thurston Hall

Info here: https://www.mae.cornell.edu/news/events.cfm?event=18875&view=future&y=2018&m=8&d=31

Some Thoughts on Model Reduction for Robotics

Andy Ruina, Cornell University

9/11/18

These are unpublished thoughts, actually more questions than thoughts, and not all that well informed, so audience feedback is welcome, especially from people who know how to formulate machine learning problems (I already know, sort of, how to formulate MatSheen learning problems).

One posing of many robotics control problems is as a general problem in “motor control” (a biological term, I think). Assume one has a machine and the best model (something one can compute simulations with) one can actually get of the machine, its environment, its sensors, and its computation abilities. One also has some sense of the uncertainty in various aspects of these.

The general motor-control problem is this: given a history of sensor readings and requested goals (commands), and all of the givens above, what computation should be done to determine the motor commands so as to best achieve the goals? “Best” means most accurately and most reliably, by whatever measures one chooses.

If one poses this as an optimization problem over the space of all controllers (all mappings from command and sensor histories to the set of commands), it is too big a problem, even if coarsely discretized. Hence, everyone applies all manner of assumed simplifications before attempting to make a controller. The question here is this: can one pose an optimization problem for the best simplification? Can one pose it in a way such that finding a useful approximate solution could be useful?

In bipedal robots there are various classes of simplified models used by various people to attempt to control their robots. Might there be a rational way to choose between them, or to find better ones? As abstract as this all sounds, perhaps thinking about such things could help us make better walking-robot controllers.
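
As a rough formalization of this posing (my notation, not from the talk), the controller-optimization problem might be written as:

```latex
% A hedged formalization: a controller pi maps sensor and command
% histories to motor commands; the "too big" problem is the search
% over the unrestricted class Pi of all such mappings.
\[
  \pi^{*} = \arg\min_{\pi \in \Pi} \;
  \mathbb{E}\bigl[ J(x_{0:T},\, u_{0:T}) \bigr]
  \quad \text{subject to} \quad
  u_t = \pi(z_{0:t},\, g_{0:t}),
\]
```

where x is the modeled state, u the motor commands, z the sensor history, g the command (goal) history, J the chosen measure of accuracy and reliability, and Pi the space of all controllers. In these terms, the “best simplification” question asks for a restricted class of controllers, or a reduced model, that is drastically cheaper to search over yet gives up little achievable cost.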

Big-data machine learning meets small-data robotics

Group Discussion

9/18/18

Abstract: Machine learning techniques have transformed many fields, including computer vision and natural language processing, where plentiful data can be cheaply and easily collected and curated.  Training data in robotics is expensive to collect and difficult to curate or annotate.  Furthermore, robotics cannot be formulated as simply a prediction problem in the way that vision and NLP can often be.  Robots must close the loop, meaning that we ask our learning techniques to consider the effect of possible decisions on future predictions.  Despite exciting progress in some relatively controlled (toy) domains, we still lack good approaches to adapting modern machine learning techniques to the robotics problem.  How can we overcome these hurdles?  Please come prepared to discuss.  Here are some potential discussion topics:

  1. Are robot farms like the one at Google a good approach?  Google has dozens of robots picking and placing blocks 24/7 to collect big training data in service of training traditional models.
  2. Since simulation allows the cheap and easy generation of big training data, many researchers are attempting domain transfer from simulation to the real robot.  Should we be attempting to make simulators photo-realistic with perfect physics?  Alternatively, should we instead vary simulator parameters to train a more general model?  (A minimal sketch of this parameter-randomization strategy appears after this list.)
  3. How can learned models adapt to unpredictable and unstructured environments such as people’s homes?  When you buy a Rosie the Robot, is it going to need to spend a week exploring the house, picking up everything, and tripping over the cat to train its models?
  4. If we train mobile robots to automatically explore and interact with the world in order to gather training data at relatively low cost, the data will be biased by choices made in building that autonomy.  Similar to other recent examples in which AI algorithms adopt human biases, what are the risks inherent in biased robot training data?
  5. What role does old-fashioned robotics play?  We have long learned to build state estimators, planners, and controllers by hand.  Given that these work pretty well, should we be building learning methods around them?  Or should they be thrown out and the problems solved from scratch with end-to-end deep learning methods?
  6. What is the connection between machine learning and hardware design?  Can a robot design co-evolve with its algorithms during training?  Doing so would require us to encode design specifications much more precisely than has been done in the past, but so much of design practice resists specification due to its complexity.  Specifically, can design be turned into a fully-differentiable neural network structure?
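
To make topic 2 concrete, here is a minimal sketch of the “vary simulator parameters” strategy (domain randomization). The simulator interface, parameter names, and ranges are hypothetical placeholders chosen for illustration, not any particular lab’s API.

```python
import random

# Hypothetical physics parameters and ranges; a real system would
# randomize whatever its simulator actually exposes.
PARAM_RANGES = {
    "friction":     (0.5, 1.5),   # scale factor on contact friction
    "mass_scale":   (0.8, 1.2),   # scale factor on link masses
    "sensor_noise": (0.0, 0.05),  # std. dev. of additive sensor noise
    "latency_s":    (0.0, 0.04),  # actuation delay in seconds
}

def sample_sim_params():
    """Draw one random physics configuration for an episode."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def collect_randomized_episodes(make_env, policy, n_episodes):
    """Collect experience across many randomized simulators, so a model
    trained on it cannot overfit any single (inevitably wrong) physics
    setting. `make_env` builds a simulator from sampled parameters."""
    episodes = []
    for _ in range(n_episodes):
        env = make_env(**sample_sim_params())  # fresh physics each episode
        obs, done, traj = env.reset(), False, []
        while not done:
            action = policy(obs)
            obs, reward, done = env.step(action)
            traj.append((obs, action, reward))
        episodes.append(traj)
    return episodes
```

The hope, per topic 2, is that a model trained across the whole distribution of simulators treats the real robot as just another sample from that distribution.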

Please bring your own questions for the group to discuss, too!

Sensing + Interaction On and Around the Body

Cheng Zhang, Cornell University

9/25/18

Abstract: Wearables are a significant part of the new generation of computing. Compared with more traditional computers (e.g., laptops, smartphones), wearable devices are more readily available for immediate use but significantly smaller in size, creating new opportunities and challenges for on-body sensing and interaction. My holistic research approach (from problem understanding to invention to implementation and evaluation) investigates how to effectively exchange information between humans, their environment, and wearables. My Ph.D. thesis focuses on novel wearable input using on-body sensing through various high-level interaction gestures, low-level input events, and a redesign of the interaction. In this talk, I will highlight three projects. The first is a wearable ring that allows the user to input over 40 unistroke gestures (including text and numbers); it also shows how to overcome a limited training set size, a common challenge in applying machine learning techniques to real systems, through an understanding of the characteristics of data and algorithms. The second project demonstrates how to combine a strong, yet incomplete, understanding of on-body signal propagation physics with machine learning to create novel yet practical sensing and interaction techniques. The third project is an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space through continuous high-resolution tracking of the finger’s absolute 3D position; it demonstrates how to solve a technical interaction challenge through a deep understanding of signal propagation. I will also share my vision of future opportunities for on-body sensing and interaction, especially in high-impact areas such as health, activity recognition, AR/VR, and more futuristic interaction paradigms between humans and the increasingly connected environment.

Bio: Cheng Zhang is an assistant professor in Information Science at Cornell University. He received his Ph.D. in Computer Science at the Georgia Institute of Technology, advised by Gregory Abowd (IC) and Omer Inan (ECE). His research focuses on enabling the seamless exchange of information among humans, computers, and the environment, with a particular emphasis on the interface between humans and wearable technology. His Ph.D. thesis presents 10 different novel input techniques for wearables, some leveraging commodity devices while others incorporate new hardware. His work blends an understanding of signal propagation on and around the body with, when necessary, appropriate machine learning techniques. His work has resulted in over a dozen publications in top-tier conferences and journals in the field of Human-Computer Interaction and Ubiquitous Computing (including two best paper awards), as well as over six pending U.S. and international patents. His work has attracted the attention of various media outlets, including ScienceDaily, DigitalTrends, ZDNet, New Scientist, RT, TechRadar, Phys.org, Yahoo News, Business Insider, and MSN News. The work that leverages commodity devices has had significant commercial impact; his work on novel smartwatch interaction was licensed by the Canadian startup ProximityHCI to improve the smartwatch interaction experience.

Short Student Talks

10/2/18

Presenter 1: Alap Kshirsagar, Hoffman Research Group

Title: Monetary-Incentive Competition between Humans and Robots: Experimental Results

Abstract: In this talk, I will describe an experiment studying monetary-incentive competition between a human and a robot. In this first-of-its-kind experiment, participants (n=60) competed against an autonomous robot arm in ten competition rounds, carrying out a monotonous task to win monetary rewards. For each participant, we manipulated the robot’s performance and the reward in each round. We found a small discouragement effect, with human effort decreasing as robot performance increased, significant at the p < 0.005 level. We also found a positive effect of the robot’s performance on its perceived competence, a negative effect on the participants’ liking of the robot, and a negative effect on the participants’ self-competence, all at p < 0.0001.
These findings shed light on how people may exert work effort and perceive robotic competitors in a human-robot workforce, and could have implications for labor supply decisions and the design of compensation schemes in the workplace. I will also briefly comment on some experimental and statistical analysis practices that we adhered to in this study.

Presenter 2: Carlos Araújo de Aguiar, Green Research Group

Title: transFORM – A Cyber-Physical Environment Increasing Social Interaction and Place Attachment in Underused, Public Spaces

Abstract: The emergence of social networks and apps has reduced the importance of physical space as a locus for social interaction. In response, we introduce transFORM, a cyber-physical environment installed in under-used, outdoor, public spaces. transFORM embodies our understanding of how a responsive, cyber-physical architecture can augment social relationships and increase place attachment. In this talk I critically examine the social interaction problem in the context of our increasingly digital society, present our ambition, and introduce our prototype, which we will iteratively design and test. Cyber-physical interventions at large scale in public spaces are an inevitable future, and this work serves to establish the fundamental terms of this frontier.

Task and Motion Planning: Algorithms, Implementation, and Evaluation

Dylan Shell, Texas A&M University

10/16/18

Abstract: Everyday tasks combine discrete and geometric decision-making. The robotics, AI, and formal methods communities have concurrently explored different planning approaches, producing techniques with different capabilities and trade-offs. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In ongoing work, we are improving the scalability and extensibility of our task-motion planner and developing planner-independent evaluation metrics.

Short Student Talks

10/23/18

Speaker 1: Thais Campos de Almeida, Cornell University (Kress-Gazit Group)

Title: A novel approach to synthesize task-based designs of modular manipulators

Abstract: A remarkable advantage of modular robots is that they can be rearranged to perform several different tasks; however, selecting a new configuration for a specific task can be a complex problem. In this talk, I present a new approach for synthesizing provably correct designs and controls for robotic manipulators given a task description. In our framework, we use tools from the program-synthesis community, which enable us not only to find a design for a feasible task but also to identify feasible and infeasible subtasks within the task and to search for multiple designs that satisfy the entire task. I will also briefly present a new formulation of the inverse kinematics problem used in this work, and compare our approach with the state-of-the-art techniques used to solve this problem.

Speaker 2: Yuhan Hu, Cornell University (Hoffman Group)

Title: Using Skin Texture Change to Design Social Robots

Abstract: Robots designed for social interaction often express their internal and emotional states through nonverbal behavior. Most robots use facial expressions, gestures, locomotion, and tone of voice. In this talk, I will present a new expressive nonverbal channel for social robots in the form of texture-changing skin. This is inspired by biological systems, which frequently respond to external stimuli and display their internal states through skin texture change. I will present the design of the robot and some findings from a user-robot interaction experiment.

Speaker 3: Haron Abdel-Raziq, Cornell University (Petersen Group)

Title: Leveraging Honey Bees as Cyber Physical Systems

Abstract: Honey bees, nature’s premier agricultural pollinators, have proven capable of robust, complex, and versatile operation in unpredictable environments far beyond what is possible with state-of-the-art robotics. Beekeepers and farmers depend heavily on honey bees for successful crop yields, as evidenced by the $150B global pollination industry. This, coupled with the current interest in bio-inspired robotics, has prompted research on understanding honey bee swarms and their behavior both inside and outside the hive. Prior attempts at monitoring bees have been limited to expensive, complicated, short-range, or obstruction-sensitive approaches. By combining traditional engineering methods with the honey bee’s extraordinary capabilities, we present a novel solution for monitoring long-range bee flights that uses a new class of easily manufactured sensors and a probabilistic mapping algorithm. Specifically, the goal is to equip bees with millimeter-scale ASIC “backpacks” that record key flight information, thus transforming a honey bee swarm into a vast cyber-physical system that can acquire data on social insect behavior as well as bust and bloom over large areas. Foraging probability maps will then be developed by applying a simultaneous localization and mapping algorithm to the gathered data. The project is still in its initial phase, so we will discuss the motivation for the project and provide background on the various enabling technologies. We will then discuss a prototype system for gathering data on flight patterns prior to placing the actual technology on a bee. The data yielded from this work will benefit both the scientific community and beekeepers, with knowledge gains spanning low-power micro-scale devices and robotics to improved understanding of how pollination occurs in different environments.
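
As a toy illustration of the kind of probabilistic mapping described above (a sketch under assumed sensor models, not the group’s actual algorithm), estimated per-flight trajectories could be fused into a gridded foraging-probability map with a simple Bayesian log-odds update:

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def update_foraging_map(grid_logodds, visited, p_visit=0.7, p_miss=0.45):
    """Fold one flight into a per-cell foraging-probability grid.

    grid_logodds: 2-D array of log-odds that bees forage in each cell.
    visited: boolean 2-D mask, True where this flight's estimated
             trajectory passed (e.g., recovered from a backpack's data).
    p_visit / p_miss: assumed inverse sensor model -- the probability a
             cell is a forage site given that it was / was not visited.
    """
    grid_logodds[visited] += log_odds(p_visit)   # evidence for foraging
    grid_logodds[~visited] += log_odds(p_miss)   # weak evidence against
    return grid_logodds

# Usage: start uninformed (log-odds 0 == probability 0.5) and fold in
# flights; the visit masks would come from the recorded backpack data.
grid = np.zeros((100, 100))
flights = []  # list of boolean visit masks, one per flight
for mask in flights:
    grid = update_foraging_map(grid, mask)
prob_map = 1.0 / (1.0 + np.exp(-grid))  # back to probabilities
```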

Short Student Talks

10/30/18

Speaker 1: Adam Pacheck, Cornell University

Title: Reactive Composition of Learned Abstractions

Abstract: We want robots to be able to perform high-level reactive tasks and to inform us when they cannot do so given their current skills. In this talk, I present work in which we give a robot a set of skills, automatically generate an abstraction of the preconditions and effects of those skills, and automatically encode the skills in linear temporal logic. A task can then be specified for the robot, and we are able to reason about its feasibility and receive suggestions for repair from the robot if it is infeasible.
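
As a generic illustration of such an encoding (toy notation of mine, not the formulation from this work), a skill a whose learned abstraction has precondition pre_a and effect eff_a over abstract propositions might contribute LTL formulas such as:

```latex
% Toy illustration: executing a skill requires its (learned) precondition
% to hold now, and execution brings about its (learned) effect at the
% next step.
\[
  \square \bigl( \mathit{exec}_a \rightarrow \mathit{pre}_a \bigr)
  \qquad \text{and} \qquad
  \square \bigl( \mathit{exec}_a \rightarrow \bigcirc\, \mathit{eff}_a \bigr)
\]
```

A specified task can then be checked for realizability against the conjunction of such formulas, roughly as the abstract above describes.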

Speaker 2: Yixiao Wang, Cornell University

Title: “Space Agent” as a Design Partner – Studying and designing interactions between robot surfaces and human designers

Abstract: In this presentation, we first propose the concept of “space agents”: interactive and intelligent environments perceived by users as human agents. The concept is grounded in communication theories and functions as a bridge between human users and the built environment. To study human-human-like interactions and partnerships between users and their environments, we design and study interactions between space agents and human designers, which is my dissertation topic. More specifically, we would like to study and test the following hypotheses: 1) a space agent can form a (temporary) partnership with a human designer; 2) the space agent, together with the designer-space partnership, can improve the designer’s work performance, perceived spatial support, and work-life quality. We propose to design continuous robotic surfaces as space-making robots that give agency to a traditional working space. Scenarios are specified to demonstrate how these robotic surfaces could enable spatial reconfiguration as an effective partner, and previous work is presented to show the progress of my dissertation.

Speaker 3: Ryan O’Hern, Cornell University

Title: Automating Vineyard Yield Prediction

Abstract: Advances in mobile computing, sensors, and machine learning technology have been a boon to the fields of agricultural robotics and precision agriculture. In this talk, I will discuss preliminary results of an on-going collaboration between Cornell’s College of Engineering and the College of Agriculture and Life Sciences to advance viticultural practices with new robotics techniques. This talk will focus on our initial work to predict yield in vineyards using computer vision techniques.

Short Student Talks

11/6/18

Speaker 1: Nialah Wilson, Cornell University

Title: Design, Coordination, and Validation of Controllers for Decision Making and Planning in Large-Scale Distributed Systems

Abstract: A good swarm comprises cheap, simple robots running efficient algorithms, making it scalable with regard to cost, computation, and maintenance. Previous work has controlled large-scale distributed systems with centralized or decentralized control, but none examines what happens when modules are allowed to decide when to switch between control schemes, or explores the optimality and guarantees that can still be made in a hybrid control system. I propose using two robotic platforms, a flexible modular robot and a team of micro blimps, to study decision making and task-oriented behaviors in large-scale distributed systems by creating new hybrid control algorithms for an extended subsumption architecture.

Speaker 2: Wil Thomason, Cornell University

Title: A Flexible Sampling-Based Approach to Integrated Task and Motion Planning

Abstract: Integrated Task and Motion Planning (TAMP) seeks to combine tools from symbolic (task) planning and geometric (motion) planning to efficiently solve geometrically constrained, long-horizon planning problems. In this talk, I will present some of my work in progress on a new approach to solving the TAMP problem based on a real-valued “unsatisfaction” semantics for interpreting symbolic formulae. This semantics permits us to sample directly in regions where the preconditions for symbolic actions are satisfied. In conjunction with arbitrary task-level heuristics, this enables us to use off-the-shelf sampling-based motion planning to efficiently solve TAMP problems.
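
As a rough sketch of the sampling idea (hypothetical predicate and function names; the semantics in the talk is more general), “unsatisfaction” can be read as a nonnegative score that is zero exactly when a symbolic precondition holds, so configurations satisfying an action’s preconditions can be drawn by rejection sampling and handed to a motion planner:

```python
import random

def unsat_near(x, target, tol=0.1):
    """Real-valued 'unsatisfaction' of a toy predicate near(x, target):
    zero when satisfied, growing with the distance to satisfaction."""
    return max(0.0, abs(x - target) - tol)

def sample_satisfying(unsat, sampler, max_tries=10_000):
    """Rejection-sample configurations until one fully satisfies the
    predicate (unsatisfaction == 0); such samples can seed the
    motion-planning queries for the corresponding symbolic action."""
    for _ in range(max_tries):
        x = sampler()
        if unsat(x) == 0.0:
            return x
    raise RuntimeError("no satisfying sample found")

# Usage: sample a 1-D configuration where a pick action's hypothetical
# precondition near(x, 2.0) holds.
x0 = sample_satisfying(lambda x: unsat_near(x, target=2.0),
                       sampler=lambda: random.uniform(0.0, 5.0))
```

Because the score is real-valued rather than boolean, it can also grade near-misses, which is presumably where the task-level heuristics mentioned above can enter.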

Speaker 3: Ji Chen, Cornell University

Title: Verifiable Control of Robotic Swarms from High-level Specifications

Abstract: Automatically designing controllers for robotic swarm systems that guarantee safety, correctness, scalability, and flexibility in achieving high-level tasks remains a challenging problem. In this talk, I will present a control scheme that takes in specifications for high-level tasks and outputs continuous controllers that produce the desired collective behaviors. In particular, I will discuss the properties the swarm must have at the continuous level to ensure the correctness of the mapping from symbolic plans to real-world execution. In addition, I will compare centralized and decentralized approaches in terms of time efficiency, failure resilience, and computational complexity.

Coordination dynamics in human-robot teams

Tariq Iqbal, MIT

11/13/18

Abstract: As autonomous robots become more prominent across various domains, they will be expected to interact and work with people in teams. If a robot understands the underlying dynamics of a group, it can recognize, anticipate, and adapt to human motion and thus be a more effective teammate. In this talk, I will present algorithms for measuring the degree of coordination in groups, and approaches by which a robot can use this understanding to collaborate fluently with people. I will first describe a non-linear method for measuring group coordination that takes multiple types of discrete, task-level events into consideration. Building on this method, I will then present two anticipation algorithms for predicting the timing of future actions in teams. Finally, I will describe a fast online activity segmentation algorithm that enables fluent human-robot collaboration.

Bio: Tariq Iqbal is a postdoctoral associate in the Interactive Robotics Group at MIT. He received his Ph.D. from the University of California San Diego, where he was a member of the Contextual Robotics Institute and the Healthcare Robotics Lab. His research focuses on developing algorithms for robots to solve problems in complex human environments, by enabling them to perceive, anticipate, adapt, and collaborate with people.

Learning Adaptive Models for Robot Motion Planning and Human-Robot Interaction

Tom Howard, University of Rochester

11/20/18

Abstract: The efficiency and optimality of robot decision making are often dictated by the fidelity and complexity of models for how a robot can interact with its environment. It is common for researchers to engineer these models a priori to achieve particular levels of performance for specific tasks in a restricted set of environments and initial conditions. As we progress towards more intelligent systems that perform a wider range of objectives in a greater variety of domains, the models with which robots make decisions must adapt to achieve, if not exceed, engineered levels of performance. In this talk I will discuss progress towards model adaptation for robot intelligence, including recent efforts in natural language understanding for human-robot interaction and robot motion planning.

Biosketch: Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering at the University of Rochester. He also holds secondary appointments in the Department of Biomedical Engineering, the Department of Computer Science, and the Department of Neuroscience, and directs the University of Rochester’s Robotics and Artificial Intelligence Laboratory. Previously he held appointments as a research scientist and a postdoctoral associate at MIT’s Computer Science and Artificial Intelligence Laboratory in the Robust Robotics Group, a research technologist at the Jet Propulsion Laboratory in the Robotic Software Systems Group, and a lecturer in mechanical engineering at Caltech, and was a Goergen Institute for Data Science Center of Excellence Distinguished Researcher.

Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009, in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction, with particular focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments, with applications to robot motion planning and natural language understanding. Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the DARPA Urban Challenge. Howard has earned Best Paper Awards at RSS (2016) and IEEE SMC (2017) and two NASA Group Achievement Awards (2012, 2014), and was a finalist for the ICRA Best Manipulation Paper Award (2012). Howard’s research at the University of Rochester has been supported by the National Science Foundation, the Army Research Office, the Army Research Laboratory, the Department of Defense Congressionally Directed Medical Research Program, and the New York State Center of Excellence in Data Science.

Finite Set Statistics Based Multi-object Tracking: Recent Advances, Challenges, and Space Applications

Keith LeGrand, Sandia National Laboratories

11/27/18

Abstract: Multi-object tracking is the process of simultaneously estimating an unknown number of objects and their partially hidden states from unlabeled, noisy measurement data. Common applications of multi-object tracking algorithms include space situational awareness (SSA), missile defense, pedestrian tracking, and airborne surveillance. In recent years, a new branch of statistical calculus known as finite set statistics (FISST) has provided a formalism for solving such tracking problems and has led to a renaissance in tracking research. Today, researchers are applying FISST to formalize and solve problems not typically thought of as traditional tracking problems, such as robotic simultaneous localization and mapping (SLAM), obstacle localization for driverless vehicles, lunar descent and landing, and autonomous swarm control. This talk discusses the basic principles of multi-object tracking with a focus on FISST and highlights recent advances. Special challenges, such as probabilistic object appearance detection, extended object tracking, and distributed multi-sensor fusion, are presented. Finally, this talk will present the latest application of FISST theory to sensor planning, whereby multi-object information measures are used to optimize the performance of large dynamic sensor networks.
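
For concreteness, arguably the best-known FISST-derived algorithm is the probability hypothesis density (PHD) filter (this example is mine; the talk treats FISST more broadly). It propagates the first moment D(x) of the multi-object density, with a measurement update of the form:

```latex
% Standard PHD filter measurement update: p_D is the detection
% probability, g(z|x) the measurement likelihood, Z_k the measurement
% set, and kappa(z) the clutter intensity.
\[
  D_{k}(x) = \bigl(1 - p_D(x)\bigr)\, D_{k|k-1}(x)
  + \sum_{z \in Z_k}
    \frac{p_D(x)\, g(z \mid x)\, D_{k|k-1}(x)}
         {\kappa(z) + \int p_D(\xi)\, g(z \mid \xi)\, D_{k|k-1}(\xi)\, d\xi}
\]
```

Integrating D_k over a region gives the expected number of objects there, which is what makes such first-moment filters natural candidates for the information-driven sensor planning mentioned at the end of the abstract.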

Short Student Talks

12/4/18

Matt Law, Cornell University

Steven Ceron, Cornell University

Chris Mavrogiannis, Cornell University

The schedule is maintained by Corey Torres (ct635@cornell.edu) and Ross Knepper (rak@cs.cornell.edu). To be added to the mailing list, please follow the e-list instructions for joining a mailing list. The name of the mailing list is robotics-l. If you have any questions, please email ct635@cornell.edu.