Additive Manufacturing of Soft Robots

Shuo Li, Cornell University

8/28/18

This talk will present multidisciplinary work spanning composite materials and robotics. We have created new types of actuators, sensors, displays, and additive manufacturing techniques for soft robots and haptic interfaces. For example, we now use stretchable optical waveguides as sensors for high accuracy, repeatability, and material compatibility with soft actuators. For displaying information, we have created stretchable, elastomeric light-emitting displays as well as texture-morphing skins for soft robots. We have also created a new type of soft actuator based on molding of foams, developed new chemical routes for stereolithography printing of silicone- and hydrogel-elastomer-based soft robots, and implemented deep learning in stretchable membranes for interpreting touch. All of these technologies depend on the iterative and complex feedback between material and mechanical design. I will describe this process, the present state of the art, and future opportunities for science in the space of additive manufacturing of elastomeric robots.

Scaling up autonomous flight

Adam Bry, Skydio

9/5/18

NOTE: Special time and location: 5pm on Wednesday in Upson 106

Abstract: Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely-deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the research we’re doing at Skydio, along with the challenges involved in building a robust robotics software system that needs to work at scale.

Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio he helped start Project Wing at Google[x], where he worked on flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics. He was named to the MIT Tech Review 35 list in 2016.

Some Thoughts on Model Reduction for Robotics

Andy Ruina, Cornell University

9/11/18

These are unpublished thoughts, actually more questions than thoughts, and not all that well informed, so audience feedback is welcome, especially from people who know how to formulate machine learning problems (I already know, sort of, how to formulate MatSheen learning problems).

One posing of many robotics control problems is as a general problem in “motor control” (a biological term, I think). Assume one has a machine and the best model (something one can compute simulations with) one can actually get of the machine, its environment, its sensors, and its computation abilities. One also has some sense of the uncertainty in various aspects of these. The general motor-control problem is this: given a history of sensor readings and requested goals (commands), and all of the givens above, what computation should be done to determine the motor commands so as to best achieve the goals? “Best” means most accurately and most reliably, by whatever measures one chooses.

If one poses this as an optimization problem over the space of all controllers (all mappings from command and sensor histories to the set of motor commands), it is too big a problem, even if coarsely discretized. Hence, everyone applies all manner of assumed simplifications before attempting to make a controller. The question here is this: can one pose an optimization problem for the best simplification? Can one pose it in a way such that finding an approximate solution could be useful? In bipedal robots there are various classes of simplified models used by various people to attempt to control their robots. Might there be a rational way to choose between them, or to find better ones? As abstract as this all sounds, perhaps thinking about such things could help us make better walking-robot controllers.
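
To make concrete why the unsimplified problem is too big, here is a rough back-of-the-envelope count of the controller space for a coarsely discretized version of the problem. The specific numbers of sensors, quantization levels, history length, and motor commands below are made-up assumptions for illustration only, not from the talk.

    import math

    # Back-of-the-envelope count of deterministic controllers for a coarsely
    # discretized motor-control problem. All numbers are illustrative guesses.
    n_sensor_levels = 4    # each sensor reading quantized to 4 levels
    n_sensors = 3          # three sensors
    history_length = 5     # controller sees the last 5 time steps
    n_motor_commands = 8   # discrete motor commands to choose from

    # Distinct sensor histories the controller must map from:
    n_histories = (n_sensor_levels ** n_sensors) ** history_length  # ~1.1e9

    # Each history can be assigned any command, so the number of deterministic
    # controllers is n_motor_commands ** n_histories -- far too large to write
    # down, so we only report its order of magnitude.
    log10_controllers = n_histories * math.log10(n_motor_commands)

    print(f"distinct sensor histories: {n_histories:.2e}")
    print(f"candidate controllers: roughly 10^{log10_controllers:.0f}")

Even with this very coarse discretization, the search space is around 10 to the power of a billion, which is why some prior simplification is unavoidable.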

Big-data machine learning meets small-data robotics

Group Discussion

9/18/18

Abstract: Machine learning techniques have transformed many fields, including computer vision and natural language processing, where plentiful data can be cheaply and easily collected and curated.  Training data in robotics is expensive to collect and difficult to curate or annotate.  Furthermore, robotics cannot be formulated as simply a prediction problem in the way that vision and NLP can often be.  Robots must close the loop, meaning that we ask our learning techniques to consider the effect of possible decisions on future predictions.  Despite exciting progress in some relatively controlled (toy) domains, we still lack good approaches to adapting modern machine learning techniques to the robotics problem.  How can we overcome these hurdles?  Please come prepared to discuss.  Here are some potential discussion topics:

  1. Are robot farms like the one at Google a good approach?  Google has dozens of robots picking and placing blocks 24/7 to collect big training data in service of training traditional models.
  2. Since simulation allows the cheap and easy generation of big training data, many researchers are attempting domain transfer from simulation to the real robot.  Should we be attempting to make simulators photo-realistic with perfect physics?  Alternatively, should we instead vary simulator parameters to train a more general model?
  3. How can learned models adapt to unpredictable and unstructured environments such as people’s homes?  When you buy a Rosie the Robot, is it going to need to spend a week exploring the house, picking up everything, and tripping over the cat to train its models?
  4. If we train mobile robots to automatically explore and interact with the world in order to gather training data at relatively low cost, the data will be biased by choices made in building that autonomy.  Similar to other recent examples in which AI algorithms adopt human biases, what are the risks inherent in biased robot training data?
  5. What role does old-fashioned robotics play?  We have long learned to build state estimators, planners, and controllers by hand.  Given that these work pretty well, should we be building learning methods around them?  Or should they be thrown out and the problems solved from scratch with end-to-end deep learning methods?
  6. What is the connection between machine learning and hardware design?  Can a robot design co-evolve with its algorithms during training?  Doing so would require us to encode design specifications much more precisely than has been done in the past, but so much of design practice resists specification due to its complexity.  Specifically, can design be turned into a fully-differentiable neural network structure?

Please bring your own questions for the group to discuss, too!

Sensing + Interaction On and Around the Body

Cheng Zhang, Cornell University

9/25/18

Abstract: Wearables are a significant part of the new generation of computing. Compared with more traditional computers (e.g., laptops, smartphones), wearable devices are more readily available for immediate use but significantly smaller in size, creating new opportunities and challenges for on-body sensing and interaction. My holistic research approach (from problem understanding to invention to implementation and evaluation) investigates how to effectively exchange information between humans, their environment, and wearables. My Ph.D. thesis focuses on novel wearable input using on-body sensing through various high-level interaction gestures, low-level input events, and a redesign of the interaction. In this talk, I will highlight three projects. The first is a wearable ring that allows the user to input over 40 unistroke gestures (including text and numbers). It also shows how to overcome a limited training set size, a common challenge in applying machine learning techniques to real systems, through an understanding of the characteristics of the data and algorithms. The second project demonstrates how to combine a strong, yet incomplete, understanding of on-body signal propagation physics with machine learning to create novel yet practical sensing and interaction techniques. The third project is an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space through continuous high-resolution tracking of the finger’s absolute 3D position. It demonstrates how to solve a technical interaction challenge through a deep understanding of signal propagation. I will also share my vision of future opportunities for on-body sensing and interaction, especially in high-impact areas such as health, activity recognition, AR/VR, and more futuristic interaction paradigms between humans and the increasingly connected environment.

Bio: Cheng Zhang is an assistant professor in Information Science at Cornell University. He received his Ph.D. in Computer Science at the Georgia Institute of Technology, advised by Gregory Abowd (IC) and Omer Inan (ECE). His research focuses on enabling the seamless exchange of information among humans, computers, and the environment, with a particular emphasis on the interface between humans and wearable technology. His Ph.D. thesis presents 10 different novel input techniques for wearables, some leveraging commodity devices while others incorporate new hardware. His work blends an understanding of signal propagation on and around the body with, when necessary, appropriate machine learning techniques. His work has resulted in over a dozen publications in top-tier conferences and journals in the fields of Human-Computer Interaction and Ubiquitous Computing (including two best paper awards), as well as over six pending U.S. and international patents. His work has attracted the attention of various media outlets, including ScienceDaily, DigitalTrends, ZDNet, New Scientist, RT, TechRadar, Phys.org, Yahoo News, Business Insider, and MSN News. The work that leverages commodity devices has had significant commercial impact: his work on novel smartwatch interaction was licensed by the Canadian startup ProximityHCI to improve the smartwatch interaction experience.

Short Student Talks

10/2/18

Presenter 1: Alap Kshirsagar, Hoffman Research Group

Title: Monetary-Incentive Competition between Humans and Robots: Experimental Results

Abstract: In this talk, I will describe an experiment studying monetary-incentive competition between a human and a robot. In this first-of-its-kind experiment, participants (n=60) competed against an autonomous robot arm in ten competition rounds, carrying out a monotonous task to win monetary rewards. For each participant, we manipulated the robot’s performance and the reward in each round. We found a small discouragement effect, with human effort decreasing as robot performance increased, significant at the p < 0.005 level. We also found a positive effect of the robot’s performance on its perceived competence, a negative effect on the participants’ liking of the robot, and a negative effect on the participants’ self-competence, all at p < 0.0001.
These findings shed light on how people may exert work effort and perceive robotic competitors in a human-robot workforce, and could have implications for labor supply decisions and the design of compensation schemes in the workplace. I will also briefly comment on some experimental and statistical analysis practices that we adhered to in this study.

Presenter 2: Carlos Araújo de Aguiar, Green Research Group

Title: transFORM – A Cyber-Physical Environment Increasing Social Interaction and Place Attachment in Underused, Public Spaces

Abstract: The emergence of social networks and apps has reduced the importance of physical space as a locus for social interaction. In response, we introduce transFORM, a cyber-physical environment installed in under-used, outdoor, public spaces. transFORM embodies our understanding of how a responsive, cyber-physical architecture can augment social relationships and increase place attachment. In this paper we critically examine the social interaction problem in the context of our increasingly digital society, present our ambition, and introduce our prototype, which we will iteratively design and test. Cyber-physical interventions at large scale in public spaces are an inevitable future, and this paper serves to establish the fundamental terms of this frontier.

Task and Motion Planning: Algorithms, Implementation, and Evaluation

Dylan Shell, Texas A&M University

10/16/18

Everyday tasks combine discrete and geometric decision-making. The robotics, AI, and formal methods communities have concurrently explored different planning approaches, producing techniques with different capabilities and trade-offs. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In ongoing work, we are improving the scalability and extensibility of our task-motion planner and developing planner-independent evaluation metrics.
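
As background for the talk, here is a minimal sketch of one common way the discrete and geometric levels are coupled in task and motion planning: a task planner proposes a symbolic action sequence, a motion planner tries to refine each action geometrically, and geometric failures are fed back as new discrete constraints. This is a generic illustration rather than the speaker's algorithm; plan_task and plan_motion are hypothetical stand-in interfaces.

    # Generic task-and-motion planning loop (illustrative sketch only).
    def plan_task_and_motion(init, goal, plan_task, plan_motion, max_iters=100):
        constraints = set()
        for _ in range(max_iters):
            # 1. Discrete (combinatorial) level: propose a symbolic action sequence.
            task_plan = plan_task(init, goal, constraints)
            if task_plan is None:
                return None  # no discrete plan satisfies the accumulated constraints

            # 2. Geometric level: try to realize each action as a motion.
            trajectories = []
            for action in task_plan:
                traj = plan_motion(action)
                if traj is None:
                    # 3. Feedback: record the failed action so the task planner
                    #    avoids it (or its cause) on the next iteration.
                    constraints.add(("infeasible", action))
                    break
                trajectories.append(traj)
            else:
                return list(zip(task_plan, trajectories))  # fully refined plan
        return None

In this pattern the caller supplies the two planners, and the trade-offs mentioned in the abstract show up in how much geometric information is passed back to the discrete level at step 3.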

Short Student Talks

10/23/18

Speaker 1: Thais Campos de Almeida, Cornell University (Kress-Gazit Group)

Title: A novel approach to synthesize task-based designs of modular manipulators

Abstract: A remarkable advantage of modular robots is that they can be rearranged to perform several different tasks; however, selecting a new configuration for a specific task can be a complex problem. In this talk, I present a new approach for synthesizing provably correct designs and controls of robotic manipulators given a task description. In our framework, we use tools from the program synthesis community, which enable us not only to find a design for a feasible task but also to identify feasible and infeasible subtasks within the task and to search for multiple designs that satisfy the entire task. I will also briefly present a new formulation of the inverse kinematics problem used in this work, as well as compare our approach with state-of-the-art techniques used to solve this problem.

Speaker 2: Yuhan Hu, Cornell University (Hoffman Group)

Title: Using Skin Texture Change to Design Social Robots

Abstract: Robots designed for social interaction often express their internal and emotional states through nonverbal behavior. Most robots use facial expressions, gestures, locomotion, and tone of voice. In this talk, I will present a new expressive nonverbal channel for social robots in the form of texture-changing skin. This is inspired by biological systems, which frequently respond to external stimuli and display their internal states through skin texture change. I will present the design of the robot and some findings from a user-robot interaction experiment.

Speaker 3: Haron Abdel-Raziq, Cornell University (Petersen Group)

Title: Leveraging Honey Bees as Cyber Physical Systems

Abstract: Honey bees, nature’s premier agricultural pollinators, have proven capable of robust, complex, and versatile operation in unpredictable environments far beyond what is possible with state-of-the-art robotics. Beekeepers and farmers depend heavily on honey bees for successful crop yields, as evidenced by the $150B global pollination industry. This, coupled with the current interest in bio-inspired robotics, has prompted research on understanding honey bee swarms and their behavior both inside and outside of the hive. Prior attempts at monitoring bees have been limited to expensive, complicated, short-range, or obstruction-sensitive approaches. By combining traditional engineering methods with the honey bee’s extraordinary capabilities, we present a novel solution for monitoring long-range bee flights using a new class of easily manufactured sensor and a probabilistic mapping algorithm. Specifically, the goal is to equip bees with millimeter-scale ASIC technology “backpacks” that record key flight information, thus transforming a honey bee swarm into a vast cyber-physical system that can acquire data related to social insect behavior as well as bust and bloom over large areas. Foraging probability maps will then be developed by applying a simultaneous localization and mapping algorithm to the gathered data. The project is still in its initial phase; thus, we will discuss the motivation for the project and provide background on the various enabling technologies. We will then discuss a prototype system for gathering data on flight patterns prior to placing the actual technology on a bee. The data yielded from this work will benefit both the scientific community and beekeepers, with knowledge gains spanning low-power micro-scale devices and robotics to an improved understanding of how pollination occurs in different environments.

Short Student Talks

10/30/18

Speaker 1: Adam Pacheck, Cornell University

Title: Reactive Composition of Learned Abstractions

Abstract: We want robots to be able to perform high-level reactive tasks and to inform us if they are not able to do so given their current skills. In this talk, I present work in which we give a robot a set of skills, automatically generate an abstraction of the preconditions and effects of those skills, and automatically encode the skills in linear temporal logic. A task can then be specified for the robot, and we are able to reason about its feasibility and receive suggestions for repair from the robot if it is infeasible.
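
For readers unfamiliar with precondition/effect abstractions, here is a minimal sketch of what an abstracted skill might look like before being encoded in temporal logic. The Skill class, the crude state update, and the example propositions are hypothetical stand-ins for illustration, not the representation used in this work.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Skill:
        name: str
        preconditions: frozenset  # propositions that must hold before the skill runs
        effects: frozenset        # propositions the skill guarantees afterwards

        def applicable(self, state: frozenset) -> bool:
            return self.preconditions <= state

    pick = Skill(
        name="pick_block",
        preconditions=frozenset({"gripper_empty", "block_on_table"}),
        effects=frozenset({"holding_block"}),
    )

    state = frozenset({"gripper_empty", "block_on_table"})
    if pick.applicable(state):
        # Crude symbolic update: drop the consumed preconditions, add the effects.
        state = (state - pick.preconditions) | pick.effects
    print(state)  # frozenset({'holding_block'})

Once skills are abstracted into propositions like these, a task specification over the same propositions can be checked for feasibility against the available skills.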

Speaker 2: Yixiao Wang, Cornell University

Title: “Space Agent” as a Design Partner – Studying and designing interactions between robotic surfaces and human designers

Abstract: In this presentation, we first propose the concept of “Space Agents”: interactive and intelligent environments perceived by users as human agents. The concept is grounded in communication theories and serves as a bridge between human users and the built environment. To better study human-human-like interactions and partnerships between users and their environments, we have chosen to design and study interactions between “space agents” and human designers, which is my dissertation topic. More specifically, we would like to study and test the following hypotheses: 1) “space agents” can form a (temporary) partnership with human designers; 2) the “space agent”, together with the designer-space partnership, can improve designers’ work performance, perceived spatial support, and quality of work life. We propose to design continuous robotic surfaces as space-making robots that give agency to a traditional working space. Scenarios are specified to demonstrate how these robotic surfaces could enable spatial reconfiguration as an effective partner, and previous work is presented to show the progress of my dissertation.

Speaker 3: Ryan O’Hern, Cornell University

Title: Automating Vineyard Yield Prediction

Abstract: Advances in mobile computing, sensors, and machine learning technology have been a boon to the fields of agricultural robotics and precision agriculture. In this talk, I will discuss preliminary results of an on-going collaboration between Cornell’s College of Engineering and the College of Agriculture and Life Sciences to advance viticultural practices with new robotics techniques. This talk will focus on our initial work to predict yield in vineyards using computer vision techniques.