Robotics Seminar Spring 2018

Synthesis for Composable Robots: Guarantees and Feedback for Complex Behaviors

Hadas Kress-Gazit, Cornell University

1/24/18

Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees on the robot’s behavior.
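
As a toy illustration of what such a specification can look like (our example, not one from the talk), tasks in this line of work are commonly written in linear temporal logic (LTL). A patrol task with a safety requirement might be

\[ \varphi \;=\; \square\lozenge\,\mathit{room}_1 \;\wedge\; \square\lozenge\,\mathit{room}_2 \;\wedge\; \square\,\neg\,\mathit{collision}, \]

read as “visit room 1 and room 2 infinitely often, and never collide.” A synthesis algorithm either constructs a controller guaranteed to satisfy \(\varphi\) against all admissible environment behaviors, or reports that no such controller exists.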

This talk will describe the work done in the Verifiable Robotics Research Group towards realizing the synthesis vision, focusing on synthesis for composable robots – modular robots and swarms. Such robotic systems require new abstractions and synthesis techniques that address the overall system behavior in addition to the individual control of each component, i.e., each module or swarm member.

Explorations using Telepresence Robots in the Wild

Susan Fussell and Elijah Weber-Han

1/31/18

Mobile Robotic (Tele)Presence (MRP) systems are a promising technology for distance interaction because they provide both embodiment and mobility. In principle, MRPs have the potential to support a wide array of informal activities, such as walking across campus, attending a movie, or visiting a restaurant. However, realizing this potential has been challenging due to a host of issues, including internet connectivity, audio interference, limited mobility, and limited line of sight. We will describe some ongoing work looking at the benefits and challenges of using MRPs in the wild. The goal of this work is to develop a framework for understanding MRP use in informal social settings that captures key relationships among the physical requirements of the setting, the social norms of the setting, and the challenges posed for MRP pilots and people in the local environment. This framework will then inform the design of novel user interfaces and crowdsourcing techniques to help MRP pilots anticipate and overcome the challenges of specific informal social settings. Joint work: Susan Fussell and Elijah Weber-Han, Dept. of Communication and Dept. of Information Science, Cornell University.

What We Talk About When We Talk About Design

Panel, Cornell University

2/7/18

Panelists: Keith Evan Green, Kirstin Hagelskjaer Petersen, Guy Hoffman, Rob Shepherd, and François Guimbretière. As panelists, we will interact with each other and the audience on the topic of what design means for robotics and what robotics means for design. We would also like to briefly discuss the design Q exam.

Sailing in Space

Bo Fu, Cornell University

2/14/18

A solar sail is a type of spacecraft propelled by harvesting momentum from solar radiation. Unlike spacecraft propelled by traditional chemical rockets or by more advanced electric propulsion engines, solar sails do not consume fuel for propulsion. This allows for the possibility of return-type (round-trip) missions to other heavenly bodies, which would be difficult or nearly impossible with conventional propulsion methods, and it makes solar sails highly promising candidates for service as interplanetary cargo ships in future space missions. Solar sail research is broad and multidisciplinary. In this talk, an overview of solar sail technology is presented, including the history, the fundamentals of photon–sail interaction, and the state of the art of solar sailing. One specific area of solar sail research – attitude dynamics and control – is discussed in detail. Attitude control of large sails poses a challenge because most methods developed for solar sail attitude control require the controller mass to scale with the sail’s surface area. This is addressed by a newly proposed tip displacement method (TDM), in which moving the wing tips exploits the geometry of the sail film to generate the necessary control forces and torques. The TDM is described as it applies to a square solar sail consisting of four triangular wings. The mathematical relationship between the displacement of the wing tip and the control torque generated is fully developed under quasi-static conditions, assuming the wing takes on the shape of a right cylindrical shell. Results from further investigation that relaxes these modeling assumptions are presented. Future research directions in aerospace engineering spanning the fields of autonomy, sensing, controls, and modeling are discussed.
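
For orientation, a textbook idealization of the photon–sail interaction (background, not a result from the talk): a perfectly reflecting flat sail of area \(A\) experiences a solar radiation pressure force

\[ \vec{F} = 2\,P\,A\cos^2\!\alpha\;\hat{n}, \qquad P \approx 4.5\times 10^{-6}\,\mathrm{N/m^2}\ \text{at}\ 1\,\mathrm{AU}, \]

where \(\hat{n}\) is the sail normal and \(\alpha\) is the angle between \(\hat{n}\) and the sun line. Because usable thrust grows with sail area, the torques needed to reorient a large sail grow as well – the scaling that drives conventional attitude-control hardware mass up with surface area, and that the TDM sidesteps by reshaping the sail film itself.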

Science Fiction / Double Feature: Design Q Exam and Nonverbal Behaviors

Guy Hoffman, Cornell University

2/21/18

In this informal meeting of the robotics seminar, we will make good on our promise to discuss the structure of the new(ish) Design Q exam, including presentations by faculty on their expectations, war stories from students who took the Design Q, and Q&A (no pun intended). The second part of this double feature is a presentation and discussion of one of the classic papers at the foundation of HRI, Paul Ekman and Wallace Friesen’s 1969 article “The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding,” which underlies decades of research on body language and is a must-know for any researcher interested in HRI systems using gestures and facial expressions. For some decidedly non-light reading: http://www.communicationcache.com/uploads/1/0/8/8/10887248/the_repertoire_of_nonverbal_behavior_categories_origins__usage_and_coding.pdf

Autonomy, Embodiment, and Anthropomorphism: the Ethics of Robotics

Ross Knepper, Cornell University

2/28/18

A robot is an artificially intelligent machine that can sense, think, and act in the world. Its physical, embodied aspect sets a robot apart from other artificially intelligent systems, and it also profoundly affects the way that people interact with robots. Although a robot is an autonomous, engineered machine, its appearance and behavior can trigger anthropomorphic impulses in people who work with it. In many ways, robots occupy a niche somewhere between man and machine, which can lead people to form unhealthy emotional attitudes towards them. We can develop unidirectional emotional bonds with robots, and there are indications that robots occupy a moral status distinct from that of humans, leading us to treat them without the dignity afforded to a human being. Are emotional relationships with robots inevitable? How will they influence human behavior, given that robots do not reciprocate as humans would? This talk will examine issues such as cruelty to robots, sex robots, and robots used for sales, guard, or military duties. This talk was previously presented in spring 2017 as part of CS 4732: Social and Ethical Issues in AI.

How do people, and how should legged robots, avoid falling down?

Andy Ruina, Cornell University

3/7/18

What actuators does a person or legged robot have available to help prevent falls? Only ones that can move the relative horizontal position of the support point and the center of mass. What are these? Ankle torques, distortions of the upper body (bending at the hips, swinging the arms), stepping, and pushing off. Of these, by far the biggest control authority is in stepping and pushing off. And these can be well understood, and well approximated, by a point-mass model. Why? Because the same things that can’t help much, namely ankle torques and upper-body distortions, can’t hurt much either. Thus, we believe we can design a robust balance controller using foot placement and push-off and nothing else. And, reverse engineering, we think this explains most of what people do as well, at least when recovering from large disturbances. A balanced broomstick, a Segway, a bicycle, a walking robot, and a walking person all use the same basic idea.
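
One standard point-mass formalization (a common textbook model, not necessarily the talk’s exact one) is the linear inverted pendulum: a point mass at height \(\ell\) on a massless leg, with natural frequency \(\omega = \sqrt{g/\ell}\). Its “capture point” gives the step location that brings the mass to rest over the foot:

\[ x_{\mathrm{capture}} = x_{\mathrm{com}} + \frac{\dot{x}_{\mathrm{com}}}{\omega}. \]

Stepping beyond the capture point decelerates the walker, stepping short of it lets the mass keep moving forward, and push-off adds energy along the leg – which is why foot placement plus push-off carries nearly all of the control authority.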

Multi-Robot Mini Symposium

Kirstin H. Petersen, Cornell University

3/14/18

This Multi-Robot Mini Symposium will feature a series of brief talks by students and professors on recent multi-robot/swarm robotics research. The goal is to identify and inspire new ideas within the multi-robot community at Cornell. We are looking for speakers – please notify Kirstin Petersen (khp37) if you would like to give a pitch!

Robotics Debate/Discussion

3/21/18

This week we will host a debate/discussion on some topics in robotics. There is still time to contribute discussion questions here: https://docs.google.com/document/d/1_H3M-WIM6UN_TMsNQvgW9sMoYGVBuFXwi14fEor5tDM/edit?usp=sharing

Anything is fair game. The topics will be announced Wednesday morning. Good questions offer an opportunity for deep discussion, support a variety of viewpoints, and engage the broad robotics community. You may sign your name or leave your question anonymous. If you put your name, you are volunteering to give a few-sentence explanation of the question and its implications. -Ross

Maidbot: Designing and Building Rosie the Robot for the Hospitality Industry

Steve Supron, Maidbot

3/28/18

Steve Supron joined Maidbot as Manufacturing Lead over two years ago, during its incubation days at REV Ithaca. Micah Green, a former Cornellian and the founder and CEO of Maidbot, hired Steve to help bring his dream of Rosie the Robot to the hotel industry. Steve will present the company’s story as well as the challenges and considerations of robotics in a hospitality setting. He will review some of the unique design decisions and the technology and production choices the team has made along the way, from early prototypes to testable pilot units and on to the production design.

Learning How to Plan for Multi-Step Manipulation in Collaborative Robotics

Claudia Pérez D’Arpino, MIT

4/11/18

Abstract: The use of robots for complex manipulation tasks is currently challenged by the limited ability of robots to construct a rich representation of the activity, at both the motion and task levels, in ways that are both functional and apt for human-supervised execution. For instance, the operator of a remote robot would benefit from planning assistance, as opposed to the currently used method of joint-by-joint direct teleoperation. In manufacturing, robots are increasingly expected to execute manipulation tasks in a shared workspace with humans, which requires the robot to be able to predict human actions and plan around these predictions. In both cases, it is beneficial to deploy systems that are capable of learning skills from observed demonstrations, as this would enable the application of robotics by users without programming skills. However, previous work on learning from demonstrations is limited in the range of tasks that can be learned and generalized across different skills and different robots. In this talk, I present C-LEARN, a method of learning from demonstrations that supports the use of hard geometric constraints for planning multi-step functional manipulation tasks with multiple end effectors in quasi-static settings, and I show the advantages of using the method in a shared autonomy framework.

Speaker Bio: Claudia Pérez D’Arpino is a PhD candidate in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, advised by Prof. Julie A. Shah in the Interactive Robotics Group since 2012. She received her degrees in Electronics Engineering (2008) and Masters in Mechatronics (2010) from the Simon Bolivar University in Caracas, Venezuela, where she served as Assistant Professor in the Electronics and Circuits Department (2010-2012) with a focus on robotics. She participated in the DARPA Robotics Challenge with Team MIT (2012-2015). Her research at CSAIL combines machine learning and planning techniques to empower humans through the use of robotics and AI. Her PhD research centers on enabling robots to learn and create strategies for multi-step manipulation tasks by observing demonstrations, and on developing efficient methods for robots to employ these skills in collaboration with humans, whether in shared-workspace collaboration, such as assembly in manufacturing, or in remote robot control under shared autonomy, such as emergency response scenarios.

Web: http://people.csail.mit.edu/cdarpino/

Autonomous and Intelligent Robots in Unstructured Field Environments

Dr. Girish Chowdhary, UIUC; Co-Founder, EarthSense Inc.

4/18/18

Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will present some key theoretical and algorithmic advances in adaptive control, reinforcement learning, collaborative autonomy, and robot-based analytics that my group is working on to bring this future a lot nearer! I will discuss my group’s theoretical and practical work on the challenges of making autonomous, persistent, and collaborative field robotics a reality. I will discuss new algorithms that are laying the foundation for robust long-duration autonomy in harsh, changing, and uncertain environments, including deep learning for robot embedded vision, deep adversarial reinforcement learning for large state-action spaces, and transfer learning for deep reinforcement learning domains. I will also describe the new breed of lightweight, compact, and highly autonomous field robots that my group is creating and deploying in fields across the US. I will show several videos of the TerraSentia robot, which has been widely hailed by popular media – including the Chicago Tribune, the MIT Technology Review, Discovery Canada, and leading technology blogs – as opening the door to an exciting revolution in agricultural robotics. I will also discuss several technological and socio-economic challenges of making autonomous field-robotic applications with small robots a reality, including opportunities in high-throughput phenotyping, mechanical weeding, and robots for defense applications.

Speaker Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign and the director of the Distributed Autonomous Systems laboratory at UIUC. He holds a PhD (2010) from the Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013) and an assistant professor at Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award and the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial award. He is the co-founder of EarthSense Inc., working to make ultralight agricultural robotics a reality.

Design and Analysis of a Wearable Robotic Forearm

Vignesh Vatsal, Cornell University

4/25/18

Human augmentation devices that can enhance a user’s capabilities in terms of strength, power, safety, and task efficiency have been a persistent area of research. Historically, most efforts in this field have focused on prostheses and exoskeletons, which serve either to replace and rehabilitate lost capabilities or to enhance existing ones by adhering to human limb structures. More recently, we are witnessing devices that add capabilities beyond those found in nature, such as additional limbs and fingers. However, most of these devices have been designed for specific tasks and applications, at the far ends of a spectrum of power, size, and weight. Additionally, they are not considered to be agents for collaborative activities, with interaction modes typically involving teleoperation or demonstration-based programmable motions. We envision a more general-purpose wearable robot, on the scale of a human forearm, which enhances the reach of a user and acts as a truly collaborative autonomous agent. We aim to connect the fields of wearable robot design, control systems, and computational human-robot interaction (HRI). We report on an iterative process for user-centered design of the robot, followed by an analysis of its kinematics, dynamics, and biomechanics. The collaboration aspect involves collecting data from human-human teleoperation studies to build models for human intention recognition and robot behavior generation in joint human-robot tasks.

Where will our cars take us? The history, challenges, and potential impact of self-driving cars

Mark Campbell, Cornell University

5/2/18

Autonomous, self-driving cars have the potential to impact society in many ways, including taxi/bus service, shipping and delivery, and commuting to/from work. This talk will give an overview of the history, the technological work to date and its challenges, and the potential future impact of self-driving cars. A key challenge is the ability to perceive the environment from the car’s sensors, i.e., how can a car convert pixels from a camera into knowledge of a scene with cars, cyclists, and pedestrians? Perception in self-driving cars is particularly challenging given the fast viewpoint changes and close proximity of other objects. This perceived information is typically uncertain and constantly being updated, yet it must also be used for important decisions by the car, ranging from a simple lane change to stopping and queuing at a traffic light. Videos, examples, and insights will be given from Cornell’s autonomous car, as well as from key players such as Google/Waymo and car companies.

Can you teach me?: Leveraging and Managing Interaction to Enable Concept Grounding

Kalesha Bullard, Georgia Tech

5/9/18

Abstract: When a robotic agent is given a recipe for a task, it must perceptually ground each entity and concept within the recipe (e.g., items, locations) in order to perform the task. Assuming no prior knowledge, this is particularly challenging in newly situated or dynamic environments, where the robot has limited representative training data. This research examines the problem of enabling a social robotic agent to leverage interaction with a human partner in order to learn to efficiently ground task-relevant concepts in its situated environment. Our prior work has investigated learning from demonstration approaches for the acquisition of (1) training instances as examples of task-relevant concepts and (2) informative features for appropriately representing and discriminating between task-relevant concepts. In ongoing work, we examine the design of algorithms that enable the social robot learner to autonomously manage the interaction with its human partner, towards actively gathering both instance and feature information for learning the concept groundings. This is motivated by the way humans learn: by combining different types of information rather than simply focusing on one. In this talk, I present insights and findings from our initial work on learning from demonstration for grounding task-relevant concepts, and from ongoing work on interaction management to improve the learning of grounded concepts.

Bio: Kalesha Bullard is a PhD candidate in Computer Science at the Georgia Institute of Technology. Her thesis research lies at the intersection of human-robot interaction and machine learning: enabling a social robot to learn groundings for task-relevant concepts by leveraging and managing interaction with a human teacher. She is co-advised by Sonia Chernova, associate professor in the School of Interactive Computing at Georgia Tech, and Andrea L. Thomaz, associate professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. Before coming to Georgia Tech, Kalesha received her undergraduate degree in Mathematics Education from the University of Georgia and subsequently participated in the Teach For America national service corps as a high school mathematics teacher. Over the course of her research career, Kalesha has served as a program committee co-chair for three different workshops and symposia, completed research internships at IBM Watson and the NASA Jet Propulsion Laboratory, and been awarded an NSF Graduate Research Fellowship and a Google Generation Scholarship. Kalesha’s broader personal research vision is to enable social robots with the cognitive reasoning abilities and social intelligence necessary to engage in meaningful dialogue with their human partners over long time horizons. Towards that end, she is particularly interested in grounded and embodied dialogue whereby the agent can communicate autonomously, intuitively, and expressively.

The schedule is maintained by Corey Torres (ct635@cornell.edu) and Ross Knepper (rak@cs.cornell.edu). To be added to the mailing list, please follow the e-list instructions for joining a mailing list. The name of the mailing list is robotics-l. If you have any questions, please email ct635@cornell.edu.