Task and Motion Planning: Algorithms, Implementation, and Evaluation

Dylan Shell, Texas A&M University

10/16/18

Everyday tasks combine discrete and geometric decision-making. The robotics, AI, and formal methods communities have concurrently explored different planning approaches, producing techniques with different capabilities and trade-offs. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In ongoing work, we are improving the scalability and extensibility of our task-motion planner and developing planner-independent evaluation metrics.

Short Student Talks

10/23/18

Speaker 1: Thais Campos de Almeida, Cornell University (Kress-Gazit Group)

Title: A novel approach to synthesize task-based designs of modular manipulators

Abstract: A remarkable advantage of modular robots is that they can be rearranged to perform several different tasks; however, selecting a new configuration for a specific task can be a complex problem. In this talk, I present a new approach for synthesizing provably-correct designs and controls of robotic manipulators given a task description. In our framework, we use tools from the program synthesis community, which enable us not only to find a design for a feasible task but also to identify feasible and infeasible subtasks within the task and to search for multiple designs that satisfy the entire task. I will also briefly present a new formulation of the inverse kinematics problem used in this work, as well as a comparison of our approach with state-of-the-art techniques for solving this problem.

Speaker 2: Yuhan Hu, Cornell University (Hoffman Group)

Title: Using Skin Texture Change to Design Social Robots

Abstract: Robots designed for social interaction often express their internal and emotional states through nonverbal behavior, most commonly facial expressions, gestures, locomotion, and tone of voice. In this talk, I will present a new expressive nonverbal channel for social robots in the form of texture-changing skin. This is inspired by biological systems, which frequently respond to external stimuli and display their internal states through skin texture change. I will present the design of the robot and findings from a user-robot interaction experiment.

Speaker 3: Haron Abdel-Raziq, Cornell University (Petersen Group)

Title: Leveraging Honey Bees as Cyber Physical Systems

Abstract: Honey bees, nature’s premier agricultural pollinators, have proven capable of robust, complex, and versatile operation in unpredictable environments far beyond what is possible with state-of-the-art robotics. Beekeepers and farmers depend heavily on honey bees for successful crop yields, as evidenced by the $150B global pollination industry. This, coupled with the current interest in bio-inspired robotics, has prompted research on understanding honey bee swarms and their behavior both inside and outside of the hive. Prior attempts at monitoring bees have been limited to expensive, complicated, short-range, or obstruction-sensitive approaches. By combining traditional engineering methods with the honey bee’s extraordinary capabilities, we present a novel solution for monitoring long-range bee flights that uses a new class of easily manufactured sensors and a probabilistic mapping algorithm. Specifically, the goal is to equip bees with millimeter-scale ASIC “backpacks” that record key flight information, thus transforming a honey bee swarm into a vast cyber-physical system that can acquire data on social insect behavior as well as bust and bloom over large areas. Foraging probability maps will then be developed by applying a simultaneous localization and mapping algorithm to the gathered data. The project is still in its initial phase; we will therefore discuss the motivation for the project and provide background on the various enabling technologies. We will then discuss a prototype system for gathering data on flight patterns prior to placing the actual technology on a bee. The data yielded from this work will benefit both the scientific community and beekeepers, with knowledge gains spanning low-power micro-scale devices and robotics to an improved understanding of how pollination occurs in different environments.

Short Student Talks

10/30/18

Speaker 1: Adam Pacheck, Cornell University

Title: Reactive Composition of Learned Abstractions

Abstract: We want robots to be able to perform high-level reactive tasks and inform us if they are unable to do so given their current skills. In this talk, I present work in which we give a robot a set of skills, automatically generate an abstraction of the preconditions and effects of those skills, and automatically encode the skills in linear temporal logic. A task can then be specified for the robot, and we are able to reason about its feasibility and receive suggestions for repair from the robot if it is infeasible.

Speaker 2: Yixiao Wang, Cornell University

Title: “Space Agent” as a Design Partner – Studying and Designing Interactions between Robot Surfaces and Human Designers

Abstract: In this presentation, we first propose the concept of “Space Agents”: “interactive and intelligent environments perceived by users as human agents.” The concept is grounded in communication theories and functions as a bridge between human users and the built environment. To better study human-human-like interactions and partnerships between users and their environments, we design and study interactions between “space agents” and human designers, which is my dissertation topic. More specifically, we would like to test the following hypotheses: 1) “space agents” can form a (temporary) partnership with human designers; 2) a “space agent”, together with the “designer-space partnership”, can improve designers’ work performance, perceived spatial support, and work-life quality. We propose to design continuous robotic surfaces as space-making robots that give agency to a traditional working space. Scenarios are specified to demonstrate how these robotic surfaces could enable spatial reconfigurations as an effective partner, and previous work is presented to show the progress of my dissertation.

Speaker 3: Ryan O’Hern, Cornell University

Title: Automating Vineyard Yield Prediction

Abstract: Advances in mobile computing, sensors, and machine learning technology have been a boon to the fields of agricultural robotics and precision agriculture. In this talk, I will discuss preliminary results of an on-going collaboration between Cornell’s College of Engineering and the College of Agriculture and Life Sciences to advance viticultural practices with new robotics techniques. This talk will focus on our initial work to predict yield in vineyards using computer vision techniques.

Short Student Talks

11/6/18

Speaker 1: Nialah Wilson, Cornell University

Title: Design, Coordination, and Validation of Controllers for Decision Making and Planning in Large-Scale Distributed Systems

Abstract: A good swarm consists of cheap, simple robots running efficient algorithms, making it scalable with regard to cost, computation, and maintenance. Previous work has controlled large-scale distributed systems with either centralized or decentralized control, but none examines what happens when modules are allowed to decide when to switch between control schemes, or explores the optimality and guarantees that can still be made in a hybrid control system. I propose using two robotic platforms, a flexible modular robot and a team of micro blimps, to study decision making and task-oriented behaviors in large-scale distributed systems by creating new hybrid control algorithms for an extended subsumption architecture.

Speaker 2: Wil Thomason, Cornell University

Title: A Flexible Sampling-Based Approach to Integrated Task and Motion Planning

Abstract: Integrated Task and Motion Planning (TAMP) seeks to combine tools from symbolic (task) planning and geometric (motion) planning to efficiently solve geometrically constrained, long-horizon planning problems. In this talk, I will present work in progress on a new approach to the TAMP problem based on a real-valued “unsatisfaction” semantics for interpreting symbolic formulae. This semantics permits us to sample directly in regions where the preconditions for symbolic actions are satisfied. In conjunction with arbitrary task-level heuristics, this enables us to use off-the-shelf sampling-based motion planning to efficiently solve TAMP problems.

Speaker 3: Ji Chen, Cornell University

Title: Verifiable Control of Robotic Swarms from High-level Specifications

Abstract: Automatically designing controllers for robotic swarm systems that guarantee safety, correctness, scalability, and flexibility in achieving high-level tasks remains a challenging problem. In this talk, I will present a control scheme that takes in specifications for high-level tasks and outputs continuous controllers that produce the desired collective behaviors. In particular, I will discuss the properties the swarm must have at the continuous level to ensure the correctness of the mapping from symbolic plans to real-world execution. I will also compare centralized and decentralized approaches in terms of time efficiency, failure resilience, and computational complexity.

Coordination dynamics in human-robot teams

Tariq Iqbal, MIT

11/13/18

Abstract: As autonomous robots become more prominent across various domains, they will be expected to interact and work with people in teams. If a robot understands the underlying dynamics of a group, it can recognize, anticipate, and adapt to human motion to be a more effective teammate. In this talk, I will present algorithms for measuring the degree of coordination in groups and approaches that let a robot apply this understanding to collaborate fluently with people. I will first describe a non-linear method to measure group coordination that takes multiple types of discrete, task-level events into consideration. Building on this method, I will then present two anticipation algorithms for predicting the timing of future actions in teams. Finally, I will describe a fast online activity segmentation algorithm that enables fluent human-robot collaboration.

Bio: Tariq Iqbal is a postdoctoral associate in the Interactive Robotics Group at MIT. He received his Ph.D. from the University of California San Diego, where he was a member of the Contextual Robotics Institute and the Healthcare Robotics Lab. His research focuses on developing algorithms for robots to solve problems in complex human environments, by enabling them to perceive, anticipate, adapt, and collaborate with people.

Learning Adaptive Models for Robot Motion Planning and Human-Robot Interaction

Tom Howard, University of Rochester

11/20/18

Abstract: The efficiency and optimality of robot decision making is often dictated by the fidelity and complexity of models for how a robot can interact with its environment. It is common for researchers to engineer these models a priori to achieve particular levels of performance for specific tasks in a restricted set of environments and initial conditions. As we progress towards more intelligent systems that perform a wider range of objectives in a greater variety of domains, the models for how robots make decisions must adapt to achieve, if not exceed, engineered levels of performance. In this talk I will discuss progress towards model adaptation for robot intelligence, including recent efforts in natural language understanding for human-robot interaction and robot motion planning.
Biosketch: Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering at the University of Rochester. He also holds secondary appointments in the Department of Biomedical Engineering, Department of Computer Science, and Department of Neuroscience, and directs the University of Rochester’s Robotics and Artificial Intelligence Laboratory. Previously he held appointments as a research scientist and a postdoctoral associate at MIT’s Computer Science and Artificial Intelligence Laboratory in the Robust Robotics Group, a research technologist at the Jet Propulsion Laboratory in the Robotic Software Systems Group, and a lecturer in mechanical engineering at Caltech, and was a Goergen Institute for Data Science Center of Excellence Distinguished Researcher. Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009, in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction, with a particular focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments, with applications to robot motion planning and natural language understanding. Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the DARPA Urban Challenge. Howard has earned Best Paper Awards at RSS (2016) and IEEE SMC (2017), two NASA Group Achievement Awards (2012, 2014), and was a finalist for the ICRA Best Manipulation Paper Award (2012).
Howard’s research at the University of Rochester has been supported by National Science Foundation, Army Research Office, Army Research Laboratory, Department of Defense Congressionally Directed Medical Research Program, and the New York State Center of Excellence in Data Science.

Finite Set Statistics Based Multi-object Tracking: Recent Advances, Challenges, and Space Applications

Keith LeGrand, Sandia National Lab

11/27/18

Abstract: Multi-object tracking is the process of simultaneously estimating an unknown number of objects and their partially hidden states using unlabeled noisy measurement data. Common applications of multi-object tracking algorithms include space situational awareness (SSA), missile defense, pedestrian tracking, and airborne surveillance. In recent years, a new branch of statistical calculus known as finite set statistics (FISST) has provided a formalism for solving such tracking problems and has resulted in a renaissance in tracking research. Today, researchers are applying FISST to formalize and solve problems not typically thought of as traditional tracking problems, such as robotic simultaneous localization and mapping (SLAM), obstacle localization for driverless vehicles, lunar descent and landing, and autonomous swarm control. This talk discusses the basic principles of multi-object tracking with a focus on FISST and highlights recent advancements. Special challenges, such as probabilistic object appearance detection, extended object tracking, and distributed multi-sensor fusion are presented. Finally, this talk will present the latest application of FISST theory to sensor planning, whereby multi-object information measures are used to optimize the performance of large dynamic sensor networks.

Synthesis for Composable Robots: Guarantees and Feedback for Complex Behaviors

Hadas Kress-Gazit, Cornell University

1/24/18

Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior.

This talk will describe the work done in the verifiable robotics research group towards realizing the synthesis vision and will focus on synthesis for composable robots – modular robots and swarms. Such robotic systems require new abstractions and synthesis techniques that address the overall system behavior in addition to the individual control of each component, i.e. module or swarm member.

Explorations using Telepresence Robots in the Wild

Susan Fussell and Elijah Webber-Han

1/31/18

Mobile Robotic (Tele)Presence (MRP) systems are a promising technology for distance interaction because they provide both embodiment and mobility. In principle, MRPs have the potential to support a wide array of informal activities, such as walking across campus, attending a movie, or visiting a restaurant. However, realizing this potential has been challenging due to a host of issues, including internet connectivity, audio interference, limited mobility, and limited line of sight. We will describe some ongoing work looking at the benefits and challenges of using MRPs in the wild. The goal of this work is to develop a framework for understanding MRP use in informal social settings that captures key relationships among the physical requirements of the setting, the social norms of the setting, and the challenges posed for MRP pilots and people in the local environment. This framework will then inform the design of novel user interfaces and crowdsourcing techniques to help MRP pilots anticipate and overcome the challenges of specific informal social settings.

Joint work: Sue Fussell and Elijah Weber-Han, Dept. of Communication & Dept. of Info. Science, Cornell University