Design and Analysis of a Wearable Robotic Forearm

Vignesh Vatsal, Cornell University

4/25/18

Human augmentations that enhance a user’s capabilities in terms of strength, power, safety, and task efficiency have been a persistent area of research. Historically, most efforts in this field have focused on prostheses and exoskeletons, which serve either to replace and rehabilitate lost capabilities or to enhance existing ones by adhering to human limb structures. More recently, we are witnessing devices that add capabilities beyond those found in nature, such as additional limbs and fingers. However, most of these devices have been designed for specific tasks and applications, at far ends of a spectrum of power, size, and weight. Additionally, they are not considered to be agents for collaborative activities, with interaction modes typically involving teleoperation or demonstration-based programmable motions. We envision a more general-purpose wearable robot, on the scale of a human forearm, which extends the reach of a user and acts as a truly collaborative autonomous agent. We aim to connect the fields of wearable robot design, control systems, and computational human-robot interaction (HRI). We report on an iterative process for user-centered design of the robot, followed by an analysis of its kinematics, dynamics, and biomechanics. The collaboration aspect involves collecting data from human-human teleoperation studies to build models for human intention recognition and robot behavior generation in joint human-robot tasks.

Where will our cars take us? The history, challenges, and potential impact of self-driving cars

Mark Campbell, Cornell University

5/2/18

Autonomous, self-driving cars have the potential to impact society in many ways, including taxi/bus service, shipping and delivery, and commuting to/from work. This talk will give an overview of the history, technological work to date and challenges, and potential future impact of self-driving cars. A key challenge is the ability to perceive the environment from the car’s sensors, i.e., how a car can convert pixels from a camera into knowledge of a scene with cars, cyclists, and pedestrians. Perception in self-driving cars is particularly challenging, given the fast viewpoint changes and close proximity of other objects. This perceived information is typically uncertain and constantly being updated, yet it must also be used for important decisions by the car, ranging from a simple lane change to stopping and queuing at a traffic light. Videos, examples, and insights will be given of Cornell’s autonomous car, as well as key performers such as Google/Waymo and car companies.

Can you teach me?: Leveraging and Managing Interaction to Enable Concept Grounding

Kalesha Bullard, Georgia Tech

5/9/18

Abstract: When a robotic agent is given a recipe for a task, it must perceptually ground each entity and concept within the recipe (e.g., items, locations) in order to perform the task. Assuming no prior knowledge, this is particularly challenging in newly situated or dynamic environments, where the robot has limited representative training data. This research examines the problem of enabling a social robotic agent to leverage interaction with a human partner for learning to efficiently ground task-relevant concepts in its situated environment. Our prior work has investigated Learning from Demonstration approaches for the acquisition of (1) training instances as examples of task-relevant concepts and (2) informative features for appropriately representing and discriminating between task-relevant concepts. In ongoing work, we examine the design of algorithms that enable the social robot learner to autonomously manage the interaction with its human partner, towards actively gathering both instance and feature information for learning the concept groundings. This is motivated by the way humans learn: by combining multiple types of information rather than focusing on just one. In this talk, I present insights and findings from our initial work on learning from demonstration for grounding of task-relevant concepts, and ongoing work on interaction management to improve the learning of grounded concepts.
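As a rough illustration of the "actively gathering instance information" idea (a generic uncertainty-sampling sketch of our own, not the algorithm from the talk), a learner might ask the human teacher about whichever unlabeled instance its current concept model is least certain about:

    # Illustrative only: generic uncertainty sampling for choosing which
    # unlabeled instance to ask the human teacher to label next.
    import numpy as np

    def pick_query(class_probs: np.ndarray) -> int:
        """class_probs: (n_instances, n_concepts) predicted probabilities
        that each unlabeled instance grounds each task-relevant concept."""
        eps = 1e-12
        entropy = -np.sum(class_probs * np.log(class_probs + eps), axis=1)
        return int(np.argmax(entropy))  # index of the most uncertain instance

    probs = np.array([[0.90, 0.05, 0.05],
                      [0.40, 0.35, 0.25],
                      [0.70, 0.20, 0.10]])
    print(pick_query(probs))  # -> 1: the learner would query instance 1 next

The same idea extends to feature queries by scoring candidate features (for example, by expected information gain) instead of instances.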

Bio: Kalesha Bullard is a PhD candidate in Computer Science at Georgia Institute of Technology. Her thesis research lies at the intersection of Human-Robot Interaction and Machine Learning: enabling a social robot to learn groundings for task-relevant concepts, through leveraging and managing interaction with a human teacher. She is co-advised by Sonia Chernova, associate professor in the School of Interactive Computing at Georgia Tech, and Andrea L. Thomaz, associate professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. Before coming to Georgia Tech, Kalesha received her undergraduate degree in Mathematics Education from The University of Georgia and subsequently participated in the Teach For America national service corps as a high school mathematics teacher. Over the course of her research career, Kalesha has served as a Program Committee co-chair for three different workshops and symposia, completed research internships at IBM Watson and NASA Jet Propulsion Laboratory, and was awarded an NSF Graduate Research Fellowship and a Google Generation Scholarship. Kalesha’s broader personal research vision is to enable social robots with the cognitive reasoning abilities and social intelligence necessary to engage in meaningful dialogue with their human partners, over long-term time horizons. Towards that end, she is particularly interested in grounded and embodied dialogue whereby the agent can communicate autonomously, intuitively, and expressively.

Continuum Robot Trunks and Tentacles

Ian Walker, Clemson University

8/30/17

This talk will provide an overview of research in biologically inspired continuous-backbone “trunk and tentacle” continuum robots. In particular, robots inspired by octopus arms and plants (vines) will be discussed. Use of these robots for novel inspection and manipulation operations, targeted towards Aging in Place applications and Space-based operations, will be discussed.

Ian Walker received the B.Sc. in Mathematics from the University of Hull, England, in 1983, and the M.S. and Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin in 1985 and 1989, respectively. He is a Professor in the Department of Electrical and Computer Engineering at Clemson University. Professor Walker’s research focuses on the construction, modeling, and application of continuum robots.

The Additive Manufacturing of Robots

Rob Shepherd

9/12/17

The liquid-phase processing of polymers has been used over the last 100 years to produce items that vary in size and function from buoyant boat hulls to the living hinges on Tic Tac boxes. Recently, the fields of stretchable electronics and soft robotics have made significant progress in manufacturing approaches that add mechanical function as well as sensory feedback through the additive manufacturing of soft materials, including polymers and elastomers. This talk will be a survey of the work my research group, the Organic Robotics Laboratory, has contributed in this space. Much of the work will revolve around a 3D printing process called Projection Stereolithography. Our group leases a Carbon M1 3D printer that is available for other researchers to use, so attending this talk can also serve as an introduction to the process and its capabilities.

Synthesis for Robots: Guarantees and Feedback for Complex Behaviors

Hadas Kress-Gazit

9/19/17

Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior.

In this talk I will describe the work done in my group towards realizing the synthesis vision. I will discuss what it means to provide guarantees for physical robots, the types of feedback we can generate, the specification formalisms that we use, and our approach to synthesis for different robotic systems such as modular robots, soft robots, and multi-robot systems.
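For concreteness, specifications in this line of work are typically temporal-logic formulas over robot and environment propositions. A toy example of our own (not one from the talk) might be

    \[ \varphi \;=\; \square\,\lozenge\, \mathit{visit\_goal} \;\wedge\; \square\,(\mathit{person\_nearby} \rightarrow \neg\, \mathit{move}), \]

read as "visit the goal region infinitely often, and never move while a person is nearby." Synthesis then either produces a controller that is guaranteed to satisfy the formula under the modeled environment assumptions, or reports that the specification is unrealizable.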

Learning Competent Social Navigation

Ross Knepper

9/26/17

Competence in pedestrian social navigation requires a robot to exhibit many strengths, from perceiving the intentions of others through social signals to acting clearly to convey intent. It is made more difficult by the presence of many individual people with their own agendas as well as by the fact that all communication and coordination occurs implicitly through social signaling (chiefly gross body motion, eye gaze, and body language).  Furthermore, much of the information people glean about one another’s intentions is derived from the social context.  For example, office workers are more likely to be heading towards the cafeteria if it is lunchtime and towards the exit if it is time to go home.

In this talk, I explore some of the mathematical tools that allow us to tease apart the problem of social navigation into patterns that distill enough of the complexity to be learnable.  One of the key problems is to predict the future motions of others based on an observed “path prefix”.  Past results have shown that geometric prediction of pedestrian motion is nearly impossible to do accurately, precisely because people behave in a socially competent manner, reacting to other people in ways that achieve their joint goals.  Instead, I show how the trajectories of navigating pedestrians can be jointly predicted topologically.  This prediction can readily be learned in order to understand how people intend to avoid colliding with one another while achieving their goals.
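As a minimal sketch of what a topological description can look like (our simplification, not the method from the talk), a pairwise winding angle captures which side two pedestrians pass each other on, independent of the exact geometry of their paths:

    # Illustrative only: a pairwise "winding angle" as a crude topological
    # signature of how two pedestrian trajectories wrap around each other.
    import numpy as np

    def winding_angle(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
        """traj_a, traj_b: (T, 2) planar positions over time. Returns the
        total signed rotation (radians) of the vector from A to B."""
        rel = traj_b - traj_a                        # relative position
        ang = np.arctan2(rel[:, 1], rel[:, 0])       # its heading over time
        dang = np.diff(ang)
        dang = (dang + np.pi) % (2 * np.pi) - np.pi  # unwrap jumps across +/- pi
        return float(np.sum(dang))

    # Two pedestrians walking toward each other, each sidestepping:
    t = np.linspace(0.0, 1.0, 50)[:, None]
    a = np.hstack([4.0 * t, 0.3 * np.sin(np.pi * t)])          # A walks toward +x
    b = np.hstack([4.0 - 4.0 * t, -0.3 * np.sin(np.pi * t)])   # B walks toward -x
    print(winding_angle(a, b))  # about -pi; the sign encodes which side they pass on

Joint trajectories that wind the same way belong to the same topological class, which is the kind of label a predictor can learn from observed path prefixes.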

Why Don’t Bicycles Fall Over? (2:45 p.m. in Kimball B11)

Andy Ruina

10/3/17

When viewed from the rear, a bicycle looks like an inverted pendulum. Where the wheels touch the ground, it has an effective hinge point.  So, if a bike tips a little, gravity acting on the center of mass tends to tip it more. So, superficially, a bike is unstable. Yet in practice, moving bicycles don’t fall over.  Why not? This question has three variants.  First, how do bike riders control bikes to stay up? That is, what forces are invoked to keep the bike from falling? Second, how do people balance bikes when riding no hands?  And third, how does ghost riding work?  That is, at least some bikes won’t fall over when they are moving fast enough, even with no rider. How does that happen?
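A back-of-the-envelope version of that instability (ours, not from the talk): treating a non-steered bike as an inverted pendulum with its center of mass at height h, a small lean angle \varphi obeys approximately

    \[ \ddot{\varphi} \approx \frac{g}{h}\,\varphi, \]

so a small lean grows roughly like e^{t\sqrt{g/h}}, and the bike falls on a timescale of \sqrt{h/g}, a fraction of a second for h on the order of a meter. Something, whether rider action or the bike’s own dynamics, has to intervene through steering.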

The third question, about ghost riding (bicycle self-stability), being purely a question of mechanics, seems simplest. In the folklore, there are two dominant theories: the gyroscopic theory of Klein and Sommerfeld (~1911) and the castor (a.k.a. ‘trail’) theory of Jones (1970).  By means of examples, we now know that both were wrong. Gyroscopic terms and ‘positive’ castor are neither necessary nor sufficient, separately or in combination, for bicycle self-stability.

As for what riders do, hands on or hands off, the centrifugal theory of bicycle balance is pretty complete: if you are falling undesirably to the right, steer to the right.

Methods and Metrics in Human-Robot Interaction

Sue Fussell, Malte Jung, Guy Hoffman, Ross Knepper

10/24/17

Several faculty who study human-robot interaction present some of the best practices in HRI research.  HRI differs from many other subfields of robotics because it deals with humans.  We are limited both in our understanding of human psychology and in our ability to experiment on humans. To help audiences better appreciate HRI research presentations, this talk and discussion will cover popular approaches to conducting HRI research, including experimental methodology and useful metrics for evaluation of experiments.