Events

Formal Verification of End-to-End Deep Reinforcement Learning

Yasser Shoukry, University of California – Irvine

11/26/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical/mechanical systems; self-driving cars, drones, and smart cities are just a few examples of such systems. However, despite the explosion in the use of AI within a multitude of cyber-physical systems (CPS) domains, the safety and reliability of these AI-enabled CPS remain understudied. Mathematically based techniques for the specification, development, and verification of software and hardware systems, also known as formal methods, hold the promise of providing rigorous analysis of the reliability and safety of AI-enabled CPS. In this talk, I will discuss our work on applying formal methods to verify the safety of autonomous vehicles controlled by end-to-end machine learning models and to synthesize certifiable end-to-end neural network architectures.

Bio: Yasser Shoukry is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of California, Irvine, where he leads the Resilient Cyber-Physical Systems Lab. Before joining UCI, he spent two years as an assistant professor at the University of Maryland, College Park. He received his Ph.D. in Electrical Engineering from the University of California, Los Angeles in 2015. Between September 2015 and July 2017, Yasser was a joint postdoctoral researcher at UC Berkeley, UCLA, and UPenn. His current research focuses on the design and implementation of resilient cyber-physical systems and IoT. His work in this domain was recognized by the NSF CAREER Award, the Best Demo Award from the International Conference on Information Processing in Sensor Networks (IPSN) in 2017, the Best Paper Award from the International Conference on Cyber-Physical Systems (ICCPS) in 2016, and the Distinguished Dissertation Award from the UCLA EE department in 2016. In 2015, he led the UCLA/Caltech/CMU team to win the NSF Early Career Investigators (NSF-ECI) research challenge. His team represented the NSF-ECI in the NIST Global Cities Technology Challenge, an initiative designed to advance the deployment of Internet of Things (IoT) technologies within a smart city. He is also the recipient of the 2019 George Corcoran Memorial Award for his contributions to teaching and educational leadership in the field of CPS and IoT.

Can Science Fiction Help Real Robots?

Deanna Kocher and Ross Knepper

11/19/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: With creative license, science fiction envisions a future in which robots operate among humans. Stories like Blade Runner and Star Trek help us to imagine the ways in which robots could bring out both the best and the worst in humanity. As researchers and companies develop real robots, we notice that they operate on a different plane of assumptions than sci-fi robots. For instance, Isaac Asimov’s three laws of robotics tacitly assume an accurate human detector. In the real world, the three laws are useless to a robot that cannot reliably distinguish a person from a piece of furniture. Science fiction authors are not technologists, for the most part, but do they have something useful to contribute to us? We lead a group discussion about how the two separate planes of real robotics and fantasy robots can be made to intersect. We ask how we roboticists could utilize science fiction, which has a rich history of considering the ethical dilemmas that may one day arise from robots. And we ask what roboticists can do for science fiction authors and society at large to create a better understanding of robot capabilities and limitations.

Robotics Collaboration Speed Dating

11/5/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: Collaboration opportunities abound in robotics. Today, we will do an activity to speculatively explore connections between people’s research areas. We will pair you up with other people in the group for short amounts of time, and the goal of each encounter is to find a common theme, idea, or project that the two of you could work on together. If you don’t currently do research in robotics, you can shadow somebody else or make up a project on the spot. After we are done brainstorming projects in pairs, we will have an opportunity to share our best ideas with the group.

Transience, Replication, and the Paradox of Social Robotics

Guy Hoffman, Cornell University

10/29/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: As we continue to develop social robots designed for connectedness, we struggle with paradoxes related to authenticity, transience, and replication. In this talk, I will attempt to link together 15 years of experience designing social robots with 100-year-old texts on transience, replication, and the fear of dying. Can there be meaningful relationships with robots who do not suffer natural decay? What would our families look like if we all chose to buy identical robotic family members? Could hand-crafted robotics offer a relief from the mass-replication of the robot’s physical body, and thus also from the mass-customization of social experiences?

Robots, Language, and Human Environments: Approaches to Modeling Linguistic Human-Robot Interactions

Cynthia Matuszek, University of Maryland

10/22/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: As robots move from labs and factories into human-centric spaces, it becomes progressively harder to predetermine the environments, tasks, and human interactions they will need to be able to handle. Letting these robots learn from end users via natural language is an intuitive, versatile approach to handling novel situations robustly. Grounded language acquisition is concerned with learning the meaning of language as it applies to the physical world. At the same time, physically embodied agents offer a way to learn to understand natural language in the context of the world to which it refers. In this presentation, I will give an overview of our recent work on joint statistical models that learn the grounded semantics of natural language describing objects, spaces, and actions, and present some open problems.

Bio: Cynthia Matuszek is an assistant professor of computer science and electrical engineering at the University of Maryland, Baltimore County. Dr. Matuszek directs UMBC’s Interactive Robotics and Language lab, in which research is focused on robots’ acquisition of grounded language, including work in human-robot interfaces, natural language, machine learning, and collaborative robot learning. She has developed a number of algorithms and approaches that make it possible for robots to learn about their environment and how to follow instructions from interactions with non-technical end users. She received her Ph.D. in computer science and engineering from the University of Washington in 2014. Dr. Matuszek has published in artificial intelligence, robotics, and human-robot interaction venues, and was named in the most recent biennial IEEE “10 to watch in AI.”

Who Must Adapt to Whom?

A conversation with Keith Green, Cornell DEA/MAE, and Chajoong Kim, Cornell Visiting Professor

10/8/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: Robots are engineered products designed to perform a task. In industrial robot deployments, such as factory and warehouse settings, the robot’s environment is often engineered to simplify the robot’s task. As robots begin to be deployed in our daily lives around untrained human users, the question becomes: who must adapt to whom? In this panel, we discuss the following questions:

Anthropomorphism and Bio-inspiration: Shouldn’t robots have their own look and behavior, or must they reference familiar living things?

If robots can’t (yet) do all that we’d like them to do in a given physical environment (e.g. a hospital, a school, a workplace), might we change the physical environment to better fit the (current and near future) capacities of robots, or should we focus our efforts on advancing the robot to fit the human environments we already have?

Bio: Keith Evan Green is professor of design (DEA) and mechanical engineering (MAE) at Cornell University. He addresses problems and opportunities of an increasingly digital society by developing and evaluating interactive and adaptive physical environments and, more broadly, novel robotic manipulators. For Green, the built environment—furniture to metropolis—is a next frontier at the interface of robotics, design, and psychology.

Bio: Chajoong (“CJ”) Kim is associate professor at the Graduate School of Creative Design Engineering, Ulsan National Institute of Science and Technology, South Korea. Dr. Kim investigates how affective experiences in human-product interactions influence user well-being. During this sabbatical year at Cornell, Dr. Kim is studying “the functions of experiencing diverse positive emotions in de-accelerating hedonic adaptation and promoting subjective well-being in the context of consumer product use.”

Autonomous Matter – Bridging the Robotics and Material Composites Communities

Robert Shepherd, Cornell University

10/1/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: The robotics community has been more aggressively incorporating new materials for improved performance; some call this Robotic Materials. These systems are, essentially, smaller versions of existing robots that are sometimes used in swarms; an example of this concept is “Smart Dust.” Concurrently, the materials community has been applying its knowledge toward autonomic responses, which it calls Autonomous or Smart Materials. These materials have a feed-forward response to an applied stimulus; examples of such responses are self-healing and swelling with humidity. Our research group, the Organic Robotics Laboratory, works at the intersection of these two approaches. We are building toward the concept of Autonomous Matter, in which sensing, computation, actuation, and power are all part of a composite material. I will show and discuss examples of how we are moving toward the complexity and size scales at which such a system with these abilities can be considered a material.

Dynamics of Solid Liquid Composite Structures

Yoav Matia, Technion

9/24/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: In this work we analyze the transient dynamics of solid-fluid composite structures. This is an interdisciplinary research subject that lies at the border between theoretical fluid mechanics, soft robotics, and composite structures. We focus on an elastic beam embedded with fluid-filled cavities as a representative of common configurations. Beam deformation both creates, and is induced by, internal viscous flow, where changes in cavity volume are balanced by changes in axial flux. As a result, pressure gradients develop in the fluid in order to conserve mass, and stresses are induced at the solid-fluid interface; these, in turn, create local moments and normal forces, deforming the surrounding solid, and vice versa.

The results of the presented research can be applied to define the geometric and physical properties that solid-fluid structures require in order to achieve specific responses to external excitations, making it possible to leverage viscous-elastic dynamics to create novel soft actuators and solid-fluid composite materials with unconventional mechanical properties.

Keywords: soft-smart metamaterials, actuators, energy harvesting, soft matter, fluid dynamics, fluid-structure interaction, large deformation, two-way coupling, dynamic modeling

Mapping Natural Language Instructions and Observations to Robot Control

Yoav Artzi, Cornell Tech

9/10/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: The problem of mapping natural language instructions to robot actions has largely been studied using modular approaches, in which different modules are built or trained for different tasks and are then combined, through a complex integration process, into a complete system. This approach requires significant engineering effort and the design of complex symbolic representations, both to represent language meaning and to mediate the interaction between the different modules. We propose to trade off these challenges with representation learning, and learn to map directly from natural language instructions and raw sensory observations to robot control in a single model. We design an interpretable model that allows the user to visualize the robot’s plan, and a learning approach that uses simulation and demonstrations to learn without autonomous robot control. We apply our method to a quadcopter drone for the task of following navigation instructions.
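To make the contrast with modular pipelines concrete, the single-model idea can be caricatured as one function that maps an instruction and a raw observation straight to a control command, with no intermediate symbolic representation handed between modules. The sketch below is purely illustrative: the toy bag-of-words embedding, the random weights, and the two-dimensional control output (forward velocity, yaw rate) are all hypothetical stand-ins, not the model actually presented in this talk.

```python
import math
import random

EMBED_DIM = 8  # size of the toy instruction embedding

def embed_instruction(tokens):
    """Toy bag-of-words embedding: average of per-token pseudo-random vectors."""
    total = [0.0] * EMBED_DIM
    for tok in tokens:
        tok_rng = random.Random(tok)  # deterministic vector per token
        for i in range(EMBED_DIM):
            total[i] += tok_rng.uniform(-1.0, 1.0)
    return [x / len(tokens) for x in total]

def policy(tokens, observation, weights):
    """Map (instruction, observation) directly to a bounded control command."""
    features = embed_instruction(tokens) + list(observation)
    # One linear layer squashed by tanh stands in for the learned model.
    return [math.tanh(sum(w * f for w, f in zip(row, features)))
            for row in weights]

rng = random.Random(0)
obs = [rng.uniform(-1.0, 1.0) for _ in range(4)]  # stand-in for image features
W = [[rng.uniform(-1.0, 1.0) for _ in range(EMBED_DIM + 4)] for _ in range(2)]
v, yaw_rate = policy(["go", "to", "the", "tree"], obs, W)
```

In a trained system the embedding and weights would of course be learned end to end; the point here is only the interface, where language and raw observations go in and control comes out of a single model.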

This work was done by Valts Blukis, who is co-advised with Ross Knepper.

Bio: Yoav Artzi is an Assistant Professor in the Department of Computer Science and Cornell Tech at Cornell University. His research focuses on learning expressive models for natural language understanding, most recently in situated interactive scenarios. He received an NSF CAREER award, paper awards in EMNLP 2015, ACL 2017, and NAACL 2018, a Google Focused Research Award, and faculty awards from Google, Facebook, and Workday. Yoav holds a B.Sc. summa cum laude from Tel Aviv University and a Ph.D. from the University of Washington.

Human-guided Task Transfer for Interactive Robots

Tesca Fitzgerald, Georgia Tech

9/3/2019

Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00p.m.

Abstract: Adaptability is an essential skill in human cognition, enabling us to draw from our extensive, life-long experiences with various objects and tasks in order to address novel problems. To date, most robots do not have this kind of adaptability, and yet, as our expectations of robots’ interactive and assistive capacities grow, it will be increasingly important for them to adapt to unpredictable environments in a similar manner as humans do.

In this talk I will describe my approaches to the problem of task transfer, enabling a robot to transfer a known task model to address scenarios containing differences in the objects used, object configurations, and task constraints. The primary contribution of my work is a series of algorithms for deriving and modeling domain-specific task information from structured interaction with a human teacher. In doing so, this work enables the robot to leverage the teacher’s domain knowledge of the task (such as the contextual use of an object or tool) in order to address a range of tasks without requiring extensive exploration or re-training of the task. By enabling a robot to ask for help in addressing unfamiliar problems, my work contributes toward a future of adaptive, collaborative robots.

 

Bio: Tesca Fitzgerald is a Computer Science PhD candidate in the School of Interactive Computing at the Georgia Institute of Technology. In her PhD, she has been developing algorithms and knowledge representations for robots to learn, adapt, and reuse task knowledge through interaction with a human teacher. In doing so, she applies concepts of social learning and cognition to develop a robot that adapts to human environments.

Tesca is co-advised by Dr. Ashok Goel (director of the Design and Intelligence Lab) and Dr. Andrea Thomaz (director of the Socially Intelligent Machines Lab). Before joining Georgia Tech in 2013, she graduated from Portland State University with a B.Sc. in Computer Science. Tesca is an NSF Graduate Research Fellow (2014), Microsoft Graduate Women Scholar (2014), and IBM Ph.D. Fellow (2017).