Join the Robotics Listserv

To subscribe to event updates, send an email with “join” in the subject line.


Academic Paper Writing Clinic: Principles and Practice

Guy Hoffman


Location: Upson 106 Conference Room Next to the Lounge

Time: 2:45 p.m.

Abstract: How does one write a good academic paper? What makes some papers easier to read than others? Are there techniques that can easily be applied to improve your paper? How do you overcome “blank-page syndrome”? In this workshop, I will share some of the lessons I have learned over years of writing academic and non-academic texts. I will analyze published papers and, if there is interest, propose strategies for students’ existing papers-in-progress. Please send examples of your own writing that you would like us to discuss at least 48 hours before the seminar.

Teaser: Here are two of Donella Meadows’s tips for writing an op-ed column:

1. Be clear, not fancy: Use everyday language. Be specific, not abstract. Offer easily imaginable examples. Be sure your words make pictures in people’s heads. Be sure the pictures are the ones you intend.

2. Use most of your column for evidence: Tell stories, give statistics, show the impact of the problem or the solution on the real world. People can form their own conclusions if you give them the evidence. Don’t take much space for grand, abstract conclusions; let the reader form the conclusions.

Interactive Natural Language-based Person Search & Dynamics of Solid-Liquid Structures in Soft Robotics

Vikram Shree & Yoav Matia, Cornell University


Location: Upson 106 Conference Room Next to the Lounge

Time: 2:45 p.m.


Vikram Shree: Interactive Natural Language-based Person Search

Today, robots are equipped with rich sensors that enable many forms of interaction. Among these, visual and natural language information is of particular interest and is commonly viewed as the most user-friendly modality, since it mirrors how humans interact with each other. One problem that entails such multimodal data is finding a person of interest (POI) in a crowd based on a natural language description of their appearance. In this talk, I will present my work on designing algorithms that systematically retrieve descriptions of the POI from a user.

Yoav Matia: Dynamics of Solid-Liquid Structures in Soft Robotics

In this work, we analyze the transient dynamics of solid-fluid composite structures. This is an interdisciplinary research subject that lies at the border between theoretical fluid mechanics, soft robotics, and composite structures. We focus on an elastic beam embedded with fluid-filled cavities as a representative common configuration. Beam deformation both creates and is induced by internal viscous flow: changes in the cavities’ volume are balanced by a change in axial flux. As a result, pressure gradients develop in the fluid to conserve mass, and stresses are induced at the solid-fluid interface; these, in turn, create local moments and normal forces, deforming the surrounding solid, and vice versa.

The results of the presented research can be applied to define the geometric and physical properties required for solid-fluid structures to achieve specific responses to external excitations, thus making it possible to leverage viscous-elastic dynamics to create novel soft actuators and solid-fluid composite materials with unconventional mechanical properties.
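Schematically, the coupling described above takes the following form. This is a minimal sketch, not the speaker’s exact model: an Euler–Bernoulli beam forced by cavity pressure, mass conservation along the beam axis, and a lubrication-type viscous resistance law; the symbols $w$, $p$, $q$, $a$, $\phi$, $R$ are illustrative.

```latex
\begin{aligned}
EI\,\frac{\partial^4 w}{\partial x^4} &= \phi\, p
  && \text{(bending driven by cavity pressure)}\\
\frac{\partial a}{\partial t} + \frac{\partial q}{\partial x} &= 0
  && \text{(cavity volume change balanced by axial flux)}\\
\frac{\partial p}{\partial x} &= -R\, q
  && \text{(viscous, lubrication-type resistance law)}
\end{aligned}
```

Here $w$ is the beam deflection, $p$ the fluid pressure, $q$ the axial flux, $a$ the local cavity cross-section (itself coupled to $w$), and $\phi$, $R$ coefficients set by geometry and fluid viscosity.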

Enabling Local-To-Global Behaviors Through a Scalable, Deformable Collective


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: Modular self-reconfigurable robots are typically composed of homogeneous units that execute a set of programmed interactions with their neighbors based on a deterministic rule set. Some stochastic modular robotic systems instead take advantage of their physical design to arrive at a desired state, and these offer great potential in scalability. We propose combining an innovative hardware design with a control algorithm that allows intermodule interactions to retain some inherent randomness while still ensuring successful attachment through permanent magnets. We present the FOAMbots, a scalable, planar, modular robot composed of inflatable units capable of onboard processing, actuation, sensing, and communication. Each module contains a poroelastic foam that provides structural integrity while allowing fluid to flow through its volume. Pairs of permanent magnets along the modules’ perimeters enable attachment to adjacent modules, and low-cost, stretchable strain sensors allow modules to communicate and sense their surroundings. This presentation will introduce the hardware, characterize the modular robot’s mechanical and locomotion properties, and discuss the algorithms currently being implemented to achieve local-to-global changes in the collective’s mechanical properties.

Heterogeneous Team of Robots: Sampling in Aquatic Environments

Alberto Quattrini Li, Dartmouth College


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: How can robots effectively explore, monitor, and sample large-scale aquatic environments? This talk presents a recent interdisciplinary project funded by the National Science Foundation on monitoring cyanobacterial blooms in lakes with a team of heterogeneous robots. I will present a sample of solutions that involve the development and deployment of aquatic robotic systems for data collection. First, I show our efficient multirobot algorithms for a team of Autonomous Surface Vehicles governed by Dubins vehicle dynamics to cover large areas of interest. Field trials with a custom-modified motorized kayak are presented, providing insights for improvements.

Second, I discuss the use of a heterogeneous team of robots that exploits their complementary capabilities to reduce operational cost and increase mission time for environmental monitoring and water sampling. Using machine learning techniques to model the distribution of the observed phenomena, we developed adaptive exploration and sampling strategies that account for the reduction in uncertainty. Experimental results from several field experiments, together with some lessons learned, will be presented.

The talk will conclude with a discussion of open problems that still need to be addressed for a robust multirobot system to be useful for environmental applications, such as ensuring high-quality data and recovery mechanisms, as well as current work towards the long-term goal of a ubiquitous collaborative multiagent/multirobot system for accomplishing large-scale, real-world tasks.
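For readers unfamiliar with the Dubins vehicle dynamics mentioned in the abstract, here is a minimal simulation sketch. This is illustrative only, not the speaker’s code; the function name and parameters are assumptions. The Dubins model moves at constant forward speed with a bounded turning rate:

```python
import math

def dubins_step(x, y, theta, v, u, dt, u_max):
    """One Euler-integration step of the Dubins vehicle model:
    constant forward speed v, turn-rate command u bounded by u_max."""
    u = max(-u_max, min(u_max, u))    # saturate the turn-rate command
    x += v * math.cos(theta) * dt     # advance along current heading
    y += v * math.sin(theta) * dt
    theta += u * dt                   # update heading
    return x, y, theta

# Drive straight for 1 s at 1 m/s: the vehicle advances 1 m along x.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = dubins_step(x, y, th, v=1.0, u=0.0, dt=0.01, u_max=1.0)
```

Because the turn rate is bounded, the vehicle’s paths have a minimum turning radius of v/u_max, which is what makes coverage planning for such vehicles nontrivial.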

Bio: Alberto Quattrini Li is an assistant professor in the Department of Computer Science at Dartmouth College and co-director of the Dartmouth Reality and Robotics Lab. He was a postdoctoral fellow and research assistant professor in the Autonomous Field Robotics Laboratory (AFRL), led by Professor Ioannis Rekleitis, at the University of South Carolina from 2015 to 2018. During 2014, he was a visiting PhD student in the Robotic Sensor Networks Lab, directed by Professor Volkan Isler, at the Department of Computer Science and Engineering, University of Minnesota. He received an M.Sc. (2011) and a Ph.D. (2015) in Computer Science and Engineering from Politecnico di Milano, working with Professor Francesco Amigoni. His main research interests (currently funded by the National Science Foundation) include autonomous mobile robotics and active perception applied to the aquatic domain, dealing with problems that span from multirobot exploration and coverage to multisensor-fusion-based state estimation. He has worked with many ground and marine robots, including Autonomous Surface Vehicles and Autonomous Underwater Vehicles.


Robotics Day

Date: 12/10/2019

Time: 10:00 am – 4:00 pm

Location: Duffield Hall Atrium

Join us for a day of robotics celebration and competition, with interactive exhibits by our award-winning project and research teams. All are welcome!

Robotic Maze Runners
ECE 3400: Intelligent Physical Systems
10:00 am to 12:00 pm

Cube Crazy Robots
MAE 3780: Mechatronics
1:00 pm to 4:00 pm

Formal Verification of End-to-End Deep Reinforcement Learning

Yasser Shoukry, University of California – Irvine


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical/mechanical systems. Self-driving cars, drones, and smart cities are just a few examples of such systems. However, despite the explosion in the use of AI within a multitude of cyber-physical systems (CPS) domains, the safety and reliability of these AI-enabled CPS is still an understudied problem. Mathematically based techniques for the specification, development, and verification of software and hardware systems, also known as formal methods, hold the promise of providing appropriately rigorous analysis of the reliability and safety of AI-enabled CPS. In this talk, I will discuss our work on applying formal verification techniques to prove the safety of autonomous vehicles controlled by end-to-end machine learning models and to synthesize certifiable end-to-end neural network architectures.

Bio: Yasser Shoukry is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of California, Irvine, where he leads the Resilient Cyber-Physical Systems Lab. Before joining UCI, he spent two years as an assistant professor at the University of Maryland, College Park. He received his Ph.D. in Electrical Engineering from the University of California, Los Angeles in 2015. Between September 2015 and July 2017, Yasser was a joint postdoctoral researcher at UC Berkeley, UCLA, and UPenn. His current research focuses on the design and implementation of resilient cyber-physical systems and IoT. His work in this domain was recognized by the NSF CAREER Award, the Best Demo Award from the International Conference on Information Processing in Sensor Networks (IPSN) in 2017, the Best Paper Award from the International Conference on Cyber-Physical Systems (ICCPS) in 2016, and the Distinguished Dissertation Award from the UCLA EE department in 2016. In 2015, he led the UCLA/Caltech/CMU team to win the NSF Early Career Investigators (NSF-ECI) research challenge. His team represented the NSF-ECI in the NIST Global Cities Technology Challenge, an initiative designed to advance the deployment of Internet of Things (IoT) technologies within a smart city. He is also the recipient of the 2019 George Corcoran Memorial Award for his contributions to teaching and educational leadership in the field of CPS and IoT.

Can Science Fiction Help Real Robots?

Deanna Kocher and Ross Knepper


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: With creative license, science fiction envisions a future in which robots operate among humans. Stories like Blade Runner and Star Trek help us to imagine the ways in which robots could bring out both the best and the worst in humanity. As researchers and companies develop real robots, we notice that they operate on a different plane of assumptions than sci-fi robots. For instance, Isaac Asimov’s three laws of robotics tacitly assume an accurate human detector. In the real world, the three laws are useless to a robot that cannot reliably distinguish a person from a piece of furniture. Science fiction authors are not technologists, for the most part, but do they have something useful to contribute? We will lead a group discussion about how the two separate planes of real robots and fantasy robots can be made to intersect. We ask how we roboticists could utilize science fiction, which has a rich history of considering the ethical dilemmas that may one day arise from robots. And we ask what roboticists can do for science fiction authors and society at large to create a better understanding of robot capabilities and limitations.

Robotics Collaboration Speed Dating


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: Collaboration opportunities abound in robotics. Today, we will do an activity to speculatively explore connections between people’s research areas. We will pair you up with other people in the group for short amounts of time, and the goal of each encounter is to find a common theme, idea, or project that the two of you could work on together. If you don’t currently do research in robotics, you can shadow somebody else or make up a project on the spot. After we are done brainstorming projects in pairs, we will have an opportunity to share our best ideas with the group.

Transience, Replication, and the Paradox of Social Robotics

Guy Hoffman, Cornell University


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: As we continue to develop social robots designed for connectedness, we struggle with paradoxes related to authenticity, transience, and replication. In this talk, I will attempt to link together 15 years of experience designing social robots with 100-year-old texts on transience, replication, and the fear of dying. Can there be meaningful relationships with robots that do not suffer natural decay? What would our families look like if we all chose to buy identical robotic family members? Could hand-crafted robotics offer relief from the mass-replication of the robot’s physical body, and thus also from the mass-customization of social experiences?

Robots, Language, and Human Environments: Approaches to Modeling Linguistic Human-Robot Interactions

Cynthia Matuszek, University of Maryland


Location: Upson 106 Conference Room Next to the Lounge

Time: 3:00 p.m.

Abstract: As robots move from labs and factories into human-centric spaces, it becomes progressively harder to predetermine the environments, tasks, and human interactions they will need to handle. Letting these robots learn from end users via natural language is an intuitive, versatile approach to handling novel situations robustly. Grounded language acquisition is concerned with learning the meaning of language as it applies to the physical world. At the same time, physically embodied agents offer a way to learn to understand natural language in the context of the world to which it refers. In this presentation, I will give an overview of our recent work on joint statistical models that learn the grounded semantics of natural language describing objects, spaces, and actions, and I will present some open problems.

Bio: Cynthia Matuszek is an assistant professor of computer science and electrical engineering at the University of Maryland, Baltimore County. Dr. Matuszek directs UMBC’s Interactive Robotics and Language lab, in which research is focused on robots’ acquisition of grounded language, including work in human-robot interfaces, natural language, machine learning, and collaborative robot learning. She has developed a number of algorithms and approaches that make it possible for robots to learn about their environment and how to follow instructions from interactions with non-technical end users. She received her Ph.D. in computer science and engineering from the University of Washington in 2014. Dr. Matuszek has published in artificial intelligence, robotics, and human-robot interaction venues, and was named in the most recent IEEE biennial “10 to watch in AI” list.