Planning for Human-Robot Systems under Augmented Partial Observability

Date: 2/10/2022

Speaker: Shiqi Zhang


Location: 122 Gates Hall and Zoom

Time: 2:40 p.m.-3:30 p.m.

Abstract: The real world is partially observable to both people and robots. To estimate the world state, a robot needs a perception model to interpret sensory data. How does a robot plan its behaviors without such perception models? I will present our recent research on learning algorithms that help robots perceive and plan in stochastic worlds. With humans in the loop, robot planning becomes more difficult, because people and robots need to estimate not only the world state but also each other’s state. The second half of my talk will cover frameworks for human-robot communication and collaboration. I will share our work on leveraging AR/VR visualization strategies for transparent human-robot teaming toward effective collaboration.

About the Speaker: Dr. Shiqi Zhang is an Assistant Professor in the Department of Computer Science at the State University of New York (SUNY) at Binghamton. Before that, he was an Assistant Professor at Cleveland State University after working as a postdoc at UT Austin. He received his Ph.D. in Computer Science (2013) from Texas Tech University, and his M.S. and B.S. degrees from Harbin Institute of Technology. He is leading an NSF NRI project on knowledge-based robot decision making. He received the Best Robotics Paper Award from AAMAS in 2018, a Ford URP Award from 2019-2022, and an OPPO Faculty Research Award in 2020.

REGROUP: A Robot-Centric Group Detection and Tracking System

Date: 2/3/2022

Speaker: Angelique Taylor

Location: 122 Gates Hall

Time: 2:40 p.m.-3:30 p.m.

Abstract: To help the field of Human-Robot Interaction (HRI) transition from dyadic to group interaction with robots, new methods are needed for robots to sense and understand human team behavior. We introduce the Robot-Centric Group Detection and Tracking System (REGROUP), a new method that enables robots to detect and track groups of people from an ego-centric perspective using a crowd-aware, tracking-by-detection approach. Our system employs a novel technique that leverages person re-identification deep learning features to address the group data association problem. REGROUP is robust to real-world vision challenges such as occlusion, camera ego-motion, shadows, and varying illumination, and it runs in real time on real-world data. We show that REGROUP outperformed three group detection methods by up to 40% in terms of precision and up to 18% in terms of recall, and that REGROUP’s group tracking method outperformed three state-of-the-art methods by up to 66% in terms of tracking accuracy and 20% in terms of tracking precision. We plan to publicly release our system to support HRI teaming research and development. We hope this work will enable the development of robots that can more effectively locate and perceive their teammates, particularly in uncertain, unstructured environments.

About the Speaker: Angelique Taylor is a Visiting Research Scientist at Meta Reality Labs Research. She received her Ph.D. in Computer Science and Engineering at UC San Diego. Her research lies at the intersection of computer vision, robotics, and health informatics. She develops systems that enable robots to interact and work with groups of people in safety-critical environments. At Meta, Dr. Taylor is working on augmented/virtual reality (AR/VR) systems that deploy AI algorithms to help multiple people coordinate toward a common goal on collaborative tasks. She has received the NSF GRFP, the Microsoft Dissertation Award, the Google Anita Borg Memorial Fellowship, the Arthur J. Schmitt Presidential Fellowship, a GEM Fellowship, and an award from the National Center for Women in Information Technology (NCWIT). More information on her research can be found at angeliquemtaylor.com.

Designing Emotionally-Intelligent Agents that Move, Express, and Feel Like Us!

Speaker: Aniket Bera


Date: 1/27/2022

Location: 122 Gates Hall

Time: 2:40 p.m.-3:30 p.m.

Abstract:

Human behavior modeling is vital for many virtual/augmented reality systems as well as human-robot interactions. As the world increasingly uses digital and virtual platforms for everyday communication and interactions, there is a heightened need to create human-like virtual avatars and agents endowed with social and emotional intelligence. Interactions between humans and virtual agents are being used in different areas including VR, games and storytelling, computer-aided design, social robotics, and healthcare. At the same time, recent advances in robotic perception technologies are gradually enabling humans and human-like robots to co-exist, co-work, and share spaces in different environments. Knowing the perceived affective states and social-psychological constructs (such as behavior, emotions, psychology, motivations, and beliefs) of humans in such scenarios allows the agents (virtual humans or social robots) to make more informed decisions and interact in a socially intelligent manner.

In this talk, I will give an overview of our recent work on simulating intelligent, interactive, and immersive human-like agents that can also learn about, understand, and respond to the world around them using a combination of emotive gestures, gaits, and expressions. Finally, I will also talk about our many ongoing projects that use our AI-driven intelligent virtual agents (IVAs), including intelligent digital humans for urban simulation, crowd simulation, mental health and therapy applications, and social robotics.

About the Speaker:

Aniket Bera is an Assistant Research Professor in the Department of Computer Science at the University of Maryland. His core research interests are in Affective Computing, Computer Graphics (AR/VR, Augmented Intelligence, Multi-Agent Simulation), Autonomous Agents, Cognitive Modeling, and planning for intelligent characters. His work has won multiple best paper awards at top VR/AR conferences. He has previously worked in many research labs, including Disney Research and Intel Labs. Aniket’s research has been featured on CBS, WIRED, Forbes, FastCompany, etc. Find out more about Aniket here: https://cs.umd.edu/~ab.