Valts Blukis, Ilan Mandel, David Goedicke, Natalie Friedman, Travers Rhodes, PhD students, Cornell Tech
Natalie Friedman: Within human-robot interaction, I study how robots should be designed to move and behave in different contexts, based on perceptions of social appropriateness.
David Goedicke: Our lab works mostly on designing implicit interaction for devices that, to some degree, make their own decisions. I will show past work on autonomous vehicles (as large robots one sits in), describe how we use virtual reality to explore interaction, and introduce my new research direction: acoustically aware robots.
Valts Blukis: We study representation learning approaches for building robots that understand natural language in the context of raw visual and sensory observations. I’ll present our recent work on mapping raw images and navigation instructions to physical quadcopter control, using a neural network model trained on simulated and real data. The model reasons about the need to explore the environment and incorporates geometric computation to predict which locations in the environment to visit. Finally, I’ll talk about the challenges of scaling representation learning methods to reason about previously unseen objects and environments.
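One geometric computation that commonly appears in pipelines mapping camera images to locations in the environment is back-projecting an image pixel onto the ground plane using the camera's intrinsics and pose. The sketch below is a hypothetical illustration of that single step, not the model from the talk; the intrinsics, pose, and pixel values are invented for the example.

```python
import numpy as np

# Hypothetical camera intrinsics: focal lengths (200, 200) px,
# principal point at (160, 120).
K = np.array([[200.0,   0.0, 160.0],
              [  0.0, 200.0, 120.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical pose: camera 2 m above the ground, looking straight down.
# R maps camera-frame directions (x right, y down, z forward) to world frame.
cam_pos = np.array([0.0, 0.0, 2.0])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])

def pixel_to_ground(u, v):
    """Intersect the viewing ray through pixel (u, v) with the plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R @ ray_cam                             # ray in world frame
    t = -cam_pos[2] / ray_world[2]                      # solve z = 0
    return cam_pos + t * ray_world

# A pixel 100 px right of the principal point maps to a point 1 m
# to the camera's right on the ground: (1, 0, 0).
print(pixel_to_ground(260.0, 120.0))
```

A map-building model would apply this projection densely, scattering per-pixel image features into a top-down grid that downstream planning can consume.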
Travers Rhodes: Variational Auto-Encoders (VAEs) have been known to “ignore” some latent-variable dimensions in their representations. This talk explores known results for what those representations look like for simplified, linear VAEs and presents some directions for future work on more complicated VAEs.
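The "ignoring" behavior above can be made concrete through the per-dimension KL term of the VAE objective: a latent dimension whose approximate posterior collapses to the standard-normal prior contributes zero KL and carries no information about the input. The following minimal sketch (not code from the talk; the encoder outputs are invented) computes that per-dimension KL for a diagonal Gaussian posterior.

```python
import numpy as np

def kl_per_dim(mu, logvar):
    """Per-dimension KL( N(mu, exp(logvar)) || N(0, 1) ) in nats."""
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

# Hypothetical encoder outputs for one input:
# dim 0 is informative; dim 1 has collapsed to the prior (mu=0, var=1).
mu = np.array([1.5, 0.0])
logvar = np.array([-2.0, 0.0])

kl = kl_per_dim(mu, logvar)
print(kl)  # dim 1 contributes 0 nats: the VAE "ignores" that dimension
```

For linear VAEs this pruning can be analyzed in closed form, which is what makes them a useful simplified setting for studying the learned representations.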