Time: 10:00 am – 4:00 pm
Location: Duffield Hall Atrium
Join us for a day of robotics celebration and competition, with interactive exhibits by our award-winning project and research teams. All are welcome!
Robotic Maze Runners
ECE 3400: Intelligent Physical Systems
10:00 am to 12:00 pm
Cube Crazy Robots
MAE 3780: Mechatronics
1:00 pm to 4:00 pm
Malte Jung, Cornell University
Human-robot interaction research to date has been dominated by laboratory studies, largely examining a single human interacting with a single robot. This research has helped establish a fundamental understanding of human-robot interaction, how specific design choices affect interactions with robots, and how novel mechanisms or computational tools can be used to improve HRI. The predominant focus of this growing body of work, however, stands in stark contrast to the complex social contexts in which robots are increasingly placed. As a result, we have a limited understanding of how groups of people will interact with robots and how robots will affect how people interact with each other in groups. I will provide an overview of recent research performed at the Robots in Groups lab, which addresses questions about human-robot collaboration with groups of people.
Ross Knepper, Cornell University
Every engineer has a duty to be aware of the ethical implications of their work. How could their technologies be used or misused? What are their impacts on society? Robotics technologies have the potential to transform society, with impacts on the economy, social relationships, caregiving, jobs and work, safety, and many more. Please come prepared with questions and thoughts about the consequences of robots for society and the world.
Ross Knepper, Cornell University
What makes robotics robotics? What does it take to validate our robots? There is a natural tension between building real robots and benchmarking robot algorithms. Real robot tests do not easily scale to large numbers, meaning that it is hard to take advantage of tools and techniques used by other fields (deep learning, statistical power). On the other hand, simulations make many approximations and simplifying assumptions that mean algorithms designed in simulation may achieve lackluster performance on real robot hardware. A standard formula in robotics papers is “proof by video”, which reviewers may give more weight than it deserves. A new development in the robotics field is a growing interest from computer vision researchers. They bring with them a culture of standardized benchmarks, large scale datasets, and deep learning techniques. They deploy robots to navigate within and even interact with the real world, and they are developing new datasets and benchmarks for use in robotics problems. We will discuss how vision is changing robotics research as well as how robotics is changing vision research. How will results be evaluated in the future within these neighboring cultures?
Achim J. Lilienthal, Örebro University
Abstract: In this presentation I will first briefly introduce the Mobile Robot & Olfaction lab at Örebro University, Sweden. Grounded in a basic research interest in perception systems we study, as the name suggests, topics in mobile robotics and mobile robot olfaction (gas-sensitive robots). Following this division, I will present recent work addressing the creation and use of spatial Maps of Dynamics (MoDs), and long-term human motion prediction (mobile robotics) as well as recent developments in mobile robot olfaction, including bout-guided gas source localization and robot assisted gas tomography (mobile robot olfaction).
Bio: Prof. Achim J. Lilienthal is head of the Mobile Robotics and Olfaction Lab at Örebro University, Sweden. His research interests are mobile robot olfaction, rich 3D perception, navigation of autonomous transport robots, human robot interaction and mathematics education research. Achim Lilienthal obtained his Ph.D. in computer science from Tübingen University, Germany and his M.Sc. in Physics from the University of Konstanz, Germany. The Ph.D. thesis addresses gas distribution mapping and gas source localisation with mobile robots. The M.Sc. thesis is concerned with structure analysis of (C60)n+ clusters using gas phase ion chromatography.
Cindy Hsin-Liu Kao, Cornell University
Sensor device miniaturization and breakthroughs in novel materials are allowing for the placement of technology increasingly close to our physical bodies. However, unlike all other media, the human body is not simply another surface for enhancement – it is the substance of life, one that encompasses the complexity of individual and social identity. The human body is inseparable from the cultural, the social, and the political, yet technologies for placement on the body have often been developed separately from these considerations, with an emphasis on engineering breakthroughs. My work investigates opportunities for cultural interventions in the development of technologies that move beyond wearable clothing and accessories, and that are purposefully designed to be placed directly on the skin surface. How can we design emerging on-body interfaces to reflect existing cultural practices of decorating the body, with the intent to expand the agency of self-expression? I examine this question through the development of a series of research artifacts, and the contextualization of a design space for culturally sensitive design.
Body Craft is defined as existing cultural, historical, and fashion-driven practices and rituals associated with body decoration, ornamentation, and modification. As its name implies, Hybrid Body Craft (HBC) is an attempt to hybridize technology with body craft materials, form factors, and application rituals, with the intention of integrating existing cultural practices with new technological functions that have no prior relationships with the human body. With this grounding, HBC seeks to support the generation of future technologized customs in which technology is integrated into culturally meaningful body adornments.
In this talk, I will introduce six example artifacts which encompass the integration of technologies such as on-body robotics, flexible electronics, and bio-compatible materials into existing Body Craft customs. These artifacts contribute novel, culturally inspired form factors, and introduce unprecedented interaction modalities for on-body technologies. A design space is created in which to examine shifts in the communicative qualities of these Body Crafts due to the integration of technology, as well as new forms of self-expression that have emerged. The Hybrid Body Craft research practice contributes a culturally sensitive lens to the design of on-body technologies. The intention is to expand their lifetimes and purposes beyond mere novelty and into the realms of cultural customs and traditions.
Ross Knepper, Cornell University & Dylan Shell, Texas A&M University
Robotics research is at a tipping point. Until now, robotics has largely taken a frontier mentality, akin to American westward expansion in the nineteenth century. Manifest Destiny was the belief that Americans were destined to conquer the continent from coast to coast. Settlers packed up their belongings and moved westward to build a homestead and plant their own personal flag on 160 acres of land. Similarly, flag-planting has long characterized much of robotics research, with many systems built to showcase firsts in the field (e.g. the first flatpack furniture assembly robot). We have arrived at this tipping point because industry has decided to make major investments in robotics engineering. The past flag-planting papers of academia serve the needs of industry poorly.
In this seminar, we hold a group discussion about the future of robotics research. We will begin by discussing the following questions.
- Robotics research is traditionally splintered by the flag-planting mentality. There is little incentive to replicate results, and there are many small research problems. Does the end of the frontier necessitate that we work on fewer, bigger problems? How can we all do a better job of making our results relevant and applicable to one another?
- Industry is better than academia at engineering. Is a shift towards a more scientific approach to robotics research inevitable? What are the consequences of a scientific outlook?
- How can the work we do in academia be made more relevant to the needs of industry while continuing to do what universities do best? Is this what is fueling the specialized robotics degree programs that are currently proliferating?
- Turner’s Frontier Thesis postulates that the fundamental American character is a consequence of the frontier movement. Does the frontier movement in robotics portend a similarly distinct character for post-frontier robotics research? If so, what are the specific consequences?
Sunghwan (Sunny) Jung, Cornell University
Fluids are vital to all life forms, and organisms presumably adapted their behaviors or features in response to mechanical forces to achieve better performance. In this talk, I will discuss two biological problems in which animals exploit mechanics principles. First, we investigated how animals transport water into the mouth using an inertia-driven (lapping) mechanism. Dogs accelerate the tongue upward (up to 4 g) to create a larger water column while drinking, whereas cats use a tongue motion with relatively small acceleration. We found that, in order to maximize the water intake per lap, both cats and dogs close the jaw at the column break-up time governed by either unsteady or steady inertia. In the context of animal drinking, I will also talk about how bats drink water on the wing from a mechanics point of view, and illustrate ongoing design work to develop a bat-inspired vehicle to monitor water quality along rivers or lakes. Second, we studied how birds with long slender necks plunge-dive and survive the impact. Physical experiments of an elastic beam as a model for the neck attached to a skull-like cone revealed the limits for the stability of the neck during plunge-dive. We found that the small angle of the bird's beak and the strong muscles in the neck predominantly reduce the likelihood of injury during high-speed plunge-dive. As an example of bio-inspired engineering, we designed a bird-inspired projectile to explore underwater without propulsive mechanisms.
Wendy Ju, Cornell University
The advent of autonomous technologies is both exciting and alarming. Ironically, the success or failure of such systems will very much depend on how they interact with people: the need for strong communication, interface, and interaction design grows larger rather than smaller in the age of autonomy. In my Future Autonomy Research Lab, we are looking at how people will interact with robots and vehicles in the future. We are particularly concerned with joint performance of tasks, recognizing human states, and opportunities for learning and adaptation. By using simulation techniques, we can prototype and test interactions to understand how best to design our future.
Abstract: In order to easily and efficiently collaborate with humans, robots must learn to complete tasks specified using natural language. Natural language provides an intuitive interface for a layperson to interact with a robot without the person needing to program a robot, which might require expertise. Natural language instructions can easily specify goal conditions or provide guidance and constraints required to complete a task. Given a natural language command, a robot needs to ground the instruction to a plan that can be executed in the environment. This grounding can be challenging to perform, especially when we expect robots to generalize to novel natural language descriptions and novel task specifications while providing as little prior information as possible. In this talk, I will present a model for grounding instructions to plans. Furthermore, I will present two strategies under this model for language grounding and compare their effectiveness. We will explore the use of approaches using deep learning, semantic parsing, predicate logic, and linear temporal logic for task grounding and execution during the talk.
Bio: Nakul Gopalan is a graduate student in the H2R lab at Brown University. His interests are in the problems of language grounding for robotics, and abstractions within reinforcement learning and planning. He has an M.Sc. in Computer Science from Brown University (2015) and an M.Sc. in Information and Communication Engineering from T.U. Darmstadt (2013) in Germany. He completed a Bachelor of Engineering at R.V. College of Engineering in Bangalore, India (2008). His team recently won the Brown-Hyundai Visionary Challenge for their proposal to use mixed reality and social feedback for human-robot collaboration.