Designing Robot Collectives
In robot collectives, interactions between large numbers of individually simple robots lead to complex global behaviors. A great source of inspiration is social insects such as ants and bees, in which thousands of individuals coordinate to handle advanced tasks like food supply and nest construction in a remarkably scalable and error-tolerant manner. Likewise, robot swarms can address tasks beyond the reach of single robots, and promise more efficient parallel operation and greater robustness through redundancy. Key challenges involve both control and physical implementation. In this seminar I will discuss an approach to such systems relying on embodied intelligent robots designed as an integral part of their environment, where passive mechanical features replace the need for complicated sensors and control.
The majority of my talk will focus on a team of robots for autonomous construction of user-specified three-dimensional structures developed during my thesis. Additionally, I will give a brief overview of my research on the Namibian mound-building termites that inspired the robots. Finally, I will talk about my recent research thrust, enabling stand-alone centimeter-scale soft robots to eventually be used in swarm robotics as well. My work advances the aim of collective robotic systems that achieve human-specified goals, using biologically-inspired principles for robustness and scalability.
Boeing LIFT! Project – Cooperative Drones to Reduce the Cost of Vertical Flight
The LIFT! Project explored scaling of all-electric multi-rotor propulsion and methods of cooperation between multiple VTOL aircraft. Multi-rotor aircraft have become pervasive throughout the hobby industry, toy industry, and research institutions due in part to very powerful, inexpensive inertial measurement devices and the increased energy density of Li-Ion batteries driven by the mobile phone industry. This research demonstrates the viability of large multi-rotor systems with gross weights up to two orders of magnitude greater than that of a typical COTS hobby multi-rotor vehicle. Furthermore, this research demonstrates modularity and cooperation between large multi-rotor aircraft. In order to study large multi-rotor technologies, The Boeing Company decided to build a series of large-scale multi-rotor vehicles ranging from 6 lbs gross weight to over 525 lbs gross weight using low-cost COTS components. The LIFT! Project successfully demonstrated the effectiveness, modularity, and scalability of electric multi-rotor technologies while identifying a useful load fraction (useful load/gross weight) of 0.64 for large, electric, unmanned multi-rotor aircraft. This research offers new insights on the feasibility of large electric VTOL aircraft, empirical trends, potential markets, and future research necessary for the commercial viability of electric VTOL aircraft.
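The reported useful load fraction can be illustrated with a short worked example. The weights below are hypothetical stand-ins, not actual LIFT! vehicle data; only the 0.64 fraction and the 525 lb gross weight come from the abstract.

```python
# Hypothetical illustration of the useful load fraction metric
# (useful load / gross weight). Empty weight here is an example
# value chosen to match the reported 0.64 fraction.

def useful_load_fraction(gross_weight_lb, empty_weight_lb):
    """Useful load fraction = (gross - empty) / gross."""
    return (gross_weight_lb - empty_weight_lb) / gross_weight_lb

# At the reported fraction of 0.64, a 525 lb gross-weight vehicle
# would carry about 336 lb of useful load (payload + energy storage).
gross = 525.0
useful = 0.64 * gross
print(round(useful))  # 336
```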
Interacting with Robots through Touch: Materials as Affordances
Nonverbal behavior is at the core of human-robot interaction, but the subfield of social haptics is distinctly underrepresented. Most efforts focus on inserting sensors under a soft skin and using pattern recognition to infer a human’s tactile intention. There is virtually no work on robots touching humans in a social way, or robots responding to touch in a socially meaningful tactile manner. In that context, the advent of soft robotics and computational materials offers a new way for social robots to express internal and affective states. In the past, robots used mainly rotational and prismatic degrees of freedom for expression. How can new actuation technologies, such as shape-memory alloys, pneumatics, and “4D printed” structures contribute to new feedback methods and interaction paradigms? Also, how can we integrate traditional materials, such as wood, metals, and ceramics to support the robot’s expressive capacity?
Architectural Robotics: Ecosystems of Bits, Bytes, & Biology
Keith Evan Green looks toward a next frontier in robotics: interactive, partly intelligent, meticulously designed physical environments. Green calls this “Architectural robotics”: cyber-physical, built environments made interactive, intelligent, and adaptable by way of embedded robotics, or in William Mitchell’s words, “robots for living in.” In architectural robotics, computation—specifically robotics—is embedded in the very physical fabric of our everyday living environments at relatively large physical scales ranging from furniture to the metropolis. In this talk, Green examines how architectural robotic systems support and augment us at work, school, and home, as we roam, interconnect, and age.
On the Communicative Aspect of Human-Robot Joint Action
Actions performed in the context of a joint activity comprise two aspects: functional and communicative. The functional component achieves the goal of the action, whereas its communicative component, when present, expresses some information to the actor’s partners in the joint activity. The interpretation of such communication requires leveraging information that is public to all participants, known as common ground. Humans cannot help but infer some meaning – whether or not it was intended by the actor – and so robots must be cognizant of how their actions will be interpreted in context. In this talk, I address the questions of why and how robots can deliberately utilize this communicative channel on top of normal functional actions to work more effectively with human partners. We examine various human-robot interaction domains, including social navigation and collaborative assembly.
Part II: On the Communicative Aspect of Human-Robot Joint Action
Telepresence Robot Communication, Gender, and Metaphors of (Dis)ability
The principal use of telepresence robots is for human-human communication, where at least one person (the pilot) is remote via the robot and one or more persons (locals) are on site. It is important, therefore, to understand the nature of such communication – how locals perceive robot pilots as social actors, how robotic mediation affects interactional dynamics and norms, and how the experience of telepresence robot communication varies for different groups of users. In this talk, I address these questions through the dual lenses of gender and (dis)ability. I report the findings of a mock job interview study in which a male interviewer used a Beam+ telepresence robot, and the male and female interviewees were primed in advance with one of three metaphors about the interviewer – as a robot, as a (normal) human, or as a human with disabilities (cf. Takayama & Go, 2012). The interviews and responses to a post-study survey were analyzed for interaction with, and attitudes toward, the robot interviewer. Initial results reveal differences across genders and across metaphorical priming conditions, but whereas the former are largely consistent with previous findings on gender and technology, the metaphor findings were unexpected. I discuss evidence that telepresence robot communication privileges some groups of communicators over others and suggest possible interventions – including metaphor manipulation and modifications to the robots themselves – to establish a level playing field before telepresence robot communication practices, which are currently emergent, become fixed.
Biographical Note: Susan Herring is Professor of Information Science and Linguistics at Indiana University, Bloomington. Mobility-challenged herself, she both uses and researches telepresence robots. She is also a long-time researcher of digitally-mediated communication, Director of IU’s Center for Computer-Mediated Communication, a past editor of the Journal of Computer-Mediated Communication, and current editor of Language@Internet.
Verifiable Grounding and Execution of Natural Language Instructions
Robots are increasingly expected to work alongside humans. Natural language enables bi-directional interaction: for users to specify tasks and for the system to provide feedback. A significant challenge particular to this situated interaction is establishing the correspondence between language and its physical referents, such as actions and objects, a problem known as grounding. As both tasks and environments increase in complexity, the potential for ambiguity in interpreting the user’s statements increases.
I will present a grounding model which combines both physical and Linear Temporal Logic (LTL) representations to ground instructions. It allows for a formal specification to be generated from the grounding process. This specification is synthesized into a controller guaranteed to accomplish the task. Conversely, if synthesis is unsuccessful, it reveals problems such as logical inconsistencies in the specification or discrepancies between the specification and the physical environment.
In this latter case, the robot conveys these issues through natural language by referencing the physical environment and incorporates the user’s responses back into the specification. This robot-driven interaction enables the user to iteratively correct the grounded specification without requiring knowledge of the underlying representation.
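The grounding-to-specification pipeline described above can be sketched in miniature. The lexicon, propositions, and LTL fragments below are illustrative assumptions, not the actual model from the talk: each grounded phrase maps to a temporal-logic fragment, and the fragments are conjoined into a specification that a synthesis tool could then attempt to realize.

```python
# A minimal, hypothetical sketch of grounding natural language phrases
# into an LTL-style specification string. The lexicon entries are
# invented examples; a real system would learn these correspondences.

LEXICON = {
    "go to the kitchen": "F kitchen",   # "eventually reach the kitchen"
    "avoid the hallway": "G !hallway",  # "always stay out of the hallway"
}

def ground(instructions):
    """Map each phrase to an LTL fragment and conjoin the fragments."""
    fragments = []
    for phrase in instructions:
        if phrase not in LEXICON:
            # An ungroundable phrase is exactly the kind of failure the
            # robot would report back to the user in natural language.
            raise ValueError(f"cannot ground: {phrase!r}")
        fragments.append(LEXICON[phrase])
    return " & ".join(f"({f})" for f in fragments)

spec = ground(["go to the kitchen", "avoid the hallway"])
print(spec)  # (F kitchen) & (G !hallway)
```

In the approach described, a failed synthesis attempt on such a specification would trigger the dialogue loop: the robot explains the inconsistency or environmental mismatch and folds the user's reply back into the specification.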
Improving actuation efficiency through variable recruitment hydraulic McKibben muscles
Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations to these systems is their run time untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle composed of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units are adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
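The orderly recruitment logic with pressure-based thresholds can be sketched as follows. All numbers here (supply pressure, threshold, per-unit force) are hypothetical placeholders, not the parameters from the talk: the idea is simply that a new motor unit is engaged only when the pressure commanded of the currently active units exceeds a threshold, keeping each unit near its efficient high-pressure regime.

```python
# A simplified, hypothetical sketch of orderly recruitment with a
# pressure-based threshold. Units, pressures, and forces are in
# arbitrary illustrative units.

P_MAX = 100.0           # assumed supply pressure
RECRUIT_THRESH = 90.0   # recruit another unit above this demand pressure
N_UNITS = 3             # motor units in the bundle (as in the abstract)

def recruitment_state(force_demand):
    """Return (units_recruited, pressure_per_unit) for a force demand,
    assuming each fully pressurized unit supplies 1.0 unit of force and
    force scales linearly with pressure."""
    for n in range(1, N_UNITS + 1):
        pressure = force_demand / n * P_MAX
        if pressure <= RECRUIT_THRESH or n == N_UNITS:
            return n, min(pressure, P_MAX)

print(recruitment_state(0.5))  # (1, 50.0) -- one unit, moderate pressure
print(recruitment_state(1.5))  # (2, 75.0) -- second unit recruited
```

Operating fewer units at higher pressure, rather than throttling all units down, is what reduces the servovalve losses the talk targets.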
Decentralized Multi-Agent Navigation Planning with Braids
Navigating a human environment is a hard task for a robot, due to the lack of formal rules guiding traffic, the lack of explicit communication among agents and the unpredictability of human behavior. Despite the great progress in robotic navigation over the past few decades, robots still fail to navigate multi-agent human environments seamlessly. Most existing approaches focus on the problem of collision avoidance without explicitly modeling agents’ interactions. This often results in non-smooth robot behaviors that tend to confuse humans, who in turn react unpredictably to the robot motion and further complicate the robot’s decision making.
In this talk, I will present a novel planning framework that aims at reducing the emergence of such undesired oscillatory behaviors by leveraging the power of implicit communication through motion. Inspired by the collaborative nature of human navigation, our approach explicitly incorporates the concept of cooperation in the decision making stage, by reasoning over joint strategies of avoidance instead of treating others as separate moving obstacles. These joint strategies correspond to the spatiotemporal topologies of agents’ trajectories and are modeled using the topological formalism of braids. The braid representation is the basis for the design of an inference mechanism that associates agents’ past trajectories with future collective behaviors in a given context. This mechanism is used as a means of “social understanding” that allows agents to select actions that express compliance with the emerging joint strategy by compromising efficiency. Incorporating such a mechanism in the planning stage results in a rapid uncertainty decrease regarding the emerging joint strategy that facilitates all agents’ decision making. Simulated examples of multi-agent scenarios highlight the benefit of reasoning about joint strategies and appear promising for application in real-world environments.
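The braid representation of joint avoidance strategies can be illustrated with a toy example. The sketch below is an assumption-laden simplification of the formalism: agents' trajectories are projected onto one axis, and each swap in their left-to-right ordering over time emits a braid generator (an adjacent strand exchange). A real braid-based planner would also track crossing signs; this sketch records only which adjacent strands exchange.

```python
# Toy illustration of encoding trajectory topologies as braid words.
# Each snapshot lists agent x-positions indexed by agent id; every
# adjacent swap in the left-to-right ordering is one generator.

def crossings(order, new_order):
    """Bubble-sort `order` into `new_order`, emitting each adjacent
    swap as a 0-based braid generator index."""
    work, gens = list(order), []
    target = {agent: i for i, agent in enumerate(new_order)}
    changed = True
    while changed:
        changed = False
        for i in range(len(work) - 1):
            if target[work[i]] > target[work[i + 1]]:
                work[i], work[i + 1] = work[i + 1], work[i]
                gens.append(i)
                changed = True
    return gens

def braid_word(snapshots):
    """Accumulate generators across consecutive snapshots."""
    word = []
    order = sorted(range(len(snapshots[0])), key=lambda a: snapshots[0][a])
    for snap in snapshots[1:]:
        new_order = sorted(range(len(snap)), key=lambda a: snap[a])
        word += crossings(order, new_order)
        order = new_order
    return word

# Two agents passing each other: a single crossing.
print(braid_word([[0.0, 1.0], [1.0, 0.0]]))   # [0]
# Three agents fully reversing order: sigma_1 sigma_2 sigma_1.
print(braid_word([[0, 1, 2], [2, 1, 0]]))     # [0, 1, 0]
```

Distinct braid words distinguish joint strategies (e.g., who passes whom on which side over time), which is the level at which the planner described above reasons and infers the emerging collective behavior.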
The schedule is maintained by Corey Torres (email@example.com) and Ross Knepper (firstname.lastname@example.org). To be added to the mailing list, please follow the e-list instructions for joining a mailing list. The name of the mailing list is robotics-l. If you have any questions, please email email@example.com.