Robotics at Cornell spans several subareas, including perception, control, learning, planning, and human-robot interaction. We work with a wide variety of robots, including aerial robots, home and office assistant robots, autonomous cars, humanoids, evolutionary robots, legged robots, snake robots, and more.
works in the area of estimation theory and control for autonomous and semi-autonomous systems, with a special emphasis on robotics and aerospace applications. Specific research areas include sensor fusion and probabilistic perception, control and planning in the presence of uncertainty, human decision modeling, and human-robot interaction. His educational focus is on control systems, estimation, and space systems, with an emphasis on experiential learning projects such as student-built satellites and robotics competitions such as the DARPA Urban Challenge.
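To illustrate the sensor-fusion theme above, here is a generic sketch (not code from any Cornell lab) of a one-dimensional Kalman filter, the basic building block of probabilistic state estimation; the motion model, noise variances, and measurements are invented for the example:

```python
# Minimal 1-D Kalman filter: fuse noisy position measurements into an
# estimate with quantified uncertainty. Illustrative only; the noise
# values and measurements below are made up for the sketch.

def kalman_step(x, P, z, q=0.01, r=0.5):
    """One predict/update cycle for a constant-position model.

    x, P : prior estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (assumed)
    """
    # Predict: state is modeled as constant, so only uncertainty grows.
    P = P + q
    # Update: blend prediction and measurement by relative confidence.
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)      # corrected estimate
    P = (1 - K) * P          # reduced uncertainty
    return x, P

x, P = 0.0, 1.0              # initial guess with high uncertainty
for z in [1.2, 0.9, 1.1, 1.0, 1.05]:
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))
```

Each measurement pulls the estimate toward the sensor reading while shrinking the variance, which is why the final estimate sits near the measurements despite the poor initial guess.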
, director of the Laboratory for Intelligent Machine Systems (LIMS), works on robotics and autonomous systems, including bio-inspired robotics, energy-harvesting vehicles, and controlled biological systems. Topics of interest include ornithopter design, perching and bat-wing-inspired UAVs, and formation flight of UAVs. His interests lie in autonomous air and ground vehicles for defense and security applications.
He was the faculty advisor to the DARPA Grand Challenge Team
and co-advisor on the Cornell DARPA Urban Challenge Team. He is now the
faculty advisor for the Cornell Minesweeper Project.
, Dean and Vice Provost of the Cornell Tech Campus, works on computer vision and autonomous vehicles.
His research in computer vision ranges from theoretical algorithms (using techniques from computational geometry and graph algorithms) to the development of end-to-end systems that apply visual matching and recognition techniques.
His work on autonomous vehicles grows out of his role as co-leader of Team Cornell's entry in the DARPA Urban Challenge race. Their vehicle was one of only six finishers among the 11 finalists (drawn from 35 semi-finalists).
research addresses the algorithmic aspects of advanced factory automation, enabling autonomous robots to function safely and comprehensibly alongside humans in environments structured for people. Doing so would open up automation to many products that are still assembled by hand today. Leveraging insights from psychology, sociology, and linguistics, his approach lets robots interact with factory workers through natural language and gesture, so that they can be programmed without special training and can operate as peers with human workers.
research focuses on verifiable high-level robot control. She is interested in creating autonomous robots that perform user-defined high-level tasks in dynamic environments while providing guarantees of correctness for their behavior. Her research spans traditional ME, CS, and EE topics and includes hybrid systems, symbolic control, and connections among formal methods, logic, natural language, and robotics.
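As a toy illustration of the verification flavor of this work (the task, regions, and controller are invented, and the check is a crude stand-in for real formal-methods tooling), one can model a patrol controller as a finite-state machine and exhaustively verify that every reachable state can still return to the dock:

```python
# Toy "guarantee of correctness" check: a patrol controller given as a
# finite-state machine, plus a brute-force reachability test that, from
# every reachable state, the robot can still get back to the dock.
# Task and regions are invented for illustration.

CONTROLLER = {            # state -> {sensor reading -> next state}
    "dock":  {"clear": "roomA", "blocked": "dock"},
    "roomA": {"clear": "roomB", "blocked": "dock"},
    "roomB": {"clear": "dock",  "blocked": "dock"},
}

def reachable(start):
    """All states reachable from `start` under some sensor sequence."""
    seen, frontier = set(), [start]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        frontier.extend(CONTROLLER[s].values())
    return seen

# Property: from every reachable state, a path back to the dock exists.
ok = all("dock" in reachable(s) for s in reachable("dock"))
print(ok)
```

Real synthesis tools go further: rather than checking a hand-written controller, they construct one from a temporal-logic task specification so the property holds by construction.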
on biologically-inspired robotics. His work addresses ways in which systems
can autonomously adapt their behavior (control) and morphology (shape) to
new tasks and environments. Areas of interest include topics such as
evolutionary robotics, modular robotics, self-assembly and self-modeling.
Lipson directs the Cornell Computational Synthesis Lab (CCSL), which comprises graduate and undergraduate students from ME, CS, ECE, and many other fields.
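The evolutionary-robotics loop described above — evaluate a candidate, mutate it, keep it if it performs better — can be sketched minimally as a (1+1) evolutionary strategy; the "fitness" function here is an invented surrogate for a walking or locomotion simulation:

```python
import random

# Minimal (1+1) evolutionary loop in the spirit of evolutionary robotics:
# mutate a candidate controller's parameters, keep the child if fitness
# improves. The fitness function is a made-up stand-in for "distance
# walked in simulation", peaking at params = (1.0, -0.5).

random.seed(0)

def fitness(params):
    return -((params[0] - 1.0) ** 2 + (params[1] + 0.5) ** 2)

best = [0.0, 0.0]
for _ in range(500):
    child = [p + random.gauss(0, 0.1) for p in best]  # mutate
    if fitness(child) >= fitness(best):               # select
        best = child

print([round(p, 2) for p in best])  # converges near [1.0, -0.5]
```

In real evolutionary robotics the same loop runs over controller and morphology encodings, with a physics simulation (or the physical robot) supplying the fitness signal.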
goal is to understand the mechanics of walking. The main
route to this is through designing, simulating, building and
testing walking robots. A special focus is the energetics of walking: by almost any measure, people use much less energy than almost all walking robots. We want to make robots as energy-stingy as people, or even better, and we'd like the robots to be stable too.
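The energetics comparison above is usually quantified with the dimensionless cost of transport, energy per unit weight per unit distance. A small sketch with round, illustrative numbers (not measurements from any particular robot):

```python
# Dimensionless cost of transport: E / (m * g * d), the standard
# yardstick for locomotion efficiency. Values below are round,
# illustrative figures, not data from any specific robot or study.

def cost_of_transport(energy_J, mass_kg, distance_m, g=9.81):
    return energy_J / (mass_kg * g * distance_m)

# A ~70 kg human walking 1 km on roughly 0.2 MJ of metabolic energy:
human = cost_of_transport(2.0e5, 70.0, 1000.0)
# A hypothetical 70 kg robot burning 2 MJ over the same kilometer:
robot = cost_of_transport(2.0e6, 70.0, 1000.0)
print(round(human, 2), round(robot, 2))  # → 0.29 2.91
```

A tenfold gap of this kind is what "energy-stingy" walking research aims to close.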
research focuses on machine learning for robotics and perception.
His learning algorithms can enable a robot to estimate 3D structure from a
single image. This allows robots such as cars and helicopters to navigate
successfully in cluttered environments.
His home and office assistant robots perform tasks such as opening new
doors, grasping previously unseen objects, and unloading items from dishwashers.
He was a recipient of the best paper award at ICCV-3dRR, and his work has been
featured in Wired Magazine and The New York Times.
is focused on the material and mechanical design of soft machines. His work begins by identifying new material compositions for actuators, then mapping them onto a compatible mechanical system for motion. Due to the compliance of his soft material systems, the resulting machines are underactuated (e.g., they have more degrees of freedom than actuators) and much of their capabilities arise from this property. His interests lie in further developing the abilities of these soft machines.
works on using computer vision to automatically build accurate 3D models from large collections of 2D images. His research is particularly focused on problems of scale, efficiency, and robustness: how can we quickly and reliably reconstruct city-scale scenes from thousands or millions of photos? Noah also teaches an undergraduate course that uses robots and computer vision to introduce students to computer programming.
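The geometric core of building 3D models from 2D images is triangulation: recovering depth from a point's projections in two views. In the simplest case, a rectified stereo pair with focal length f and baseline b, depth follows directly from disparity (all numbers below are illustrative, not from any real reconstruction system):

```python
# Triangulation for a rectified stereo pair: two cameras side by side,
# focal length f (pixels) and baseline b (meters). Depth follows from
# the disparity between the point's pixel columns: Z = f * b / (xl - xr).
# Constants and pixel coordinates are invented for the example.

def triangulate_depth(x_left, x_right, f=800.0, b=0.5):
    """Depth in meters of a point seen at pixel columns x_left / x_right."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    return f * b / disparity

# A point 4 m away yields disparity f*b/Z = 800*0.5/4 = 100 pixels:
depth = triangulate_depth(450.0, 350.0)
print(depth)  # → 4.0
```

Large-scale reconstruction generalizes this idea to arbitrary camera poses and millions of matched points, with the camera poses themselves estimated jointly from the images.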
is interested in computational aspects of motion, with a focus on algorithms for physically based simulation. One research theme has been real-time algorithms for interactive physics simulation. The work has broad application to computer graphics and animation, haptic force-feedback rendering, medical robotics, and also physically based sound synthesis. He teaches courses on computational motion, computer graphics, and physically based simulation for computer animation.
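Real-time physics simulation of the kind described above typically steps Newton's equations with a cheap, stable integrator. A common choice is semi-implicit (symplectic) Euler, sketched here on a damped spring-mass system with illustrative constants:

```python
# Semi-implicit (symplectic) Euler, a workhorse of real-time physics:
# update velocity from the force first, then position from the *new*
# velocity. The system is a damped spring-mass; constants are illustrative.

def step(x, v, dt=0.01, k=10.0, c=0.5, m=1.0):
    a = (-k * x - c * v) / m   # spring + damping acceleration
    v = v + a * dt             # velocity first (semi-implicit)
    x = x + v * dt             # then position, using the new velocity
    return x, v

x, v = 1.0, 0.0                # released from rest at x = 1
for _ in range(5000):          # simulate 50 s of motion
    x, v = step(x, v)
print(abs(x) < 1e-3, abs(v) < 1e-3)  # → True True (oscillation decays)
```

Updating position with the already-updated velocity is what distinguishes this scheme from explicit Euler and keeps the simulated energy from blowing up over long interactive runs.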