What is the best way to validate robotics research?

Ross Knepper, Cornell University

2/5/19

What makes robotics robotics?  What does it take to validate our robots?  There is a natural tension between building real robots and benchmarking robot algorithms.  Real-robot tests do not easily scale to large numbers of trials, which makes it hard to take advantage of tools and techniques from other fields, such as deep learning and statistical power analysis.  On the other hand, simulations make many approximations and simplifying assumptions, so algorithms designed in simulation may achieve lackluster performance on real robot hardware.  A standard formula in robotics papers is “proof by video”, which reviewers may give more weight than it deserves.

A new development in the robotics field is growing interest from computer vision researchers, who bring with them a culture of standardized benchmarks, large-scale datasets, and deep learning techniques.  They deploy robots to navigate within, and even interact with, the real world, and they are developing new datasets and benchmarks for robotics problems.  We will discuss how vision is changing robotics research, as well as how robotics is changing vision research.  How will results be evaluated in the future across these neighboring cultures?