Abstract: Machine learning techniques have transformed many fields, including computer vision and natural language processing, where plentiful data can be cheaply and easily collected and curated. Training data in robotics is expensive to collect and difficult to curate or annotate. Furthermore, robotics cannot be formulated as simply a prediction problem the way vision and NLP often can be. Robots must close the loop: we ask our learning techniques to consider the effect of possible decisions on future predictions. Despite exciting progress in some relatively controlled (toy) domains, we still lack good approaches for adapting modern machine learning techniques to the robotics problem. How can we overcome these hurdles? Please come prepared to discuss. Here are some potential discussion topics:
- Are robot farms like the one at Google a good approach? Google has dozens of robots picking and placing blocks 24/7 to collect big training data in service of training traditional models.
- Since simulation allows the cheap and easy generation of big training data, many researchers are attempting sim-to-real transfer: training in simulation and deploying on the real robot. Should we be attempting to make simulators photo-realistic with perfect physics? Alternatively, should we instead vary simulator parameters to train a more general model?
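(The second alternative above is often called domain randomization: instead of matching reality exactly, the simulator's physical parameters are resampled every episode so training data covers a whole range of dynamics. A minimal sketch, assuming a hypothetical stand-in simulator rather than any real physics engine:)

```python
import random

# Hypothetical stand-in for a physics simulator whose parameters we control.
class ToySimulator:
    def __init__(self, friction, mass, light_level):
        self.friction = friction
        self.mass = mass
        self.light_level = light_level

    def run_episode(self):
        # A real simulator would roll out a policy and return a trajectory;
        # here we just record the parameters the episode was generated under.
        return {"friction": self.friction, "mass": self.mass,
                "light": self.light_level}

def sample_randomized_sim(rng):
    # Resample physical parameters for each episode instead of fixing one
    # carefully tuned "realistic" setting.
    return ToySimulator(friction=rng.uniform(0.2, 1.0),
                        mass=rng.uniform(0.5, 2.0),
                        light_level=rng.uniform(0.1, 1.0))

rng = random.Random(0)
episodes = [sample_randomized_sim(rng).run_episode() for _ in range(100)]
# The collected data spans the sampled parameter ranges rather than one setting.
print(len(episodes))
```

The hope is that a model trained across the randomized range treats the real world as just one more variation, without the simulator ever needing to be exactly right.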
- How can learned models adapt to unpredictable and unstructured environments such as people’s homes? When you buy a Rosie the Robot, is it going to need to spend a week exploring the house, picking up everything, and tripping over the cat to train its models?
- If we train mobile robots to automatically explore and interact with the world in order to gather training data at relatively low cost, the data will be biased by choices made in building that autonomy. Similar to other recent examples in which AI algorithms adopt human biases, what are the risks inherent in biased robot training data?
- What role does old-fashioned robotics play? We have long known how to build state estimators, planners, and controllers by hand. Given that these work pretty well, should we be building learning methods around them? Or should they be thrown out and the problems solved from scratch with end-to-end deep learning methods?
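(One middle ground between the two options above is residual learning: keep the hand-built controller as a prior and learn only a correction on top of it. A toy 1-D sketch, with an invented plant and a simple gradient-descent update, purely for illustration:)

```python
# Toy setpoint task: the real actuator gain differs from the gain the
# hand-built controller was designed for; a learned residual absorbs
# the mismatch instead of replacing the controller.
TRUE_GAIN = 1.3      # real plant: output = TRUE_GAIN * action
MODEL_GAIN = 1.0     # gain assumed by the hand-built controller
TARGET = 2.0

def hand_built_action(target):
    # Classical inverse-model controller, exact only if MODEL_GAIN is right.
    return target / MODEL_GAIN

residual = 0.0       # learned additive correction to the action
lr = 0.1
for _ in range(200):
    action = hand_built_action(TARGET) + residual
    error = TRUE_GAIN * action - TARGET
    # Gradient step on 0.5 * error**2 with respect to the residual.
    residual -= lr * error * TRUE_GAIN

final_output = TRUE_GAIN * (hand_built_action(TARGET) + residual)
print(abs(final_output - TARGET) < 1e-3)  # True: residual fixed the mismatch
```

The classical controller does most of the work from step one; the learned part only has to model what the hand-built model gets wrong, which is a much smaller learning problem than starting from scratch.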
- What is the connection between machine learning and hardware design? Can a robot design co-evolve with its algorithms during training? Doing so would require us to encode design specifications much more precisely than has been done in the past, yet much of design practice resists precise specification because of its complexity. Specifically, can design be turned into a fully-differentiable neural network structure?
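(A minimal sketch of what "differentiable co-design" could mean: treat one design parameter, here a link length, and one control parameter as variables in a single differentiable cost and descend on both jointly. The cost and its trade-off terms are invented for illustration; a real system would use an autodiff framework over a full robot model rather than hand-derived gradients.)

```python
# Toy co-design: jointly optimize a "design" parameter (link length L) and a
# "control" parameter (gain k) by gradient descent on one shared cost.
# The cost asks the product L * k to reach a target effort while penalizing
# both longer (heavier) links and larger (costlier) gains.
TARGET = 1.0
PENALTY = 0.1

def cost(L, k):
    return (L * k - TARGET) ** 2 + PENALTY * (L ** 2 + k ** 2)

L, k, lr = 1.0, 0.2, 0.05
initial_cost = cost(L, k)
for _ in range(2000):
    # Hand-derived gradients of the cost above; an autodiff framework
    # would compute these automatically for a full robot model.
    err = L * k - TARGET
    L, k = (L - lr * (2 * err * k + 2 * PENALTY * L),
            k - lr * (2 * err * L + 2 * PENALTY * k))

print(cost(L, k) < initial_cost)  # True: joint descent improved both together
```

Even in this toy, the interesting part is that the optimizer trades hardware against software: it shrinks the link and raises the gain (or vice versa) wherever the combined cost says that exchange is cheap, which is exactly the kind of coupling a fixed-design training pipeline cannot exploit.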
Please bring your own questions for the group to discuss, too!