Imitation Learning of Robot Policies by Combining Language, Vision, and Demonstration
Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Chitta Baral, Heni Ben Amor
NeurIPS Workshop on Robot Learning: Control and Interaction in the Real World, 2019
Workshop Paper
Content

In this work, we propose a novel end-to-end imitation learning approach that combines natural language, vision, and motion information to produce an abstract representation of a task, which is in turn used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretation.
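To make the architecture concrete, below is a minimal, hypothetical sketch (not the authors' actual implementation) of the idea described above: encoders for the language command and the visual observation are fused into an abstract task embedding, which conditions a low-level controller that is trained by behavioral cloning on demonstrated actions. All module sizes and names here are illustrative assumptions.

# Hypothetical sketch of a language+vision conditioned controller (PyTorch).
# The real system uses learned language/vision encoders; plain linear layers
# stand in for them here to keep the example self-contained.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, lang_dim=32, img_feat_dim=64, embed_dim=16,
                 state_dim=7, action_dim=7):
        super().__init__()
        # Per-modality encoders (placeholders for e.g. an RNN over word
        # embeddings and a CNN over camera images).
        self.lang_enc = nn.Linear(lang_dim, embed_dim)
        self.vision_enc = nn.Linear(img_feat_dim, embed_dim)
        # Fuse both modalities into an abstract task embedding.
        self.fuse = nn.Sequential(nn.Linear(2 * embed_dim, embed_dim), nn.Tanh())
        # Controller conditioned on the task embedding and the robot state.
        self.controller = nn.Sequential(
            nn.Linear(embed_dim + state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, lang, img_feat, state):
        z = self.fuse(torch.cat([self.lang_enc(lang),
                                 self.vision_enc(img_feat)], dim=-1))
        return self.controller(torch.cat([z, state], dim=-1))

# Behavioral cloning step: regress demonstrated actions from
# (language, vision, state) tuples. Random tensors stand in for a demo batch.
policy = MultimodalPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
lang, img = torch.randn(8, 32), torch.randn(8, 64)
state, demo_action = torch.randn(8, 7), torch.randn(8, 7)
loss = nn.functional.mse_loss(policy(lang, img, state), demo_action)
opt.zero_grad(); loss.backward(); opt.step()

At run-time, the same task embedding can be recomputed for a new verbal command and scene, so the synthesized controller adapts without retraining the perception modules.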


Citation
@misc{1911.11744,
  author = {Simon Stepputtis and Joseph Campbell and Mariano Phielipp and Chitta Baral and Heni Ben Amor},
  title  = {Imitation Learning of Robot Policies by Combining Language, Vision, and Demonstration},
  year   = {2019},
  eprint = {arXiv:1911.11744},
}