One-Shot Learning of Human–Robot Handovers with Triadic Interaction Meshes
David Vogt, Simon Stepputtis, Bernhard Jung, Heni Ben Amor
Autonomous Robots, 2018
Journal Paper
Content

We propose an imitation learning methodology that allows robots to seamlessly retrieve objects from and pass objects to human users. Instead of hand-coding interaction parameters, we extract relevant information, such as joint correlations and spatial relationships, from a single task demonstration performed by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstrations can lead to more natural and intuitive interactions with the robot.
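To give a rough intuition for the interaction-mesh idea, the sketch below links all keypoints (both partners and the manipulated object) to their nearest neighbors and encodes each point relative to its neighborhood centroid. This is only a minimal, hypothetical illustration in NumPy, with an assumed k-nearest-neighbor connectivity rather than the mesh construction described in the paper; function and parameter names are invented for this example.

```python
import numpy as np

def interaction_mesh_laplacians(points, k=3):
    """Illustrative sketch: connect every keypoint (human joints,
    robot joints, and the object) to its k nearest neighbors and
    return each point's offset from its neighborhood centroid.
    Preserving such relative coordinates while moving the keypoints
    is the intuition behind adapting a demonstration to new
    situations.  NOTE: hypothetical simplification, not the
    mesh-generation method used in the paper."""
    # Pairwise Euclidean distances between all keypoints
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]    # indices of the k nearest neighbors
    # "Laplacian coordinate": point minus centroid of its neighbors
    return points - points[nbrs].mean(axis=1)
```

For a regular tetrahedron of keypoints, each Laplacian coordinate points from the centroid of the other three vertices toward the vertex itself, i.e. it encodes the local spatial relationship that the adaptation step tries to preserve.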


Citation
@article{Vogt2018,
  doi       = {10.1007/s10514-018-9699-4},
  url       = {https://doi.org/10.1007/s10514-018-9699-4},
  year      = {2018},
  month     = feb,
  publisher = {Springer Science and Business Media {LLC}},
  volume    = {42},
  number    = {5},
  pages     = {1053--1065},
  author    = {David Vogt and Simon Stepputtis and Bernhard Jung and Heni Ben Amor},
  title     = {One-shot learning of human{\textendash}robot handovers with triadic interaction meshes},
  journal   = {Autonomous Robots}
}