We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information, such as joint correlations and spatial relationships, from a single task demonstration performed by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.