Simon Stepputtis
PhD Student - Arizona State University


Recent news from my research

September 2017: My workshop paper 'Speech Enhanced Imitation Learning and Task Abstraction for Human-Robot Interaction' will be presented at the IROS 2017 SBLI workshop on Synergies Between Learning and Interaction.
July 2017: The poster from my presentation at RSS 2017 is available on my publications page.
July 2017: My workshop paper 'Deep Predictive Models for Active Slip Control' will be presented at RSS 2017.
July 2017: My workshop paper 'Active Slip Control for In-Hand Object Manipulation using Deep Predictive Models' will be presented at RSS 2017.
May 2017: My recent video "Dynamic Re-Grasp from Tactile Sensing" with Heni Ben Amor was featured on IEEE Spectrum.
January 2017: Our paper "A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations" was accepted for ICRA 2017.
December 2016: I joined the Interactive Robotics Lab at Arizona State University, which is led by Prof. Heni Ben Amor.
November 2016: Our video "Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)" received the Best Video Award at Humanoids 2016.

Selected Publication

A full list of my publications can be found on my publications page.

Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)

David Vogt, Simon Stepputtis, Richard Weinhold, Bernhard Jung, Heni Ben Amor | Humanoids 2016

This video demonstrates a novel imitation learning approach for learning human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture. From this, an interaction model is learned that inherently captures important spatial relationships as well as the temporal synchrony of body movements between the two interacting partners. The interaction model is based on interaction meshes, which were first adopted by the computer graphics community for the offline animation of interacting virtual characters. We developed a variant of interaction meshes that is suitable for real-time human-robot interaction scenarios. During human-robot collaboration, the learned interaction model allows for adequate spatio-temporal adaptation of the robot's behavior to the movements of the human cooperation partner. Thus, the presented approach is well suited for collaborative tasks requiring continuous body movement coordination between a human and a robot. The feasibility of the approach is demonstrated with the example of a cooperative Lego rocket assembly task.
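The core idea behind an interaction mesh can be illustrated with a minimal sketch: vertices from both partners' bodies are connected into one mesh, and each vertex is described by its offset from the centroid of its mesh neighbors (a Laplacian-style coordinate), which encodes the local spatial relationships between the partners. The joint positions and connectivity below are hypothetical toy values, not the paper's actual data or implementation, and the paper's mesh construction and real-time adaptation are not reproduced here.

```python
import numpy as np

def laplacian_coordinates(points, neighbors):
    """For each vertex, the offset from the centroid of its mesh neighbors.
    These coordinates capture local spatial structure between partners and
    are invariant to translating the whole scene."""
    lap = np.empty_like(points)
    for i, nbrs in enumerate(neighbors):
        lap[i] = points[i] - points[nbrs].mean(axis=0)
    return lap

# Hypothetical toy scene: three "human" joints and two "robot" joints in 3D.
points = np.array([
    [0.0, 0.0, 1.5],   # human head
    [0.2, 0.0, 1.0],   # human hand
    [0.0, 0.0, 0.0],   # human base
    [1.0, 0.0, 1.0],   # robot gripper
    [1.0, 0.0, 0.0],   # robot base
])
# Hand-picked connectivity linking the two partners (the paper derives
# connectivity from a mesh over both partners' motion-capture markers).
neighbors = [[1, 3], [0, 2, 3], [1, 4], [0, 1, 4], [2, 3]]

lap = laplacian_coordinates(points, neighbors)
# Shifting the entire scene leaves the relational encoding unchanged:
shifted = laplacian_coordinates(points + np.array([5.0, -2.0, 1.0]), neighbors)
print(np.allclose(lap, shifted))  # True
```

During collaboration, adapting the robot amounts to solving for robot vertex positions that keep these relational coordinates close to the demonstrated ones as the human moves; that optimization step is omitted from this sketch.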

Links: IEEE Xplore | PDF | Video