Simon Stepputtis

Selected Publication

This video demonstrates a novel imitation learning approach for learning human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture. From these recordings, an interaction model is learned that inherently captures important spatial relationships as well as the temporal synchrony of body movements between the two interacting partners. The interaction model is based on interaction meshes, which were first adopted by the computer graphics community for the offline animation of interacting virtual characters. We developed a variant of interaction meshes that is suitable for real-time human-robot interaction scenarios. During human-robot collaboration, the learned interaction model allows for adequate spatio-temporal adaptation of the robot's behavior to the movements of the human cooperation partner. Thus, the presented approach is well suited for collaborative tasks requiring continuous body-movement coordination between a human and a robot. The feasibility of the approach is demonstrated with the example of a cooperative Lego rocket assembly task.
In Humanoids, 2016



One-shot Learning of Human-Robot Handovers with Triadic Interaction Meshes

In Autonomous Robots (AURO), 2018

Conference Proceedings

Extrinsic Dexterity through Active Slip Control using Deep Predictive Models

To appear at ICRA, 2018


A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations

In ICRA, 2017


Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)

In Humanoids, 2016



Towards Semantic Policies for Human-Robot Collaboration

To appear at SWRS, 2018

Project Video

Research Projects

I am working on physical Human-Robot Collaboration with the broader goal of creating intuitive and safe methods of interaction. This is achieved by combining traditional learning approaches with natural language processing, human intention prediction, and novel ways of expressing the robot's intent.

Intention Projection

Intention projection indicates which actions a robot intends to take, or points out particular objects and areas to a human coworker. The system greatly increases the robot's ability to communicate with humans.

Interplanetary Initiative

The Interplanetary Initiative brings together researchers from various fields to pave the way for humans in space. I am working on creating innovative space suits to prevent adverse effects of low-gravity environments.

Semantic Policies

This research aims to enable safe and transparent human-robot collaboration by combining traditional learning from demonstration with natural language processing.

Tactile Sensing

Leveraging recent advances in deep learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object.


I am a teaching assistant for the following courses at Arizona State University:

  • CSE 355: Introduction to Theoretical Computer Science (Spring 2018)
  • CSE 571: Artificial Intelligence (Fall 2017)
  • CSE 591: Advances in Robot Learning (Spring 2017)
  • CSE 205: Object-Oriented Programming and Data Structures (Spring 2017)


Towards Semantic Policies for Human-Robot Collaboration

A Robot Learns to Jointly Assemble a Lego Rocket with a User

Dynamic Re-Grasp from Tactile Sensing


  • CTRPT Room 203-03, Arizona State University, Tempe, AZ 85281, USA
  • Fridays, 9:00 to 11:00 (or email for an appointment)