Simon Stepputtis

Selected Publication

This video demonstrates a novel imitation learning approach for learning human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture. From this, an interaction model is learned that inherently captures important spatial relationships as well as the temporal synchrony of body movements between the two interacting partners. The interaction model is based on interaction meshes, which were first introduced in the computer graphics community for the offline animation of interacting virtual characters. We developed a variant of interaction meshes that is suitable for real-time human-robot interaction scenarios. During human-robot collaboration, the learned interaction model allows for adequate spatio-temporal adaptation of the robot's behavior to the movements of the human cooperation partner. Thus, the presented approach is well suited for collaborative tasks that require continuous body-movement coordination between a human and a robot. The feasibility of the approach is demonstrated with the example of a cooperative Lego rocket assembly task.
In Humanoids, 2016.
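
As a rough sketch of the underlying idea (my simplified reading of the interaction-mesh literature, not the exact formulation from the paper): an interaction mesh connects markers on both partners' bodies, and the spatial relationship of each marker p_i to its mesh neighbors N(i) can be encoded with Laplacian coordinates,

    \delta_i = p_i - \sum_{j \in N(i)} w_{ij} \, p_j .

Adapting the robot's motion to the observed human motion can then be posed as finding new marker positions p' that minimize the deformation of these coordinates,

    \min_{p'} \sum_i \| \delta_i(p') - \delta_i(p) \|^2 ,

with the human's markers held fixed at their measured positions.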

Publications

One-shot Learning of Human-Robot Handovers with Triadic Interaction Meshes

To appear in AURO Journal, 2017

A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations

In ICRA, 2017

PDF

Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)

In Humanoids, 2016

PDF Video DOI

Research Projects

Intention Projection

Intention projection is used to indicate which actions a robot intends to take or to point out specific objects and areas to a human coworker. The system greatly increases the robot's ability to communicate with humans.

Interplanetary Initiative

The Interplanetary Initiative brings together researchers from various fields to pave the way for humans in space. I am working on creating innovative space suits to prevent adverse effects from low-gravity environments.

Semantic Policies

This research aims to enable safe and transparent human-robot collaboration by combining traditional learning from demonstration with natural language processing.

Tactile Sensing

Leveraging recent advances in deep learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object.
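
As a minimal illustration of what such a predictive model could look like (the architecture, names, and dimensions below are assumptions for illustration, not the published model):

    # Illustrative sketch only: a small recurrent model that maps a window of
    # tactile sensor readings to a slip probability. Taxel count, hidden size,
    # and prediction horizon are assumed values, not those used in the project.
    import torch
    import torch.nn as nn

    class TactileSlipPredictor(nn.Module):
        def __init__(self, num_taxels=64, hidden_size=128):
            super().__init__()
            # Encode each frame of taxel pressures, then model temporal dynamics.
            self.encoder = nn.Sequential(nn.Linear(num_taxels, hidden_size), nn.ReLU())
            self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)  # logit for "slip within horizon"

        def forward(self, tactile_seq):
            # tactile_seq: (batch, time, num_taxels)
            features = self.encoder(tactile_seq)
            _, last_hidden = self.rnn(features)
            return torch.sigmoid(self.head(last_hidden[-1]))  # slip probability

    # Example usage with random data standing in for sensor readings.
    model = TactileSlipPredictor()
    slip_prob = model(torch.randn(1, 50, 64))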

Teaching

I am a teaching assistant for the following courses at Arizona State University:

  • CSE 571: Artificial Intelligence (Fall 2017)
  • CSE 591: Advances in Robot Learning (Spring 2017)
  • CSE 205: Object-Oriented Programming and Data Structures (Spring 2017)

Videos

A Robot Learns To Jointly Assemble A Lego-Rocket With A User


Dynamic Re-Grasp from Tactile Sensing

Contact

  • sstepput@asu.edu
  • CTRPT Room 203-03, Arizona State University, Tempe, AZ 85281, USA
  • Office hours: Tuesday/Thursday, 9:00 to 11:00, or email for an appointment