Simon Stepputtis

Selected Publication

We present a machine learning methodology for actively controlling slip in order to increase robot dexterity. Leveraging recent insights in Deep Learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object. This information can then be used to precisely manipulate various objects within the robot's hand using external perturbations imposed by gravity or acceleration. We show in a set of experiments that this approach can be used to increase a robot's repertoire of skills.
In ICRA, 2018.
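To make the idea concrete, the sketch below shows what a slip predictor of this general shape might look like: a small network mapping a short history window of tactile readings to a probability of slip on the next time step. This is a minimal illustration only, not the model from the paper; the taxel count, window length, layer sizes, and randomly initialized weights are all assumptions standing in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

TACTILE_DIM = 19   # assumed number of taxels per sensor reading
WINDOW = 10        # assumed number of past readings fed to the predictor

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(scale=0.1, size=(TACTILE_DIM * WINDOW, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1))
b2 = np.zeros(1)

def predict_slip(tactile_window: np.ndarray) -> float:
    """Return a slip probability in [0, 1] for one tactile history window."""
    x = tactile_window.reshape(-1)   # flatten (WINDOW, TACTILE_DIM) to a vector
    h = np.tanh(x @ W1 + b1)         # hidden layer
    logit = float(h @ W2 + b2)       # scalar logit
    return 1.0 / (1.0 + np.exp(-logit))

# Example: one window of synthetic tactile data.
window = rng.normal(size=(WINDOW, TACTILE_DIM))
p = predict_slip(window)
print(f"predicted slip probability: {p:.3f}")
```

In a control loop, such a prediction could gate when the controller loosens or tightens its grasp to let gravity reposition the object, which is the "active slip control" idea described above.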

Publications

Journals

One-shot Learning of Human-Robot Handovers with Triadic Interaction Meshes

In AURO Journal, 2018

PDF DOI

Conference Proceedings

Extrinsic Dexterity through Active Slip Control using Deep Predictive Models

In ICRA, 2018

PDF Project

A System for Learning Continuous Human-Robot Interactions from Human-Human Demonstrations

In ICRA, 2017

PDF DOI

Learning human-robot interactions from human-human demonstrations (with applications in Lego rocket assembly)

In Humanoids, 2016

PDF Video DOI

Workshops

Towards Semantic Policies for Human-Robot Collaboration

To appear at SWRS, 2018

Project Video

Research Projects

I am working on physical Human-Robot Collaboration with the broader goal of creating intuitive and safe methods of interaction. This is achieved by combining traditional learning approaches with natural language processing, human intention prediction, and novel ways of expressing the robot's intent.

Intention Projection

Intention projection indicates which actions a robot intends to take, or points out particular objects and areas to a human coworker. The system greatly increases the robot's ability to communicate with humans.

Interplanetary Initiative

The Interplanetary Initiative brings together researchers from various fields to pave the way for humans in space. I am working on innovative space suits that prevent the adverse effects of low-gravity environments.

Semantic Policies

This research aims to enable safe and transparent human-robot collaboration by combining traditional learning from demonstration with natural language processing.

Tactile Sensing

Leveraging recent insights in Deep Learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object.

Teaching

I am a teaching assistant for the following courses at Arizona State University:

  • CSE 355: Introduction to Theoretical Computer Science (Spring 2018)
  • CSE 571: Artificial Intelligence (Fall 2017)
  • CSE 591: Advances in Robot Learning (Spring 2017)
  • CSE 205: Object-Oriented Programming and Data Structures (Spring 2017)

Videos

Towards Semantic Policies for Human-Robot Collaboration


A Robot Learns To Jointly Assemble A Lego-Rocket With A User


Dynamic Re-Grasp from Tactile Sensing

Contact

  • sstepput@asu.edu
  • Centerpoint (CTRPT, Room 203-03)
    Arizona State University, Tempe, 85281, USA