Simon Stepputtis is a PhD student at Arizona State University and a member of the Interactive Robotics Laboratory, led by Prof. Heni Ben Amor.
He majors in Computer Science with a strong focus on Human-Robot Interaction (HRI) and Artificial Intelligence (AI).
Prior to that, he received his bachelor's and master's degrees in Engineering and Computing from TU Bergakademie Freiberg.
During his time in Germany, he worked in the Humanoid Robotics Group Freiberg, led by Prof. Bernhard Jung.
Previously, he worked on teleoperation for robotic arms in underground mines. In his master's thesis, he used statistical models to improve handovers from humans to robots, leading to more intuitive collaboration. His current research extends this idea by leveraging speech in cooperative assembly tasks. Using novel approaches, complex interaction dynamics can be learned and verified, enabling intuitive and safe Human-Robot Collaboration. He recently contributed to an imitation learning approach in which a human-robot team jointly assembles a LEGO rocket. This work was presented at Humanoids 2016 and received the Best Video Award. [Video]
Upon joining the Interactive Robotics Lab at Arizona State University, he was awarded the CIDSE Doctoral Fellowship for Spring 2017. Besides his work as a research assistant, Simon serves as a teaching assistant for several courses in computer science and robotics. For additional information and a tabular summary, see his CV: Download CV
PhD in Computer Science
Arizona State University, Since 2016
M.Sc. in Engineering & Computing
TU Bergakademie Freiberg, 2016
B.Sc. in Engineering & Computing
TU Bergakademie Freiberg, 2015
CIDSE Doctoral Fellowship
Arizona State University, 2018
CIDSE Doctoral Fellowship
Arizona State University, 2017
Best Video Award
Humanoids, 2016
Robert Bosch LLC
Robotics Intern, May - August 2018
I am working on physical Human-Robot Collaboration with the broader goal of creating intuitive and safe methods of interaction. This is achieved by combining traditional learning approaches with natural language processing, human intention prediction, and novel ways of expressing the robot's intent.
Intention projection is used to indicate which actions a robot intends to take, or to point out specific objects and areas to a human coworker. The system greatly increases the robot's ability to communicate with humans.
The Interplanetary Initiative brings together researchers from various fields to pave the way for humans in space. I am working on innovative space suits that prevent adverse effects of low-gravity environments.
This research aims to enable safe and transparent human-robot collaboration by combining traditional learning from demonstration with natural language processing.
Leveraging recent insights from Deep Learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object.
I am a teaching assistant for the following courses at Arizona State University:
Towards Semantic Policies for Human-Robot Collaboration
A Robot Learns To Jointly Assemble A Lego-Rocket With A User
Dynamic Re-Grasp from Tactile Sensing