Simon Stepputtis is a PhD student at Arizona State University and part of the Interactive Robotics Laboratory led by Prof. Heni Ben Amor. He is pursuing a PhD in Computer Science with a strong focus on Human-Robot Interaction (HRI) and Artificial Intelligence (AI). Prior to that, he received bachelor's and master's degrees in Engineering
and Computing from TU Bergakademie Freiberg. During his time in Germany, he worked in the Humanoid
Robotics Group Freiberg under the lead of Prof. Bernhard Jung.
Previously, he worked on teleoperation for robotic arms in underground mines. In his master's thesis, he used statistical models to improve handovers from humans to robots, leading to more intuitive collaboration: robots learned grasp timings instead of relying on preprogrammed behavior. His current research continues this idea and pushes the boundary from preprogrammed workers to collaborative teams. Using deep learning approaches, the robot can learn more complex behaviors, enabling intricate joint tasks. He recently collaborated on developing an imitation learning approach in which a human-robot team assembles a LEGO rocket. This work was submitted to Humanoids 2016 and received the Best Video Award. [Video]
Upon joining the Interactive Robotics Lab at Arizona State University, he was awarded the CIDSE Doctoral Fellowship for Spring 2017. Besides his work as a research assistant, Simon serves as a teaching assistant for several courses in the fields of computer science and robotics. For additional information and a tabular summary, see the following CV: Download CV
PhD in Computer Science
Arizona State University, since 2016
M.Sc. in Engineering & Computing
TU Bergakademie Freiberg, 2016
B.Sc. in Engineering & Computing
TU Bergakademie Freiberg, 2015
CIDSE Doctoral Fellowship
Arizona State University, 2017
Best Video Award
Humanoids 2016
Intention projection is used to indicate which actions a robot intends to take, or to point out specific objects and areas to a human coworker. The system greatly increases the robot's ability to communicate with humans.
The Interplanetary Initiative brings together researchers from various fields to pave the way for humans in space. I am working on creating innovative space suits to prevent adverse effects of low-gravity environments.
This research aims to enable safe and transparent human-robot collaboration by combining traditional learning from demonstration with natural language processing.
Leveraging recent insights in Deep Learning, we propose a Deep Predictive Model that uses tactile sensor information to reason about slip and its future influence on the manipulated object.
I am a teaching assistant for the following courses at Arizona State University:
A Robot Learns To Jointly Assemble A Lego-Rocket With A User
Dynamic Re-Grasp from Tactile Sensing