Simon Stepputtis

Postdoctoral Fellow | Carnegie Mellon University | Robotics Institute

  • I am on the faculty market for fall 2025!

    My work focuses on developing adaptive robotic systems that learn efficiently, adapt autonomously, and operate safely in human-centric environments by leveraging neuro-symbolic methods, with applications in in-home assistance, healthcare, and manufacturing.
    • Paper: ShapeGrasp: Zero-Shot Task-Oriented Grasping with Large Language Models through Geometric Decomposition

      A novel method enables robots to intuitively grasp unfamiliar objects by decomposing their shapes and utilizing large language models, achieving high success rates in experimental trials. Paper on arXiv
    • Paper: A Comparison of Imitation Learning Algorithms for Bimanual Manipulation

      Explore how different imitation learning algorithms tackle complex industrial tasks, revealing key strengths and weaknesses in precision, efficiency, and adaptability. Paper on arXiv
    • University of Washington: Invited Talk

      I am excited to give a talk at the University of Washington about Neuro-Symbolic Robot Intelligence!
    • Multiple ICRA Workshop Papers!

      I will be at ICRA 2024 to present some of our most recent work. Check out the Publications!
    • Multiple New Papers (NeurIPS, EMNLP, CoLLAs, AURO, CVPR)

      I updated the website with multiple new papers, including publications at EMNLP 2023, NeurIPS 2023, CVPR 2024, and CoLLAs 2024, as well as in the Autonomous Robots journal.
    • Paper: Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation

      Introducing Sigma, a Siamese Mamba network that improves scene understanding by fusing thermal, depth, and RGB data for more accurate predictions in challenging environments. Paper on arXiv
    • Paper: Sample-Efficient Learning of Novel Visual Concepts

      Sample-efficient extraction of novel objects, affordances, and attributes from images using symbolic domain knowledge, to be presented at CoLLAs 2023.
    • Paper: Introspective Action Advising for Interpretable Transfer Learning

      We propose an alternative approach to transfer learning between tasks based on action advising, which will be presented at CoLLAs 2023!
    • RSS 2023: Articulate Robots Workshop

      I am organizing a workshop at RSS 2023 in Daegu, Republic of Korea on Articulate Robots: Utilizing Language for Robot Learning.
    • Paper: Explainable Action Advising for Multi-Agent Reinforcement Learning

      Our new paper will be presented at ICRA 2023 in London, England! Paper on arXiv
    • Paper: Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation

      Our new paper with our collaborators at Intel will be presented at CoRL 2022 in Auckland, New Zealand! Paper on OpenReview
    • Paper: Concept Learning for Interpretable Multi-Agent Reinforcement Learning

      Our paper on interpretable concept learning for multi-agent robot systems will be presented at CoRL 2022 in Auckland, New Zealand! Paper on OpenReview
    • Paper: A System for Imitation Learning of Contact-Rich Bimanual Manipulation Policies

      Our paper in collaboration with Intrinsic was accepted to IROS 2022, and we will present it in Kyoto, Japan. View full paper
    • IROS 2022: TOM4HAT Workshop

      I organized a workshop at IROS 2022 in Kyoto, Japan on Theory of Mind.
    • Workshop: RSS Pioneers

      I was accepted to the RSS Pioneers Workshop 2022 with my work on Language-Conditioned Human-Agent Teaming.
    • Postdoctoral Fellow at Carnegie Mellon University

      I started as a postdoctoral fellow at Carnegie Mellon University (CMU) with Prof. Katia Sycara.
    • Graduation: Ph.D. in Computer Science

      I completed my Ph.D. in Computer Science at Arizona State University with Prof. Heni Ben Amor!
    • Imperial College London: Invited Talk

      I will be giving a brief summary and outlook of the work presented in our NeurIPS 2020 paper at Imperial College London!
    • Resident @ X, The Moonshot Factory

      Over the summer, I will be a resident at X, The Moonshot Factory, where I will be working on industrial manipulation tasks for Intrinsic, a robotics software and AI project at X.
    • Video: Language Conditioned Imitation Learning

      We contributed a video to the robot expo at IJCAI 2021 that is a direct extension to our NeurIPS 2020 paper. You can check out the video here!
    • Paper: Language-Conditioned Imitation Learning for Robot Manipulation Tasks

      We published a new paper at NeurIPS 2020! Our paper got accepted as a spotlight presentation (top ~4% of accepted papers). View full paper
    • Intel AI Labs: Invited Talk

      I am excited to give a talk, Language for Robotics, at Intel AI Labs, summarizing our efforts on learning robot policies from natural language instructions.
    • Teaching Introduction to Theoretical Computer Science at ASU

      I will be teaching CSE 355: Introduction to Theoretical Computer Science at Arizona State University as the main instructor during the upcoming Summer 2020 semester!
    • Intel AI: Talk at the Deep Learning Community

      I will be giving a talk at the Deep Learning Community of Practice titled Imitation Learning for Adaptive Robot Control Policies from Language, Vision, and Motion.
    • Workshop Paper: Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration

      We contributed a workshop paper to the Workshop on Robot Learning at NeurIPS 2019!
    • Best Poster Award

      I received the Best Poster Award from Nvidia at the Southwest Robotics Symposium for my work on Neural Policy Translation for Robot Control!
    • Robotics Intern @BOSCH

      I will be joining Bosch in Sunnyvale for an internship to work on semantic data analysis with a focus on time series segmentation.
    • Paper: Extrinsic Dexterity through Active Slip Control using Deep Predictive Models

      We got our paper accepted to ICRA 2018, and I will be presenting our work in Brisbane, Australia! Paper on IEEE Xplore
    • Best Video Award

      Awarded at the International Conference on Humanoid Robots (Humanoids) 2016 for our work on learning human-robot interactions from human-human demonstrations. Video on YouTube