Research Thrusts
I am on the faculty job market for 2024/2025! If you think my work is a good fit for you, please do not hesitate to reach out!
My research is dedicated to developing robots that can intelligently reason about their environments and the humans within them. By considering the affordances, attributes, and relationships of the objects around them, my work enables robots to learn behaviors efficiently and generalize them to novel environments. To achieve this, I develop neuro-symbolic frameworks that combine the pattern-recognition strengths of neural networks with the reasoning power of symbolic AI.
Throughout my work, I leverage neuro-symbolic approaches in two key ways:
- Enhancing Learning Efficiency in Robotics: By integrating symbolic domain knowledge – such as knowledge graphs – as additional context, I improve the sample efficiency of robotic learning systems during training and inference. This enables personalized robots that can quickly adapt to new tasks and environments with minimal data, even in a zero-shot manner, facilitating lifelong learning and human-robot interaction. (A minimal sketch of this idea follows this list.)
- Improving Interpretability and Resource Efficiency of Robotic Systems: Leveraging symbolic structures, I not only improve the transparency and interpretability of AI systems but also create systems that generate symbolic robot controllers that operate independently – without constant reliance on heavy neural computation. Through this approach, I pave the way for increased trust in AI systems.
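To make the first point concrete, here is a minimal, self-contained Python sketch of how a symbolic knowledge graph can supply the context that lets a robot act sensibly on an object it was never trained on. The graph contents, the skill library, and the `select_skill` heuristic are illustrative stand-ins, not the actual systems from my work.

```python
# Minimal, illustrative sketch: a symbolic knowledge graph supplies affordance
# context so the robot can choose a sensible skill for an unseen object.
# All entities, affordances, and the selection heuristic are hypothetical.

KNOWLEDGE_GRAPH = {
    # object        : (affordances,               attributes)
    "mug":            ({"graspable", "pourable"}, {"fragile"}),
    "water_bottle":   ({"graspable", "pourable"}, {"deformable"}),
    "sponge":         ({"graspable", "wipeable"}, {"deformable"}),
}

SKILL_LIBRARY = {
    # skill         : affordance the skill requires
    "pick_and_pour":  "pourable",
    "pick_and_wipe":  "wipeable",
    "pick_and_place": "graspable",
}

def affordances_of(obj: str) -> set[str]:
    """Look up an object's affordances, falling back to the most similar known object."""
    if obj in KNOWLEDGE_GRAPH:
        return KNOWLEDGE_GRAPH[obj][0]
    # Zero-shot fallback: naive token overlap stands in for a learned similarity.
    closest = max(
        KNOWLEDGE_GRAPH,
        key=lambda known: len(set(known.split("_")) & set(obj.split("_"))),
    )
    return KNOWLEDGE_GRAPH[closest][0]

def select_skill(obj: str, goal_affordance: str) -> str:
    """Pick a skill whose required affordance matches both the goal and the object."""
    supported = affordances_of(obj)
    for skill, required in SKILL_LIBRARY.items():
        if required == goal_affordance and required in supported:
            return skill
    return "pick_and_place"  # safe default

if __name__ == "__main__":
    # "juice_bottle" was never seen during training, but the graph lets the robot
    # transfer the pouring behavior associated with "water_bottle".
    print(select_skill("juice_bottle", "pourable"))  # -> pick_and_pour
```

The point of the sketch is that the symbolic context, not additional training data, is what carries the generalization to the unseen object.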
My hybrid approaches address these critical thrusts:
- Lifelong Learning for Adaptive Robots: How can robots continuously learn and adapt to individual human preferences and changing environments, providing personalized assistance and improving human-robot interaction over time?
- Efficient and Robust Control: How can we enhance the efficiency and robustness of robot control systems so that they operate effectively in real-time environments without heavy computational demands, enabling fast and autonomous decision-making? (A minimal controller sketch follows this list.)
- Trust and Safety in AI Systems: How can we ensure that robots operate safely and reliably in sensitive applications – such as healthcare – by providing transparency and compliance with safety standards, thereby increasing trust and acceptance of robotic systems?
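As a companion to the control thrust above, the following sketch illustrates one way a heavy learned policy can be distilled into a compact symbolic controller that needs only threshold comparisons at deployment time. The `teacher_policy`, state features, and action set are hypothetical, and a scikit-learn decision tree stands in for whatever symbolic representation a real system would extract.

```python
# Minimal sketch: distilling a (stand-in) learned policy into a small decision
# tree so the deployed controller needs no neural network at run time.
# The state features, action set, and teacher policy are all hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

ACTIONS = ["approach", "grasp", "retreat"]

def teacher_policy(state: np.ndarray) -> int:
    """Stand-in for an expensive learned policy (e.g., a neural network)."""
    distance, gripper_open = state
    if distance > 0.2:
        return ACTIONS.index("approach")
    return ACTIONS.index("grasp") if gripper_open > 0.5 else ACTIONS.index("retreat")

# 1) Query the teacher on sampled states to build a distillation dataset.
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 1.0, size=(500, 2))   # [distance_to_object, gripper_open]
actions = np.array([teacher_policy(s) for s in states])

# 2) Fit a shallow tree: this is the symbolic controller we actually deploy.
controller = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# 3) The extracted rules are human-readable and auditable.
print(export_text(controller, feature_names=["distance_to_object", "gripper_open"]))

# 4) At run time, only cheap threshold comparisons are executed.
print(ACTIONS[controller.predict([[0.05, 1.0]])[0]])   # -> grasp
```

Because the deployed artifact is just a set of readable threshold rules, it can be inspected, checked against safety requirements, and executed on modest onboard hardware.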
Neuro-Symbolic Learning
Neuro-symbolic reasoning represents a significant advancement in artificial intelligence, merging the intuitive pattern-recognition capabilities of neural networks with the logical, interpretable framework of symbolic AI. This hybrid approach addresses critical limitations of each paradigm by combining their respective strengths: the learning efficiency and adaptability of neural networks with the explicit, rule-based reasoning of symbolic AI, yielding more accurate, transparent, flexible, and reliable decision-making. Such integration is especially crucial in applications that require both data-driven insights and logical, explainable decisions, such as understanding human behavior, recognizing objects, and controlling robots. Neuro-symbolic reasoning thus paves the way for more robust, understandable, and trustworthy AI systems, aligning machine intelligence more closely with human-like understanding and reasoning while providing a platform for lifelong learning and adaptation through access to symbolic knowledge.
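As a toy illustration of the neural/symbolic split described above, the sketch below pairs a mocked neural detector with explicit, human-readable rules that produce both a decision and a reason. The detector outputs, the `SHARP_OBJECTS` rule, and the confidence threshold are assumptions made purely for illustration.

```python
# Minimal sketch of a neuro-symbolic decision step: a (mocked) neural detector
# supplies perceptual beliefs, and explicit symbolic rules turn them into an
# auditable decision. Detector outputs, rules, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def neural_detector(image) -> list[Detection]:
    """Stand-in for a learned perception model (the image is ignored here)."""
    return [Detection("knife", 0.92), Detection("apple", 0.71)]

# Symbolic layer: human-readable rules over the detector's beliefs.
SHARP_OBJECTS = {"knife", "scissors"}

def decide_handover(image, recipient: str) -> tuple[bool, str]:
    """Combine neural perception with symbolic rules; return decision and reason."""
    detections = [d for d in neural_detector(image) if d.confidence >= 0.8]
    for d in detections:
        if d.label in SHARP_OBJECTS and recipient == "child":
            return False, f"rule fired: '{d.label}' is sharp and recipient is a child"
    return True, "no safety rule fired"

if __name__ == "__main__":
    ok, reason = decide_handover(image=None, recipient="child")
    print(ok, "-", reason)   # False - rule fired: 'knife' is sharp and recipient is a child
```

The neural component can be retrained or swapped out freely, while the symbolic layer keeps every decision explainable in terms a human can audit.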
Dexterous Manipulation and Language-Conditioned Robotics
Robotic manipulation is a complex challenge, requiring precision, dexterity, and adaptability to handle the vast variety of objects humans interact with daily. While humans naturally develop these skills over time, teaching robots to replicate this level of finesse remains difficult. My research explores advanced manipulation strategies, focusing on dexterous and bimanual control, to enable robots to interact more fluidly with their environments. These capabilities are key for real-world applications, where robots must handle both delicate and complex tasks in human-centric environments. However, manipulation alone is not enough. Effective human-robot interaction requires intuitive communication, and language plays a pivotal role in bridging that gap. By integrating language understanding with robot control, my work empowers robots to interpret human instructions and adapt their manipulation strategies accordingly. This combination allows for more natural, human-like interactions, where robots not only manipulate objects skillfully but do so in response to nuanced verbal commands. Together, these advancements are driving the development of next-generation assistive robots capable of seamlessly interacting with both their environment and the people in it.
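To hint at how language conditioning can steer manipulation, here is a deliberately simple sketch that grounds a verbal instruction into a parameterized manipulation primitive. In practice a learned language model would perform the grounding; the keyword matcher, object list, and `Primitive` fields below are illustrative assumptions.

```python
# Minimal sketch of language-conditioned control: an instruction is grounded
# into a parameterized manipulation primitive. A real system would use a learned
# language model; the keyword matcher and primitives here are purely illustrative.
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str      # which skill to run
    target: str    # object to act on
    speed: float   # execution speed scaling

KNOWN_OBJECTS = {"mug", "bottle", "sponge"}
VERB_TO_SKILL = {"hand": "handover", "pour": "pour", "wipe": "wipe"}

def ground_instruction(instruction: str) -> Primitive:
    """Map a verbal command to a primitive; stand-in for a learned grounding model."""
    words = instruction.lower().split()
    skill = next((VERB_TO_SKILL[w] for w in words if w in VERB_TO_SKILL), "pick_and_place")
    target = next((w for w in words if w in KNOWN_OBJECTS), "unknown")
    speed = 0.3 if any(w in ("carefully", "gently", "slowly") for w in words) else 1.0
    return Primitive(skill, target, speed)

if __name__ == "__main__":
    print(ground_instruction("Please hand me the mug carefully"))
    # -> Primitive(name='handover', target='mug', speed=0.3)
```

The same grounded primitive interface also lets nuanced modifiers such as "carefully" change how a skill is executed, not just which skill is chosen.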