In this project, I primarily focus on using imitation learning and natural language processing to push the boundaries of human-robot interaction for physical tasks commonly found in home environments. The goal is to bridge the gap between cognition and robotics through modern approaches to artificial intelligence, specifically by enhancing imitation learning with natural language processing. Natural language is a critical part of efficient human-human interaction, yet models of human-robot teaming have largely avoided it because of its inherent complexity and ambiguity. My approach is to teach robots novel tasks by translating natural language directly into low-level control policies using deep neural networks. With this approach, robots can quickly learn complex tasks and relationships from simpler, previously learned motions such as reaching, grasping, turning, and inserting. Since safety is a central concern in human-robot interaction, the system must also reason about the uncertainty inherent in the task to avoid dangerous situations. A further benefit of natural language is that it lets the robot express its intentions to its human collaborator.
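As a rough illustration of what "translating natural language into low-level control policies" can mean, the following toy sketch maps an instruction and the robot's current state to motor commands through a single network layer. Everything here is a hypothetical simplification: the vocabulary, dimensions, mean-pooled sentence encoder, and randomly initialized weights all stand in for components that a real system would learn from demonstrations paired with language.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary of motion primitives and filler words.
VOCAB = {"reach": 0, "grasp": 1, "turn": 2, "insert": 3, "the": 4, "handle": 5}
EMBED_DIM = 8
STATE_DIM = 4   # e.g. joint angles of a simple arm (illustrative)
ACTION_DIM = 4  # e.g. joint velocity commands (illustrative)

# Randomly initialized parameters stand in for weights that would be
# learned by imitation from language-annotated demonstrations.
embedding = rng.normal(size=(len(VOCAB), EMBED_DIM))
W = rng.normal(size=(EMBED_DIM + STATE_DIM, ACTION_DIM)) * 0.1
b = np.zeros(ACTION_DIM)

def encode(instruction: str) -> np.ndarray:
    """Mean-pool word embeddings into a fixed-size instruction vector."""
    ids = [VOCAB[w] for w in instruction.lower().split() if w in VOCAB]
    return embedding[ids].mean(axis=0)

def policy(instruction: str, state: np.ndarray) -> np.ndarray:
    """Map (language, robot state) to a low-level action."""
    z = np.concatenate([encode(instruction), state])
    return np.tanh(z @ W + b)  # tanh keeps motor commands bounded

state = np.zeros(STATE_DIM)
action = policy("grasp the handle", state)
print(action.shape)  # one bounded command per actuated joint
```

In practice the linear layer would be replaced by a trained deep network, and the instruction encoder by a learned language model, but the interface is the same: language in, control signal out.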
The novelty of this project lies in using language as a natural interface through which humans convey their intentions to the robot, while also allowing the robot to communicate its intentions back to the human.