Neural Policy Translation
Throughout my academic career, I have tried to bridge the gap between cognition and robotics using modern approaches to artificial intelligence, specifically by enhancing imitation learning with natural language processing. Natural language is a critical part of efficient human-human interaction, yet models of human-robot teaming mostly avoid it because of its inherent complexity and ambiguity. My approach is to teach robots novel tasks by directly translating natural language into low-level control policies using deep neural networks. This enables robots to quickly learn complex tasks and relationships from simpler, previously learned motions such as reaching, grasping, turning, and inserting. Since safety is a main concern in human-robot interaction, the system must also reason about the uncertainty inherent in the task to avoid dangerous situations. A further benefit of natural language is that it allows the robot to express its intentions to its human collaborator.
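As a rough illustration of this idea, the following PyTorch sketch (my own minimal assumption, not the published architecture) encodes a tokenized instruction with a recurrent network and emits both a distribution over previously learned motion primitives and the continuous parameters of a low-level policy; all names, dimensions, and the example command are hypothetical.

```python
import torch
import torch.nn as nn

# Previously learned motion primitives the policy can compose.
PRIMITIVES = ["reach", "grasp", "turn", "insert"]

class InstructionToPolicy(nn.Module):
    """Hypothetical translator: tokenized instruction -> policy parameters."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, param_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # One head selects among known primitives; the other regresses the
        # continuous parameters of the selected primitive (e.g., a target pose).
        self.primitive_head = nn.Linear(hidden_dim, len(PRIMITIVES))
        self.param_head = nn.Linear(hidden_dim, param_dim)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded instruction.
        _, h = self.encoder(self.embed(token_ids))
        h = h.squeeze(0)  # (batch, hidden_dim)
        return self.primitive_head(h), self.param_head(h)

# Example: a stand-in for a tokenized command such as "insert the peg".
model = InstructionToPolicy(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 6))
primitive_logits, policy_params = model(tokens)
print(primitive_logits.softmax(dim=-1), policy_params.shape)
```

Factoring the output into a primitive selector plus continuous parameters mirrors the idea above: complex behaviors are built from a small vocabulary of simpler, previously learned motions.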
Throughout our lifetimes, we learn to delicately grasp, manipulate, and use a wide range of objects. Experience yields increasingly complex interactions with our environment, including greater dexterity in object manipulation. For example, we learn to actively use slip to our advantage, e.g., sliding, rotating, or shifting objects in-hand. In robotics, however, slip is usually treated as a negative side effect that complicates interaction and should be actively avoided: approaches to slip modeling and slip detection typically aim at reducing or eliminating its effects.
In this project, we discuss how slip can be actively controlled to increase robot dexterity and capability. Previous approaches to slip control have focused on a theoretical analysis of the underlying forces, torques, and physical constraints. In practice, however, such models are often infeasible because they fail to capture the uncertainty and variability inherent in in-hand manipulation. We argue that a key component of successful active slip control is the acquisition of predictive models that anticipate the behavior of an object under different robot actions. Through repeated physical interaction, a robot arm learns to anticipate how its intended actions produce or reduce slip and, in turn, change the pose of the manipulated object. We propose a Deep Predictive Model (DPM) that effectively learns the relationship between robot actions, incurred slip, and future object poses. In the publications related to this project, we perform experiments and provide examples showing how this approach can be leveraged to achieve dexterous object manipulation with low-degree-of-freedom manipulators.
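To make the forward-model idea concrete, here is a minimal sketch of a learned predictive model, assuming a simple fully connected network in PyTorch; the pose and action dimensions, layer sizes, and training data below are illustrative placeholders rather than the DPM used in the publications.

```python
import torch
import torch.nn as nn

class DeepPredictiveModel(nn.Module):
    """Hypothetical forward model: (object pose, robot action) -> next pose."""

    def __init__(self, pose_dim=6, action_dim=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),  # predicted change in pose
        )

    def forward(self, pose, action):
        # Predict the next pose as the current pose plus a learned delta.
        return pose + self.net(torch.cat([pose, action], dim=-1))

# One training step over logged interactions (pose_t, action_t, pose_t+1);
# random tensors stand in for data collected through physical interaction.
model = DeepPredictiveModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pose_t = torch.randn(32, 6)
action_t = torch.randn(32, 4)
pose_next = torch.randn(32, 6)
loss = nn.functional.mse_loss(model(pose_t, action_t), pose_next)
opt.zero_grad()
loss.backward()
opt.step()
```

At run time, such a model lets the robot score candidate actions by their predicted pose change and choose the one that induces, or suppresses, the desired slip.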
The Interplanetary Initiative is a joint project between multiple schools and departments at Arizona State University. I am involved in the Human-Robot Connections pilot which addresses the question of how to better connect robotics and human space exploration.
Intention projection is a method developed in our lab that gives robots the ability to guide human workers during collaboration. By using the environment as a canvas, the system displays signs, icons, and text that guide the human or express the robot's intentions through visual cues. A vision-based object tracker precisely determines the position and rotation of each object in the environment, and a projection mapping technique then projects information onto the environment or the objects within it. Tracking objects and projecting information simultaneously allows the system to provide information to the human collaborator in real time. The main objective of this methodology is to increase the overall performance of human-robot teams.
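The following OpenCV sketch illustrates the projection-mapping step, assuming the camera-to-projector mapping for the workspace plane has been calibrated offline; the calibration points, resolutions, and the project_cue helper are hypothetical stand-ins, not values from the actual system.

```python
import cv2
import numpy as np

# Camera-to-projector homography for the workspace plane, from four point
# correspondences measured during an (assumed) offline calibration step.
cam_pts = np.float32([[100, 80], [520, 90], [530, 400], [90, 410]])
proj_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])
H, _ = cv2.findHomography(cam_pts, proj_pts)

def project_cue(object_xy, label="PICK", proj_size=(720, 1280)):
    """Render a guidance cue at the projector pixel that lands on the
    tracked object's camera-space position."""
    # Map the object's camera coordinates into projector coordinates.
    pt = cv2.perspectiveTransform(np.float32([[object_xy]]), H)[0, 0]
    canvas = np.zeros((*proj_size, 3), dtype=np.uint8)  # projector frame
    center = (int(pt[0]), int(pt[1]))
    cv2.circle(canvas, center, 40, (0, 255, 0), 4)      # highlight ring
    cv2.putText(canvas, label, (center[0] - 35, center[1] - 55),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    return canvas

# Per camera frame: the tracker supplies object_xy, and the returned image
# is sent to the projector so the cue follows the object in real time.
cue = project_cue(object_xy=(315, 240))
```

Because the homography is computed once and each cue is a single warp-and-draw pass, the overlay can keep up with the tracker and follow moving objects without noticeable lag.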