Research

Neural Policy Translation

Throughout my academic career, I have tried to bridge the gap between cognition and robotics using modern approaches to artificial intelligence, specifically by enhancing imitation learning with natural language processing. Natural language is a critical part of efficient human-human interaction. Despite that, models of human-robot teaming tend to avoid natural language due to its inherent complexity and ambiguity. My approach is to teach robots novel tasks by directly translating natural language into low-level control policies using deep neural networks. This enables robots to quickly learn more complex tasks, and the relationships between them, from simpler, previously learned motions such as reaching, grasping, turning, and inserting. Additionally, the translation approach allows knowledge to be transferred easily between different robots through a shared task representation.
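The translation idea can be illustrated with a deliberately small sketch: encode an instruction into a vector and score a library of learned motion primitives against it. The vocabulary, primitive names, and randomly initialized weights below are all hypothetical stand-ins; a trained model would use a recurrent or transformer encoder and decode a full sequence of parameterized primitives rather than a single distribution.

```python
import numpy as np

# Hypothetical instruction vocabulary and library of previously learned motions.
VOCAB = {"<unk>": 0, "pick": 1, "up": 2, "the": 3, "cup": 4, "and": 5, "turn": 6, "it": 7}
PRIMITIVES = ["reach", "grasp", "turn", "insert"]

rng = np.random.default_rng(0)
EMBED_DIM = 16

# Random parameters stand in for weights learned from demonstration data.
embedding = rng.normal(size=(len(VOCAB), EMBED_DIM))
W_out = rng.normal(size=(EMBED_DIM, len(PRIMITIVES)))

def translate(instruction):
    """Map a natural-language instruction to a distribution over primitives.

    A bag-of-words encoder (mean of word embeddings) followed by a linear
    softmax readout -- the simplest possible 'translation' network.
    """
    tokens = [VOCAB.get(w, 0) for w in instruction.lower().split()]
    sentence_vec = embedding[tokens].mean(axis=0)   # encode the sentence
    scores = sentence_vec @ W_out                   # score each primitive
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax
    return dict(zip(PRIMITIVES, probs))

probs = translate("pick up the cup")
# probs is a distribution over which low-level primitive to execute next.
```

Because the task representation lives in the shared embedding and primitive space rather than in robot-specific joint commands, the same translated plan could in principle be executed by any robot that implements the primitive library.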

Interplanetary Initiative

The Interplanetary Initiative is a joint project between multiple schools and departments at Arizona State University. I am working as a research associate on the Human-Robot Connections project, which addresses the question of how to better connect robotics and human space exploration. Our main goal in this project is to develop a proactive exoskeleton that can be used in space to support astronauts in their daily activities, while also encouraging healthy body posture to prevent problems such as muscle loss during exposure to microgravity environments.


Tactile Sensing

Throughout our lifetime, we learn to delicately grasp, manipulate, and use a wide range of objects. Experience teaches us complex interactions with our environment and increases our dexterity in object manipulation. For example, we learn to actively use slip to our advantage, e.g., sliding, rotating, or shifting objects in-hand. In robotics, however, slip is often treated as a negative side effect that complicates interactions and should be actively avoided. Approaches to slip modeling and slip detection typically aim at reducing or eliminating the effects of slip.
In this project, we discuss how slip can be actively controlled to increase robot dexterity and capability. Previous approaches to slip control have focused on a theoretical analysis of the underlying forces, torques, and physical constraints. In practice, however, such models are often infeasible since they fail to represent the uncertainty and variability inherent in in-hand manipulation. We argue that a key component of success in active slip control is the acquisition of predictive models which anticipate the behavior of an object under different robot actions. Through repeated physical interactions, a robot arm learns to anticipate how its intended actions produce or reduce slip, which in turn changes the pose of the manipulated object. We propose a Deep Predictive Model (DPM) which can be used to effectively learn the relationship between robot actions, incurred slip, and future object poses. In the project-related publications, we perform experiments and provide examples that show how this approach can be leveraged to achieve dexterous object manipulation with low-degree-of-freedom manipulators.
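The action-selection loop implied by such a predictive model can be sketched in a few lines: predict the object pose resulting from each candidate action, then pick the action whose predicted outcome is closest to the goal. Everything below is a toy stand-in, not the DPM itself: a single random linear layer replaces the deep network, the planar pose and two-dimensional action (e.g., grip force and wrist rotation) are assumed state/action spaces, and real parameters would come from training on recorded interaction data that includes tactile features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the learned forward model; a trained DPM would be a
# multi-layer network over pose, action, and tactile inputs.
W = rng.normal(scale=0.1, size=(5, 3))

def predict_next_pose(pose, action):
    """Predict the object's next planar pose (x, y, theta) from the current
    pose and a gripper action, by predicting the slip-induced pose change."""
    features = np.concatenate([pose, action])
    return pose + features @ W

def choose_action(pose, goal, candidate_actions):
    """Pick the action whose predicted outcome lands closest to the goal pose."""
    errors = [np.linalg.norm(predict_next_pose(pose, a) - goal)
              for a in candidate_actions]
    return candidate_actions[int(np.argmin(errors))]

pose = np.array([0.0, 0.0, 0.0])
goal = np.array([0.0, 0.0, 0.5])   # e.g., rotate the object in-hand via slip
candidates = [np.array([f, r]) for f in (0.2, 0.8) for r in (-0.3, 0.3)]
best = choose_action(pose, goal, candidates)
```

The point of the sketch is the control structure: because the model predicts the consequences of slip rather than merely detecting it, slip becomes an actuation channel the robot can exploit, even with a low-degree-of-freedom gripper.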