Throughout our lives, we learn to delicately grasp, manipulate, and use a wide range of objects. Experience allows us to master complex interactions with our environment and to manipulate objects with increasing dexterity. For example, we learn to actively use slip to our advantage, e.g., sliding, rotating, or shifting objects in hand. In robotics, however, slip is often treated as a negative side effect that complicates interactions and should be actively avoided. Approaches to slip modeling and slip detection therefore typically aim at reducing or eliminating its effects.
In this project, we discuss how slip can be actively controlled to increase robot dexterity and capability. Previous approaches to slip control have focused on a theoretical analysis of the underlying forces, torques, and physical constraints. In practice, however, such models are often infeasible since they fail to capture the uncertainty and variability inherent in in-hand manipulation. We argue that a key component of successful active slip control is the acquisition of predictive models that anticipate the behavior of an object under different robot actions. Through repeated physical interactions, a robot arm learns to anticipate how its intended actions produce or reduce slip and thereby change the pose of the manipulated object. We propose a Deep Predictive Model (DPM) that can effectively learn the relationship between robot actions, incurred slip, and future object poses. In the project-related publications, we report experiments and examples that show how this approach can be leveraged to achieve dexterous object manipulation with low-degree-of-freedom manipulators.
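To make the idea of a learned forward model concrete, the sketch below trains a small neural network to predict an object's next pose from its current pose and the robot's action. This is an illustrative toy, not the actual DPM from the publications: the network size, the synthetic "slip" dynamics (`toy_slip_dynamics`), and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for real interaction data: the pose change depends
# nonlinearly on the commanded action (slip saturates). This dynamics
# function is an assumption for illustration only.
def toy_slip_dynamics(pose, action):
    return pose + 0.5 * np.tanh(action)

# Minimal one-hidden-layer forward model f(pose, action) -> next pose,
# trained with manually derived gradients to stay self-contained.
D_POSE, D_ACT, H = 3, 3, 32
W1 = rng.normal(0, 0.1, (D_POSE + D_ACT, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, D_POSE))
b2 = np.zeros(D_POSE)

def predict(pose, action):
    x = np.concatenate([pose, action], axis=-1)
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, (x, h)

def train_step(pose, action, target, lr=0.05):
    global W1, b1, W2, b2
    pred, (x, h) = predict(pose, action)
    err = pred - target                      # (batch, D_POSE)
    batch = pose.shape[0]
    # Backpropagate the mean-squared-error loss through both layers.
    gW2 = h.T @ err / batch
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # tanh derivative
    gW1 = x.T @ dh / batch
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float((err ** 2).mean())

# "Repeated physical interactions": sample transitions and fit the model.
losses = []
for _ in range(2000):
    pose = rng.normal(0, 1, (64, D_POSE))
    action = rng.normal(0, 1, (64, D_ACT))
    losses.append(train_step(pose, action, toy_slip_dynamics(pose, action)))
```

Once trained, such a model can be queried inside a planner: candidate actions are scored by how close the predicted next pose lands to a desired pose, which is the sense in which prediction enables active slip control.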
Research Project at ASU