Intel AI Talk

On March 3rd, I will be giving a talk at Intel on “Imitation Learning for Adaptive Robot Control Policies from Language, Vision, and Motion”.

Outline: A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor-intensive and time-consuming process that requires substantial technical expertise. Imitation learning is an appealing methodology that aims to overcome this challenge: instead of complex programming, the user provides only a set of demonstrations of the intended behavior. Popular approaches largely treat motion as the sole input and output modality, i.e., joint angles, forces, or positions. However, critical semantic and visual information about the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. In this talk, I will present an imitation learning approach that combines language, vision, and motion in order to synthesize natural language-conditioned control policies that have strong generalization capabilities while also capturing the semantics of the task. Such multi-modal teaching approaches enable robots to acquire complex policies that generalize to a wide variety of environmental conditions based on descriptions of the intended task.
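
To make the idea concrete, here is a rough, illustrative sketch of what a language-conditioned imitation policy can look like: a network that fuses a sentence embedding, visual features, and the robot's joint state into a single motor command, trained by behavioral cloning on demonstration data. This is not the model presented in the talk; all module names, dimensions, and the placeholder training step below are assumptions for illustration only.

```python
# Illustrative sketch only (not the talk's actual architecture).
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, lang_dim=128, vision_dim=256, state_dim=7, action_dim=7):
        super().__init__()
        # Fuse the three modalities (language, vision, motion) into one latent vector.
        self.fusion = nn.Sequential(
            nn.Linear(lang_dim + vision_dim + state_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
        )
        # Map the fused representation to the next motor command (e.g., joint velocities).
        self.action_head = nn.Linear(256, action_dim)

    def forward(self, lang_emb, vision_feat, joint_state):
        fused = self.fusion(torch.cat([lang_emb, vision_feat, joint_state], dim=-1))
        return self.action_head(fused)

# Behavioral-cloning style update on demonstration tuples
# (language embedding, visual features, joint state, demonstrated action).
policy = LanguageConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Hypothetical batch of demonstration data (random placeholders here).
lang_emb = torch.randn(32, 128)     # e.g., embedding of "pick up the red cup"
vision_feat = torch.randn(32, 256)  # e.g., CNN features of the current camera image
joint_state = torch.randn(32, 7)    # current joint angles
expert_action = torch.randn(32, 7)  # demonstrated joint command

pred = policy(lang_emb, vision_feat, joint_state)
loss = nn.functional.mse_loss(pred, expert_action)  # imitate the demonstration
optimizer.zero_grad()
loss.backward()
optimizer.step()
```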