Improved Exploration through Latent Trajectory Optimization in Deep Deterministic Policy Gradient

Year
2019
Type(s)
Author(s)
Kevin Sebastian Luck, Mel Vecerik, Simon Stepputtis, Heni Ben Amor, Jonathan Scholz
Source
Conference on Intelligent Robots and Systems (IROS), 2019
BibTeX

Model-free reinforcement learning algorithms such as Deep Deterministic Policy Gradient (DDPG) often require additional exploration strategies, especially when the actor is deterministic. This work evaluates the use of model-based trajectory optimization methods for exploration in Deep Deterministic Policy Gradient when training in a latent image embedding. In addition, an extension of DDPG is derived that uses a value function as critic and exploits the learned deep dynamics model to compute the policy gradient. This approach leads to a symbiotic relationship between the deep reinforcement learning algorithm and the latent trajectory optimizer: the trajectory optimizer benefits from the critic learned by the RL algorithm, and the latter benefits from the enhanced exploration generated by the planner. The developed methods are evaluated on two continuous control tasks, one in simulation and one in the real world. In the latter, a Baxter robot has to insert a cylinder into a tube while receiving sparse rewards and images as observations from the environment.
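
To illustrate the idea of critic-guided trajectory optimization in a latent space, the following is a minimal sketch, not the authors' implementation: it assumes hypothetical learned components (a latent dynamics model dynamics(z, a) returning predicted next latents and rewards, and a value function value(z) taken from the DDPG critic) and uses simple random-shooting planning to pick an exploratory action whose predicted latent rollout, bootstrapped with the critic, scores highest.

    # Hedged sketch of latent-space planning for exploration (assumed
    # interfaces: dynamics(z, a) -> (z_next, r), value(z) -> V(z)).
    import numpy as np

    def plan_exploratory_action(z0, dynamics, value, horizon=5,
                                num_candidates=64, action_dim=2,
                                gamma=0.99, rng=None):
        """Random-shooting planner in the latent embedding: sample candidate
        action sequences, roll them out through the learned latent dynamics
        model, score each by predicted discounted return plus the critic's
        value of the final latent state, and return the first action of the
        best-scoring sequence."""
        rng = rng or np.random.default_rng()
        # Candidate action sequences: (num_candidates, horizon, action_dim)
        actions = rng.uniform(-1.0, 1.0,
                              size=(num_candidates, horizon, action_dim))
        returns = np.zeros(num_candidates)
        z = np.repeat(z0[None, :], num_candidates, axis=0)
        discount = 1.0
        for t in range(horizon):
            z, r = dynamics(z, actions[:, t])  # predicted next latents, rewards
            returns += discount * r
            discount *= gamma
        returns += discount * value(z)         # bootstrap with the learned critic
        best = int(np.argmax(returns))
        return actions[best, 0]

In a DDPG-style training loop, the action returned by such a planner would replace (or be mixed with) the actor's action plus noise when interacting with the environment, and the resulting transitions would be stored in the replay buffer so that the actor and critic are trained on the planner-generated experience; this is the symbiosis described in the abstract, sketched here under the stated assumptions.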