Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics

Abstract

Scaling end-to-end reinforcement learning to control real robots from vision presents a series of challenges, in particular in terms of sample efficiency. In contrast to end-to-end learning, state representation learning can produce a compact, efficient, and relevant representation of states that speeds up policy learning, reduces the number of samples needed, and is easier to interpret. We evaluate several state representation learning methods on goal-based robotics tasks and propose a new unsupervised model that stacks representations and combines the strengths of several of these approaches. This method encodes all the relevant features, performs on par with or better than end-to-end learning, and is robust to hyperparameter changes.
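The decoupling the abstract describes, learning the state representation separately and then training the policy on top of it, can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch example using a plain autoencoder as the SRL objective; the paper evaluates several SRL methods and a stacked combination of them, and this is not the authors' actual implementation.

```python
# Minimal sketch of decoupled SRL + policy learning (illustrative assumption,
# not the paper's exact setup). Stage 1 learns an encoder unsupervised on
# pre-collected observations; Stage 2 would train an RL policy on the frozen
# encoder's output instead of raw pixels.
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    def __init__(self, obs_dim: int, state_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, state_dim))
        self.decoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(obs))


def learn_state_representation(observations: torch.Tensor,
                               state_dim: int = 32,
                               epochs: int = 50) -> nn.Module:
    """Stage 1: unsupervised SRL on a batch of flattened observations."""
    model = AutoEncoder(observations.shape[1], state_dim)
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon = model(observations)
        loss = nn.functional.mse_loss(recon, observations)
        optim.zero_grad()
        loss.backward()
        optim.step()
    # Freeze the encoder so that policy learning cannot alter the
    # representation: this is the "decoupling" of the two stages.
    for p in model.encoder.parameters():
        p.requires_grad_(False)
    return model.encoder


# Stage 2 (not shown): the RL agent observes encoder(obs), a compact
# low-dimensional state, which is what speeds up policy learning.
```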

Ashley W.D. Hill