Stochastic Gradient Ascent
Recently Published Documents


TOTAL DOCUMENTS: 15 (five years: 4)

H-INDEX: 5 (five years: 0)

Author(s): Zefeng Chen, Yuren Zhou, Xiao-yu He, Siyu Jiang

Evolution strategies have been shown to be capable of training deep neural networks and accomplishing reinforcement learning tasks. However, the evolution strategies designed specifically for deep reinforcement learning so far involve only plain variants, which cannot adapt the mutation strength or exploit other advanced techniques. How to apply advanced and effective evolution strategies to reinforcement learning in an efficient way remains an open problem. To this end, this paper proposes a restart-based rank-1 evolution strategy for reinforcement learning. When training the neural network, it adapts the mutation strength and updates the principal search direction in a manner similar to the momentum method, an ameliorated version of stochastic gradient ascent. In addition, two mechanisms, namely the adaptation of the number of elitists and a restart procedure, are integrated to address the issue of local optima. Experimental results on classic control problems and Atari games show that the proposed algorithm is superior to or competitive with state-of-the-art reinforcement learning algorithms, demonstrating its effectiveness.
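The kind of update the abstract describes (a rank-1 principal search direction maintained with a momentum-style rule, elitist selection, and mutation-strength adaptation) can be pictured with a short sketch. The Python code below is only a hedged illustration under assumed settings, not the authors' algorithm: the function name rank1_es_maximize, the reward_fn argument, and every constant are placeholder assumptions, and the paper's restart mechanism and elitist-count adaptation are omitted.

```python
import numpy as np

def rank1_es_maximize(reward_fn, dim, iters=200, pop_size=20, seed=0):
    """Minimal rank-1 ES sketch with a momentum-like direction update.

    Illustrative only: reward_fn, the population size, and all constants
    below are placeholder choices, not the paper's exact settings.
    """
    rng = np.random.default_rng(seed)
    m = np.zeros(dim)              # mean of the search distribution (the parameters)
    sigma = 0.5                    # mutation strength
    p = np.zeros(dim)              # principal search direction
    c = 0.3                        # weight of the principal direction when sampling
    c_p = 0.1                      # "momentum" rate for updating the direction
    elite = max(1, pop_size // 4)  # number of elitists kept per generation
    best_so_far = -np.inf

    for _ in range(iters):
        # Sample offspring: isotropic noise plus extra noise along direction p.
        z = rng.standard_normal((pop_size, dim))
        r = rng.standard_normal((pop_size, 1))
        steps = np.sqrt(1.0 - c) * z + np.sqrt(c) * r * p
        candidates = m + sigma * steps

        # Rank candidates by reward (ascent: higher is better), keep elitists.
        rewards = np.array([reward_fn(x) for x in candidates])
        best = np.argsort(rewards)[::-1][:elite]
        mean_step = steps[best].mean(axis=0)

        # Move the mean along the elitists' average step and update the
        # principal direction like a momentum term (cf. stochastic gradient
        # ascent with momentum).
        m = m + sigma * mean_step
        p = (1.0 - c_p) * p + c_p * mean_step

        # Crude mutation-strength adaptation: expand on improvement, else shrink.
        gen_best = rewards[best[0]]
        sigma *= 1.1 if gen_best > best_so_far else 0.9
        best_so_far = max(best_so_far, gen_best)

    return m


# Example: maximize a simple concave reward whose optimum is at the origin.
theta = rank1_es_maximize(lambda x: -float(np.sum(x ** 2)), dim=10)
```

In a reinforcement-learning setting, reward_fn would presumably evaluate the episode return of a policy network whose weights are given by the candidate vector.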

