Visual Navigation with Asynchronous Proximal Policy Optimization in Artificial Agents

Bibliographic Details
Main Authors: Fanyu Zeng, Chen Wang
Format: Article
Language: English
Published: Wiley 2020-01-01
Series: Journal of Robotics
Online Access: http://dx.doi.org/10.1155/2020/8702962
Description
Summary: Vanilla policy gradient methods suffer from high variance, which leads to unstable policies during training: the policy's performance fluctuates drastically between iterations. To address this issue, we analyze the policy optimization process of a navigation method based on deep reinforcement learning (DRL) that uses asynchronous gradient descent for optimization. A navigation variant, asynchronous proximal policy optimization navigation (appoNav), is presented that guarantees monotonic policy improvement during policy optimization. Our experiments are conducted in DeepMind Lab, and the results show that artificial agents trained with appoNav perform better than those trained with the compared algorithm.
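
The summary refers to the monotonic-improvement property of proximal policy optimization (PPO), on which appoNav is built. As a rough illustration only (not the paper's code), the sketch below shows the standard PPO clipped surrogate objective; the function name, variable names, and the clip range eps=0.2 are assumptions made for this example.

```python
# Hypothetical sketch of the PPO clipped surrogate objective L^CLIP.
# Not the appoNav implementation; names and epsilon are assumptions.
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from PPO.

    logp_new   -- log pi_theta(a_t | s_t) under the current policy
    logp_old   -- log pi_theta_old(a_t | s_t) under the behavior policy
    advantages -- advantage estimates A_t (e.g., from GAE)
    eps        -- clip range limiting how far the new policy may move
    """
    ratio = np.exp(logp_new - logp_old)                    # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # The elementwise minimum gives a pessimistic lower bound on policy
    # improvement, which is what underlies PPO's approximate monotonic
    # improvement guarantee.
    return np.mean(np.minimum(unclipped, clipped))

# Example: a batch of 4 transitions with made-up probabilities and advantages.
logp_new = np.log(np.array([0.30, 0.25, 0.50, 0.10]))
logp_old = np.log(np.array([0.25, 0.25, 0.40, 0.20]))
adv = np.array([1.0, -0.5, 2.0, 0.3])
print(ppo_clip_objective(logp_new, logp_old, adv))
```

In an asynchronous setup such as the one described, multiple workers would compute gradients of this objective on their own environment copies and apply them to shared parameters, in the style of asynchronous gradient descent.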
ISSN: 1687-9600, 1687-9619