Lowering reinforcement learning barriers for quadruped locomotion in the task space

Bibliographic Details
Main Authors: Cooke, Lauren; Fisher, Callen
Format: Article
Language: English
Published: EDP Sciences, 2024-01-01
Series: MATEC Web of Conferences
Online Access: https://www.matec-conferences.org/articles/matecconf/pdf/2024/18/matecconf_rapdasa2024_04007.pdf
Description
Summary: In contrast to traditional methods like model predictive control (MPC), deep reinforcement learning (DRL) offers a simpler and less model-intensive option for developing quadruped locomotion policies. However, DRL presents a steep learning curve and a large barrier to entry for novice researchers, partly because published research often omits comprehensive implementation details. Moreover, DRL requires numerous design choices, such as selecting appropriate action and observation spaces, designing reward functions, and setting policy update frequencies, which may not be intuitive to new researchers. This paper aims to ease entry into reinforcement learning simulations by illuminating these design choices and offering comprehensive implementation details. Results demonstrate that training a quadruped robot in the task space yields natural locomotion and increased sample efficiency compared to conventional joint-space frameworks. Furthermore, the results highlight the interdependence of the action space, observation space, terrain, reward function, policy frequency, and simulation termination conditions.
ISSN: 2261-236X
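
The design choices the abstract enumerates map directly onto the interface of a standard reinforcement learning environment. The sketch below is a minimal, hypothetical Gymnasium environment skeleton showing where each choice lives; the class name, observation layout, reward shape, and termination thresholds are assumptions for illustration, not the authors' implementation.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class TaskSpaceQuadrupedEnv(gym.Env):
    """Hypothetical skeleton showing where the paper's design choices live.
    Not the authors' implementation; dynamics are stubbed out."""

    def __init__(self, policy_hz=100):
        # Action space (task space): desired (x, y, z) foot positions for
        # four feet, rather than twelve joint-angle targets.
        self.action_space = spaces.Box(-0.3, 0.3, shape=(12,), dtype=np.float32)
        # Observation space (assumed layout): base roll/pitch, six base
        # velocities, and the twelve current foot positions.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(20,), dtype=np.float32)
        # Policy frequency: the rate at which the policy issues new actions;
        # a lower-level controller would track them at a higher rate.
        self.policy_dt = 1.0 / policy_hz
        self._obs = np.zeros(20, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._obs = np.zeros(20, dtype=np.float32)
        return self._obs, {}

    def step(self, action):
        # Stub dynamics: a real environment would map the task-space foot
        # targets to joint commands (e.g. via inverse kinematics) and step
        # a physics simulator for policy_dt seconds here.
        self._obs = self.np_random.normal(size=20).astype(np.float32)
        roll, pitch, forward_vel = self._obs[0], self._obs[1], self._obs[2]
        # Reward function (assumed shape): forward progress minus a
        # stability penalty on base orientation.
        reward = float(forward_vel - 0.5 * (abs(roll) + abs(pitch)))
        # Termination condition: end the episode if the base tips too far.
        terminated = bool(abs(roll) > 0.8 or abs(pitch) > 0.8)
        return self._obs, reward, terminated, False, {}

Under this framing, switching to a conventional joint-space design would mean redefining the action space as twelve joint-angle targets passed to position controllers; the abstract reports that the task-space formulation is the more sample-efficient of the two.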