Hierarchical reinforcement learning based on macro actions

Bibliographic Details
Main Authors: Hao Jiang, Gongju Wang, Shengze Li, Jieyuan Zhang, Long Yan, Xinhai Xu
Format: Article
Language: English
Published: Springer, 2025-04-01
Series: Complex & Intelligent Systems
Subjects:
Online Access: https://doi.org/10.1007/s40747-025-01895-9
Description
Summary: The large action space is a key challenge in reinforcement learning. Although hierarchical methods have proven effective in addressing this issue, they have not been fully explored. This paper combines domain knowledge with hierarchical concepts to propose a novel Hierarchical Reinforcement Learning framework based on macro actions (HRL-MA). The framework includes a macro action mapping model that abstracts sequences of micro actions into macro actions, thereby simplifying the decision-making process. Macro actions fall into two categories: combat macro actions (CMA) and non-combat macro actions (NO-CMA). NO-CMA are driven by decision-tree-based logical rules and establish the conditions for executing CMA, while CMA form the action space of the reinforcement learning algorithm, which dynamically selects actions based on the current state. Comprehensive tests on the StarCraft II maps Simple64 and AbyssalReefLE demonstrate that HRL-MA achieves higher win rates than baseline algorithms, and in mini-game scenarios it consistently outperforms the baselines in reward scores. These findings highlight the effectiveness of integrating hierarchical structures and macro actions in reinforcement learning to manage complex decision-making tasks in environments with large action spaces.
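The two-level control scheme described in the abstract can be illustrated with a minimal sketch: rule-driven non-combat macro actions (NO-CMA) establish preconditions, and only when no rule fires does an RL policy choose among combat macro actions (CMA). All state fields, action names, and the tabular epsilon-greedy policy below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical CMA set: this is the action space seen by the RL algorithm.
CMA = ["attack_front", "flank_left", "retreat"]

# Hypothetical NO-CMA rules in decision-tree style: first matching
# (condition, action) pair wins. These create the conditions under
# which combat macro actions can later be executed.
NO_CMA_RULES = [
    (lambda s: s["minerals"] >= 100 and s["barracks"] == 0, "build_barracks"),
    (lambda s: s["barracks"] > 0 and s["army"] < 5, "train_marine"),
]


def non_combat_step(state):
    """Return the first NO-CMA whose rule matches the state, or None."""
    for cond, action in NO_CMA_RULES:
        if cond(state):
            return action
    return None


class TabularQPolicy:
    """Minimal epsilon-greedy tabular policy over combat macro actions."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.q = {}  # state_key -> {action: value}

    def select(self, state_key):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        scores = self.q.get(state_key, {})
        return max(self.actions, key=lambda a: scores.get(a, 0.0))


def hrl_ma_step(state, policy):
    """One hierarchical step: rules first (NO-CMA), then RL over CMA."""
    rule_action = non_combat_step(state)
    if rule_action is not None:
        return ("NO-CMA", rule_action)
    # Coarse state abstraction for the tabular policy (an assumption).
    return ("CMA", policy.select(("army_ready", state["army"] >= 5)))
```

For example, `hrl_ma_step({"minerals": 120, "barracks": 0, "army": 0}, TabularQPolicy(CMA))` is resolved by the first rule and returns `("NO-CMA", "build_barracks")`; once no rule applies, the decision falls through to the learned policy over `CMA`.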
ISSN: 2199-4536, 2198-6053