Hierarchical reinforcement learning based on macro actions
Abstract The large action space is a key challenge in reinforcement learning. Although hierarchical methods have proven effective in addressing this issue, they have not been fully explored. This paper combines domain knowledge with hierarchical concepts to propose a novel Hierarchical Reinforcement Learning framework based on macro actions (HRL-MA). The framework includes a macro action mapping model that abstracts sequences of micro actions into macro actions, thereby simplifying the decision-making process. Macro actions are divided into two categories: combat macro actions (CMA) and non-combat macro actions (NO-CMA). NO-CMA are driven by decision tree-based logical rules and provide the conditions for executing CMA. CMA form the action space of the reinforcement learning algorithm, which dynamically selects actions based on the current state. Comprehensive tests on the StarCraft II maps Simple64 and AbyssalReefLE demonstrate that the HRL-MA framework achieves higher win rates than baseline algorithms. Furthermore, in mini-game scenarios, HRL-MA consistently outperforms baseline algorithms in terms of reward scores. The findings highlight the effectiveness of integrating hierarchical structures and macro actions in reinforcement learning to manage complex decision-making tasks in environments with large action spaces.
| Main Authors: | Hao Jiang, Gongju Wang, Shengze Li, Jieyuan Zhang, Long Yan, Xinhai Xu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-04-01 |
| Series: | Complex & Intelligent Systems |
| Subjects: | Hierarchical reinforcement learning; Macro action mapping model; Combat and non-combat macro actions; Rule-based execution logic |
| Online Access: | https://doi.org/10.1007/s40747-025-01895-9 |
| _version_ | 1849326637739409408 |
|---|---|
| author | Hao Jiang; Gongju Wang; Shengze Li; Jieyuan Zhang; Long Yan; Xinhai Xu |
| author_facet | Hao Jiang; Gongju Wang; Shengze Li; Jieyuan Zhang; Long Yan; Xinhai Xu |
| author_sort | Hao Jiang |
| collection | DOAJ |
| description | Abstract The large action space is a key challenge in reinforcement learning. Although hierarchical methods have been proven to be effective in addressing this issue, they are not fully explored. This paper combines domain knowledge with hierarchical concepts to propose a novel Hierarchical Reinforcement Learning framework based on macro actions (HRL-MA). This framework includes a macro action mapping model that abstracts sequences of micro actions into macro actions, thereby simplifying the decision-making process. Macro actions are divided into two categories: combat macro actions (CMA) and non-combat macro actions (NO-CMA). NO-CMA are driven by decision tree-based logical rules and provide conditions for the execution of CMA. CMA form the action space of the reinforcement learning algorithm, which dynamically selects actions based on the current state. Comprehensive tests on the StarCraft II maps Simple64 and AbyssalReefLE demonstrate that the HRL-MA framework exhibits superior performance, achieving higher win rates compared to baseline algorithms. Furthermore, in mini-game scenarios, HRL-MA consistently outperforms baseline algorithms in terms of reward scores. The findings highlight the effectiveness of integrating hierarchical structures and macro actions in reinforcement learning to manage complex decision-making tasks in environments with large action spaces. |
| format | Article |
| id | doaj-art-cf6601ba17fc4a63b93486a40e461650 |
| institution | Kabale University |
| issn | 2199-4536 2198-6053 |
| language | English |
| publishDate | 2025-04-01 |
| publisher | Springer |
| record_format | Article |
| series | Complex & Intelligent Systems |
| spelling | doaj-art-cf6601ba17fc4a63b93486a40e461650; 2025-08-20T03:48:06Z; eng; Springer; Complex & Intelligent Systems; 2199-4536, 2198-6053; 2025-04-01; 10.1007/s40747-025-01895-9; Hierarchical reinforcement learning based on macro actions; Hao Jiang (Chinese Academy of Military Science); Gongju Wang (Data Intelligence Division, China Unicom Digital Technology Co); Shengze Li (Chinese Academy of Military Science); Jieyuan Zhang (Chinese Academy of Military Science); Long Yan (Data Intelligence Division, China Unicom Digital Technology Co); Xinhai Xu (Chinese Academy of Military Science); https://doi.org/10.1007/s40747-025-01895-9; Hierarchical reinforcement learning; Macro action mapping model; Combat and non-combat macro actions; Rule-based execution logic |
| spellingShingle | Hao Jiang; Gongju Wang; Shengze Li; Jieyuan Zhang; Long Yan; Xinhai Xu; Hierarchical reinforcement learning based on macro actions; Complex & Intelligent Systems; Hierarchical reinforcement learning; Macro action mapping model; Combat and non-combat macro actions; Rule-based execution logic |
| title | Hierarchical reinforcement learning based on macro actions |
| title_sort | hierarchical reinforcement learning based on macro actions |
| topic | Hierarchical reinforcement learning; Macro action mapping model; Combat and non-combat macro actions; Rule-based execution logic |
| url | https://doi.org/10.1007/s40747-025-01895-9 |
| work_keys_str_mv | AT haojiang hierarchicalreinforcementlearningbasedonmacroactions AT gongjuwang hierarchicalreinforcementlearningbasedonmacroactions AT shengzeli hierarchicalreinforcementlearningbasedonmacroactions AT jieyuanzhang hierarchicalreinforcementlearningbasedonmacroactions AT longyan hierarchicalreinforcementlearningbasedonmacroactions AT xinhaixu hierarchicalreinforcementlearningbasedonmacroactions |
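The two-level control flow described in the abstract can be sketched as follows. This is a minimal illustration only: the macro-action names, state fields, thresholds, and the epsilon-greedy selection are hypothetical stand-ins, not the paper's actual implementation.

```python
import random

# Hypothetical macro-action sets (illustrative names, not from the paper).
CMA = ["attack_main_base", "attack_expansion", "defend_ramp"]

# Macro action mapping model: each macro action abstracts a micro-action sequence.
MACRO_TO_MICRO = {
    "train_marine": ["select_barracks", "queue_marine"],
    "build_supply_depot": ["select_worker", "place_supply_depot"],
    "build_barracks": ["select_worker", "place_barracks"],
    "attack_main_base": ["select_army", "move_to_enemy_main", "attack"],
    "attack_expansion": ["select_army", "move_to_enemy_expansion", "attack"],
    "defend_ramp": ["select_army", "move_to_own_ramp", "hold_position"],
}

def no_cma_rules(state):
    """Decision-tree logic for non-combat macro actions (NO-CMA)."""
    if state["supply_left"] < 2:
        return "build_supply_depot"
    if state["barracks"] == 0:
        return "build_barracks"
    return "train_marine"

def select_cma(state, q_table, epsilon=0.1):
    """RL layer: epsilon-greedy choice over combat macro actions (CMA)."""
    if random.random() < epsilon:
        return random.choice(CMA)
    return max(CMA, key=lambda a: q_table.get((state["army_count"], a), 0.0))

def step(state, q_table):
    """One decision step: NO-CMA rules run until the conditions for combat
    are met (here, an assumed army-size threshold); then the RL policy
    picks among CMA, and the chosen macro expands to micro actions."""
    if state["army_count"] < 12:
        macro = no_cma_rules(state)
    else:
        macro = select_cma(state, q_table)
    return macro, MACRO_TO_MICRO[macro]
```

The key design point from the abstract is visible here: the learned policy's action space is only the small CMA set, while rule-driven NO-CMA handle the routine build-up, shrinking what the RL algorithm must explore.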