Leveraging Organizational Hierarchy to Simplify Reward Design in Cooperative Multi-agent Reinforcement Learning
The effectiveness of multi-agent reinforcement learning (MARL) hinges largely on the careful arrangement of objectives. Yet conventional MARL methods may not fully exploit the structure inherent in environmental states and agent relationships when organizing goals. This study is...
| Main Authors: | Lixing Liu, Volkan Ustun, Rajay Kumar |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2024-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/135588 |
Similar Items
- Training Reinforcement Learning Agents to React to an Ambush for Military Simulations
  by: Timothy Aris, et al.
  Published: (2024-05-01)
- Leveraging Graph Networks to Model Environments in Reinforcement Learning
  by: Viswanath Chadalapaka, et al.
  Published: (2023-05-01)
- Learning to Take Cover on Geo-Specific Terrains via Reinforcement Learning
  by: Timothy Aris, et al.
  Published: (2022-05-01)
- Improving Reinforcement Learning Experiments in Unity through Waypoint Utilization
  by: Caleb Koresh, et al.
  Published: (2024-05-01)
- Social hierarchy impacts response to reward downshift in sows
  by: Thomas Ede, et al.
  Published: (2025-05-01)