Enhancing BVR Air Combat Agent Development With Attention-Driven Reinforcement Learning

This study explores the use of Reinforcement Learning (RL) to develop autonomous agents for Beyond Visual Range (BVR) air combat, addressing the challenges of dynamic and uncertain adversarial scenarios. We propose a novel approach that introduces a task-based layer, leveraging domain expertise to optimize decision-making and training efficiency.

Bibliographic Details
Main Authors: Andre R. Kuroswiski, Annie S. Wu, Angelo Passaro
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10966908/
Additional Record Details
Record ID: doaj-art-b5cdfd8bbc0e4ef987643304f4d2df9b (collection: DOAJ; institution: OA Journals)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3561250
Citation: IEEE Access, vol. 13, pp. 70446-70463, 2025 (IEEE Xplore document 10966908)
Authors and Affiliations:
Andre R. Kuroswiski, Instituto Tecnológico de Aeronáutica, São José dos Campos, São Paulo, Brazil (ORCID: 0000-0003-1549-2434)
Annie S. Wu, Department of Computer Science, University of Central Florida, Orlando, FL, USA
Angelo Passaro, Instituto de Estudos Avançados, São José dos Campos, São Paulo, Brazil (ORCID: 0000-0002-2421-0657)
Subjects: Adversarial learning; artificial intelligence; autonomous agents; beyond visual range air combat; multi-head attention; reinforcement learning
Description: This study explores the use of Reinforcement Learning (RL) to develop autonomous agents for Beyond Visual Range (BVR) air combat, addressing the challenges of dynamic and uncertain adversarial scenarios. We propose a novel approach that introduces a task-based layer, leveraging domain expertise to optimize decision-making and training efficiency. By integrating multi-head attention mechanisms into the policy model and employing an improved DQN algorithm, agents dynamically select context-aware tasks, enabling the learning of efficient emergent behaviors for variable engagement conditions. Evaluations in single- and multi-agent BVR scenarios against adversaries with diverse tactical characteristics demonstrate superior training efficiency and enhanced agent capabilities compared to leading RL algorithms commonly applied in similar domains, including PPO, DDPG, and SAC. A robustness study underscores the critical role of diverse enemy selection in the RL process, showing that adversaries with variable tactical behaviors are essential for developing robust agents. This work advances RL methodologies for autonomous BVR air combat and provides insights applicable to other problems with challenging adversarial scenarios.
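The abstract describes agents that select context-aware tasks through a policy with multi-head attention, trained with a DQN-style algorithm. The sketch below is only a rough illustration of that general idea, not the paper's implementation: the entity layout (ownship plus enemy tracks), dimensions, random weights, and pooling choice are all hypothetical. Self-attention pools a variable set of entity observations into Q-values over discrete tasks, and the agent picks the task greedily.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(entities, Wq, Wk, Wv, n_heads):
    # Scaled dot-product self-attention over a set of entity feature vectors.
    n, d = entities.shape
    dh = d // n_heads
    q = (entities @ Wq).reshape(n, n_heads, dh).transpose(1, 0, 2)
    k = (entities @ Wk).reshape(n, n_heads, dh).transpose(1, 0, 2)
    v = (entities @ Wv).reshape(n, n_heads, dh).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)   # (heads, n, n)
    out = softmax(scores, axis=-1) @ v                # (heads, n, dh)
    return out.transpose(1, 0, 2).reshape(n, d)       # concatenate heads

# Hypothetical setup: 8-dim entity features, 2 heads, 4 discrete tasks.
d_model, n_heads, n_tasks = 8, 2, 4
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(3))
W_task = rng.normal(scale=0.1, size=(d_model, n_tasks))

# Hypothetical observation: ownship plus two enemy tracks as entity rows.
obs = rng.normal(size=(3, d_model))
ctx = multi_head_attention(obs, Wq, Wk, Wv, n_heads)
q_values = ctx.mean(axis=0) @ W_task   # pool entities, score each task
task = int(np.argmax(q_values))        # greedy task selection (DQN-style head)
print("task Q-values:", q_values, "selected task:", task)
```

In a full system these weights would be learned by the temporal-difference updates of a DQN variant, and the chosen task index would drive a lower-level maneuver controller; here the point is only that attention lets the task scores depend jointly on all observed entities.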