Deep reinforcement learning for active flow control in a turbulent separation bubble
Abstract: The control efficacy of deep reinforcement learning (DRL) compared with classical periodic forcing is numerically assessed for a turbulent separation bubble (TSB). We show that a control strategy learned on a coarse grid works on a fine grid as long as the coarse grid captures the main flow features. This allows a significant reduction of the computational cost of DRL training in a turbulent-flow environment. On the fine grid, periodic control reduces the TSB area by 6.8%, while the DRL-based control achieves a 9.0% reduction. Furthermore, the DRL agent provides a smoother control strategy while conserving momentum instantaneously. A physical analysis of the DRL control strategy reveals the production of large-scale counter-rotating vortices by adjacent actuator pairs, and the DRL agent is shown to act on a wide range of frequencies to sustain these vortices in time. Lastly, we introduce our open-source computational fluid dynamics and DRL framework, suited for the next generation of exascale computing machines.
Main Authors: Bernat Font, Francisco Alcántara-Ávila, Jean Rabault, Ricardo Vinuesa, Oriol Lehmkuhl
Format: Article
Language: English
Published: Nature Portfolio, 2025-02-01
Series: Nature Communications
Online Access: https://doi.org/10.1038/s41467-025-56408-6
author | Bernat Font; Francisco Alcántara-Ávila; Jean Rabault; Ricardo Vinuesa; Oriol Lehmkuhl
collection | DOAJ |
format | Article |
id | doaj-art-c3b80c44b33e458db447c43d9a4e4a39 |
institution | Kabale University |
issn | 2041-1723 |
language | English |
publishDate | 2025-02-01 |
publisher | Nature Portfolio |
series | Nature Communications |
spelling | Nature Portfolio, Nature Communications (ISSN 2041-1723), vol. 16, 2025-02-01, https://doi.org/10.1038/s41467-025-56408-6. Deep reinforcement learning for active flow control in a turbulent separation bubble. Bernat Font (Faculty of Mechanical Engineering, Delft University of Technology); Francisco Alcántara-Ávila (FLOW, Engineering Mechanics, KTH Royal Institute of Technology); Jean Rabault (Independent researcher); Ricardo Vinuesa (FLOW, Engineering Mechanics, KTH Royal Institute of Technology); Oriol Lehmkuhl (Barcelona Supercomputing Center)
title | Deep reinforcement learning for active flow control in a turbulent separation bubble |
url | https://doi.org/10.1038/s41467-025-56408-6 |