Developing an explainable deep learning module based on the LSTM framework for flood prediction
Long short-term memory (LSTM) networks have become indispensable tools in hydrological modeling due to their ability to capture long-term dependencies, handle non-linear relationships, and integrate multiple data sources, but they suffer from limited interpretability because of their black-box nature. To addr...
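The abstract's claim that LSTMs capture long-term dependencies rests on the gated cell state, which carries information across many time steps. As a minimal, hypothetical sketch (not the authors' model), a single LSTM step over toy hydrological forcings can be written in plain numpy; the variable names and toy dimensions are illustrative assumptions:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Gate order in the stacked matrices: input, forget, candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])             # candidate cell update
    o = 1 / (1 + np.exp(-z[3*H:]))      # output gate
    c = f * c_prev + i * g              # cell state: the long-term memory pathway
    h = o * np.tanh(c)                  # hidden state emitted at this step
    return h, c

# Toy run over a short forcing sequence (dimensions are illustrative)
rng = np.random.default_rng(0)
D, H, T = 3, 4, 5                       # input dim, hidden dim, sequence length
W = rng.standard_normal((4 * H, D))
U = rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    x_t = rng.standard_normal(D)        # e.g. precipitation, temperature, soil moisture
    h, c = lstm_step(x_t, h, c, W, U, b)
```

The forget gate `f` is what post-hoc interpretability methods often inspect, since it controls how much past catchment state persists into the current prediction.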
| Main Authors: | Zhi Zhang, Dagang Wang, Yiwen Mei, Jinxin Zhu, Xusha Xiao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-05-01 |
| Series: | Frontiers in Water |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frwa.2025.1562842/full |
Similar Items

- Improving the explainability of CNN-LSTM-based flood prediction with integrating SHAP technique
  by: Hao Huang, et al.
  Published: (2024-12-01)
- Explaining the Mechanism of Multiscale Groundwater Drought Events: A New Perspective From Interpretable Deep Learning Model
  by: Hejiang Cai, et al.
  Published: (2024-07-01)
- Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
  by: Daniele Pelosi, et al.
  Published: (2025-07-01)
- Growing Spatial Scales of Synchronous River Flooding in Europe
  by: Wouter R. Berghuijs, et al.
  Published: (2019-02-01)
- Evaluating Pavement Deterioration Rates Due to Flooding Events Using Explainable AI
  by: Lidan Peng, et al.
  Published: (2025-04-01)