A Deep Learning–Based Multimodal F10.7 Prediction with Mamba
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IOP Publishing, 2025-01-01 |
| Series: | The Astrophysical Journal Supplement Series |
| Subjects: | |
| Online Access: | https://doi.org/10.3847/1538-4365/adf102 |
| Summary: | F10.7, the solar radio flux at a wavelength of 10.7 cm, serves as a crucial parameter in various space weather models and plays a significant role in measuring the intensity of solar activity. The study and prediction of F10.7 are therefore of great significance for many applications. The motivation for this work stems from the close correlation between F10.7 and various types and levels of solar activity, which are well captured by solar images: by extracting relevant features from multimodal data sources and integrating them into the F10.7 prediction, we expect to improve prediction accuracy. To this end, we propose a multimodal F10.7 prediction model based on Mamba, leveraging both the F10.7 time series and several types of solar image data, including ADAPT-GONG images, Helioseismic and Magnetic Imager magnetograms, and EUV images (AIA 131, 211, and 304 Å). We construct Mamba-based modules for F10.7 index (MaFI) and sequential image (MaSI) representation learning. The temporal embeddings learned by these two modules are then fused by cross attention to capture the relationships between the F10.7 and solar image data. Extensive experiments demonstrate the superior performance of the proposed multimodal model over single-modal models in predicting F10.7 within approximately one solar activity cycle (from 2010 to 2024). We also study how to select among the different image types and find that, in general, the robustness of multimodal models increases with the number of image types used; however, with an appropriate selection of data types, excellent prediction results can still be achieved with fewer image types. |
|---|---|
| ISSN: | 0067-0049 |
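The summary describes fusing the temporal embeddings from the F10.7 (MaFI) and image (MaSI) branches via cross attention. The paper's actual architecture is not reproduced in this record; as a minimal illustrative sketch only, the NumPy code below shows single-head scaled dot-product cross attention in which the F10.7 embeddings act as queries and the image embeddings as keys and values. All function names, array shapes, and dimensions here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(f107_emb, img_emb):
    """Fuse two embedding sequences with scaled dot-product cross attention.

    f107_emb: (T_q, d) queries, e.g. from an F10.7 (MaFI-like) branch.
    img_emb:  (T_k, d) keys/values, e.g. from an image (MaSI-like) branch.
    Returns a (T_q, d) fused representation. Hypothetical shapes/names.
    """
    d = f107_emb.shape[-1]
    scores = f107_emb @ img_emb.T / np.sqrt(d)   # (T_q, T_k) similarity
    weights = softmax(scores, axis=-1)           # rows sum to 1
    return weights @ img_emb                     # attention-weighted values

# Toy example: 27-step sequences with 32-dimensional embeddings.
rng = np.random.default_rng(0)
fused = cross_attention(rng.normal(size=(27, 32)), rng.normal(size=(27, 32)))
print(fused.shape)  # (27, 32)
```

In a full model the queries, keys, and values would pass through learned linear projections before the dot product; they are omitted here to keep the sketch self-contained.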