Improved leaf area index reconstruction in heavily cloudy areas: A novel deep learning approach for SAR-Optical fusion integrating spatiotemporal features

Bibliographic Details
Main Authors: Mingqi Li, Pengxin Wang, Kevin Tansey, Fengwei Guo, Ji Zhou
Format: Article
Language: English
Published: Elsevier 2025-08-01
Series:International Journal of Applied Earth Observations and Geoinformation
Subjects: SAR data; Optical data; Spatiotemporal features; LAI; Deep learning; Spatiotemporal fusion
Online Access: http://www.sciencedirect.com/science/article/pii/S1569843225003929
author Mingqi Li
Pengxin Wang
Kevin Tansey
Fengwei Guo
Ji Zhou
author_sort Mingqi Li
collection DOAJ
description The Leaf Area Index (LAI) is an essential parameter for assessing vegetation growth. LAI derived from optical data suffers from gaps caused by cloud cover, whereas Synthetic Aperture Radar (SAR) offers all-weather observation capability. To address these issues, this study proposes a new two-step deep learning approach for reconstructing time-series LAI from SAR and optical data. First, a two-dimensional Convolutional Neural Network-Transformer (2D CNN-Transformer) is applied to bridge SAR and optical data. Second, the 2D CNN-Transformer-predicted LAI and the Sentinel-2 LAI are input into the Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion (EDCSTFN) to further improve accuracy. The novelty lies in a two-step framework that combines a 2D CNN-Transformer for spatiotemporal feature extraction with a deep learning fusion algorithm that refines the LAI reconstruction. Results showed that the 2D CNN-Transformer achieved higher accuracy (R² = 0.64, RMSE = 0.38 m²/m²) in establishing a relationship between SAR and optical data than the 1D CNN, 2D CNN-LSTM, and 1D CNN-Transformer. In the second step, the EDCSTFN-reconstructed LAI achieved the highest accuracy (R² = 0.81, RMSE = 0.22 m²/m²), with an average R² of 0.61 and RMSE of 0.37 m²/m² across croplands and forests spanning millions of pixels, further improving on the first-step results. The approach effectively fills gaps in spatial detail and yields a more continuous spatial distribution. It demonstrates good generalizability across millions of pixels under frequent cloud cover and complex surface conditions and provides a new strategy for the fusion of optical and SAR data.
format Article
id doaj-art-8828d33fae9b4083ba68ecf03e4ed61c
institution DOAJ
issn 1569-8432
language English
publishDate 2025-08-01
publisher Elsevier
record_format Article
series International Journal of Applied Earth Observations and Geoinformation
spelling doaj-art-8828d33fae9b4083ba68ecf03e4ed61c (indexed 2025-08-20T02:57:35Z)
Citation: International Journal of Applied Earth Observations and Geoinformation, ISSN 1569-8432, Elsevier, Vol. 142, Article 104745, 2025-08-01. DOI: 10.1016/j.jag.2025.104745
Author affiliations:
Mingqi Li: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, PR China; Key Laboratory of Agricultural Machinery Monitoring and Big Data Applications, Ministry of Agriculture and Rural Affairs, Beijing 100083, PR China
Pengxin Wang: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, PR China; Key Laboratory of Agricultural Machinery Monitoring and Big Data Applications, Ministry of Agriculture and Rural Affairs, Beijing 100083, PR China; Corresponding author at: P.O. Box 116, China Agricultural University, East Campus, Qinghua East Road, No. 17, Haidian, Beijing 100083, PR China
Kevin Tansey: School of Geography, Geology and the Environment, University of Leicester, Leicester LE1 7RH, UK
Fengwei Guo: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, PR China; Key Laboratory of Agricultural Machinery Monitoring and Big Data Applications, Ministry of Agriculture and Rural Affairs, Beijing 100083, PR China
Ji Zhou: School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu 611731, PR China
title Improved leaf area index reconstruction in heavily cloudy areas: A novel deep learning approach for SAR-Optical fusion integrating spatiotemporal features
topic SAR data
Optical data
Spatiotemporal features
LAI
Deep learning
Spatiotemporal fusion
url http://www.sciencedirect.com/science/article/pii/S1569843225003929