Multiclass Crop Interpretation via a Lightweight Attentive Feature Fusion Network Using Vehicle-View Images
Automatic crop interpretation can provide important reference information for national agricultural decision-making. However, due to the diverse characteristics and complex spatial relationship of crops, remote sensing images taken from a bird's eye view are insufficient in vertical features of crops, making it difficult to interpret crop types and locations accurately. The similar features and blurred edges between different crops make it difficult to extract crop boundaries accurately. Due to the high memory and computational costs, most of the deep learning-based models face efficiency limitations in real-scenario crop interpretation. To address the abovementioned issues, this article proposes a novel lightweight neural network, namely the CropNet, for crop interpretation. Aiming at the problem of feature similarity among different categories of crops, this article designs a global-local path aggregation (GLPA) mechanism, which uses shallow and deep neural networks to extract global detail information and local high-level information to enhance feature discrimination. An edge context feature enhancement module (ECFEM) is proposed to enhance edge and context feature extraction to address the problem of ambiguous crop edges. Finally, a feature fusion module based on an attention mechanism is used to automatically weigh different feature channels to enhance the crop semantics. To demonstrate the effectiveness of the CropNet proposed in this article, we constructed several sets of comparison experiments comparing it with state-of-the-art deep learning models on a manually labeled vehicle-view crop image dataset. The experimental results show that CropNet has better semantic segmentation results with fewer model parameters and lower computational costs.
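The abstract's final module, "a feature fusion module based on an attention mechanism ... to automatically weigh different feature channels," follows the general pattern of squeeze-and-excitation channel attention. The record does not specify CropNet's exact design, so the sketch below is only a minimal, generic illustration of that pattern; the function name `channel_attention_fusion`, the two-branch inputs, and the weight shapes are all assumptions, not the paper's implementation.

```python
import numpy as np

def channel_attention_fusion(global_feat, local_feat, w1, w2):
    """Fuse two (C, H, W) feature maps by concatenation followed by
    SE-style channel reweighting: global average pool each channel,
    pass through a small bottleneck (ReLU then sigmoid), and scale
    each channel of the fused map by its learned gate."""
    x = np.concatenate([global_feat, local_feat], axis=0)   # (2C, H, W)
    squeeze = x.mean(axis=(1, 2))                           # (2C,) per-channel summary
    hidden = np.maximum(w1 @ squeeze, 0.0)                  # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))             # sigmoid gate in (0, 1)
    return x * gate[:, None, None]                          # reweight channels
```

Because the sigmoid gate lies in (0, 1), each channel of the fused output is a dampened copy of the concatenated input, letting training emphasize discriminative channels and suppress redundant ones.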
| Main Authors: | Wenyue Li, Bingfang Wu, Runyu Fan, Fuyou Tian, Miao Zhang, Zhaoying Zhou, Jun Hu, Ruyi Feng, Fangming Wu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
| Subjects: | Deep learning; multifeature fusion; semantic segmentation; crop interpretation |
| Online Access: | https://ieeexplore.ieee.org/document/10731986/ |
| Field | Value |
|---|---|
| author | Wenyue Li; Bingfang Wu; Runyu Fan; Fuyou Tian; Miao Zhang; Zhaoying Zhou; Jun Hu; Ruyi Feng; Fangming Wu |
| collection | DOAJ |
| description | Automatic crop interpretation can provide important reference information for national agricultural decision-making. However, due to the diverse characteristics and complex spatial relationship of crops, remote sensing images taken from a bird's eye view are insufficient in vertical features of crops, making it difficult to interpret crop types and locations accurately. The similar features and blurred edges between different crops make it difficult to extract crop boundaries accurately. Due to the high memory and computational costs, most of the deep learning-based models face efficiency limitations in real-scenario crop interpretation. To address the abovementioned issues, this article proposes a novel lightweight neural network, namely the CropNet, for crop interpretation. Aiming at the problem of feature similarity among different categories of crops, this article designs a global-local path aggregation (GLPA) mechanism, which uses shallow and deep neural networks to extract global detail information and local high-level information to enhance feature discrimination. An edge context feature enhancement module (ECFEM) is proposed to enhance edge and context feature extraction to address the problem of ambiguous crop edges. Finally, a feature fusion module based on an attention mechanism is used to automatically weigh different feature channels to enhance the crop semantics. To demonstrate the effectiveness of the CropNet proposed in this article, we constructed several sets of comparison experiments comparing it with state-of-the-art deep learning models on a manually labeled vehicle-view crop image dataset. The experimental results show that CropNet has better semantic segmentation results with fewer model parameters and lower computational costs. |
| format | Article |
| id | doaj-art-c466cd6bb15146d49083553fdcbb9155 |
| institution | OA Journals |
| issn | 1939-1404; 2151-1535 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
| doi | 10.1109/JSTARS.2024.3481248 |
| article number | 10731986 |
| volume / pages | Vol. 18, pp. 496-509 |
| orcid | Wenyue Li: 0009-0006-3516-1885; Bingfang Wu: 0000-0001-5546-365X; Runyu Fan: 0000-0002-5259-5670; Fuyou Tian: 0000-0003-1758-8763; Miao Zhang: 0000-0002-4021-2492; Ruyi Feng: 0000-0002-5709-690X |
| affiliations | Wenyue Li, Runyu Fan, Zhaoying Zhou, Jun Hu, Ruyi Feng: School of Computer Science, China University of Geosciences, Wuhan, China; Bingfang Wu, Fuyou Tian, Miao Zhang, Fangming Wu: Key Laboratory of Remote Sensing and Digital Earth, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China |
| title | Multiclass Crop Interpretation via a Lightweight Attentive Feature Fusion Network Using Vehicle-View Images |
| topic | Deep learning; multifeature fusion; semantic segmentation; crop interpretation |
| url | https://ieeexplore.ieee.org/document/10731986/ |