An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification
In recent years, remote sensing scene classification (RSSC) has achieved notable advancements. Remote sensing scene images exhibit greater complexity in terms of land features, with large intra-class differences and high inter-class similarity, posing challenges in effectively extracting discriminat...
Saved in:
Main Authors: | Cuiping Shi, Mengxiang Ding, Liguo Wang |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
Subjects: | Remote sensing scene classification (RSSC), convolutional, transformer, knowledge distillation |
Online Access: | https://ieeexplore.ieee.org/document/10870144/ |
_version_ | 1823857156810604544 |
---|---|
author | Cuiping Shi; Mengxiang Ding; Liguo Wang |
author_facet | Cuiping Shi; Mengxiang Ding; Liguo Wang |
author_sort | Cuiping Shi |
collection | DOAJ |
description | In recent years, remote sensing scene classification (RSSC) has achieved notable advancements. Remote sensing scene images exhibit greater complexity in terms of land features, with large intra-class differences and high inter-class similarity, posing challenges in effectively extracting discriminative features. Convolutional neural networks are extensively used in RSSC tasks, where convolution focuses more on the high-frequency components of the image. Unlike convolution, transformers can model long-distance feature dependencies and mine contextual information in remote sensing scene images. Moreover, in traditional knowledge distillation methods, conflicts sometimes arise between teacher predictions and true labels, which hinder the training of the model. To enable the model to obtain sufficient supervision information while avoiding information conflicts, in this paper, an inverted residual cross head knowledge distillation network (IRCHKD) is proposed. First, an inverted residual attention module is designed to extract and leverage both local and global information effectively, enhancing the model's ability to capture complex details while retaining contextual information. Then, a multiscale spatial attention module is constructed to further extract global and local features of the image through multiple dilated convolutions, using spatial attention to weight important features in each dilated convolution branch. Finally, a cross head knowledge distillation structure is carefully designed to avoid conflicts between real labels and teacher predictions. The experimental results indicate that the proposed IRCHKD outperforms some state-of-the-art RSSC approaches by a large margin with lower computational complexity. |
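As a rough illustration of the multiscale spatial attention module described in the abstract, the PyTorch sketch below builds several parallel dilated convolution branches and weights each branch with a spatial attention map before fusing them. The branch count, the dilation rates (1, 2, 4), the average/max-pooled attention design, and the residual summation fusion are assumptions made for illustration only, not the configuration used in IRCHKD.

```python
# Minimal sketch of a multiscale spatial attention block, assuming a
# CBAM-style spatial attention and residual-sum fusion (both assumptions).
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Per-pixel weight map from channel-wise average and max pooling."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)       # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)      # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # emphasize important locations


class MultiscaleSpatialAttention(nn.Module):
    """Parallel dilated-conv branches, each re-weighted by spatial attention."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding = dilation keeps the spatial size for a 3x3 kernel
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                SpatialAttention(),
            )
            for d in dilations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual fusion of all attention-weighted branches (an assumption).
        return x + sum(branch(x) for branch in self.branches)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    out = MultiscaleSpatialAttention(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Dilated convolutions enlarge the receptive field without adding parameters, which is one plausible way to combine local detail with broader context in the manner the abstract describes.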
format | Article |
id | doaj-art-b93a5644e2aa459dab862ee2737ac197 |
institution | Kabale University |
issn | 1939-1404 2151-1535 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
spelling | doaj-art-b93a5644e2aa459dab862ee2737ac197; 2025-02-12T00:00:45Z; eng; IEEE; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; ISSN 1939-1404, 2151-1535; 2025-01-01; vol. 18, pp. 4881-4894; DOI 10.1109/JSTARS.2025.3535437; article 10870144; An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification; Cuiping Shi (https://orcid.org/0000-0001-5877-1762), College of Information Engineering, Huzhou University, Huzhou, China; Mengxiang Ding (https://orcid.org/0009-0000-5326-5182), College of Communication and Electronic Engineering, Qiqihar University, Qiqihar, China; Liguo Wang (https://orcid.org/0000-0001-9373-6233), College of Information and Communication Engineering, Dalian Nationalities University, Dalian, China; https://ieeexplore.ieee.org/document/10870144/; Remote sensing scene classification (RSSC); convolutional; transformer; knowledge distillation |
spellingShingle | Cuiping Shi; Mengxiang Ding; Liguo Wang; An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; Remote sensing scene classification (RSSC); convolutional; transformer; knowledge distillation |
title | An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification |
title_full | An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification |
title_fullStr | An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification |
title_full_unstemmed | An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification |
title_short | An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification |
title_sort | inverted residual cross head knowledge distillation network for remote sensing scene image classification |
topic | Remote sensing scene classification (RSSC); convolutional; transformer; knowledge distillation |
url | https://ieeexplore.ieee.org/document/10870144/ |
work_keys_str_mv | AT cuipingshi aninvertedresidualcrossheadknowledgedistillationnetworkforremotesensingsceneimageclassification AT mengxiangding aninvertedresidualcrossheadknowledgedistillationnetworkforremotesensingsceneimageclassification AT liguowang aninvertedresidualcrossheadknowledgedistillationnetworkforremotesensingsceneimageclassification AT cuipingshi invertedresidualcrossheadknowledgedistillationnetworkforremotesensingsceneimageclassification AT mengxiangding invertedresidualcrossheadknowledgedistillationnetworkforremotesensingsceneimageclassification AT liguowang invertedresidualcrossheadknowledgedistillationnetworkforremotesensingsceneimageclassification |