An Inverted Residual Cross Head Knowledge Distillation Network for Remote Sensing Scene Image Classification
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10870144/
Summary: In recent years, remote sensing scene classification (RSSC) has achieved notable advances. Remote sensing scene images exhibit considerable complexity in their land features, with large intra-class differences and high inter-class similarity, which makes it challenging to extract discriminative features effectively. Convolutional neural networks are widely used in RSSC tasks, where convolution focuses mainly on the high-frequency components of an image. Unlike convolution, a transformer can model long-range feature dependencies and mine contextual information in remote sensing scene images. Moreover, in traditional knowledge distillation methods, conflicts sometimes arise between the teacher's predictions and the true labels, which hinder training of the model. To provide the model with sufficient supervision while avoiding such conflicts, this paper proposes an inverted residual cross-head knowledge distillation network (IRCHKD). First, an inverted residual attention module is designed to extract and leverage both local and global information effectively, enhancing the model's ability to capture fine details while retaining contextual information. Then, a multiscale spatial attention module is constructed to further extract global and local features of the image through multiple dilated convolutions, with spatial attention weighting the important features in each dilated-convolution branch. Finally, a cross-head knowledge distillation structure is designed to avoid conflicts between the true labels and the teacher's predictions. Experimental results indicate that the proposed IRCHKD outperforms several state-of-the-art RSSC approaches by a large margin while requiring lower computational complexity.
ISSN: 1939-1404, 2151-1535
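
The abstract above describes a multiscale spatial attention module built from several dilated-convolution branches, each weighted by spatial attention, and a cross-head distillation scheme that keeps teacher supervision and ground-truth supervision from conflicting. The two PyTorch sketches below illustrate one plausible reading of those descriptions; the class names, dilation rates, attention construction, head layout, and loss weighting are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiScaleSpatialAttention(nn.Module):
    """Sketch of a multiscale spatial attention block: parallel dilated
    convolutions, each gated by a spatial-attention map built from
    channel-wise average/max pooling. Dilation rates are illustrative."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        ])
        # One small conv per branch turns the [avg; max] maps into a spatial gate.
        self.gates = nn.ModuleList([
            nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)
            for _ in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = []
        for branch, gate in zip(self.branches, self.gates):
            feat = branch(x)                                      # dilated local/contextual features
            avg = feat.mean(dim=1, keepdim=True)                  # channel-average map
            mx, _ = feat.max(dim=1, keepdim=True)                 # channel-max map
            attn = torch.sigmoid(gate(torch.cat([avg, mx], 1)))   # spatial attention in [0, 1]
            outs.append(feat * attn)                              # weight important locations per branch
        return self.fuse(torch.cat(outs, dim=1))                  # merge all dilation branches
```

The second sketch reads "cross-head" distillation as giving the student two classifier heads, so that hard labels and teacher soft targets each supervise a separate head and never collide on the same output; the temperature and loss weighting are hypothetical choices.

```python
import torch.nn as nn
import torch.nn.functional as F


class CrossHeadKD(nn.Module):
    """Sketch of a cross-head distillation loss: one head is supervised only
    by ground-truth labels, the other only by the teacher's soft predictions."""

    def __init__(self, feat_dim: int, num_classes: int,
                 tau: float = 4.0, alpha: float = 0.5):
        super().__init__()
        self.label_head = nn.Linear(feat_dim, num_classes)    # trained with hard labels
        self.distill_head = nn.Linear(feat_dim, num_classes)  # trained with teacher soft targets
        self.tau, self.alpha = tau, alpha

    def forward(self, student_feat, teacher_logits, targets):
        logits_label = self.label_head(student_feat)
        logits_kd = self.distill_head(student_feat)
        ce = F.cross_entropy(logits_label, targets)
        kd = F.kl_div(
            F.log_softmax(logits_kd / self.tau, dim=1),
            F.softmax(teacher_logits.detach() / self.tau, dim=1),
            reduction="batchmean",
        ) * self.tau ** 2
        return (1 - self.alpha) * ce + self.alpha * kd, logits_label
```

In a training loop, `student_feat` would be the pooled feature from the student backbone and `teacher_logits` the frozen teacher's output for the same batch; how the two heads are combined at inference is a design choice the abstract leaves open.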