An effective dual encoder network with a feature attention large kernel for building extraction
Transformer models boost building extraction accuracy by capturing global features from images. However, the potential of convolutional networks for local feature extraction remains underutilized in CNN + Transformer models, limiting performance. To harness convolutional networks for local feature extraction, we propose a feature attention large kernel (ALK) module and a dual encoder network for building extraction from high-resolution images...
Saved in:
| Main Authors: | Shaobo Qiu, Jingchun Zhou, Yuan Liu, Xiangrui Meng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2024-01-01 |
| Series: | Geocarto International |
| Subjects: | Buildings; image semantic segmentation; dual encoder; feature attention large kernel |
| Online Access: | https://www.tandfonline.com/doi/10.1080/10106049.2024.2375572 |
| _version_ | 1850245779211419648 |
|---|---|
| author | Shaobo Qiu; Jingchun Zhou; Yuan Liu; Xiangrui Meng |
| author_sort | Shaobo Qiu |
| collection | DOAJ |
| description | Transformer models boost building extraction accuracy by capturing global features from images. However, the potential of convolutional networks for local feature extraction remains underutilized in CNN + Transformer models, limiting performance. To harness convolutional networks for local feature extraction, we propose a feature attention large kernel (ALK) module and a dual encoder network for building extraction from high-resolution images. The model integrates an attention-based large kernel encoder, a ResNet50-Transformer encoder, a Channel Transformer (CTrans) module and a decoder. The dual encoder efficiently captures local and global building features from both convolutional and positional perspectives, enhancing performance. Moreover, replacing skip connections with the CTrans module mitigates semantic inconsistency during feature fusion, ensuring better multidimensional feature integration. Experimental results demonstrate superior extraction of local and global features compared with other models, showcasing the potential of enhancing local feature extraction in advancing CNN + Transformer models. |
| format | Article |
| id | doaj-art-efb5e07d17b5493680fbca6fd2e58bba |
| institution | OA Journals |
| issn | 1010-6049; 1752-0762 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | Taylor & Francis Group |
| record_format | Article |
| series | Geocarto International |
| spelling | doaj-art-efb5e07d17b5493680fbca6fd2e58bba; indexed 2025-08-20T01:59:21Z; eng; Taylor & Francis Group; Geocarto International; ISSN 1010-6049, 1752-0762; 2024-01-01; vol. 39, no. 1; doi:10.1080/10106049.2024.2375572; An effective dual encoder network with a feature attention large kernel for building extraction; Shaobo Qiu, Jingchun Zhou, Yuan Liu, Xiangrui Meng (Faculty of Geography, Yunnan Normal University, Kunming, Yunnan, China); https://www.tandfonline.com/doi/10.1080/10106049.2024.2375572; Buildings; image semantic segmentation; dual encoder; feature attention large kernel |
| title | An effective dual encoder network with a feature attention large kernel for building extraction |
| title_sort | effective dual encoder network with a feature attention large kernel for building extraction |
| topic | Buildings; image semantic segmentation; dual encoder; feature attention large kernel |
| url | https://www.tandfonline.com/doi/10.1080/10106049.2024.2375572 |
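The abstract describes an attention large kernel (ALK) module that strengthens local feature extraction through large-kernel convolution combined with attention. The record does not give the paper's exact formulation, so the following is only a minimal NumPy sketch of the general large-kernel-attention idea (a large depthwise convolution produces a spatial gate that reweights the input features); the function names, the sigmoid gate, and the kernel size are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Per-channel ("depthwise") 2D convolution with same padding.

    x:       feature map of shape (C, H, W)
    kernels: one (k, k) filter per channel, shape (C, k, k), k odd
    """
    k = kernels.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    # All (k, k) windows per channel: shape (C, H, W, k, k)
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(1, 2))
    return np.einsum("chwij,cij->chw", win, kernels)

def large_kernel_attention(x, kernels):
    """Gate the input with a sigmoid of a large-kernel depthwise conv.

    This mirrors the broad idea of attention built on large-kernel
    convolutions; it is NOT the paper's ALK module.
    """
    gate = 1.0 / (1.0 + np.exp(-depthwise_conv2d(x, kernels)))
    return x * gate

# Tiny demo: 2 channels, a 5x5 map, and 7x7 "large" kernels
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5, 5))
kernels = rng.standard_normal((2, 7, 7)) * 0.1
out = large_kernel_attention(x, kernels)
print(out.shape)  # (2, 5, 5)
```

The large receptive field (7x7 here, larger in practice) is what lets a purely convolutional branch capture context beyond its immediate neighborhood, which is the role the abstract assigns to the ALK encoder alongside the ResNet50-Transformer encoder.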