A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator
Computed tomography (CT) is essential for diagnosing and managing various diseases, with contrast-enhanced CT (CECT) offering higher-contrast images following contrast agent injection. Nevertheless, the use of contrast agents may cause side effects. Therefore, achieving high-contrast CT images without the need for contrast agent injection is highly desirable.
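The title's "CNN-Transformer aggregated generator" refers to fusing local (convolutional) and global (Transformer) feature maps inside one generator. A minimal NumPy sketch of such a gated selective fusion is shown below; the per-channel sigmoid gate and the tensor shapes are illustrative assumptions, not the article's actual CTSFM design.

```python
import numpy as np

def selective_fusion(local_feat, global_feat, gate_weights):
    """Fuse CNN (local) and Transformer (global) feature maps with a
    per-channel sigmoid gate: out = g * local + (1 - g) * global."""
    gate = 1.0 / (1.0 + np.exp(-gate_weights))  # sigmoid -> values in (0, 1)
    gate = gate.reshape(1, -1, 1, 1)            # broadcast over batch, H, W
    return gate * local_feat + (1.0 - gate) * global_feat

# Toy example: batch of 1, 4 channels, 8x8 spatial grid.
local = np.ones((1, 4, 8, 8))     # stand-in for a CNN branch output
global_ = np.zeros((1, 4, 8, 8))  # stand-in for a Transformer branch output
fused = selective_fusion(local, global_, gate_weights=np.zeros(4))  # gate = 0.5
```

With zero gate weights the sigmoid yields 0.5, so the fused map is the even blend of both branches; in a trained network the gate would instead be learned per channel.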
| Main Authors: | Haozhe Wang, Dawei Gong, Rongzhen Zhou, Junbo Liang, Ruili Zhang, Wenbin Ji, Sailing He |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10973126/ |
| _version_ | 1849324461890732032 |
|---|---|
| author | Haozhe Wang Dawei Gong Rongzhen Zhou Junbo Liang Ruili Zhang Wenbin Ji Sailing He |
| author_facet | Haozhe Wang Dawei Gong Rongzhen Zhou Junbo Liang Ruili Zhang Wenbin Ji Sailing He |
| author_sort | Haozhe Wang |
| collection | DOAJ |
| description | Computed tomography (CT) is essential for diagnosing and managing various diseases, with contrast-enhanced CT (CECT) offering higher-contrast images following contrast agent injection. Nevertheless, the use of contrast agents may cause side effects. Therefore, achieving high-contrast CT images without the need for contrast agent injection is highly desirable. The main contributions of this paper are as follows: 1) We designed a GAN-guided CNN-Transformer aggregation network called GCTANet for the CECT image synthesis task. We propose a CNN-Transformer Selective Fusion Module (CTSFM) to fully exploit the interaction between local and global information for CECT image synthesis. 2) We propose a two-stage training strategy: we first train a non-contrast CT (NCCT) image synthesis model to deal with the misalignment between NCCT and CECT images, and then train GCTANet to predict real CECT images using synthetic NCCT images. 3) We propose a multi-scale patch hybrid attention block (MSPHAB) to obtain enhanced feature representations. MSPHAB consists of spatial self-attention and channel self-attention in parallel. We also propose a spatial-channel information interaction module (SCIM) to fully fuse the two kinds of self-attention information and obtain a strong representation ability. We evaluated GCTANet on two private datasets and one public dataset. On the neck dataset, the PSNR and SSIM achieved were 35.46 ± 2.783 dB and 0.970 ± 0.020, respectively; on the abdominal dataset, 25.75 ± 5.153 dB and 0.827 ± 0.073; and on the MRI-CT dataset, 29.61 ± 1.789 dB and 0.917 ± 0.032. In particular, in the area around the heart, where obvious movement and disturbance are unavoidable due to the heartbeat and breathing, GCTANet still successfully synthesized high-contrast coronary arteries, demonstrating its potential for assisting coronary artery disease diagnosis. The results demonstrate that GCTANet outperforms existing methods. |
| format | Article |
| id | doaj-art-3f7a8dc0c9884446a31482e3cdc82ee4 |
| institution | Kabale University |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-3f7a8dc0c9884446a31482e3cdc82ee42025-08-20T03:48:42ZengIEEEIEEE Access2169-35362025-01-0113722027222010.1109/ACCESS.2025.356337510973126A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated GeneratorHaozhe Wang0Dawei Gong1Rongzhen Zhou2Junbo Liang3Ruili Zhang4Wenbin Ji5Sailing He6https://orcid.org/0000-0002-3401-1125National Engineering Research Center for Optical Instruments, Zhejiang University, Hangzhou, ChinaNational Engineering Research Center for Optical Instruments, Zhejiang University, Hangzhou, ChinaTaizhou Hospital, Zhejiang University, Linhai, ChinaTaizhou Hospital, Zhejiang University, Linhai, ChinaTaizhou Hospital, Zhejiang University, Linhai, ChinaTaizhou Hospital, Zhejiang University, Linhai, ChinaNational Engineering Research Center for Optical Instruments, Zhejiang University, Hangzhou, ChinaComputed tomography (CT) is essential for diagnosing and managing various diseases, with contrast-enhanced CT (CECT) offering higher contrast images following contrast agent injection. Nevertheless, the usage of contrast agents may cause side effects. Therefore, achieving high-contrast CT images without the need for contrast agent injection is highly desirable. The main contributions of this paper are as follows: 1) We designed a GAN-guided CNN-Transformer aggregation network called GCTANet for the CECT image synthesis task. We propose a CNN-Transformer Selective Fusion Module (CTSFM) to fully exploit the interaction between local and global information for CECT image synthesis. 2) We propose a two-stage training strategy. We first train a non-contrast CT (NCCT) image synthesis model to deal with the misalignment between NCCT and CECT images. Then we trained GCTANet to predict real CECT images using synthetic NCCT images. 3) A multi-scale Patch hybrid attention block (MSPHAB) was proposed to obtain enhanced feature representations. MSPHAB consists of spatial self-attention and channel self-attention in parallel. 
We also propose a spatial-channel information interaction module (SCIM) to fully fuse the two kinds of self-attention information and obtain a strong representation ability. We evaluated GCTANet on two private datasets and one public dataset. On the neck dataset, the PSNR and SSIM achieved were 35.46 ± 2.783 dB and 0.970 ± 0.020, respectively; on the abdominal dataset, 25.75 ± 5.153 dB and 0.827 ± 0.073; and on the MRI-CT dataset, 29.61 ± 1.789 dB and 0.917 ± 0.032. In particular, in the area around the heart, where obvious movement and disturbance are unavoidable due to the heartbeat and breathing, GCTANet still successfully synthesized high-contrast coronary arteries, demonstrating its potential for assisting coronary artery disease diagnosis. The results demonstrate that GCTANet outperforms existing methods.https://ieeexplore.ieee.org/document/10973126/Medical image synthesistransformerCNNgenerative adversarial network |
| spellingShingle | Haozhe Wang Dawei Gong Rongzhen Zhou Junbo Liang Ruili Zhang Wenbin Ji Sailing He A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator IEEE Access Medical image synthesis transformer CNN generative adversarial network |
| title | A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator |
| title_full | A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator |
| title_fullStr | A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator |
| title_full_unstemmed | A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator |
| title_short | A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator |
| title_sort | gan guided ncct to cect synthesis with an advanced cnn transformer aggregated generator |
| topic | Medical image synthesis transformer CNN generative adversarial network |
| url | https://ieeexplore.ieee.org/document/10973126/ |
| work_keys_str_mv | AT haozhewang aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT daweigong aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT rongzhenzhou aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT junboliang aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT ruilizhang aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT wenbinji aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT sailinghe aganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT haozhewang ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT daweigong ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT rongzhenzhou ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT junboliang ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT ruilizhang ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT wenbinji ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator AT sailinghe ganguidednccttocectsynthesiswithanadvancedcnntransformeraggregatedgenerator |
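The PSNR figures reported in the description (e.g. 35.46 dB on the neck dataset) follow the standard peak signal-to-noise ratio definition, which can be sketched in a few lines of NumPy. The `data_range` of 1.0 (images normalized to [0, 1]) is an assumption for illustration, not a detail taken from the article.

```python
import numpy as np

def psnr(reference, synthetic, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    synthesized image: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((reference - synthetic) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy check: a uniform error of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB.
ref = np.zeros((4, 4))
syn = np.full((4, 4), 0.1)
print(psnr(ref, syn))  # -> 20.0
```

Published implementations such as `skimage.metrics.peak_signal_noise_ratio` compute the same quantity and would be the practical choice over a hand-rolled version.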