A Lightweight Residual Network for Unsupervised Deformable Image Registration

Bibliographic Details
Main Authors: Ahsan Raza Siyal, Astrid Ellen Grams, Markus Haltmeier
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects: Deformable image registration; unsupervised learning; residual blocks; dilated convolution; limited data; parameter reduction
Online Access: https://ieeexplore.ieee.org/document/10786016/
_version_ 1850064904020557824
author Ahsan Raza Siyal
Astrid Ellen Grams
Markus Haltmeier
author_facet Ahsan Raza Siyal
Astrid Ellen Grams
Markus Haltmeier
author_sort Ahsan Raza Siyal
collection DOAJ
description Unsupervised deformable volumetric image registration is crucial for various applications, such as medical imaging and diagnosis. Recently, learning-based methods have achieved remarkable success in this domain. Due to their strong global modeling capabilities, transformers outperform convolutional neural networks (CNNs) in registration tasks. However, transformers rely on large models with vast parameter sets, require significant computational resources, and demand extensive amounts of training data to achieve meaningful results. While existing CNN-based image registration methods provide rich local information, their limited global modeling capabilities hinder their ability to capture long-distance interactions, which restricts their overall performance. In this work, we propose a novel CNN-based registration method that improves the receptive field, maintains a low parameter count, and delivers strong results even on limited training datasets. Specifically, we use a residual U-Net architecture, enhanced with embedded parallel dilated-convolutional blocks, to expand the receptive field effectively. The proposed method is evaluated on inter-patient and atlas-to-patient datasets. We show that the performance of the proposed method is comparable to, and slightly better than, transformer-based methods while using only 1.5% of their number of parameters.
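The abstract above describes the core architectural idea: a residual U-Net whose blocks contain parallel dilated convolutions, enlarging the receptive field while keeping the parameter count low. As a rough illustration only, here is a minimal PyTorch sketch of what such a parallel dilated-convolution residual block might look like; the class name, dilation rates (1, 2, 4), instance normalization, and LeakyReLU activation are assumptions made for this sketch and are not taken from the paper or its released code.

```python
# Hypothetical sketch (not the authors' implementation) of a residual block
# with parallel dilated 3D convolutions, as described in the abstract.
import torch
import torch.nn as nn


class ParallelDilatedResBlock(nn.Module):
    """Residual block whose parallel branches use different dilation rates,
    so the effective receptive field grows without adding many parameters."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3x3 convolution per dilation rate; padding equals the dilation
        # so every branch preserves the spatial size of the input volume.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.InstanceNorm3d(channels),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in dilations
        ])
        # 1x1x1 convolution fuses the concatenated branch outputs back to the
        # original channel count before the residual addition.
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(features)  # residual (skip) connection


if __name__ == "__main__":
    # Toy 3D feature map at a coarse resolution, batch size 1, 16 channels.
    block = ParallelDilatedResBlock(channels=16)
    volume = torch.randn(1, 16, 32, 32, 32)
    print(block(volume).shape)  # torch.Size([1, 16, 32, 32, 32])
```

In a sketch like this, the largest dilation rate sets the block's effective receptive field, while each branch adds only one 3x3x3 convolution per channel group, which is consistent with the paper's stated goal of a large receptive field at a small parameter cost.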
format Article
id doaj-art-858ab7936c9a4bd6a12c696e37c8fdf3
institution DOAJ
issn 2169-3536
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling Record ID: doaj-art-858ab7936c9a4bd6a12c696e37c8fdf3
 Last indexed: 2025-08-20T02:49:09Z
 Language: eng
 Publisher: IEEE
 Series: IEEE Access (ISSN 2169-3536)
 Published: 2024-01-01, vol. 12, pp. 186872-186882
 DOI: 10.1109/ACCESS.2024.3513441 (IEEE document 10786016)
 Title: A Lightweight Residual Network for Unsupervised Deformable Image Registration
 Authors: Ahsan Raza Siyal (https://orcid.org/0000-0002-2708-8001), Department of Mathematics, Universität Innsbruck, Innsbruck, Austria; Astrid Ellen Grams, Department of Radiology, Medical University of Innsbruck, Innsbruck, Austria; Markus Haltmeier (https://orcid.org/0000-0001-5715-0331), Department of Mathematics, Universität Innsbruck, Innsbruck, Austria
 Abstract: Unsupervised deformable volumetric image registration is crucial for various applications, such as medical imaging and diagnosis. Recently, learning-based methods have achieved remarkable success in this domain. Due to their strong global modeling capabilities, transformers outperform convolutional neural networks (CNNs) in registration tasks. However, transformers rely on large models with vast parameter sets, require significant computational resources, and demand extensive amounts of training data to achieve meaningful results. While existing CNN-based image registration methods provide rich local information, their limited global modeling capabilities hinder their ability to capture long-distance interactions, which restricts their overall performance. In this work, we propose a novel CNN-based registration method that improves the receptive field, maintains a low parameter count, and delivers strong results even on limited training datasets. Specifically, we use a residual U-Net architecture, enhanced with embedded parallel dilated-convolutional blocks, to expand the receptive field effectively. The proposed method is evaluated on inter-patient and atlas-to-patient datasets. We show that the performance of the proposed method is comparable to, and slightly better than, transformer-based methods while using only 1.5% of their number of parameters.
 Online Access: https://ieeexplore.ieee.org/document/10786016/
 Keywords: Deformable image registration; unsupervised learning; residual blocks; dilated convolution; limited data; parameter reduction
spellingShingle Ahsan Raza Siyal
Astrid Ellen Grams
Markus Haltmeier
A Lightweight Residual Network for Unsupervised Deformable Image Registration
IEEE Access
Deformable image registration
unsupervised learning
residual blocks
dilated convolution
limited data
parameter reduction
title A Lightweight Residual Network for Unsupervised Deformable Image Registration
title_full A Lightweight Residual Network for Unsupervised Deformable Image Registration
title_fullStr A Lightweight Residual Network for Unsupervised Deformable Image Registration
title_full_unstemmed A Lightweight Residual Network for Unsupervised Deformable Image Registration
title_short A Lightweight Residual Network for Unsupervised Deformable Image Registration
title_sort lightweight residual network for unsupervised deformable image registration
topic Deformable image registration
unsupervised learning
residual blocks
dilated convolution
limited data
parameter reduction
url https://ieeexplore.ieee.org/document/10786016/
work_keys_str_mv AT ahsanrazasiyal alightweightresidualnetworkforunsuperviseddeformableimageregistration
AT astridellengrams alightweightresidualnetworkforunsuperviseddeformableimageregistration
AT markushaltmeier alightweightresidualnetworkforunsuperviseddeformableimageregistration
AT ahsanrazasiyal lightweightresidualnetworkforunsuperviseddeformableimageregistration
AT astridellengrams lightweightresidualnetworkforunsuperviseddeformableimageregistration
AT markushaltmeier lightweightresidualnetworkforunsuperviseddeformableimageregistration