Exploiting a Spatial Attention Mechanism for Improved Depth Completion and Feature Fusion in Novel View Synthesis

Many image-based rendering (IBR) methods rely on depth estimates obtained from structured-light or time-of-flight depth sensors to synthesize novel views from sparse camera networks. However, these estimates often contain missing or noisy regions, resulting in an incorrect mapping between source and target views and making the fusion process more challenging, as the visual information is misaligned, inconsistent, or missing. In this work, we first implement a lightweight transformer-based network, exploiting the transformer's well-known ability to model long-range relationships within the input, to extract spatial features from color images. These features are then used to improve the quality of the completed depth maps. Furthermore, we combine a sequential deep neural network with a spatial attention mechanism to effectively fuse the projected features from multiple source viewpoints. This approach lets us integrate information from an arbitrary number of source viewpoints and improves the accuracy of the synthesized views. Experimental results on challenging datasets demonstrate that our method achieves superior synthesized image quality compared to state-of-the-art (SOTA) methods.
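
The fusion step described in the abstract lends itself to a short illustration. The PyTorch sketch below shows per-pixel (spatial) attention fusion over a variable number of projected source-view feature maps: a small convolutional head scores each view's features at every pixel, the scores are soft-maxed across views, and the features are blended with the resulting weights. This is a minimal reconstruction of the general technique named in the abstract, not the authors' published architecture; the class name SpatialAttentionFusion, the layer sizes, and the (V, B, C, H, W) tensor layout are all assumptions.

# Minimal sketch of spatial attention fusion across a variable number of
# projected source-view feature maps. Illustrative only: class name, layer
# sizes, and tensor layout are assumptions, not the authors' architecture.
import torch
import torch.nn as nn


class SpatialAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Small convolutional head that scores each view's features per pixel.
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (V, B, C, H, W), features warped from V source viewpoints.
        v, b, c, h, w = feats.shape
        logits = self.score(feats.reshape(v * b, c, h, w)).reshape(v, b, 1, h, w)
        weights = torch.softmax(logits, dim=0)  # per-pixel weights over views
        return (weights * feats).sum(dim=0)     # fused (B, C, H, W) feature map


# Example: fuse 64-channel features projected from 4 source views.
fusion = SpatialAttentionFusion(channels=64)
projected = torch.randn(4, 2, 64, 32, 32)  # 4 views, batch of 2
fused = fusion(projected)                  # -> torch.Size([2, 64, 32, 32])

The softmax over the view axis is what allows the module to accept an arbitrary number of source viewpoints, matching the claim in the abstract.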

Bibliographic Details
Main Authors: Anh Minh Truong (https://orcid.org/0000-0003-2376-927X), Wilfried Philips (https://orcid.org/0000-0003-4456-4353), Peter Veelaert (https://orcid.org/0000-0003-4746-9087)
Affiliation: TELIN-IPI, Ghent University—imec, Gent, Belgium
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Open Journal of Signal Processing, vol. 5, pp. 204-212
ISSN: 2644-1322
DOI: 10.1109/OJSP.2023.3340064
Subjects: Depth completion; novel view synthesis; spatial attention
Online Access: https://ieeexplore.ieee.org/document/10345792/