Highlight Removal From Wireless Capsule Endoscopy Images

Bibliographic Details
Main Authors: Shaojie Zhang, Yinghui Wang, Peixuan Liu, Wei Li, Jinlong Yang, Tao Yan, Liangyi Huang, Yukai Wang, Ibragim R. Atadjanov
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Translational Engineering in Health and Medicine
Online Access: https://ieeexplore.ieee.org/document/11087584/
Description
Summary: Mirror-like reflections and specular highlights are common in images captured by wireless capsule endoscopy (WCE); they blur or distort the image and make it harder for doctors to observe and analyze the gastrointestinal tract. Existing methods fail to satisfactorily remove specular highlights from gastrointestinal images, often losing image detail, blurring texture, or introducing texture-continuity errors. We therefore propose a highlight removal method for capsule endoscopy images. The approach improves the confidence term used for pixel prioritization by exploiting the characteristic ratio between the R and B channels of the RGB color space in WCE images, allowing the repairability of each pixel to be evaluated. In addition, the size of the sample-block window is adjusted dynamically according to variance, and candidate patches within the window are selected using both RGB color-channel distance and pixel (spatial) distance, which improves the accuracy of the best-matching-patch search. Extensive experiments show that, compared with the Criminisi method, the best-performing traditional approach, our method reduces the standard deviation and coefficient of variation by 3.10% and 2.75%, respectively; compared with the deep-learning-based DeepGin method, the reductions are 4.22% and 10.50%. After highlight removal, the repaired regions and the surrounding tissue near the highlights show more similar colors and more continuous textures.
ISSN: 2168-2372
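The abstract names three ingredients: an R/B-ratio confidence term, a variance-adaptive sample-block window, and a patch-matching cost combining color and spatial distance. A minimal sketch of these ideas is shown below. This is an illustrative reconstruction, not the authors' implementation: the function names, the ratio-to-confidence mapping, and the constants `var_ref` and `alpha` are all assumptions made for demonstration.

```python
import numpy as np

def rb_confidence(img_rgb, eps=1e-6):
    """Per-pixel confidence from the R/B channel ratio (illustrative).

    WCE mucosa is strongly red-dominant, while specular highlights are
    near-white (R ~ B), so a low R/B ratio flags pixels that likely need
    repair. The mapping of ratio to [0, 1] here is an assumption, not
    taken from the paper.
    """
    r = img_rgb[..., 0].astype(np.float64)
    b = img_rgb[..., 2].astype(np.float64)
    ratio = r / (b + eps)
    # ratio <= 1 (highlight-like) -> confidence 0;
    # ratio >= 2 (tissue-like)    -> confidence 1.
    return np.clip(ratio - 1.0, 0.0, 1.0)

def adaptive_window(local_var, lo=5, hi=17, var_ref=400.0):
    """Choose an odd patch size from local variance (illustrative).

    Higher variance means more texture, so a smaller window is used to
    avoid smearing texture; var_ref is an assumed normalization constant.
    """
    frac = min(local_var / var_ref, 1.0)
    size = int(round(hi - frac * (hi - lo)))
    return size | 1  # force an odd window size

def patch_score(cand, target, known_mask, cand_xy, target_xy, alpha=0.9):
    """Matching cost combining RGB distance and spatial distance.

    Sum-of-squared RGB differences over known (non-highlight) pixels,
    plus a penalty on the spatial distance between the candidate and
    target patch centers; alpha is an assumed weighting.
    """
    diff = (cand.astype(np.float64) - target.astype(np.float64)) ** 2
    color = (diff * known_mask[..., None]).sum()
    spatial = float(np.hypot(cand_xy[0] - target_xy[0],
                             cand_xy[1] - target_xy[1]))
    return alpha * color + (1.0 - alpha) * spatial
```

In an exemplar-based pipeline, `rb_confidence` would feed the priority term that orders fill-front pixels, `adaptive_window` would size the sample block around each fill-front pixel, and `patch_score` would rank candidate source patches, with the lowest-cost patch copied into the highlight region.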