FISNET: A Learnable Fusion-Based Iris Segmentation Network Improving Robustness Across NIR and VIS Modalities

Bibliographic Details
Main Authors: Geetanjali Sharma, Gaurav Jaswal, Aditya Nigam, Raghavendra Ramachandra
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11029274/
Description
Summary: The iris is one of the most distinctive biometric traits used for reliable identity verification in applications such as border control, secure access systems, and national ID programs. The primary challenge in iris recognition is the reliable segmentation of the iris region from the eye image. Segmenting irises captured under different imaging conditions is difficult for a single model because of variations in spectral features, texture, lighting conditions, and noise patterns between NIR and VIS images. To tackle this problem, we present the Fused Iris Segmentation Network (FISNET), which combines segmentation maps from two models to achieve enhanced precision and accuracy. FISNET generalizes robustly across varying lighting, resolution, and sensor types, consistently outperforming individual models on all NIR and VIS datasets, including smartphone-captured images. FISNET was evaluated on the CASIA-V4 subsets, the Dark and Blue iris datasets, and the AFHIRIS dataset, achieving superior segmentation accuracy and recognition performance. The results show significant improvements over the IrisParseNet, PixISegNet, and SAM models, with mIoU scores of 0.955, 0.930, 0.945, 0.955, 0.907, 0.815, 0.924, 0.913, 0.842, 0.852, 0.829, and 0.839 on the Lamp-V4, Interval-V4, Thousand-V4, Syn-V4, Twins-V4, UBIRIS-V2, BI-P1, BI-P2, DI-P1, DI-P2, DI-P3, and AFHIRIS-V1 datasets, respectively. The Type-I error rate (E1) was likewise low, with values of 0.0014, 0.0067, 0.0016, 0.0012, 0.0027, 0.0023, 0.0015, 0.0019, 0.0014, 0.0015, 0.0020, and 0.0069 across these datasets, further underscoring the effectiveness of the proposed approach. Code is available at https://github.com/GeetanjaliGTZ/FIS-Net-NIR-and-VIS-Iris-Segmentation
ISSN: 2169-3536
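
The abstract describes FISNET as fusing the segmentation maps of two base models and reports mIoU and Type-I error (E1) figures, but it does not spell out the fusion mechanism or the metric definitions. The sketch below is illustrative only, not the paper's method: it assumes a simple learnable 1x1-convolution fusion over the two probability maps, together with the standard binary-mask definitions of mIoU and E1 (pixel disagreement rate). The names FusionHead, miou, and e1 are hypothetical and do not come from the paper.

import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse two single-channel iris probability maps with learnable weights.

    Hypothetical stand-in for FISNET's fusion step, which the abstract does
    not specify; a 1x1 convolution learns a per-pixel mixing of the two maps.
    """
    def __init__(self):
        super().__init__()
        self.mix = nn.Conv2d(2, 1, kernel_size=1)

    def forward(self, map_a, map_b):
        # map_a, map_b: (N, 1, H, W) probabilities from the two base models.
        return torch.sigmoid(self.mix(torch.cat([map_a, map_b], dim=1)))

def miou(pred, gt, thr=0.5):
    """Mean IoU over the iris and background classes of a binary mask."""
    p, g = pred > thr, gt > 0.5
    ious = []
    for cp, cg in ((p, g), (~p, ~g)):  # iris class, then background class
        union = (cp | cg).sum().item()
        ious.append((cp & cg).sum().item() / union if union else 1.0)
    return sum(ious) / len(ious)

def e1(pred, gt, thr=0.5):
    """Type-I error (E1): fraction of pixels misclassified w.r.t. ground truth."""
    return ((pred > thr) != (gt > 0.5)).float().mean().item()

For a single image, fused = FusionHead()(map_a, map_b) followed by miou(fused, gt) and e1(fused, gt) would produce the two kinds of metrics the abstract reports, under the assumptions stated above.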