FISNET: A Learnable Fusion-Based Iris Segmentation Network Improving Robustness Across NIR and VIS Modalities

The iris is one of the most distinctive biometric traits used for reliable identity verification in applications such as border control, secure access systems, and national ID programs. The primary challenge in iris recognition is reliable segmentation of the iris region from the eye image. Segmenting irises captured under different imaging conditions is difficult for a single model because spectral features, texture, lighting conditions, and noise patterns vary between NIR and VIS images. To tackle this problem, we present the Fused Iris Segmentation Network (FISNET), which combines the segmentation maps of two models to achieve enhanced precision and accuracy. FISNET generalizes robustly across varying lighting, resolution, and sensor types, consistently outperforming the individual models on all NIR and VIS datasets, including smartphone-captured images. FISNET was evaluated on the CASIA-V4 subsets, the Dark and Blue iris datasets, and the AFHIRIS dataset, achieving superior segmentation accuracy and recognition performance. The results show significant improvements over IrisParseNet, PixlSegNet, and SAM, with mIoU scores of 0.955, 0.930, 0.945, 0.955, 0.907, 0.815, 0.924, 0.913, 0.842, 0.852, 0.829, and 0.839 on the Lamp-V4, Interval-V4, Thousand-V4, Syn-V4, Twins-V4, UBIRIS-V2, BI-P1, BI-P2, DI-P1, DI-P2, DI-P3, and AFHIRIS-V1 datasets, respectively. The Type-I error rate (E1) was likewise low, at 0.0014, 0.0067, 0.0016, 0.0012, 0.0027, 0.0023, 0.0015, 0.0019, 0.0014, 0.0015, 0.0020, and 0.0069 across these datasets, further underscoring the advantage of the proposed approach. Code is available at https://github.com/GeetanjaliGTZ/FIS-Net-NIR-and-VIS-Iris-Segmentation
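The abstract describes fusing the segmentation maps of two models through a learnable combination. The paper's actual fusion architecture is not detailed in this record, so the following is only a minimal toy sketch of the idea: a single learnable blending weight, fitted by gradient descent, that mixes two per-pixel probability maps. All names (`fuse`, `train_alpha`) and the scalar-weight formulation are illustrative assumptions, not FISNET's design.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(p1, p2, alpha):
    """Blend two probability maps pixel-wise: w*p1 + (1-w)*p2, with w = sigmoid(alpha)."""
    w = sigmoid(alpha)
    return [w * a + (1.0 - w) * b for a, b in zip(p1, p2)]

def train_alpha(p1, p2, target, lr=0.5, steps=200):
    """Fit the blending weight by gradient descent on mean squared error."""
    alpha = 0.0
    n = len(target)
    for _ in range(steps):
        w = sigmoid(alpha)
        fused = [w * a + (1.0 - w) * b for a, b in zip(p1, p2)]
        # Chain rule: dL/dalpha = (2/n) * sum((fused - t) * (p1 - p2)) * w * (1 - w)
        grad = sum(2.0 * (f - t) * (a - b)
                   for f, t, a, b in zip(fused, target, p1, p2)) / n
        alpha -= lr * grad * w * (1.0 - w)
    return alpha

# Toy flattened maps: model 1 is accurate here, model 2 is noisy.
p1 = [0.9, 0.8, 0.1, 0.05]
p2 = [0.5, 0.4, 0.6, 0.55]
target = [1.0, 1.0, 0.0, 0.0]

alpha = train_alpha(p1, p2, target)
print(sigmoid(alpha))  # the learned weight shifts toward the more accurate model
```

In a real network the scalar weight would typically be replaced by a small convolutional head over the concatenated maps, but the training loop is the same in spirit: the fusion parameters are optimized against ground-truth masks rather than fixed by hand.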

Bibliographic Details
Main Authors: Geetanjali Sharma, Gaurav Jaswal, Aditya Nigam, Raghavendra Ramachandra
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Biometrics; iris; segmentation; foundational models; SAM; fusion
Online Access:https://ieeexplore.ieee.org/document/11029274/
author Geetanjali Sharma
Gaurav Jaswal
Aditya Nigam
Raghavendra Ramachandra
collection DOAJ
description The iris is one of the most distinctive biometric traits used for reliable identity verification in applications such as border control, secure access systems, and national ID programs. The primary challenge in iris recognition is reliable segmentation of the iris region from the eye image. Segmenting irises captured under different imaging conditions is difficult for a single model because spectral features, texture, lighting conditions, and noise patterns vary between NIR and VIS images. To tackle this problem, we present the Fused Iris Segmentation Network (FISNET), which combines the segmentation maps of two models to achieve enhanced precision and accuracy. FISNET generalizes robustly across varying lighting, resolution, and sensor types, consistently outperforming the individual models on all NIR and VIS datasets, including smartphone-captured images. FISNET was evaluated on the CASIA-V4 subsets, the Dark and Blue iris datasets, and the AFHIRIS dataset, achieving superior segmentation accuracy and recognition performance. The results show significant improvements over IrisParseNet, PixlSegNet, and SAM, with mIoU scores of 0.955, 0.930, 0.945, 0.955, 0.907, 0.815, 0.924, 0.913, 0.842, 0.852, 0.829, and 0.839 on the Lamp-V4, Interval-V4, Thousand-V4, Syn-V4, Twins-V4, UBIRIS-V2, BI-P1, BI-P2, DI-P1, DI-P2, DI-P3, and AFHIRIS-V1 datasets, respectively. The Type-I error rate (E1) was likewise low, at 0.0014, 0.0067, 0.0016, 0.0012, 0.0027, 0.0023, 0.0015, 0.0019, 0.0014, 0.0015, 0.0020, and 0.0069 across these datasets, further underscoring the advantage of the proposed approach. Code is available at https://github.com/GeetanjaliGTZ/FIS-Net-NIR-and-VIS-Iris-Segmentation
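The abstract reports results as mIoU and the Type-I error rate E1. As a reference for how these numbers are conventionally computed in iris segmentation work (E1 here follows the common definition as the fraction of pixels where the predicted and ground-truth masks disagree; this is a generic sketch, not the paper's evaluation code):

```python
def iou(pred, gt, cls):
    """Intersection-over-union for one class label over flattened masks."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 1.0

def mean_iou(pred, gt, classes=(0, 1)):
    """mIoU: IoU averaged over class labels (here background and iris)."""
    return sum(iou(pred, gt, c) for c in classes) / len(classes)

def e1_error(pred, gt):
    """Type-I error E1: fraction of pixels where the masks disagree (pixel-wise XOR)."""
    return sum(1 for p, g in zip(pred, gt) if p != g) / len(gt)

# Tiny flattened binary masks (1 = iris pixel, 0 = background).
gt   = [1, 1, 1, 0, 0, 0, 0, 0]
pred = [1, 1, 0, 0, 0, 0, 0, 1]

print(mean_iou(pred, gt))  # iris IoU 2/4, background IoU 4/6 -> 0.5833...
print(e1_error(pred, gt))  # 2 disagreeing pixels out of 8 -> 0.25
```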
format Article
id doaj-art-2a54ab1af0ef4ec087c7a4037b71af94
institution DOAJ
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-2a54ab1af0ef4ec087c7a4037b71af94 (record updated 2025-08-20). IEEE Access, ISSN 2169-3536, published by IEEE, 2025-01-01, vol. 13, pp. 101472-101490. DOI: 10.1109/ACCESS.2025.3578293. IEEE document 11029274.
Authors: Geetanjali Sharma (https://orcid.org/0000-0002-2103-853X), School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Mandi, Himachal Pradesh, India; Gaurav Jaswal, Division of Digital Forensics, Directorate of Forensic Services, Shimla, Himachal Pradesh, India; Aditya Nigam (https://orcid.org/0000-0003-4755-0619), School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Mandi, Himachal Pradesh, India; Raghavendra Ramachandra (https://orcid.org/0000-0003-0484-3956), Department of Information Security and Communication Technology, Norwegian University of Science and Technology (NTNU), Gjøvik, Norway.
title FISNET: A Learnable Fusion-Based Iris Segmentation Network Improving Robustness Across NIR and VIS Modalities
topic Biometrics
iris
segmentation
foundational models
SAM
fusion
url https://ieeexplore.ieee.org/document/11029274/