Optimizing binary neural network quantization for fixed pattern noise robustness
Abstract: This work presents a comprehensive analysis of how extreme data quantization and fixed pattern noise (FPN) from CMOS imagers affect the performance of deep neural networks for image recognition tasks. Binary neural networks (BNNs) are particularly attractive for resource-constrained embedded systems due to their reduced memory footprint and computational requirements. However, these highly quantized networks show increased sensitivity to sensor imperfections, particularly the FPN inherent to CMOS imaging devices. Taking as a baseline a BNN with binary weights and 32-bit batch normalization parameters, we systematically investigate the performance degradation when these parameters are quantized to lower bit-widths and when various types of FPN are applied to the input images. Our experiments on the CIFAR-10 and CIFAR-100 datasets reveal that quantizing the batch normalization parameters to 4 bits offers a reasonable compromise between resource efficiency and accuracy, although performance deteriorates significantly at higher noise levels. We demonstrate that this degradation can be effectively mitigated through strategic noise augmentation during training. Specifically, training with moderate (5-10%) noise levels improves resilience to similar noise during inference, while models trained with column FPN show remarkable robustness across multiple noise types. Our findings provide practical guidance for designing efficient, noise-tolerant BNNs for low-power vision systems, showing that appropriate training strategies can achieve up to 60% accuracy.
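The paper itself does not publish code, but the abstract describes two concrete ingredients that a short sketch can make tangible: simulating column-wise (and pixel-wise) FPN as additive offsets for noise-augmented training, and squeezing 32-bit batch-normalization parameters down to a lower bit-width such as 4 bits. The following Python/NumPy sketch illustrates both under stated assumptions; the additive offset model, the min-max uniform quantizer, and all function names (`add_column_fpn`, `quantize_uniform`, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the FPN model, noise levels, and quantization
# scheme are assumptions based on the abstract, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def add_column_fpn(images, level=0.05, rng=rng):
    """Add column fixed-pattern noise: one random offset per column,
    shared by every row of that column.

    images: float array in [0, 1], shape (N, H, W, C).
    level:  noise amplitude as a fraction of full scale (0.05 = 5%).
    """
    n, _, w, c = images.shape
    # One offset per column per image; broadcast down the rows.
    offsets = rng.uniform(-level, level, size=(n, 1, w, c))
    return np.clip(images + offsets, 0.0, 1.0)

def add_pixel_fpn(images, level=0.05, rng=rng):
    """Add pixel-wise fixed-pattern noise: an independent offset per pixel.
    In a real sensor the pattern is fixed across frames; for training
    augmentation it is redrawn per batch here."""
    offsets = rng.uniform(-level, level, size=images.shape)
    return np.clip(images + offsets, 0.0, 1.0)

def quantize_uniform(x, bits=4):
    """Uniformly quantize a tensor of batch-normalization parameters to
    `bits` bits over its own min-max range (a simple post-training scheme;
    the paper's exact quantizer may differ)."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scale = (hi - lo) / (2**bits - 1)
    return np.round((x - lo) / scale) * scale + lo

# Noise-augmented training batch at a moderate (5%) noise level.
batch = rng.uniform(0.0, 1.0, size=(8, 32, 32, 3)).astype(np.float32)  # stand-in for CIFAR images
noisy_batch = add_column_fpn(batch, level=0.05)

# 32-bit batch-norm scale parameters squeezed to 4 bits.
gamma = rng.normal(1.0, 0.2, size=128).astype(np.float32)
gamma_q = quantize_uniform(gamma, bits=4)
print(np.max(np.abs(gamma - gamma_q)))  # worst-case quantization error
```

In a training loop, the noise injection would be applied to each batch before the forward pass, with the level drawn from the moderate 5-10% range the abstract reports as effective.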
| Main Authors: | Francisco Javier Andreo-Oliver, Gines Domenech-Asensi, Jose Angel Diaz-Madrid, Ramon Ruiz-Merino, Juan Zapata-Perez |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Subjects: | Deep neural network; Computer vision; Data quantization; Batch normalization; Fixed pattern noise; CMOS imagers |
| Online Access: | https://doi.org/10.1038/s41598-025-10833-1 |
| author | Francisco Javier Andreo-Oliver; Gines Domenech-Asensi; Jose Angel Diaz-Madrid; Ramon Ruiz-Merino; Juan Zapata-Perez |
|---|---|
| author_sort | Francisco Javier Andreo-Oliver |
| collection | DOAJ |
| format | Article |
| id | doaj-art-1d7f1572b30d43b8887bc7696872dbb7 |
| institution | Kabale University |
| issn | 2045-2322 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| affiliations | Francisco Javier Andreo-Oliver, Gines Domenech-Asensi, Ramon Ruiz-Merino, Juan Zapata-Perez: Universidad Politécnica de Cartagena; Jose Angel Diaz-Madrid: Centro Universitario de la Defensa UPCT |
| title | Optimizing binary neural network quantization for fixed pattern noise robustness |
| topic | Deep neural network; Computer vision; Data quantization; Batch normalization; Fixed pattern noise; CMOS imagers |
| url | https://doi.org/10.1038/s41598-025-10833-1 |