Uncertainty-based quantization method for stable training of binary neural networks

Bibliographic Details
Main Authors: A.V. Trusov, D.N. Putintsev, E.E. Limonova
Format: Article
Language: English
Published: Samara National Research University, 2024-08-01
Series: Компьютерная оптика (Computer Optics)
Online Access: https://www.computeroptics.ru/eng/KO/Annot/KO48-4/480412e.html
Description
Summary: Binary neural networks (BNNs) have attracted attention due to their computational efficiency, but training them has proven challenging: existing algorithms either fail to produce stable, high-quality results or are too complex for practical use. In this paper, we introduce UBQ (Uncertainty-based quantizer), a novel quantizer for BNNs that combines the advantages of existing methods, yielding stable training and high-quality BNNs even for networks with few trainable parameters. We also propose a training procedure based on gradual network freezing and batch-normalization replacement, which provides a smooth transition from training mode to execution mode for BNNs. To evaluate UBQ, we ran experiments on the MNIST and CIFAR-10 datasets and compared our method to existing algorithms. UBQ outperforms previous methods on smaller networks and achieves comparable results on larger ones.
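The abstract does not spell out UBQ's quantization rule, so the following is only background context: a minimal PyTorch sketch of the standard deterministic binarization with a clipped straight-through estimator (STE) that BNN quantizers of this kind typically build on. All names (BinarizeSTE, BinaryLinear) and the clipping choice are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Deterministic sign binarization with a straight-through estimator:
    forward maps latent real-valued weights to {-1, +1}; backward passes
    the gradient through unchanged where |w| <= 1 (clipped STE)."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Zero the gradient outside [-1, 1] so saturated weights stop moving.
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)


class BinaryLinear(torch.nn.Linear):
    """Linear layer that binarizes its latent weights on every forward pass."""

    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)


# Tiny usage example: gradients reach the latent full-precision weights,
# which the optimizer updates, while the forward pass only sees +/-1 weights.
layer = BinaryLinear(8, 4)
out = layer(torch.randn(2, 8))
out.sum().backward()
print(layer.weight.grad.shape)  # torch.Size([4, 8])
```

Per the abstract, UBQ replaces this kind of hard quantizer with an uncertainty-based one, and the accompanying training procedure gradually freezes the network and replaces batch normalization so the trained model matches its binary execution mode.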
ISSN: 0134-2452, 2412-6179