Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization
Capsule Networks (CapsNets) are a class of neural network architectures that can model hierarchical relationships more accurately due to their hierarchical structure and dynamic routing algorithms. However, their high accuracy comes at the cost of significant memory and computational resources, making them less feasible for deployment on resource-constrained devices. In this paper, progressive bitwidth assignment approaches are introduced to efficiently quantize CapsNets. Initially, a comprehensive and detailed analysis of parameter quantization in CapsNets is performed, exploring various granularities such as block-wise quantization and dynamic routing quantization. Then, three quantization approaches are applied to progressively quantize the CapsNet, drawing on insights into the susceptibility of individual layers to quantization. The proposed approaches are Post-Training Quantization (PTQ) strategies that minimize the dependence on floating-point operations and incorporate layer-specific integer bit-widths based on quantization error analysis. The PTQ strategies employ Power-of-Two (PoT) scaling factors to simplify computations, effectively utilizing hardware shifts and significantly reducing computational complexity. This technique not only reduces the memory footprint but also maintains accuracy by introducing a range clipping method tailored to the hardware's capabilities, obviating the need for data preprocessing. Our experimental results on ShallowCaps and DeepCaps networks across multiple datasets (MNIST, Fashion-MNIST, CIFAR-10, and SVHN) demonstrate the efficiency of our approach. Specifically, on the CIFAR-10 dataset using the DeepCaps architecture, we achieved a substantial memory reduction (7.02× for weights and 3.74× for activations) with a minimal accuracy loss of only 0.09%. By using progressive bitwidth assignment and post-training quantization, this work optimizes CapsNets for efficient, real-time visual processing on resource-constrained edge devices, enabling applications in IoT, mobile platforms, and embedded systems.
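The record itself contains no code; the following is a minimal, illustrative NumPy sketch of the general idea behind power-of-two (PoT) scaled post-training quantization with symmetric range clipping, as described in the abstract. All function names, the chosen bit-width, and the example tensor are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pot_shift(x, num_bits=8):
    """Choose an exponent s so that scale = 2**s maps |x| into the signed num_bits range."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = float(np.max(np.abs(x))) + 1e-12
    # Round the ideal scale up to the nearest power of two, so scaling becomes a bit shift.
    return int(np.ceil(np.log2(max_abs / qmax)))

def quantize_pot(x, num_bits=8):
    """Symmetric post-training quantization with a power-of-two scale and range clipping."""
    qmax = 2 ** (num_bits - 1) - 1
    s = pot_shift(x, num_bits)
    q = np.round(x / (2.0 ** s))       # scaling by 2**(-s): realizable as a bit shift in hardware
    q = np.clip(q, -qmax - 1, qmax)    # clip to the representable integer range
    return q.astype(np.int32), s

def dequantize_pot(q, s):
    """Approximate reconstruction: multiply by 2**s (again a bit shift in integer hardware)."""
    return q.astype(np.float32) * (2.0 ** s)

# Toy usage on a random "capsule weight" tensor (shape chosen arbitrarily).
w = np.random.randn(32, 8, 16).astype(np.float32)
q, s = quantize_pot(w, num_bits=6)
w_hat = dequantize_pot(q, s)
print("shift:", s, "max abs error:", float(np.max(np.abs(w - w_hat))))
```

Because the scale is constrained to a power of two, multiplication and division by the scale reduce to shifts in integer hardware, which is the source of the computational savings the abstract refers to.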
Saved in:
Main Authors: | Mohsen Raji, Amir Ghazizadeh Ahsaei, Kimia Soroush, Behnam Ghavami |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Capsule networks; deep learning; neural networks; post-training quantization; compression |
Online Access: | https://ieeexplore.ieee.org/document/10854429/ |
_version_ | 1832540506727383040 |
---|---|
author | Mohsen Raji; Amir Ghazizadeh Ahsaei; Kimia Soroush; Behnam Ghavami |
author_facet | Mohsen Raji; Amir Ghazizadeh Ahsaei; Kimia Soroush; Behnam Ghavami |
author_sort | Mohsen Raji |
collection | DOAJ |
description | Capsule Networks (CapsNets) are a class of neural network architectures that can model hierarchical relationships more accurately due to their hierarchical structure and dynamic routing algorithms. However, their high accuracy comes at the cost of significant memory and computational resources, making them less feasible for deployment on resource-constrained devices. In this paper, progressive bitwidth assignment approaches are introduced to efficiently quantize CapsNets. Initially, a comprehensive and detailed analysis of parameter quantization in CapsNets is performed, exploring various granularities such as block-wise quantization and dynamic routing quantization. Then, three quantization approaches are applied to progressively quantize the CapsNet, drawing on insights into the susceptibility of individual layers to quantization. The proposed approaches are Post-Training Quantization (PTQ) strategies that minimize the dependence on floating-point operations and incorporate layer-specific integer bit-widths based on quantization error analysis. The PTQ strategies employ Power-of-Two (PoT) scaling factors to simplify computations, effectively utilizing hardware shifts and significantly reducing computational complexity. This technique not only reduces the memory footprint but also maintains accuracy by introducing a range clipping method tailored to the hardware's capabilities, obviating the need for data preprocessing. Our experimental results on ShallowCaps and DeepCaps networks across multiple datasets (MNIST, Fashion-MNIST, CIFAR-10, and SVHN) demonstrate the efficiency of our approach. Specifically, on the CIFAR-10 dataset using the DeepCaps architecture, we achieved a substantial memory reduction (7.02× for weights and 3.74× for activations) with a minimal accuracy loss of only 0.09%. By using progressive bitwidth assignment and post-training quantization, this work optimizes CapsNets for efficient, real-time visual processing on resource-constrained edge devices, enabling applications in IoT, mobile platforms, and embedded systems. |
format | Article |
id | doaj-art-0a6a2d8fc98c4ee8882d68efb6c5d16d |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-0a6a2d8fc98c4ee8882d68efb6c5d16d; indexed 2025-02-05T00:01:07Z; eng; IEEE; IEEE Access; ISSN 2169-3536; published 2025-01-01, vol. 13, pp. 21533-21546; DOI 10.1109/ACCESS.2025.3534434; IEEE article 10854429; Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization; Mohsen Raji (https://orcid.org/0000-0001-7113-5197), Amir Ghazizadeh Ahsaei, Kimia Soroush: School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran; Behnam Ghavami (https://orcid.org/0000-0001-5391-383X): Department of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran; [abstract as given in the description field above]; https://ieeexplore.ieee.org/document/10854429/; Capsule networks; deep learning; neural networks; post-training quantization; compression |
spellingShingle | Mohsen Raji; Amir Ghazizadeh Ahsaei; Kimia Soroush; Behnam Ghavami; Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization; IEEE Access; Capsule networks; deep learning; neural networks; post-training quantization; compression |
title | Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization |
title_full | Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization |
title_fullStr | Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization |
title_full_unstemmed | Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization |
title_short | Progressive Bitwidth Assignment Approaches for Efficient Capsule Networks Quantization |
title_sort | progressive bitwidth assignment approaches for efficient capsule networks quantization |
topic | Capsule networks; deep learning; neural networks; post-training quantization; compression |
url | https://ieeexplore.ieee.org/document/10854429/ |
work_keys_str_mv | AT mohsenraji progressivebitwidthassignmentapproachesforefficientcapsulenetworksquantization AT amirghazizadehahsaei progressivebitwidthassignmentapproachesforefficientcapsulenetworksquantization AT kimiasoroush progressivebitwidthassignmentapproachesforefficientcapsulenetworksquantization AT behnamghavami progressivebitwidthassignmentapproachesforefficientcapsulenetworksquantization |
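As a complementary illustration of the progressive, layer-specific bitwidth assignment described in the abstract, the sketch below greedily narrows each layer's integer bit-width while a per-layer quantization-error budget is respected. The candidate bit-widths, the error metric, the budget value, and the layer names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quant_error(w, num_bits):
    """Mean squared error introduced by symmetric uniform quantization of w at num_bits."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = (float(np.max(np.abs(w))) + 1e-12) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return float(np.mean((w - q * scale) ** 2))

def assign_bitwidths(layers, candidate_bits=(8, 6, 4), error_budget=1e-4):
    """Greedily narrow each layer's bit-width while its quantization error stays within budget.

    `layers` maps layer names to weight arrays; layers more susceptible to
    quantization (larger error) keep wider integer formats.
    """
    assignment = {}
    for name, w in layers.items():
        chosen = candidate_bits[0]                    # fall back to the widest candidate
        for bits in candidate_bits:                   # candidates ordered wide to narrow
            if quant_error(w, bits) <= error_budget:
                chosen = bits                         # accept the narrowest width that fits
        assignment[name] = chosen
    return assignment

# Toy usage with randomly generated "layers" (names and shapes are made up).
rng = np.random.default_rng(0)
layers = {
    "conv1": rng.normal(0.0, 0.10, size=(64, 3, 9, 9)),
    "primary_caps": rng.normal(0.0, 0.05, size=(32, 8, 9, 9)),
    "class_caps": rng.normal(0.0, 0.02, size=(10, 16, 32, 8)),
}
print(assign_bitwidths(layers))
```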