Compressing fully connected layers of deep neural networks using permuted features
Main Authors:
Format: Article
Language: English
Published: Wiley, 2023-07-01
Series: IET Computers & Digital Techniques
Subjects:
Online Access: https://doi.org/10.1049/cdt2.12060
Summary: Modern deep neural networks typically have some fully connected layers at the final classification stages. These stages have large memory requirements that can be expensive on resource-constrained embedded devices and also consume significant energy just to read the parameters from external memory into the processing chip. The authors show that the weights in such layers can be modelled as permutations of a common sequence with minimal impact on recognition accuracy. This allows the storage requirements of the FC layer(s) to be significantly reduced, which is reflected in a reduction of total network parameters ranging from 1.3× to 36×, with a median of 4.45×, on several benchmark networks. The authors compare these results with existing pruning, bitwidth reduction, and deep compression techniques and show the superior compression achieved by this method. They also demonstrate a 7× reduction in parameters on the VGG16 architecture with the ImageNet dataset, and show that the proposed method can be used in the classification stage of transfer learning networks.
ISSN: 1751-8601, 1751-861X
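The core idea described in the summary, representing each row of a fully connected layer's weight matrix as a permutation of one shared sequence, can be illustrated with a short sketch. The class, the fitting heuristic, and all names below are assumptions made for illustration only; they are not the authors' algorithm from the article.

```python
# Minimal sketch (not the paper's method): an FC layer whose rows are all
# permutations of one common weight sequence, so storage is one float vector
# plus integer index arrays instead of a full-precision M x N matrix.
import numpy as np

class PermutedFC:
    """Fully connected layer whose weight rows permute a shared sequence."""

    def __init__(self, shared_weights: np.ndarray, permutations: np.ndarray, bias: np.ndarray):
        # shared_weights: (in_features,)               one common sequence
        # permutations:   (out_features, in_features)  integer indices per row
        # bias:           (out_features,)
        self.shared_weights = shared_weights
        self.permutations = permutations
        self.bias = bias

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Rebuild each weight row on the fly by permuting the shared sequence,
        # then apply the usual affine transform y = x W^T + b.
        W = self.shared_weights[self.permutations]  # (out_features, in_features)
        return x @ W.T + self.bias


def from_dense(W: np.ndarray, bias: np.ndarray) -> PermutedFC:
    """Approximate a dense weight matrix with permutations of one shared row.

    Illustrative assumption: the shared sequence is the element-wise mean of
    the sorted rows, and each row's permutation restores that row's own sort
    order. The actual fitting procedure in the article may differ.
    """
    order = np.argsort(W, axis=1)             # column order of each sorted row
    shared = np.sort(W, axis=1).mean(axis=0)  # one common sorted sequence
    # permutations[i, j] = rank of W[i, j] within row i, so that
    # shared[permutations[i, j]] approximates W[i, j].
    permutations = np.empty_like(order)
    rows = np.arange(W.shape[0])[:, None]
    permutations[rows, order] = np.arange(W.shape[1])[None, :]
    return PermutedFC(shared.astype(W.dtype), permutations, bias)
```

Under these assumptions, the layer stores one float vector of length N plus M×N indices that need at most log2(N) bits each, instead of M×N full-precision weights, which is where the memory saving in the summary comes from.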