A low functional redundancy-based network slimming method for accelerating deep neural networks
Deep neural networks (DNNs) are widely criticized for their large parameter counts and computational demands, which hinder deployment on edge and embedded devices. To reduce the floating-point operations (FLOPs) required to run DNNs and accelerate inference, we start from model pruning, an...
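The abstract describes pruning as a way to cut FLOPs and speed up inference. As a rough illustration only, the sketch below shows generic magnitude-based channel pruning (ranking filters by L1-norm) and the resulting FLOPs reduction for one convolutional layer; this is a common baseline criterion, not the authors' low-functional-redundancy method, and all names here are hypothetical.

```python
# Hypothetical sketch of magnitude-based channel pruning (a generic
# baseline, NOT the paper's redundancy-based criterion). Output channels
# with the smallest filter L1-norms are dropped, which directly shrinks
# the layer's FLOPs.

def l1_norms(filters):
    # filters: one flat weight list per output channel
    return [sum(abs(w) for w in f) for f in filters]

def select_kept_channels(filters, keep_ratio=0.5):
    # Keep the top keep_ratio fraction of channels by L1-norm.
    norms = l1_norms(filters)
    order = sorted(range(len(norms)), key=lambda i: norms[i], reverse=True)
    k = max(1, int(len(filters) * keep_ratio))
    return sorted(order[:k])

def conv_flops(c_in, c_out, k, h, w):
    # Multiply-accumulates for a k x k convolution producing an h x w map.
    return 2 * c_in * c_out * k * k * h * w

# Toy layer: 4 output channels, two of them near-zero.
filters = [[0.9, -1.1], [0.01, 0.02], [0.5, 0.4], [0.0, 0.03]]
kept = select_kept_channels(filters, keep_ratio=0.5)
flops_before = conv_flops(c_in=4, c_out=4, k=3, h=32, w=32)
flops_after = conv_flops(c_in=4, c_out=len(kept), k=3, h=32, w=32)
```

Halving the output channels of a layer halves its FLOPs; in a full network the saving compounds, since the next layer's input channels shrink as well.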
Main Authors:
Format: Article
Language: English
Published: Elsevier, 2025-04-01
Series: Alexandria Engineering Journal
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S1110016824017162