A low functional redundancy-based network slimming method for accelerating deep neural networks
Deep neural networks (DNNs) have been widely criticized for their large parameter counts and heavy computation demands, which hinder deployment on edge and embedded devices. To reduce the floating point operations (FLOPs) required to run DNNs and to accelerate inference, we start from model pruning, an...
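For context, the sketch below illustrates the general network-slimming idea of ranking channels by an importance score and masking the least important ones before fine-tuning. It uses BatchNorm scale magnitudes as the score, which is a common proxy and purely an assumption here; the paper's own low-functional-redundancy criterion is not described in the truncated abstract, so this is not the authors' method.

```python
# Minimal sketch of BN-scale-based channel selection (the generic "network
# slimming" idea). NOTE: the ranking score below is an illustrative stand-in,
# not the paper's low-functional-redundancy criterion.
import torch
import torch.nn as nn


def select_prunable_channels(model: nn.Module, prune_ratio: float = 0.5):
    """Rank channels by the magnitude of their BatchNorm scale factors and
    return, per BN layer, a boolean mask of channels to keep."""
    # Gather all BN scale factors to compute one global pruning threshold.
    all_scales = torch.cat([
        m.weight.detach().abs().flatten()
        for m in model.modules()
        if isinstance(m, nn.BatchNorm2d)
    ])
    threshold = torch.quantile(all_scales, prune_ratio)

    keep_masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            keep_masks[name] = m.weight.detach().abs() > threshold
    return keep_masks


if __name__ == "__main__":
    # Toy example: a small conv block; in practice this would be a trained network.
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    )
    masks = select_prunable_channels(net, prune_ratio=0.5)
    for layer, mask in masks.items():
        print(f"{layer}: keep {int(mask.sum())}/{mask.numel()} channels")
```

In practice, the selected channels would be physically removed from the convolution and BatchNorm layers and the slimmed network fine-tuned to recover accuracy; that step is omitted in this sketch.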
| Main Authors: | Zheng Fang, Bo Yin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-04-01 |
| Series: | Alexandria Engineering Journal |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1110016824017162 |
Similar Items
- Closed-form interpretation of neural network classifiers with symbolic gradients
  by: Sebastian J Wetzel
  Published: (2025-01-01)
- MODELING OF SOLAR RADIATION WITH A NEURAL NETWORK
  by: VALENTIN STOYANOV, et al.
  Published: (2018-09-01)
- GTAT: empowering graph neural networks with cross attention
  by: Jiahao Shen, et al.
  Published: (2025-02-01)
- Retina Modeling by Artificial Neural Networks
  by: Amelia Stewart
  Published: (2024-03-01)
- SYMBOLIC ANALYSIS OF CLASSICAL NEURAL NETWORKS FOR DEEP LEARNING
  by: Vladimir Milićević, et al.
  Published: (2025-03-01)