A low functional redundancy-based network slimming method for accelerating deep neural networks
Deep neural networks (DNNs) have been widely criticized for their large parameter counts and heavy computational demands, which hinder deployment on edge and embedded devices. To reduce the floating-point operations (FLOPs) needed to run DNNs and to accelerate inference, we start from model pruning, an...
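The abstract points to channel-level pruning (network slimming) as the route to fewer FLOPs. As a rough illustration of that general idea, and not of the article's low-functional-redundancy criterion, the sketch below scores channels by the magnitude of their BatchNorm scale factors and masks the weakest fraction; the function name, the 0.5 prune ratio, and the toy network are all illustrative assumptions.

```python
# Minimal sketch of BatchNorm-based channel slimming (an assumption for
# illustration; the cited article uses its own functional-redundancy score).
import torch
import torch.nn as nn

def prune_bn_channels(model: nn.Module, prune_ratio: float = 0.5) -> None:
    """Zero out the least important BatchNorm channels across the model."""
    # Gather |gamma| from every BatchNorm2d layer as a channel importance score.
    scores = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    # Global threshold: channels whose |gamma| falls below it are pruned.
    threshold = torch.quantile(scores, prune_ratio)
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            mask = (m.weight.data.abs() > threshold).float()
            m.weight.data.mul_(mask)   # zero the scale of pruned channels
            m.bias.data.mul_(mask)     # and their shift, silencing the channel

# Example: mask roughly half the channels of a toy conv net, then fine-tune.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)
prune_bn_channels(net, prune_ratio=0.5)
```

Note that zeroing a channel only masks it; the FLOPs reduction that slimming methods report comes from physically removing the pruned channels (and the matching filters of adjacent convolutions) after fine-tuning.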
Saved in:

| Main Authors: | Zheng Fang, Bo Yin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-04-01 |
| Series: | Alexandria Engineering Journal |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1110016824017162 |
Similar Items

- Closed-form interpretation of neural network classifiers with symbolic gradients
  by: Sebastian J Wetzel
  Published: (2025-01-01)
- Barlow Twins deep neural network for advanced 1D drug–target interaction prediction
  by: Maximilian G. Schuh, et al.
  Published: (2025-02-01)
- Modeling of Solar Radiation with a Neural Network
  by: Valentin Stoyanov, et al.
  Published: (2018-09-01)
- A Real-Time Face Recognition System Using AlexNet Deep Convolutional Network Transfer Learning Model
  by: Lawrence O. Omotosho, et al.
  Published: (2021-10-01)
- Retina Modeling by Artificial Neural Networks
  by: Amelia Stewart
  Published: (2024-03-01)