Complexity of Deep Convolutional Neural Networks in Mobile Computing

Neural networks employ massive interconnections of simple computing units, called neurons, to solve problems that are highly nonlinear and cannot be hard-coded into a program. These networks are computation-intensive, training them requires large amounts of training data, and each training example demands heavy computation. We look at different ways to reduce this computational burden and potentially make neural networks practical on mobile devices. In this paper, we survey various techniques that can be matched and combined to improve the training time of neural networks, and we review additional recommendations for making the process work on mobile devices as well. Finally, we survey the deep compression technique, which attacks the problem through network pruning, quantization, and encoding of the network weights: it first prunes the irrelevant connections (the pruning stage), then quantizes the network weights by choosing centroids for each layer, and in the third stage applies Huffman coding to reduce the storage required for the remaining weights.

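As a concrete illustration of the three-stage pipeline described in the abstract, the following is a minimal Python/NumPy sketch of magnitude pruning, centroid-based weight quantization (a small 1-D k-means per layer), and Huffman coding of the resulting centroid indices. The layer shape, pruning threshold, and codebook size are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the three deep-compression stages described above:
# magnitude pruning, centroid-based quantization, Huffman coding.
# Threshold, codebook size, and layer shape are illustrative assumptions.
import heapq
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)  # one hypothetical layer

# Stage 1: prune connections whose magnitude falls below a threshold.
threshold = 0.5                      # assumed cutoff; tuned per layer in practice
mask = np.abs(weights) >= threshold
pruned = weights * mask

# Stage 2: quantize surviving weights to k shared centroids (1-D k-means).
k = 16                               # assumed codebook size for this layer
survivors = pruned[mask]
centroids = np.linspace(survivors.min(), survivors.max(), k)  # linear init
for _ in range(20):                  # a few Lloyd iterations
    assign = np.abs(survivors[:, None] - centroids[None, :]).argmin(axis=1)
    for j in range(k):
        if np.any(assign == j):
            centroids[j] = survivors[assign == j].mean()
indices = np.abs(survivors[:, None] - centroids[None, :]).argmin(axis=1)

# Stage 3: Huffman-code the centroid indices so frequent codes use fewer bits.
def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} via a standard Huffman heap."""
    counts = Counter(symbols)
    if len(counts) == 1:
        return {next(iter(counts)): 1}
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)             # unique key so dicts are never compared
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths(indices.tolist())
coded_bits = sum(lengths[s] for s in indices.tolist())
dense_bits = weights.size * 32       # original float32 storage
print(f"kept {mask.mean():.1%} of weights; "
      f"~{dense_bits / coded_bits:.1f}x smaller after coding (indices only)")
```

In such a scheme only the centroid indices and the small per-layer codebook need to be stored, which is the intuition behind the storage savings the abstract describes; a full implementation would also encode the positions of the surviving sparse weights.
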
Bibliographic Details
Main Authors: Saad Naeem, Noreen Jamil, Habib Ullah Khan, Shah Nazir
Format: Article
Language: English
Published: Wiley, 2020-01-01
Series: Complexity
ISSN: 1076-2787, 1099-0526
Online Access: http://dx.doi.org/10.1155/2020/3853780
Author Affiliations: Saad Naeem and Noreen Jamil, Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Pakistan; Habib Ullah Khan, Department of Accounting & Information Systems, College of Business & Economics, Qatar University, Doha, Qatar; Shah Nazir, Department of Computer Science, University of Swabi, Swabi, Pakistan