Optimizing Convolutional Neural Network Architectures

Convolutional neural networks (CNNs) are commonly employed for demanding applications, such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating...


Bibliographic Details
Main Authors: Luis Balderas, Miguel Lastra, José M. Benítez
Format: Article
Language: English
Published: MDPI AG 2024-09-01
Series: Mathematics
Subjects: convolutional neural network simplification; neural network pruning; efficient machine learning; Green AI
Online Access: https://www.mdpi.com/2227-7390/12/19/3032
author Luis Balderas
Miguel Lastra
José M. Benítez
collection DOAJ
description Convolutional neural networks (CNNs) are commonly employed for demanding applications such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating their use on devices with limited resources (e.g., edge devices). Furthermore, a new line of research seeking more sustainable approaches to Artificial Intelligence development is drawing increasing attention: Green AI. Motivated by the goal of optimizing Machine Learning models, in this paper we propose Optimizing Convolutional Neural Network Architectures (OCNNA), a novel pruning-based CNN optimization and construction method designed to assess the importance of convolutional layers. The proposal was evaluated through a thorough empirical study on well-known datasets (CIFAR-10, CIFAR-100, and ImageNet) and CNN architectures (VGG-16, ResNet-50, DenseNet-40, and MobileNet), using accuracy drop and remaining-parameters ratio as objective metrics to compare OCNNA with other state-of-the-art approaches. Our method was compared against more than 20 CNN simplification algorithms and obtained outstanding results. OCNNA is thus a competitive CNN construction method that could ease the deployment of neural networks on IoT or other resource-limited devices.
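The record does not detail OCNNA's pruning criterion, but the two comparison metrics named in the abstract, accuracy drop and remaining-parameters ratio, are straightforward to state concretely. The following is a minimal illustrative sketch in Python/PyTorch; the helper names and the L1-norm filter score are assumptions added for illustration, not the authors' method.

import torch
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    # Total number of trainable parameters in the model.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def remaining_params_ratio(original: nn.Module, pruned: nn.Module) -> float:
    # Fraction of the original parameters kept after pruning
    # (one of the two comparison metrics named in the abstract).
    return count_params(pruned) / count_params(original)

def accuracy_drop(acc_original: float, acc_pruned: float) -> float:
    # Accuracy lost by the pruned model, in percentage points.
    return acc_original - acc_pruned

def filter_importance_l1(conv: nn.Conv2d) -> torch.Tensor:
    # Illustrative baseline importance score: L1 norm of each output filter.
    # This is a common pruning criterion, NOT the OCNNA criterion,
    # which the record does not describe.
    # conv.weight has shape (out_channels, in_channels, kH, kW).
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))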
format Article
id doaj-art-6d69ffb387924d4c90d14034d794d636
institution OA Journals
issn 2227-7390
language English
publishDate 2024-09-01
publisher MDPI AG
record_format Article
series Mathematics
doi 10.3390/math12193032
citation Mathematics, Vol. 12, No. 19, Article 3032 (2024)
affiliation Luis Balderas: Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
affiliation Miguel Lastra: Distributed Computational Intelligence and Time Series Lab, University of Granada, 18071 Granada, Spain
affiliation José M. Benítez: Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
title Optimizing Convolutional Neural Network Architectures
topic convolutional neural network simplification
neural network pruning
efficient machine learning
Green AI
url https://www.mdpi.com/2227-7390/12/19/3032