Enhancing Efficiency and Regularization in Convolutional Neural Networks: Strategies for Optimized Dropout

Bibliographic Details
Main Author: Mehdi Ghayoumi
Format: Article
Language:English
Published: MDPI AG 2025-05-01
Series:AI
Subjects:
Online Access:https://www.mdpi.com/2673-2688/6/6/111
author Mehdi Ghayoumi
author_facet Mehdi Ghayoumi
author_sort Mehdi Ghayoumi
collection DOAJ
description <b>Background/Objectives:</b> Convolutional Neural Networks (CNNs), while effective in tasks such as image classification and language processing, often experience overfitting and inefficient training due to static, structure-agnostic regularization techniques like traditional dropout. This study aims to address these limitations by proposing a more dynamic and context-sensitive dropout strategy. <b>Methods:</b> We introduce <i>Probabilistic Feature Importance Dropout</i> (PFID), a novel regularization method that assigns dropout rates based on the probabilistic significance of individual features. PFID is integrated with adaptive, structured, and contextual dropout strategies, forming a unified framework for intelligent regularization. <b>Results:</b> Experimental evaluation on standard benchmark datasets including CIFAR-10, MNIST, and Fashion MNIST demonstrated that PFID significantly improves performance metrics such as classification accuracy, training loss, and computational efficiency compared to conventional dropout methods. <b>Conclusions:</b> PFID offers a practical and scalable solution for enhancing CNN generalization and training efficiency. Its dynamic nature and feature-aware design provide a strong foundation for future advancements in adaptive regularization for deep learning models.
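The abstract describes PFID only at a high level: dropout rates are assigned from the probabilistic significance of individual features. The exact importance measure and rate mapping are not given in this record, so the following is a minimal sketch under assumed choices: importance is the batch-mean absolute activation per feature, normalized to [0, 1], and mapped inversely to a drop probability between hypothetical bounds `p_min` and `p_max`.

```python
import numpy as np

def pfid_mask(activations, p_min=0.1, p_max=0.7, rng=None):
    """Sketch of feature-importance-driven dropout (assumed mechanics).

    Features with large mean |activation| across the batch are treated
    as important and get a low drop probability; unimportant features
    get a high one. This is an illustration, not the paper's code.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-feature importance: mean absolute activation over the batch.
    importance = np.abs(activations).mean(axis=0)
    # Normalize to [0, 1]; guard against a constant-importance layer.
    span = importance.max() - importance.min()
    norm = (importance - importance.min()) / span if span > 0 else np.zeros_like(importance)
    # Map high importance -> low dropout probability.
    drop_prob = p_max - norm * (p_max - p_min)
    # Per-feature drop probabilities broadcast over the batch dimension.
    keep = rng.random(activations.shape) >= drop_prob
    # Inverted-dropout scaling keeps expected activations unchanged.
    return activations * keep / (1.0 - drop_prob)

batch = np.random.randn(32, 8).astype(np.float32)
out = pfid_mask(batch)
print(out.shape)  # (32, 8)
```

In this sketch the inverted-dropout rescaling (dividing survivors by their keep probability) means no adjustment is needed at inference time, matching how conventional dropout is usually implemented.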
format Article
id doaj-art-c460fe7589c645ab95c8b691b95ea4c8
institution Kabale University
issn 2673-2688
language English
publishDate 2025-05-01
publisher MDPI AG
record_format Article
series AI
spelling doaj-art-c460fe7589c645ab95c8b691b95ea4c8 (indexed 2025-08-20T03:24:29Z)
doi 10.3390/ai6060111 (AI, vol. 6, iss. 6, art. 111; MDPI AG, 2025-05-01)
affiliation Department of Cybersecurity, School of Science, Health and Criminal Justice, State University of New York, Canton, NY 13617, USA
title Enhancing Efficiency and Regularization in Convolutional Neural Networks: Strategies for Optimized Dropout
topic convolutional neural networks (CNNs)
probabilistic feature importance dropout (PFID)
regularization techniques
adaptive learning
network efficiency
url https://www.mdpi.com/2673-2688/6/6/111