Optimization of Direct Convolution Algorithms on ARM Processors for Deep Learning Inference

Bibliographic Details
Main Authors: Shang Li, Fei Yu, Shankou Zhang, Huige Yin, Hairong Lin
Format: Article
Language: English
Published: MDPI AG 2025-02-01
Series: Mathematics
Subjects:
Online Access: https://www.mdpi.com/2227-7390/13/5/787
Description
Summary: In deep learning, convolutional layers typically bear the majority of the computational workload and are often the primary contributors to performance bottlenecks. The most widely used convolution algorithm relies on the IM2COL transform to cast the convolution as a GEMM (General Matrix Multiplication) and thereby exploit highly optimized BLAS (Basic Linear Algebra Subprograms) libraries, but the transform incurs additional memory overhead. Recent studies have indicated that direct convolution approaches can outperform traditional IM2COL-based implementations without this additional memory overhead. In this paper, we propose a high-performance implementation of the direct convolution algorithm for inference that preserves the channel-first data layout of the convolutional layer inputs/outputs. We evaluate the proposed algorithm on a multi-core ARM CPU platform and compare it with state-of-the-art convolution optimization techniques. Experimental results demonstrate that our new algorithm performs better across the evaluated scenarios and platforms.
ISSN: 2227-7390
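
The summary above contrasts IM2COL+GEMM convolution, which materializes an extra buffer, with direct convolution over the original channel-first (NCHW) tensors. The following minimal C sketch illustrates only the general idea of a direct convolution that keeps the NCHW layout and needs no intermediate IM2COL buffer; it is not the authors' optimized ARM kernel, and the function name, the stride-1/no-padding setting, and the tensor sizes in main are illustrative assumptions.

/* Naive direct convolution in channel-first (NCHW) layout: sketch only,
 * not the paper's optimized implementation. Stride 1, no padding. */
#include <stdio.h>
#include <stdlib.h>

static void direct_conv_nchw(const float *input, const float *weight,
                             float *output,
                             int N, int C, int H, int W,   /* input:  N x C x H x W  */
                             int K, int R, int S)          /* weight: K x C x R x S  */
{
    int Ho = H - R + 1, Wo = W - S + 1;                    /* output: N x K x Ho x Wo */
    for (int n = 0; n < N; n++)
        for (int k = 0; k < K; k++)
            for (int ho = 0; ho < Ho; ho++)
                for (int wo = 0; wo < Wo; wo++) {
                    float acc = 0.0f;
                    /* Accumulate directly from the NCHW input; no IM2COL buffer. */
                    for (int c = 0; c < C; c++)
                        for (int r = 0; r < R; r++)
                            for (int s = 0; s < S; s++)
                                acc += input[((n * C + c) * H + (ho + r)) * W + (wo + s)]
                                     * weight[((k * C + c) * R + r) * S + s];
                    output[((n * K + k) * Ho + ho) * Wo + wo] = acc;
                }
}

int main(void)
{
    /* Tiny illustrative sizes: 1 image, 2 channels, 4x4 spatial, 3 filters, 3x3 kernel. */
    int N = 1, C = 2, H = 4, W = 4, K = 3, R = 3, S = 3;
    int Ho = H - R + 1, Wo = W - S + 1;
    float *in  = calloc((size_t)N * C * H * W,   sizeof(float));
    float *wgt = calloc((size_t)K * C * R * S,   sizeof(float));
    float *out = calloc((size_t)N * K * Ho * Wo, sizeof(float));
    for (int i = 0; i < N * C * H * W; i++) in[i]  = 0.1f * (float)i;
    for (int i = 0; i < K * C * R * S; i++) wgt[i] = 1.0f;

    direct_conv_nchw(in, wgt, out, N, C, H, W, K, R, S);
    printf("out[0] = %f\n", out[0]);

    free(in); free(wgt); free(out);
    return 0;
}

An IM2COL-based implementation would instead copy each receptive field into a C*R*S x Ho*Wo matrix before calling a BLAS GEMM, which is the extra memory traffic the direct approach avoids.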