SpecBoost: Accelerating Tiled Sparse Matrix Multiplication via Dataflow Speculation

Bibliographic Details
Main Authors: Gwanghwi Seo, Sungju Ryu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10921684/
Description
Summary: Sparse matrix-sparse matrix multiplication (SpMSpM) is crucial in many fields, such as scientific computing, sparse linear algebra, and machine learning, owing to its high computational cost on large, extremely sparse datasets. Applications that operate on sparse matrices exhibit a wide variety of sparsity patterns, so the inner-product, outer-product, and Gustavson (row-wise) methods have each been used selectively to accelerate sparse matrix computation. Previous works fix a single dataflow before computation begins; such an approach cannot be optimal for all input matrix types, whose data patterns vary widely. To address this limitation, we propose SpecBoost, a method that dynamically selects an optimal tile-level SpMSpM dataflow by analyzing the sparsity pattern within each matrix tile and speculating the best tiled dataflow scheme before the computation stage. We compared our method with widely known prior methods (CSSpa, ExTensor, MatRaptor); experimental results show that, on average, SpecBoost reduces memory accesses by 4.01×, 2.86×, and 2.22×, respectively, and improves performance over these baselines by 4.62×, 2.40×, and 1.59×.
ISSN: 2169-3536
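
The abstract describes SpecBoost only at a high level; the concrete per-tile selection criteria are not given in this record. The Python sketch below illustrates the general idea of tile-level dataflow speculation with a hypothetical density-threshold heuristic: the function name, the thresholds, and the use of SciPy are assumptions for illustration, not the paper's actual method.

    # Minimal sketch of tile-level dataflow speculation, assuming a simple
    # density heuristic. The choose_dataflow function and its thresholds are
    # hypothetical; SpecBoost's real selection criteria are in the paper.
    import scipy.sparse as sp

    def choose_dataflow(a_tile, b_tile):
        """Pick an SpMSpM dataflow for one tile pair from its sparsity."""
        m, k = a_tile.shape
        _, n = b_tile.shape
        density_a = a_tile.nnz / (m * k)   # fraction of nonzeros in the A tile
        density_b = b_tile.nnz / (k * n)   # fraction of nonzeros in the B tile
        if density_a > 0.5 and density_b > 0.5:
            return "inner"       # dense-ish tiles: inner product wastes little work
        if density_a < 0.01 and density_b < 0.01:
            return "outer"       # very sparse tiles: outer product skips empty rows/cols
        return "gustavson"       # mixed patterns: row-wise (Gustavson) as the default

    # Example: speculate a dataflow for one tile pair before the compute stage.
    A_tile = sp.random(256, 256, density=0.02, format="csr")
    B_tile = sp.random(256, 256, density=0.30, format="csr")
    print(choose_dataflow(A_tile, B_tile))

In the paper's setting, a decision like this would be made for every tile pair ahead of computation, so each tile can take the dataflow best matched to its sparsity pattern rather than a single dataflow fixed for the whole matrix.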