SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication
We introduce novel sparsity-aware in-DRAM matrix mapping techniques and a corresponding DRAM-based acceleration framework, termed SpDRAM, which utilizes a triple row activation scheme to efficiently handle sparse matrix-vector multiplication (SpMV). We found that reducing operations through sparsity relies heavily on how matrices are mapped into DRAM banks, which operate row by row. From this insight, we developed two distinct matrix mapping techniques aimed at maximizing the reduction of row operations with minimal design overhead: Output-aware Matrix Permutation (OMP) and Zero-aware Matrix Column Sorting (ZMCS). Additionally, we propose a Multiplication Deferring (MD) scheme that leverages the prevalent bit-level sparsity in matrix values to decrease the effective bit-width required for in-bank multiplication. Evaluation results demonstrate that the combination of our in-DRAM acceleration methods outperforms the latest DRAM-based PIM accelerator for SpMV, achieving up to 7.54× higher performance and 22.4× better energy efficiency across a wide range of SpMV tasks.
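The central idea in the abstract is that DRAM banks are activated one row at a time, so a mapping that clusters zeros into the same DRAM rows lets those rows be skipped entirely. The toy NumPy sketch below illustrates that principle only; it is not the SpDRAM design, and the names DRAM_ROW_WIDTH, map_to_dram_rows, and count_skippable_rows are hypothetical. The zero-aware column ordering is loosely inspired by the ZMCS idea mentioned above.

```python
import numpy as np

# Illustrative sketch only: a toy model of the sparsity-aware mapping idea,
# not the SpDRAM implementation. All names here are hypothetical.

DRAM_ROW_WIDTH = 8  # elements held by one (toy) DRAM row


def map_to_dram_rows(matrix, col_order):
    """Permute the matrix columns, then lay the elements out row by row
    into fixed-width 'DRAM rows'."""
    flat = matrix[:, col_order].reshape(-1)
    pad = (-flat.size) % DRAM_ROW_WIDTH
    flat = np.pad(flat, (0, pad))
    return flat.reshape(-1, DRAM_ROW_WIDTH)


def count_skippable_rows(dram_rows):
    """DRAM rows that hold only zeros contribute nothing to SpMV,
    so their activations could be skipped."""
    return int(np.sum(~dram_rows.any(axis=1)))


rng = np.random.default_rng(0)
A = rng.random((64, 64)) * (rng.random((64, 64)) < 0.3)  # element-level sparsity
A[:, rng.choice(64, size=32, replace=False)] = 0.0       # plus many all-zero columns

# Baseline: matrix stored in its original column order.
baseline = count_skippable_rows(map_to_dram_rows(A, np.arange(A.shape[1])))

# Zero-aware ordering (loosely in the spirit of ZMCS): sort columns by their
# zero count so that empty and nearly empty columns end up adjacent,
# clustering zeros into the same DRAM rows.
order = np.argsort((A == 0).sum(axis=0))
zero_aware = count_skippable_rows(map_to_dram_rows(A, order))

print(f"skippable DRAM rows: baseline={baseline}, zero-aware={zero_aware}")
```

On matrices with many empty or nearly empty columns, the reordered layout leaves noticeably more all-zero DRAM rows than the baseline order, which is the kind of row-operation reduction the mapping techniques target.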
| Main Authors: | Jieui Kang, Soeun Choi, Eunjin Lee, Jaehyeong Sim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | Processing-in-memory; SpMV; sparsity; DRAM |
| Online Access: | https://ieeexplore.ieee.org/document/10766585/ |
| _version_ | 1850138541114261504 |
|---|---|
| author | Jieui Kang, Soeun Choi, Eunjin Lee, Jaehyeong Sim |
| author_facet | Jieui Kang, Soeun Choi, Eunjin Lee, Jaehyeong Sim |
| author_sort | Jieui Kang |
| collection | DOAJ |
| description | We introduce novel sparsity-aware in-DRAM matrix mapping techniques and a corresponding DRAM-based acceleration framework, termed SpDRAM, which utilizes a triple row activation scheme to efficiently handle sparse matrix-vector multiplication (SpMV). We found that reducing operations through sparsity relies heavily on how matrices are mapped into DRAM banks, which operate row by row. From this insight, we developed two distinct matrix mapping techniques aimed at maximizing the reduction of row operations with minimal design overhead: Output-aware Matrix Permutation (OMP) and Zero-aware Matrix Column Sorting (ZMCS). Additionally, we propose a Multiplication Deferring (MD) scheme that leverages the prevalent bit-level sparsity in matrix values to decrease the effective bit-width required for in-bank multiplication. Evaluation results demonstrate that the combination of our in-DRAM acceleration methods outperforms the latest DRAM-based PIM accelerator for SpMV, achieving up to 7.54× higher performance and 22.4× better energy efficiency across a wide range of SpMV tasks. |
| format | Article |
| id | doaj-art-ecedac9ab4164294a814972ddd1b1cfa |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-ecedac9ab4164294a814972ddd1b1cfa; 2025-08-20T02:30:34Z; eng; IEEE; IEEE Access; 2169-3536; 2024-01-01; vol. 12, pp. 176009-176021; DOI: 10.1109/ACCESS.2024.3505622; article no. 10766585; SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication; Jieui Kang (https://orcid.org/0009-0000-7691-0930); Soeun Choi; Eunjin Lee (https://orcid.org/0009-0007-8937-2976); Jaehyeong Sim (https://orcid.org/0000-0001-8722-8486); Artificial Intelligence Convergence, Ewha Womans University, Seoul, South Korea; Department of Computer Science and Engineering, Ewha Womans University, Seoul, South Korea; https://ieeexplore.ieee.org/document/10766585/; Processing-in-memory; SpMV; sparsity; DRAM |
| spellingShingle | Jieui Kang; Soeun Choi; Eunjin Lee; Jaehyeong Sim; SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication; IEEE Access; Processing-in-memory; SpMV; sparsity; DRAM |
| title | SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication |
| title_full | SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication |
| title_fullStr | SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication |
| title_full_unstemmed | SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication |
| title_short | SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication |
| title_sort | spdram efficient in dram acceleration of sparse matrix vector multiplication |
| topic | Processing-in-memory SpMV sparsity DRAM |
| url | https://ieeexplore.ieee.org/document/10766585/ |
| work_keys_str_mv | AT jieuikang spdramefficientindramaccelerationofsparsematrixvectormultiplication AT soeunchoi spdramefficientindramaccelerationofsparsematrixvectormultiplication AT eunjinlee spdramefficientindramaccelerationofsparsematrixvectormultiplication AT jaehyeongsim spdramefficientindramaccelerationofsparsematrixvectormultiplication |