ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks
The posit number system aims to be a drop-in replacement for the existing IEEE floating-point standard. Its properties, tapered precision and a high dynamic range, allow a smaller posit to almost match the performance of a much larger floating-point format in representing decimal values. This becomes especially useful for error-tolerant tasks such as deep learning inference, where low latency and small area are priorities.
| Main Authors: | Aditya Anirudh Jonnalagadda, Uppugunduru Anil Kumar, Rishi Thotli, Satvik Sardesai, Sreehari Veeramachaneni, Syed Ershad Ahmed |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | Approximate posit multipliers; deep neural networks; energy-efficient |
| Online Access: | https://ieeexplore.ieee.org/document/10445185/ |
| _version_ | 1850066417128308736 |
|---|---|
| author | Aditya Anirudh Jonnalagadda Uppugunduru Anil Kumar Rishi Thotli Satvik Sardesai Sreehari Veeramachaneni Syed Ershad Ahmed |
| author_facet | Aditya Anirudh Jonnalagadda Uppugunduru Anil Kumar Rishi Thotli Satvik Sardesai Sreehari Veeramachaneni Syed Ershad Ahmed |
| author_sort | Aditya Anirudh Jonnalagadda |
| collection | DOAJ |
| description | The posit number system aims to be a drop-in replacement for the existing IEEE floating-point standard. Its properties, tapered precision and a high dynamic range, allow a smaller posit to almost match the performance of a much larger floating-point format in representing decimal values. This becomes especially useful for error-tolerant tasks such as deep learning inference, where low latency and small area are priorities. Recent research has found that the performance of deep neural network models saturates beyond a certain accuracy level of the multipliers used for convolutions. The extra hardware cost of precise arithmetic circuits for such applications therefore becomes unnecessary overhead. This paper explores approximate posit multipliers in the convolutional layers of deep neural networks and attempts to find an ideal balance between hardware utilization and inference accuracy. Posit multiplication involves several steps, with the mantissa multiplication step consuming the most hardware resources. To mitigate this, a posit multiplier circuit is proposed that uses an approximate hybrid-radix Booth encoding for mantissa multiplication, along with techniques such as truncation and bit masking based on the input regime size. In addition, a novel Booth encoding control scheme that prevents unnecessary bit switching has been devised to reduce dynamic power dissipation. Compared to the existing literature, these optimizations contribute a 23% decrease in power dissipation in the mantissa multiplication stage. Further, a novel area- and energy-efficient decoder architecture has also been developed, with an 11% reduction in dynamic power dissipation and area compared to existing decoders. Overall, the proposed <16, 2> posit multiplier offers a 14% reduction in PDP over existing approximate posit multiplier designs. The proposed <16, 2> multiplier also achieves over 90% accuracy in inference with deep learning models such as ResNet20, VGG-19 and DenseNet. |
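The tapered precision the abstract relies on comes from the posit regime field: a run-length-encoded scale that leaves many fraction bits for values near 1 and few for extreme magnitudes. A minimal Python sketch of a <16, 2> posit decoder (a software model, not the paper's hardware decoder), plus a generic truncation-style approximate mantissa multiply (illustrating the truncation idea only, not the paper's hybrid-radix Booth scheme; the `width`/`keep` parameters are hypothetical):

```python
def decode_posit(bits, n=16, es=2):
    """Decode an n-bit posit with es exponent bits into a float.
    Illustrative software model only, not the paper's hardware decoder."""
    mask = (1 << n) - 1
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # NaR (Not a Real)
    sign = (bits >> (n - 1)) & 1
    if sign:
        bits = (-bits) & mask  # posits negate by two's complement
    # Regime: run of identical bits immediately after the sign bit.
    r0 = (bits >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((bits >> i) & 1) == r0:
        run += 1
        i -= 1
    k = run - 1 if r0 else -run
    if i >= 0:
        i -= 1  # skip the regime's terminating bit
    # Exponent: the next es bits (zero-padded if the encoding ran out of bits).
    exp = 0
    for _ in range(es):
        exp <<= 1
        if i >= 0:
            exp |= (bits >> i) & 1
            i -= 1
    # Fraction: whatever bits remain -- the "tapered" part: a short regime
    # leaves many fraction bits, a long regime leaves few or none.
    frac_bits = i + 1
    frac = bits & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    f = frac / (1 << frac_bits) if frac_bits > 0 else 0.0
    value = 2.0 ** (k * (1 << es) + exp) * (1.0 + f)
    return -value if sign else value


def approx_mantissa_mul(a, b, width=12, keep=8):
    """Truncation-style approximate multiply: drop the low (width - keep)
    bits of each mantissa before multiplying, then rescale. A generic
    sketch of truncation, not the paper's Booth-encoded circuit."""
    drop = width - keep
    return ((a >> drop) * (b >> drop)) << (2 * drop)
```

Under this model, `decode_posit(0x4000)` yields 1.0 and `decode_posit(0x4800)` yields 2.0, while the all-ones pattern `0x7FFF` decodes to the huge maxpos value 2^56 with zero fraction bits left, showing how precision tapers away from 1.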
| format | Article |
| id | doaj-art-9a41624ee347477cbe58f08bb6501d3e |
| institution | DOAJ |
| issn | 2169-3536 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-9a41624ee347477cbe58f08bb6501d3e2025-08-20T02:48:45ZengIEEEIEEE Access2169-35362024-01-0112310363104610.1109/ACCESS.2024.336969510445185ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural NetworksAditya Anirudh Jonnalagadda0https://orcid.org/0009-0004-9029-6403Uppugunduru Anil Kumar1https://orcid.org/0000-0003-4328-6953Rishi Thotli2Satvik Sardesai3Sreehari Veeramachaneni4https://orcid.org/0000-0001-7744-4580Syed Ershad Ahmed5https://orcid.org/0000-0003-0333-9387Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani, Hyderabad Campus, Hyderabad, Telangana, IndiaDepartment of Electronics and Communication Engineering, Faculty of Science and Technology (IcfaiTech), The ICFAI Foundation for Higher Education (Deemed to be University), Hyderabad, Telangana, IndiaDepartment of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani, Hyderabad Campus, Hyderabad, Telangana, IndiaDepartment of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani, Hyderabad Campus, Hyderabad, Telangana, IndiaDepartment of Electronics and Communication Engineering, Gokaraju Rangaraju Institute of Engineering and Technology (GRIET), Hyderabad, Telangana, IndiaDepartment of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani, Hyderabad Campus, Hyderabad, Telangana, IndiaThe posit number system aims to be a drop-in replacement of the existing IEEE floating-point standard. Its properties- tapered precision and high dynamic range, allow a smaller size posit to almost match the performance of a much larger size floating-point in representing decimals. This becomes especially useful for performing error-tolerant tasks like deep learning inference computation where low latency and area are a priority. 
Recent research has found that the performance of deep neural network models saturates beyond a certain level of accuracy of multipliers used for convolutions. Therefore, the extra hardware cost of developing precise arithmetic circuits for such applications becomes an unnecessary overhead. This paper explores approximate posit multipliers in the convolutional layers of deep neural networks and attempts to find an ideal balance between hardware utilization and inference accuracy. Posit multiplication involves several steps, with the mantissa multiplication step utilizing maximum hardware resources. To mitigate this, a posit multiplier circuit using an approximate hybrid-radix Booth encoding for mantissa multiplication and techniques such as truncation and bit masking based on input regime size are proposed. In addition, a novel Booth encoding control scheme to prevent unnecessary bits from switching has been devised to reduce dynamic power dissipation. Compared to existing literature, these optimizations have contributed to a 23% decrease in power dissipation in the mantissa multiplication stage. Further, a novel area and energy-efficient decoder architecture have also been developed with an 11% reduction in dynamic power dissipation and area compared to existing decoders. Overall, the proposed < 16, 2 > posit multiplier offers a 14% reduction in the PDP over the existing approximate posit multiplier designs. The proposed < 16, 2 > multiplier also achieves over 90% accuracy in inferencing deep learning models such as ResNet20, VGG-19 and DenseNet.https://ieeexplore.ieee.org/document/10445185/Approximate posit multipliersdeep neural networksenergy-efficient |
| spellingShingle | Aditya Anirudh Jonnalagadda Uppugunduru Anil Kumar Rishi Thotli Satvik Sardesai Sreehari Veeramachaneni Syed Ershad Ahmed ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks IEEE Access Approximate posit multipliers deep neural networks energy-efficient |
| title | ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks |
| title_full | ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks |
| title_fullStr | ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks |
| title_full_unstemmed | ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks |
| title_short | ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks |
| title_sort | adepnet a dynamic precision efficient posit multiplier for neural networks |
| topic | Approximate posit multipliers deep neural networks energy-efficient |
| url | https://ieeexplore.ieee.org/document/10445185/ |
| work_keys_str_mv | AT adityaanirudhjonnalagadda adepnetadynamicprecisionefficientpositmultiplierforneuralnetworks AT uppugunduruanilkumar adepnetadynamicprecisionefficientpositmultiplierforneuralnetworks AT rishithotli adepnetadynamicprecisionefficientpositmultiplierforneuralnetworks AT satviksardesai adepnetadynamicprecisionefficientpositmultiplierforneuralnetworks AT sreehariveeramachaneni adepnetadynamicprecisionefficientpositmultiplierforneuralnetworks AT syedershadahmed adepnetadynamicprecisionefficientpositmultiplierforneuralnetworks |