Switched 32-Bit Fixed-Point Format for Laplacian-Distributed Data
The 32-bit floating-point (FP32) format has many useful applications, particularly in computing and neural network systems. The classic 32-bit fixed-point (FXP32) format often introduces lower quality of representation (i.e., precision), making it unsuitable for real deployment, despite offering faster computations and reduced computational cost, which positively impacts energy efficiency. In this paper, we propose a switched FXP32 format able to compete with or surpass the widely used FP32 format across a wide variance range. It switches between the possible values of key parameters according to the variance level of the data, modeled with the Laplacian distribution. Precision analysis is carried out using the signal-to-quantization noise ratio (SQNR) as a performance metric, introduced based on the analogy between digital formats and quantization. Theoretical SQNR results provided over a wide range of variances confirm the design objectives. Experimental and simulation results obtained using neural network weights further support the approach. The strong agreement between experiment, simulation, and theory indicates the efficiency of this proposal in encoding Laplacian data, as well as its potential applicability in neural networks.
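The abstract's core idea, choosing the fixed-point parameterization from the variance level rather than using one split for all data, can be illustrated with a small sketch. Everything below is an illustrative assumption, not the authors' actual scheme: the function names, the rule that the integer part should cover roughly eight standard deviations, and the resulting fractional-bit formula are all invented for demonstration.

```python
import numpy as np

def fxp32_quantize(x, frac_bits):
    """Uniform 32-bit fixed-point: 1 sign bit, (31 - frac_bits) integer
    bits, frac_bits fractional bits; values are clipped to the range."""
    step = 2.0 ** -frac_bits
    lo, hi = -2.0 ** (31 - frac_bits), 2.0 ** (31 - frac_bits) - step
    return np.clip(np.round(x / step) * step, lo, hi)

def sqnr_db(x, xq):
    # Signal-to-quantization-noise ratio in dB.
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))

rng = np.random.default_rng(0)
for sigma in (1e-3, 1.0, 1e3):
    # Laplacian source with standard deviation sigma (scale = sigma/sqrt(2)).
    x = rng.laplace(scale=sigma / np.sqrt(2), size=200_000)

    # Classic FXP32: one fixed integer/fraction split for every variance.
    fixed = sqnr_db(x, fxp32_quantize(x, frac_bits=16))

    # "Switched" sketch: pick the fractional bits from the variance level so
    # the integer part just covers ~8 standard deviations (assumed rule).
    k = int(np.clip(30 - np.ceil(np.log2(8 * sigma)), 0, 31))
    switched = sqnr_db(x, fxp32_quantize(x, frac_bits=k))

    print(f"sigma={sigma:g}: fixed split {fixed:.1f} dB, switched {switched:.1f} dB")
```

For low-variance data such as neural network weights, the fixed split wastes most of its integer bits and its SQNR collapses, while the variance-adapted split keeps the quantization step proportional to the signal scale, which is the effect the paper's switching mechanism targets.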
| Main Authors: | Bojan Denić, Zoran Perić, Milan Dinčić, Sofija Perić, Nikola Simić, Marko Anđelković |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Information |
| Subjects: | fixed-point format; uniform quantization; Laplacian source; SQNR |
| Online Access: | https://www.mdpi.com/2078-2489/16/7/574 |
| Affiliations: | Bojan Denić, Zoran Perić, Milan Dinčić, Sofija Perić: Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 4, 18000 Niš, Serbia. Nikola Simić: Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21102 Novi Sad, Serbia. Marko Anđelković: IHP—Leibniz-Institut für Innovative Mikroelektronik, Im Technologiepark 25, 15236 Frankfurt (Oder), Germany |
|---|---|
| ISSN: | 2078-2489 |
| DOI: | 10.3390/info16070574 |