Selective learning for sensing using shift-invariant spectrally stable undersampled networks
Abstract The amount of data collected for sensing tasks in scientific computing is based on the Shannon-Nyquist sampling theorem proposed in the 1940s. Sensor data generation will surpass 73 trillion GB by 2025 as we increase the high-fidelity digitization of the physical world. Skyrocketing data infrastructure costs and time to maintain and compute on all this data are increasingly common. To address this, we introduce a selective learning approach, where the amount of data collected is problem dependent. We develop novel shift-invariant and spectrally stable neural networks to solve real-time sensing problems formulated as classification or regression problems. We demonstrate that (i) less data can be collected while preserving information, and (ii) test accuracy improves with data augmentation (size of training data), rather than by collecting more than a certain fraction of raw data, unlike information theoretic approaches. While sampling at Nyquist rates, every data point does not have to be resolved at Nyquist and the network learns the amount of data to be collected. This has significant implications (orders of magnitude reduction) on the amount of data collected, computation, power, time, bandwidth, and latency required for several embedded applications ranging from low earth orbit economy to unmanned underwater vehicles.
Main Authors: Ankur Verma, Ayush Goyal, Sanjay Sarma, Soundar Kumara
Format: Article
Language: English
Published: Nature Portfolio, 2024-12-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-83706-8
_version_ | 1841559542442229760 |
author | Ankur Verma; Ayush Goyal; Sanjay Sarma; Soundar Kumara |
author_facet | Ankur Verma; Ayush Goyal; Sanjay Sarma; Soundar Kumara |
author_sort | Ankur Verma |
collection | DOAJ |
description | Abstract The amount of data collected for sensing tasks in scientific computing is based on the Shannon-Nyquist sampling theorem proposed in the 1940s. Sensor data generation will surpass 73 trillion GB by 2025 as we increase the high-fidelity digitization of the physical world. Skyrocketing data infrastructure costs and time to maintain and compute on all this data are increasingly common. To address this, we introduce a selective learning approach, where the amount of data collected is problem dependent. We develop novel shift-invariant and spectrally stable neural networks to solve real-time sensing problems formulated as classification or regression problems. We demonstrate that (i) less data can be collected while preserving information, and (ii) test accuracy improves with data augmentation (size of training data), rather than by collecting more than a certain fraction of raw data, unlike information theoretic approaches. While sampling at Nyquist rates, every data point does not have to be resolved at Nyquist and the network learns the amount of data to be collected. This has significant implications (orders of magnitude reduction) on the amount of data collected, computation, power, time, bandwidth, and latency required for several embedded applications ranging from low earth orbit economy to unmanned underwater vehicles. |
format | Article |
id | doaj-art-758b7d5350e5403b8c3bb45a67f00762 |
institution | Kabale University |
issn | 2045-2322 |
language | English |
publishDate | 2024-12-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj-art-758b7d5350e5403b8c3bb45a67f00762 | 2025-01-05T12:24:39Z | eng | Nature Portfolio | Scientific Reports | 2045-2322 | 2024-12-01 | 10.1038/s41598-024-83706-8 | Selective learning for sensing using shift-invariant spectrally stable undersampled networks | Ankur Verma, Department of Industrial and Manufacturing Engineering, The Pennsylvania State University; Ayush Goyal, Department of Computer Science and Engineering, The Pennsylvania State University; Sanjay Sarma, Department of Mechanical Engineering, Massachusetts Institute of Technology; Soundar Kumara, Department of Industrial and Manufacturing Engineering, The Pennsylvania State University | https://doi.org/10.1038/s41598-024-83706-8 |
spellingShingle | Ankur Verma Ayush Goyal Sanjay Sarma Soundar Kumara Selective learning for sensing using shift-invariant spectrally stable undersampled networks Scientific Reports |
title | Selective learning for sensing using shift-invariant spectrally stable undersampled networks |
title_full | Selective learning for sensing using shift-invariant spectrally stable undersampled networks |
title_fullStr | Selective learning for sensing using shift-invariant spectrally stable undersampled networks |
title_full_unstemmed | Selective learning for sensing using shift-invariant spectrally stable undersampled networks |
title_short | Selective learning for sensing using shift-invariant spectrally stable undersampled networks |
title_sort | selective learning for sensing using shift invariant spectrally stable undersampled networks |
url | https://doi.org/10.1038/s41598-024-83706-8 |
work_keys_str_mv | AT ankurverma selectivelearningforsensingusingshiftinvariantspectrallystableundersamplednetworks AT ayushgoyal selectivelearningforsensingusingshiftinvariantspectrallystableundersamplednetworks AT sanjaysarma selectivelearningforsensingusingshiftinvariantspectrallystableundersamplednetworks AT soundarkumara selectivelearningforsensingusingshiftinvariantspectrallystableundersamplednetworks |