SymbolNet: neural symbolic regression with adaptive dynamic pruning for compression
Compact symbolic expressions have been shown to be more efficient than neural network (NN) models in terms of resource consumption and inference speed when implemented on custom hardware such as field-programmable gate arrays (FPGAs), while maintaining comparable accuracy (Tsoi et al 2024 EPJ Web Conf. 295 09036). These capabilities are highly valuable in environments with stringent computational resource constraints, such as high-energy physics experiments at the CERN Large Hadron Collider. However, finding compact expressions for high-dimensional datasets remains challenging due to the inherent limitations of genetic programming (GP), the search algorithm used by most symbolic regression (SR) methods. In contrast to GP, the NN approach to SR scales to high-dimensional inputs and leverages gradient methods for faster equation searching. Common ways of constraining expression complexity often involve multistage pruning with fine-tuning, which can result in significant performance loss. In this work, we propose $\tt{SymbolNet}$, a NN approach to SR specifically designed as a model compression technique, aimed at enabling low-latency inference for high-dimensional inputs on custom hardware such as FPGAs. This framework allows dynamic pruning of model weights, input features, and mathematical operators in a single training process, where both training loss and expression complexity are optimized simultaneously. We introduce a sparsity regularization term for each pruning type, which can adaptively adjust its strength, leading to convergence at a target sparsity ratio. Unlike most existing SR methods that struggle with datasets containing more than $\mathcal{O}(10)$ inputs, we demonstrate the effectiveness of our model on the LHC jet tagging task (16 inputs), MNIST (784 inputs), and SVHN (3072 inputs).
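The record itself contains no code, but the mechanism the abstract describes, a regularization term whose strength adapts during training until the model converges at a target sparsity ratio, can be illustrated with a small sketch. The NumPy example below is an assumption-laden toy, not the authors' implementation: a single L1 penalty with proximal (soft-threshold) updates on a linear model stands in for SymbolNet's separate regularizers for weights, features, and operators, and every variable name and constant here is hypothetical.

```python
import numpy as np

# Hedged sketch, NOT the paper's code: a regularization strength `lam` that
# adapts online so the fraction of exactly-zero weights converges to a
# chosen target sparsity ratio.

rng = np.random.default_rng(0)

# Toy regression data: 16 inputs, only features 0 and 3 matter.
n_samples, n_features = 512, 16
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[0], true_w[3] = 3.0, -2.0
y = X @ true_w + 0.01 * rng.normal(size=n_samples)

w = rng.normal(scale=0.1, size=n_features)  # weights subject to pruning
lam = 0.0                                   # adaptive regularization strength
target_sparsity = 14 / 16                   # aim: prune 14 of 16 weights
lr, lam_lr = 0.01, 0.05

for step in range(3000):
    # Gradient step on the mean-squared training loss.
    grad = (2.0 / n_samples) * X.T @ (X @ w - y)
    w = w - lr * grad
    # Proximal step for lam * ||w||_1: soft-thresholding drives small
    # weights to exactly zero, i.e. prunes them dynamically.
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

    # Adapt lam toward the target sparsity ratio: strengthen the penalty
    # while the model is too dense, relax it if pruning overshoots.
    sparsity = np.mean(w == 0.0)
    lam = max(0.0, lam + lam_lr * (target_sparsity - sparsity))

print(f"final sparsity = {np.mean(w == 0.0):.3f}")  # ~0.875 at convergence
print("surviving features:", np.flatnonzero(w))     # expected: [0 3]
```

The feedback rule is the point of the sketch: `lam` is itself updated from the gap between measured and target sparsity, so loss and expression complexity are traded off in a single training run, with no manual schedule of pruning stages and fine-tuning passes, mirroring the single-process claim in the abstract.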
Main Authors: | Ho Fung Tsoi, Vladimir Loncar, Sridhara Dasu, Philip Harris |
---|---|
Format: | Article |
Language: | English |
Published: | IOP Publishing, 2025-01-01 |
Series: | Machine Learning: Science and Technology |
ISSN: | 2632-2153 |
Subjects: | symbolic regression; neural network; dynamic pruning; model compression; low latency; FPGA |
Online Access: | https://doi.org/10.1088/2632-2153/adaad8 |
Affiliations: | Ho Fung Tsoi (ORCID 0000-0002-2550-2184): University of Wisconsin-Madison, Madison, WI 53706, United States of America. Vladimir Loncar (ORCID 0000-0003-3651-0232): Massachusetts Institute of Technology, Cambridge, MA 02139, United States of America; Institute of Physics, Belgrade, Serbia. Sridhara Dasu (ORCID 0000-0001-5993-9045): University of Wisconsin-Madison, Madison, WI 53706, United States of America. Philip Harris (ORCID 0000-0001-8189-3741): Massachusetts Institute of Technology, Cambridge, MA 02139, United States of America; Institute for Artificial Intelligence and Fundamental Interactions, Cambridge, MA 02139, United States of America |
Citation: | Machine Learning: Science and Technology 6(1), 015021 (2025), https://doi.org/10.1088/2632-2153/adaad8 |