Scaling of hardware-compatible perturbative training algorithms
In this work, we explore the capabilities of multiplexed gradient descent (MGD), a scalable and efficient perturbative zeroth-order training method that estimates the gradient of a loss function directly in hardware and trains the hardware via stochastic gradient descent. We extend the framework to include both weig...
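The abstract describes perturbative zeroth-order gradient estimation, in which parameters are jittered and the resulting change in loss is used to approximate the gradient without backpropagation. As a minimal illustrative sketch (not the MGD algorithm from the article itself, whose multiplexing scheme is not detailed here), the following shows a simultaneous-perturbation (SPSA-style) gradient estimate on a toy quadratic loss; all function and variable names are hypothetical:

```python
import numpy as np

def perturbative_gradient(loss, w, eps=1e-2, rng=None):
    """Estimate the gradient of `loss` at `w` from one random
    simultaneous perturbation (SPSA-style central difference)."""
    rng = rng or np.random.default_rng(0)
    # Rademacher (+/-1) perturbation applied to all parameters at once
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    # central-difference estimate along the random direction
    g = (loss(w + eps * delta) - loss(w - eps * delta)) / (2 * eps)
    return g * delta  # for +/-1 entries, multiplying equals dividing by delta

# Toy example: descend a quadratic bowl using only loss evaluations.
loss = lambda w: float(np.sum(w ** 2))
rng = np.random.default_rng(42)
w = np.array([1.0, -2.0])
for _ in range(500):
    w = w - 0.05 * perturbative_gradient(loss, w, rng=rng)
```

Because each estimate needs only forward loss evaluations, this style of update is attractive for analog or neuromorphic hardware where backpropagation is impractical.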
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | AIP Publishing LLC, 2025-06-01 |
| Series: | APL Machine Learning |
| Online Access: | http://dx.doi.org/10.1063/5.0258271 |