Deep representation learning using layer-wise VICReg losses
Abstract: This paper presents a layer-wise training procedure for neural networks that minimizes a Variance-Invariance-Covariance Regularization (VICReg) loss at each layer. The procedure is beneficial when annotated data are scarce but sufficient unlabeled data are available. Being able to update the parame...
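The abstract describes minimizing a VICReg loss at every layer. For orientation, below is a minimal PyTorch sketch of the standard VICReg objective (invariance, variance, and covariance terms) that such a per-layer loss would instantiate; the coefficient values, the hinge target `gamma`, and the function name are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the VICReg loss applied to one layer's embeddings of two
# augmented views. Coefficients follow common defaults and are assumptions.
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    """z_a, z_b: (batch, dim) embeddings of two views of the same inputs."""
    n, d = z_a.shape

    # Invariance: pull the two views' embeddings together.
    inv = F.mse_loss(z_a, z_b)

    # Variance: hinge keeping each dimension's std above gamma (anti-collapse).
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = F.relu(gamma - std_a).mean() + F.relu(gamma - std_b).mean()

    # Covariance: penalize off-diagonal covariance entries (decorrelation).
    z_a_c = z_a - z_a.mean(dim=0)
    z_b_c = z_b - z_b.mean(dim=0)
    cov_a = (z_a_c.T @ z_a_c) / (n - 1)
    cov_b = (z_b_c.T @ z_b_c) / (n - 1)

    def off_diagonal(m):
        # Return the off-diagonal elements of a square (d, d) matrix.
        return m.flatten()[:-1].view(d - 1, d + 1)[:, 1:].flatten()

    cov = off_diagonal(cov_a).pow(2).sum() / d + off_diagonal(cov_b).pow(2).sum() / d

    return lam * inv + mu * var + nu * cov
```

In a layer-wise scheme like the one the abstract outlines, this loss would be computed on each layer's output so that a layer's parameters can be updated from its local loss rather than from a gradient backpropagated through the whole network.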
| Main Authors: | Joy Datta, Rawhatur Rabbi, Puja Saha, Aniqua Nusrat Zereen, M. Abdullah-Al-Wadud, Jia Uddin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-08504-2 |
Similar Items
- Local Back-Propagation for Forward-Forward Networks: Independent Unsupervised Layer-Wise Training
  by: Taewook Hwang, et al.
  Published: (2025-07-01)
- Replacing Backpropagation with the Forward-Forward (FF) Algorithm in Transformer Models: A Theoretical and Empirical Study on Scalable and Efficient Gradient-Free Training
  by: Hyun Jung Kim, et al.
  Published: (2025-01-01)
- FORWARD TRANSACTIONS CONCLUSION
  by: G.N. KUKOVINETS, et al.
  Published: (2010-04-01)
- Accompaniment, Silence, or Tolerance of the Islamic Legislator with the Conduct of the Wise
  by: Mahdi Montazerqaem
  Published: (2024-12-01)
- Ensuring Fair Compensation: Analyzing and Adjusting Freight Forwarder Liability Limits
  by: Miloš Poliak, et al.
  Published: (2024-04-01)