Learning atomic forces from uncertainty-calibrated adversarial attacks
Abstract: Adversarial approaches, which intentionally challenge machine learning models by generating difficult examples, are increasingly being adopted to improve machine learning interatomic potentials (MLIPs). While these approaches already provide great practical value, little is known about the actual predicti...
| Main Authors: | Henrique Musseli Cezar, Tilmann Bodenstein, Henrik Andersen Sveinsson, Morten Ledum, Simen Reine, Sigbjørn Løland Bore |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | npj Computational Materials |
| Online Access: | https://doi.org/10.1038/s41524-025-01703-5 |
Similar Items
- Point Cloud Adversarial Perturbation Generation for Adversarial Attacks
  by: Fengmei He, et al.
  Published: (2023-01-01)
- Distinct creep regimes of methane hydrates predicted by a monatomic water model
  by: Henrik Andersen Sveinsson, et al.
  Published: (2025-01-01)
- An Adversarial Attack via Penalty Method
  by: Jiyuan Sun, et al.
  Published: (2025-01-01)
- DOG: An Object Detection Adversarial Attack Method
  by: Jinpeng Li, et al.
  Published: (2025-01-01)
- Incremental Adversarial Learning for Polymorphic Attack Detection
  by: Ulya Sabeel, et al.
  Published: (2024-01-01)