How to beat a Bayesian adversary
Deep neural networks and other modern machine learning models are often susceptible to adversarial attacks. Indeed, an adversary may often be able to change a model’s prediction through a small, directed perturbation of the model’s input – an issue in safety-critical applications. Adversarially robu...
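The abstract describes how a small, directed input perturbation can flip a model's prediction. The paper's own Bayesian/adversarial training method is not reproduced here; purely as an illustration, the sketch below shows the standard fast gradient sign method (FGSM) in PyTorch, one common way such a perturbation is constructed. The function name and the epsilon value are illustrative assumptions, not taken from the article.

```python
# Minimal FGSM sketch (illustrative only, not the paper's method):
# perturb the input in the direction that most increases the loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small, directed perturbation of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient: a tiny change that can
    # nonetheless change the model's prediction.
    return (x + epsilon * x.grad.sign()).detach()
```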
| Main Authors: | Zihan Ding, Kexin Jin, Jonas Latz, Chenguang Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Cambridge University Press |
| Series: | European Journal of Applied Mathematics |
| Online Access: | https://www.cambridge.org/core/product/identifier/S0956792525000105/type/journal_article |
Similar Items
- A limit theorem of nonlinear filtering for multiscale McKean–Vlasov stochastic systems
  by: Qiao, Huijie, et al.
  Published: (2024-11-01)
- Global stability for McKean–Vlasov equations on large networks
  by: Christian Kuehn, et al.
- The Estimation of a Signal Generated by a Dynamical System Modeled by McKean–Vlasov Stochastic Differential Equations Under Sampled Measurements
  by: Vasile Dragan, et al.
  Published: (2025-05-01)
- Convergence and Stability of the Truncated Stochastic Theta Method for McKean–Vlasov Stochastic Differential Equations Under Local Lipschitz Conditions
  by: Hongxia Chu, et al.
  Published: (2025-07-01)
- Characters and transfer maps via categorified traces
  by: Shachar Carmeli, et al.
  Published: (2025-01-01)