How to beat a Bayesian adversary

Deep neural networks and other modern machine learning models are often susceptible to adversarial attacks. Indeed, an adversary may often be able to change a model’s prediction through a small, directed perturbation of the model’s input – an issue in safety-critical applications. Adversarially robu...
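
The snippet below is a minimal, generic sketch of the kind of attack the abstract describes: a small, gradient-directed perturbation of the model's input (an FGSM-style step). It is an illustration only, not the method studied in the article; the model, loss function, and step size epsilon are placeholder assumptions.

```python
# Illustrative sketch of a small, directed input perturbation
# (FGSM-style), NOT the article's method. model, loss_fn, and
# epsilon are placeholder assumptions.
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Return x nudged in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Take a step of size epsilon along the sign of the input gradient.
    return (x + epsilon * x.grad.sign()).detach()
```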

Bibliographic Details
Main Authors: Zihan Ding, Kexin Jin, Jonas Latz, Chenguang Liu
Format: Article
Language: English
Published: Cambridge University Press
Series: European Journal of Applied Mathematics
Online Access: https://www.cambridge.org/core/product/identifier/S0956792525000105/type/journal_article