Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples


Bibliographic Details
Main Authors: Eda Yilmaz, Hacer Yalim Keles
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11014106/
Description
Summary: We introduce Adversarial Sparse Teacher (AST), a robust defense method against distillation-based model stealing attacks. Our approach trains a teacher model using adversarial examples to produce sparse logit responses and increase the entropy of the output distribution. Typically, a model generates a peak in its output corresponding to its prediction. By leveraging adversarial examples, AST modifies the teacher model's original response, embedding a few altered logits into the output, while keeping the primary response slightly higher. Concurrently, all remaining logits are elevated to further increase the output distribution's entropy. All these complex manipulations are performed using an optimization function with our proposed Exponential Predictive Divergence (EPD) loss function. EPD allows us to maintain higher entropy levels compared to traditional KL divergence, effectively confusing attackers. Experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that AST outperforms state-of-the-art methods, providing effective defense against model stealing, while preserving high accuracy. The source codes are publicly available at https://github.com/codeofanon/AdversarialSparseTeacher
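As a rough illustration of the logit manipulation the summary describes — not the authors' implementation, which is trained via the EPD loss rather than applied as a post-hoc transform — the sketch below shows a teacher response where the primary logit stays slightly highest, a few altered logits sit just below it, and all remaining logits are elevated to raise the entropy of the softmax output. The function name and the `n_altered`, `margin`, and `base_gap` parameters are hypothetical; the paper does not specify these magnitudes here.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def sparse_high_entropy_logits(logits, n_altered=3, margin=0.2, base_gap=1.0, rng=None):
    """Illustrative perturbation in the spirit of the AST description.

    - The primary (argmax) logit is kept slightly higher than everything else.
    - A few randomly chosen "altered" logits are embedded just below it.
    - All remaining logits are elevated close to the peak, which raises the
      entropy of the resulting softmax distribution.
    Magnitudes (margin, base_gap) are guesses for illustration only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    out = np.asarray(logits, dtype=float).copy()
    top = int(np.argmax(out))
    peak = out[top]
    others = np.array([i for i in range(out.size) if i != top])
    # Elevate all non-primary logits near the peak -> flatter, higher-entropy output.
    out[others] = peak - base_gap
    # Embed a few altered logits just below the primary response.
    altered = rng.choice(others, size=min(n_altered, others.size), replace=False)
    out[altered] = peak - margin
    return out

# A peaked teacher response becomes much flatter while the prediction is preserved.
z = np.array([0.0, 1.0, 8.0, 0.5, -1.0])
entropy = lambda p: float(-(p * np.log(p)).sum())
perturbed = sparse_high_entropy_logits(z)
```

A distillation attacker training on such outputs receives far less of the teacher's true confidence structure, which is the intuition the summary gives for why higher-entropy, sparsely altered responses degrade model stealing.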
ISSN:2169-3536