Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense


Bibliographic Details
Main Authors: Filippo Bartolucci, Iacopo Masi, Giuseppe Lisanti
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10980274/
_version_ 1850272217496027136
author Filippo Bartolucci
Iacopo Masi
Giuseppe Lisanti
author_facet Filippo Bartolucci
Iacopo Masi
Giuseppe Lisanti
author_sort Filippo Bartolucci
collection DOAJ
description Image manipulation detection has gained significant attention due to the rise of Generative Models (GMs). Passive detection methods often overfit to specific GMs, limiting their effectiveness. Recently, proactive approaches have been introduced to overcome this limitation. However, these methods suffer from two vulnerabilities: i) the manipulation detector is not robust to noise and hence can be easily fooled; ii) they rely on fixed perturbations for image protection, which gives malicious attackers an opening to evade detection. To overcome these issues, we propose PADL, a novel solution that creates image-specific perturbations for protecting images. PADL’s key objective is to provide a secure and adaptive protection mechanism that ensures the authenticity of images by detecting and localizing manipulations, drastically reducing the possibility of reverse engineering. The method consists of two key components: an encoder, which conditions a learnable perturbation on the input image to ensure uniqueness and robustness against attacks, and a decoder, which extracts the perturbation and leverages it for manipulation detection and localization. PADL can detect manipulation of a protected image and pinpoint regions that have undergone alterations. Unlike previous proactive defenses that rely on a finite set of perturbations, PADL’s tailored protection significantly reduces the risk of reverse engineering. Although trained only on images of faces manipulated with STGAN, PADL generalizes to a range of unseen models with diverse architectural designs, such as StarGANv2, CycleGAN, BlendGAN, DiffAE, StableDiffusion, and StableDiffusionXL, as well as to unseen data domains. Finally, we propose a novel evaluation protocol that fairly assesses localization performance in relation to detection accuracy, providing a better reflection of real-world scenarios.
Future research will aim to extend PADL to more challenging scenarios, including video content protection and high-resolution images, ensuring its effectiveness across diverse media formats and real-world applications. The source code will be publicly released.
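The encoder/decoder scheme described in the abstract can be illustrated with a toy numpy sketch. This is not PADL's actual architecture (which uses learned networks trained end to end): here an image-specific pseudo-random pattern stands in for the learned perturbation, a high-pass filter stands in for the decoder's extraction step, and per-block correlation against the expected pattern yields both detection and a coarse localization mask. All function names, the seeding scheme, and the parameters are illustrative assumptions.

```python
import numpy as np

BLOCK = 8  # localization granularity in pixels (toy choice)

def box_blur(x):
    """3x3-style edge-replicated blur, used as a crude low-pass filter."""
    p = np.pad(x, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
            + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

def protect(image, strength=0.02):
    """Toy 'encoder': derive an image-specific pseudo-random perturbation
    (seeded by the image content, so each image gets its own pattern)
    and embed it additively."""
    seed = int(abs(image.sum()) * 1e6) % (2 ** 32)
    rng = np.random.default_rng(seed)
    pert = rng.standard_normal(image.shape) * strength
    return image + pert, pert

def localize(received, pert, thr=0.5):
    """Toy 'decoder': recover the high-frequency residual and correlate it
    block by block with the expected perturbation. Blocks where the
    correlation collapses are flagged as manipulated."""
    residual = received - box_blur(received)
    h, w = received.shape
    mask = np.zeros((h // BLOCK, w // BLOCK), dtype=bool)
    for i in range(h // BLOCK):
        for j in range(w // BLOCK):
            a = residual[i*BLOCK:(i+1)*BLOCK, j*BLOCK:(j+1)*BLOCK].ravel()
            b = pert[i*BLOCK:(i+1)*BLOCK, j*BLOCK:(j+1)*BLOCK].ravel()
            denom = a.std() * b.std()
            corr = 0.0 if denom < 1e-12 else float(
                np.mean((a - a.mean()) * (b - b.mean())) / denom)
            mask[i, j] = corr < thr  # low correlation: pattern destroyed
    return mask

# Demo: protect a smooth image, then overwrite a 16x16 region.
image = np.outer(np.linspace(0.2, 0.8, 64), np.ones(64))
protected, pert = protect(image)
edited = protected.copy()
edited[16:32, 16:32] = 0.5  # local manipulation wipes the perturbation
print("intact image flagged:", localize(protected, pert).any())
print("edited blocks flagged:", int(localize(edited, pert).sum()))
```

The sketch mirrors the key property the abstract claims: because the embedded pattern is a function of the image itself rather than a fixed template, an attacker cannot learn one universal perturbation to strip or forge, and any local edit destroys the pattern exactly where it occurred, which is what the block-wise check picks up.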
format Article
id doaj-art-e8fdada8d7094883abdc04576baa416f
institution OA Journals
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-e8fdada8d7094883abdc04576baa416f 2025-08-20T01:51:54Z
eng | IEEE | IEEE Access | ISSN 2169-3536 | 2025-01-01 | Vol. 13, pp. 81755-81768 | DOI 10.1109/ACCESS.2025.3565824 | Document 10980274
Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
Filippo Bartolucci (https://orcid.org/0009-0001-4182-5477), Computer Science and Engineering Department, CVLab, University of Bologna, Bologna, Italy
Iacopo Masi (https://orcid.org/0000-0003-0444-7646), Computer Science Department, OmnAI Laboratory, University of Rome Sapienza, Rome, Italy
Giuseppe Lisanti (https://orcid.org/0000-0002-0785-9972), Computer Science and Engineering Department, CVLab, University of Bologna, Bologna, Italy
Abstract: as given in the description field above.
https://ieeexplore.ieee.org/document/10980274/
Keywords: Image manipulation detection; image manipulation localization; media forensics; proactive image defense
spellingShingle Filippo Bartolucci
Iacopo Masi
Giuseppe Lisanti
Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
IEEE Access
Image manipulation detection
image manipulation localization
media forensics
proactive image defense
title Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
title_full Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
title_fullStr Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
title_full_unstemmed Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
title_short Perturb, Attend, Detect, and Localize (PADL): Robust Proactive Image Defense
title_sort perturb attend detect and localize padl robust proactive image defense
topic Image manipulation detection
image manipulation localization
media forensics
proactive image defense
url https://ieeexplore.ieee.org/document/10980274/
work_keys_str_mv AT filippobartolucci perturbattenddetectandlocalizepadlrobustproactiveimagedefense
AT iacopomasi perturbattenddetectandlocalizepadlrobustproactiveimagedefense
AT giuseppelisanti perturbattenddetectandlocalizepadlrobustproactiveimagedefense