RADAR: Reasoning AI-Generated Image Detection for Semantic Fakes
| Main Authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Technologies |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2227-7080/13/7/280 |
| Summary: | As modern generative models advance rapidly, AI-generated images exhibit higher resolution and lifelike details. However, the generated images may not adhere to world knowledge and common sense, as there is no such awareness or supervision in the generative models. For instance, the generated images could feature a penguin walking in the desert or a man with three arms, scenarios that are highly unlikely to occur in real life. Current AI-generated image detection methods mainly focus on low-level features, such as detailed texture patterns and frequency-domain inconsistencies, which are specific to certain generative models, making it challenging to identify the above-mentioned general semantic fakes. In this work, (1) we propose a new task, reasoning AI-generated image detection, which focuses on identifying semantic fakes in generated images that violate world knowledge and common sense. (2) To benchmark the new task, we collect a new dataset, Spot the Semantic Fake (STSF). STSF contains 358 images with clear semantic fakes generated by three different modern diffusion models and provides bounding boxes as well as text annotations to locate the fakes. (3) We propose RADAR, a reasoning AI-generated image detection assistor, to locate semantic fakes in generated images and output corresponding text explanations. Specifically, RADAR contains a specialized multimodal LLM to process given images and detect semantic fakes. To improve generalization, we further incorporate ChatGPT as an assistor to detect unrealistic components in grounded text descriptions. Experiments on the STSF dataset show that RADAR effectively detects semantic fakes in modern generative images. |
| ISSN: | 2227-7080 |
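The summary describes a two-stage pipeline: a multimodal LLM first produces a grounded text description of the image, and a second model then judges whether any described component violates world knowledge. The sketch below illustrates that control flow only; both stages are mocked (the function names, the canned description, and the toy knowledge table are all hypothetical, not from the paper), since the real system would call trained models.

```python
# Hypothetical sketch of the two-stage idea from the abstract:
# (1) obtain a grounded text description of the image,
# (2) check that description against world knowledge for implausible parts.
# Both stages are stand-ins here; a real system would invoke a multimodal
# LLM for stage 1 and an assistor LLM (e.g., ChatGPT) for stage 2.

def grounded_description(image_path: str) -> str:
    """Stage 1 (mocked): a multimodal LLM would describe the image and
    ground each object to a region. Here we return a canned description."""
    return "a penguin walking in the desert"

# Toy world-knowledge table: (subject, context) pairs that rarely co-occur
# in real photographs. Purely illustrative.
IMPLAUSIBLE_PAIRS = {
    ("penguin", "desert"),
    ("polar bear", "jungle"),
}

def flag_semantic_fakes(description: str) -> list[tuple[str, str]]:
    """Stage 2 (mocked): an assistor LLM would judge plausibility; here we
    do a simple lookup over the toy table."""
    flags = []
    for subject, context in IMPLAUSIBLE_PAIRS:
        if subject in description and context in description:
            flags.append((subject, context))
    return flags

flags = flag_semantic_fakes(grounded_description("sample.png"))
print(flags)  # [('penguin', 'desert')]
```

The point of routing detection through text is generalization: the plausibility check operates on described content rather than on generator-specific pixel statistics, which is why the abstract argues it can catch semantic fakes that low-level detectors miss.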