Cross-stream attention enhanced central difference convolutional network for CG image detection
Main Authors:
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD, 2024-12-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
Subjects:
Online Access: http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2024083
Summary: With the maturation of computer graphics (CG) technology for image generation, the realism of synthesized images has improved significantly. Although these technologies are widely used in daily life and bring many conveniences, they also introduce security risks: if forged images generated with CG technology are used maliciously and spread widely on the Internet and social media, they may harm the rights and interests of individuals and enterprises. Therefore, a cross-stream attention enhanced central difference convolutional network was proposed to improve the accuracy of CG image detection. A dual-stream structure was constructed in the model to extract semantic features and non-semantic residual texture features from the image. The vanilla convolutional layers in each stream were replaced with central difference convolutions, allowing the model to extract pixel intensity information and pixel gradient information simultaneously. Furthermore, a cross-stream attention enhancement module was introduced to strengthen feature extraction at the global level and promote complementarity between the two feature streams. Experimental results demonstrate that the method outperforms existing methods, and a series of ablation experiments further verifies the rationality of the proposed design.
ISSN: 2096-109X
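The paper itself is not reproduced in this record, but the central difference convolution named in the summary has a commonly used formulation: the output of a vanilla convolution is combined with a central-difference (pixel-gradient) term weighted by a factor theta. The sketch below illustrates that formulation in PyTorch; the class name `CentralDifferenceConv2d`, the default `theta=0.7`, and all other parameter choices are assumptions for illustration and are not taken from the paper, whose dual-stream layout and cross-stream attention module are not reproduced here.

```python
# Minimal sketch (assumption, not the paper's code): a common central
# difference convolution (CDC) formulation,
#   y(p0) = sum_pn w(pn) * x(p0 + pn)  -  theta * x(p0) * sum_pn w(pn),
# i.e. a vanilla convolution response minus theta times the centre pixel
# weighted by the spatial sum of each kernel. Names/defaults are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralDifferenceConv2d(nn.Module):
    """Drop-in replacement for nn.Conv2d that adds a central-difference term."""

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1,
                 bias=False, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding,
                              dilation=dilation, groups=groups, bias=bias)
        self.theta = theta  # theta = 0.0 recovers a vanilla convolution

    def forward(self, x):
        out_vanilla = self.conv(x)  # pixel intensity response
        if self.theta == 0.0:
            return out_vanilla
        # Summing each kernel over its spatial extent gives the weight that
        # the central-difference term applies to the centre pixel itself.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum, bias=None,
                              stride=self.conv.stride,
                              padding=0, groups=self.conv.groups)
        # Combine intensity and gradient information in one response.
        return out_vanilla - self.theta * out_center


if __name__ == "__main__":
    # Smoke test on a random RGB image batch.
    layer = CentralDifferenceConv2d(3, 16)
    y = layer(torch.randn(2, 3, 224, 224))
    print(y.shape)  # torch.Size([2, 16, 224, 224])
```

In this formulation the operator stays a drop-in replacement for `nn.Conv2d`, which is consistent with the summary's statement that vanilla convolutional layers in each stream were replaced by central difference convolutions; how the paper sets theta or wires the two streams together is not specified in this record.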