ENHANCING EXPLAINABILITY IN DEEPFAKE DETECTION WITH GRAPH ATTENTION NETWORKS
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Joint Stock Company "Experimental Scientific and Production Association SPELS", 2025-05-01 |
| Series: | Безопасность информационных технологий |
| Subjects: | |
| Online Access: | https://bit.spels.ru/index.php/bit/article/view/1776 |
| Summary: | Understanding how artificial intelligence models make decisions is important, especially for difficult tasks such as deepfake detection, where obtaining a result is not enough; we also need to know why the model made that choice. Many current methods, such as Shapley additive explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM), help explain these decisions, but they are often not detailed enough for tasks involving complex data such as human faces. This paper introduces a new method that uses Graph Attention Networks (GATs) to explain deepfake detection. Instead of treating the image as a whole, the face is converted into a graph in which each key facial region (such as the eyes, nose, and mouth) is a separate node. This helps the model focus on the most important areas. Using attention mechanisms, the model highlights which parts of the face influenced its decision, making the process easier to understand. Two versions of the model, GATv1 and GATv2, are compared, and both are shown to produce clear visual explanations while still performing well at detecting deepfakes. This approach makes it easier to see how the model reaches its conclusions, improving trust and transparency. The code is freely available at https://github.com/aleksandrpikul/ResGAT/tree/main. |
| ISSN: | 2074-7128, 2074-7136 |
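For intuition, here is a minimal NumPy sketch of the kind of single-head GATv1-style attention layer the abstract describes, applied to a small facial-landmark graph. The landmark set, feature sizes, and random weights are all illustrative placeholders, not the authors' ResGAT implementation; the attention weights `alpha` play the role of the per-region explanations the paper visualizes (GATv2 differs mainly in where the nonlinearity is applied when scoring node pairs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical face graph: 5 landmark nodes (left eye, right eye, nose,
# left/right mouth corners), fully connected with self-loops. The node
# set and feature sizes are assumptions for illustration only.
N, F_in, F_out = 5, 8, 4
adj = np.ones((N, N))                    # adjacency (1 = edge, incl. self-loops)
h = rng.standard_normal((N, F_in))       # per-landmark input features

W = rng.standard_normal((F_in, F_out))   # shared linear transform
a = rng.standard_normal(2 * F_out)       # attention vector a = [a_l || a_r]

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(h, adj, W, a):
    """One single-head GATv1-style attention layer:
    e_ij = LeakyReLU(a_l . z_i + a_r . z_j), alpha_i = softmax_j(e_ij),
    out_i = sum_j alpha_ij * z_j."""
    F_out = W.shape[1]
    z = h @ W                                        # (N, F_out)
    left = z @ a[:F_out]                             # a_l . z_i, shape (N,)
    right = z @ a[F_out:]                            # a_r . z_j, shape (N,)
    e = leaky_relu(left[:, None] + right[None, :])   # raw pair scores (N, N)
    e = np.where(adj > 0, e, -np.inf)                # mask non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True)) # numerically stable softmax
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ z, alpha                          # new features, attention map

out, alpha = gat_layer(h, adj, W, a)
print(alpha.round(2))  # rows: how much each landmark attends to the others
```

Inspecting a row of `alpha` shows which landmarks most influenced that node's updated representation, which is the graph-level analogue of the saliency maps produced by SHAP or Grad-CAM.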