An edge sensitivity based gradient attack on graph isomorphic networks for graph classification problems
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-04-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-97956-7 |
| Summary: | Abstract: Graph Neural Networks (GNNs) have gained popularity over the past few years. Their ability to model relationships between entities of the same and different kinds, represent molecules, model flows, etc., has made them a go-to tool for researchers. However, owing to the abstract nature of graphs, there exists no ideal transformation to represent nodes and edges in Euclidean space. Moreover, GNNs are highly susceptible to adversarial attacks, yet a gradient-based attack built on latent space embeddings does not exist in the GNN literature. Such attacks, classified as white-box attacks, tamper with the latent space representation of graphs without creating any noticeable difference in the overall distribution. Developing and testing GNN models against such attacks on graph classification tasks would enable researchers to understand and build stronger, more robust classification systems. Further, adversarial attack tests in the GNN literature have been performed on weaker, less representative neural network architectures. To address these gaps in the literature, we propose a white-box gradient-based attack developed from contrastive latent space representations. Further, we develop a strong base (victim) model that learns spectral and spatial properties of graphs while accounting for isomorphic properties. We experimentally validate this model on 4 benchmark datasets from the molecular property prediction literature, where it outperformed over 75% of all LLM-based architectures. When attacked with our proposed adversarial strategy, the model's overall performance drops by an average of 25%, thereby closing several gaps in the existing literature. (An illustrative sketch of a gradient-based edge attack appears below this record.) The code for our paper can be found at https://github.com/Deceptrax123/An-edge-sensitivity-based-gradient-attack-on-GIN-for-inductive-problems |
| ISSN: | 2045-2322 |
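
The summary describes differentiating a classification loss with respect to graph structure and flipping the most sensitive edges of a GIN victim model. The following is a minimal, hypothetical sketch of that general pattern only: the names (`VictimGIN`, `edge_gradient_attack`), the dense-adjacency GIN formulation, the first-order flip-scoring rule, and the `budget` parameter are all assumptions of this illustration, not the authors' released method; see the linked repository for the actual implementation.

```python
# Illustrative sketch only -- not the paper's code. Assumes a dense
# adjacency matrix and a one-shot, white-box, gradient-based edge flip.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VictimGIN(nn.Module):
    """Dense-adjacency GIN stack: h' = MLP((1 + eps) * h + A @ h),
    followed by sum pooling and a linear readout for graph classification."""

    def __init__(self, in_dim, hid_dim, n_classes, n_layers=3):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(n_layers))
        dims = [in_dim] + [hid_dim] * n_layers
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hid_dim), nn.ReLU(),
                          nn.Linear(hid_dim, hid_dim))
            for d in dims[:-1]
        )
        self.readout = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        h = x
        for i, mlp in enumerate(self.mlps):
            h = mlp((1.0 + self.eps[i]) * h + adj @ h)
        return self.readout(h.sum(dim=0))  # sum pooling -> class logits


def edge_gradient_attack(model, x, adj, label, budget=2):
    """One-shot white-box attack: flip the `budget` edge entries whose
    loss gradient signals the largest increase in classification loss."""
    adj = adj.clone().requires_grad_(True)
    logits = model(x, adj).unsqueeze(0)
    loss = F.cross_entropy(logits, torch.tensor([label]))
    (grad,) = torch.autograd.grad(loss, adj)
    # Flipping entry (i, j) changes adj by +1 (add edge) or -1 (remove),
    # so the first-order loss change of a flip is grad * (1 - 2 * adj).
    score = grad * (1.0 - 2.0 * adj.detach())
    score = torch.triu(score, diagonal=1)  # undirected graph, no self-loops
    top = score.flatten().topk(budget).indices
    perturbed = adj.detach().clone()
    n = adj.size(0)
    for idx in top:
        i, j = divmod(int(idx), n)
        perturbed[i, j] = perturbed[j, i] = 1.0 - perturbed[i, j]
    return perturbed


# Toy usage on a random 6-node graph with 4-dimensional node features.
torch.manual_seed(0)
n = 6
x = torch.randn(n, 4)
adj = torch.triu((torch.rand(n, n) > 0.6).float(), diagonal=1)
adj = adj + adj.t()  # symmetric adjacency, zero diagonal
model = VictimGIN(in_dim=4, hid_dim=16, n_classes=2)
adv_adj = edge_gradient_attack(model, x, adj, label=0, budget=2)
print(int((adv_adj != adj).sum()) // 2, "edges flipped")
```

The dense formulation is used here purely so that the adjacency matrix is a differentiable tensor; the paper's attack operates on contrastive latent space embeddings, which this toy score over raw adjacency entries does not capture.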