Label flipping adversarial attack on graph neural network


Saved in:
Bibliographic Details
Main Authors: Yiteng WU, Wei LIU, Hongtao YU
Format: Article
Language: Chinese (zho)
Published: Editorial Department of Journal on Communications 2021-09-01
Series: Tongxin xuebao
Subjects:
Online Access:http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2021167/
Description
Summary: To expand the range of adversarial attack types studied for graph neural networks and fill the relevant research gap, label flipping attack methods were proposed to evaluate the robustness of graph neural networks against label noise. The effectiveness mechanisms of adversarial attacks were summarized as three basic hypotheses: the contradictory data hypothesis, the parameter discrepancy hypothesis, and the identically distributed hypothesis. Based on these three hypotheses, label flipping attack models were established. Using gradient-oriented attack methods, it was theoretically proved that the attack gradients derived under the parameter discrepancy hypothesis are the same as those derived under the identically distributed hypothesis, establishing the equivalence of the two attack methods. The advantages and disadvantages of the models built on the different hypotheses were compared and analyzed through experiments, and extensive experimental results verify the effectiveness of the proposed attack models.
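To make the attack setting concrete, the following is a minimal illustrative sketch of a gradient-oriented label flipping (poisoning) attack, not the paper's actual models: a logistic-regression surrogate is trained on one-hop-propagated node features (an SGC-style simplification of a GNN), and the gradient-based flip score here is simply the surrogate's confidence on each training label, so the most confidently fit labels are flipped first. The graph, features, and budget are all hypothetical toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy graph: two communities of 10 nodes each (hypothetical data) ---
n = 20
y = np.array([0] * 10 + [1] * 10)            # clean binary node labels
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        p = 0.8 if y[i] == y[j] else 0.05    # dense intra-, sparse inter-community edges
        if rng.random() < p:
            A[i, j] = A[j, i] = 1.0
A += np.eye(n)                               # add self-loops
A_norm = A / A.sum(1, keepdims=True)         # row-normalized adjacency

X = rng.normal(0, 1, (n, 4)) + y[:, None]    # features weakly correlated with class
H = A_norm @ X                               # one-hop propagation (SGC-style surrogate)

def train_surrogate(labels, steps=200, lr=0.5):
    """Fit a logistic-regression surrogate on the propagated features."""
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-H @ w))
        w -= lr * H.T @ (p - labels) / n     # gradient of the cross-entropy loss
    return w

# --- Gradient-oriented flip selection (illustrative heuristic) ---
w = train_surrogate(y.astype(float))
p = 1 / (1 + np.exp(-H @ w))
flip_score = np.abs(p - 0.5)                 # confidence: proxy for loss change on flip
budget = 4                                   # hypothetical attack budget
flips = np.argsort(-flip_score)[:budget]     # flip the highest-impact labels

y_poisoned = y.copy()
y_poisoned[flips] = 1 - y_poisoned[flips]

def accuracy(labels):
    """Retrain on the given labels, evaluate against the clean labels."""
    w = train_surrogate(labels.astype(float))
    pred = (H @ w > 0).astype(int)
    return (pred == y).mean()

print("clean-label accuracy   :", accuracy(y))
print("poisoned-label accuracy:", accuracy(y_poisoned))
```

In the paper's framing this corresponds to the identically distributed hypothesis: the attacker retrains the same model family on the poisoned labels and measures the degradation; the confidence-based score stands in for the attack gradient that the paper derives analytically.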
ISSN:1000-436X