Feature-Based Graph Backdoor Attack in the Node Classification Task

Bibliographic Details
Published in: International Journal of Intelligent Systems, 2023-02, Vol. 2023, p. 1-13
Main Authors: Chen, Yang; Ye, Zhonglin; Zhao, Haixing; Wang, Ying
Format: Article
Language:English
Description
Summary:Graph neural networks (GNNs) have achieved strong performance in a wide range of practical applications owing to their powerful learning capability. Backdoor attacks are a class of attacks that implant hidden malicious behavior in machine learning models: a GNN trained on a backdoored dataset produces an adversary-specified output on poisoned data while behaving normally on clean data, which can have grave implications for deployed applications. Backdoor attacks remain under-researched in the graph domain, and almost all existing graph backdoor attacks focus on the graph-level classification task. To close this gap, we propose a novel graph backdoor attack that uses node features as triggers and requires no knowledge of the GNN's parameters. In our experiments, we find that feature triggers can corrupt the feature space of the original dataset, so that GNNs can no longer reliably separate poisoned data from clean data. We therefore propose an adaptive method that improves the performance of the backdoor model by adjusting the graph structure. Extensive experiments on three benchmark datasets validate the effectiveness of our model.
ISSN:0884-8173
1098-111X
DOI:10.1155/2023/5418398
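
To make the abstract's feature-trigger idea concrete, the sketch below shows, in minimal Python, how a fixed feature pattern might be stamped onto a chosen subset of nodes and those nodes relabeled to the attacker's target class. This is an illustrative assumption, not the authors' implementation: the function name inject_feature_trigger, the trigger layout, and the toy data are all hypothetical, and the paper's adaptive graph-structure adjustment is not shown.

    import numpy as np

    def inject_feature_trigger(X, y, poison_idx, trigger_dims,
                               trigger_value, target_label):
        # Copy so the clean dataset is left untouched.
        X_p = X.copy()
        y_p = y.copy()
        # Stamp the fixed trigger pattern onto the chosen feature
        # dimensions of every poisoned node (np.ix_ builds the
        # row/column cross product of the two index arrays).
        X_p[np.ix_(poison_idx, trigger_dims)] = trigger_value
        # Relabel the poisoned nodes with the attacker's target class.
        y_p[poison_idx] = target_label
        return X_p, y_p

    # Toy data: 100 nodes with 16-dimensional features and 4 classes.
    rng = np.random.default_rng(0)
    X = rng.random((100, 16)).astype(np.float32)
    y = rng.integers(0, 4, size=100)

    poison_idx = rng.choice(100, size=10, replace=False)  # nodes to poison
    trigger_dims = np.array([0, 1, 2])                    # dims carrying the trigger
    X_p, y_p = inject_feature_trigger(X, y, poison_idx, trigger_dims,
                                      trigger_value=1.0, target_label=3)

In a backdoor attack of this kind, a model trained on (X_p, y_p) learns to associate the trigger pattern with target_label, so at inference time the same pattern can be stamped onto any node's features to flip its prediction, while predictions on trigger-free nodes stay unaffected.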