Chenhan Zhang, Zhiyi Tian, James J.Q. Yu, and Shui Yu
Proc. IEEE International Conference on Communications, Rome, Italy, May 2023
Graphs provide a unique representation of real-world data. However, recent studies have found that inference attacks can extract private property information of graph data from trained graph neural networks (GNNs), which raises privacy concerns about graph data, especially in collaborative learning systems where model information is more accessible. While there have been a few research efforts on property inference attacks against GNNs, how to defend against such attacks has seldom been studied. In this paper, we propose to leverage the information bottleneck (IB) principle to defend against property inference attacks. In particular, we consider a threat model in which the attacker can infer graph properties from the graph embeddings produced by GNNs. To defend against these attacks, we use IB to construct new graph structures from the original graphs. The change in graph structure enables the new graphs to contain less information related to the properties of the original graphs, making it harder for attackers to infer those properties from the graph embeddings. Meanwhile, the IB principle ensures that task-relevant information is sufficiently retained in the new graphs, enabling GNNs to make accurate predictions. The experimental results demonstrate the efficacy of the proposed approach in both resisting property inference attacks and maintaining prediction accuracy.