Graph Unlearning
Developing techniques for selective information removal in graph-based models
(Kolipaka* et al., 2024). Graph neural networks (GNNs) are increasingly applied to sensitive graph-structured data, necessitating techniques for handling unlearning requests on trained models, particularly node unlearning. However, unlearning nodes in GNNs is challenging due to the interdependence between nodes in a graph. We compare MEGU, a state-of-the-art graph unlearning method, with SCRUB, a general unlearning method for classification, to investigate the efficacy of graph unlearning methods over traditional unlearning methods. Surprisingly, we find that SCRUB performs comparably to or better than MEGU on random node removal and on removing an adversarial node injection attack. Our results suggest that (1) graph unlearning studies should incorporate general unlearning methods like SCRUB as baselines, and (2) more rigorous behavioral evaluations are needed to reveal the differential advantages of proposed graph unlearning methods. Our work therefore motivates future research into more comprehensive evaluations for assessing the true utility of graph unlearning algorithms.
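The interdependence that makes node unlearning hard can be made concrete: deleting a node also alters the neighborhoods of every adjacent node, so a GNN's message passing over those neighbors changes too. A minimal sketch with a plain adjacency map (the helper name `remove_node` is illustrative, not from the paper):

```python
def remove_node(adj: dict[int, set[int]], node: int) -> dict[int, set[int]]:
    """Return a copy of the adjacency map with `node` and all its edges removed."""
    return {u: nbrs - {node} for u, nbrs in adj.items() if u != node}

# Toy graph: node 0 is connected to 1 and 2; nodes 1 and 2 are also connected.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}

pruned = remove_node(adj, 0)
# Node 0 is gone, but nodes 1 and 2 each also lose an edge, so their
# neighborhoods -- and hence their learned GNN representations -- change.
# This is why deleting a node's own features is not enough to "forget" it.
```

The same effect propagates further in multi-layer GNNs, since a k-layer model aggregates over k-hop neighborhoods of the removed node.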
Related Publications
2024
- Sanity Checks for Evaluating Graph Unlearning. In Third Conference on Lifelong Learning Agents (CoLLAs) - Workshop Track, 2024