The paper examines graph neural networks (GNNs), which model complex relationships and dependencies in graph-structured data. It introduces the notion of a graph explanation: a subgraph that almost sufficiently determines the classification label of an input graph. Since perturbations outside the explanation subgraph should leave the label unchanged, this invariance can be exploited, and the paper studies two ways of doing so: explanation-assisted learning rules and explanation-assisted data augmentation. It concludes that while these methods can improve performance, they can backfire when the augmented data is out-of-distribution.
Publication date: 8 Feb 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2402.05039
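The explanation-assisted augmentation idea can be sketched in a few lines: perturb only the edges outside the explanation subgraph, so that (under the paper's sufficiency premise) the label is preserved. This is a minimal illustrative sketch with graphs as plain edge lists; the function name and the random edge-drop perturbation are assumptions for illustration, not the paper's actual procedure.

```python
import random

def explanation_preserving_augment(edges, explanation_edges, drop_prob=0.3, seed=0):
    """Drop non-explanation edges at random; keep the explanation intact.

    If the explanation subgraph almost sufficiently determines the label,
    the perturbed graph should keep the original label, yielding a
    label-preserving augmentation. (Illustrative sketch, not the paper's API.)
    """
    rng = random.Random(seed)
    expl = {frozenset(e) for e in explanation_edges}
    return [e for e in edges
            if frozenset(e) in expl or rng.random() >= drop_prob]

# Toy example: a 6-cycle with a triangle motif attached; suppose the
# triangle is the explanation subgraph for the graph's label.
cycle = [(i, (i + 1) % 6) for i in range(6)]
triangle = [(0, 6), (6, 7), (7, 0)]
aug = explanation_preserving_augment(cycle + triangle, triangle,
                                     drop_prob=0.5, seed=1)
# Every explanation edge survives the perturbation.
assert all(frozenset(e) in {frozenset(a) for a in aug} for e in triangle)
```

The out-of-distribution caveat from the paper applies here: aggressively dropping edges can produce graphs unlike anything in the training distribution, which is exactly the regime where augmentation may hurt rather than help.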