This paper investigates graph condensation, a technique for reducing the size of large-scale graph datasets without compromising the performance of Graph Neural Networks (GNNs) trained on them. The authors propose GEOM, a new approach aimed at lossless graph condensation. They use a curriculum learning strategy to train expert trajectories that carry more diverse supervision signals from the original graph, and then transfer this information into the condensed graph via expanding window matching. They also design a loss function to further extract knowledge from the expert trajectories. The proposed method is reported to match the performance of training on the full original graph across multiple datasets.
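The expanding window idea can be sketched as a sampling schedule: early in distillation, the condensed graph is matched only against early (easier) segments of the expert trajectories, and the eligible range of expert epochs grows as distillation proceeds. The sketch below is a toy illustration under assumptions, not the authors' implementation; the function names and the linear growth schedule are invented for clarity.

```python
import random

def expanding_window(step, total_steps, traj_len, init_window=10):
    """Upper bound of expert epochs eligible for matching at this
    distillation step; grows linearly from init_window to traj_len.
    (Assumed schedule for illustration only.)"""
    frac = step / max(total_steps - 1, 1)
    return min(int(init_window + frac * (traj_len - init_window)), traj_len)

def sample_start_epoch(step, total_steps, traj_len, segment_len, rng):
    """Pick a start epoch inside the current window, so that early
    distillation matches early expert segments and later distillation
    also sees the harder, later ones."""
    upper = expanding_window(step, total_steps, traj_len)
    return rng.randint(0, max(upper - segment_len, 0))

rng = random.Random(0)
# Early in distillation: only the first few expert epochs are eligible.
print(expanding_window(0, 100, 200))    # -> 10
# Late in distillation: the full expert trajectory is available.
print(expanding_window(99, 100, 200))   # -> 200
```

The contrast with fixed-window trajectory matching is that here the sampling range widens over time, mirroring the curriculum: easy supervision first, diverse supervision later.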
Publication date: 8 Feb 2024
Project Page: https://github.com/NUS-HPC-AI-Lab/GEOM
Paper: https://arxiv.org/pdf/2402.05011