The study addresses the challenge of training on large-scale graphs in graph representation learning. It highlights how gradient matching methods, used to condense large graphs into information-rich synthetic sets, often cause the synthetic data's training trajectory to deviate from that of the original graph; these deviations accumulate errors that degrade the performance of the condensed graphs. To address this, the authors propose a novel graph condensation method, CrafTing RationaL trajectory (CTRL). CTRL provides an optimized starting point that aligns closely with the original dataset's feature distribution and a refined gradient-matching strategy, together designed to neutralize the impact of accumulated errors on the condensed graphs' performance. Experimental results support CTRL's effectiveness.
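To make the "refined gradient matching" idea concrete, here is a minimal NumPy sketch of a matching loss that compares the gradients produced by real and synthetic data on both direction (cosine term) and magnitude (Euclidean term). This is an illustrative assumption about what such a refinement could look like, not the paper's exact objective; the function name `gradient_match_loss` and the weighting parameter `beta` are hypothetical.

```python
import numpy as np

def gradient_match_loss(g_real, g_syn, beta=0.5):
    """Illustrative distance between a real-data gradient and a
    synthetic-data gradient for one network layer.

    Combines a direction term (1 - cosine similarity) with a
    magnitude term (Euclidean distance), weighted by ``beta``.
    A plain cosine-only match ignores gradient scale, which is
    one way trajectory deviations can accumulate.
    """
    denom = np.linalg.norm(g_real) * np.linalg.norm(g_syn) + 1e-12
    direction_term = 1.0 - (g_real @ g_syn) / denom
    magnitude_term = np.linalg.norm(g_real - g_syn)
    return beta * direction_term + (1.0 - beta) * magnitude_term
```

With this loss, identical gradients score (near) zero, while a synthetic gradient that points the right way but has the wrong scale is still penalized by the magnitude term.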
Publication date: 8 Feb 2024
Project Page: https://github.com/NUS-HPC-AI-Lab/CTRL
Paper: https://arxiv.org/pdf/2402.04924