The study addresses the challenge of structural divergence between pre-training and fine-tuning in Graph Neural Networks (GNNs), identifying its root cause as the divergence of generative patterns between pre-training and downstream graphs. To address this, it introduces G-TUNING, a method that fine-tunes a pre-trained GNN to reconstruct the generative pattern, i.e., the graphon, of the downstream graph. Because exact graphon reconstruction is computationally expensive, the study also provides a theoretical analysis showing the existence of alternative graphons, called graphon bases, whose linear combinations can efficiently approximate the original graphon. In both in-domain and out-of-domain transfer learning experiments, G-TUNING achieves consistent average improvements over existing fine-tuning algorithms.
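
To make the graphon-bases idea concrete, here is a minimal sketch, assuming PyTorch. The names (`GraphonBasisDecoder`, `empirical_graphon`), the coarse degree-sorted graphon estimate, and all hyperparameters are illustrative assumptions, not the authors' implementation; the point is only to show a graphon reconstructed as a convex combination of learnable bases and used as an auxiliary fine-tuning loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphonBasisDecoder(nn.Module):
    """Approximates a downstream graphon as a convex combination of
    K learnable graphon bases, each an r x r symmetric step function."""

    def __init__(self, embed_dim: int, num_bases: int = 8, resolution: int = 8):
        super().__init__()
        self.resolution = resolution
        # K learnable bases; symmetrized and squashed to [0, 1] in forward().
        self.bases = nn.Parameter(torch.randn(num_bases, resolution, resolution))
        # Maps a pooled graph embedding to mixing coefficients over the bases.
        self.coef_head = nn.Linear(embed_dim, num_bases)

    def forward(self, graph_embedding: torch.Tensor) -> torch.Tensor:
        # Convex combination weights, one set per graph in the batch.
        coefs = F.softmax(self.coef_head(graph_embedding), dim=-1)  # (B, K)
        # Enforce symmetry and a [0, 1] range so each basis is a valid graphon.
        sym = 0.5 * (self.bases + self.bases.transpose(-1, -2))
        bases = torch.sigmoid(sym)                                  # (K, r, r)
        # Weighted sum of bases -> reconstructed graphon per graph.
        return torch.einsum("bk,krs->brs", coefs, bases)            # (B, r, r)


def empirical_graphon(adj: torch.Tensor, resolution: int) -> torch.Tensor:
    """Crude step-function estimate of a graph's graphon: sort nodes by
    degree, then average-pool the permuted adjacency into r x r blocks."""
    order = adj.sum(dim=1).argsort(descending=True)
    a = adj[order][:, order].unsqueeze(0).unsqueeze(0).float()
    return F.adaptive_avg_pool2d(a, resolution).squeeze()


# Usage: add a graphon reconstruction term to the fine-tuning objective.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))  # stand-in for a GNN
decoder = GraphonBasisDecoder(embed_dim=64)

node_feats = torch.randn(20, 32)
adj = (torch.rand(20, 20) > 0.5).float()
adj = torch.triu(adj, 1)
adj = adj + adj.T                                            # random symmetric graph

graph_emb = encoder(node_feats).mean(dim=0, keepdim=True)    # mean-pool nodes -> (1, 64)
recon = decoder(graph_emb)                                   # (1, r, r)
target = empirical_graphon(adj, decoder.resolution)          # (r, r)

task_loss = torch.tensor(0.0)  # downstream objective would go here
recon_loss = F.mse_loss(recon.squeeze(0), target)
loss = task_loss + 0.1 * recon_loss  # the 0.1 weight is an arbitrary choice
```

In this sketch the bases are shared, learnable parameters, so the expensive step of estimating a full graphon from scratch is replaced by predicting a small set of mixing coefficients per graph, which mirrors the efficiency motivation behind graphon bases.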

Publication date: 22 Dec 2023
Project Page: Unavailable
Paper: https://arxiv.org/pdf/2312.13583