The article presents LinguGKD, a graph knowledge distillation framework that uses Large Language Models (LLMs) as teacher models and Graph Neural Networks (GNNs) as student models. Distilling the LLM's knowledge into a compact GNN sidesteps the high computational and storage costs of deploying LLMs directly, while improving the GNN's ability to capture the rich semantics of Text-Attributed Graphs (TAGs). The framework raises the student GNN's predictive accuracy and convergence rate without extra data or model parameters, and the distilled GNN retains much faster inference with far lower compute and storage demands.
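
To make the teacher-student setup concrete, below is a minimal sketch of LLM-to-GNN distillation on a text-attributed graph. It is not the paper's LinguGKD implementation: the frozen teacher embeddings (standing in for precomputed LLM representations of node text), the projection head, the cosine-alignment term, and the loss weighting `alpha` are all illustrative assumptions.

```python
# Illustrative sketch only: a small GNN student aligned to frozen "LLM" node
# embeddings via an assumed cosine-alignment distillation loss, not the
# paper's actual LinguGKD objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Add self-loops, row-normalize, then aggregate neighbor features.
        adj_hat = adj + torch.eye(adj.size(0))
        deg = adj_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.linear(adj_hat @ x / deg)


class StudentGNN(nn.Module):
    """GNN student that exposes hidden features for alignment with the teacher."""

    def __init__(self, in_dim, hid_dim, num_classes, teacher_dim):
        super().__init__()
        self.gc1 = SimpleGCNLayer(in_dim, hid_dim)
        self.gc2 = SimpleGCNLayer(hid_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, num_classes)
        # Projection into the teacher's embedding space (an assumed design choice).
        self.proj = nn.Linear(hid_dim, teacher_dim)

    def forward(self, x, adj):
        h = F.relu(self.gc1(x, adj))
        h = F.relu(self.gc2(h, adj))
        return self.classifier(h), self.proj(h)


def distillation_loss(logits, labels, student_proj, teacher_emb, alpha=0.5):
    """Cross-entropy on node labels plus cosine alignment with frozen teacher embeddings."""
    ce = F.cross_entropy(logits, labels)
    align = 1.0 - F.cosine_similarity(student_proj, teacher_emb, dim=-1).mean()
    return (1 - alpha) * ce + alpha * align


if __name__ == "__main__":
    # Toy graph: 6 nodes on a ring, random features, 2 classes.
    num_nodes, feat_dim, teacher_dim, num_classes = 6, 16, 32, 2
    x = torch.randn(num_nodes, feat_dim)
    adj = torch.zeros(num_nodes, num_nodes)
    for i in range(num_nodes):
        adj[i, (i + 1) % num_nodes] = adj[(i + 1) % num_nodes, i] = 1.0
    labels = torch.randint(0, num_classes, (num_nodes,))
    # Stand-in for LLM-derived node embeddings; in practice these would be
    # precomputed offline from each node's text attributes.
    teacher_emb = torch.randn(num_nodes, teacher_dim)

    model = StudentGNN(feat_dim, 32, num_classes, teacher_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(50):
        logits, proj = model(x, adj)
        loss = distillation_loss(logits, labels, proj, teacher_emb)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Note that only the small GNN and its projection head are trained; the teacher embeddings stay fixed, which is what lets inference run without the LLM at all.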


Publication date: 9 Feb 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2402.05894