The article addresses the challenge of verifying relational explanations, in particular those generated by GNNExplainer, a popular method for explaining Graph Neural Networks. The authors propose a probabilistic approach that generates explanations for several counterfactual examples, created as symmetric approximations of the relational structure in the original data. From these explanations, a factor graph model is learned to quantify the uncertainty in an explanation. The results suggest that this approach can reliably estimate the uncertainty of a relation specified in an explanation, thereby aiding the verification of explanations produced by GNNExplainer.
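The pipeline described above (perturb the graph, explain each counterfactual, aggregate the explanations into an uncertainty estimate) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's method: `perturb`, `explain`, and `edge_uncertainty` are hypothetical helpers, the explainer here is simulated rather than a real GNNExplainer call, and the aggregation is a simple agreement frequency standing in for the paper's learned factor graph model.

```python
import random
from collections import defaultdict

def perturb(edges, flip_prob=0.1, seed=None):
    """Build a counterfactual graph by randomly rewiring a few edges.
    A crude stand-in for the paper's symmetric approximations of the
    relational structure."""
    rng = random.Random(seed)
    nodes = sorted({u for e in edges for u in e})
    out = set()
    for (u, v) in edges:
        if rng.random() < flip_prob:
            v = rng.choice(nodes)  # rewire one endpoint at random
        if u != v:
            out.add((min(u, v), max(u, v)))
    return out

def explain(edges, k=3, seed=None):
    """Hypothetical explainer: returns the k 'most important' edges.
    Importance is simulated here; in practice this would be a call
    to GNNExplainer on the counterfactual graph."""
    rng = random.Random(seed)
    return set(rng.sample(sorted(edges), min(k, len(edges))))

def edge_uncertainty(edges, n_samples=50):
    """Estimate per-edge uncertainty as 1 minus the fraction of
    counterfactual explanations that retain the edge. A simple
    proxy for the factor-graph aggregation in the paper."""
    counts = defaultdict(int)
    for i in range(n_samples):
        counterfactual = perturb(edges, seed=i)
        for e in explain(counterfactual, seed=i):
            counts[e] += 1
    return {e: 1 - counts[e] / n_samples for e in edges}
```

An edge whose uncertainty is close to 0 is retained by the explainer across most perturbations of the graph, which is the intuition behind treating agreement under counterfactuals as evidence that a relation in the explanation is trustworthy.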

Publication date: 8 Jan 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.02703