This research investigates how well Large Language Models (LLMs) can reason over the knowledge graphs they absorb during pre-training. The study measures how accurately LLMs recall facts from these knowledge graphs and how well they deduce new relations from context. It also identifies two types of hallucination that can arise during knowledge reasoning with LLMs: content hallucination and ontology hallucination. The findings show that LLMs can handle both simple and complex knowledge graph reasoning tasks, drawing on both their own memory and contextual input. A minimal sketch of what such a recall probe might look like follows.
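
The sketch below is an illustration of the general idea, not the paper's actual protocol: it turns (head, relation, tail) knowledge-graph triples into questions and checks whether a model's answer recalls the expected tail entity. The triples, templates, and the `ask_llm` callable are all hypothetical placeholders.

```python
# A minimal sketch of a knowledge-graph recall probe (not the paper's method):
# render each (head, relation, tail) triple as a question and check whether
# the model's answer contains the expected tail entity.

triples = [
    ("Paris", "capital_of", "France"),        # illustrative facts, not from the paper
    ("Marie Curie", "born_in", "Warsaw"),
]

TEMPLATES = {  # hypothetical question templates, one per relation type
    "capital_of": "{head} is the capital of which country?",
    "born_in": "In which city was {head} born?",
}

def recall_accuracy(ask_llm):
    """ask_llm: any callable mapping a question string to the model's answer."""
    correct = 0
    for head, relation, tail in triples:
        question = TEMPLATES[relation].format(head=head)
        if tail.lower() in ask_llm(question).lower():
            correct += 1
    return correct / len(triples)

# Usage with a stub standing in for a real LLM call:
print(recall_accuracy(lambda q: "France" if "capital" in q else "Warsaw"))  # 1.0
```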

Publication date: 4 Dec 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2312.00353