The study investigates how well large language models store and extract knowledge. It asks whether a model's answers reflect genuine extraction of knowledge from the source text or merely exposure to similar questions during training. Using a controlled set of semi-synthetic biography data, the authors show that a model's knowledge-extraction ability correlates with diversity measures of the training data. The findings indicate that memorizing every sentence in the training data does not guarantee that the model can extract or manipulate the factual knowledge in those sentences at inference time.
Publication date: 26 Sep 2023
Abstract: https://arxiv.org/abs/2309.14316v1
Paper: https://arxiv.org/pdf/2309.14316