This article focuses on the problem of factually incorrect responses generated by Language Models (LMs). A common remedy is to augment LMs with knowledge retrieved from an external source. However, this approach often yields suboptimal text for two reasons: the retrieved knowledge may be irrelevant, or the generated text may fail to reflect the retrieved knowledge. To address these failure modes, the researchers propose verifying both the retrieved knowledge and the output of knowledge-augmented LMs with a separate verifier, a smaller LM trained to detect these errors. In experiments, the verifier proved effective at identifying retrieval and generation errors, enabling LMs to produce more factually correct outputs.
Publication date: 20 Oct 2023
Project Page: https://github.com/JinheonBaek/KALMV
Paper: https://arxiv.org/pdf/2310.12836
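The verify-then-act pipeline described above can be sketched as follows. This is a minimal illustration of the control flow only: all function bodies are toy stubs (assumptions for the sake of the example), not the paper's actual retriever, generator, or trained verifier, and the retry-on-error behavior is one plausible way to act on the verifier's labels.

```python
# Sketch of a knowledge-augmented generation loop with a verifier.
# Stubs stand in for the real models; only the control flow is the point.

def retrieve(query, attempt=0):
    # Toy retriever: returns a different snippet on each retry attempt.
    corpus = ["Berlin is the capital of Germany.",
              "Paris is the capital of France."]
    return corpus[attempt % len(corpus)]

def generate(query, knowledge):
    # Toy generator: simply echoes the retrieved knowledge as the answer.
    return knowledge

def verify(query, knowledge, answer):
    # Toy verifier: a small trained LM would classify the triple into
    # "ok", "retrieval_error", or "generation_error".
    if "France" not in knowledge:
        return "retrieval_error"   # irrelevant knowledge was retrieved
    if knowledge not in answer:
        return "generation_error"  # answer does not reflect the knowledge
    return "ok"

def answer_with_verification(query, max_attempts=3):
    answer = ""
    for attempt in range(max_attempts):
        knowledge = retrieve(query, attempt)
        answer = generate(query, knowledge)
        label = verify(query, knowledge, answer)
        if label == "ok":
            return answer
        # On a retrieval error, retry with different knowledge; on a
        # generation error, one could instead regenerate the answer.
    return answer  # fall back to the last attempt

print(answer_with_verification("What is the capital of France?"))
# → Paris is the capital of France.
```

In this toy run, the first retrieval is irrelevant, the verifier flags it as a retrieval error, and the second attempt succeeds, illustrating how a verifier's error labels can drive correction rather than just rejection.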