This research paper examines how Large Language Models (LLMs) help users fact-check information compared with traditional search engines. The study found that while LLMs can make fact-checking more efficient, users tend to over-rely on them, especially when the information they provide is incorrect. To mitigate this, the researchers proposed presenting contrastive information: reasons why a claim could be true alongside reasons why it could be false. However, this approach did not significantly outperform search engines. The study concludes that LLMs are not yet a reliable substitute for reading retrieved passages, particularly in high-stakes settings where relying on incorrect AI explanations could have serious consequences.


Publication date: 23 Oct 2023
Project Page: https://doi.org/XXXXXXX.XXXXXXX
Paper: https://arxiv.org/pdf/2310.12558