This research addresses the threat that Large Language Models (LLMs) such as ChatGPT pose when misused to generate misinformation. The central question is whether LLM-generated misinformation causes more harm and is harder to detect than human-written misinformation. The authors first build a taxonomy of LLM-generated misinformation by types, domains, sources, intents, and errors, then categorize and validate potential real-world methods for generating misinformation with LLMs. Empirical results indicate that LLM-generated misinformation can be harder for both humans and automated detectors to identify than human-written misinformation with the same semantics, suggesting it can be more deceptive in style and potentially more harmful. The paper concludes by discussing the implications of these findings for combating misinformation in the era of LLMs.

Publication date: 25 Sep 2023
Project Page: https://llm-misinformation.github.io/
Paper: https://arxiv.org/pdf/2309.13788