This article discusses the potential of Large Language Models (LLMs) for fake news detection, in comparison with Small Language Models (SLMs). The study finds that although LLMs such as GPT-3.5 can expose fake news and provide desirable multi-perspective rationales, they still underperform task-specific, fine-tuned SLMs such as BERT. The authors therefore suggest using LLMs as advisors to SLMs, supplying multi-perspective rationales rather than making the final judgment. They propose the Adaptive Rationale Guidance network for fake news detection (ARG), in which the SLM selectively acquires insights from the LLM-generated rationales. A rationale-free version, ARG-D, derived from ARG by distillation for cost-sensitive scenarios where querying an LLM at inference time is impractical, is also proposed. Experiments show that both ARG and ARG-D outperform SLM-based, LLM-based, and combined baselines.
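
The core idea of ARG, as summarized above, is that a small fine-tuned model encodes the news item while the LLM's rationale text is encoded separately, and a learned gating mechanism lets the small model selectively absorb rationale-derived features. The sketch below illustrates that selective-fusion idea in PyTorch; the class name, feature dimensions, pooling choices, and gating layer are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the "selective rationale acquisition" idea behind ARG.
# Assumptions (not from the paper): mean pooling, a sigmoid gate over
# concatenated features, and 768-dim encoder outputs.
import torch
import torch.nn as nn

class AdaptiveRationaleGuidanceSketch(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8, num_classes: int = 2):
        super().__init__()
        # Cross-attention: news tokens attend to rationale tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate decides how much rationale-derived signal to keep.
        self.gate = nn.Sequential(nn.Linear(dim * 2, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, news_feats: torch.Tensor, rationale_feats: torch.Tensor) -> torch.Tensor:
        # news_feats:      (B, L_news, dim) from a fine-tuned SLM such as BERT
        # rationale_feats: (B, L_rat,  dim) from encoding the LLM's rationale text
        attended, _ = self.cross_attn(news_feats, rationale_feats, rationale_feats)
        pooled_news = news_feats.mean(dim=1)
        pooled_rationale = attended.mean(dim=1)
        g = self.gate(torch.cat([pooled_news, pooled_rationale], dim=-1))
        fused = pooled_news + g * pooled_rationale  # selective acquisition of LLM insight
        return self.classifier(fused)

# Toy usage with random tensors standing in for encoder outputs.
model = AdaptiveRationaleGuidanceSketch()
news = torch.randn(4, 128, 768)       # batch of 4 news items, 128 tokens each
rationale = torch.randn(4, 64, 768)   # corresponding LLM rationales, 64 tokens each
logits = model(news, rationale)       # (4, 2) real/fake logits
```

In this reading, ARG-D would keep only the news-side pathway and be trained to mimic the rationale-aware features of the full model, so that no LLM call is needed at inference time.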

Publication date: 22 Sep 2023
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2309.12247