The study focuses on improving the performance of large language models (LLMs) on tasks that involve repetitive sub-tasks or deceptive content. Existing prompting strategies often fall short in these settings, either because they lack sufficient expressive power or because hallucinations trigger errors. The researchers propose guiding the LLM with a divide-and-conquer program that enhances expressive power and disentangles the processes of task decomposition, sub-task resolution, and resolution assembly. The proposed method outperforms typical prompting strategies on tasks such as large integer multiplication, hallucination detection, and misinformation detection.
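The three-stage structure the summary describes can be sketched in plain Python. This is a hypothetical illustration, not the paper's actual prompts: each stage that would be an LLM call in the paper is stood in for by an ordinary function, using large integer multiplication (one of the benchmark tasks) as the example.

```python
# Sketch of the divide-and-conquer structure: (1) decompose the task,
# (2) resolve each sub-task, (3) assemble the resolutions.
# In the paper these stages are handled by guided LLM calls; here they
# are plain functions so the control flow is visible and runnable.

def decompose(a: int, b: int, digits: int = 4):
    """Split each operand into high/low halves of `digits` decimal digits."""
    base = 10 ** digits
    a_hi, a_lo = divmod(a, base)
    b_hi, b_lo = divmod(b, base)
    # Sub-tasks: the four partial products of the schoolbook method.
    return base, [(a_hi, b_hi), (a_hi, b_lo), (a_lo, b_hi), (a_lo, b_lo)]

def resolve(sub):
    """Resolve one sub-task (an LLM call in the paper; a multiply here)."""
    x, y = sub
    return x * y

def assemble(base, results):
    """Assemble the sub-task resolutions into the final answer."""
    hh, hl, lh, ll = results
    return hh * base * base + (hl + lh) * base + ll

def multiply(a: int, b: int) -> int:
    base, subs = decompose(a, b)
    return assemble(base, [resolve(s) for s in subs])
```

Keeping the three stages as separate functions mirrors the disentanglement the paper argues for: each stage can be inspected or corrected independently, instead of one monolithic chain of reasoning.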

Publication date: 9 Feb 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2402.05359