This academic article by Anshuman Chhabra, Hadi Askari, and Prasant Mohapatra examines position bias in large language models (LLMs) and its influence on zero-shot abstractive summarization. The authors propose position bias as a generalization of the previously studied lead bias, arguing that it enables a more holistic evaluation of zero-shot summarization models. The study measures position bias across numerous experiments with models including GPT-3.5-Turbo, Llama-2, and Dolly-v2. The findings offer new insights into the performance and position bias of models used for zero-shot summarization tasks.
Publication date: 3 Jan 2024
Project Page: https://arxiv.org/abs/2401.01989v1
Paper: https://arxiv.org/pdf/2401.01989
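The article does not spell out how position bias is computed, but the idea of a summarizer favoring certain regions of the source (e.g. the lead, as in lead bias) can be illustrated with a minimal sketch. The snippet below is a hypothetical proxy, not the paper's method: it maps each summary sentence to its most lexically similar source sentence and histograms the relative positions of those matches, so a lead-biased summary concentrates mass in the first bin. The function names and the unigram-overlap similarity are assumptions made for illustration.

```python
from collections import Counter

def sentence_overlap(a, b):
    """Unigram Jaccard overlap between two sentences (crude similarity proxy)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def position_histogram(source_sents, summary_sents, n_bins=3):
    """For each summary sentence, find the most similar source sentence and
    bucket its relative position into n_bins (e.g. lead / middle / tail)."""
    counts = Counter()
    for s in summary_sents:
        best = max(range(len(source_sents)),
                   key=lambda i: sentence_overlap(source_sents[i], s))
        # Map the absolute sentence index to a relative-position bin.
        bin_idx = min(best * n_bins // len(source_sents), n_bins - 1)
        counts[bin_idx] += 1
    return [counts[i] for i in range(n_bins)]

source = [
    "The council approved the new budget on Monday.",
    "Several members raised concerns about transit funding.",
    "A final vote on amendments is expected next month.",
]
summary = ["The council approved the budget on Monday."]
print(position_histogram(source, summary))  # → [1, 0, 0]: a lead-heavy match
```

In practice, a study like this one would use a stronger sentence-similarity measure (e.g. ROUGE or embedding similarity) and aggregate such histograms over an entire dataset to compare models.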