This academic article investigates the cognitive abilities and confidence dynamics of Large Language Models (LLMs). The authors examine how well these models' self-assessed confidence aligns with their actual performance. The study finds that the models sometimes express high confidence even when their answers are incorrect, reminiscent of the Dunning-Kruger effect observed in human psychology. Conversely, there are instances where the models report low confidence while giving correct answers, indicating a potential underestimation bias. The findings highlight the need for a deeper understanding of the cognitive processes of LLMs, and they point toward ways to improve the reliability and broaden the applications of these powerful language models.
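To make the confidence-accuracy mismatch concrete, below is a minimal sketch of one standard way such miscalibration can be quantified: Expected Calibration Error (ECE), which bins answers by the model's stated confidence and compares average confidence to observed accuracy in each bin. This is not the paper's actual evaluation code; the data and function name here are illustrative assumptions.

```python
# Minimal ECE sketch, not the paper's evaluation code. `confidences` stands
# in for a model's self-reported confidence (0-1) per answer; `correct` for
# whether each answer was actually right. Both are illustrative.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by stated confidence and compare mean confidence to
    observed accuracy per bin (standard ECE formulation)."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Include the right edge (confidence == 1.0) only in the last bin.
        in_bin = [i for i, c in enumerate(confidences)
                  if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        accuracy = sum(correct[i] for i in in_bin) / len(in_bin)
        # avg_conf > accuracy in a bin is overconfidence (the Dunning-Kruger
        # pattern the paper describes); the reverse is underconfidence.
        ece += (len(in_bin) / total) * abs(avg_conf - accuracy)
    return ece

# Toy example: high stated confidence on several wrong answers.
confidences = [0.95, 0.90, 0.85, 0.30, 0.20, 0.60]
correct = [False, False, True, True, True, False]
print(f"ECE: {expected_calibration_error(confidences, correct):.3f}")
```

A perfectly calibrated model would score an ECE near zero; the overconfident-when-wrong and underconfident-when-right patterns reported in the paper would both push this value up.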

Publication date: 28 Sep 2023
Project Page: https://arxiv.org/abs/2309.16145v1
Paper: https://arxiv.org/pdf/2309.16145