The article investigates how AI overconfidence and underconfidence affect human trust, acceptance of AI suggestions, and collaboration outcomes. It finds that disclosing AI confidence levels and providing performance feedback helps people recognize when the AI's confidence is misaligned with its actual performance. However, once participants perceive such misalignment, they withhold trust and reject AI suggestions, which in turn degrades performance on collaborative tasks. The study emphasizes the importance of aligning the AI's expressed confidence with its actual performance and of calibrating human trust in AI confidence.
Publication date: 12 Feb 2024
Project Page: https://arxiv.org/abs/2402.07632v1
Paper: https://arxiv.org/pdf/2402.07632