The article examines how user trust in AI systems develops over time, focusing on the roles of reliability and the confidence the system reports alongside its outputs. 'Trust-eroding' events, such as incorrect predictions, damage user trust severely, and recovery afterwards is slow. The authors find that different types of miscalibration harm trust to different degrees, underscoring the importance of confidence calibration in user-facing AI applications. The study offers insight into how users decide whether to rely on an AI system and argues for supporting a well-defined mental model that guides users' trust in these systems.
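To make "calibration" concrete, here is a minimal illustrative sketch (not code from the paper): expected calibration error (ECE), a standard way to measure how closely a model's stated confidence matches its actual accuracy. The function name and toy data are my own for illustration.

```python
# Illustrative sketch: expected calibration error (ECE).
# A model is well calibrated if, among predictions made with
# confidence p, roughly a fraction p are actually correct.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the sample-weighted
    average gap between mean confidence and accuracy per bin."""
    assert len(confidences) == len(correct)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # confidence 1.0 falls into the last bin
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# Well calibrated: 80% confidence, correct 4 out of 5 times -> ECE 0.
print(expected_calibration_error([0.8] * 5, [1, 1, 1, 1, 0]))
# Overconfident: 90% confidence but correct only 1 in 4 -> large ECE.
print(expected_calibration_error([0.9] * 4, [1, 0, 0, 0]))
```

A miscalibrated system of the second kind (high confidence, frequent errors) is exactly the sort that triggers the trust-eroding events the article describes.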
Publication date: 20 Oct 2023
Project Page: github.com/zouharvi/trust-intervention
Paper: https://arxiv.org/pdf/2310.13544