The article examines the gap between the information AI systems provide and the information needs of people affected by algorithmic decision-making. It introduces the 'XAI Novice Question Bank', a catalog of affected stakeholders' information needs in two use cases: employment prediction and health monitoring. The study found that although participants' confidence increased after receiving explanations, they still faced challenges in understanding them. By providing an overview of the information affected stakeholders consider relevant and the challenges they encounter when such systems are adopted, the work aims to support their inclusion in explainability efforts. The authors summarize six key implications for future explanation design.
Publication date: 24 Jan 2024
Project Page: https://arxiv.org/abs/2401.13324v1
Paper: https://arxiv.org/pdf/2401.13324