The authors discuss the need for explainable AI systems, focusing on counterfactual explanations produced after a decision has been made. They argue that such explanations often lack plausibility, which limits their practical value. To address this, they introduce a system that generates high-likelihood explanations by combining a Sum-Product Network (SPN) with mixed-integer optimization (MIO): the SPN estimates the likelihood of a candidate counterfactual, and the MIO formulation models the search for the most likely explanations. They benchmark their system against several existing methods for generating counterfactual explanations.
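To make the idea concrete, here is a minimal conceptual sketch of plausibility-aware counterfactual search. It is not the paper's method: a Gaussian mixture stands in for the SPN density model, and an exhaustive search over sampled candidates stands in for the MIO solver; all names and parameters are illustrative assumptions.

```python
# Conceptual sketch: find a counterfactual that flips the classifier's
# prediction while scoring high under a learned density (plausibility) model.
# Stand-ins: GaussianMixture replaces the paper's SPN, and brute-force
# candidate search replaces the MIO formulation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

clf = LogisticRegression().fit(X, y)                               # black-box classifier to explain
density = GaussianMixture(n_components=5, random_state=0).fit(X)   # plausibility model (SPN stand-in)

x = X[0]                                        # factual instance
target = 1 - clf.predict(x.reshape(1, -1))[0]   # desired (flipped) prediction

# Candidate counterfactuals: random perturbations around the factual instance.
candidates = x + rng.normal(scale=1.0, size=(5000, x.shape[0]))
valid = candidates[clf.predict(candidates) == target]
if len(valid) == 0:
    raise RuntimeError("no candidate flipped the prediction; widen the search")

# Rank valid candidates by estimated log-likelihood (plausibility),
# with a small penalty favoring candidates closer to the original instance.
scores = density.score_samples(valid) - 0.1 * np.linalg.norm(valid - x, axis=1)
best = valid[np.argmax(scores)]

print("factual:        ", x)
print("counterfactual: ", best)
print("new prediction: ", clf.predict(best.reshape(1, -1))[0])
```

In the paper's setting, the density score would come from the SPN and the candidate search would be encoded as a mixed-integer program, so the optimizer returns the most likely prediction-flipping instance rather than the best of a random sample.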


Publication date: 26 Jan 2024
Project Page: https://arxiv.org/abs/2401.14086
Paper: https://arxiv.org/pdf/2401.14086