The paper presents an uncertainty modeling framework for self-explaining neural networks, aiming to improve the interpretability of deep neural networks (DNNs), which are often treated as ‘black boxes’ because of their opaque internal workings. The authors note that although DNNs have achieved strong predictive accuracy, they typically lack transparency, making it difficult to understand the rationale behind their predictions. The proposed framework addresses this by offering clear, intuitive explanations of why a particular decision was made and by quantifying the uncertainty of those decisions. The authors report extensive experiments supporting the framework's effectiveness.
Publication date: 4 Jan 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.01549