The article presents FairDebugger, a system that identifies and explains fairness violations in the outcomes of random forest classifiers. It uses machine unlearning to efficiently estimate how the forest's trees change when subsets of the training data are removed, and it borrows the Apriori algorithm from frequent itemset mining to prune the otherwise exponential search space of candidate subsets. In experiments on three real-world datasets, the explanations FairDebugger generated were consistent with insights from prior studies. The system aims to enhance transparency and accountability in AI-based decision-making.
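To make the unlearning idea concrete, here is a minimal sketch of how a training subset's fairness influence could be scored. The function names, the statistical-parity metric, and the full retraining step are illustrative assumptions, not the paper's procedure; FairDebugger's machine unlearning exists precisely to approximate the retrained forest without this retraining cost.

```python
# Hypothetical sketch: score a training subset by how much its removal
# changes a random forest's statistical parity difference. FairDebugger
# estimates the post-removal forest via machine unlearning; here we simply
# retrain, which is exact but slow.
from sklearn.ensemble import RandomForestClassifier

def parity_difference(model, X, sensitive):
    """P(yhat = 1 | sensitive = 1) - P(yhat = 1 | sensitive = 0)."""
    yhat = model.predict(X)
    return yhat[sensitive == 1].mean() - yhat[sensitive == 0].mean()

def subset_influence(X_train, y_train, X_test, s_test, mask):
    """Change in the fairness metric when rows selected by `mask` are removed."""
    full = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    reduced = RandomForestClassifier(random_state=0).fit(
        X_train[~mask], y_train[~mask])
    return (parity_difference(full, X_test, s_test)
            - parity_difference(reduced, X_test, s_test))
```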
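Candidate subsets are typically described by conjunctions of attribute predicates, so the number of candidates grows combinatorially. The sketch below shows the kind of Apriori-style, level-wise pruning the system applies: if a conjunction covers too few training rows, all of its supersets can be skipped. The `frequent_patterns` helper, the (column, value) predicate encoding, and the support threshold are assumptions for illustration.

```python
# Hypothetical sketch of Apriori-style pruning over the subset lattice.
# By the Apriori anti-monotonicity property, any superset of a pattern
# that covers too few rows also covers too few rows and can be skipped.
from itertools import combinations

def frequent_patterns(rows, predicates, min_support):
    """Level-wise search over predicate conjunctions with support pruning.

    rows:        list of dicts, e.g. {"sex": "female", "age": "<30"}
    predicates:  list of (column, value) pairs to combine
    min_support: minimum fraction of rows a pattern must cover
    """
    def support(pattern):
        covered = [r for r in rows if all(r.get(c) == v for c, v in pattern)]
        return len(covered) / len(rows)

    frequent, level = [], [(p,) for p in predicates]
    while level:
        survivors = [pat for pat in level if support(pat) >= min_support]
        frequent.extend(survivors)
        # Grow the next level only from surviving patterns: pairs that
        # differ in exactly one predicate merge into a one-larger candidate.
        level = sorted({tuple(sorted(set(a) | set(b)))
                        for a, b in combinations(survivors, 2)
                        if len(set(a) | set(b)) == len(a) + 1})
    return frequent
```

The surviving patterns (a hypothetical example: sex = female ∧ age < 30) would then serve as the candidate subsets whose removal influence is scored as in the first sketch.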

Publication date: 8 Feb 2024
Project Page: ?
Paper: https://arxiv.org/pdf/2402.05007