Falcon is a framework for mitigating unfairness in machine learning models. It takes a data-centric approach, focusing on which samples to curate and label rather than on model training. To improve a chosen fairness measure, Falcon strategically selects samples expected to belong to a target group; because the true label is unknown until annotation, it proceeds by trial and error: if the acquired label differs from the expected one, meaning the sample falls outside the target group, its use is postponed. Since more informative samples tend to be postponed more often, there is a trade-off between informativeness and postpone rate, and Falcon uses adversarial multi-armed bandit methods to automatically select the best selection policy. In experiments, Falcon outperformed other fair active learning approaches in fairness, accuracy, and efficiency.
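Below is a minimal, illustrative sketch (not the authors' implementation) of the two ideas summarized above: postponing samples whose acquired label falls outside the target group, and using an adversarial bandit (EXP3 here, as a stand-in) to choose among candidate selection policies. The policy names, reward definition, and simulated data are hypothetical placeholders.

```python
import math
import random

random.seed(0)

# --- Hypothetical selection policies: each scores unlabeled samples differently ---
def policy_most_uncertain(sample):
    # prefer samples whose predicted probability is closest to 0.5 (more informative)
    return -abs(sample["pred_prob"] - 0.5)

def policy_most_confident(sample):
    # prefer samples most confidently predicted to be in the target group (fewer postpones)
    return sample["pred_prob"]

POLICIES = [policy_most_uncertain, policy_most_confident]

# --- EXP3: adversarial multi-armed bandit over the candidate policies ---
class Exp3:
    def __init__(self, n_arms, gamma=0.1):
        self.n, self.gamma = n_arms, gamma
        self.weights = [1.0] * n_arms

    def probs(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n for w in self.weights]

    def select(self):
        p = self.probs()
        return random.choices(range(self.n), weights=p)[0], p

    def update(self, arm, reward, p):
        # importance-weighted reward keeps the estimate unbiased under adversarial rewards
        self.weights[arm] *= math.exp(self.gamma * (reward / p[arm]) / self.n)

# --- Simulated labeling rounds ---
def label_oracle(sample):
    # stand-in for a human annotator; the true label is unknown until queried
    return sample["true_label"]

unlabeled = [{"pred_prob": random.random(), "true_label": random.randint(0, 1)}
             for _ in range(200)]
TARGET_LABEL = 1          # assume the target group corresponds to label == 1
bandit = Exp3(len(POLICIES))
labeled, postponed = [], []

for _ in range(50):
    arm, p = bandit.select()
    # pick the highest-scoring unlabeled sample under the chosen policy
    sample = max(unlabeled, key=POLICIES[arm])
    unlabeled.remove(sample)
    y = label_oracle(sample)
    if y == TARGET_LABEL:
        labeled.append((sample, y))      # label matched expectation: use it now
        reward = 1.0                     # hypothetical reward signal for the bandit
    else:
        postponed.append((sample, y))    # outside the target group: postpone its use
        reward = 0.0
    bandit.update(arm, reward, p)

print(f"labeled: {len(labeled)}, postponed: {len(postponed)}")
print("policy weights:", [round(w, 2) for w in bandit.weights])
```

The bandit gradually shifts weight toward whichever policy yields fewer postponed labels in this toy setup; the actual framework's policies and reward design differ and are described in the paper linked below.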

 

Publication date: 24 Jan 2024
Project Page: https://github.com/khtae8250/Falcon
Paper: https://arxiv.org/pdf/2401.12722