This article presents a study examining the fairness of self-supervised learning (SSL) models compared to their supervised counterparts. The researchers hypothesized that SSL models would be less biased, and their findings confirmed this: SSL models achieved performance on par with supervised methods while improving fairness by up to 27%, at the cost of only a 1% drop in performance. These results highlight the potential of SSL in high-stakes, data-scarce application domains such as healthcare.
Publication date: 4 Jan 2024
Project Page: ?
Paper: https://arxiv.org/pdf/2401.01640