The paper ‘Fair Canonical Correlation Analysis’ addresses fairness and bias in Canonical Correlation Analysis (CCA). The authors present a framework that reduces unfairness by minimizing the correlation disparity error associated with protected attributes such as sex or race. Their method lets CCA learn global projection matrices from all data points while ensuring that these matrices yield correlation levels comparable to those of group-specific projection matrices. The method's efficacy is demonstrated experimentally on both synthetic and real-world datasets. The paper highlights the importance of fairness in machine learning and introduces new approaches to mitigating bias in CCA.
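To make the core quantity concrete, below is a minimal sketch, not the authors' actual algorithm, of how one might measure a correlation disparity error: the gap between the canonical correlations each protected group attains under its own group-specific CCA and under a single global CCA fitted on all samples. The synthetic data, the `canonical_correlations` helper, and the exact disparity formula are illustrative assumptions; the paper's formal definition may differ.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_correlations(cca, X, Y):
    """Correlation of each pair of canonical variates under a fitted CCA."""
    Xc, Yc = cca.transform(X, Y)
    return np.array([np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
                     for k in range(Xc.shape[1])])

# Synthetic two-view data with a binary protected attribute (e.g., sex).
rng = np.random.default_rng(0)
n, k = 400, 2
group = rng.integers(0, 2, size=n)            # protected-group label per sample
Z = rng.normal(size=(n, k))                   # shared latent signal across views
X = Z @ rng.normal(size=(k, 5)) + 0.5 * rng.normal(size=(n, 5))
Y = Z @ rng.normal(size=(k, 4)) + 0.5 * rng.normal(size=(n, 4))

# Global projections learned from all data points.
global_cca = CCA(n_components=k).fit(X, Y)

# Illustrative disparity: sum over groups of the per-component gap between
# group-specific and global canonical correlations on that group's data.
disparity = 0.0
for g in (0, 1):
    Xg, Yg = X[group == g], Y[group == g]
    group_cca = CCA(n_components=k).fit(Xg, Yg)
    rho_group = canonical_correlations(group_cca, Xg, Yg)
    rho_global = canonical_correlations(global_cca, Xg, Yg)
    disparity += np.abs(rho_group - rho_global).sum()

print(f"total correlation disparity error: {disparity:.4f}")
```

A fair CCA in the paper's sense would then seek global projection matrices that keep this disparity small while preserving overall correlation; the sketch only evaluates the gap, it does not perform that constrained optimization.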

Publication date: 27 Sep 2023
Project Page: https://arxiv.org/abs/2309.15809v1
Paper: https://arxiv.org/pdf/2309.15809