This article examines unwanted bias, with a focus on gender bias, in pretrained vision-and-language (V&L) models. The authors quantify bias amplification during pretraining and after fine-tuning across three families of V&L models, and find that the two are independent: the amplification introduced in pretraining does not predict the amplification observed after fine-tuning. Continued pretraining on gender-neutral data reduces group disparities and promotes fairness without significantly compromising task performance.
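To make the notion of bias amplification concrete, here is a minimal sketch in the style of a simple co-occurrence-based metric (as popularized by Zhao et al., 2017). This is an illustration only, not the paper's exact metric: it compares how often an attribute co-occurs with a gender in model predictions versus in the training data, with a positive score indicating amplification.

```python
# Illustrative sketch (not the paper's metric): a simple
# bias-amplification score comparing attribute-gender
# co-occurrence rates in predictions vs. training data.

def cooccurrence_rate(pairs, attribute, gender):
    """Fraction of occurrences of `attribute` that are paired with `gender`."""
    with_attr = [g for a, g in pairs if a == attribute]
    if not with_attr:
        return 0.0
    return sum(1 for g in with_attr if g == gender) / len(with_attr)

def bias_amplification(train_pairs, pred_pairs, attribute, gender):
    """Positive value => the model amplifies the training-set skew."""
    return (cooccurrence_rate(pred_pairs, attribute, gender)
            - cooccurrence_rate(train_pairs, attribute, gender))

# Toy example: "cooking" co-occurs with "female" in ~67% of training
# instances but 80% of predictions, so the model amplifies the skew.
train = [("cooking", "female")] * 2 + [("cooking", "male")] * 1
preds = [("cooking", "female")] * 4 + [("cooking", "male")] * 1
print(round(bias_amplification(train, preds, "cooking", "female"), 2))  # → 0.13
```

Measuring this separately after pretraining and after fine-tuning is what lets one ask whether the two amplification effects are correlated.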
Publication date: 26 Oct 2023
Project Page: https://arxiv.org/abs/2310.17530
Paper: https://arxiv.org/pdf/2310.17530