The article ‘All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation’ discusses the issue of bias in Language Models (LMs) and introduces a new framework, Counterfactually Aware Fair InferencE (CAFIE), that aims to generate more equitable text. It highlights how biases inherent in LMs' training data can propagate to downstream tasks. The authors posit that generating unbiased output for one demographic in a given context requires awareness of the model's outputs for other demographics in the same context. The article also presents an empirical evaluation showing that CAFIE outperforms competing methods, producing fairer text while preserving language modeling capability.
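To make the counterfactual-awareness idea concrete, below is a minimal, illustrative sketch of counterfactual-aware decoding: the next-token distribution for a prompt is combined with the distributions obtained after swapping in other demographic terms, so that no single group's learned associations dominate. This assumes a HuggingFace causal LM (`gpt2` here), and the simple averaging rule and the `counterfactual_aware_probs` helper are illustrative simplifications, not CAFIE's exact formulation.

```python
# Illustrative sketch of counterfactual-aware decoding (not CAFIE's exact rule).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def next_token_probs(prompt: str) -> torch.Tensor:
    """Return the model's next-token probability distribution for a prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)


def counterfactual_aware_probs(prompt: str, group: str,
                               counterfactual_groups: list[str]) -> torch.Tensor:
    """Average the original distribution with those of counterfactual prompts
    (demographic term swapped), a simplified stand-in for CAFIE's combination."""
    prompts = [prompt] + [prompt.replace(group, g) for g in counterfactual_groups]
    probs = torch.stack([next_token_probs(p) for p in prompts])
    return probs.mean(dim=0)


# Example: next token chosen after counterfactually averaging over "woman"/"man".
probs = counterfactual_aware_probs("The woman worked as a", "woman", ["man"])
print(tokenizer.decode(probs.argmax().item()))
```

In this sketch, a stereotyped continuation that is likely only under one demographic's context gets diluted by the counterfactual distributions, nudging generation toward tokens that are plausible for all groups.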

Publication date: 10 Nov 2023
Project Page: not provided
Paper: https://arxiv.org/pdf/2311.05451