This article by Weizi Liu of the University of Illinois examines gender biases in the design of, and interaction with, conversational agents (CAs) such as voice assistants and chatbots. Many CAs are given female personas, which can inadvertently activate users' pre-existing gender biases and perpetuate gender stereotypes. The study investigates how gendered CA designs trigger these biases and whether their effects carry over into human-human communication. The findings are intended to inform the ethical design of conversational agents and to promote gender equality in design.
Publication date: 5 Jan 2024
Project Page: https://doi.org/XXXXXXX.XXXXXXX
Paper: https://arxiv.org/pdf/2401.03030