This research investigates biases in case judgment summaries produced from legal datasets and by large language models (LLMs). It highlights the significant influence these biases can have on legal decision-making and their potential ethical implications. Particular attention is given to biases involving gender, race, crimes against women, country names, and religious terms. The analysis finds evidence of bias toward female-related keywords and certain country names, but no strong evidence of bias tied to religious or race-related keywords. The causes of these biases require further investigation.
Publication date: 4 Dec 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2312.00554