GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models

Title:
GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models
Journal Title:
Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security
Keywords:
Publication Date:
09 December 2024
Citation:
Tang, K., Zhou, W., Zhang, J., Liu, A., Deng, G., Li, S., Qi, P., Zhang, W., Zhang, T., Yu, N. (2024). GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models. Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, 1196–1210. https://doi.org/10.1145/3658644.3670284
Abstract:
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but they have also been observed to magnify societal biases, particularly those related to gender. In response to this issue, several benchmarks have been proposed to assess gender bias in LLMs. However, these benchmarks often lack practical flexibility or inadvertently introduce biases. To address these shortcomings, we introduce GenderCARE, a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics for quantifying and mitigating gender bias in LLMs. To begin, we establish pioneering criteria for gender equality benchmarks, spanning dimensions such as inclusivity, diversity, explainability, objectivity, robustness, and realisticity. Guided by these criteria, we construct GenderPair, a novel pair-based benchmark designed to assess gender bias in LLMs comprehensively. Our benchmark provides standardized and realistic evaluations, including previously overlooked gender groups such as transgender and non-binary individuals. Furthermore, we develop effective debiasing techniques that incorporate counterfactual data augmentation and specialized fine-tuning strategies to reduce gender bias in LLMs without compromising their overall performance. Extensive experiments demonstrate a significant reduction in various gender bias benchmarks, with reductions peaking at over 90% and averaging above 35% across 17 different LLMs. Importantly, these reductions come with minimal variability in mainstream language tasks, remaining below 2%. By offering a realistic assessment and tailored reduction of gender biases, we hope that our GenderCARE can represent a significant step towards achieving fairness and equity in LLMs. More details are available at https://github.com/kstanghere/GenderCARE-ccs24.
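The abstract mentions counterfactual data augmentation as one of the debiasing techniques. The following is a minimal sketch of that general idea, swapping gendered terms in training text to produce balanced counterfactual pairs; the term list and swap logic here are illustrative assumptions, not the paper's actual implementation (which, per the abstract, also covers transgender and non-binary groups and adds specialized fine-tuning).

```python
import re

# Hypothetical bidirectional swap list for illustration only; real CDA
# pipelines use much larger curated lists and handle ambiguous forms
# (e.g. possessive vs. object "her") more carefully.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
}

# Word-boundary pattern over all swap keys, matched case-insensitively.
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in SWAPS) + r")\b",
    flags=re.IGNORECASE,
)

def counterfactual(text: str) -> str:
    """Return a copy of `text` with gendered terms swapped, preserving
    initial capitalization of each replaced word."""
    def repl(m: re.Match) -> str:
        word = m.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, text)

# Augment a toy corpus with both originals and their counterfactuals,
# so gendered contexts appear in balanced form during fine-tuning.
corpus = ["He said the nurse helped him."]
augmented = corpus + [counterfactual(s) for s in corpus]
```

In this sketch, `counterfactual("He said the nurse helped him.")` yields `"She said the nurse helped her."`, and training on `augmented` exposes the model to both variants of each gendered context.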
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This research/project is supported by the National Research Foundation, Singapore, and the Infocomm Media Development Authority under the Trust Tech Funding Initiative.
Grant Reference no.: DTC-RGC-04
Description:
ISBN:
9798400706363
Files uploaded:

20241014-ccs2024-tang.pdf (2.33 MB, PDF)