Abstract
Large Language Models (LLMs) are increasingly deployed across a wide range of applications, raising critical concerns about fairness and potential biases in their outputs. This paper explores the prioritization of metamorphic relations (MRs) in metamorphic testing as a strategy for efficiently detecting fairness issues in LLMs. Given the exponential growth of possible test cases, exhaustive testing is impractical; therefore, prioritizing MRs by their effectiveness at detecting fairness violations is crucial. We apply a sentence diversity-based approach that scores and ranks MRs to optimize fault detection. Experimental results demonstrate that our proposed prioritization approach improves fault detection rates by 22% compared to random prioritization and 12% compared to distance-based prioritization, while reducing the time to first failure by 15% and 8%, respectively. Furthermore, our approach performs within 5% of fault-based prioritization in effectiveness, while significantly reducing the computational cost associated with fault labeling. These results validate the effectiveness of diversity-based MR prioritization in enhancing fairness testing for LLMs.
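To make the idea concrete, the following is a minimal sketch of one plausible diversity-based ranking scheme: each MR's test sentences are embedded, the MR is scored by the mean pairwise cosine distance of those embeddings, and MRs are executed in descending score order. All names here (embed_sentences, diversity_score, prioritize_mrs) and the placeholder embedding are illustrative assumptions, not the paper's actual implementation; in practice a sentence encoder would replace the random embedding.

```python
import numpy as np

def embed_sentences(sentences):
    # Placeholder embedding for a self-contained demo; a real pipeline
    # would use a sentence encoder (e.g., a transformer-based model).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 384))

def diversity_score(embeddings):
    # Mean pairwise cosine distance across an MR's test-sentence embeddings.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    iu = np.triu_indices(len(embeddings), k=1)
    return float(np.mean(1.0 - sims[iu]))

def prioritize_mrs(mr_test_sentences):
    # mr_test_sentences: dict mapping MR name -> list of test sentences.
    # Returns MR names ranked by descending sentence diversity.
    scores = {mr: diversity_score(embed_sentences(s))
              for mr, s in mr_test_sentences.items()}
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    mrs = {
        "MR_gender_swap": ["He is a nurse.", "She is a nurse."],
        "MR_name_swap": ["John applied for the loan.",
                         "Aisha applied for the loan."],
        "MR_synonym": ["The candidate is qualified.",
                       "The applicant is qualified."],
    }
    print(prioritize_mrs(mrs))
```

The design intuition, under these assumptions, is that MRs whose test sentences span a more varied region of the embedding space exercise more distinct model behaviors, making them likelier to expose fairness violations early; this avoids the fault-labeling cost that fault-based prioritization requires.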
Citation
Giramata, S., Srinivasan, M., Gudivada, V. N., & Kanewala, U. (2025). Efficient Fairness Testing in Large Language Models: Prioritizing Metamorphic Relations for Bias Detection. In Proceedings - 2025 IEEE International Conference on Artificial Intelligence Testing, AITest 2025 (pp. 191–200). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/AITest66680.2025.00031