Abstract
Large Language Models (LLMs) are increasingly integrated into software applications. Downstream application developers often access LLMs through APIs provided as a service. However, LLM APIs are often updated silently and scheduled to be deprecated, forcing users to continuously adapt to evolving models. This can cause performance regression and affect prompt design choices, as evidenced by our case study on toxicity detection. Based on our case study, we emphasize the need for and re-examine the concept of regression testing for evolving LLM APIs. We argue that regression testing LLMs requires fundamental changes to traditional testing approaches, due to different correctness notions, prompting brittleness, and non-determinism in LLM APIs.
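As a hedged illustration of the kind of regression test the abstract argues for, the sketch below checks a toxicity-detection prompt against a small labeled set, samples each input several times to account for non-determinism, and compares accuracy to a baseline recorded under the previous API version. It is not the authors' implementation: `query_llm`, the prompt, the examples, the baseline, and the tolerance margin are all assumptions for illustration.

```python
# Hypothetical sketch: regression-testing a toxicity-detection prompt across LLM API versions.
# `query_llm` is a stand-in for whatever vendor SDK call the application uses (an assumption,
# not the paper's method).

PROMPT = "Answer 'toxic' or 'non-toxic' only. Is the following comment toxic?\n\n{comment}"

# Tiny illustrative labeled set; a real regression suite would be far larger.
LABELED_EXAMPLES = [
    ("You are a wonderful person.", "non-toxic"),
    ("Nobody wants you here, get lost.", "toxic"),
]

BASELINE_ACCURACY = 0.95   # accuracy measured against the previous API version (assumed)
SAMPLES_PER_INPUT = 5      # repeated sampling to smooth over non-deterministic outputs
REGRESSION_MARGIN = 0.05   # tolerated accuracy drop before the test fails


def query_llm(prompt: str) -> str:
    """Placeholder for the real API call (e.g., a chat-completion request)."""
    raise NotImplementedError("wire this to the LLM API under test")


def classify(comment: str) -> str:
    """Majority vote over several samples, since identical requests may return different labels."""
    votes = [query_llm(PROMPT.format(comment=comment)).strip().lower()
             for _ in range(SAMPLES_PER_INPUT)]
    return max(set(votes), key=votes.count)


def accuracy() -> float:
    correct = sum(classify(text) == label for text, label in LABELED_EXAMPLES)
    return correct / len(LABELED_EXAMPLES)


def test_no_regression_after_api_update():
    acc = accuracy()
    assert acc >= BASELINE_ACCURACY - REGRESSION_MARGIN, (
        f"accuracy dropped to {acc:.2f} after the API update "
        f"(baseline {BASELINE_ACCURACY:.2f})"
    )
```

The majority vote and the tolerance margin are one possible way to cope with the non-determinism and shifted correctness notions the abstract mentions; they are design choices in this sketch, not a method prescribed by the paper.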
Citation
Ma, W., Yang, C., & Kästner, C. (2024). (Why) is my prompt getting worse? Rethinking regression testing for evolving LLM APIs. In Proceedings of the 2024 IEEE/ACM 3rd International Conference on AI Engineering – Software Engineering for AI (CAIN 2024) (pp. 166–171). Association for Computing Machinery. https://doi.org/10.1145/3644815.3644950