Leveraging Large Language Models to Improve REST API Testing

Citations: 1
Readers: 15 (Mendeley users who have this article in their library)

Abstract

The widespread adoption of REST APIs, coupled with their growing complexity and size, has led to the need for automated REST API testing tools. Current tools focus on the structured data in REST API specifications but often neglect valuable insights available in the unstructured natural-language descriptions in the specifications, which leads to suboptimal test coverage. Recently, to address this gap, researchers have developed techniques that extract rules from these human-readable descriptions and query knowledge bases to derive meaningful input values. However, these techniques are limited in the types of rules they can extract and prone to producing inaccurate results. This paper presents RESTGPT, an innovative approach that leverages the power and intrinsic context-awareness of Large Language Models (LLMs) to improve REST API testing. RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from the natural-language descriptions in the specification. It then augments the original specification with these rules and values. Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation. Given these promising results, we outline future research directions for advancing REST API testing through LLMs.
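To make the workflow concrete, the sketch below illustrates the specification-augmentation step the abstract describes: rules extracted from a parameter's natural-language description are merged back into its OpenAPI-style schema. The `extract_rules` function here is a toy regex-based stand-in for RESTGPT's LLM-based extraction (the real system queries an LLM), and the `page_size` parameter is a hypothetical example, not one from the paper.

```python
import re

def extract_rules(description: str) -> dict:
    """Toy stand-in for LLM-based rule extraction: pull a numeric
    range constraint out of a natural-language description.
    (RESTGPT itself prompts an LLM for this step.)"""
    rules = {}
    match = re.search(r"between (\d+) and (\d+)", description)
    if match:
        rules["minimum"] = int(match.group(1))
        rules["maximum"] = int(match.group(2))
    return rules

def augment_parameter(param: dict) -> dict:
    """Merge extracted rules into the parameter's schema, mirroring
    the augmentation step: the original spec is preserved and only
    enriched with machine-interpretable constraints."""
    schema = dict(param.get("schema", {}))
    schema.update(extract_rules(param.get("description", "")))
    augmented = dict(param)
    augmented["schema"] = schema
    return augmented

# Hypothetical OpenAPI query parameter with an informal constraint
# buried in its description.
param = {
    "name": "page_size",
    "in": "query",
    "description": "Results per page; must be between 1 and 100.",
    "schema": {"type": "integer"},
}
print(augment_parameter(param)["schema"])
# {'type': 'integer', 'minimum': 1, 'maximum': 100}
```

A testing tool consuming the augmented specification can then generate in-range and boundary values directly from `minimum`/`maximum`, instead of guessing from the free-text description.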

Citation (APA)

Kim, M., Stennett, T., Shah, D., Sinha, S., & Orso, A. (2024). Leveraging Large Language Models to Improve REST API Testing. In Proceedings - International Conference on Software Engineering (pp. 37–41). IEEE Computer Society. https://doi.org/10.1145/3639476.3639769
