It is well known that the natural language components of software systems, such as comments and manuals, contain valuable information that can be used to improve system performance and reliability. Past research has extracted such information through task-specific machine learning models and tool chains. Here, we investigate a general, one-model-fits-all solution based on a state-of-the-art large language model (e.g., the GPT series). Our investigation covers three representative tasks: extracting locking rules from comments, synthesizing exception predicates from comments, and identifying performance-related configurations. It reveals challenges and opportunities in applying large language models to system maintenance tasks.
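To make the first task concrete, the sketch below shows one way a GPT-style model could be prompted to extract a locking rule from a kernel-style code comment. The prompt wording, the model name, and the example comment are illustrative assumptions for this listing; the paper's own prompts and task setups are described in the full text.

```python
# Minimal sketch (assumptions, not the paper's actual prompts or pipeline):
# ask a chat model whether a C comment encodes a locking rule and, if so,
# to restate it in a fixed, machine-checkable form.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COMMENT = "/* Callers must hold dev->lock before calling this function. */"

PROMPT = (
    "The following C comment may describe a locking rule. "
    "If it does, answer in the form '<lock> must be held before <action>'; "
    "otherwise answer 'no locking rule'.\n\n"
    f"Comment: {COMMENT}"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,  # favor deterministic extraction over creative answers
)

print(response.choices[0].message.content)
```

The other two tasks (synthesizing exception predicates and identifying performance-related configurations) would follow the same prompt-and-parse pattern, with task-specific output formats.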
CITATION STYLE
Su, Y., Wan, C., Sethi, U., Lu, S., Musuvathi, M., & Nath, S. (2023). HotGPT: How to Make Software Documentation More Useful with a Large Language Model? In HotOS 2023 - Proceedings of the 19th Workshop on Hot Topics in Operating Systems (pp. 87–93). Association for Computing Machinery, Inc. https://doi.org/10.1145/3593856.3595910