Abstract
Background: Container orchestration systems like Kubernetes rely on declarative manifest files that serve as blueprints for deployment. Managing these manifest files, however, presents complex challenges and requires significant DevOps expertise. Methodology: This position paper explores using Large Language Models (LLMs) to automate the generation of Kubernetes manifest files from natural language specifications through prompt engineering, aiming to simplify Kubernetes management. The study evaluates several LLMs using Zero-Shot, Few-Shot, and Prompt-Chaining techniques against DevOps requirements and their ability to support fully automated deployment pipelines. Results: LLMs can produce Kubernetes manifests with varying degrees of manual intervention, with GPT-4 and GPT-3.5 showing potential for fully automated deployments. Interestingly, smaller models sometimes outperform larger ones, questioning the assumption that bigger is always better. Conclusion: The study emphasizes that prompt engineering is critical to optimizing LLM outputs for Kubernetes. It suggests further research into prompt strategies and LLM comparisons and highlights a promising research direction for integrating LLMs into automatic deployment pipelines.
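
To make the Few-Shot technique mentioned in the abstract concrete, the sketch below shows how a natural-language specification plus one worked example could be sent to GPT-4 to obtain a Kubernetes manifest. This is a minimal illustration assuming the OpenAI Python client (version 1.x); the prompt wording, the example manifest, and the generate_manifest helper are illustrative assumptions, not the prompt templates evaluated in the paper.

# Minimal Few-Shot sketch for Kubernetes manifest generation.
# Assumes the OpenAI Python client (>= 1.0); prompts and the example
# specification are illustrative placeholders, not the paper's templates.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One worked example (specification -> manifest) steers the model toward
# emitting valid YAML only, without surrounding explanation.
FEW_SHOT_EXAMPLE = """\
Specification: Deploy the image nginx:1.25 with 2 replicas, expose port 80.
Manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
"""

def generate_manifest(specification: str, model: str = "gpt-4") -> str:
    """Ask the model for a Kubernetes manifest matching the specification."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You generate Kubernetes manifest files. "
                        "Reply with valid YAML only."},
            {"role": "user", "content": FEW_SHOT_EXAMPLE},
            {"role": "user",
             "content": f"Specification: {specification}\nManifest:"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_manifest(
        "Deploy the image redis:7 with 1 replica and expose port 6379."))

In a Prompt-Chaining variant, the returned YAML could be fed into a follow-up prompt (or a validator such as kubectl apply --dry-run=client) and the model asked to repair any reported errors before the manifest enters a deployment pipeline.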
Citation
Kratzke, N., & Drews, A. (2024). Don’t Train, Just Prompt: Towards a Prompt Engineering Approach for a More Generative Container Orchestration Management. In International Conference on Cloud Computing and Services Science, CLOSER - Proceedings (pp. 248–256). Science and Technology Publications, Lda. https://doi.org/10.5220/0012710300003711