Self-adaptive systems increasingly rely on machine learning techniques as black-box models to make decisions, even when the target world includes uncertainty and unknowns. Because of this lack of transparency, adaptation decisions, as well as their effects on the world, are hard to explain, which often hinders tracing unsuccessful adaptations back to understandable root causes. In this paper, we introduce our vision of explainable self-adaptation. We demonstrate this vision by instantiating our ideas on a running example in the robotics domain and by showing an automated proof-of-concept process that provides human-understandable explanations for both successful and unsuccessful adaptations in critical scenarios.
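To make the idea of tracing adaptations back to root causes more concrete, the sketch below shows one plausible shape for such bookkeeping: each black-box decision is recorded together with the monitored context that produced it, so that a failed adaptation can later be explained in human-readable terms. All identifiers here (AdaptationRecord, black_box_policy, explain) are hypothetical illustrations, not the paper's actual XSA process, and the "explanation" is a trivial trace standing in for a real explainability technique.

```python
# Hypothetical sketch only: these names are illustrative and do not
# come from the XSA paper.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class AdaptationRecord:
    """One adaptation decision plus the context needed to explain it later."""
    context: dict[str, float]           # monitored state at decision time
    decision: str                       # action chosen by the black-box model
    goal_satisfied: bool | None = None  # observed effect, filled in afterwards


def black_box_policy(context: dict[str, float]) -> str:
    """Stand-in for an opaque ML model that picks an adaptation action."""
    return "reduce_speed" if context["obstacle_distance"] < 1.0 else "keep_course"


def explain(record: AdaptationRecord) -> str:
    """Trace an outcome back to the monitored variables at decision time.

    A real system might instead fit an interpretable surrogate model
    over many such records to isolate the actual root cause.
    """
    drivers = ", ".join(f"{k}={v:.2f}" for k, v in record.context.items())
    outcome = "succeeded" if record.goal_satisfied else "failed"
    return f"Decision '{record.decision}' {outcome}; context: {drivers}"


# One loop iteration on a simulated robot scenario.
history: list[AdaptationRecord] = []
context = {"obstacle_distance": 0.6, "battery_level": 0.8}
record = AdaptationRecord(context, black_box_policy(context))
record.goal_satisfied = False  # suppose the robot still grazed the obstacle
history.append(record)

for r in history:
    if r.goal_satisfied is False:
        print(explain(r))
```

Running this prints a human-readable trace for the failed adaptation, linking the chosen action to the sensed state that triggered it; the point of the sketch is only that decisions and their context must be captured together for any such explanation to be possible.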
CITATION
Camilli, M., Mirandola, R., & Scandurra, P. (2022). XSA: EXplainable Self-Adaptation. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3551349.3559552