Using reinforcement learning to handle the runtime uncertainties in self-adaptive software

Abstract

The growing scale and complexity of software, together with highly dynamic running environments, lead to uncertainties in self-adaptive software’s decision making at run time. Self-adaptive software therefore needs the ability to prevent these uncertainties from degrading its quality attributes. However, existing planning methods cannot handle the two types of runtime uncertainty that arise from the complexity of the system itself and of its running environment. This paper proposes a reinforcement-learning-based planning method that handles both types of runtime uncertainty. To handle uncertainty from the system, the method replaces ineffective self-adaptive strategies with effective ones based on the observed effects of repeated executions at run time. To handle uncertainty from the environment, it plans dynamically by learning the relationship between system states and adaptation actions, and it can also generate new strategies to deal with previously unknown situations. Finally, we validate the proposed method on Bookstore, a complex distributed e-commerce system.
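
The abstract gives no implementation details, but the behaviour it describes (learning which adaptation actions work well in which system states, and replacing ineffective strategies according to observed execution effects) resembles tabular Q-learning. The sketch below is a minimal illustration under that assumption; the state names, action names, and the monitoring/execution hooks are hypothetical placeholders, not the paper's actual design or the Bookstore system's interfaces.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for adaptation-strategy selection.
# States, actions, and the reward signal are illustrative placeholders,
# not the method or interfaces described in the paper.

STATES = ["normal_load", "high_load", "degraded_response"]   # observed system states
ACTIONS = ["scale_out", "scale_in", "enable_cache", "no_op"]  # candidate adaptation strategies

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration rate

q_table = defaultdict(float)  # (state, action) -> estimated long-term value


def choose_action(state):
    """Epsilon-greedy selection: mostly exploit the best-known strategy,
    occasionally explore so ineffective strategies can be replaced over time."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


def update(state, action, reward, next_state):
    """Standard Q-learning update driven by the observed execution effect."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])


def adaptation_loop(observe_state, execute_strategy, steps=50):
    """One run-time adaptation loop: observe the system, apply a strategy,
    score its effect on the quality attributes, and learn from it.
    `observe_state` and `execute_strategy` are hypothetical hooks into the
    managed system's monitoring and execution layers."""
    state = observe_state()
    for _ in range(steps):
        action = choose_action(state)
        reward, next_state = execute_strategy(state, action)
        update(state, action, reward, next_state)
        state = next_state
```

In this reading, the learned Q-values encode the relationship between system states and actions, so the planner can select a strategy for a previously unseen situation rather than relying on a fixed strategy table.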

Cite

CITATION STYLE

APA

Wu, T., Li, Q., Wang, L., He, L., & Li, Y. (2018). Using reinforcement learning to handle the runtime uncertainties in self-adaptive software. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11176 LNCS, pp. 387–393). Springer Verlag. https://doi.org/10.1007/978-3-030-04771-9_28
