Development of the method of features learning and training decision rules for the prediction of violation of service level agreement in a cloud-based environment


Abstract

We developed an algorithm for learning a multilayer feature extractor, based on the ideas and methods of neural gas and sparse coding, for the problem of predicting violations of service level agreement (SLA) conditions in a cloud-based environment. The effectiveness of the proposed extractor was compared with that of an autoencoder using the results of physical simulation. It is shown that the proposed extractor requires approximately 1.6 times fewer training samples than the autoencoder to construct decision rules that are error-free on both the training and test samples. This makes it possible to put the prediction mechanisms controlling the corresponding cloud-based services into effect earlier.

To build the decision rules, it is proposed to transform the space of primary features using the computationally efficient comparison and exclusive OR (XOR) operations, constructing separate class containers in a radial basis of the binary space of secondary features. For the binary feature encoding, a modification of a population-based algorithm that searches for the maximum of Kullback's information criterion is proposed. The modification takes into account the compactness of images in the space of secondary features, which increases the gap between class distributions and reduces the negative effect of overfitting.

We explored how the decision accuracy of the SLA violation prediction system on the training and test samples depends on the parameters of the feature extractor and of the classifier. An extractor configuration acceptable in terms of accuracy and complexity was selected: two time windows, overlapping by 50 % in time and each reading 50 features, feed the input of the extractor; the first coding layer contains 30 basis vectors, and the second layer contains 20.
Thus, the intralayer pooling and non-linearity were formed by concatenating the sparse codes of the two windows and doubling the length of the resulting code in order to separate its positive and negative components, transforming it into a vector of non-negative features.
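As a minimal illustration of the pooling and non-linearity described above, the sketch below (the function name is ours, not the authors') concatenates the sparse codes of the two overlapping windows and then splits each component into its positive and negative parts, doubling the code length and yielding a vector of non-negative features:

```python
import numpy as np

def sign_split_features(code_a, code_b):
    """Concatenate the sparse codes of two overlapping windows, then
    split each value into positive and negative parts. The result is
    twice as long as the concatenated code and contains only
    non-negative features."""
    code = np.concatenate([code_a, code_b])
    positive_part = np.maximum(code, 0.0)   # keeps positive components
    negative_part = np.maximum(-code, 0.0)  # magnitudes of negative components
    return np.concatenate([positive_part, negative_part])
```

For example, the codes `[1, -2, 0]` and `[0.5]` produce the non-negative vector `[1, 0, 0, 0.5, 0, 2, 0, 0]`.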
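The comparison and XOR operations used to build the binary space of secondary features can be sketched as follows. The tolerance-interval encoding and the Hamming distance to a class container center are our assumptions about how such a binary radial basis is typically realized, not the authors' exact procedure:

```python
import numpy as np

def binary_encode(x, lower, upper):
    """Comparison step: bit i is 1 iff feature i falls inside the
    tolerance interval [lower_i, upper_i] (assumed encoding)."""
    return ((x >= lower) & (x <= upper)).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """XOR step: distance in the binary Hamming space, counting the
    bit positions where the two codes differ."""
    return int(np.sum(code_a ^ code_b))
```

A sample would then be assigned to the class whose container center lies within the smallest Hamming radius, using only cheap comparisons and bitwise operations.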
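The exact form of the Kullback criterion is not given in the abstract; the sketch below assumes one common variant used in information-extreme learning methods, expressed through the error rates of the first kind (alpha) and second kind (beta), and serves only to show the quantity the population-based search would maximize:

```python
import numpy as np

def kullback_criterion(alpha, beta):
    """Assumed form of Kullback's information criterion: it grows as
    the total error rate alpha + beta shrinks, and equals zero when
    decisions are no better than chance (alpha + beta = 1)."""
    total_error = alpha + beta
    return (1.0 - total_error) * np.log2((2.0 - total_error) / total_error)
```

The population algorithm would evolve candidate binary encodings (tolerance intervals and container radii) and keep those that maximize this criterion, with the proposed modification additionally rewarding compact class images.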


APA

Moskalenko, V., Moskalenko, A., Pimonenko, S., & Korobov, A. (2017). Development of the method of features learning and training decision rules for the prediction of violation of service level agreement in a cloud-based environment. Eastern-European Journal of Enterprise Technologies, 5(2–89), 26–33. https://doi.org/10.15587/1729-4061.2017.110073
