Env2Vec: Accelerating VNF testing with deep learning

Abstract

The adoption of fast-paced practices for developing virtual network functions (VNFs) allows for continuous software delivery and creates a market advantage for network operators. This adoption, however, is problematic for test engineers, who need to assure, within shorter development cycles, the quality of highly configurable product releases running on heterogeneous clouds. Machine learning (ML) can accelerate testing workflows by detecting performance issues in new software builds. However, the overhead of maintaining several models for all combinations of build types, network configurations, and other stack parameters can quickly become prohibitive and make the application of ML infeasible. We propose Env2Vec, a deep learning architecture that combines contextual features with historical resource usage, and characterizes the various stack parameters that influence the test execution within an embedding space, which allows it to generalize model predictions to previously unseen environments. We integrate a single ML model in the testing workflow to automatically debug errors and pinpoint performance bottlenecks. Results obtained with real testing data show an accuracy between 86.2%-100%, while reducing the false alarm rate by 20.9%-38.1% when reporting performance issues compared to state-of-the-art approaches.
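To illustrate the idea described in the abstract, the sketch below shows one way a single model could combine an embedding of categorical stack parameters (e.g., build type, network configuration) with an encoding of historical resource-usage traces. This is a minimal PyTorch-style sketch, not the authors' implementation; the layer sizes, feature counts, and classification head are illustrative assumptions.

# Minimal sketch (not the authors' code): embed contextual stack parameters
# and fuse them with an encoding of historical resource usage, so one model
# can score builds across environment combinations.
import torch
import torch.nn as nn

class Env2VecSketch(nn.Module):
    def __init__(self, n_build_types, n_net_configs, embed_dim=16,
                 usage_features=4, hidden_dim=32, n_classes=2):
        super().__init__()
        # Embed each contextual (environment) parameter into a shared space.
        self.build_embed = nn.Embedding(n_build_types, embed_dim)
        self.net_embed = nn.Embedding(n_net_configs, embed_dim)
        # Encode historical resource usage (e.g., CPU/memory time series).
        self.usage_encoder = nn.GRU(usage_features, hidden_dim, batch_first=True)
        # Classification head, e.g., a performance-regression flag (assumed).
        self.head = nn.Linear(2 * embed_dim + hidden_dim, n_classes)

    def forward(self, build_id, net_id, usage_seq):
        # usage_seq: (batch, time, usage_features)
        ctx = torch.cat([self.build_embed(build_id),
                         self.net_embed(net_id)], dim=-1)
        _, h = self.usage_encoder(usage_seq)   # h: (1, batch, hidden_dim)
        return self.head(torch.cat([ctx, h[-1]], dim=-1))

# Example: score a new build on a previously unseen parameter combination.
model = Env2VecSketch(n_build_types=8, n_net_configs=12)
logits = model(torch.tensor([3]), torch.tensor([7]),
               torch.randn(1, 50, 4))          # 50 time steps of usage data

Because the environment is represented as learned embeddings rather than one model per configuration, predictions can generalize to parameter combinations that never appeared in training, which is the property the abstract highlights.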

Citation (APA)

Piao, G., Nicholson, P. K., & Lugones, D. (2020). Env2Vec: Accelerating VNF testing with deep learning. In Proceedings of the 15th European Conference on Computer Systems, EuroSys 2020. Association for Computing Machinery. https://doi.org/10.1145/3342195.3387525
