Predicting and Testing Latencies with Deep Learning: An IoT Case Study


Abstract

The Internet of Things (IoT) is spreading into the everyday life of millions of people. However, the quality of the underlying communication technologies is still questionable. In this work, we analyse the performance of an implementation of MQTT, a major communication protocol of the IoT. We perform model-based test-case generation to produce log data for training a neural network. This neural network is then applied to predict latencies depending on features such as the number of active clients. The predictions are integrated into our initial functional model, and we exploit the resulting timed model for statistical model checking. This allows us to answer questions about the expected performance under various usage scenarios. The benefit of our approach is that it enables a convenient extension of a functional model with timing aspects via deep learning. A comparison to our previous work with linear regression shows that deep learning needs less manual effort in data preprocessing and provides significantly better predictions.
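The core of the pipeline is a regression model trained on test logs: feature vectors describing the usage scenario are mapped to observed latencies, and the trained predictor then annotates the functional model with timing. The following is a minimal sketch of such a latency regressor, assuming scikit-learn; the feature names and the synthetic data are purely hypothetical, as the paper's actual network architecture and feature set are not given here.

    # Hedged sketch (not the authors' implementation): a feed-forward
    # neural network predicting MQTT latencies from usage features.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical log data: each row is (active_clients, payload_bytes,
    # publish_rate_per_s); the target is the observed latency in ms.
    n = 5000
    X = np.column_stack([
        rng.integers(1, 100, n),     # number of active clients
        rng.integers(16, 4096, n),   # message payload size in bytes
        rng.uniform(0.1, 50.0, n),   # publish rate per second
    ])
    # Synthetic latency that grows with load, plus noise (illustration only).
    y = (2.0 + 0.05 * X[:, 0] + 0.001 * X[:, 1] + 0.1 * X[:, 2]
         + rng.normal(0, 0.5, n))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    scaler = StandardScaler().fit(X_train)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0)
    model.fit(scaler.transform(X_train), y_train)

    print("R^2 on held-out data:",
          model.score(scaler.transform(X_test), y_test))
    # The trained predictor could then supply expected latencies for the
    # transitions of the timed model used in statistical model checking.

A predictor like this can be queried for scenarios never observed during testing, which is what makes the statistical-model-checking step over varied usage profiles feasible.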

Citation (APA)

Aichernig, B. K., Pernkopf, F., Schumi, R., & Wurm, A. (2019). Predicting and Testing Latencies with Deep Learning: An IoT Case Study. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11823 LNCS, pp. 93–111). Springer. https://doi.org/10.1007/978-3-030-31157-5_7
