Explainable Machine Learning for LoRaWAN Link Budget Analysis and Modeling

Citations: 3
Readers: 18 (Mendeley)

Abstract

This article explores the convergence of artificial intelligence and the challenges of precise planning for LoRa networks. It examines machine learning algorithms, in conjunction with empirically collected data, to develop an effective propagation model for LoRaWAN. We propose decoupling feature extraction from regression analysis, which reduces the training data requirements. In our comparative analysis, decision-tree-based gradient boosting achieved the lowest root-mean-squared error of 5.53 dBm. A further advantage of this model is its interpretability, which we exploit to qualitatively observe the governing propagation mechanisms. This approach provides a unique opportunity to understand, in practical terms, how signal strength depends on the other variables. The analysis revealed a 1.5 dBm sensitivity improvement as the LoRa spreading factor changed from 7 to 12. The impact of clutter was found to be highly non-linear: attenuation grew strongly with increasing clutter up to a certain point, beyond which additional clutter had little effect. The outcome of this work is a more accurate estimation and a better understanding of LoRa propagation. Consequently, it mitigates the challenges associated with large-scale and dense LoRaWAN deployments, enabling improved link budget analysis, interference management, quality of service, scalability, and energy efficiency in Internet of Things networks.
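The modeling approach described above can be illustrated with a minimal sketch: a decision-tree-based gradient boosting regressor trained to predict received signal strength from link features, evaluated by RMSE. The data here is synthetic (a toy log-distance path-loss model with an SF-dependent offset and a saturating clutter term, mimicking the non-linear clutter effect reported in the abstract); the feature set, model hyperparameters, and generative formula are assumptions for illustration, not the authors' dataset or method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
distance = rng.uniform(50, 5000, n)        # link distance in metres (assumed range)
sf = rng.integers(7, 13, n)                # LoRa spreading factor, 7..12
clutter = rng.uniform(0, 1, n)             # clutter density proxy (assumed feature)

# Toy ground truth: log-distance path loss, a small SF-dependent offset,
# and a clutter attenuation that saturates (non-linear clutter effect).
rssi = (-40
        - 10 * 3.0 * np.log10(distance)
        - 0.3 * (sf - 7)
        - 8 * np.minimum(clutter, 0.6)
        + rng.normal(0, 3, n))             # shadowing noise, 3 dB std

X = np.column_stack([distance, sf, clutter])
X_tr, X_te, y_tr, y_te = train_test_split(X, rssi, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.2f} dBm")

# Tree ensembles expose per-feature importances, which is one route to the
# kind of interpretability the article exploits.
print(dict(zip(["distance", "sf", "clutter"], model.feature_importances_)))
```

On this synthetic data the learned model recovers an RMSE close to the injected 3 dB noise floor; on real measurements, error figures like the article's 5.53 dBm additionally reflect unmodeled environmental variation.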

Citation (APA)

Hosseinzadeh, S., Ashawa, M., Owoh, N., Larijani, H., & Curtis, K. (2024). Explainable Machine Learning for LoRaWAN Link Budget Analysis and Modeling. Sensors, 24(3). https://doi.org/10.3390/s24030860
