In this paper, we investigate dynamic resource selection in dense deployments of 6G mobile in-X subnetworks (inXSs). We cast resource selection in inXSs as a multi-objective optimization problem involving maximization of the minimum capacity per inXS while minimizing the overhead from intra-subnetwork signaling. Since inXSs are expected to be autonomous, each inXS makes selection decisions based on its local information, without signaling exchange with other inXSs. We then develop a multi-agent Q-learning (MAQL) method based on limited sensing information (SI), which results in low intra-subnetwork SI signaling. We further propose a rule-based algorithm, termed Q-Heuristics, that performs resource selection using limited information similar to that of the MAQL method. We perform simulations with a focus on joint channel and transmit power selection. The results indicate that: (1) appropriate settings of the Q-learning parameters lead to fast convergence of the MAQL method even with two-level quantization of the SI, and (2) the proposed MAQL approach performs significantly better and is more robust to sensing and switching delays than the best baseline heuristic. The proposed Q-Heuristics algorithm performs similarly to the baseline greedy method at the 50th percentile of the per-user capacity and slightly better at lower percentiles, and it shows high robustness to the sensing interval, quantization threshold, and switching delay.
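As a rough illustration of the decision loop the abstract describes, the sketch below implements an independent Q-learning agent per subnetwork: the state is a two-level (binary) quantization of the per-channel sensed interference, and each action is a joint (channel, transmit power) pair. This is a minimal sketch under stated assumptions, not the paper's exact formulation; the constants (N_CHANNELS, POWER_LEVELS, SENSE_THRESHOLD), the reward shape, and the learning parameters are all illustrative.

```python
import numpy as np

# Minimal sketch of per-subnetwork independent Q-learning for joint channel
# and transmit power selection from two-level quantized sensing information.
# All constants and the reward below are illustrative assumptions.

N_CHANNELS = 4
POWER_LEVELS = [0.1, 1.0]              # candidate transmit powers (W), assumed
ACTIONS = [(c, p) for c in range(N_CHANNELS) for p in POWER_LEVELS]
SENSE_THRESHOLD = 1e-9                 # two-level SI quantization threshold (W), assumed


class SubnetworkAgent:
    """One autonomous inXS selecting resources from its local sensing information only."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # One row per binary interference pattern, one column per (channel, power) action.
        self.q = np.zeros((2 ** N_CHANNELS, len(ACTIONS)))

    def quantize(self, sensed):
        """Map per-channel sensed interference to a single two-level-quantized state index."""
        bits = (np.asarray(sensed) > SENSE_THRESHOLD).astype(int)
        return int("".join(map(str, bits)), 2)

    def act(self, state):
        """Epsilon-greedy selection over joint (channel, power) actions."""
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        td_target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (td_target - self.q[state, action])


# Toy training loop: each agent quantizes its locally sensed interference,
# picks an action, and is rewarded with a Shannon-capacity-like rate proxy.
agents = [SubnetworkAgent() for _ in range(3)]
for step in range(1000):
    sensed = np.random.exponential(1e-9, size=(len(agents), N_CHANNELS))  # stand-in for real SI
    for agent, si in zip(agents, sensed):
        state = agent.quantize(si)
        action = agent.act(state)
        ch, pw = ACTIONS[action]
        reward = np.log2(1.0 + pw / (si[ch] + 1e-12))  # crude per-link rate proxy, assumed
        # A full simulator would re-sense after acting; the same state is reused here for brevity.
        agent.update(state, action, reward, state)
```

In the paper's setting, the reward would instead reflect the max-min capacity objective and agents would interact through a realistic mobility and interference model; the sketch only shows the mechanics of Q-learning over quantized SI with a joint channel/power action space.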