Scalarized Q multi-objective reinforcement learning for area coverage control and light control implementation

Abstract

Coverage control is crucial for the deployment of wireless sensor networks (WSNs). However, most coverage control schemes are based on single-objective optimization, such as maximizing coverage area alone, and do not consider other, conflicting objectives such as energy consumption, the number of working nodes, and wasteful overlapping areas. This paper proposes a Multi-Objective Optimization (MOO) coverage control scheme called Scalarized Q Multi-Objective Reinforcement Learning (SQMORL). The two objectives are to maximize the area coverage and to minimize the overlapping area so as to reduce energy consumption. Performance is evaluated both in simulation and on a multi-agent lighting control testbed. Simulation results show that SQMORL achieves more efficient area coverage with fewer working nodes than other existing schemes. The hardware testbed results show that the SQMORL algorithm can find the optimal policy with good accuracy across repeated runs.
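
Concretely, scalarized multi-objective Q-learning keeps one Q-table per objective and collapses them through a scalarization function (here, a weighted sum) when selecting actions, so all objectives share a single greedy policy. The Python sketch below illustrates only this standard formulation; the state/action sizes, learning parameters, objective weights, and reward vector are assumptions for illustration and do not reproduce the paper's coverage-control environment.

    import numpy as np

    # Illustrative sketch of scalarized multi-objective Q-learning with a
    # linear scalarization function. The sizes, weights, and rewards below
    # are hypothetical placeholders, not the paper's coverage-control setup.

    N_STATES, N_ACTIONS, N_OBJECTIVES = 10, 4, 2   # e.g. coverage, overlap
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed hyperparameters
    weights = np.array([0.7, 0.3])                 # assumed objective weights

    # One Q-table per objective: Q[o, s, a].
    Q = np.zeros((N_OBJECTIVES, N_STATES, N_ACTIONS))
    rng = np.random.default_rng(0)

    def scalarized_q(state):
        # Collapse the per-objective Q-values into one value per action.
        return weights @ Q[:, state, :]            # shape: (N_ACTIONS,)

    def select_action(state):
        # Epsilon-greedy over the scalarized Q-values.
        if rng.random() < EPSILON:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(scalarized_q(state)))

    def update(state, action, reward_vec, next_state):
        # Per-objective TD update; the greedy next action is chosen on the
        # scalarized values, so every objective shares a single policy.
        a_next = int(np.argmax(scalarized_q(next_state)))
        for o in range(N_OBJECTIVES):
            td_target = reward_vec[o] + GAMMA * Q[o, next_state, a_next]
            Q[o, state, action] += ALPHA * (td_target - Q[o, state, action])

    # Hypothetical single learning step: reward_vec packs both objectives
    # (e.g. coverage gained, overlap penalty).
    s = 0
    a = select_action(s)
    update(s, a, reward_vec=np.array([1.0, -0.2]), next_state=1)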

Citation (APA)

Phuphanin, A., & Usaha, W. (2018). Scalarized Q multi-objective reinforcement learning for area coverage control and light control implementation. ECTI Transactions on Electrical Engineering, Electronics, and Communications, 16(2), 72–82. https://doi.org/10.37936/ecti-eec.2018162.171333
