Abstract
Learning an action policy for autonomous agents in a decentralized multi-agent environment remains an interesting but difficult research problem. We model this problem in a contextual bandit setting with delayed reward signals, specifically an individual short-term reward signal and a shared long-term reward signal. Our algorithm uses reward oracles to directly model these delayed reward signals, and it relies on a learning scheme that benefits from the sampling guidance of an expert-designed policy. The algorithm is expected to apply to a wide range of problems, including those with constraints on accessing state transitions and those with implicit reward information. A demonstration, implemented with deep learning regressors, shows the effectiveness of the proposed algorithm in learning an offensive action policy in the RoboCup Soccer 2D Simulation (RCSS) environment against a well-known benchmark adversary team, compared with a baseline policy.
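To make the setting concrete, the sketch below gives one plausible reading of the abstract's ingredients: a contextual bandit agent whose two reward oracles (regressors over context-action features) estimate the delayed individual short-term and shared long-term signals, and whose action sampling is guided part of the time by an expert-designed policy. All names and parameters here (BanditAgentWithOracles, guidance_prob, mix, the MLP regressors) are illustrative assumptions, not the authors' actual implementation.

import numpy as np
from sklearn.neural_network import MLPRegressor

class BanditAgentWithOracles:
    """Minimal sketch: contextual bandit with two reward oracles and
    sampling guidance from an expert-designed policy (names illustrative)."""

    def __init__(self, n_actions, expert_policy, guidance_prob=0.5, mix=0.5):
        self.n_actions = n_actions
        self.expert_policy = expert_policy  # callable: context -> action index
        self.guidance_prob = guidance_prob  # chance of following the expert
        self.mix = mix                      # weight on the long-term oracle
        # One regressor per delayed signal: individual short-term, shared long-term.
        self.short_oracle = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
        self.long_oracle = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
        self.buffer = []                    # (context, action, r_short, r_long)
        self.fitted = False

    def _features(self, context, action):
        # Context concatenated with a one-hot encoding of the action.
        return np.concatenate([context, np.eye(self.n_actions)[action]])

    def act(self, context, rng):
        # Sampling guidance: defer to the expert policy with some probability,
        # and always before the oracles have been fitted.
        if not self.fitted or rng.random() < self.guidance_prob:
            return self.expert_policy(context)
        # Otherwise act greedily on a convex combination of the two oracles.
        X = np.stack([self._features(context, a) for a in range(self.n_actions)])
        scores = ((1.0 - self.mix) * self.short_oracle.predict(X)
                  + self.mix * self.long_oracle.predict(X))
        return int(np.argmax(scores))

    def observe(self, context, action, r_short, r_long):
        # Record a sample once its delayed rewards have resolved.
        self.buffer.append((np.asarray(context), action, r_short, r_long))

    def refit(self):
        # Refit both oracles on all resolved samples.
        X = np.stack([self._features(c, a) for c, a, _, _ in self.buffer])
        self.short_oracle.fit(X, [t[2] for t in self.buffer])
        self.long_oracle.fit(X, [t[3] for t in self.buffer])
        self.fitted = True

A training loop under these assumptions would alternate act, environment execution, observe once the delayed signals resolve, and periodic refit calls.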
Citation
Li, M., & Nguyen, Q. D. (2021). Contextual Bandit Learning with Reward Oracles and Sampling Guidance in Multi-Agent Environments. IEEE Access, 9, 96641–96657. https://doi.org/10.1109/ACCESS.2021.3094623