Triple-GAIL: A multi-modal imitation learning framework with generative adversarial nets


Abstract

Generative adversarial imitation learning (GAIL) has shown promising results by taking advantage of generative adversarial nets, especially in the field of robot learning. However, its reliance on isolated, single-modal demonstrations limits the approach's scalability to real-world scenarios, such as autonomous vehicles that demand a proper understanding of human drivers' behavior. In this paper, we propose Triple-GAIL, a novel multi-modal GAIL framework that learns skill selection and imitation jointly, from both expert demonstrations and continuously generated experiences used for data augmentation, by introducing an auxiliary skill selector. We provide theoretical guarantees that both the generator and the selector converge to their respective optima. Experiments on real driver trajectories and real-time strategy game datasets demonstrate that Triple-GAIL fits multi-modal behaviors closer to the demonstrators' and outperforms state-of-the-art methods.
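
The abstract describes three coupled networks: a skill-conditioned generator (policy), an auxiliary skill selector, and a discriminator trained adversarially against both. Below is a minimal, illustrative PyTorch sketch of that structure, not the authors' implementation; all dimensions, architectures, and the toy adversarial step are assumptions made for illustration.

import torch
import torch.nn as nn

# Assumed toy sizes; the paper does not specify these.
STATE_DIM, ACTION_DIM, SKILL_DIM = 10, 4, 3

class Generator(nn.Module):
    """Skill-conditioned policy: maps (state, skill) to an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + SKILL_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, state, skill):
        return self.net(torch.cat([state, skill], dim=-1))

class Selector(nn.Module):
    """Auxiliary skill selector: infers a skill distribution from (state, action)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, SKILL_DIM))
    def forward(self, state, action):
        return torch.softmax(self.net(torch.cat([state, action], dim=-1)), dim=-1)

class Discriminator(nn.Module):
    """Scores (state, action, skill) tuples as expert vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + SKILL_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, state, action, skill):
        return self.net(torch.cat([state, action, skill], dim=-1))

# One schematic adversarial step on a random toy batch: the discriminator
# separates expert tuples from generated ones, while the generator and
# selector are trained to fool it (optimizer steps omitted for brevity).
gen, sel, disc = Generator(), Selector(), Discriminator()
bce = nn.BCEWithLogitsLoss()
s = torch.randn(8, STATE_DIM)          # states
a_exp = torch.randn(8, ACTION_DIM)     # stand-in for expert actions
k = sel(s, a_exp)                      # selector labels the demonstrations
a_gen = gen(s, k)                      # generator imitates, given the skill
d_loss = bce(disc(s, a_exp, k.detach()), torch.ones(8, 1)) + \
         bce(disc(s, a_gen.detach(), k.detach()), torch.zeros(8, 1))
g_loss = bce(disc(s, a_gen, k), torch.ones(8, 1))

In the paper's formulation, the selector-labelled generated experiences serve as data augmentation for training; the sketch mirrors that flow only schematically.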

Citation (APA)

Fei, C., Wang, B., Zhuang, Y., Zhang, Z., Hao, J., Zhang, H., … Liu, W. (2020). Triple-GAIL: A multi-modal imitation learning framework with generative adversarial nets. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2929–2935). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/405
