Bandit structured prediction describes a stochastic optimization framework where learning is performed from partial feedback. This feedback is received in the form of a task loss evaluation of a predicted output structure, without having access to gold standard structures. We advance this framework by lifting linear bandit learning to neural sequence-to-sequence learning problems using attention-based recurrent neural networks. Furthermore, we show how to incorporate control variates into our learning algorithms for variance reduction and improved generalization. We present an evaluation on a neural machine translation task that shows improvements of up to 5.89 BLEU points for domain adaptation from simulated bandit feedback.
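The learning protocol summarized above (sample an output structure, observe only its task loss as bandit feedback, and update with a score-function gradient from which a control-variate baseline is subtracted) can be illustrated with a short sketch. The code below assumes a PyTorch seq2seq model with a hypothetical sample(src) method returning the sampled tokens and their summed log-probability; it is an illustration of the general scheme under these assumptions, not the authors' exact algorithm.

    def bandit_step(model, optimizer, src, feedback_fn, baseline, alpha=0.05):
        """One stochastic update from bandit feedback (illustrative sketch).

        model       : PyTorch seq2seq model; model.sample(src) is an assumed helper
                      returning (sampled token ids, summed log-probability tensor)
        optimizer   : a torch.optim optimizer over the model parameters
        feedback_fn : black-box task loss Delta(y) in [0, 1], e.g. 1 minus
                      per-sentence BLEU against a withheld reference (simulated feedback)
        baseline    : running average of past losses, used as the control variate
        """
        # Sample an output y ~ p_theta(.|x) and obtain its log-probability.
        y, log_prob = model.sample(src)

        # Partial feedback: only the task loss of the sampled output is observed.
        delta = feedback_fn(y)

        # Score-function objective with the baseline subtracted for variance
        # reduction; its gradient is an estimate of grad E[Delta(y)].
        loss = (delta - baseline) * log_prob

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Update the running-average baseline (the control variate).
        baseline = (1 - alpha) * baseline + alpha * delta
        return baseline

In this formulation the baseline does not change the expected gradient but reduces its variance, which is the role the control variates play in the learning algorithms evaluated in the paper.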
Citation
Kreutzer, J., Sokolov, A., & Riezler, S. (2017). Bandit structured prediction for neural sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1503–1513. Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1138