Abstract
We present a theoretical analysis of online parameter tuning in statistical machine translation (SMT) from a coactive learning perspective. This view allows us to give regret and generalization bounds for latent perceptron algorithms that are common in SMT but fall outside the standard convex optimization scenario. Coactive learning also introduces the concept of weak feedback, which we apply to SMT in a proof-of-concept experiment: learning from feedback that consists of only slight improvements over the system's predictions still leads to convergence in regret and in translation error rate. This suggests that coactive learning may be a viable framework for interactive machine translation. Furthermore, we find that surrogate translations, which replace references unreachable in the decoder's search space, can be interpreted as weak feedback and likewise lead to convergence in learning, provided they admit an underlying linear model.
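To make the learning protocol concrete, here is a minimal, self-contained sketch of the weak-feedback (coactive) perceptron update the abstract refers to. The feature map `phi`, the toy candidate search space, and the simulated hidden utility `w_star` are illustrative assumptions, not the paper's actual SMT components; the only part taken from the coactive learning setting is the update `w += phi(x, y_bar) - phi(x, y_hat)`, applied whenever the feedback `y_bar` is preferred, if only slightly, to the prediction `y_hat`.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10

def phi(x, y):
    """Toy joint feature map for input x and output y (placeholder, not the paper's)."""
    return np.tanh(x * y)  # elementwise product squashed to a fixed-size vector

def decode(w, x, candidates):
    """Linear-model prediction: argmax_y w . phi(x, y) over a toy search space."""
    scores = [w @ phi(x, y) for y in candidates]
    return candidates[int(np.argmax(scores))]

# Hidden linear utility standing in for the user; its existence is exactly the
# "underlying linear model" assumption mentioned in the abstract.
w_star = rng.normal(size=DIM)
w = np.zeros(DIM)  # learned weights

for t in range(1000):
    x = rng.normal(size=DIM)                                # toy "source sentence"
    candidates = [rng.normal(size=DIM) for _ in range(5)]   # toy "translations"
    y_hat = decode(w, x, candidates)

    # Weak feedback: the user returns some candidate that is only slightly
    # better than the prediction under the hidden utility, if one exists.
    better = [y for y in candidates
              if w_star @ phi(x, y) > w_star @ phi(x, y_hat)]
    if better:
        y_bar = min(better, key=lambda y: float(w_star @ phi(x, y)))
        # Coactive perceptron update: move toward feedback, away from prediction.
        w += phi(x, y_bar) - phi(x, y_hat)
```

Note that `y_bar` is never required to be an optimal translation; under the coactive learning analysis, updates from such slight improvements are what drive the convergence in regret claimed in the abstract.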
Sokolov, A., Riezler, S., & Cohen, S. B. (2015). A coactive learning view of online structured prediction in statistical machine translation. In Proceedings of the 19th Conference on Computational Natural Language Learning (CoNLL 2015), pp. 1–11. Association for Computational Linguistics. https://doi.org/10.18653/v1/k15-1001