Abstract
We propose a boosting method for conversational models to generate more human-like dialogs. In our method, we treat existing conversational models as weak generators and apply Adaboost to update them. However, conventional Adaboost cannot be applied directly to conversational models, because it cannot adaptively adjust the instance weights for subsequent learning: its simple comparison between the true output y (for an input x) and the corresponding predicted output y' cannot effectively evaluate the learning performance on x. To address this issue, we develop Adaboost with Auto-Evaluation (called AwE). In AwE, an auto-evaluator is proposed to evaluate the predicted results, which makes Adaboost applicable to conversational models. Furthermore, we present a theoretical analysis showing that the training error drops exponentially fast only if a certain assumption on the proposed auto-evaluator holds. Finally, we empirically show that AwE visibly boosts the performance of existing single conversational models and also outperforms other ensemble methods for conversational models.
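The abstract does not spell out the paper's exact update rule, but the idea of replacing Adaboost's exact-match test with an auto-evaluator score can be illustrated with a minimal sketch. In the sketch below, `train_generator` and `auto_eval` are hypothetical placeholders (a routine that fits a weak generator on weighted dialog pairs, and an evaluator returning a quality score assumed to lie in [0, 1]); the weight update follows the standard real-valued Adaboost form rather than the paper's specific formulation.

```python
import math

def adaboost_with_auto_evaluation(train_pairs, train_generator, auto_eval, rounds=3):
    """Illustrative AdaBoost-style loop in which an auto-evaluator score
    (assumed to lie in [0, 1]) replaces the exact-match comparison that
    standard AdaBoost would use to reweight training instances.

    train_pairs:     list of (input, reference_response) tuples
    train_generator: callable(train_pairs, weights) -> generator, where
                     generator(x) returns a predicted response for input x
    auto_eval:       callable(x, reference, prediction) -> score in [0, 1]
    """
    n = len(train_pairs)
    weights = [1.0 / n] * n   # uniform initial instance weights
    ensemble = []             # list of (alpha, generator) pairs

    for _ in range(rounds):
        gen = train_generator(train_pairs, weights)

        # Per-instance loss = 1 - auto-evaluator score of the generated reply.
        losses = [1.0 - auto_eval(x, y, gen(x)) for x, y in train_pairs]
        err = sum(w * l for w, l in zip(weights, losses))
        err = min(max(err, 1e-10), 1.0 - 1e-10)   # keep alpha finite

        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, gen))

        # Up-weight instances the current generator handled poorly,
        # down-weight those it handled well, then renormalize.
        weights = [w * math.exp(alpha * (2.0 * l - 1.0))
                   for w, l in zip(weights, losses)]
        z = sum(weights)
        weights = [w / z for w in weights]

    return ensemble
```

The key design point mirrored from the abstract is that the evaluator score, not a y == y' check, drives the reweighting, which is what makes the boosting loop meaningful for open-ended generation tasks.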
Citation
Li, J., Luo, P., Zhou, G., Lin, F., & Niu, C. (2018). Adaboost with auto-evaluation for conversational models. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 4173–4179). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/580