Unified Contextual Query Rewriting

Abstract

Query rewriting (QR) is an important technique for reducing user friction (i.e., recovering from ASR or system errors) and for contextual carryover (i.e., ellipsis and co-reference) in conversational AI systems. Recently, generation-based QR models have achieved promising results on these two tasks separately. Although the tasks share many similarities, such as using the previous dialogue along with the current request as model input, no unified model exists to solve them jointly. To this end, we propose a unified contextual query rewriting model that handles QR for both friction reduction and contextual carryover. Moreover, we incorporate multiple auxiliary tasks, such as trigger prediction and NLU interpretation, to boost rewrite performance. We leverage a unified text-to-text framework that treats the tasks as independent, using a weighted loss to account for task importance. We then propose new unified multitask learning strategies, including a sequential model that outputs a single sentence covering multiple tasks, and a hybrid model in which some tasks are independent and others are generated sequentially. Our experimental results demonstrate the effectiveness of the proposed unified learning methods.
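
As a rough, hypothetical illustration of the weighted-loss formulation mentioned in the abstract, the sketch below combines per-task losses (the rewrite task plus auxiliary trigger-prediction and NLU-interpretation tasks) into a single training objective via task-importance weights. The function name, task names, and weight values are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming PyTorch; task names and weights are illustrative.
import torch

def weighted_multitask_loss(task_losses: dict, task_weights: dict) -> torch.Tensor:
    """Weighted sum of per-task losses, where weights encode task importance."""
    total = torch.zeros(())
    for task, loss in task_losses.items():
        # Default weight 1.0 for any task without an explicit weight.
        total = total + task_weights.get(task, 1.0) * loss
    return total

# Illustrative per-task losses for one training batch.
losses = {
    "query_rewrite": torch.tensor(1.2),       # main rewrite generation loss
    "trigger_prediction": torch.tensor(0.4),  # auxiliary task
    "nlu_interpretation": torch.tensor(0.7),  # auxiliary task
}
weights = {"query_rewrite": 1.0, "trigger_prediction": 0.3, "nlu_interpretation": 0.3}
print(weighted_multitask_loss(losses, weights))  # scalar training objective
```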

Cite (APA)

Zhou, Y., Hao, J., Rungta, M., Liu, Y., Cho, E., Fan, X., … Tur, G. (2023). Unified Contextual Query Rewriting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 608–615). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.58
