Entity Contrastive Learning in a Large-Scale Virtual Assistant System

Abstract

Conversational agents are typically built with domain classification (DC) and intent classification (IC) models that identify the general subject an utterance belongs to and the specific action a user wishes to achieve. In addition, named entity recognition (NER) performs per-token labeling to identify specific entities of interest in a spoken utterance. We investigate improving joint IC and NER models using entity contrastive learning, which attempts to cluster similar entities together in a learned representation space. We compare a full virtual assistant system trained using entity contrastive learning to a baseline system that does not use contrastive learning. We present both offline results, using retrospective test sets, as well as online results from an A/B test that compared the two systems. In both the offline and online settings, entity contrastive training improved overall performance against baseline systems. Furthermore, we provide a detailed analysis of learned entity embeddings, including both qualitative analysis via dimensionality-reduced visualizations and quantitative analysis by computing alignment and uniformity metrics. We show that entity contrastive learning improves alignment metrics and produces well-formed embedding clusters in representation space.
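The alignment and uniformity metrics mentioned in the abstract are commonly defined over L2-normalized embeddings: alignment as the mean distance between positive pairs (same entity), uniformity as the log of the mean Gaussian potential over all pairs. The sketch below illustrates these standard definitions; the function names and the `alpha`/`t` hyperparameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def alignment(x, y, alpha=2):
    # Mean distance between positive-pair embeddings (lower is better).
    # x[i] and y[i] are embeddings of two mentions of the same entity.
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniformity(x, t=2):
    # Log of the mean Gaussian potential over all distinct pairs
    # (lower means embeddings spread more evenly on the hypersphere).
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    upper = np.triu_indices(len(x), k=1)  # each unordered pair once
    return np.log(np.mean(np.exp(-t * sq_dists[upper])))
```

A well-clustered entity embedding space drives alignment toward zero (positive pairs coincide) while keeping uniformity low (clusters stay spread out rather than collapsing to a point).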


Rubin, J., Leung, G., Crowley, J., Ziyadi, M., & Minakova, M. (2023). Entity Contrastive Learning in a Large-Scale Virtual Assistant System. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 159–171). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.17
