Cross-Language Learning for Product Matching


Abstract

Transformer-based entity matching methods have significantly advanced the state of the art for less-structured matching tasks such as matching product offers in e-commerce. To excel at these tasks, Transformer-based matchers require a substantial number of training pairs. Providing enough training data can be challenging, especially if a matcher for non-English product descriptions is to be learned. Using the use case of matching product offers from different e-shops, this poster explores the extent to which the performance of Transformer-based matchers can be improved by complementing a small set of training pairs in the target language, German in our case, with a larger set of English-language training pairs. Our experiments with different Transformer models show that extending the German set with English pairs improves matching performance in all cases. The impact of adding the English pairs is especially high in low-resource settings in which only a rather small number of non-English pairs is available. As English training pairs can often be gathered automatically from the Web by exploiting schema.org annotations, our results are relevant for many product matching scenarios targeting low-resource languages.
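The core idea of the poster, augmenting a small target-language training set with a larger English one before fine-tuning, can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the example offers, and the pair representation are assumptions, not the authors' code); in practice the combined set would then be used to fine-tune a multilingual Transformer matcher.

```python
import random

def build_training_set(target_pairs, english_pairs, seed=42):
    """Combine a small set of target-language (e.g. German) training pairs
    with a larger set of English pairs, then shuffle deterministically.
    Hypothetical helper illustrating the cross-language augmentation idea."""
    combined = list(target_pairs) + list(english_pairs)
    rng = random.Random(seed)
    rng.shuffle(combined)
    return combined

# Each pair is (offer_a, offer_b, label), where label 1 = match, 0 = non-match.
german = [
    ("Bosch Akkuschrauber 12V", "Bosch 12 V Akku-Bohrschrauber", 1),
    ("Bosch Akkuschrauber 12V", "Makita Schlagbohrer", 0),
]
english = [
    ("Bosch 12V cordless drill", "Bosch cordless drill driver 12 V", 1),
    ("Bosch 12V cordless drill", "Makita impact driver", 0),
    ("iPhone 13 128GB blue", "Apple iPhone 13 128 GB Blue", 1),
    ("iPhone 13 128GB blue", "Samsung Galaxy S21", 0),
]

train = build_training_set(german, english)
```

The shuffle matters so that mini-batches mix both languages during fine-tuning rather than presenting all English pairs first.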

Citation (APA)

Peeters, R., & Bizer, C. (2022). Cross-Language Learning for Product Matching. In WWW 2022 - Companion Proceedings of the Web Conference 2022 (pp. 236–238). Association for Computing Machinery, Inc. https://doi.org/10.1145/3487553.3524234
