Is a picture worth a thousand words? A deep multi-modal architecture for product classification in e-commerce

Abstract

Classifying products precisely and efficiently is a major challenge in modern e-commerce. The high volume of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision-level fusion approach for multi-modal product classification based on text and image neural network classifiers. We train input-specific state-of-the-art deep neural networks for each input source, show the potential of fusing them into a multi-modal architecture, and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves classification accuracy over both networks on a real-world, large-scale product classification dataset that we collected from Walmart.com. While we focus on the image-text fusion that characterizes e-commerce businesses, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc.
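To make the decision-level fusion idea concrete, the sketch below shows one possible way a small policy network could choose, per example, between the predictions of a text classifier and an image classifier. This is only an illustrative approximation under assumed names and shapes (the `PolicyFusion` module, the hidden size, and the use of class-probability vectors as the policy's input are all assumptions), not the exact architecture or training procedure from the paper.

```python
import torch
import torch.nn as nn


class PolicyFusion(nn.Module):
    """Illustrative decision-level fusion: a small policy network inspects
    the class probabilities of a text classifier and an image classifier
    and picks, per example, which classifier's prediction to trust."""

    def __init__(self, num_classes: int, hidden: int = 64):
        super().__init__()
        # The policy sees both probability vectors concatenated and emits
        # a 2-way choice: index 0 = trust text, index 1 = trust image.
        self.policy = nn.Sequential(
            nn.Linear(2 * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, text_probs: torch.Tensor, image_probs: torch.Tensor):
        choice_logits = self.policy(torch.cat([text_probs, image_probs], dim=1))
        choice = choice_logits.argmax(dim=1)  # shape: (batch,)
        # Route each example to the chosen classifier's probabilities.
        fused = torch.where(choice.unsqueeze(1).bool(), image_probs, text_probs)
        return fused, choice_logits


# Toy usage with random stand-ins for the two classifiers' outputs.
num_classes, batch = 10, 4
text_probs = torch.softmax(torch.randn(batch, num_classes), dim=1)
image_probs = torch.softmax(torch.randn(batch, num_classes), dim=1)
fusion = PolicyFusion(num_classes)
fused_probs, _ = fusion(text_probs, image_probs)
predicted_class = fused_probs.argmax(dim=1)
```

In practice, the policy's choice logits would be trained with a supervised or reinforcement-style signal indicating which modality classified each example correctly; the details of that training signal are specific to the paper and not reproduced here.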

Citation (APA)

Zahavy, T., Krishnan, A., Magnani, A., & Mannor, S. (2018). Is a picture worth a thousand words? A deep multi-modal architecture for product classification in e-commerce. In Proceedings of the 30th Innovative Applications of Artificial Intelligence Conference, IAAI 2018 (pp. 7873–7880). The AAAI Press. https://doi.org/10.1609/aaai.v32i1.11419
