Automatic Methods for Online Review Classification: An Empirical Investigation of Review Usefulness—An Abstract

Abstract

In recent years, academic and practitioner interest in consumer-generated online reviews (OCRs) has increased. One likely reason for this growing interest is that OCRs are regarded as a valuable source of information for consumers making buying decisions online (e.g., Archak et al. 2011). OCRs consist of several elements: review valence (e.g., number of stars), review volume (the total number of reviews for the same product), a textual portion (where reviewers can openly provide further information), review usefulness (the number of yes/no responses to the question “was this review helpful to you?”), and verification that the reviewer actually purchased the product. In addition to these variables, readers can also deduce the variance of review valence, since the distribution of reviews across the different valence levels is also reported.

This study presents the available methods for text classification and empirically tests their performance in classifying OCRs by their usefulness. Previous research has shown the importance of the usefulness variable, since it correlates with sales impact (Chen and Xie 2008; Ghose and Ipeirotis 2011; Ghose et al. 2012). Useful reviews have been shown to be more likely to affect product sales than non-useful reviews, and this effect is stronger for less popular products (Chen and Xie 2008). The general conclusion is that no single method performs significantly better than the others. SVM with class weights achieves the best global classification capability of all the methods, but it fails to classify the non-useful and useful OCRs. S k-means is the most accurate method for classifying useful reviews, but it fails on the other two categories, and its total performance is lower than that of the other methods. Finally, SLDA shows the best performance in classifying non-useful reviews; however, it too fails on the other two categories and produces a low total performance.

As an additional contribution, this work documents the inability of some methods to perform this analysis at all, as is the case for two of the unsupervised learning techniques: LSA and CTM. The results suggest that these methods fail to address some particular characteristics of OCRs, such as the comprehensibility of the text, its readability, and the type and novelty of the information it contains (Li and Zhan 2011; Ludwig et al. 2013). Additionally, none of these methods consider the temporal dimension, that is, when the information was released (e.g., Purnawirawan et al. 2012). Further research on automatic classification methods should address these characteristics, which previous research suggests are context and product-type dependent (Hong et al. 2014). In this sense, methods that combine semantics with an algorithmic or probabilistic approach could potentially resolve these shortcomings.
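
To make the setup concrete, below is a minimal sketch of one of the compared approaches, an SVM with class weights, applied to a three-way usefulness classification like the one discussed above. It uses scikit-learn; the toy reviews, the TF-IDF features, and the label names are illustrative assumptions, not the authors' data or exact pipeline.

```python
# Hedged sketch: classifying online consumer reviews (OCRs) into
# usefulness categories with a class-weighted SVM. The three labels,
# the example texts, and the TF-IDF representation are assumptions
# for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus: review texts paired with usefulness labels that would,
# in practice, be derived from "was this review helpful to you?"
# yes/no votes (hypothetical thresholds).
reviews = [
    "Detailed comparison with similar products; battery life measured over a week.",
    "Great!!!",
    "Arrived on time. Works as described, nothing more to add.",
]
labels = ["useful", "non-useful", "neutral"]

# class_weight="balanced" reweights the loss inversely to class
# frequency, one common way to handle the fact that useful reviews
# are typically the minority class.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(class_weight="balanced"),
)
model.fit(reviews, labels)

print(model.predict(["In-depth review covering pros, cons, and durability."]))
```

The exact weighting scheme, features, and category definitions in the paper may differ; this only shows the general shape of a class-weighted SVM text classifier.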

Citation (APA)

Fresneda, J., & Gefen, D. (2017). Automatic Methods for Online Review Classification: An Empirical Investigation of Review Usefulness—An Abstract. In Developments in Marketing Science: Proceedings of the Academy of Marketing Science (pp. 1331–1332). Springer Nature. https://doi.org/10.1007/978-3-319-45596-9_242
