In this work we investigate the impact of the encoding mechanisms used in neural aspect extraction models on the quality of the resulting aspects. We concentrate on the neural attention-based aspect extraction (ABAE) model and evaluate five different types of encoding mechanisms: simple averaging, self-attention with and without positional encoding, recurrent, and convolutional architectures. Our experiments on four datasets of user reviews demonstrate that, within the family of ABAE-like architectures, models with different encoding mechanisms show similar results in terms of standard coherence metrics for English and Russian data. Our qualitative study shows that all models yield interpretable aspects as well, and the differences in quality are often very minor.
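For illustration, here is a minimal sketch contrasting the two simplest encoders compared in the paper: averaging word embeddings versus an attention-weighted sum. This is a toy example with random vectors, not the authors' implementation; in particular, the attention variant below omits the learned transformation matrix that the full ABAE model uses.

```python
import numpy as np

# Toy sentence of 4 words, each a 5-dim embedding. Random vectors stand in
# for the pretrained word embeddings used by ABAE-like models.
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 5))

def avg_encoder(embeddings):
    """Simple averaging: the sentence vector is the mean of word vectors."""
    return embeddings.mean(axis=0)

def attention_encoder(embeddings):
    """Attention-weighted sum: score each word by its similarity to the
    average sentence vector, softmax the scores, and take the weighted sum.
    (Simplified: no learned parameters.)"""
    y = embeddings.mean(axis=0)
    scores = embeddings @ y               # one relevance score per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over words
    return weights @ embeddings           # weighted sum of word vectors

z_avg = avg_encoder(E)
z_att = attention_encoder(E)
print(z_avg.shape, z_att.shape)  # both encoders produce a (5,) sentence vector
```

Both encoders map a variable-length sentence to a single fixed-size vector; the downstream aspect-reconstruction objective is identical regardless of which encoder produces that vector.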
Citation:
Malykh, V., Alekseev, A., Tutubalina, E., Shenbin, I., & Nikolenko, S. (2019). Wear the right head: Comparing strategies for encoding sentences for aspect extraction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11832 LNCS, pp. 166–178). Springer. https://doi.org/10.1007/978-3-030-37334-4_15