Abstract
We focus on a recently deployed system for summarizing academic articles by concept tagging. The system has shown broad coverage and high accuracy in concept identification, which can be attributed to the knowledge acquired from millions of publications. Given the interpretable concepts and the knowledge encoded in the pre-trained neural model, we investigate whether the tagged concepts can be applied to a broader class of applications. We propose transforming the tagged concepts into sparse vectors that serve as representations of academic documents. The effectiveness of these representations is analyzed theoretically within a proposed framework. We also show empirically that the representations offer advantages in academic topic discovery and paper recommendation. In these applications, we demonstrate that the knowledge encoded in the tagging system can be effectively utilized and can help infer additional features from data with limited information.
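To make the idea of concept-based sparse representations concrete, the sketch below shows one way tagged concepts could be turned into sparse document vectors: a confidence-weighted bag-of-concepts encoding. The tagger output format, the toy concept vocabulary, and the use of tagger confidence as the weight are illustrative assumptions, not the pipeline described in the paper.

```python
# Minimal sketch (assumptions, not the paper's actual method):
# each document is represented by a sparse vector over a concept vocabulary,
# with non-zero entries weighted by the tagger's confidence scores.
from scipy.sparse import csr_matrix

# Assumed tagger output: one list of (concept, confidence) pairs per document.
tagged_docs = [
    [("neural network", 0.92), ("language model", 0.81)],
    [("recommender system", 0.88), ("neural network", 0.65)],
]

# Build a concept-to-index vocabulary from the observed tags.
vocab = {c: i for i, c in enumerate(
    sorted({c for doc in tagged_docs for c, _ in doc}))}

# Each document becomes a sparse row: non-zero only at its tagged concepts.
rows, cols, data = [], [], []
for doc_id, tags in enumerate(tagged_docs):
    for concept, score in tags:
        rows.append(doc_id)
        cols.append(vocab[concept])
        data.append(score)

doc_vectors = csr_matrix((data, (rows, cols)),
                         shape=(len(tagged_docs), len(vocab)))
print(doc_vectors.toarray())
```

Because each dimension corresponds to a named concept, such vectors stay interpretable, and standard similarity measures over them could support downstream tasks such as topic discovery or paper recommendation.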
Citation
Liao, K. T., Shen, Z., Huang, C., Wu, C. H., Chen, P. C., Wang, K., & Lin, S. D. (2020). Explainable and Sparse Representations of Academic Articles for Knowledge Exploration. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 6207–6216). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.546