A Deep Neural Network Architecture for Extracting Contextual Information

Abstract

The exponential growth of textual data makes retrieving pertinent information challenging. Keyphrases are widely used to analyze, organize, and retrieve text content across various domains, and earlier studies have produced numerous viable strategies for automated keyphrase extraction. These approaches typically rely on domain-specific knowledge and features to select and rank the most relevant keyphrases. In this paper, we propose a deep neural network architecture based on word embeddings and a Bidirectional Long Short-Term Memory recurrent neural network (Bi-LSTM). This architecture captures the hidden context and the main topics of a document. Experiments on benchmark datasets show that the proposed model achieves noteworthy performance compared to baselines and previous keyphrase extraction approaches.
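
The abstract does not give the exact layer configuration, so the following is only a minimal sketch of the kind of architecture it describes: keyphrase extraction framed as token-level sequence labeling, with a word-embedding layer feeding a Bi-LSTM and a per-token classifier. All class names, tag schemes, and hyperparameters here are hypothetical, not the authors' configuration.

    import torch
    import torch.nn as nn

    class BiLSTMKeyphraseTagger(nn.Module):
        """Sketch: word embeddings -> Bi-LSTM -> per-token keyphrase tag scores."""

        def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_tags=3):
            super().__init__()
            # Word embedding layer (could be initialized with pre-trained vectors).
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            # Bidirectional LSTM captures left and right context for each token.
            self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            # Per-token classifier over keyphrase tags (e.g. B/I/O).
            self.classifier = nn.Linear(2 * hidden_dim, num_tags)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len) integer word indices
            embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
            context, _ = self.bilstm(embedded)     # (batch, seq_len, 2*hidden_dim)
            return self.classifier(context)        # (batch, seq_len, num_tags)

    if __name__ == "__main__":
        model = BiLSTMKeyphraseTagger(vocab_size=10_000)
        dummy_batch = torch.randint(0, 10_000, (2, 20))  # two sentences, 20 tokens each
        print(model(dummy_batch).shape)                  # torch.Size([2, 20, 3])

Tagging each token with B/I/O labels is one common way to cast keyphrase extraction as sequence labeling; the paper's actual decoding and ranking strategy may differ.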

Citation (APA)
Alami Merrouni, Z., Frikh, B., & Ouhbi, B. (2023). A Deep Neural Network Architecture for Extracting Contextual Information. In Lecture Notes on Data Engineering and Communications Technologies (Vol. 164, pp. 107–116). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-27762-7_10
