Mono Versus Multilingual BERT: A Case Study in Hindi and Marathi Named Entity Recognition


Abstract

Named entity recognition (NER) is the process of recognizing and classifying important information (entities) in text. Proper nouns, such as a person's name, an organization's name, or a location's name, are examples of entities. NER is an important module in applications such as human resources, customer support, search engines, content classification, and academia. In this work, we consider NER for low-resource Indian languages, namely Hindi and Marathi. Transformer-based models have been widely used for NER tasks. We consider different variations of BERT, such as base BERT, RoBERTa, and ALBERT, and benchmark them on publicly available Hindi and Marathi NER datasets. We provide an exhaustive comparison of different monolingual and multilingual transformer-based models and establish simple baselines currently missing in the literature. We show that the monolingual MahaRoBERTa model performs best for Marathi NER, whereas the multilingual XLM-RoBERTa performs best for Hindi NER. We also perform cross-language evaluation and present mixed observations.
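As background to the benchmarking described above, NER corpora are commonly annotated in the BIO scheme, where each token carries a tag such as B-PER, I-PER, B-LOC, or O, and model predictions are decoded back into entity spans. A minimal sketch of that decoding step (the sentence, tags, and helper function are illustrative, not taken from the paper's datasets):

```python
def decode_bio(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, entity_type) spans."""
    entities, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag starts a new entity; flush any entity in progress.
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == current_type:
            # An I- tag of the same type continues the current entity.
            current.append(token)
        else:
            # An O tag (or a mismatched I- tag) ends the current entity.
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["Sachin", "Tendulkar", "lives", "in", "Mumbai"]
tags   = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(decode_bio(tokens, tags))
# [('Sachin Tendulkar', 'PER'), ('Mumbai', 'LOC')]
```

The same decoding applies regardless of which transformer produces the per-token tags, which is what makes monolingual and multilingual models directly comparable on a shared dataset.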

Citation (APA)

Litake, O., Sabane, M., Patil, P., Ranade, A., & Joshi, R. (2023). Mono Versus Multilingual BERT: A Case Study in Hindi and Marathi Named Entity Recognition. In Lecture Notes in Networks and Systems (Vol. 540, pp. 607–618). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-19-6088-8_56
