Abstract
Our team, silp_nlp, participated in SemEval-2023 Task 2: MultiCoNER II. We built systems for 11 mono-lingual tracks. To leverage knowledge from all tracks, we chose transformer-based pretrained models, which have strong cross-lingual transferability. Our model was therefore trained in two stages: the first stage performs multi-lingual learning on data from all tracks, and the second fine-tunes on each individual track. Our work highlights that knowledge from all tracks can be transferred to an individual track when the underlying language model has cross-lingual features. Our system placed in the top 10 for 4 tracks, scoring a 0.7432 macro F1 on the Hindi track (7th rank) and a 0.7322 macro F1 on the Bangla track (9th rank).
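The two-stage scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy `train` function and the `model` dict stand in for an actual multilingual transformer (e.g. XLM-R) and its gradient updates, and the track data is a placeholder.

```python
import copy

def train(model, examples, lr=0.1):
    # Stand-in for gradient updates: nudge a single weight in
    # proportion to the number of examples seen, just to show
    # the model state changing across stages.
    model["weight"] += lr * len(examples)
    return model

# Placeholder mono-lingual track data (real tracks hold NER-annotated sentences).
tracks = {
    "hi": ["hi_sent_1", "hi_sent_2"],
    "bn": ["bn_sent_1"],
}

# Stage 1: multi-lingual learning on the union of all tracks.
shared = {"weight": 0.0}
all_examples = [ex for exs in tracks.values() for ex in exs]
shared = train(shared, all_examples)

# Stage 2: fine-tune a separate copy per mono-lingual track,
# each starting from the cross-lingually trained weights.
per_track = {}
for name, examples in tracks.items():
    model = copy.deepcopy(shared)
    per_track[name] = train(model, examples)
```

The key design point is stage 2 starting from the stage-1 weights rather than from scratch, which is how cross-lingual knowledge reaches each mono-lingual track.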
CITATION STYLE
Singh, S., & Tiwary, U. S. (2023). silp_nlp at SemEval-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning. In 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop (pp. 1183–1189). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.164