CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark

Abstract

With the development of biomedical language understanding benchmarks, Artificial Intelligence applications have become widely used in the medical field. However, most benchmarks are limited to English, which makes it challenging to replicate many of the successes achieved in English for other languages. To facilitate research in this direction, we collect real-world biomedical data and present the first Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark: a collection of natural language understanding tasks, including named entity recognition, information extraction, and clinical diagnosis normalization, together with an associated online platform for model evaluation, comparison, and analysis. To establish baselines for these tasks, we report empirical results for 11 current Chinese pre-trained language models; the experiments show that state-of-the-art neural models still perform far worse than the human ceiling. Our benchmark is released at https://tianchi.aliyun.com/dataset/dataDetail?dataId=95414&lang=en-us.
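For orientation, the sketch below shows one plausible way to evaluate a Chinese pre-trained model on a CBLUE-style single-sentence classification task with the Hugging Face transformers Trainer. This is not the authors' official baseline code: the model name bert-base-chinese, the JSON file layout (records with "text" and "label" fields), the file paths, and the two-class label set are all illustrative assumptions.

import json

import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)


class CblueClassificationDataset(Dataset):
    """Wraps a list of {"text": ..., "label": ...} records for the Trainer.

    The field names and file format are assumptions for this sketch, not
    the benchmark's official schema.
    """

    def __init__(self, path, tokenizer, label2id, max_length=128):
        with open(path, encoding="utf-8") as f:
            self.records = json.load(f)
        self.tokenizer = tokenizer
        self.label2id = label2id
        self.max_length = max_length

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        record = self.records[idx]
        # Tokenize one sentence; squeeze the batch dimension the tokenizer adds.
        encoding = self.tokenizer(
            record["text"],
            truncation=True,
            padding="max_length",
            max_length=self.max_length,
            return_tensors="pt",
        )
        item = {k: v.squeeze(0) for k, v in encoding.items()}
        item["labels"] = torch.tensor(self.label2id[record["label"]])
        return item


def main():
    label2id = {"negative": 0, "positive": 1}  # placeholder label set
    tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-chinese", num_labels=len(label2id)
    )
    # "train.json" / "dev.json" are hypothetical paths for this sketch.
    train_set = CblueClassificationDataset("train.json", tokenizer, label2id)
    dev_set = CblueClassificationDataset("dev.json", tokenizer, label2id)

    args = TrainingArguments(
        output_dir="cblue-baseline",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=3e-5,
    )
    Trainer(
        model=model, args=args, train_dataset=train_set, eval_dataset=dev_set
    ).train()


if __name__ == "__main__":
    main()

The same pattern (swap the dataset class and model head) extends to the benchmark's other task types, such as token classification for named entity recognition.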

Cite (APA)

Zhang, N., Chen, M., Bi, Z., Liang, X., Li, L., Shang, X., … Chen, Q. (2022). CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 7888–7915). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.544
