CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension

Abstract

We present CJRC, a Chinese judicial reading comprehension dataset that contains approximately 10K documents and almost 50K questions with answers. The documents are drawn from judgment documents, and the questions are annotated by law experts. CJRC can help researchers extract case elements using reading comprehension techniques. Element extraction is an important task in the legal field, but it is difficult to predefine the set of element types completely because of the diversity of document types and causes of action. By contrast, machine reading comprehension can extract elements quickly by answering varied questions over long documents. We build two strong baseline models based on BERT and BiDAF. The experimental results show that there is still substantial room for improvement compared with human annotators.
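
As an illustration of the BERT-style extractive baseline described above, the following is a minimal span-extraction sketch using the Hugging Face transformers API. The checkpoint name, question, and passage text are placeholders introduced for this example (they are not the authors' released artifacts); in practice a Chinese BERT model fine-tuned for extractive question answering would be required.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder checkpoint: substitute a Chinese BERT model fine-tuned for
# extractive question answering (not provided by this page).
MODEL_NAME = "path/to/chinese-bert-qa-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical question and judgment-document passage, for illustration only.
question = "原告何时向被告交付借款？"  # "When did the plaintiff deliver the loan to the defendant?"
context = "……（判决书中的案件事实描述）……"  # "... (case-fact description from the judgment document) ..."

inputs = tokenizer(question, context, return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# The model scores every token as a possible answer start/end; take the argmax
# of each and decode the span between them as the predicted answer.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))

The BiDAF baseline mentioned above follows the same span-prediction formulation, scoring start and end positions over the document rather than generating free-form text.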

Citation (APA)

Duan, X., Wang, B., Wang, Z., Ma, W., Cui, Y., Wu, D., … Liu, Z. (2019). CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11856 LNAI, pp. 439–451). Springer. https://doi.org/10.1007/978-3-030-32381-3_36
