Abstract
To develop high-performance natural language understanding (NLU) models, a benchmark is needed to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE (Wang et al., 2018), has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE (Xu et al., 2020) for Chinese and FLUE (Le et al., 2020) for French, but no such benchmark exists for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.
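As a rough illustration of how a GLUE-style benchmark like JGLUE might be consumed in practice, the sketch below loads one of its tasks through the Hugging Face datasets library. Note the assumptions: the dataset ID "shunk031/JGLUE" is a community-hosted mirror rather than an artifact described in the paper, and the column names are taken from that mirror; JNLI (Japanese natural language inference) is one of the tasks included in JGLUE.

    # Minimal sketch: loading a JGLUE task for inspection or evaluation.
    # Assumes the community-hosted Hugging Face Hub mirror "shunk031/JGLUE"
    # (not part of the paper itself); column names follow that mirror.
    from datasets import load_dataset

    # JNLI is JGLUE's sentence-pair inference task (premise/hypothesis pairs).
    jnli = load_dataset("shunk031/JGLUE", name="JNLI")

    # Print a few validation examples; "label" encodes the inference class.
    for example in jnli["validation"].select(range(3)):
        print(example["sentence1"], "|", example["sentence2"], "->", example["label"])

The same pattern would apply to the benchmark's other tasks by swapping the config name, assuming the mirror exposes them under their JGLUE task names.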
Citation
Kurihara, K., Kawahara, D., & Shibata, T. (2022). JGLUE: Japanese General Language Understanding Evaluation. In 2022 Language Resources and Evaluation Conference, LREC 2022 (pp. 2957–2966). European Language Resources Association (ELRA). https://doi.org/10.5715/jnlp.31.733