Abstract
Supervised learning has traditionally focused on inductive learning by observing labeled examples of a task. In contrast, humans have the ability to learn new concepts from language. Here, we explore learning zero-shot classifiers for structured data purely from natural language explanations as supervision. For this, we introduce CLUES, a benchmark for Classifier Learning Using natural language ExplanationS, consisting of a range of classification tasks over structured data along with natural language supervision in the form of explanations. CLUES consists of 36 real-world and 144 synthetic classification tasks. It contains crowdsourced explanations describing real-world tasks from multiple teachers and programmatically generated explanations for the synthetic tasks. We also introduce ExEnt, an entailment-based method for training classifiers from language explanations, which explicitly models the influence of individual explanations in making a prediction. ExEnt generalizes up to 18% better (relative) on novel tasks than a baseline that does not use explanations. We identify key challenges in learning from explanations, whose resolution can drive future progress on CLUES. Our code and datasets are available at: https://clues-benchmark.github.io.
Citation
Menon, R. R., Ghosh, S., & Srivastava, S. (2022). CLUES: A Benchmark for Learning Classifiers using Natural Language Explanations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6523–6546). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.451