PolicyQA: A reading comprehension dataset for privacy policies

33 Citations
96 Readers (Mendeley users who have this article in their library)

Abstract

Privacy policy documents are long and verbose. A question answering (QA) system can assist users in finding the information that is relevant and important to them. Prior studies in this domain frame the QA task as retrieving the most relevant text segment or a list of sentences from the policy document given a question. In contrast, we argue that providing users with a short text span from policy documents reduces the burden of searching for the target information in a lengthy text segment. In this paper, we present PolicyQA, a dataset that contains 25,017 reading comprehension style examples curated from an existing corpus of 115 website privacy policies. PolicyQA provides 714 human-annotated questions written for a wide range of privacy practices. We evaluate two existing neural QA models and perform rigorous analysis to reveal the advantages and challenges offered by PolicyQA.
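The abstract frames policy QA as extractive span prediction: given a question and a policy passage, the model returns a short answer span rather than a whole segment or sentence list. As a rough illustration of that setup only (not the authors' models or data), the Python sketch below runs an off-the-shelf extractive QA model via the Hugging Face transformers question-answering pipeline over a made-up policy snippet; the model checkpoint and the passage are assumptions for illustration.

# Illustrative sketch of extractive span QA over a privacy-policy passage.
# The checkpoint and the example passage are assumptions for illustration;
# they are not the models or data evaluated in the paper.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

# Hypothetical policy excerpt (not taken from the PolicyQA corpus).
policy_text = (
    "We collect your email address and approximate location when you create "
    "an account. This information is shared with analytics partners and is "
    "retained for up to 24 months after account deletion."
)

question = "How long is my information retained?"

# The pipeline returns the highest-scoring answer span inside the passage.
result = qa(question=question, context=policy_text)
print(result["answer"])  # e.g. "up to 24 months after account deletion"

A PolicyQA-style annotation supports exactly this interface: each example pairs a question with a policy segment and a short human-annotated answer span within it.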

Cite (APA)

Ahmad, W. U., Chi, J., Tian, Y., & Chang, K. W. (2020). PolicyQA: A reading comprehension dataset for privacy policies. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 743–749). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.66
