RECIPE: Applying Open Domain Question Answering to Privacy Policies

Citations: N/A
Mendeley readers: 84

Abstract

We describe our experience in applying an open-domain question answering model (Chen et al., 2017) to an out-of-domain QA task: assisting in the analysis of companies' privacy policies. Specifically, the Relevant CI Parameters Extractor (RECIPE) seeks to answer questions posed by the theory of contextual integrity (CI) regarding the information flows described in privacy statements. These questions have a simple syntactic structure, and the answers are factoid or descriptive in nature. The model alone achieved an F1 score of 72.33, but combining its results with a neural-dependency-parser-based approach yields a significantly higher F1 score of 92.35 when evaluated against manual annotations. This indicates that future work which incorporates signals from parsing-like NLP tasks more explicitly can generalize better on out-of-domain tasks.
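To illustrate the dependency-parser-based extraction the abstract refers to, here is a minimal sketch that pulls candidate CI parameters (sender, attribute, recipient) out of a privacy-policy sentence. This is not the authors' implementation: the choice of spaCy as the parser, the FLOW_VERBS lexicon, and the dependency-to-role mapping are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of extracting
# contextual-integrity (CI) parameters from a privacy-policy sentence
# using a neural dependency parser. spaCy, the verb lexicon, and the
# role mapping below are assumptions for illustration only.
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical lexicon of information-flow verbs; the paper does not
# publish its verb list.
FLOW_VERBS = {"share", "collect", "disclose", "provide", "send", "transfer"}

def extract_ci_parameters(sentence: str) -> dict:
    """Heuristically map dependents of an information-flow verb to CI
    roles: nsubj -> sender, dobj -> attribute, with/to + pobj -> recipient."""
    doc = nlp(sentence)
    params = {}
    for token in doc:
        if token.pos_ == "VERB" and token.lemma_ in FLOW_VERBS:
            for child in token.children:
                phrase = " ".join(t.text for t in child.subtree)
                if child.dep_ == "nsubj":
                    params["sender"] = phrase        # who discloses the data
                elif child.dep_ == "dobj":
                    params["attribute"] = phrase     # what is disclosed
                elif child.dep_ == "prep" and child.lower_ in {"with", "to"}:
                    for obj in child.children:
                        if obj.dep_ == "pobj":       # who receives the data
                            params["recipient"] = " ".join(
                                t.text for t in obj.subtree)
    return params

if __name__ == "__main__":
    print(extract_ci_parameters(
        "We share your email address with advertising partners."))
    # Expected (parse-dependent): {'sender': 'We',
    #  'attribute': 'your email address',
    #  'recipient': 'advertising partners'}
```

In the combined setup the abstract describes, spans like these would be merged with the QA model's answers; one simple policy, for instance, is to prefer the parser-derived span whenever the two overlap.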

Citation (APA)

Shvartzshnaider, Y., Balashankar, A., Wies, T., & Subramanian, L. (2018). RECIPE: Applying Open Domain Question Answering to Privacy Policies. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 71–77). Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-2608
