Argument Retrieval from Web


Abstract

We are well beyond the days of expecting search engines merely to find documents containing the answer to a question or information about a query. We now expect a search engine to support us in the decision-making process. The argument retrieval task in the Touché Track at CLEF 2020 was defined to address this problem: the user is looking for information about several alternatives in order to choose between them. The search engine should retrieve opinionated documents containing comparisons between the alternatives, rather than documents about only one option, or documents containing personal opinions or no suggestion at all. In this paper, we discuss argument retrieval from web documents. To retrieve argumentative documents from the web, we use three features (PageRank scores, domains, and an argumentativeness classifier) and try to strike a balance between them. We evaluate the method along three dimensions: relevance, argumentativeness, and trustworthiness. Since the labeled data and final results for the Touché Track have not yet been released, the evaluation was done by manually labeling documents for 5 queries.
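The abstract describes combining three document-level signals (PageRank score, domain, and an argumentativeness classifier score) into a single ranking. A minimal sketch of one way to balance such features is a weighted linear combination; the weights, field names, and example scores below are illustrative assumptions, not the authors' actual method or values.

```python
# Hedged sketch: rank documents by a weighted sum of three feature scores,
# each assumed to be normalized to [0, 1]. The weights are hypothetical.

def combined_score(pagerank, domain_score, argumentativeness,
                   w_pr=0.3, w_dom=0.3, w_arg=0.4):
    """Weighted linear combination of the three feature scores."""
    return w_pr * pagerank + w_dom * domain_score + w_arg * argumentativeness

# Toy candidate set: a popular but non-argumentative page vs. a
# comparison page from a trusted domain with strong argumentative content.
docs = [
    {"id": "d1", "pagerank": 0.9, "domain": 0.2, "arg": 0.1},
    {"id": "d2", "pagerank": 0.4, "domain": 0.8, "arg": 0.9},
]

ranked = sorted(
    docs,
    key=lambda d: combined_score(d["pagerank"], d["domain"], d["arg"]),
    reverse=True,
)
print([d["id"] for d in ranked])  # the argumentative document d2 ranks first
```

In practice the weights could be tuned on held-out relevance judgments; a linear combination is only the simplest way to trade off popularity against argumentative quality.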

Citation (APA)

Shahshahani, M. S., & Kamps, J. (2020). Argument Retrieval from Web. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12260 LNCS, pp. 75–81). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58219-7_7
