Evaluating and predicting the quality of answers factors in ResearchGate’s question and answer system: A case study of the thematic domain of knowledge management


Abstract

Question answering (QA) goes beyond traditional keyword-based querying and retrieves information in a more precise form than a document or a list of documents. Q&A communities allow users to submit questions and receive answers from other members, but there is no established way of evaluating the quality of that information. The purpose of this study was to evaluate and predict the quality of answers in the ResearchGate scientific social network. To this end, the questions and answers posted in the knowledge management domain from January to May 2019 were surveyed and the required data were collected with a site crawler. In total, 54 questions and 443 answers were analyzed at the descriptive and inferential levels using SPSS 22. The results show that the relevance, adequacy, and concordance variables, with odds ratios of 3.626, 3.440, and 3.148 respectively, have the greatest power to predict whether a response is correct or incorrect.
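The odds ratios reported above are the kind produced by a binary logistic regression; the abstract states the analysis was run in SPSS 22, so the setup below is only an assumption about the underlying model. The following minimal Python sketch, using statsmodels and synthetic data with hypothetical column names (relevance, adequacy, concordance, correct), illustrates how such odds ratios are obtained by exponentiating the fitted coefficients.

import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)

# Hypothetical data: each row is one answer, with 0/1 ratings for the
# quality criteria and a 0/1 label for whether the answer was judged correct.
# Column names and values are illustrative, not taken from the study's data.
df = pd.DataFrame({
    "relevance":   np.random.randint(0, 2, 443),
    "adequacy":    np.random.randint(0, 2, 443),
    "concordance": np.random.randint(0, 2, 443),
    "correct":     np.random.randint(0, 2, 443),
})

# Binary logistic regression of answer correctness on the quality criteria.
X = sm.add_constant(df[["relevance", "adequacy", "concordance"]])
model = sm.Logit(df["correct"], X).fit(disp=False)

# Odds ratios are the exponentiated coefficients: a value such as 3.626 would
# mean the odds of a correct answer are roughly 3.6 times higher per unit
# increase in that criterion, holding the other criteria fixed.
odds_ratios = np.exp(model.params)
print(odds_ratios)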

Citation (APA)

Anbaraki, S. (2021, March 1). Evaluating and predicting the quality of answers factors in ResearchGate’s question and answer system: A case study of the thematic domain of knowledge management. Iranian Journal of Information Processing and Management. Iranian Research Institute for Scientific Information and Documentation. https://doi.org/10.52547/jipm.36.3.709
