Abstract
People have always asked questions of their friends, but now, with social media, they can broadcast their questions to their entire social network. In this paper we study the replies received via Twitter question asking, and use what we learn to create a system that augments naturally occurring "friendsourced" answers with crowdsourced answers. By analyzing thousands of public Twitter questions and answers, we build a picture of which questions receive answers and what those answers contain. Because many questions seek subjective responses but go unanswered, we use crowdsourcing to augment the Twitter question asking experience. We deploy a system that uses the crowd to identify question tweets, create candidate replies, and vote on the best reply from among different crowd- and friend-generated answers. We find that crowdsourced answers are similar in nature and quality to friendsourced answers, and that almost a third of all question askers provided unsolicited positive feedback upon receiving answers from this novel information agent. Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Citation
Jeong, J. W., Morris, M. R., Teevan, J., & Liebling, D. (2013). A crowd-powered socially embedded search engine. In Proceedings of the 7th International Conference on Weblogs and Social Media, ICWSM 2013 (pp. 263–272). AAAI press. https://doi.org/10.1609/icwsm.v7i1.14382