Abstract
Recommendations are often rated for their subjective quality, but few researchers have studied quality in terms of objective utility. We explore quality assessment with respect to both subjective (i.e., users’ ratings) and objective (i.e., did it influence the forecaster? did it improve their decisions?) metrics in a massive online geopolitical forecasting system, ultimately comparing the linguistic characteristics associated with each quality metric. Using a variety of features, we predict all types of quality with better accuracy than the simple yet strong baseline of recommendation length. For example, more complex sentence constructions, as evidenced by subordinate conjunctions, are characteristic of recommendations leading to objective improvements in forecasting. Our analyses also reveal rater biases; for example, forecasters are subjectively biased in favor of recommendations mentioning business deals and material things, even though such recommendations do not in fact prove any more useful objectively.
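The abstract contrasts a recommendation-length baseline with models using richer linguistic features such as subordinate conjunctions. Below is a minimal, hypothetical sketch of that kind of comparison; the toy recommendations, labels, feature definitions, and classifier are all invented for illustration and are not the paper's actual data, feature set, or models.

```python
# Hypothetical sketch: length-only baseline vs. length plus a crude
# subordinate-conjunction count, in the spirit of the comparison the
# abstract describes. All data below is invented for illustration.
import re

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy recommendations with invented labels (1 = objectively improved the forecast).
recommendations = [
    "Because the ceasefire talks stalled, lower your probability estimate.",
    "The company announced a big deal, so raise your forecast.",
    "Although polls tightened, the incumbent still leads; keep your estimate.",
    "Great tip, very useful, thanks!",
    "If sanctions are extended, exports will fall; adjust downward.",
    "Nice recommendation about the trade agreement and new equipment.",
]
labels = [1, 0, 1, 0, 1, 0]

SUBORDINATORS = {"because", "although", "if", "since", "unless", "while", "whereas"}

def length_features(text):
    # Baseline: recommendation length in tokens.
    return [len(text.split())]

def linguistic_features(text):
    # Length plus a rough count of subordinate conjunctions.
    tokens = re.findall(r"[a-z']+", text.lower())
    return [len(tokens), sum(t in SUBORDINATORS for t in tokens)]

for name, featurize in [("length baseline", length_features),
                        ("length + subordinators", linguistic_features)]:
    X = [featurize(r) for r in recommendations]
    scores = cross_val_score(LogisticRegression(), X, labels, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

With real data, the paper's finding would correspond to the feature-augmented model outperforming the length-only baseline; on this toy sample the numbers are meaningless and serve only to show the shape of such an experiment.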
Citation
Schwartz, H. A., Rouhizadeh, M., Bishop, M., Tetlock, P., Mellers, B., & Ungar, L. H. (2017). Assessing objective recommendation quality through political forecasting. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2348–2357). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1250