Abstract
This paper demonstrates that aggregating crowdsourced forecasts benefits from modeling the written justifications provided by forecasters. Our experiments show that the majority and weighted vote baselines are competitive, and that the written justifications are beneficial for calling a question throughout its life, except during the last quarter. We also conduct an error analysis shedding light on the characteristics that make a justification unreliable.
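To make the two baselines concrete, here is a minimal sketch of majority and weighted vote aggregation over crowdsourced forecasts. The data layout (answer–weight pairs, with weights standing in for per-forecaster reliability) is an assumption for illustration, not the paper's actual setup.

```python
from collections import Counter

def majority_vote(forecasts):
    """Pick the most frequent answer; weights are ignored."""
    counts = Counter(answer for answer, _ in forecasts)
    return counts.most_common(1)[0][0]

def weighted_vote(forecasts):
    """Pick the answer with the largest total weight
    (e.g., weights reflecting forecaster reliability)."""
    totals = {}
    for answer, weight in forecasts:
        totals[answer] = totals.get(answer, 0.0) + weight
    return max(totals, key=totals.get)

# Two unreliable "no" votes outweigh one reliable "yes" by count,
# but not by weight.
forecasts = [("yes", 0.9), ("no", 0.4), ("no", 0.4)]
print(majority_vote(forecasts))  # "no"
print(weighted_vote(forecasts))  # "yes"
```

The paper's contribution goes beyond these baselines by additionally modeling the forecasters' written justifications.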
Kotamraju, S., & Blanco, E. (2021). Written Justifications are Key to Aggregate Crowdsourced Forecasts. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4206–4216). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.355