We present an open-source toolkit that enables easy comparison of the performance of active learning methods across a series of datasets. The toolkit allows active learning strategies to be constructed by combining a judgement aggregation model, a task selection method, and a worker selection method. It also provides a user interface that allows researchers to gain insight into worker performance and task classification at runtime.
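The compositional design described above can be sketched as follows. This is a minimal illustration of the idea of assembling a strategy from three interchangeable components; all class and method names are hypothetical and do not reflect the toolkit's actual (C#) API.

```python
import random
from dataclasses import dataclass

class MajorityVote:
    """Judgement aggregation model: combines worker labels per task."""
    def aggregate(self, labels):
        # Return the most frequent label among the collected judgements.
        return max(set(labels), key=labels.count)

class RandomTaskSelector:
    """Task selection method: chooses the next task to label."""
    def select(self, tasks):
        return random.choice(tasks)

class RandomWorkerSelector:
    """Worker selection method: chooses which worker to query."""
    def select(self, workers):
        return random.choice(workers)

@dataclass
class Strategy:
    # A strategy is just the combination of the three components.
    aggregator: MajorityVote
    task_selector: RandomTaskSelector
    worker_selector: RandomWorkerSelector

    def step(self, tasks, workers, labels_so_far):
        # One active-learning step: pick a task and a worker,
        # then re-aggregate the judgements collected so far.
        task = self.task_selector.select(tasks)
        worker = self.worker_selector.select(workers)
        estimates = {t: self.aggregator.aggregate(ls)
                     for t, ls in labels_so_far.items()}
        return task, worker, estimates
```

Swapping in a different aggregation model (e.g. a probabilistic worker model) or a more informed selection method only requires replacing one component, which is what makes benchmarking different strategy combinations straightforward.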
Citation
Venanzi, M., Parson, O., Rogers, A., & Jennings, N. (2015). The ACTIVECROWDTOOLKIT: An Open-Source Tool for Benchmarking Active Learning Algorithms for Crowdsourcing Research. In Proceedings of the 3rd AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2015 (pp. 44–45). AAAI Press. https://doi.org/10.1609/hcomp.v3i1.13256