We design and validate simulators for generating queries and relevance judgments for retrieval system evaluation. We develop a simulation framework that incorporates both existing and new simulation strategies. To validate a simulator, we assess whether evaluation using its output data ranks retrieval systems in the same way as evaluation using real-world data; the real-world data is obtained from logged commercial searches and their associated purchase decisions. While no simulator reproduces the ideal ranking, simulator performance varies widely enough to distinguish those better suited to creating artificial testbeds for retrieval experiments. Incorporating knowledge about document structure into the query generation process helps create more realistic simulators. © 2010 Springer-Verlag Berlin Heidelberg.
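The validation criterion described above, agreement between the system ranking induced by simulated data and the one induced by real data, is commonly quantified with a rank correlation such as Kendall's tau. A minimal stdlib-only sketch (the system scores below are hypothetical, not from the paper):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau rank correlation between two score lists over the
    same systems (ties are skipped for brevity)."""
    assert len(scores_a) == len(scores_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        # Pairs ordered the same way in both lists are concordant.
        s = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(scores_a) * (len(scores_a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical effectiveness scores for four retrieval systems, once
# under real (purchase-based) judgments and once under simulated ones.
real = [0.31, 0.27, 0.22, 0.18]
simulated = [0.29, 0.30, 0.20, 0.15]
print(kendall_tau(real, simulated))  # one swapped pair lowers tau below 1.0
```

A tau of 1.0 would mean the simulator ranks systems exactly as the real data does; the framework's simulators can then be compared on this score.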
CITATION STYLE
Huurnink, B., Hofmann, K., De Rijke, M., & Bron, M. (2010). Validating query simulators: An experiment using commercial searches and purchases. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6360 LNCS, pp. 40–51). https://doi.org/10.1007/978-3-642-15998-5_6