Simulation: A methodology to evaluate recommendation systems in software engineering

Abstract

Scientists and engineers have long used simulation as a technique for exploring and evaluating complex systems. Direct interaction with a real, complex system requires that the system be already constructed and operational, that people be trained in its use, and that its dangers already be known and mitigated. Simulation can avoid these issues, reducing costs, reducing risks, and allowing an imagined system to be studied before it is created. The explorations supported by simulation serve two purposes in the realm of evaluation: to determine whether and where undesired behavior will arise and to predict the outcomes of interactions with the real system. This chapter examines the use of simulation to evaluate recommendation systems in software engineering (RSSEs). We provide a general model of simulation for evaluation and review a small set of examples to examine how the model has been applied in practice. From these examples, we extract some general strengths and weaknesses of the use of simulation to evaluate RSSEs. We also explore prospects for making more extensive use of simulation in the future.

Citation (APA)

Walker, R. J., & Holmes, R. (2014). Simulation: A methodology to evaluate recommendation systems in software engineering. In Recommendation Systems in Software Engineering (pp. 301–327). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-45135-5_12
