Research in computational biology has given rise to a vast number of methods developed to solve scientific problems. For areas in which many approaches exist, researchers have a hard time deciding which tool to select to address a scientific challenge, as essentially all publications introducing a new method will claim better performance than all others. Not all of these claims can be correct. For the same reason, developers struggle to demonstrate convincingly that they have created a new and superior algorithm or implementation. Moreover, the developer community often has difficulty discerning which new approaches constitute true scientific advances for the field. The obvious answer to this conundrum is to develop benchmarks—meaning standard points of reference that facilitate evaluating the performance of different tools—allowing both users and developers to compare multiple tools in an unbiased fashion.
Source: Peters, B., Brenner, S. E., Wang, E., Slonim, D., & Kann, M. G. (2018). Putting benchmarks in their rightful place: The heart of computational biology. PLoS Computational Biology. https://doi.org/10.1371/journal.pcbi.1006494