How to do an evaluation: Pitfalls and traps

Abstract

The recent literature is replete with papers evaluating computational tools (often those operating on 3D structures) for their performance on a given set of tasks. Most commonly, these papers compare a number of docking tools for their performance in cognate re-docking (pose prediction) and/or virtual screening. Related papers have been published on ligand-based tools: pose prediction by conformer generators and virtual screening using a variety of ligand-based approaches. The reliability of these comparisons is critically affected by a number of factors that the authors usually ignore, including bias in the datasets used for virtual screening, the metrics used to assess performance in virtual screening and pose prediction, and errors in the crystal structures used. © The Author(s) 2008.
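The metrics question the abstract raises can be made concrete. Below is a minimal sketch, not taken from the paper, of two measures commonly reported in virtual-screening evaluations: ROC AUC (computed here via the rank-sum statistic) and an early enrichment factor. The function names and the toy data are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: two common virtual-screening metrics.
# `scores` are tool-assigned scores (higher = predicted more active);
# `labels` are 1 for known actives, 0 for decoys.

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney rank-sum statistic:
    the probability a random active outscores a random decoy."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def enrichment_factor(scores, labels, fraction=0.01):
    """Fraction of actives recovered in the top `fraction` of the
    ranked list, relative to random selection."""
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    n_top = max(1, int(round(fraction * len(ranked))))
    hits = sum(y for _, y in ranked[:n_top])
    return (hits / n_top) / (sum(labels) / len(labels))

# Toy usage: 2 actives among 8 molecules.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1, 0, 1, 0, 0, 0, 0, 0]
print(roc_auc(scores, labels))            # 11/12 ~ 0.917
print(enrichment_factor(scores, labels))  # top-ranked molecule is active -> EF = 4.0
```

The contrast between the two is part of the point the abstract gestures at: enrichment factors depend on the chosen cutoff and on the active/decoy ratio of the dataset, whereas ROC AUC does not, so dataset bias can inflate one metric without affecting the other.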

Citation (APA)
Hawkins, P. C. D., Warren, G. L., Skillman, A. G., & Nicholls, A. (2008). How to do an evaluation: Pitfalls and traps. Journal of Computer-Aided Molecular Design, 22(3–4), 179–190. https://doi.org/10.1007/s10822-007-9166-3
