The quality of the data used to derive a QSAR model is extremely important, as it strongly affects the final robustness and predictive power of the model. Ambiguous or wrong structures must be carefully checked, because they lead to errors in descriptor calculation and hence to meaningless results. The increasing amount of available data, however, often makes it impractical to check very large databases manually. In light of this, we designed and implemented a semi-automated workflow that integrates structural data retrieval from several web-based databases, automated comparison of these data, chemical structure cleaning, and the selection and standardization of data into a consistent, ready-to-use format that can be employed for modeling. The workflow integrates best practices for data curation suggested in the recent literature. It has been implemented with the freely available KNIME software and is freely available to the cheminformatics community for improvement and application to a broad range of chemical datasets.
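The abstract describes an automated comparison of structures retrieved from several databases, retaining only records on which the sources agree. A minimal sketch of such a consensus check is shown below; the function and database names are hypothetical and the agreement rule (strict majority of retrieved identifiers) is an assumption for illustration, not the paper's actual KNIME implementation.

```python
# Illustrative sketch of a cross-database agreement check: a structure is
# kept only when a strict majority of the retrieved identifiers
# (e.g. canonical SMILES or InChIKeys) coincide. All names are hypothetical.
from collections import Counter

def resolve_structure(records):
    """Given {source_name: identifier_or_None}, return the consensus
    identifier, or None when sources disagree or nothing was retrieved."""
    values = [v for v in records.values() if v]  # drop missing entries
    if not values:
        return None
    identifier, count = Counter(values).most_common(1)[0]
    # require a strict majority of the retrieved values to agree
    return identifier if count > len(values) / 2 else None

# Example: one unambiguous record, one conflicting record
dataset = {
    "benzene":   {"db_a": "c1ccccc1", "db_b": "c1ccccc1", "db_c": None},
    "ambiguous": {"db_a": "CCO",      "db_b": "CCN",      "db_c": None},
}
curated = {name: resolve_structure(recs) for name, recs in dataset.items()}
# conflicting entries resolve to None and are flagged for manual review
```

In practice this step would be preceded by structure standardization (salt stripping, tautomer and charge normalization) so that trivially different representations of the same compound are not counted as disagreements.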
Gadaleta, D., Lombardo, A., Toma, C., & Benfenati, E. (2018). A new semi-automated workflow for chemical data retrieval and quality checking for modeling applications. Journal of Cheminformatics, 10(1). https://doi.org/10.1186/s13321-018-0315-6