Background: Evaluation of gene interaction models in cancer genomics is challenging, as the true distribution is uncertain. Previous analyses have benchmarked models using synthetic data or databases of experimentally verified interactions – approaches which are susceptible to misrepresentation and incompleteness, respectively. The objectives of this analysis are to (1) provide a real-world, data-driven approach for comparing the performance of genomic model inference algorithms, (2) compare the performance of LASSO, elastic net, best-subset selection, (Formula presented.) penalisation and (Formula presented.) penalisation in real genomic data and (3) compare algorithmic preselection according to performance in our benchmark datasets to algorithmic selection by internal cross-validation.

Methods: Five large (Formula presented.) genomic datasets were extracted from Gene Expression Omnibus. 'Gold-standard' regression models were trained on subspaces of these datasets ((Formula presented.), (Formula presented.)). Penalised regression models were trained on small samples from these subspaces ((Formula presented.)) and validated against the gold-standard models. Variable selection performance and out-of-sample prediction were assessed. Penalty 'preselection' according to test performance in the other 4 datasets was compared to selection by internal cross-validation error minimisation.

Results: (Formula presented.) penalisation achieved the highest cosine similarity between estimated coefficients and those of the gold-standard models. (Formula presented.)-penalised models explained the greatest proportion of variance in test responses, though performance was unreliable in low signal-to-noise conditions. (Formula presented.) also attained the highest overall median variable selection F1 score. Penalty preselection significantly outperformed selection by internal cross-validation on all 3 examined metrics.
Conclusions: This analysis explores a novel approach for comparing model selection methods in real genomic data from 5 cancers. Our benchmarking datasets have been made publicly available for use in future research. Our findings support the use of (Formula presented.) penalisation for structural selection and (Formula presented.) penalisation for coefficient recovery in genomic data. Evaluating learning algorithms according to observed performance in external genomic datasets yields valuable insights into actual test performance, providing a data-driven complement to internal cross-validation in genomic regression tasks.
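The benchmark design described above can be sketched in a few lines: fit an unpenalised 'gold-standard' model on a large sample, fit penalised models on a small subsample, then score coefficient recovery by cosine similarity and structural selection by F1 against the gold-standard support. This is a minimal illustration on simulated data with hypothetical dimensions and a simplified support-threshold rule, not the paper's actual pipeline or datasets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV, ElasticNetCV
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Simulated stand-in for a genomic subspace: n_large samples, p genes
# (dimensions are illustrative, not those used in the paper).
n_large, n_small, p = 2000, 50, 100
X = rng.standard_normal((n_large, p))
beta_true = np.zeros(p)
beta_true[:10] = rng.uniform(0.5, 1.5, 10)  # sparse true signal
y = X @ beta_true + rng.standard_normal(n_large)

# 'Gold-standard' model: unpenalised regression on the full subspace.
gold = LinearRegression().fit(X, y)

# Penalised models trained on a small subsample, as in the benchmark.
idx = rng.choice(n_large, n_small, replace=False)
Xs, ys = X[idx], y[idx]
lasso = LassoCV(cv=5).fit(Xs, ys)
enet = ElasticNetCV(cv=5, l1_ratio=0.5).fit(Xs, ys)

def cosine(a, b):
    """Cosine similarity between two coefficient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def selection_f1(est_coef, gold_coef, tol=0.1):
    """F1 of the estimated support against the gold-standard support.
    Thresholding the gold coefficients at `tol` is a simplification."""
    truth = (np.abs(gold_coef) > tol).astype(int)
    pred = (np.abs(est_coef) > 0).astype(int)
    return f1_score(truth, pred, zero_division=0)

for name, m in [("lasso", lasso), ("enet", enet)]:
    print(name,
          round(cosine(m.coef_, gold.coef_), 3),
          round(selection_f1(m.coef_, gold.coef_), 3))
```

In the same spirit as objective (3), penalty 'preselection' would amount to choosing whichever penalty scored best on these metrics in the other benchmark datasets, rather than relying on each model's internal cross-validation error.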
Citation:
O'Shea, R. J., Tsoka, S., Cook, G. J. R., & Goh, V. (2021). Sparse regression in cancer genomics: Comparing variable selection and predictions in real world data. Cancer Informatics, 20. https://doi.org/10.1177/11769351211056298