Codon language embeddings provide strong signals for use in protein engineering


Abstract

Protein representations from deep language models have yielded state-of-the-art performance across many tasks in computational protein engineering. In recent years, progress has primarily focused on parameter count, with recent models’ capacities surpassing the size of the very datasets they were trained on. Here we propose an alternative direction. We show that large language models trained on codons, instead of amino acid sequences, provide high-quality representations that outperform comparable state-of-the-art models across a variety of tasks. In some tasks, such as species recognition, prediction of protein and transcript abundance or melting point estimation, we show that a language model trained on codons outperforms every other published protein language model, including some that contain over 50 times more parameters. These results indicate that, in addition to commonly studied scale and model complexity, the information content of biological data provides an orthogonal direction to improve the power of machine learning in biology.
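To illustrate the abstract's central point, here is a minimal sketch (not from the paper) of why codon sequences carry more information than the amino-acid sequences they encode: synonymous codons collapse to the same residue after translation, so a codon-level model sees signal that an amino-acid-level model cannot. The codon table below is an excerpt of the standard genetic code (a full table has 64 entries).

```python
# Excerpt of the standard genetic code; synonymous codons map to the
# same amino acid (e.g. CTG and TTA both encode Leucine, "L").
CODON_TABLE = {
    "ATG": "M", "CTG": "L", "TTA": "L", "GCT": "A", "GCC": "A",
}

def translate(cds: str) -> str:
    """Translate a coding DNA sequence into amino acids, codon by codon."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(CODON_TABLE[c] for c in codons)

# Two distinct codon sequences yield the identical protein "MLA",
# so translation discards information a codon-level model retains.
assert translate("ATGCTGGCT") == translate("ATGTTAGCC") == "MLA"
```

The collapse shown here (many codon sequences, one protein) is the information the paper argues a codon-level language model can exploit, for example for species recognition or expression-level prediction.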

Citation (APA)
Outeiral, C., & Deane, C. M. (2024). Codon language embeddings provide strong signals for use in protein engineering. Nature Machine Intelligence, 6(2), 170–179. https://doi.org/10.1038/s42256-024-00791-0
