Improving prediction performance of general protein language model by domain-adaptive pretraining on DNA-binding protein


Abstract

DNA-protein interactions underpin many pivotal biological processes, such as DNA replication, transcription, and gene regulation. However, accurate and efficient computational methods for identifying these interactions are still lacking. In this study, we propose ESM-DBP, a method that refines the DNA-binding protein sequence repertoire and performs domain-adaptive pretraining on a general protein language model. Because the general language model has had little exposure to domain-specific knowledge of DNA-binding proteins, we screen out 170,264 DNA-binding protein sequences to construct the domain-adaptive language model. Experimental results on four downstream tasks show that ESM-DBP provides a better feature representation of DNA-binding proteins than the original language model, improving prediction performance and outperforming state-of-the-art methods. Moreover, ESM-DBP still performs well even on sequences with only a few homologous sequences. ChIP-seq experiments on two predicted cases further support the validity of the proposed method.
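Domain-adaptive pretraining of this kind typically continues the masked-language-modeling objective of the base model (here, ESM) on the in-domain corpus of DNA-binding protein sequences. The paper's actual training setup is not reproduced here; the sketch below only illustrates the standard BERT-style residue-masking step used to build MLM training targets. All names (`mask_sequence`, `MASK`, the masking ratios) are illustrative assumptions, not the authors' code.

```python
import random

# Illustrative assumption: the 20 standard amino acids and a mask token.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MASK = "<mask>"

def mask_sequence(seq, mask_frac=0.15, rng=None):
    """BERT-style masking for MLM pretraining: pick ~15% of residues as
    prediction targets; of those, 80% become <mask>, 10% are replaced by
    a random residue, and 10% are left unchanged."""
    rng = rng or random.Random(0)
    tokens = list(seq)
    targets = {}  # position -> original residue the model must recover
    n_pick = max(1, int(len(tokens) * mask_frac))
    for pos in rng.sample(range(len(tokens)), n_pick):
        targets[pos] = tokens[pos]
        r = rng.random()
        if r < 0.8:
            tokens[pos] = MASK
        elif r < 0.9:
            tokens[pos] = rng.choice(AMINO_ACIDS)
        # else: keep the original residue (the 10% "unchanged" case)
    return tokens, targets

# Toy protein sequence; in the paper's setting the corpus would be the
# 170,264 screened DNA-binding protein sequences.
masked, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                                rng=random.Random(42))
```

During pretraining, the model is trained to predict the residues stored in `targets` from the corrupted sequence `masked`, which is how the language model absorbs domain-specific sequence statistics without any labeled data.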

Cite

CITATION STYLE

APA

Zeng, W., Dou, Y., Pan, L., Xu, L., & Peng, S. (2024). Improving prediction performance of general protein language model by domain-adaptive pretraining on DNA-binding protein. Nature Communications, 15(1). https://doi.org/10.1038/s41467-024-52293-7
