Pertinent background knowledge for learning protein grammars

Abstract

We are interested in using Inductive Logic Programming (ILP) to infer grammars representing sets of protein sequences. ILP takes as input both examples and background knowledge predicates. This work is a first step in optimising the choice of background knowledge predicates for predicting the function of proteins. We propose methods to obtain different sets of background knowledge. We then study the impact of these sets on inference results through a hard protein function inference task: the prediction of the coupling preference of GPCR proteins. All but one of the proposed sets of background knowledge are statistically shown to have positive impacts on the predictive power of inferred rules, either directly or through interactions with other sets. In addition, this work provides further confirmation, following the work of Muggleton et al. (2001), that ILP can help to predict protein functions.
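
The abstract itself contains no code; as a purely illustrative sketch (not taken from the paper), the Python fragment below shows the kind of background knowledge predicates an ILP system could be given alongside example sequences. The physico-chemical groupings, function names, and example strings are all assumptions made for illustration, not the predicate sets proposed by the authors.

    # Illustrative sketch only: toy "background knowledge" predicates of the kind
    # an ILP system might use when learning grammars over protein sequences.
    # The groupings and names below are assumptions, not the paper's predicate sets.

    HYDROPHOBIC = set("AVLIMFWPG")   # assumed physico-chemical grouping
    POLAR       = set("STCYNQ")
    CHARGED     = set("DEKRH")

    def hydrophobic(residue: str) -> bool:
        """Background predicate: residue belongs to the hydrophobic class."""
        return residue in HYDROPHOBIC

    def charged(residue: str) -> bool:
        """Background predicate: residue belongs to the charged class."""
        return residue in CHARGED

    # Positive examples: invented fragments standing in for, e.g., GPCR subsequences.
    positive_examples = ["MKTLLV", "AQRSTV"]

    # An inferred grammar rule could generalise the examples in terms of the
    # background predicates, e.g. "a hydrophobic residue followed by a charged
    # residue"; here we simply check that toy pattern against the examples.
    def matches_toy_rule(fragment: str) -> bool:
        return len(fragment) >= 2 and hydrophobic(fragment[0]) and charged(fragment[1])

    for ex in positive_examples:
        print(ex, matches_toy_rule(ex))

In the paper's setting, such predicates would be supplied to the ILP system as background knowledge rather than evaluated directly; the sketch only illustrates how different predicate sets restrict the vocabulary available to the inferred grammar rules.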

Citation (APA)

Bryant, C. H., Fredouille, D. C., Wilson, A., Jayawickreme, C. K., Jupe, S., & Topp, S. (2006). Pertinent background knowledge for learning protein grammars. In Lecture Notes in Computer Science (Vol. 4212 LNAI, pp. 54–65). Springer-Verlag. https://doi.org/10.1007/11871842_10
