Learning through hypothesis refinement using answer set programming

Abstract

Recent work has shown how a meta-level approach to inductive logic programming, which uses a semantics-preserving transformation of a learning task into an abductive reasoning problem, can address a large class of multi-predicate, nonmonotonic learning problems in a sound and complete manner. An Answer Set Programming (ASP) implementation, called ASPAL, has been proposed that uses the ASP solver's fixed-point computation to solve a learning task, thus delegating the search to the solver. Although this meta-level approach has been shown to be very general and flexible, the scalability of its ASP implementation is constrained by the grounding of the meta-theory. In this paper we build upon these results and propose a new meta-level learning approach that overcomes the scalability problem of ASPAL by breaking the learning process into small, manageable steps and by using theory revision over the meta-level representation of the hypothesis space to improve the hypothesis computed at each step. We empirically evaluate the computational gain over ASPAL using two different answer set solvers.
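
To make the meta-level idea concrete, the sketch below illustrates, in heavily simplified form and not as the authors' actual ASPAL encoding, how candidate rules from a hypothesis space can be guarded by "selector" atoms so that an ASP solver searches for a minimal subset of rules that covers the positive examples and excludes the negative ones. It uses the clingo Python API; the bird/penguin predicates, the rule identifiers r1 and r2, and the example atoms are all invented for illustration.

```python
# Minimal, illustrative sketch of a meta-level ILP encoding in ASP
# (an assumption-laden simplification, NOT the ASPAL encoding itself).
# Requires the clingo Python package: pip install clingo
from clingo import Control

META_PROGRAM = """
% Background knowledge (invented for illustration).
bird(tweety). bird(sam). penguin(sam).

% Hypothesis space: each candidate rule is guarded by a selector atom.
flies(X) :- bird(X),                 selected(r1).
flies(X) :- bird(X), not penguin(X), selected(r2).

% The solver may choose any subset of candidate rules ...
{ selected(r1); selected(r2) }.

% ... but the chosen hypothesis must cover the examples.
:- not flies(tweety).      % positive example: tweety flies
:- flies(sam).             % negative example: sam does not fly

% Prefer smaller hypotheses, mimicking a minimality bias.
#minimize { 1,R : selected(R) }.
#show selected/1.
"""

def main() -> None:
    ctl = Control(["--opt-mode=optN"])   # enumerate optimal answer sets
    ctl.add("base", [], META_PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("Hypothesis:", m.symbols(shown=True)))

if __name__ == "__main__":
    main()
```

Running this sketch should report selected(r2), i.e. the nonmonotonic rule flies(X) :- bird(X), not penguin(X), which covers the positive example while excluding the negative one. ASPAL performs an analogous search over a much larger, automatically generated set of candidate rules, and the refinement approach proposed in the paper improves such a hypothesis incrementally over a sequence of smaller learning steps.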

Citation (APA)

Athakravi, D., Corapi, D., Broda, K., & Russo, A. (2014). Learning through hypothesis refinement using answer set programming. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8812, 31–46. https://doi.org/10.1007/978-3-662-44923-3_3
