Exploring learnability between exact and PAC

Abstract

We study a model of Probably Exactly Correct (PExact) learning that can be viewed in two ways: as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are drawn from a distribution rather than chosen adversarially, or as the Probably Approximately Correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of Probably Almost Exactly Correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are several positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.
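To make the relationship between these models concrete, the success criteria can be sketched informally as follows; this is a rough guide only, stated in terms of the usual distributional error \mathrm{err}_D(h) = \Pr_{x \sim D}[h(x) \neq f(x)], and the paper's formal definitions may parameterize things differently.

Exact: h \equiv f, i.e., the hypothesis agrees with the target f on every instance.
PExact: \Pr[\mathrm{err}_D(h) = 0] \ge 1 - \delta.
PAExact: \Pr[\mathrm{err}_D(h) \le \epsilon(n)] \ge 1 - \delta, where \epsilon(n) is negligible (smaller than any inverse polynomial in the problem size n).
PAC: \Pr[\mathrm{err}_D(h) \le \epsilon] \ge 1 - \delta for given accuracy and confidence parameters \epsilon, \delta > 0.

Under this reading, the requirement on the learner's hypothesis weakens step by step from Exact to PExact to PAExact to PAC, which is the sense in which PAExact sits between the PExact and PAC models.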

Citation (APA)

Bshouty, N. H., Jackson, J. C., & Tamon, C. (2002). Exploring learnability between exact and PAC. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2375, pp. 244–254). Springer-Verlag. https://doi.org/10.1007/3-540-45435-7_17
