Lower bounds on learning decision lists and trees

Abstract

k-Decision lists and decision trees play important roles in learning theory as well as in practical learning systems. k-Decision lists generalize classes such as monomials, k-DNF, and k-CNF, and, like these subclasses, are polynomially PAC-learnable [19]. This leaves open the question of whether k-decision lists can be learned as efficiently as k-DNF. We answer this question negatively in a certain sense, thus disproving a claim in a popular textbook [2]. Decision trees, on the other hand, are not even known to be polynomially PAC-learnable, despite their widespread practical application. We will show that decision trees are not likely to be efficiently PAC-learnable. We summarize our specific results. The following problems cannot be approximated in polynomial time within a factor of 2^{log^δ n} for any δ < 1, unless NP ⊆ DTIME[2^{polylog n}]: a generalized set cover, k-decision lists, k-decision lists by monotone decision lists, and decision trees. Decision lists cannot be approximated in polynomial time within a factor of n^δ, for some constant δ > 0, unless NP = P. Also, k-decision lists with ℓ 0-1 alternations cannot be approximated within a factor log^ℓ n unless NP ⊆ DTIME[n^{O(log log n)}] (providing an interesting comparison to the upper bound recently obtained in [1]).
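For readers unfamiliar with the representation the abstract discusses: a k-decision list is an ordered sequence of rules, each pairing a conjunction of at most k literals with an output bit, plus a default output; the first rule whose conjunction is satisfied determines the value. The following sketch is purely illustrative (the data layout and function name are our own, not from the paper):

```python
# Illustrative sketch of evaluating a k-decision list.
# Each rule is (literals, output), where literals is a list of
# (variable index, required value) pairs -- a conjunction of at most
# k literals. The first rule whose conjunction holds fires; if none
# fires, the default output is returned.

def eval_decision_list(rules, default, x):
    """Evaluate a decision list on a 0/1 input vector x."""
    for literals, output in rules:
        if all(x[i] == val for i, val in literals):
            return output
    return default

# Example 2-decision list over 3 variables:
#   if x0 = 1 and x1 = 0 -> 1;  elif x2 = 1 -> 0;  else -> 1
rules = [([(0, 1), (1, 0)], 1), ([(2, 1)], 0)]
print(eval_decision_list(rules, 1, (1, 0, 1)))  # first rule fires  -> 1
print(eval_decision_list(rules, 1, (0, 0, 1)))  # second rule fires -> 0
print(eval_decision_list(rules, 1, (0, 1, 0)))  # default          -> 1
```

With k = 1 each conjunction is a single literal; allowing conjunctions of up to k literals is what makes the class subsume k-DNF and k-CNF.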

Citation (APA)

Hancock, T., Jiang, T., Li, M., & Tromp, J. (1995). Lower bounds on learning decision lists and trees. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 900, pp. 527–538). Springer Verlag. https://doi.org/10.1007/3-540-59042-0_102
