This paper proposes the minimum L-complexity algorithm (MLC), which can be thought of as an extension of the minimum description length (MDL) principle-based algorithm to the case where general real-valued functions are used as hypotheses and general loss functions are used as distortion measures. MLC is also closely related to Barron's complexity regularization algorithm and Vapnik's structural risk minimization. We demonstrate the effectiveness of MLC in terms of sample complexity within the decision-theoretic PAC learning model. Specifically, using MLC, we develop a unifying method for deriving upper bounds on target-dependent (non-uniform) sample complexity in both parametric and non-parametric settings. We further introduce a method for evaluating average-case sample complexity, where the average is taken with respect to a prior probability over the parametric target class. These target-dependent and average-case bounds offer a new view of sample complexity analysis, whereas most previous work has focused on worst-case sample complexity. As applications of MLC, we consider the problem of learning non-parametric rules in the form of 1) stochastic rules with finite partitioning, 2) finite Hermite series, and 3) finite Fourier series, and we use MLC to improve the best previously known sample complexity results for these problems.
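The paper's exact formulation of MLC is not reproduced in this abstract; the sketch below illustrates only the general complexity-penalized empirical risk minimization scheme that MLC, MDL-based learning, and complexity regularization all instantiate. The function names, the candidate representation, and the additive penalty form are illustrative assumptions, not the paper's definitions.

```python
import math
from typing import Callable, Sequence, Tuple

# A minimal sketch of complexity-penalized empirical risk minimization,
# assuming hypotheses are supplied with precomputed description lengths.
# All names and the exact penalty form are hypothetical, not Yamanishi's
# formulation of MLC.

Hypothesis = Callable[[float], float]

def mlc_select(
    data: Sequence[Tuple[float, float]],
    candidates: Sequence[Tuple[Hypothesis, float]],  # (hypothesis, code length in bits)
    loss: Callable[[float, float], float],           # general distortion measure
) -> Hypothesis:
    """Return the candidate minimizing empirical loss plus its code length."""
    best, best_score = None, math.inf
    for h, code_length in candidates:
        empirical_loss = sum(loss(y, h(x)) for x, y in data)
        score = empirical_loss + code_length  # shorter description => smaller penalty
        if score < best_score:
            best, best_score = h, score
    return best

if __name__ == "__main__":
    # Toy usage: choose between a constant and a linear rule under squared loss.
    data = [(x / 10, 0.5 * x / 10 + 0.01) for x in range(10)]
    squared = lambda y, yhat: (y - yhat) ** 2
    candidates = [
        (lambda x: 0.3, 4.0),      # simple rule, short description
        (lambda x: 0.5 * x, 8.0),  # richer rule, longer description
    ]
    chosen = mlc_select(data, candidates, squared)
    print(chosen(1.0))
```

The design point this illustrates is the trade-off the abstract alludes to: a richer hypothesis lowers empirical loss but pays a larger description-length penalty, and the selected rule balances the two.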
Citation:
Yamanishi, K. (1994). The minimum L-complexity algorithm and its applications to learning non-parametric rules. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory (COLT '94) (pp. 173–182). Association for Computing Machinery. https://doi.org/10.1145/180139.181096