A Markovian Extension of Valiant's Learning Model

Abstract

An "Occam algorithm" learning model maintains a tentative hypothesis consistent with past observations and, when a new observation is inconsistent with the current hypothesis, updates to the next-simplest hypothesis consistent with all observations. In previous work, observations were assumed to be stochastically independent. This paper initiates the study of such models under weaker Markovian assumptions on the observations. In the special case where the sequence of hypotheses satisfies a monotonicity condition, it is shown that the number of mistakes in classifying the first t observations is O(√t log 1/πᵢ), where πᵢ is the stationary probability of the initial state, i, of the Markov chain. © 1995 Academic Press, Inc.
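The update rule described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's formal model: the enumerated hypothesis list, the threshold classifiers, and the toy observation sequence are all assumptions made here for demonstration.

```python
# Sketch of the Occam-style online learner from the abstract: hypotheses are
# enumerated from simplest to most complex, and whenever the current
# hypothesis errs, the learner advances to the next hypothesis consistent
# with all observations seen so far.

def occam_learner(hypotheses, observations):
    """Run the consistent-hypothesis update rule; return the mistake count.

    hypotheses   -- list of classifiers h(x) -> label, simplest first
    observations -- iterable of (x, label) pairs
    """
    history = []
    idx = 0          # index of current (simplest consistent) hypothesis
    mistakes = 0
    for x, label in observations:
        if hypotheses[idx](x) != label:
            mistakes += 1
        history.append((x, label))
        # Advance to the next-simplest hypothesis consistent with history.
        while not all(hypotheses[idx](u) == v for u, v in history):
            idx += 1
    return mistakes

# Toy example (hypothetical): threshold classifiers on the integers,
# with the unknown target being the threshold t = 3.
hyps = [lambda x, t=t: int(x >= t) for t in range(6)]
obs = [(x, int(x >= 3)) for x in [0, 5, 2, 3, 1, 4]]
print(occam_learner(hyps, obs))  # → 2
```

Note that the learner only ever moves forward through the hypothesis list, which mirrors the monotonicity condition under which the paper's mistake bound applies.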

Citation (APA)

Aldous, D., & Vazirani, U. (1995). A Markovian extension of Valiant's learning model. Information and Computation, 117(2), 181–186. https://doi.org/10.1006/inco.1995.1037
