On the randomness in learning

Abstract

Consider a random binary sequence X^(n) of random variables X_t, t = 1, 2, ..., n, for instance one generated by a Markov source (the teacher) of order k*, where each state is represented by k* bits. Assume that the probability of the event X_t = 1 is constant and denote it by β. Consider a learner based on a parametric model, for instance a Markov model of order k, which trains on a sequence x^(m) drawn at random from the teacher. The learner's performance is tested by giving it a sequence x^(n) generated by the teacher and checking its prediction on every bit of x^(n). An error occurs at time t if the learner's prediction Y_t differs from the true bit value X_t. Denote by ξ^(n) the sequence of errors, where the error bit ξ_t at time t equals 1 if an error occurs and 0 otherwise. Consider the subsequence ξ^(v) of ξ^(n) corresponding to errors made when predicting a 0, i.e., ξ^(v) consists of the bits of ξ^(n) only at times t such that Y_t = 0. In this paper we compute an estimate of the deviation of the frequency of 1s in ξ^(v) from β. The result shows that the level of randomness of ξ^(v) decreases as the complexity of the learner increases. © 2009 IEEE.
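
To make the teacher/learner setup concrete, below is a minimal simulation sketch in Python. It is not from the paper; all specifics are illustrative assumptions: an order-1 Markov teacher with P(1|0) = 0.15 and P(1|1) = 0.65 (chosen so the stationary probability of a 1 is exactly β = 0.3), a learner that predicts the majority bit seen in training for each k-bit context, and learner orders k = 0 and k = 1. The paper's actual estimator and deviation bound are not reproduced here; the sketch only measures the frequency of 1s in ξ^(v), whose deviation from β is the quantity the paper studies.

    import numpy as np

    rng = np.random.default_rng(0)

    def markov_source(n, k_star, probs, rng):
        """Emit n bits from an order-k_star binary Markov source.
        probs[c] is P(next bit = 1 | context c), contexts encoded as integers."""
        bits = list(rng.integers(0, 2, size=k_star))  # random initial context
        for _ in range(n):
            ctx = int("".join(str(b) for b in bits[-k_star:]), 2) if k_star else 0
            bits.append(int(rng.random() < probs[ctx]))
        return np.array(bits[k_star:])

    def train_learner(x, k):
        """For each k-bit context, record the majority next bit seen in training."""
        counts = np.zeros((2 ** k, 2))
        for t in range(k, len(x)):
            ctx = int("".join(str(b) for b in x[t - k:t]), 2) if k else 0
            counts[ctx, x[t]] += 1
        return (counts[:, 1] > counts[:, 0]).astype(int)

    def predict(x, k, table):
        """Predict each bit from its preceding k bits (first k predictions default to 0)."""
        y = np.zeros(len(x), dtype=int)
        for t in range(k, len(x)):
            ctx = int("".join(str(b) for b in x[t - k:t]), 2) if k else 0
            y[t] = table[ctx]
        return y

    # Hypothetical teacher: order k* = 1 with P(1|0) = 0.15, P(1|1) = 0.65,
    # chosen so the stationary probability of a 1 is exactly beta = 0.3.
    k_star, beta = 1, 0.3
    probs = np.array([0.15, 0.65])

    x_train = markov_source(100_000, k_star, probs, rng)  # training sequence x^(m)
    x_test = markov_source(100_000, k_star, probs, rng)   # test sequence x^(n)

    for k in (0, 1):                       # learner complexity: order 0 vs order 1
        y = predict(x_test, k, train_learner(x_train, k))
        xi = (y != x_test).astype(int)     # error sequence xi^(n)
        mask = (y == 0)                    # keep times where the learner predicted 0
        mask[:k] = False                   # drop positions lacking a full context
        xi_v = xi[mask]                    # subsequence xi^(v)
        print(f"k = {k}: freq of 1s in xi^(v) = {xi_v.mean():.3f}  (beta = {beta})")

Under these assumptions the order-0 learner always predicts 0, so the 1s in ξ^(v) arrive with frequency ≈ β = 0.3, like Bernoulli(β) coin flips; the order-1 learner captures the source's structure, and its ξ^(v) has a frequency of 1s ≈ 0.15, deviating from β. A Bernoulli(0.15) sequence has lower entropy than a Bernoulli(0.3) one, which illustrates the abstract's claim that the randomness of ξ^(v) decreases as the learner's complexity grows.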

Citation

Ratsaby, J. (2009). On the randomness in learning. In ICCC 2009 - IEEE 7th International Conference on Computational Cybernetics (pp. 141–145). https://doi.org/10.1109/ICCCYB.2009.5393947
