The notion of a weak classifier, one that performs "a little better" than a random one, was first introduced for 2-class problems [1]. Known extensions to K-class problems are all based on the relative activations of the correct and incorrect classes and do not take the final choice of answer into account. A new understanding and definition is proposed here that depends only on the final classification decision that must be made. It is shown that for a K-class classifier to be called "weak", it must achieve a risk value lower than 1/K. This approach considers only the probability of the final answer choice, not the actual activations.
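The criterion from the abstract can be illustrated with a minimal sketch. The function names (`empirical_risk`, `is_weak`) and the use of 0/1 loss over final predicted labels are assumptions for illustration, not the paper's actual formalism:

```python
def empirical_risk(classifier, samples):
    """Empirical 0/1 risk: fraction of samples whose final
    predicted label differs from the true label. Only the final
    choice matters; class activations are never consulted."""
    errors = sum(1 for x, y in samples if classifier(x) != y)
    return errors / len(samples)

def is_weak(classifier, samples, num_classes):
    """Weakness test in the sense sketched in the abstract:
    a K-class classifier is 'weak' if its risk is strictly
    below 1/K."""
    return empirical_risk(classifier, samples) < 1.0 / num_classes

# Hypothetical usage on a toy 3-class problem where the true
# label of input x is x % 3.
samples = [(i, i % 3) for i in range(30)]
print(is_weak(lambda x: x % 3, samples, 3))        # always correct
print(is_weak(lambda x: (x + 1) % 3, samples, 3))  # always wrong
```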
CITATION STYLE
Podolak, I. T., & Roman, A. (2009). A new notion of weakness in classification theory. Advances in Intelligent and Soft Computing, 57, 239–245. https://doi.org/10.1007/978-3-540-93905-4_29