Feature weighting in supervised learning concerns the development of methods for quantifying how well features discriminate instances from different classes. A popular method for this task, called RELIEF, generates a feature weight vector from a given training set, with one weight per feature. This is achieved by greedily maximizing the sample margin defined by the nearest neighbor classifier. The contribution of each class to the sample margin maximization defines a set of class dependent feature weight vectors, one for each class, which provides a tool to unravel properties of features relevant to a single class of interest. In this paper we analyze such class dependent feature weight vectors. For instance, we show that in a machine learning dataset describing recurrence and non-recurrence events in breast cancer, the features have different relevance in the two types of events, with the size of the tumor estimated to be highly relevant in the recurrence class but not in the non-recurrence one. Furthermore, experimental results show that a high correlation between the feature weights of one class and those generated by RELIEF corresponds to an easier classification task. In general, the results of this investigation indicate that class dependent feature weights are useful for unraveling properties of features with respect to a class of interest, and that they provide information on the relative difficulty of classification tasks. © 2013 Springer-Verlag Berlin Heidelberg.
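The RELIEF scheme described in the abstract, and its decomposition into class dependent weights, can be illustrated with a short sketch. The Python snippet below is a minimal illustration only: it uses the standard RELIEF update (nearest-hit / nearest-miss feature differences) and attributes each update to the class of the sampled instance; the exact class dependent formulation and normalization used in the paper may differ, and the function name `relief_weights` is an assumption for this sketch.

```python
import numpy as np

def relief_weights(X, y, n_iter=None, rng=None):
    """RELIEF-style feature weighting (minimal sketch).

    For each sampled instance, a feature's weight grows with its difference
    to the nearest miss (closest instance of another class) and shrinks with
    its difference to the nearest hit (closest instance of the same class).
    Per-class accumulators approximate the class dependent weights discussed
    in the paper. Features are assumed to be pre-scaled to comparable ranges.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    rng = np.random.default_rng(rng)
    n_iter = n if n_iter is None else n_iter

    w = np.zeros(d)                                    # global RELIEF weights
    w_class = {c: np.zeros(d) for c in np.unique(y)}   # class dependent weights

    for i in rng.choice(n, size=n_iter, replace=False):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                               # exclude the instance itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))  # nearest hit
        miss = np.argmin(np.where(diff, dist, np.inf)) # nearest miss
        delta = np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
        w += delta / n_iter
        w_class[y[i]] += delta / n_iter                # contribution of x_i's class
    return w, w_class

# Toy usage on a two-class dataset: the second feature separates the classes,
# so it receives the larger weight.
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.2, 0.9], [0.1, 1.0]])
y = np.array([0, 0, 1, 1])
w, w_class = relief_weights(X, y, rng=0)
```

Comparing `w_class[c]` for each class against the global vector `w` mirrors the kind of per-class analysis the abstract describes, e.g. checking whether a feature is weighted highly for one class but not the other.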
CITATION STYLE
Marchiori, E. (2013). Class dependent feature weighting and k-nearest neighbor classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7986 LNBI, pp. 69–78). Springer Verlag. https://doi.org/10.1007/978-3-642-39159-0_7