Extending linear classifiers from feature vectors to attributed graphs results in sublinear classifiers. In contrast to linear models, the classification performance of sublinear models depends on which class we label as positive and which as negative. We prove that the expected classification accuracy of sublinear models may differ between the two class labelings. Experiments confirm this finding for empirical classification accuracies on small samples. These results give rise to flip-flop sublinear classifiers, which consider both class labelings during training and select for prediction the model that better fits the training data. © 2014 Springer-Verlag Berlin Heidelberg.
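The flip-flop selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's sublinear graph model: `train_threshold` is a hypothetical toy learner whose decision rule is asymmetric in the two classes (it predicts positive iff the maximum feature value clears a learned threshold), which is enough to show why training under both labelings and keeping the better fit can help.

```python
def train_threshold(X, y):
    """Toy asymmetric learner (an assumption for illustration, not the
    paper's model): predict +1 iff max(x) >= theta, with theta set to the
    smallest positive-class maximum so all positives are covered."""
    pos_max = [max(x) for x, t in zip(X, y) if t == 1]
    theta = min(pos_max) if pos_max else float("inf")
    def predict(x):
        return 1 if max(x) >= theta else -1
    return predict

def accuracy(model, X, y):
    """Fraction of examples the model labels correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def flip_flop_fit(X, y):
    """Train under both class labelings; keep the better-fitting model."""
    # Model A: original labeling.
    model_a = train_threshold(X, y)
    # Model B: flipped labeling; its outputs are flipped back so both
    # models predict in the original labeling.
    y_flipped = [-t for t in y]
    model_b_raw = train_threshold(X, y_flipped)
    def model_b(x):
        return -model_b_raw(x)
    acc_a = accuracy(model_a, X, y)
    acc_b = accuracy(model_b, X, y)
    return (model_a, acc_a) if acc_a >= acc_b else (model_b, acc_b)
```

On a small sample where the positive class has the lower feature maxima, the original labeling fits poorly while the flipped labeling fits perfectly, so the flip-flop rule selects the flipped model.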
CITATION STYLE
Jain, B. (2014). Flip-flop sublinear models for graphs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8621 LNCS, pp. 93–102). Springer Verlag. https://doi.org/10.1007/978-3-662-44415-3_10