Parametric and Nonparametric Classification for Minimizing Misclassification Errors

Abstract

Parametric classification fits a parametric model to the training data and interpolates from it to classify the test data, whereas nonparametric methods such as regression trees and classification trees determine the class without assuming a fixed model form. Classification can be of two types: supervised and unsupervised. In supervised classification, labeled training data are used to design the classifier; Bayes’s rule, the nearest-neighbor rule, and the perceptron rule are a few widely used supervised classification rules. For unlabeled data, the classification process is called clustering or unsupervised classification. This paper proposes a wrapper-based approach to pattern classification that minimizes the misclassification error. Techniques such as the Bayes classifier, the K-NN classifier, and the NN classifier are used to classify patterns in linearly separable, linearly nonseparable, and Gaussian sample datasets. These methods classify the data in two stages: a training stage and a prediction stage. In this paper, we use both parametric and nonparametric decision-making algorithms, since the statistical and geometric properties of the patterns under study are known.
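
The paper does not publish source code; the following is a minimal Python sketch of the two-stage (training and prediction) workflow the abstract describes, comparing a parametric Gaussian Bayes classifier with a nonparametric K-NN classifier on a synthetic two-class Gaussian dataset and reporting their misclassification errors. The dataset parameters, the choice k = 5, and all function names are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only (not the authors' code): a parametric Gaussian
# Bayes classifier and a nonparametric K-NN classifier on a synthetic
# two-class Gaussian dataset, each with an explicit training stage and
# prediction stage, compared by misclassification error.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Gaussian sample dataset: two classes, 2-D features.
n_per_class = 200
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n_per_class, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n_per_class, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Shuffle and split into training and test sets.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
split = len(y) // 2
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

# Parametric method, training stage: estimate per-class mean, covariance, prior.
def train_gaussian_bayes(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(y))
    return params

# Parametric method, prediction stage: pick the class with the highest
# posterior score (Bayes's rule with Gaussian class-conditional densities).
def predict_gaussian_bayes(params, X):
    scores = []
    for mu, cov, prior in params.values():
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = X - mu
        log_lik = -0.5 * (np.sum(d @ inv * d, axis=1) + logdet)
        scores.append(log_lik + np.log(prior))
    labels = np.array(list(params.keys()))
    return labels[np.argmax(scores, axis=0)]

# Nonparametric method: the training stage just stores the labeled samples;
# the prediction stage takes a majority vote among the k nearest neighbors.
def predict_knn(X_train, y_train, X, k=5):
    dists = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    votes = y_train[nearest]
    return (votes.mean(axis=1) > 0.5).astype(int)

params = train_gaussian_bayes(X_train, y_train)
err_bayes = np.mean(predict_gaussian_bayes(params, X_test) != y_test)
err_knn = np.mean(predict_knn(X_train, y_train, X_test, k=5) != y_test)
print(f"Misclassification error  Bayes: {err_bayes:.3f}  K-NN: {err_knn:.3f}")

On well-separated Gaussian classes like this, both classifiers should reach a similarly low error; the sketch is meant only to make the training/prediction split and the parametric versus nonparametric distinction concrete.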

Citation (APA)

Nagdeote, S., & Chiwande, S. (2020). Parametric and Nonparametric Classification for Minimizing Misclassification Errors. In Lecture Notes in Networks and Systems (Vol. 100, pp. 441–453). Springer. https://doi.org/10.1007/978-981-15-2071-6_35
