Algorithmic approach to the identification of classification rules or separation surface for spatial data

Abstract

As discussed in Chap. 3, naïve Bayes, LDA, logistic regression, and the support vector machine are statistical, or statistics-related, models developed for the classification of data. Breaking away from this statistical tradition are a number of classifiers that are algorithmic in nature. Instead of assuming a data model, which is essential to conventional statistical methods, these algorithmic classifiers work directly on the data without making any distributional assumptions about them. Many, particularly in the pattern recognition and artificial intelligence communities, regard this as a more flexible way to discover how data should be classified. Decision trees (classification trees in the context of classification), neural networks, genetic algorithms, fuzzy sets, and rough sets are typical paradigms. Rather than searching for a separation surface, as the statistical classifiers do, some of these methods attempt to discover classification rules that appropriately partition the feature space with respect to pre-specified classes.

A decision tree is a segmentation of a training data set (Quinlan 1986; Friedman 1977). It is built by first treating all objects as a single group, which forms the root node of the tree. Training examples are then passed down the tree by splitting each intermediate node with respect to a variable, and construction stops when a pre-specified stopping criterion is met. Each leaf (terminal) node carries a decision label, e.g., a class label, so the tree partitions the feature space into sub-spaces corresponding to its leaves. A decision tree that handles classification is known as a classification tree, one that solves regression problems is called a regression tree, and one that deals with both kinds of problem is referred to as a classification and regression tree (Breiman et al. 1984).

Decision tree algorithms differ mainly in their splitting and pruning strategies, and they usually aim at an optimal partitioning of the feature space that minimizes the generalization error. The advantages of the decision tree approach are that it requires no assumptions about the underlying distribution of the data and that it can handle both discrete and continuous variables; furthermore, trees of reasonable size and complexity are easy to construct and interpret. Its disadvantages are that splitting and pruning rules can be rather subjective, that the theory is less rigorous by the standards of the statistical tradition, and that it suffers from combinatorial explosion if the number of variables and their value labels is not appropriately controlled. Typical decision tree methods include ID3 (Quinlan 1986), C4.5 (Quinlan 1993), CART (Breiman et al. 1984), CHAID (Kass 1980), QUEST and its newer versions, and FACT (Loh and Vanichsetakul 1988).
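The growing procedure just described (start with all training examples at the root, split each node on a variable, stop when a criterion is met, and label each leaf with a class) is easy to make concrete. Below is a minimal Python sketch of a CART-style classification tree; the use of Gini impurity as the split criterion and a depth limit as the stopping rule are assumptions made here for illustration, as are all names in the code, so it should be read as a sketch of the general technique rather than as the chapter's own algorithm.

```python
# Minimal, illustrative CART-style tree growing on a toy data set.
# Assumed for illustration: numeric features, Gini impurity as the split
# criterion, and a maximum depth as the stopping criterion.
from collections import Counter

def gini(labels):
    """Gini impurity of a collection of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Exhaustively search (feature, threshold) pairs for the split that
    minimizes the weighted Gini impurity of the two children."""
    best = None  # (weighted impurity, feature index, threshold)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[j] <= t]
            right = [y[i] for i, row in enumerate(X) if row[j] > t]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or w < best[0]:
                best = (w, j, t)
    return best

def grow(X, y, depth=0, max_depth=3):
    """Recursively partition the training set. A node becomes a leaf,
    labelled with its majority class, when it is pure, when no split
    improves impurity, or when the depth limit is reached."""
    if len(set(y)) == 1 or depth == max_depth:
        return Counter(y).most_common(1)[0][0]   # leaf: a class label
    split = best_split(X, y)
    if split is None or split[0] >= gini(y):
        return Counter(y).most_common(1)[0][0]   # no useful split found
    _, j, t = split
    li = [i for i, row in enumerate(X) if row[j] <= t]
    ri = [i for i, row in enumerate(X) if row[j] > t]
    return {"feature": j, "threshold": t,
            "left": grow([X[i] for i in li], [y[i] for i in li], depth + 1, max_depth),
            "right": grow([X[i] for i in ri], [y[i] for i in ri], depth + 1, max_depth)}

def predict(tree, x):
    """Pass an example down the tree until a leaf (class label) is reached."""
    while isinstance(tree, dict):
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree

# Toy spatially flavoured example: classify points by their coordinates.
X = [[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [5.0, 7.0], [3.5, 5.0], [4.5, 5.0]]
y = ["A", "A", "B", "B", "B", "A"]
tree = grow(X, y)
print(predict(tree, [1.2, 1.8]))  # -> "A"
```

In practice one would rely on an established implementation (scikit-learn's DecisionTreeClassifier, for example, implements an optimized variant of CART); the sketch is only meant to show how recursive splitting produces the leaf sub-spaces that partition the feature space, as described above.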

Citation (APA)

Leung, Y. (2009). Algorithmic approach to the identification of classification rules or separation surface for spatial data. In Advances in Spatial Science (Vol. 62, pp. 143–221). Springer International Publishing. https://doi.org/10.1007/978-3-642-02664-5_4
