Discriminant functions calculated by Support Vector Machines (SVMs) define, in a computationally efficient way, projections of high-dimensional data onto the direction perpendicular to the discriminating hyperplane. These projections may be used to estimate and display posterior probability densities. Additional directions for visualization and dimensionality reduction are created by repeating the linear discrimination process in a space orthogonal to the already defined projections. This process allows for an efficient reduction of dimensionality and visualization of data, while at the same time improving on the classification accuracy of a single discriminant function. Visualization of real and artificial data shows that the transformed data may not be linearly separable, so linear discrimination will fail completely, yet nearest-neighbor or rule-based methods in the reduced space may still provide simple and accurate solutions. © Springer-Verlag Berlin Heidelberg 2008.
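The procedure described above can be sketched in a few lines: train a linear SVM, take the normal of its hyperplane as a projection direction, deflate the data onto the orthogonal subspace, and repeat. This is a minimal illustration of the general idea, not the authors' implementation; it assumes scikit-learn's `LinearSVC`, an explicit Gram-Schmidt step to keep directions orthogonal, and arbitrary toy data.

```python
import numpy as np
from sklearn.svm import LinearSVC  # assumed available; any linear SVM would do

def svm_projections(X, y, n_dirs=2):
    """Extract successive discriminant directions: each linear SVM is
    trained on data deflated to the subspace orthogonal to the
    previously found directions (sketch of the repeated-discrimination
    idea, not the paper's exact algorithm)."""
    Xr = X.astype(float).copy()
    dirs = []
    for _ in range(n_dirs):
        clf = LinearSVC(C=1.0, max_iter=20000).fit(Xr, y)
        w = clf.coef_.ravel()
        # Gram-Schmidt against earlier directions, then normalize,
        # so the projection directions are exactly orthonormal.
        for v in dirs:
            w = w - (w @ v) * v
        w = w / np.linalg.norm(w)
        dirs.append(w)
        # Deflate: remove the component along w, so the next SVM
        # works in the orthogonal complement.
        Xr = Xr - np.outer(Xr @ w, w)
    W = np.vstack(dirs)            # shape (n_dirs, n_features)
    return X @ W.T, W              # 2-D projections for visualization

# Toy two-class data in 5 dimensions (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),
               rng.normal(2.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
Z, W = svm_projections(X, y, n_dirs=2)
```

Plotting the two columns of `Z` against each other gives the kind of scatter-plot visualization the abstract refers to; even when the classes overlap along the first direction, the second, orthogonal direction can reveal structure that simple rules or nearest-neighbor methods exploit.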
Maszczyk, T., & Duch, W. (2008). Support vector machines for visualization and dimensionality reduction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 346–356). https://doi.org/10.1007/978-3-540-87536-9_36