Dimensionality reduction using singular vectors


Abstract

A common problem in machine learning and pattern recognition is identifying the most relevant features, especially when dealing with high-dimensional datasets in bioinformatics. In this paper, we propose a new feature selection method, called Singular-Vectors Feature Selection (SVFS). Let D = [A ∣ b] be a labeled dataset, where b is the class label and the features (attributes) are the columns of matrix A. We show that the signature matrix S_A = I - A†A can be used to partition the columns of A into clusters so that columns in a cluster correlate only with columns in the same cluster. In the first step, SVFS uses the signature matrix S_D of D to find the cluster that contains b. We reduce the size of A by discarding the features in the other clusters as irrelevant. In the next step, SVFS uses the signature matrix S_A of the reduced A to partition the remaining features into clusters and chooses the most important features from each cluster. In addition to working perfectly on synthetic datasets, comprehensive experiments on real-world benchmark and genomic datasets show that SVFS exhibits overall superior performance compared to state-of-the-art feature selection methods in terms of accuracy, running time, and memory usage. A Python implementation of SVFS, along with the datasets used in this paper, is available at https://github.com/Majid1292/SVFS.
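The clustering step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes a numerical tolerance `tol` for deciding which entries of the signature matrix count as nonzero, and it takes the clusters to be the connected components of the nonzero pattern of S_A.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def signature_clusters(A, tol=1e-8):
    """Cluster the columns of A via the signature matrix S_A = I - pinv(A) @ A.

    Columns i and j are placed in the same cluster when S_A[i, j] is
    numerically nonzero (|S_A[i, j]| > tol, an assumed threshold); the
    clusters are the connected components of that adjacency pattern.
    Returns (number of clusters, per-column cluster labels).
    """
    n_cols = A.shape[1]
    S = np.eye(n_cols) - np.linalg.pinv(A) @ A  # signature matrix S_A
    adjacency = np.abs(S) > tol                 # nonzero pattern of S_A
    n_clusters, labels = connected_components(adjacency, directed=False)
    return n_clusters, labels
```

For example, if one column of A is a scalar multiple of another while a third column is independent of both, the two correlated columns share a cluster and the independent column lands in its own cluster, since the corresponding block of S_A is zero.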

Cite

APA: Afshar, M., & Usefi, H. (2021). Dimensionality reduction using singular vectors. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-83150-y
