High frequent value reduct in very large databases


Abstract

One of the main contributions of rough set theory to data mining is data reduction. There are three kinds of reduction: attribute (column) reduction, row reduction, and value reduction. Row reduction merges duplicate rows. Attribute reduction finds the important attributes. Value reduction reduces the decision rules to a logically equivalent minimal length. Most recent attention has focused on finding attribute reducts. Traditionally, the value reduct has been searched for through the attribute reduct. This paper observes that this method may miss the best value reducts. It also revisits an old, rudimentary idea [11], namely, a rough set theory on high-frequency data: the notion of a high-frequency value reduct is extracted in a bottom-up fashion without finding attribute reducts. Our method can discover concise and important decision rules in large databases, and is described and illustrated by an example. © Springer-Verlag Berlin Heidelberg 2007.
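The three reductions named in the abstract can be illustrated on a toy decision table. The sketch below is not the authors' bottom-up high-frequency algorithm; it is a minimal, assumed illustration (invented attribute names and data) of row reduction followed by a naive value reduction, where a condition value is dropped from a rule whenever the remaining values still determine the decision unambiguously.

```python
# Toy decision table (invented data, not from the paper).
rows = [
    {"headache": "yes", "temp": "high",   "flu": "yes"},
    {"headache": "yes", "temp": "high",   "flu": "yes"},  # duplicate row
    {"headache": "no",  "temp": "high",   "flu": "yes"},
    {"headache": "no",  "temp": "normal", "flu": "no"},
]
conds = ["headache", "temp"]
decision = "flu"

# 1. Row reduction: merge duplicate rows.
seen, table = set(), []
for r in rows:
    key = tuple(r[a] for a in conds + [decision])
    if key not in seen:
        seen.add(key)
        table.append(r)

# 2. Value reduction: for each rule, greedily drop condition values whose
#    removal keeps the rule consistent, i.e. no row with the same remaining
#    condition values has a different decision.
def consistent(kept, r):
    """True if every row matching r on the `kept` attributes shares r's decision."""
    return all(t[decision] == r[decision]
               for t in table
               if all(t[a] == r[a] for a in kept))

reducts = []
for r in table:
    kept = list(conds)
    for a in conds:
        trial = [b for b in kept if b != a]
        if consistent(trial, r):
            kept = trial
    reducts.append({**{a: r[a] for a in kept}, decision: r[decision]})

for rule in reducts:
    print(rule)
```

On this table the "headache" value turns out to be redundant in every rule, so the value reducts mention only "temp": two rows collapse to the rule (temp = high → flu = yes) and one to (temp = normal → flu = no). A frequency count over such shortened rules is the kind of signal the paper's high-frequency value reduct exploits.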


APA

Lin, T. Y., & Han, J. (2007). High frequent value reduct in very large databases. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4482 LNAI, pp. 346–354). Springer Verlag. https://doi.org/10.1007/978-3-540-72530-5_41
