Entropy-Based Approach to Efficient Cleaning of Big Data in Hierarchical Databases


Abstract

When databases are at risk of containing erroneous, redundant, or obsolete data, a cleaning procedure is used to detect, correct, or remove such undesirable records. We propose a methodology for improving data cleaning efficiency in a large hierarchical database. The methodology relies on Shannon’s information entropy for measuring the amount of information stored in databases. This approach, which builds on previously gathered statistical data regarding the prevalence of errors in the database, enables the decision maker to determine which components of the database are likely to have undergone more information loss, and thus to prioritize those components for cleaning. In particular, in cases where the cleaning process is iterative (from the root node down), the entropic approach produces a scientifically motivated stopping rule that determines the optimal (i.e., minimally required) number of tiers in the hierarchical database that need to be examined. This stopping rule defines a more streamlined representation of the database, in which less informative tiers are eliminated.
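The abstract's core idea — score each tier of the hierarchy by its Shannon entropy and stop the top-down cleaning pass once a tier carries too little information — can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual algorithm: the tier representation, the `threshold` parameter, and the function names are all assumptions introduced here for illustration.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def tiers_to_clean(tiers, threshold=0.1):
    """Scan tiers from the root down and collect (name, entropy) pairs,
    stopping at the first tier whose entropy falls below `threshold` bits
    (a stand-in for the paper's stopping rule): deeper, less informative
    tiers are excluded from the cleaning effort."""
    selected = []
    for name, values in tiers:  # assumed ordered root -> leaves
        h = shannon_entropy(values)
        if h < threshold:
            break
        selected.append((name, h))
    return selected

# Toy hierarchical database: each tier is (tier name, field values).
tiers = [
    ("region",   ["N", "S", "N", "E", "W", "S"]),
    ("district", ["a", "b", "a", "c", "a", "b"]),
    ("copy_id",  ["x", "x", "x", "x", "x", "x"]),  # constant field: 0 bits
]
print(tiers_to_clean(tiers))  # only the two informative tiers are kept
```

Under this toy data, the constant `copy_id` tier contributes zero entropy, so the scan stops before it — mirroring the abstract's point that the stopping rule yields a streamlined representation with uninformative tiers eliminated.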

Citation (APA)

Levner, E., Kriheli, B., Benis, A., Ptuskin, A., Elalouf, A., Hovav, S., & Ashkenazi, S. (2020). Entropy-Based Approach to Efficient Cleaning of Big Data in Hierarchical Databases. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12402 LNCS, pp. 3–12). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59612-5_1
