An Introduction to Duplicate Detection

  • Naumann, F.
  • Herschel, M.

Abstract

With the ever-increasing volume of data, data quality problems abound. Multiple, yet different representations of the same real-world objects in data, called duplicates, are one of the most intriguing data quality problems. The effects of such duplicates are detrimental; for instance, bank customers can obtain duplicate identities, inventory levels are monitored incorrectly, and catalogs are mailed multiple times to the same household. Automatically detecting duplicates is difficult: First, duplicate representations are usually not identical but differ slightly in their values. Second, in principle all pairs of records should be compared, which is infeasible for large volumes of data. This lecture closely examines the two main components used to overcome these difficulties: (i) Similarity measures are used to automatically identify duplicates when comparing two records. Well-chosen similarity measures improve the effectiveness of duplicate detection. (ii) Algorithms are developed to perform on very large volumes of data.
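To make the first component concrete: the lecture surveys many similarity measures in depth, but as a minimal, hypothetical sketch of the general idea (a token-based Jaccard similarity, not the authors' specific choice), two slightly differing representations of the same record can be scored like this:

```python
def jaccard_similarity(record_a: str, record_b: str) -> float:
    """Token-based Jaccard similarity between two record strings.

    Returns a value in [0, 1]; 1.0 means the token sets are identical.
    """
    tokens_a = set(record_a.lower().split())
    tokens_b = set(record_b.lower().split())
    if not tokens_a and not tokens_b:
        return 1.0  # two empty records are trivially identical
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

# Slightly different representations of the same person still score high:
print(jaccard_similarity("John A. Smith, Main St 5",
                         "John Smith, Main St 5"))  # ~0.83
```

A real system would combine several such measures (edit distance on names, numeric distance on amounts, etc.) and declare a duplicate when the combined score exceeds a threshold.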
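For the second component, a classic way to avoid the infeasible all-pairs comparison, known in the duplicate-detection literature as the Sorted Neighborhood method, is to sort records by a key and compare only records within a sliding window. A minimal sketch, with the sorting key and window size chosen purely for illustration:

```python
def sorted_neighborhood_pairs(records, key, window_size=4):
    """Yield candidate duplicate pairs via the Sorted Neighborhood method.

    Instead of comparing all n*(n-1)/2 pairs, sort records by a key and
    compare each record only with its neighbors inside a sliding window,
    reducing the number of comparisons to roughly n * window_size.
    """
    ordered = sorted(records, key=key)
    for i in range(len(ordered)):
        for j in range(i + 1, min(i + window_size, len(ordered))):
            yield ordered[i], ordered[j]

# Hypothetical usage: a key of (zip, name prefix) sorts likely duplicates
# next to each other, so the window catches them.
records = [
    {"name": "John Smith", "zip": "12345"},
    {"name": "Jon Smith",  "zip": "12345"},
    {"name": "Ann Jones",  "zip": "99999"},
]
for a, b in sorted_neighborhood_pairs(records,
                                      key=lambda r: (r["zip"], r["name"][:3])):
    print(a["name"], "<->", b["name"])
```

Each candidate pair produced this way would then be scored with a similarity measure such as the one sketched above.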

Cite (APA)

Naumann, F., & Herschel, M. (2010). An Introduction to Duplicate Detection. Synthesis Lectures on Data Management, 2(1), 1–87. https://doi.org/10.2200/s00262ed1v01y201003dtm003
