Automatic evaluation of metadata quality in digital repositories
Abstract

Owing to recent developments in automatic metadata generation and interoperability between digital repositories, the production of metadata now vastly surpasses manual quality control capabilities. Abandoning quality control altogether is problematic, because low-quality metadata compromise the effectiveness of the services that repositories provide to their users. To address this problem, we present a set of scalable quality metrics for metadata based on the Bruce & Hillman framework for metadata quality control. We perform three experiments to evaluate our metrics: (1) the degree of correlation between the metrics and manual quality reviews, (2) their discriminatory power between metadata sets, and (3) their usefulness as low-quality filters. Through statistical analysis, we found that several metrics, especially Text Information Content, correlate well with human evaluation, and that the average of all the metrics is roughly as effective as human reviewers at flagging low-quality instances. The implications of this finding are discussed. Finally, we propose possible applications of the metrics to improve tools for the administration of digital repositories.