A distributed, proactive intelligent scheme for securing quality in large scale data processing


Abstract

The involvement of numerous devices and data sources in the current form of the Web leads to the collection of vast volumes of data. The advent of the Internet of Things (IoT) enables devices to act autonomously, transforming them into producers of information and knowledge. The vast Web/IoT infrastructure thus becomes the basis for producing data in either structured or unstructured form. In this paper, we focus on a distributed scheme for securing the quality of data collected and stored in multiple partitions. High quality is achieved by adopting a model that identifies any change in the accuracy of the collected data. The proposed scheme determines whether incoming data negatively affect the accuracy of the already present datasets and, when this is the case, excludes them from further processing. Our approach builds on a scheme that also identifies the appropriate partition to which the incoming data should be allocated. We describe the proposed scheme and present simulation and comparison results that give insights into the pros and cons of our solution.
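The accept-or-exclude decision and the partition allocation described above can be illustrated with a minimal sketch. This is not the paper's actual model: all names (`Partition`, `accuracy_impact`, `accept_and_allocate`) and the use of centroid distance as a stand-in for the accuracy-change metric are assumptions made purely for illustration.

```python
# Hypothetical sketch of the accept/allocate logic sketched in the abstract.
# The "accuracy impact" proxy (centroid drift) is an assumption, not the
# paper's accuracy model.
from statistics import mean


class Partition:
    """A data partition holding a list of numeric observations."""

    def __init__(self, values):
        self.values = list(values)

    def centroid(self):
        return mean(self.values)


def accuracy_impact(partition, batch):
    # Toy proxy for "change in accuracy": how far the incoming batch's
    # mean drifts from the partition's current centroid.
    return abs(mean(batch) - partition.centroid())


def accept_and_allocate(partitions, batch, threshold):
    """Exclude the batch if it would degrade accuracy everywhere;
    otherwise allocate it to the best-fitting partition."""
    best = min(partitions, key=lambda p: accuracy_impact(p, batch))
    if accuracy_impact(best, batch) > threshold:
        return None  # batch would hurt accuracy: exclude from processing
    best.values.extend(batch)
    return best
```

For example, a batch close to one partition's data is absorbed by that partition, while an outlying batch is rejected outright (returns `None`), mirroring the proactive filtering the abstract describes.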

Citation (APA)

Kolomvatsos, K. (2019). A distributed, proactive intelligent scheme for securing quality in large scale data processing. Computing, 101(11), 1687–1710. https://doi.org/10.1007/s00607-018-0683-9
