An Iterative Methodology for Defining Big Data Analytics Architectures

Abstract

Thanks to the advances achieved over the last decade, the lack of adequate technologies to deal with Big Data characteristics such as data volume is no longer an issue. Instead, recent studies highlight that one of the main Big Data challenges is the lack of expertise needed to select adequate technologies and build the correct Big Data architecture for the problem at hand. To tackle this problem, we present a methodology for the generation of Big Data pipelines based on several requirements derived from Big Data features that are critical for the selection of the most appropriate tools and techniques. Our approach thereby reduces the know-how required to select and build Big Data architectures by providing a step-by-step methodology that guides Big Data architects in creating the Big Data pipeline for the case at hand. The methodology has been tested in two use cases.

Citation (APA)

Tardio, R., Mate, A., & Trujillo, J. (2020). An Iterative Methodology for Defining Big Data Analytics Architectures. IEEE Access, 8, 210597–210616. https://doi.org/10.1109/ACCESS.2020.3039455
