Edge-Heavy Data and architecture in the big data era

  • MARUYAMA, H.
Citations: N/A · Mendeley readers: 15

Abstract

We argue that in the coming "big data" era, most data will be stored and processed at the edge of the network. We call this phenomenon "Edge-Heavy Data", and we propose an architecture named "Krill" for it. One important characteristic of big data is its low value density: if the data are never used, it is wasteful to send them to data centers (at a network cost) and store them on expensive enterprise-grade servers. Instead, low value-density data will be stored near where they are generated, that is, at the edge of the network. This paper discusses requirements and a possible architecture for efficiently dealing with Edge-Heavy Data. We first consider a few scenarios where Edge-Heavy Data is desirable, or even inevitable, and identify its requirements. We then propose an architecture based on the concept of the Data Value Field (DVF), followed by an introduction to Jubatus, an open-source framework for online machine learning, as a first step toward the proposed architecture.
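To make the cost argument concrete, the following is a minimal Python sketch of the edge-heavy idea: each edge node trains an online model on its own local stream, raw low value-density records never leave the edge, and only a handful of model weights cross the network when models are averaged. The averaging step is loosely inspired by Jubatus's publicly described model "mix" operation, but EdgeNode, observe, and mix here are hypothetical names for illustration, not Jubatus's actual API.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EdgeNode:
    """Hypothetical edge node: raw data stays local; only weights travel."""
    weights: List[float] = field(default_factory=lambda: [0.0, 0.0])
    local_store: List[Tuple[List[float], float]] = field(default_factory=list)

    def observe(self, x: List[float], y: float, lr: float = 0.1) -> None:
        # Online least-mean-squares update on the locally observed stream.
        pred = sum(w * xi for w, xi in zip(self.weights, x))
        err = y - pred
        self.weights = [w + lr * err * xi for w, xi in zip(self.weights, x)]
        # Raw record is kept on cheap edge storage, never sent upstream.
        self.local_store.append((x, y))

def mix(nodes: List[EdgeNode]) -> List[float]:
    # Average the per-node models; only len(weights) numbers per node
    # cross the network, regardless of how much raw data each node holds.
    n = len(nodes)
    dim = len(nodes[0].weights)
    return [sum(node.weights[i] for node in nodes) / n for i in range(dim)]

if __name__ == "__main__":
    nodes = [EdgeNode(), EdgeNode()]
    # Each node sees a different stream drawn from the same target
    # function y = 2*x0 + 1*x1.
    for step in range(200):
        nodes[0].observe([1.0, float(step % 3)], 2.0 + (step % 3))
        nodes[1].observe([float(step % 2), 1.0], 2.0 * (step % 2) + 1.0)
    print("mixed model:", mix(nodes))  # approaches [2.0, 1.0]

The design point of the sketch is the traffic asymmetry: the edge nodes accumulate arbitrarily many raw records locally, while the upstream exchange stays constant-sized, which is precisely why low value-density data favors edge-side storage and processing.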

Citation (APA)

Maruyama, H. (2013). Edge-Heavy Data and architecture in the big data era. Journal of Information Processing and Management, 56(5), 269–275. https://doi.org/10.1241/johokanri.56.269
