Evaluating different in-memory cached architectures in regard to time efficiency for big data analysis

Abstract

The era of big data has arrived, and a plethora of methods and tools are being used to manage and analyse the emerging huge volume, velocity, variety, veracity and volatility of information system data sources. In this paper, a particular aspect of a business domain is explored in which the primary data being stored/accessed is not the data value itself (which is highly volatile) but the frequency of its change. Each data frequency has a chain of related data pertaining to it, and these links must be incorporated into the architecture. The volatility of the data necessitates in-memory architectures to reduce access/update times. Given these business requirements, different in-memory architectures are examined in an experiment with sample data, in order to evaluate their worst-case response times on a given test set of data analysis/manipulation operations. The results of this experiment are presented and discussed in terms of the most suitable architecture for this type of data, which proves to be in-memory objects linked via hash table references.
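The architecture favoured in the abstract, in-memory objects linked via hash table references, with change frequency rather than the data value as the primary stored quantity, can be illustrated with a minimal sketch. The Python code below is not from the paper; class and method names such as FrequencyRecord and InMemoryFrequencyStore are hypothetical, and the structure only approximates the idea: each record counts changes to a data item, every record is reachable in O(1) through a hash table, and the chain of related data is held as key links in a second hash table.

    # Hypothetical sketch of an in-memory, hash-table-linked frequency store;
    # names and structure are illustrative, not taken from the paper.
    import time
    from collections import defaultdict


    class FrequencyRecord:
        """Tracks how often a data item changes, not the value itself."""

        def __init__(self, key):
            self.key = key
            self.change_count = 0
            self.last_changed = None

        def record_change(self):
            self.change_count += 1
            self.last_changed = time.time()


    class InMemoryFrequencyStore:
        """In-memory objects linked via hash tables (Python dicts):
        each frequency record is an O(1) lookup by key, and related
        records are reachable through key links in a second table."""

        def __init__(self):
            self._records = {}               # hash table: key -> FrequencyRecord
            self._links = defaultdict(set)   # hash table: key -> related keys

        def record_change(self, key):
            # create the record on first change, then bump its frequency
            rec = self._records.setdefault(key, FrequencyRecord(key))
            rec.record_change()
            return rec

        def link(self, key, related_key):
            # store the chain of related data as hash-table links
            self._links[key].add(related_key)

        def related_frequencies(self, key):
            # follow the links; each hop is a constant-time hash lookup
            return [self._records[k]
                    for k in self._links.get(key, ())
                    if k in self._records]


    if __name__ == "__main__":
        store = InMemoryFrequencyStore()
        store.record_change("item-A")
        store.record_change("item-A")
        store.link("item-A", "item-B")
        store.record_change("item-B")
        print(store.record_change("item-A").change_count)          # 3
        print([r.key for r in store.related_frequencies("item-A")])  # ['item-B']

This is only one way to realise the described structure; the paper's experiment compares such hash-table-linked objects against other in-memory arrangements on worst-case response time, which the sketch does not attempt to reproduce.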

Citation (APA)
Millham, R. (2015). Evaluating different in-memory cached architectures in regard to time efficiency for big data analysis. In Advances in Intelligent Systems and Computing (Vol. 355, pp. 63–74). Springer Verlag. https://doi.org/10.1007/978-3-319-17398-6_6
