Three Big Data Tools for a Data Scientist’s Toolbox

Abstract

Sometimes data is generated unboundedly and at such a fast pace that it is no longer possible to store the complete data in a database. The development of techniques for handling and processing such streams of data is very challenging, as the streaming context imposes severe constraints on the computation: we are often unable to store the whole data stream, and making multiple passes over the data is no longer possible. As the stream is never finished, we need to be able to continuously provide, upon request, up-to-date answers to analysis queries. Even problems that are trivial in an off-line context, such as "How many different items are there in my database?", become very hard in a streaming context. Nevertheless, in the past decades several clever algorithms have been developed to deal with streaming data. This paper covers several of these indispensable tools that should be present in every big data scientist's toolbox, including approximate frequency counting of frequent items, cardinality estimation of very large sets, and fast nearest neighbor search in huge data collections.
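To illustrate the first of these tools, below is a minimal sketch of the Misra-Gries algorithm, a standard technique for approximate frequency counting over a stream in bounded memory. The function name and parameter are illustrative; the paper may present this or a different variant in more detail.

```python
def misra_gries(stream, k):
    """Summarize a stream using at most k-1 counters.

    Guarantee: any item occurring more than n/k times in a stream of
    n items is present in the summary, and each reported count
    underestimates the true count by at most n/k.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            # Room for a new counter: start tracking this item.
            counters[item] = 1
        else:
            # No room: decrement all counters, dropping any that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Because only k-1 counters are ever kept, memory stays constant no matter how long the stream runs, which is exactly the constraint the streaming setting imposes.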

Calders, T. (2018). Three Big Data Tools for a Data Scientist’s Toolbox. In Lecture Notes in Business Information Processing (Vol. 324, pp. 112–133). Springer Verlag. https://doi.org/10.1007/978-3-319-96655-7_5
