As data volumes grow exponentially across an ever-wider range of scientific fields, the challenges of handling Big Data in the Exabyte era are becoming increasingly important. High-energy physics, statistics, climate modeling, cosmology, genetics, and bioinformatics are just a few of the fields where it is crucial to manipulate Big Data efficiently, as such data is typically shared at large scale. Rapidly storing this data, protecting it from loss, and analyzing it to understand the results are significant challenges, made harder by decades of improvement in computational capability that storage performance has not matched. For many applications, overall performance and scalability are now clearly driven by the data-handling subsystem. As Exascale systems are anticipated around 2020, there is a growing consensus in the scientific community that revolutionary new approaches to data management in computational science are needed. These trends lead us to rethink the traditional file-based data-management abstraction for large-scale applications. Moreover, for obvious cost-related reasons, new architectures are clearly needed, as are alternative infrastructures to supercomputers, such as hybrid or HPC clouds.
Costan, A., & Dobre, C. (2013). 1st Workshop on Big Data Management in Clouds - BDMC2012. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7640 LNCS, pp. 1–2). https://doi.org/10.1007/978-3-642-36949-0_1