Abstract
This paper aims to establish a performance baseline for an HPC installation of OpenStack. We created InfiniCloud, a distributed High Performance Cloud hosted on remote nodes of InfiniCortex. InfiniCloud compute nodes use high-performance Intel® Haswell and Sandy Bridge CPUs, SSD storage and 64–256 GB of RAM. All computational resources are connected by high-performance InfiniBand (IB) interconnects and are capable of trans-continental IB communication using Obsidian Longbow range extenders. We benchmark our test-beds using micro-benchmarks for TCP bandwidth, IB bandwidth and latency, file creation performance, MPI collectives and Linpack. This paper compares different CPU generations across virtual and bare-metal environments. The results show modest improvements in TCP and IB bandwidth and latency on Haswell, with performance largely dependent on the IB hardware. Virtualisation overheads were minimal, and near-native performance is possible for sufficiently large messages. From the Linpack testing, users can expect more than twice the application performance on Haswell-provisioned VMs relative to Sandy Bridge. On Haswell hardware, the difference between native and virtual performance is still significant for MPI collective operations. Finally, our parallel filesystem testing revealed virtual performance coming close to native only for non-sync/fsync file operations.
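To illustrate the style of micro-benchmark referred to above, the following is a minimal sketch of a TCP bandwidth test, here run over loopback so it is self-contained. The message size, transfer volume and use of Python sockets are illustrative assumptions; the paper's actual measurements were taken between IB-connected nodes with dedicated benchmarking tools.

```python
# Minimal sketch of a TCP bandwidth micro-benchmark (illustrative only).
# Sends a fixed payload over a loopback connection and reports throughput.
import socket
import threading
import time

MSG_SIZE = 1 << 20   # 1 MiB per send (assumed message size)
N_MSGS = 64          # total payload: 64 MiB (assumed transfer volume)

def _drain(server, result):
    """Server side: accept one connection and count received bytes."""
    conn, _ = server.accept()
    received = 0
    while received < MSG_SIZE * N_MSGS:
        data = conn.recv(1 << 16)
        if not data:
            break
        received += len(data)
    result["bytes"] = received
    conn.close()

def measure_loopback_bandwidth():
    """Return measured loopback TCP throughput in Gbit/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # ephemeral port
    server.listen(1)
    result = {}
    t = threading.Thread(target=_drain, args=(server, result))
    t.start()

    client = socket.socket()
    client.connect(server.getsockname())
    payload = b"x" * MSG_SIZE
    start = time.perf_counter()
    for _ in range(N_MSGS):
        client.sendall(payload)
    client.close()
    t.join()
    elapsed = time.perf_counter() - start
    server.close()
    return result["bytes"] * 8 / elapsed / 1e9

if __name__ == "__main__":
    print(f"loopback TCP bandwidth: {measure_loopback_bandwidth():.2f} Gbit/s")
```

Real bandwidth tests additionally vary the message size, pin processes to cores and repeat runs to reduce noise; loopback numbers reflect only memory-copy speed, not network hardware.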
Low, J., Chrzeszczyk, J., Howard, A., & Chrzeszczyk, A. (2015). Performance assessment of InfiniBand HPC cloud instances on Intel™ Haswell and Intel™ Sandy Bridge architectures. Supercomputing Frontiers and Innovations, 2(3), 28–40. https://doi.org/10.14529/jsfi150303