Autonomic Distribution and Adaptation

Abstract

This chapter describes an approach for increasing the scalability of applications by exploiting their inherent concurrency in order to parallelize and distribute the code. It focuses on concurrency in the sense of reduced dependencies between logical parts of an application. A promising direction is the explicit exploitation of such concurrency, rather than 'automagic' parallelization, which can only lead to highly suboptimal solutions. Concurrency can be analyzed at multiple code levels, providing information of different granularity; however, all approaches so far still rely on the programmer supplying the corresponding dependency information. The classical approach to parallelization consists in enabling the development of 'threads'. With the introduction of the Message Passing Interface (MPI), an attempt was made to standardize the communication between concurrent units of execution, in order to allow message-based data synchronization across infrastructures. MPI provides the essential capabilities needed to deal with large-scale and heterogeneous infrastructures.
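The send/receive pattern that MPI standardizes can be illustrated with a minimal sketch. The following uses Python's standard-library threads and queues rather than MPI itself (which requires a separate runtime), so the worker function and queue names are illustrative, not part of any MPI API; the point is the message-based data synchronization the abstract refers to:

```python
import threading
import queue

def worker(inbox, outbox):
    # Receive a message, compute on it, and send the result back --
    # the same explicit send/receive pattern that MPI standardizes
    # across processes and machines.
    data = inbox.get()        # blocks until a message arrives
    outbox.put(sum(data))     # reply with the computed result

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

inbox.put([1, 2, 3])          # analogous in spirit to MPI_Send
result = outbox.get()         # analogous in spirit to MPI_Recv
t.join()
print(result)                 # prints 6
```

Because no state is shared between the two sides except the messages themselves, the same logical structure carries over unchanged when the worker runs on another machine and the queues are replaced by MPI communication.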

Citation (APA)

Schubert, L., Wesner, S., Bonilla, D. R., & Cucinotta, T. (2017). Autonomic Distribution and Adaptation. In Programming Multicore and Many-Core Computing Systems (pp. 227–240). Wiley. https://doi.org/10.1002/9781119332015.ch11
