Using PVM and MPI for co-processed, distributed and parallel scientific visualization


Abstract

This paper discusses the combined use of MPI and PVM in the parallel visualization toolkit pV3. The implementation provides efficient co-processed, distributed parallel visualization of large-scale 3D time-dependent simulations. The primary goals of pV3 include the ability to handle large-scale transient 3D simulations, to take full advantage of available hardware encompassing both parallel compute engines and graphics workstations, to visualize the data as the computation progresses, to interact with and interrogate the data as it is computed, and to steer the simulation by adjusting parameters and obtaining immediate feedback from the changes. Based on a client/server model, the original implementation of pV3 was founded on PVM. The client portion executes closely coupled with the application code and ‘extracts’ lower-dimensional data, in place, from the distributed data volume. The distilled data is transferred to the interactive server portion of pV3 (executing on a graphics workstation), where rendering and display are performed. This unique architecture frees the compute engine(s) from dealing with the graphics, and reduces the large data set to a size that can be rapidly transported to the server over standard network connections.

There are some drawbacks to using PVM on parallel machines which stem from its TCP/IP communications. These include, in some instances, not taking full advantage of the machine’s hardware interconnect, since communication must occur with the graphics workstation; this can result in smaller messages and increased latency. In addition, pV3’s dynamic attachment required that the graphics workstation be accessible from each node on the machine with PVM’s ‘group’ functionality intact, which poses a problem on some parallel systems. The pV3 ‘concentrator’ module was designed to overcome such problems on distributed-memory platforms. All pV3 messages on the parallel machine use the high-speed interconnect via the MPI protocol, while communications off the parallel machine to the graphics workstation are handled via PVM. Using the communicator facility of MPI, the concentrator is able to keep the solver messages separated from the pV3 client messages. An added advantage of the concentrator module, for large numbers of processors, is that it simplifies the pV3 start-up procedure.
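The communicator-based separation described above can be sketched in a few lines of MPI. This is a minimal, hypothetical illustration, not the actual pV3 source: the names `viz_comm` and the rank-0 concentrator role are assumptions for the sketch. Duplicating `MPI_COMM_WORLD` gives the visualization traffic its own communication context, so a solver message and a client message with identical rank and tag can never be confused.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Duplicate the world communicator: client/concentrator traffic
     * travels on the duplicate, solver traffic on MPI_COMM_WORLD, so
     * identical (rank, tag) pairs on the two communicators cannot
     * collide. This mirrors the separation the abstract describes. */
    MPI_Comm viz_comm;              /* hypothetical name */
    MPI_Comm_dup(MPI_COMM_WORLD, &viz_comm);

    int rank, size;
    MPI_Comm_rank(viz_comm, &rank);
    MPI_Comm_size(viz_comm, &size);

    if (rank == 0) {
        /* Rank 0 plays the 'concentrator': it gathers the clients'
         * extracts over the high-speed interconnect and would forward
         * them off-machine to the graphics workstation via PVM
         * (pvm_send et al., not shown here). */
        double extract;
        int src;
        for (src = 1; src < size; src++) {
            MPI_Recv(&extract, 1, MPI_DOUBLE, src, /*tag=*/0,
                     viz_comm, MPI_STATUS_IGNORE);
        }
    } else {
        /* Each client sends its distilled, lower-dimensional extract
         * (a single double stands in for real extract data). */
        double extract = 0.0;
        MPI_Send(&extract, 1, MPI_DOUBLE, 0, /*tag=*/0, viz_comm);
    }

    MPI_Comm_free(&viz_comm);
    MPI_Finalize();
    return 0;
}
```

Run with, e.g., `mpirun -np 4 ./concentrator_sketch`. The design point is that `MPI_Comm_dup` creates a new context at essentially no cost, which is why solver and visualization libraries can share the same processes without coordinating their tag spaces.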

Citation (APA):

Haimes, R., & Jordan, K. E. (1998). Using PVM and MPI for co-processed, distributed and parallel scientific visualization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1388, pp. 1098–1105). Springer Verlag. https://doi.org/10.1007/3-540-64359-1_775
