Efficient multimodality volume fusion using graphics hardware


Abstract

We propose a novel technique for multimodality volume fusion on graphics hardware that solves the depth-cueing problem at reduced rendering cost. Our method consists of three steps. First, it takes two volumes and generates sample planes orthogonal to the viewing direction, following the 3D texture-mapping approach to volume rendering. Second, it composites the textured slices from the two modalities using one of several compositing operators. Third, alpha blending over all slices is performed. For efficient volume fusion, the per-slice compositing is implemented as a pixel program written in HLSL (High Level Shading Language). Experimental results show that our hardware-accelerated method correctly distinguishes depth in the overlapping region of the two volumes and renders much faster than conventional software-based methods. © Springer-Verlag Berlin Heidelberg 2005.
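The per-slice compositing described in the second step could be sketched as an HLSL pixel shader along the following lines. This is an illustrative reconstruction, not the authors' code: the sampler names, the opacity-weighted averaging operator, and the epsilon constant are all assumptions standing in for one of the paper's unspecified compositing operations.

```hlsl
// Hypothetical per-slice fusion shader (illustrative, not the authors' implementation).
sampler3D VolumeA : register(s0);   // first modality, e.g. CT
sampler3D VolumeB : register(s1);   // second modality, e.g. PET

float4 FusePS(float3 texCoord : TEXCOORD0) : COLOR
{
    // Sample both modalities at the same location on this view-aligned slice.
    float4 a = tex3D(VolumeA, texCoord);
    float4 b = tex3D(VolumeB, texCoord);

    // One possible compositing operator: opacity-weighted average of the colors,
    // with opacities combined as 1 - (1-a)(1-b).
    float4 fused;
    fused.rgb = (a.a * a.rgb + b.a * b.rgb) / max(a.a + b.a, 1e-4);
    fused.a   = saturate(a.a + b.a - a.a * b.a);
    return fused;
}
```

Each slice would be rendered with such a shader, and the fused fragments then accumulated back-to-front by the fixed-function alpha-blending stage, corresponding to the third step.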

Citation (APA style)

Hong, H., Bae, J., Kye, H., & Shin, Y. G. (2005). Efficient multimodality volume fusion using graphics hardware. In Lecture Notes in Computer Science (Vol. 3516, pp. 842–845). Springer Verlag. https://doi.org/10.1007/11428862_120
