We propose a novel multimodality volume fusion technique that uses graphics hardware to solve the depth cueing problem with reduced rendering time. Our method consists of three steps. First, it takes two volumes and generates sample planes orthogonal to the viewing direction, following 3D texture-mapping volume rendering. Second, it composites the textured slices from the two modalities using several compositing operations. Third, alpha blending is performed over all the slices. For efficient volume fusion, a pixel program is written in HLSL (High-Level Shading Language). Experimental results show that our hardware-accelerated method correctly distinguishes depth in the overlapping region of the volumes and renders them much faster than conventional software-based methods. © Springer-Verlag Berlin Heidelberg 2005.
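The three steps of the abstract can be sketched in a minimal CPU-side form. This is a hedged illustration, not the paper's HLSL pixel program: the weighted-average compositing operator, the RGBA slice layout, and the toy data below are assumptions made for demonstration; the paper itself supports several compositing operations on the GPU.

```python
# Hypothetical NumPy sketch of the three-step fusion pipeline:
# (1) view-aligned slice stacks assumed already extracted,
# (2) per-slice compositing of the two modalities,
# (3) back-to-front alpha blending over all slices.
import numpy as np

def fuse_slices(slice_a, slice_b, weight=0.5):
    """Step 2: composite two co-registered RGBA slices.
    A simple weighted average; one of several possible operators."""
    return weight * slice_a + (1.0 - weight) * slice_b

def blend_back_to_front(slices):
    """Step 3: standard back-to-front alpha blending.
    Each slice is an (H, W, 4) RGBA array with values in [0, 1]."""
    h, w, _ = slices[0].shape
    out = np.zeros((h, w, 3))
    for s in slices:  # iterate from the back slice toward the front
        rgb, alpha = s[..., :3], s[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Toy data: two 8-slice stacks of 4x4 RGBA samples (step 1 assumed done).
rng = np.random.default_rng(0)
vol_a = [rng.random((4, 4, 4)) for _ in range(8)]
vol_b = [rng.random((4, 4, 4)) for _ in range(8)]

fused = [fuse_slices(a, b) for a, b in zip(vol_a, vol_b)]
image = blend_back_to_front(fused)  # final 4x4 RGB image
```

In the actual method, `fuse_slices` and the blend step run per-fragment in the HLSL pixel program, which is what yields the reported speedup over software compositing.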
CITATION STYLE
Hong, H., Bae, J., Kye, H., & Shin, Y. G. (2005). Efficient multimodality volume fusion using graphics hardware. In Lecture Notes in Computer Science (Vol. 3516, pp. 842–845). Springer Verlag. https://doi.org/10.1007/11428862_120