The need for non-rigid multi-modal registration is becoming increasingly common in many clinical applications. To date, however, existing techniques remain largely academic research efforts, with very few methods validated for clinical product use. Crum et al. [1] have suggested that the context-free nature of these methods is one of their main limitations, and that moving toward context-specific methods, by incorporating prior knowledge of the underlying registration problem, is necessary to achieve registration results accurate and robust enough for clinical applications. In this paper, we propose a novel non-rigid multi-modal registration method using a variational formulation that incorporates a learned prior joint intensity distribution. Registration is achieved by simultaneously minimizing the Kullback-Leibler divergence between an observed and a learned joint intensity distribution and maximizing the mutual information between the reference and alignment images. We have applied the proposed method to both synthetic and real images with encouraging results. © Springer-Verlag Berlin Heidelberg 2005.
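The two similarity terms named in the abstract can be illustrated with a small sketch. This is not the authors' implementation; it only shows, under simplified assumptions (2D images with intensities normalized to [0, 1], a fixed-bin joint histogram as the observed distribution), how a KL-divergence term against a learned joint distribution and a mutual-information term could each be evaluated. The function names and the `bins`/`eps` parameters are illustrative choices, not from the paper.

```python
import numpy as np

def joint_histogram(ref, mov, bins=32):
    # Observed joint intensity distribution of two aligned images,
    # estimated as a normalized 2D histogram (intensities assumed in [0, 1]).
    h, _, _ = np.histogram2d(ref.ravel(), mov.ravel(), bins=bins,
                             range=[[0.0, 1.0], [0.0, 1.0]])
    return h / h.sum()

def kl_divergence(p_obs, p_learned, eps=1e-12):
    # KL(p_obs || p_learned) over the joint intensity bins; eps avoids log(0).
    # Minimizing this pulls the observed distribution toward the learned one.
    return float(np.sum(p_obs * np.log((p_obs + eps) / (p_learned + eps))))

def mutual_information(p, eps=1e-12):
    # MI of a joint distribution p: sum_ij p_ij log(p_ij / (p_i. p_.j)).
    # Maximizing this favors a statistically dependent intensity mapping.
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    return float(np.sum(p * np.log((p + eps) / (px * py + eps))))
```

In a registration loop, a combined objective of the form KL(p_obs, p_learned) − λ·MI(p_obs) would be re-evaluated as the deformation warps the alignment image; the gradient machinery of the variational formulation is omitted here.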
Guetter, C., Xu, C., Sauer, F., & Hornegger, J. (2005). Learning based non-rigid multi-modal image registration using Kullback-Leibler divergence. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3750 LNCS, pp. 255–262). https://doi.org/10.1007/11566489_32