We propose an image reconstruction framework that combines a large number of overlapping image patches into a fused reconstruction of the object of interest, and that is robust to inconsistencies between patches (e.g. motion artefacts) without explicitly modelling them. This is achieved through two mechanisms. First, manifold embedding: patches are distributed on a manifold such that similar patches (where similarity is measured only in the region where they overlap) lie close to each other; as a result, inconsistent patches are placed far apart on the manifold. Second, fusion: a sample on the manifold is mapped back to image space, combining features from all patches in the neighbourhood of the sample. For the manifold embedding mechanism, a new method based on a Convolutional Variational Autoencoder (β-VAE) is proposed and compared to classical manifold embedding techniques, both linear (Multidimensional Scaling) and non-linear (Laplacian Eigenmaps). Experiments on synthetic data and on real fetal ultrasound images yield fused images of the whole fetus in which, on average, the β-VAE outperforms all other methods in terms of preservation of patch information and overall image quality.
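To make the classical baselines concrete, the sketch below (not the authors' code) shows one plausible way to embed overlapping patches with Multidimensional Scaling and Laplacian Eigenmaps using scikit-learn, with a dissimilarity computed only where two patches overlap. The patch representation, masks, overlap handling and kernel bandwidth are illustrative assumptions; the proposed β-VAE would replace this fixed embedding with a learned latent space, and the fusion step is not shown.

```python
# Illustrative sketch, assuming patches are resampled to a common grid with
# boolean validity masks. Dissimilarity is a masked RMS intensity difference.
import numpy as np
from sklearn.manifold import MDS, SpectralEmbedding


def overlap_dissimilarity(patches, masks):
    """Pairwise RMS difference restricted to the region where two patches overlap."""
    n = len(patches)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            overlap = masks[i] & masks[j]
            if overlap.any():
                diff = patches[i][overlap] - patches[j][overlap]
                d[i, j] = d[j, i] = np.sqrt(np.mean(diff ** 2))
            else:
                d[i, j] = d[j, i] = np.inf  # filled in below
    finite = d[np.isfinite(d)]
    d[~np.isfinite(d)] = 2.0 * finite.max() if finite.size else 1.0
    return d


# Toy data standing in for overlapping ultrasound patches
rng = np.random.default_rng(0)
patches = rng.normal(size=(10, 32, 32))
masks = np.ones((10, 32, 32), dtype=bool)
D = overlap_dissimilarity(patches, masks)

# Linear baseline: metric MDS on the precomputed dissimilarity matrix
mds_coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)

# Non-linear baseline: Laplacian Eigenmaps via spectral embedding of a
# Gaussian affinity derived from the same dissimilarities (bandwidth is a guess)
sigma = np.median(D[D > 0])
affinity = np.exp(-D ** 2 / (2 * sigma ** 2))
le_coords = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(affinity)
```

Inconsistent patches receive large dissimilarities wherever their overlaps disagree, so both embeddings place them far from mutually consistent ones, which is the property the framework relies on before fusion.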
Gomez, A., Zimmer, V., Toussaint, N., Wright, R., Clough, J. R., Khanal, B., … Schnabel, J. A. (2019). Image Reconstruction in a Manifold of Image Patches: Application to Whole-Fetus Ultrasound Imaging. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11905 LNCS, pp. 226–235). Springer. https://doi.org/10.1007/978-3-030-33843-5_21