ssEMnet: Serial-section electron microscopy image registration using a spatial transformer network with learned features


Abstract

The alignment of serial-section electron microscopy (ssEM) images is critical for efforts in neuroscience that seek to reconstruct neuronal circuits. However, each ssEM plane contains densely packed structures that vary from one section to the next, which makes matching features across images challenging. Advances in deep learning have resulted in unprecedented performance on similar computer vision problems, but to our knowledge, they have not been successfully applied to ssEM image co-registration. In this paper, we introduce a novel deep network model that combines a spatial transformer for image deformation and a convolutional autoencoder for unsupervised feature learning to achieve robust ssEM image alignment. This results in improved accuracy and robustness while requiring substantially less user intervention than conventional methods. We evaluate our method by comparing registration quality across several datasets.
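At the core of a spatial transformer is a differentiable warp: a transformation's parameters generate a sampling grid over the moving image, and bilinear interpolation resamples the image at those locations so that gradients can flow back to the parameters. The sketch below illustrates this mechanism for the affine case in plain NumPy; it is a minimal illustration of the general spatial-transformer warp, not the authors' implementation, and the function names (`affine_grid`, `bilinear_sample`) are hypothetical.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Build an (x, y) sampling grid from a 2x3 affine matrix `theta`.

    Coordinates are normalized to [-1, 1], the usual spatial-transformer
    convention, so the transform is resolution-independent.
    """
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    return (theta @ coords).reshape(2, H, W)  # sampling locations (x, y)

def bilinear_sample(img, grid):
    """Resample `img` at the grid locations with bilinear interpolation."""
    H, W = img.shape
    # Map normalized coordinates back to pixel indices.
    x = (grid[0] + 1) * (W - 1) / 2
    y = (grid[1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    # Weighted sum of the four neighboring pixels.
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0]
            + wy * wx * img[y0 + 1, x0 + 1])

# Sanity check: the identity transform leaves the image unchanged.
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
warped = bilinear_sample(img, affine_grid(theta, 4, 4))
```

In the paper's setting, a similarity loss would be computed not on raw pixels but on features produced by the convolutional autoencoder, and the transform parameters would be optimized by backpropagating through this warp.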

Citation (APA)

Yoo, I., Hildebrand, D. G. C., Tobin, W. F., Lee, W. C. A., & Jeong, W. K. (2017). ssEMnet: Serial-section electron microscopy image registration using a spatial transformer network with learned features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10553 LNCS, pp. 249–257). Springer Verlag. https://doi.org/10.1007/978-3-319-67558-9_29
