Abstract
One of the key challenges in analysis of atomically resolved imaging data is the determination of the symmetry, ideally down to the space group, of the various phases that are present. Doing so in an automated fashion enables, for example, tracking of phase transformations under different stimuli (including under the electron beam), but the existing methods are susceptible to distortions arising from noise that can greatly complicate the classification process, and at times still require manual input from the user (e.g., selection of the repeating motif). A fully automated method that requires no user input and produces results with quantified uncertainty is therefore needed. In total, for any 2D periodic lattice, there exist only five Bravais lattice types and 17 plane (wallpaper) groups. Given any 2D atomically resolved image, the task therefore boils down to two steps: (1) segmentation of the image into its constituent phases, and (2) classification of the symmetry of these phases. For the first task, we have previously shown the use of a sliding window Fourier transform combined with linear unmixing techniques, which allows the spatial phases to be easily segmented [1]. Here, we show that a deep learning approach can tackle the second part of this problem, namely, symmetry determination into one of the five Bravais lattice types. Deep convolutional neural networks (DCNNs) have been shown to outperform previous computer vision methods, which typically required hand-crafted feature vectors on which the machine vision algorithms were trained [2]. Indeed, DCNN classification can now approach human-level performance for real images.
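The sliding window Fourier transform segmentation step can be sketched as below. This is a minimal illustration, not the implementation of [1]: the window size, step, toy lattice, and the SVD used as a stand-in for the linear unmixing are all assumptions made for the example.

```python
import numpy as np

def sliding_window_fft(image, win=32, step=8):
    """Magnitude of the 2D FFT computed over a sliding window.

    `win` and `step` are illustrative choices, not values from [1].
    Returns an array of shape (n_windows, win * win), one flattened
    FFT magnitude spectrum per window position.
    """
    h, w = image.shape
    rows = []
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            patch = image[i:i + win, j:j + win]
            mag = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
            rows.append(mag.ravel())
    return np.asarray(rows)

# Toy periodic "lattice" standing in for an atomically resolved image.
x, y = np.meshgrid(np.arange(128), np.arange(128))
image = np.cos(0.5 * x) + np.cos(0.5 * y)

# Linear unmixing stand-in: a truncated SVD of the mean-centered window
# spectra; the per-window loadings map out the spatial phases.
spectra = sliding_window_fft(image)
u, s, vt = np.linalg.svd(spectra - spectra.mean(axis=0), full_matrices=False)
abundances = u[:, :2]   # loadings of the two dominant components
```

In practice a non-negative factorization is a common choice for the unmixing, since FFT magnitudes and phase abundances are non-negative; SVD is used here only to keep the sketch dependency-free.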
The key idea behind DCNNs is that the various convolutional layers learn abstract representations of classes that progressively become more detailed, allowing the network to learn features that are position- and viewpoint-invariant and can therefore be useful in image classification tasks. We exploit this advance by training a DCNN for symmetry classification. Our method utilizes images in reciprocal (Fourier) space, as opposed to real space. In effect, by employing the 2D fast Fourier transform (FFT) as preprocessing, we ensure that the DCNN focuses only on the features that are important for symmetry classification. We first simulated 4000 images for each of the five Bravais lattice types, plus a sixth class for missing or absent periodicity (termed a 'noise' class), and then took the FFT of each simulated lattice. We then trained a DCNN consisting of 3 convolutional layers, a fully connected layer, and a final 'softmax' output layer on this training dataset. Importantly, we utilized dropout, which randomly masks a fraction of the output of a previous layer before it is fed into the input of the next layer. Dropout serves two purposes: (1) it reduces overfitting by ensuring the weights on any one convolutional filter do not become too large during training, and (2) by applying dropout during the running (testing) phase as well, it allows the probabilities over the classifications to be determined [3]. After training over 30 epochs, the network reached 85% accuracy on the validation set.
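The test-time dropout trick for quantifying classification uncertainty can be illustrated with a toy single-layer classifier; this is a hedged sketch of the Monte Carlo dropout idea from [3], not the 3-convolutional-layer network described above, and the feature dimension, dropout rate, and random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1D logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mc_dropout_predict(features, weights, p_drop=0.5, n_samples=100):
    """Monte Carlo dropout: keep dropout active at test time and average
    the softmax outputs over many stochastic forward passes. The spread
    across passes gives a per-class uncertainty estimate."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(features.shape) >= p_drop   # random unit mask
        dropped = features * mask / (1.0 - p_drop)    # inverted dropout scaling
        probs.append(softmax(weights @ dropped))
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)

# Six output classes: the five Bravais lattice types plus the 'noise' class.
features = rng.standard_normal(32)        # stand-in for learned FFT features
weights = rng.standard_normal((6, 32))    # stand-in for a trained output layer
mean_p, std_p = mc_dropout_predict(features, weights)
```

`mean_p` plays the role of the classification probabilities over the six classes, while `std_p` quantifies how sensitive each class score is to the dropout masks, i.e. the model's uncertainty.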
Vasudevan, R. K., Dyck, O., Ziatdinov, M., Jesse, S., Laanait, N., & Kalinin, S. V. (2018). Deep Convolutional Neural Networks for Symmetry Detection. Microscopy and Microanalysis, 24(S1), 112–113. https://doi.org/10.1017/s1431927618001058