Purpose: In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a major challenge in skill acquisition. We aim to build a US plane localisation system for 3D visualisation, training, and guidance without integrating additional sensors.
Methods: We propose a regression convolutional neural network (CNN) that uses image features to estimate the six-dimensional (6D) pose of arbitrarily oriented US planes relative to the fetal brain centre. The network was trained on synthetic images acquired from phantom 3D US volumes and fine-tuned on real scans. Training data were generated in Unity by slicing the US volumes into imaging planes at random coordinates, sampled more densely around the standard transventricular (TV) plane.
Results: With phantom data, the median errors are 0.90 mm/1.17° and 0.44 mm/1.21° for random planes and planes close to the TV plane, respectively. With real data from a different fetus of the same gestational age (GA), these errors are 11.84 mm/25.17°. The average inference time is 2.97 ms per plane.
Conclusion: The proposed network reliably localises US planes within the fetal brain in phantom data and generalises pose regression to an unseen fetal brain of a GA similar to that used in training. Future development will expand the prediction to volumes of the whole fetus and assess its potential for vision-based, freehand US-assisted navigation when acquiring standard fetal planes.
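To make the Methods concrete, the sketch below shows one way a regression CNN could map a single 2D US frame to a 6D plane pose (3D translation plus rotation). It is an illustrative assumption, not the paper's published implementation: the ResNet-18 backbone, the quaternion rotation parameterisation, the loss weighting, and all names (PlanePoseRegressor, pose_loss) are hypothetical choices made for this example.

```python
# Minimal sketch of a plane-pose regression CNN (illustrative assumption only;
# the paper's exact architecture, rotation parameterisation, and loss may differ).
# Input: one single-channel 2D US image. Output: 3D translation (mm) of the plane
# relative to the fetal brain centre and a unit quaternion for its orientation.
import torch
import torch.nn as nn
from torchvision import models


class PlanePoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Backbone: ImageNet-pretrained ResNet-18 adapted to 1-channel US images
        # (backbone choice is an assumption made for this sketch).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()          # expose the 512-d feature vector
        self.backbone = backbone
        # Two regression heads: translation (3 values) and rotation quaternion (4 values).
        self.trans_head = nn.Linear(512, 3)
        self.rot_head = nn.Linear(512, 4)

    def forward(self, x):
        feats = self.backbone(x)             # (B, 512) image features
        t = self.trans_head(feats)           # (B, 3) translation in mm
        q = self.rot_head(feats)             # (B, 4) raw quaternion
        q = q / q.norm(dim=1, keepdim=True)  # normalise to a unit quaternion
        return t, q


def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=1.0):
    """Weighted sum of translation and rotation errors (the weighting scheme
    is an assumption for illustration)."""
    trans_err = torch.nn.functional.l1_loss(t_pred, t_gt)
    # Quaternion distance that is invariant to the q / -q sign ambiguity.
    rot_err = 1.0 - torch.abs(torch.sum(q_pred * q_gt, dim=1)).mean()
    return trans_err + beta * rot_err


if __name__ == "__main__":
    model = PlanePoseRegressor()
    dummy = torch.randn(2, 1, 224, 224)      # batch of 2 single-channel US frames
    t, q = model(dummy)
    print(t.shape, q.shape)                   # torch.Size([2, 3]) torch.Size([2, 4])
```

In this sketch the rotation head outputs a quaternion purely because it gives a compact, continuous target for regression; the paper itself does not specify this choice here, and other parameterisations (e.g. Euler angles or a 6D rotation representation) would slot into the same two-head structure.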
Di Vece, C., Dromey, B., Vasconcelos, F., David, A. L., Peebles, D., & Stoyanov, D. (2022). Deep learning-based plane pose regression in obstetric ultrasound. International Journal of Computer Assisted Radiology and Surgery, 17(5), 833–839. https://doi.org/10.1007/s11548-022-02609-z