Accurate diagnosis in cardiac ultrasound requires high-quality images, each of which must show specific features and structures depending on which of the 14 standard cardiac views the operator is attempting to acquire. Inexperienced operators can have great difficulty recognizing these features and may therefore fail to capture diagnostically relevant heart cines. This project aims to mitigate this challenge by providing operators with real-time feedback in the form of view classification and quality estimation. Our system uses a frame grabber to capture the raw video output of the ultrasound machine, which is fed into an Android mobile device running a customized mobile implementation of the TensorFlow inference engine. By multi-threading four TensorFlow instances together, we are able to run the system at 30 Hz with a latency of under 0.4 s.
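The multi-threading scheme described above can be illustrated with a minimal sketch. The code below is not the authors' Android/TensorFlow implementation; it is a hypothetical Python analogue showing the general pattern of dispatching incoming frames round-robin to a fixed pool of worker threads, each owning its own inference instance, while preserving frame order in the output. The `infer` stub, queue layout, and all names are assumptions for illustration only.

```python
import queue
import threading

NUM_WORKERS = 4  # the paper multi-threads four TensorFlow instances


def infer(frame):
    # Stub standing in for one TensorFlow forward pass.
    # Hypothetical output: a (view_index, quality_score) pair,
    # loosely mirroring view classification over 14 standard views.
    return (frame % 14, 0.5)


def worker(in_q, results, lock):
    # Each worker drains frames from the shared queue until it
    # receives the None sentinel, then exits.
    while True:
        item = in_q.get()
        if item is None:
            break
        idx, frame = item
        out = infer(frame)
        with lock:
            results[idx] = out  # index preserves original frame order


def run_pipeline(frames):
    # Fan frames out to NUM_WORKERS threads, then gather results in order.
    in_q = queue.Queue()
    results = {}
    lock = threading.Lock()
    threads = [
        threading.Thread(target=worker, args=(in_q, results, lock))
        for _ in range(NUM_WORKERS)
    ]
    for t in threads:
        t.start()
    for idx, frame in enumerate(frames):
        in_q.put((idx, frame))
    for _ in threads:
        in_q.put(None)  # one shutdown sentinel per worker
    for t in threads:
        t.join()
    return [results[i] for i in range(len(frames))]
```

In a real-time setting the gather step would instead stream each result back to the UI as it completes; the sketch batches only to keep the example self-contained.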
Van Woudenberg, N., Liao, Z., Abdi, A. H., Girgis, H., Luong, C., Vaseli, H., … Abolmaesumi, P. (2018). Quantitative echocardiography: Real-time quality estimation and view classification implemented on a mobile android device. In Lecture Notes in Computer Science (Vol. 11042, pp. 74–81). Springer. https://doi.org/10.1007/978-3-030-01045-4_9