During the COVID-19 pandemic, the need for rapid and reliable alternative COVID-19 screening methods has motivated the development of learning networks to screen COVID-19 patients based on chest radiography obtained from Chest X-ray (CXR) and Computed Tomography (CT) imaging. Although the effectiveness of the developed models has been documented, their adoption in assisting radiologists suffers mainly from the failure to implement or present any applicable framework. Therefore, in this paper, a robotic framework is proposed to aid radiologists in COVID-19 patient screening. Specifically, transfer learning is employed to first develop two well-known learning networks (GoogleNet and SqueezeNet) that classify positive and negative COVID-19 patients based on CXR and CT images collected from three publicly available repositories. SqueezeNet achieved a test accuracy of 90.90%, with a sensitivity of 94.70% and a specificity of 87.20%, while GoogleNet achieved a test accuracy of 96.40%, with a sensitivity of 95.50% and a specificity of 97.40%. Consequently, to demonstrate the clinical usability of the model, it is deployed on the Softbank NAO-V6 humanoid robot, a social robot that serves as an assistive platform for radiologists. The strategy is an end-to-end explainable sorting of X-ray images, particularly for COVID-19 patients. Laboratory-based implementation of the overall framework demonstrates the effectiveness of the proposed platform in aiding radiologists in COVID-19 screening.
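The abstract does not specify the training framework or hyperparameters used for transfer learning; the sketch below is a minimal, illustrative example assuming a PyTorch/torchvision setup, showing how ImageNet-pretrained GoogLeNet and SqueezeNet backbones could be adapted into two-class (COVID-19 positive/negative) classifiers as described above.

```python
# Illustrative transfer-learning sketch (PyTorch/torchvision); the paper does not
# state its framework or training details, so all choices below are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # COVID-19 positive vs. negative

def build_googlenet(num_classes: int = NUM_CLASSES) -> nn.Module:
    """Load ImageNet-pretrained GoogLeNet and replace its final fully connected layer."""
    model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def build_squeezenet(num_classes: int = NUM_CLASSES) -> nn.Module:
    """Load ImageNet-pretrained SqueezeNet and replace its final 1x1 conv classifier."""
    model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model

if __name__ == "__main__":
    net = build_googlenet()
    net.eval()  # inference mode so the forward pass returns plain logits
    dummy_cxr = torch.randn(1, 3, 224, 224)  # one 3-channel 224x224 radiograph
    with torch.no_grad():
        logits = net(dummy_cxr)
    print(logits.shape)  # torch.Size([1, 2])
```

Only the final classification layer is swapped out here; in a typical transfer-learning setup the adapted network would then be fine-tuned end to end (or with earlier layers frozen) on the labeled CXR/CT dataset before evaluation.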
CITATION STYLE
Ajani, O. S., Obasekore, H., Kang, B. Y., & Rammohan, M. (2023). Robotic Assistance in Radiology: A Covid-19 Scenario. IEEE Access, 11, 49785–49793. https://doi.org/10.1109/ACCESS.2023.3277526