Automatic radiological report generation (ARRG) streamlines the clinical workflow by speeding up the report-writing task. Recently, various deep neural networks (DNNs) have been applied to report generation and have achieved promising results. Despite these results, deploying such models remains challenging because of their size and complexity. Researchers have proposed several pruning methods to reduce the size of DNNs. Inspired by one-shot weight pruning methods, we present CheXPrune, a multi-attention-based sparse radiology report generation method. It uses an encoder-decoder architecture equipped with visual and semantic attention mechanisms. The model is pruned to 70% sparsity during training, achieving 3.33× compression without sacrificing accuracy. Empirical results on the OpenI dataset, evaluated using the BLEU, ROUGE, and CIDEr metrics, confirm the accuracy of the sparse model vis-à-vis the dense model.
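The abstract's numbers fit together: zeroing 70% of the weights leaves 30% of them, i.e. a 1/0.30 ≈ 3.33× reduction in stored parameters. The following is a minimal sketch of one-shot global magnitude pruning in that spirit; it is an illustration of the general technique, not the authors' actual implementation, and the function name and toy layer shapes are assumptions.

```python
import numpy as np

def one_shot_global_prune(weights, sparsity=0.70):
    """One-shot global magnitude pruning (illustrative sketch).

    Weights from all layers are ranked together by absolute magnitude,
    and the smallest `sparsity` fraction is zeroed in a single pass,
    producing a binary mask per layer.
    """
    # Pool all magnitudes to find one global cutoff across layers.
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, sparsity)
    # Keep only weights above the global threshold.
    masks = [np.abs(w) > threshold for w in weights]
    pruned = [w * m for w, m in zip(weights, masks)]
    return pruned, masks

# Toy example: two hypothetical "layers" pruned jointly.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)), rng.normal(size=(8,))]
pruned, masks = one_shot_global_prune(layers, sparsity=0.70)
total = sum(w.size for w in layers)
zeros = sum(int((w == 0).sum()) for w in pruned)
print(f"sparsity: {zeros / total:.2f}")  # close to 0.70
```

Because the threshold is global rather than per-layer, layers whose weights are uniformly small can end up more heavily pruned than others, which is the usual trade-off of one-shot global schemes.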
CITATION STYLE
Kaur, N., & Mittal, A. (2023). CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning. Journal of Ambient Intelligence and Humanized Computing, 14(6), 7485–7497. https://doi.org/10.1007/s12652-022-04454-z