A significant task in automatic diagnosis from radiology imaging, especially chest X-rays, is identifying disease types, which can be viewed as a multi-label learning problem. Prior state-of-the-art approaches adopted graph convolutional networks to model the correlations among disease labels. However, such approaches neglect the medical reports paired with radiology images. Hence, two novel improvements are proposed in this paper. First, disease label embeddings are pre-trained on the full corpus of radiology reports, and these semantic features are fused with encoded X-ray features in a transformer encoder to initialize the graph. Second, to expand the representational capacity of the graph, extra medical terms mined from radiology reports are added to the graph model as auxiliary nodes without changing the size of the output space. Experiments on two public chest X-ray datasets demonstrate performance superior to the compared models and confirm the benefit of each proposed improvement.
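As an illustration of the second idea, the following minimal sketch (not the authors' implementation) shows a single graph-convolution layer over a node set that mixes disease labels with auxiliary medical-term nodes; the node counts, feature dimension, and random adjacency are hypothetical placeholders. Auxiliary nodes participate in message passing but are discarded before classification, so the output space keeps only the original disease labels:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Hypothetical sizes: 3 disease labels + 2 auxiliary term nodes,
# each with a 4-dim fused visual-semantic feature.
n_disease, n_aux, dim = 3, 2, 4
n_nodes = n_disease + n_aux
rng = np.random.default_rng(0)
A = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)
A = np.maximum(A, A.T)                      # symmetric co-occurrence graph
H = rng.standard_normal((n_nodes, dim))     # initial node features
W = rng.standard_normal((dim, dim))         # learnable weight (fixed here)

H_out = gcn_layer(A, H, W)
# Auxiliary nodes enrich propagation but are dropped from the output,
# so the classifier head scores only the original disease labels.
disease_features = H_out[:n_disease]
print(disease_features.shape)  # (3, 4)
```

In a full model, `disease_features` would feed a per-label classifier, so adding auxiliary nodes never enlarges the prediction vector.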
Hou, D., Zhao, Z., & Hu, S. (2021). Multi-label learning with visual-semantic embedded knowledge graph for diagnosis of radiology imaging. IEEE Access, 9, 15720–15730. https://doi.org/10.1109/ACCESS.2021.3052794