In this work, we propose applying abstract meaning representation (AMR) based semantic parsing models to parse textual descriptions of visual scenes into scene graphs; to the best of our knowledge, this is the first work to do so. Previous work examined scene graph parsing from textual descriptions using dependency parsing and left the AMR-based approach as future work, since applying AMR requires more sophisticated methods. We therefore use pre-trained AMR parsing models to parse the region descriptions of visual scenes (i.e., images) into AMR graphs, and pre-trained language models (PLMs), BART and T5, to parse those AMR graphs into scene graphs. Experimental results show that our approach explicitly captures high-level semantics from textual descriptions of visual scenes, such as objects, attributes of objects, and relationships between objects. Our textual scene graph parsing approach outperforms the previous state of the art by 9.3% on the SPICE metric.
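In the paper, the AMR-to-scene-graph step is performed by fine-tuned BART/T5 models, not by rules. To make the mapping itself concrete, the following is a toy, rule-based sketch of how an AMR graph (in PENMAN notation) can be converted into scene-graph tuples of objects, attributes, and relations. Everything here is our own illustration under simplifying assumptions: the parser handles only a restricted PENMAN subset (no re-entrant variables), and the heuristics (sense-tagged concepts like `stand-01` become predicates, `:mod` edges become attributes) are not the paper's method.

```python
import re

def parse_amr(s):
    """Parse a restricted PENMAN-notation AMR string into a nested dict.

    Handles only '(var / concept :role target ...)' where each target is a
    nested node. Re-entrancy (reused variables) is not supported; this is
    a toy parser, not a full PENMAN implementation.
    """
    tokens = re.findall(r'\(|\)|/|:[A-Za-z0-9-]+|[^\s()/]+', s)
    pos = 0

    def node():
        nonlocal pos
        assert tokens[pos] == '('; pos += 1
        var = tokens[pos]; pos += 1
        assert tokens[pos] == '/'; pos += 1
        concept = tokens[pos]; pos += 1
        edges = []
        while tokens[pos] != ')':
            role = tokens[pos].lstrip(':'); pos += 1
            edges.append((role, node()))
        pos += 1  # consume ')'
        return {'var': var, 'concept': concept, 'edges': edges}

    return node()

def to_scene_graph(root):
    """Toy heuristic mapping from an AMR node to scene-graph tuples:
    sense-tagged concepts (e.g. stand-01) are relations between their
    arguments; other concepts are objects; ':mod' edges are attributes."""
    objects, attributes, relations = set(), set(), set()

    def concept_of(n):
        return re.sub(r'-\d+$', '', n['concept'])  # strip sense suffix

    def visit(n):
        if re.search(r'-\d+$', n['concept']):       # predicate node
            kids = dict(n['edges'])
            subj = kids.get('ARG0') or kids.get('ARG1')
            if subj is not None:
                for role, child in n['edges']:
                    if child is not subj:
                        relations.add((concept_of(subj),
                                       concept_of(n),
                                       concept_of(child)))
            for _, child in n['edges']:
                visit(child)
        else:                                       # entity node
            objects.add(concept_of(n))
            for role, child in n['edges']:
                if role == 'mod':
                    attributes.add((concept_of(n), concept_of(child)))
                else:
                    visit(child)

    visit(root)
    return objects, attributes, relations

# Example: AMR for "a young girl standing on a surfboard"
amr = "(s / stand-01 :ARG1 (g / girl :mod (y / young)) :location (s2 / surfboard))"
objs, attrs, rels = to_scene_graph(parse_amr(amr))
# objs  -> {'girl', 'surfboard'}
# attrs -> {('girl', 'young')}
# rels  -> {('girl', 'stand', 'surfboard')}
```

In the paper this mapping is learned end-to-end by the PLM from (AMR, scene graph) pairs rather than hand-coded, which is what lets it generalize beyond fixed role-mapping rules like those above.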
Choi, W. S., Heo, Y. J., Punitan, D., & Zhang, B. T. (2022). Scene Graph Parsing via Abstract Meaning Representation in Pre-trained Language Models. In DLG4NLP 2022 - 2nd Workshop on Deep Learning on Graphs for Natural Language Processing, Proceedings of the Workshop (pp. 30–35). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.dlg4nlp-1.4