In this paper, we propose a novel human-centric visual relation segmentation method based on the Mask R-CNN and VTransE models. We first retrain the Mask R-CNN model to segment both human and object instances. Because Mask R-CNN may miss some human instances during instance segmentation, we additionally detect the missed faces and extend them to localize the corresponding human instances. Finally, we retrain the last layer of the VTransE model to detect the visual relations between each pair of a human instance and a human/object instance. Experimental results show that our method achieves R@100 scores of 0.4799, 0.4069, and 0.2681 at m-IoU thresholds of 0.25, 0.50, and 0.75, respectively, outperforming the other methods in the Person in Context Challenge.
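The face-based recovery step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the face-to-body extension heuristic, the overlap criterion, and all function names are our assumptions. The idea is to keep every Mask R-CNN human detection and add a face-derived body box only when it is not already covered by an existing detection.

```python
def iou_area(a, b):
    """Intersection area of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0, ix2 - ix1) * max(0, iy2 - iy1)

def extend_face_to_body(face, img_h, img_w):
    """Heuristic extension (an assumption, not the paper's exact rule):
    the body box is three face-widths wide and extends downward from the
    face by several face-heights, clipped to the image bounds."""
    x1, y1, x2, y2 = face
    w, h = x2 - x1, y2 - y1
    return (max(0, x1 - w), y1, min(img_w, x2 + w), min(img_h, y2 + 6 * h))

def merge_humans(mask_rcnn_humans, face_boxes, img_h, img_w, thresh=0.5):
    """Keep all Mask R-CNN humans; add a face-derived body box only if
    the fraction of its area already covered by some detected human is
    below `thresh` (i.e., it likely corresponds to a missed person)."""
    humans = list(mask_rcnn_humans)
    for face in face_boxes:
        body = extend_face_to_body(face, img_h, img_w)
        body_area = (body[2] - body[0]) * (body[3] - body[1])
        covered = max((iou_area(body, h) for h in humans), default=0)
        if body_area > 0 and covered / body_area < thresh:
            humans.append(body)
    return humans
```

For example, a face lying inside an already-detected person is suppressed, while a face in an uncovered region yields a new human box that is then passed, together with the Mask R-CNN instances, to the relation detection stage.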
Yu, F., Tan, X., Ren, T., & Wu, G. (2019). Human-centric visual relation segmentation using Mask R-CNN and VTransE. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11130 LNCS, pp. 582–589). Springer Verlag. https://doi.org/10.1007/978-3-030-11012-3_44