Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects within them, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.
Xiao, T., Liu, Y., Zhou, B., Jiang, Y., & Sun, J. (2018). Unified Perceptual Parsing for Scene Understanding. In ECCV (pp. 432–448). Springer International Publishing. https://doi.org/10.1007/978-3-030-01228-1_26. Code: https://github.com/CSAILVision/unifiedparsing