Unified Perceptual Parsing for Scene Understanding

  • Xiao T
  • Liu Y
  • Zhou B
  • et al.

Abstract

Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.
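The abstract's "training strategy to learn from heterogeneous image annotations" can be sketched as follows: each training batch is drawn from a single source dataset, and only the task heads that dataset annotates contribute to the loss. This is a minimal, hypothetical illustration of that idea; the head and dataset names (and the stand-in loss functions) are illustrative placeholders, not taken from the paper's released code.

```python
import random

# Stand-in "heads" for different perceptual levels; in the real framework
# these would be network branches producing classification/segmentation losses.
HEADS = {
    "scene": lambda feat: feat * 0.1,
    "object": lambda feat: feat * 0.2,
    "texture": lambda feat: feat * 0.3,
}

# Hypothetical mapping: which tasks each source dataset annotates.
DATASET_TASKS = {
    "ADE20K": ["scene", "object"],
    "DTD": ["texture"],
}

def training_step(dataset_name, feature):
    """Compute losses only for the heads the sampled dataset annotates."""
    return {task: HEADS[task](feature) for task in DATASET_TASKS[dataset_name]}

# Each iteration samples one data source and updates only the relevant heads.
random.seed(0)
for step in range(3):
    name = random.choice(list(DATASET_TASKS))
    print(name, training_step(name, feature=1.0))
```

The key design choice sketched here is that unannotated tasks are simply skipped for a given batch, so no dataset needs labels for every concept level.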

Cite

CITATION STYLE

APA

Xiao, T., Liu, Y., Zhou, B., Jiang, Y., & Sun, J. (2018). Unified perceptual parsing for scene understanding. In ECCV (Vol. 1, pp. 432–448). Springer International Publishing. Retrieved from https://github.com/CSAILVision/unifiedparsing and http://dx.doi.org/10.1007/978-3-030-01228-1_26
