Joint Semantic Understanding with a Multilevel Branch for Driving Perception


Abstract

Visual perception is a critical task for autonomous driving. Understanding the driving environment in real time helps a vehicle drive safely. In this study, we propose a multi-task learning framework for simultaneous traffic object detection, drivable area segmentation, and lane line segmentation in an efficient way. Our network encoder extracts features from an input image, and three decoders at multilevel branches handle the specific tasks. Decoders for more similar tasks share feature maps for joint semantic understanding. Multiple loss functions are summed with automatically learned weights to optimize the multiple objectives simultaneously. We demonstrate the effectiveness of this framework on the BerkeleyDeepDrive100K (BDD100K) dataset. In the experiments, the proposed method outperforms competing multi-task and single-task methods in terms of accuracy while maintaining real-time inference at more than 37 frames per second.
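The abstract does not detail how the multiple losses are "automatically weighted summed." A common approach for this is homoscedastic uncertainty weighting (Kendall et al., CVPR 2018), where each task loss is scaled by a learnable log-variance term plus a regularizer. The sketch below illustrates that style; the function name and the use of plain floats (rather than trainable network parameters) are illustrative assumptions, not the authors' implementation:

```python
import math

def weighted_multitask_loss(task_losses, log_vars):
    """Combine per-task losses with learnable uncertainty weights
    (homoscedastic uncertainty style): L = sum(exp(-s_i) * L_i + s_i).

    task_losses: per-task loss values, e.g. [detection, drivable-area, lane].
    log_vars:    the s_i terms; in a real network these would be trainable
                 parameters updated by backpropagation, not fixed floats.
    """
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# With all s_i = 0, the combination reduces to a plain (unweighted) sum:
total = weighted_multitask_loss([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])  # -> 6.0
```

As training proceeds, a task whose loss stays noisy pushes its s_i up, down-weighting it automatically, while the additive s_i term keeps the weights from collapsing to zero.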

Citation (APA)

Lee, D. G., & Kim, Y. K. (2022). Joint Semantic Understanding with a Multilevel Branch for Driving Perception. Applied Sciences (Switzerland), 12(6). https://doi.org/10.3390/app12062877
