Abstract
Blur region detection from a single image with spatially varying blur is a challenging task. Although many methods have been proposed over the past decades, most rely on hand-crafted features. Such features are not robust to image content, image size, blur type, and other factors, and therefore cannot achieve sound performance; moreover, crafting them requires substantial domain knowledge. To address these problems, this paper proposes a blur region detection method based on semantic segmentation that integrates global image-level context with cross-layer context, making the automatically learned features more robust. Specifically, we design a blur detection network (BDNet) by combining ResNets and FCNs, which produces a binary blur mask in an end-to-end manner. With our method, the mean region intersection over union (Mean IoU) increases by nearly 20% compared with most other blur detection methods. The code is publicly available at https://github.com/SEU-DongHan/BDNet.
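The Mean IoU figure reported above can be made concrete with a small sketch. The snippet below (an illustration, not the authors' evaluation code; the function name and NumPy-based implementation are assumptions) computes the mean region intersection over union for a predicted binary blur mask against ground truth, averaging the IoU of the blur and non-blur classes:

```python
import numpy as np

def mean_iou_binary(pred, gt):
    """Mean IoU for a binary blur mask: average the per-class IoU
    over the two classes (0 = sharp, 1 = blurred)."""
    ious = []
    for cls in (0, 1):
        p = (pred == cls)
        g = (gt == cls)
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Example: a 2x2 prediction that over-segments the blurred region
pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
score = mean_iou_binary(pred, gt)  # (1/2 + 2/3) / 2
```

A perfect prediction yields 1.0; averaging over both classes keeps the metric meaningful even when the blurred region covers only a small fraction of the image.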
Citation
Shen, A., Dong, H., Wang, K., Kong, Y., Wu, J., & Shu, H. (2020). Automatic Extraction of Blur Regions on a Single Image Based on Semantic Segmentation. IEEE Access, 8, 44867–44878. https://doi.org/10.1109/ACCESS.2020.2978084