Abstract
Establishing dense semantic correspondences between object instances remains a challenging problem due to background clutter, significant scale and pose differences, and large intra-class variations. In this paper, we present an end-to-end trainable network for learning semantic correspondences using only matching image pairs, without manual key-point correspondence annotations. To facilitate network training with this weaker form of supervision, we 1) explicitly estimate the foreground regions to suppress the effect of background clutter and 2) develop cycle-consistent losses that enforce the predicted transformations across multiple images to be geometrically plausible and consistent. We train the proposed model on the PF-PASCAL dataset and evaluate its performance on the PF-PASCAL, PF-WILLOW, and TSS datasets. Extensive experimental results show that the proposed approach performs favorably against the state-of-the-art. The code and model will be available at https://yunchunchen.github.io/WeakMatchNet/.
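The abstract does not spell out the form of the cycle-consistent loss. As a minimal sketch of the general idea, the composition of a predicted forward transformation (image A to B) with the predicted backward transformation (B to A) should map points back to where they started. The snippet below illustrates this for affine transformation parameters; the function name, the (B, 2, 3) parameter shape, and the sampled-point interface are all assumptions for illustration, not the authors' exact formulation.

```python
import torch

def cycle_consistency_loss(theta_ab, theta_ba, points):
    """Penalize deviation from identity when composing the predicted
    A->B and B->A affine transformations (illustrative sketch).

    theta_ab, theta_ba: (B, 2, 3) affine parameters (hypothetical shape).
    points: (B, N, 2) coordinates sampled in image A (e.g., a grid over
        the estimated foreground), in normalized [-1, 1] coordinates.
    """
    def warp(theta, pts):
        # Apply an affine transform: p' = A @ p + t
        A, t = theta[:, :, :2], theta[:, :, 2]
        return torch.einsum('bij,bnj->bni', A, pts) + t.unsqueeze(1)

    # Warp A -> B, then back B -> A; a geometrically consistent pair of
    # predictions returns each point to its original location.
    roundtrip = warp(theta_ba, warp(theta_ab, points))
    return torch.mean(torch.sum((roundtrip - points) ** 2, dim=-1))
```

Restricting the sampled points to the estimated foreground, as the abstract suggests, would keep background clutter from dominating this consistency term.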
Citation
Chen, Y.-C., Huang, P.-H., Yu, L.-Y., & Huang, J.-B. (2018). Deep Semantic Matching with Foreground Detection and Cycle-Consistency. Asian Conference on Computer Vision (ACCV), 1–16. Retrieved from https://yunchunchen.github.io/WeakMatchNet/