Image captioning and object detection are among the fastest-growing and most popular research areas in computer vision. Almost every emerging technology uses vision in some form, and sustained research on object detection has brought many problems that once seemed intractable close to being solved. There has been far less work, however, on identifying image regions that associate actions with objects. Dense image captioning [8] is one such application: it localizes all the important regions in an image along with their descriptions, much like ordinary image captioning repeated for every salient region. In this paper, we address the problem of detecting the regions described by a query caption. We use Edge Boxes to generate object proposals efficiently, and filter the proposals down with a score measure. Each remaining proposal is captioned with a pretrained Inception [19] model, and the resulting captions are compared against the query caption for similarity using skip-thought vectors [9]. We evaluate the proposed framework quantitatively by computing the intersection over union (IoU) with the ground truth on the Visual Genome [10] dataset. Combining these techniques in sequence yields encouraging results.
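The following is a minimal sketch of the pipeline as the abstract describes it, not the authors' implementation. It assumes OpenCV's contrib module for Edge Boxes (where recent versions of `getBoundingBoxes` also return objectness scores) and a structured-edge model file; `caption_region` and `encode_sentences` are hypothetical placeholders standing in for the pretrained Inception-based captioner and the skip-thoughts sentence encoder, and `model.yml.gz` is an assumed model path.

```python
import cv2
import numpy as np

def edge_box_proposals(image_bgr, model_path, max_boxes=100, min_score=0.05):
    """Generate scored region proposals with Edge Boxes, then filter by score."""
    detector = cv2.ximgproc.createStructuredEdgeDetection(model_path)
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = detector.detectEdges(rgb)
    orientation = detector.computeOrientation(edges)
    edges_nms = detector.edgesNms(edges, orientation)
    eb = cv2.ximgproc.createEdgeBoxes()
    eb.setMaxBoxes(max_boxes)
    # Recent OpenCV versions return (boxes, scores); older ones return boxes only.
    boxes, scores = eb.getBoundingBoxes(edges_nms, orientation)
    # Keep only proposals whose objectness score clears the threshold.
    return [tuple(b) for b, s in zip(boxes, scores) if s >= min_score]

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def best_region_for_caption(image_bgr, query, caption_region, encode_sentences,
                            model_path="model.yml.gz"):
    """Caption each proposal and rank proposals by skip-thought similarity
    to the query caption; return the highest-scoring box (x, y, w, h)."""
    boxes = edge_box_proposals(image_bgr, model_path)
    captions = [caption_region(image_bgr[y:y + h, x:x + w])
                for (x, y, w, h) in boxes]
    vecs = encode_sentences([query] + captions)  # skip-thought embeddings
    sims = [cosine(vecs[0], v) for v in vecs[1:]]
    return boxes[int(np.argmax(sims))]

def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) boxes, for evaluation
    against Visual Genome ground-truth regions."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

Under these assumptions, retrieval and evaluation reduce to one call each: `pred = best_region_for_caption(img, "a man riding a bicycle", captioner, encoder)` followed by `iou(pred, gt_box)` against the ground-truth region.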