Structuralizing disaster-scene data through auto-captioning

Abstract

Disaster-scene images documenting the magnitude and effects of natural disasters can nowadays be collected easily through crowdsourcing aided by mobile technologies (e.g., smartphones or drones). One challenge confronting first responders who wish to use such data is the unstructured nature of these crowdsourced images. Among other techniques, one natural way to structuralize disaster-scene images is through captioning: the imagery content is augmented with descriptive captions that enable more effective search and query (S&Q). This work presents a preliminary test that exploits an end-to-end deep learning framework with a linked CNN-LSTM architecture. A demonstration of the results and a quantitative evaluation are presented, showcasing the validity of the proposed concept.
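
The paper does not include an implementation, but the linked CNN-LSTM design it describes is a standard encoder-decoder captioning pattern: a convolutional network encodes the image into a feature vector, and an LSTM decodes that vector into a caption token sequence. Below is a minimal sketch of that pattern in PyTorch; the backbone choice (ResNet-50), embedding and hidden sizes, and vocabulary size are illustrative assumptions, not the authors' settings.

# Minimal CNN-LSTM captioning sketch (assumed hyperparameters, not the paper's).
import torch
import torch.nn as nn
import torchvision.models as models

class CNNEncoder(nn.Module):
    """Encode an image into a fixed-length feature vector with a CNN backbone."""
    def __init__(self, embed_size=256):
        super().__init__()
        resnet = models.resnet50(weights=None)  # load pretrained weights in practice
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop final fc
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        with torch.no_grad():                 # the backbone is typically frozen
            feats = self.backbone(images)
        return self.fc(feats.flatten(1))      # (batch, embed_size)

class LSTMDecoder(nn.Module):
    """Generate caption token logits conditioned on the image embedding."""
    def __init__(self, embed_size=256, hidden_size=512, vocab_size=5000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, image_feats, captions):
        tok = self.embed(captions)                           # (batch, T, embed)
        # Prepend the image feature as the first step of the input sequence.
        seq = torch.cat([image_feats.unsqueeze(1), tok], 1)  # (batch, T+1, embed)
        out, _ = self.lstm(seq)
        return self.fc(out)                                  # (batch, T+1, vocab)

# Example forward pass with dummy data.
encoder, decoder = CNNEncoder(), LSTMDecoder()
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 5000, (2, 12))
logits = decoder(encoder(images), captions)
print(logits.shape)  # torch.Size([2, 13, 5000])

In training, such a model is typically optimized with cross-entropy loss against ground-truth captions; at inference, captions are decoded greedily or with beam search from the image embedding alone.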

Citation (APA)

Klerings, A., Tang, S., & Chen, Z. Q. (2019). Structuralizing disaster-scene data through auto-captioning. In Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Advances in Resilient and Intelligent Cities, ARIC 2019 (pp. 29–32). Association for Computing Machinery, Inc. https://doi.org/10.1145/3356395.3365671
