Disaster-scene images documenting the magnitude and effects of natural disasters can now be collected easily through crowdsourcing aided by mobile technologies (e.g., smartphones or drones). One challenge confronting first responders who wish to use such data is the unstructured nature of these crowdsourced images. Among other techniques, a natural way to structuralize disaster-scene images is through captioning: the imagery content is augmented with descriptive captions that enable more effective search and query (S&Q). This work presents a preliminary test that exploits an end-to-end deep learning framework with a linked CNN-LSTM architecture, in which a convolutional neural network encodes each image and a long short-term memory network generates its caption. Demonstration results and a quantitative evaluation showcase the validity of the proposed concept.
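To illustrate the linked CNN-LSTM idea, the following is a minimal sketch of an encoder-decoder captioning model in PyTorch. The paper does not specify its backbone, vocabulary, or hyperparameters; the ResNet-50 encoder, the 256/512 dimensions, and the vocabulary size below are illustrative assumptions, not the authors' settings.

    # Minimal CNN-LSTM captioning sketch (assumed setup, not the paper's exact model).
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class CaptionModel(nn.Module):
        def __init__(self, embed_dim=256, hidden_dim=512, vocab_size=5000):
            super().__init__()
            # CNN encoder: pretrained ResNet-50 with its classifier head removed,
            # followed by a linear projection into the caption embedding space.
            cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # -> (B, 2048, 1, 1)
            self.project = nn.Linear(2048, embed_dim)
            # LSTM decoder: consumes the image feature as the first "token",
            # then the word embeddings of the caption so far.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, vocab_size)

        def forward(self, images, captions):
            feats = self.encoder(images).flatten(1)      # (B, 2048)
            feats = self.project(feats).unsqueeze(1)     # (B, 1, embed_dim)
            words = self.embed(captions)                 # (B, T, embed_dim)
            inputs = torch.cat([feats, words], dim=1)    # prepend image feature
            hidden, _ = self.lstm(inputs)
            return self.fc(hidden)                       # per-step vocabulary logits

    # Usage: per-step word distributions for a small batch of disaster-scene images.
    model = CaptionModel()
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 12)))
    print(logits.shape)  # torch.Size([2, 13, 5000])

In such a setup, training minimizes cross-entropy against the reference captions, and at inference the decoder generates a caption word by word (e.g., greedily or with beam search) that can then be indexed for search and query.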
Klerings, A., Tang, S., & Chen, Z. Q. (2019). Structuralizing disaster-scene data through auto-captioning. In Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Advances in Resilient and Intelligent Cities, ARIC 2019 (pp. 29–32). Association for Computing Machinery, Inc. https://doi.org/10.1145/3356395.3365671