Satellite imagery has long been an attractive data source providing a wealth of information regarding human-inhabited areas. While high-resolution satellite images are rapidly becoming available, few studies have focused on how to extract meaningful information regarding human habitation patterns and economic scales from such data. We present READ, a new approach for obtaining an essential spatial representation of any given district from high-resolution satellite imagery based on deep neural networks. Our method combines transfer learning and embedded statistics to efficiently learn the critical spatial characteristics of areas of arbitrary size and represent those characteristics in a fixed-length vector with minimal information loss. Even with a small set of labels, READ can distinguish subtle differences between rural and urban areas and infer the degree of urbanization. An extensive evaluation demonstrates that the model outperforms state-of-the-art models in predicting economic scales, such as population density in South Korea (R^2 = 0.9617), and shows high potential for use in developing countries where district-level economic scales are unknown.
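The core idea described above, embedding variable-size regions into a fixed-length vector by pooling tile-level features from a pretrained network, can be illustrated with a minimal sketch. The sketch below is not the authors' READ implementation: the choice of a ResNet-18 backbone, mean/std pooling as the "embedded statistics", ridge regression as the downstream predictor, and all placeholder data are assumptions made purely for illustration.

```python
# A minimal sketch (not the authors' READ implementation): embed fixed-size
# satellite tiles with a pretrained CNN (transfer learning), pool the tile
# embeddings of each district with summary statistics into a fixed-length
# vector, and regress an economic indicator such as log population density.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Pretrained backbone with the classification head removed -> 512-d tile features.
backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def district_vector(tiles: torch.Tensor) -> np.ndarray:
    """tiles: (n_tiles, 3, 224, 224) preprocessed tiles covering one district
    (n_tiles varies with district size). Returns a fixed-length vector built
    from the mean and std of the tile embeddings."""
    feats = backbone(tiles)                   # (n_tiles, 512)
    mean = feats.mean(dim=0)
    std = feats.std(dim=0)
    return torch.cat([mean, std]).numpy()     # fixed length: 1024

# Hypothetical usage with placeholder data: 8 districts, each covered by a
# different number of tiles, and synthetic regression targets.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(8):
    n_tiles = int(rng.integers(4, 12))
    tiles = torch.rand(n_tiles, 3, 224, 224)  # stand-in for real imagery
    X.append(district_vector(tiles))
    y.append(rng.normal())                    # stand-in for log pop. density
X, y = np.stack(X), np.array(y)

# With fixed-length district vectors, a small labeled set is enough to fit a
# simple downstream regressor.
reg = Ridge(alpha=1.0).fit(X, y)
print("in-sample R^2:", r2_score(y, reg.predict(X)))
```

Pooling with summary statistics is what makes the representation independent of the number of tiles, so districts of very different areas map to vectors of the same length and can be compared or regressed directly.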
Citation
Han, S., Ahn, D., Cha, H., Yang, J., Park, S., & Cha, M. (2020). Lightweight and robust representation of economic scales from satellite imagery. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 428–436). AAAI press. https://doi.org/10.1609/aaai.v34i01.5379