Learning Digital Terrain Models from Point Clouds: ALS2DTM Dataset and Rasterization-Based GAN

Abstract

Despite the popularity of deep neural networks in many domains, the extraction of digital terrain models (DTMs) from airborne laser scanning (ALS) point clouds remains challenging. This may be due to the lack of a dedicated large-scale annotated dataset and the data-structure discrepancy between point clouds and DTMs. To promote data-driven DTM extraction, this article collects from open sources a large-scale dataset of ALS point clouds and corresponding DTMs covering diverse urban, forested, and mountainous scenes. A baseline method, coined DeepTerRa, is proposed as a first attempt to train a deep neural network to extract DTMs directly from ALS point clouds via rasterization techniques. Extensive studies with well-established methods are performed to benchmark the dataset and analyze the challenges of learning to extract DTMs from point clouds. The experimental results show the promise of the agnostic data-driven approach, which achieves submetric error levels compared with methods designed specifically for DTM extraction. The data and source code are available online at https://lhoangan.github.io/deepterra/ for reproducibility and further related research.
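The rasterization step described in the abstract converts an irregular 3-D point cloud into a regular 2-D grid that a convolutional network can consume. A common, simple way to do this is to bin points by their (x, y) coordinates and keep one elevation statistic per cell. The sketch below is a hypothetical illustration of that idea using a per-cell minimum elevation (a frequent ground proxy); it is not the paper's actual DeepTerRa pipeline, and the function name and parameters are invented for illustration.

```python
import numpy as np

def rasterize_min_z(points, cell_size=1.0):
    """Rasterize an (N, 3) point cloud into a min-elevation grid.

    Hypothetical sketch: keeps the lowest z value per (x, y) cell,
    a rough ground-surface proxy. Empty cells are set to NaN.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)                       # grid anchored at the point cloud's min corner
    cells = np.floor((xy - origin) / cell_size).astype(int)
    shape = tuple(cells.max(axis=0) + 1)
    grid = np.full(shape, np.inf)
    # Unbuffered in-place minimum: each point lowers its cell's value
    np.minimum.at(grid, (cells[:, 0], cells[:, 1]), points[:, 2])
    grid[np.isinf(grid)] = np.nan                 # mark cells that received no points
    return grid
```

Other per-cell statistics (mean, percentile, point density) can be stacked as extra channels before feeding the raster to a network; the minimum is shown here only because it is the simplest terrain-oriented choice.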

Citation (APA)

Le, H. A., Guiotte, F., Pham, M. T., Lefevre, S., & Corpetti, T. (2022). Learning Digital Terrain Models from Point Clouds: ALS2DTM Dataset and Rasterization-Based GAN. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15, 4980–4989. https://doi.org/10.1109/JSTARS.2022.3182030
