Generating reference data for machine learning models of dust emissions is challenging because environmental conditions are perpetually dynamic. We created a new vision dataset to advance semantic segmentation for identifying and quantifying vehicle-induced dust clouds in images. We conducted field experiments on 10 unsealed road segments with different road surface materials under varying climatic conditions to capture vehicle-induced road dust. A digital single-lens reflex (DSLR) camera was used to photograph the dust clouds generated by a utility vehicle travelling at different speeds, and a research-grade dust monitor measured the corresponding traffic-induced dust emissions. A total of ~210,000 images were captured and refined to a set of ~7,000 images, which were manually annotated to produce binary masks for dust segmentation. Baseline performance on a subset of ~900 images from the dataset is evaluated using the U-Net architecture.
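
As an illustration of the binary dust-segmentation task described above, the sketch below trains a U-Net on image/mask pairs. It is not the authors' code: the directory layout (paired PNG files under images/ and masks/), the segmentation_models_pytorch U-Net with a ResNet-34 encoder, the 512-pixel resize, and the hyperparameters are all assumptions made for the example.

# Minimal sketch (not the authors' pipeline): binary dust segmentation with a
# U-Net, assuming RGB images and binary masks stored as same-named PNG files
# in parallel "images/" and "masks/" directories.
from pathlib import Path

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image
from torchvision.transforms.functional import resize
import segmentation_models_pytorch as smp


class DustDataset(Dataset):
    """Loads (RGB image, binary dust mask) pairs resized to a fixed size."""

    def __init__(self, root: str, size: int = 512):
        self.images = sorted(Path(root, "images").glob("*.png"))
        self.masks = sorted(Path(root, "masks").glob("*.png"))
        self.size = size

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, idx):
        img = read_image(str(self.images[idx])).float() / 255.0   # (3, H, W) in [0, 1]
        mask = read_image(str(self.masks[idx])).float() / 255.0   # (1, H, W), 1 = dust
        img = resize(img, [self.size, self.size])
        mask = resize(mask, [self.size, self.size])
        return img, (mask > 0.5).float()


def train_one_epoch(model, loader, optimizer, device):
    """One epoch of binary segmentation training with BCE-with-logits loss."""
    criterion = torch.nn.BCEWithLogitsLoss()
    model.train()
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        logits = model(images)            # (N, 1, H, W) raw scores
        loss = criterion(logits, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = smp.Unet(encoder_name="resnet34", in_channels=3, classes=1).to(device)
    loader = DataLoader(DustDataset("dust_dataset"), batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    train_one_epoch(model, loader, optimizer, device)

Any encoder-decoder segmentation network could stand in for the U-Net here; the point is only the image-to-binary-mask formulation the dataset is designed for.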