Deep learning models for groundwater level prediction based on delay penalty


Abstract

In irrigation agriculture, predicting the groundwater level (GWL) with deep learning models can help decision-makers coordinate surface water and groundwater use, supporting the sustainable development and utilization of groundwater. However, in long-sequence prediction, the predicted sequences often exhibit severe delays that reduce the usability of the results. In this paper, a new loss function is proposed to mitigate lag and over-smoothing in GWL prediction. GWL, meteorological, and pumping data were collected via an irrigation Internet of Things system in Hutubi County, Xinjiang. Based on Pearson correlation analysis, historical potential evapotranspiration (ET0), groundwater extraction, and GWL were chosen as predictors of GWL. Datasets were constructed with the proposed spatiotemporal data fusion method, and the best of six deep learning models was selected by comparing their predictive performance on these datasets. Finally, the mean-squared error (MSE) loss function was replaced with the proposed loss function. Comparison of the mean absolute error, MSE, and predicted sequence plots shows that the new loss function markedly reduces the time delay while achieving similar prediction accuracy.
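The abstract does not give the exact form of the delay-penalty loss. As a rough illustration only, the sketch below (class name DelayPenaltyLoss and weight lam are hypothetical, not from the paper) shows one common way to augment MSE so that lagged or over-smoothed forecasts are penalized: adding a term on the first differences of the sequence, which a delayed or flattened prediction reproduces poorly.

```python
import torch
import torch.nn as nn


class DelayPenaltyLoss(nn.Module):
    """Illustrative sketch, NOT the loss proposed in the paper.

    Combines the usual MSE level term with an MSE term on first
    differences along the forecast horizon. The level term fits the
    magnitude of the GWL series, while the difference term penalizes
    predictions that are over-smoothed or shifted in time, since such
    forecasts fail to match the target's step-to-step changes.
    """

    def __init__(self, lam: float = 1.0):
        super().__init__()
        self.lam = lam          # weight of the delay/over-smoothing penalty (assumed value)
        self.mse = nn.MSELoss()

    def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
        # y_pred, y_true: (batch, horizon) predicted and observed GWL sequences
        level_term = self.mse(y_pred, y_true)
        # first differences along the time axis
        diff_pred = y_pred[:, 1:] - y_pred[:, :-1]
        diff_true = y_true[:, 1:] - y_true[:, :-1]
        shape_term = self.mse(diff_pred, diff_true)
        return level_term + self.lam * shape_term
```

In training, such a loss would simply replace the MSE criterion of the chosen deep learning model, with lam tuned so that delay is reduced without degrading overall accuracy.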

Citation (APA)
Chenjia, Z., Xu, T., Zhang, Y., & Ma, D. (2024). Deep learning models for groundwater level prediction based on delay penalty. Water Supply, 24(2), 555–567. https://doi.org/10.2166/ws.2024.009
