An operational framework to automatically evaluate the quality of weather observations from third-party stations


Abstract

With increasing numbers of crowdsourced third-party automatic weather stations (TPAWS) established to fill gaps in official networks and to provide local weather information for various purposes, data quality is a major concern in promoting their use. For example, farms can use their local weather stations to support management decisions, to assess extreme weather events (e.g., frosts and heatwaves) and as evidence for insurance claims, but other users, such as the agricultural finance and insurance industries, need to have confidence in the reported local observations. Proper quality control and assessment are therefore necessary to reach mutual agreement on TPAWS observations. To derive a near real-time assessment (i.e., soon after the required data become available) for an operational system, we propose a simple, scalable and interpretable framework based on AI/statistical/machine-learning models. The framework constructs separate models for individual data sources from official providers and then produces the final assessment by fusing the individual models. Five weather variables were considered: rainfall, minimum temperature, maximum temperature, wind and relative humidity. We used various official data, such as official weather station observations, numerical weather prediction (NWP) forecasts, gridded climate analyses and radar data, as reference data to form different statistical tests for different weather variables. Individual statistical tests, including a domain test, spatio-temporal test, spatial test, NWP test, trend test and a test for sub-daily data, were developed to evaluate the quality of TPAWS observations from different aspects. The basic logic of the final assessment of TPAWS observations is as follows (Figure 1): (1) TPAWS observations are first checked by a domain test to determine whether each value is physically meaningful.
(2) After passing the domain test, five individual tests (the spatio-temporal test, spatial test, NWP test, trend test and test for sub-daily data) are carried out in parallel to assess the quality of TPAWS observations from different aspects. (3) The conditions associated with each test are checked to determine whether that test can be used in the final assessment. (4) The final assessment is derived by fusing the results of the tests whose assumptions and conditions are satisfied. The performance of the proposed framework is evaluated on synthetic data and demonstrated by applying it to a real TPAWS network.

Citation (APA)

Shao, Q., Li, M., Dabrowski, J. J., Bakar, S., Rahman, A., Powell, A., & Henderson, B. (2023). An operational framework to automatically evaluate the quality of weather observations from third-party stations. In Proceedings of the International Congress on Modelling and Simulation, MODSIM (pp. 846–852). Modelling and Simulation Society of Australia and New Zealand Inc. (MSSANZ). https://doi.org/10.36334/modsim.2023.shao114
