
Calibration of Gridded Wind Speed Forecasts Based on Deep Learning


Funds: 
Supported by the National Key Research and Development Program of China (2021YFC3000905) and Key Innovation Team Fund of China Meteorological Administration (CMA2022ZD04).


  • The challenges of applying deep learning (DL) to correct deterministic numerical weather prediction (NWP) biases with non-Gaussian distributions are discussed in this paper. The DL UNet model is unable to correct strong-wind biases with traditional loss functions such as the MSE (mean square error), MAE (mean absolute error), and WMAE (weighted mean absolute error). To solve this, a new loss function embedded with a physical constraint, called MAE_MR (miss ratio), is proposed. The performance of UNet with MAE_MR is compared against UNet with the traditional loss functions, a statistical post-processing method (Kalman filter, KF), and a machine learning method (random forest, RF) in correcting wind speed biases in gridded forecasts from the ECMWF high-resolution model (HRES) over East China at lead times of 1–7 days. In addition to the MAE of the full wind speed, wind force categories based on the Beaufort scale are derived and evaluated. Compared to raw HRES winds, the MAE of winds corrected by UNet (MAE_MR) improves by 22.8% on average at 24–168 h, while UNet (MAE), UNet (WMAE), UNet (MSE), RF, and KF improve by 18.9%, 18.9%, 17.9%, 13.8%, and 4.3%, respectively. UNet with MSE, MAE, and WMAE corrects wind forces 1–3 and 4 well, but yields negative corrections for force 6 or higher. UNet (MAE_MR) overcomes this, improving accuracy for forces 1–3, 4, 5, and 6 or higher by 11.7%, 16.9%, 11.6%, and 6.4% over HRES. A case study of a strong wind event further shows that UNet (MAE_MR) outperforms traditional post-processing in correcting strong wind biases.


  • Surface wind forecasting (at 10 m) is one of the crucial weather forecast elements that impact many aspects of life, such as people’s daily activities, industrial production, transportation, wind energy resources, and so on. Accurate wind speed forecasting, especially for strong winds, is essential for improving warning and defense systems against extreme weather events (Yu and Zheng, 2020). Currently, wind speed forecasts are produced by using numerical weather prediction models (NWP; Shen et al., 2020). However, these models may be affected by initial field errors, limited resolution, incomplete physical parameterization schemes, and rough lower boundary conditions, which can lead to biases in the forecasts (Boeing, 2016).

    To improve the accuracy of wind speed forecasts based on NWP models, statistical post-processing techniques can be used (Jin et al., 2019). These techniques employ observed data and NWP model forecasts to correct biases in the forecasts. Common statistical post-processing techniques for wind speed forecasts include traditional methods and machine learning (ML) methods. The model output statistics (MOS) method is a classic deterministic post-processing technique (Glahn and Lowry, 1972), but linear regression methods may be less effective at correcting wind speed forecasts because of the non-Gaussian distribution of wind speed. Other techniques have achieved some success in correcting bias, including Kalman filtering (KF; Cui et al., 2012), anomaly numerical correction with observations (Peng et al., 2013), and error analysis (Yang L. et al., 2022). In recent years, machine learning-based post-processing techniques have been widely used in wind speed forecasting. Compared to traditional methods, this technology has significant advantages in processing large amounts of data and identifying nonlinear relationships (Li et al., 2019; Men et al., 2019; Sun et al., 2019; Cho et al., 2020; Hewson and Pillosu, 2021). For example, machine learning techniques such as random forests (Xia et al., 2020) and support vector machines (Teng et al., 2019; Wang et al., 2020) have effectively improved wind field forecasts. However, traditional post-processing methods and traditional machine learning methods mainly focus on single-point modeling and are spatially constrained, which makes them ill-suited to large-scale, high-resolution gridded forecasts: building a model for every grid point would require enormous computational resources. Therefore, there is an urgent need to develop statistical post-processing techniques for gridded forecasts (Zeng et al., 2019; Dabernig et al., 2020).

    DL technology provides new opportunities for the statistical post-processing of meteorological elements (Zhou et al., 2019; Chantry et al., 2021; Haupt et al., 2021; Vannitsem et al., 2021; Yang X. et al., 2022). In recent years, an increasing number of studies have applied DL to the statistical post-processing of forecasts (Dupuy et al., 2021; Zhou et al., 2022). Compared to traditional machine learning, DL has greater potential for mining nonlinear relationships in big data and for spatial modeling (Hinton et al., 2006; LeCun et al., 2015). For deterministic forecast statistical post-processing, previous studies have used DL to improve wind speed forecasting. Salazar et al. (2022) developed a short-term surface wind speed post-processing model based on an artificial neural network (ANN) and effectively improved wind speed forecasts over complex terrain by incorporating multiple predictors. Sun et al. (2019) applied a convolutional neural network (CNN) model to improve 10-m wind speed forecasts in North China. Han et al. (2021) applied the UNet model (Ronneberger et al., 2015), with an encoder–decoder structure, to build a statistical post-processing model for gridded wind field forecasts in North China using data from ECMWF. Building on Han et al. (2021), Zhang et al. (2022) further improved wind field correction by adding dense convolutional blocks (Huang et al., 2017) to the UNet model. Xiang et al. (2022) used the transformer to construct a spatio-temporal transformer UNet (ST-UNet) model for temperature and wind speed bias correction, outperforming CNN-based correction models. However, previous studies on DL-based wind speed correction have rarely incorporated physical constraints into the model.
Additionally, while most studies evaluate the corrected wind speed performance by analyzing errors across all wind speed samples, few have evaluated the results across different wind forces on the Beaufort scale, particularly for strong winds, which is crucial in determining the effectiveness of the wind speed correction model.

    Incorporating physical constraints into DL models and addressing data imbalance are essential for improving weather forecasting with DL. DL technology does not rely on physical principles and may produce results that violate basic physical laws (Reichstein et al., 2019). Therefore, incorporating physical constraints into deep learning models is a focus of this technology in forecasting applications (Kashinath et al., 2021; Willard et al., 2023). Previous research has explored this problem by defining DL loss functions based on physical constraints (Daw et al., 2022) or by designing model architectures that strictly enforce physical constraints (Zanetta et al., 2023). However, such work has not yet been carried out in wind speed forecasting. In addition, wind speed samples exhibit non-Gaussian distribution characteristics that must be addressed. To tackle these issues, this study designs a loss function that is incorporated into the wind speed correction model as a physical constraint, solving the problem that conventional loss functions lack the ability to correct strong winds. At the same time, a correction scheme based on wind speed forecast errors mitigates the data imbalance caused by the skewed distribution of wind speed samples.

    In this paper, the data, statistical post-processing scheme, and methods are introduced in Section 2. In Section 3, an optimization of loss functions that incorporates physical constraint is proposed. In Section 4, the results for loss functions and methods for wind forces on Beaufort scale are provided, and a case study of a strong wind event is analyzed. The interpretability of the DL model is also discussed. Finally, conclusions are presented in Section 5.

    This study utilizes ECMWF Reanalysis version 5 (ERA5) as the source of the actual wind speed. ERA5 is a global climate reanalysis provided by the ECMWF, with a long historical record and relatively high spatial and temporal resolutions. It provides hourly estimates of a variety of atmospheric, ocean-wave, and land-surface quantities at a spatial resolution of 0.25° × 0.25°. The 10-m zonal wind (u10) and 10-m meridional wind (v10) at 0000 and 1200 UTC during 2015–2020 are used as the reference for the surface wind field. DL is employed here to study wind speed statistical post-processing, which will be applied to observed wind fields in future work.

    ECMWF global forecasts are widely recognized as highly accurate (Bauer et al., 2015) and are used worldwide by meteorological agencies and related industries. In this study, the deterministic forecast product from ECMWF’s high-resolution model (HRES), issued twice a day at 0000 and 1200 UTC, is utilized. The raw spatial resolution is 0.125° × 0.125° for the surface layer and 0.25° × 0.25° for upper layers. The statistical post-processing is applied to the 24–168-h (1–7-day) 10-m wind speed forecasts of HRES initialized at 0000 and 1200 UTC during 2015–2020. The selected predictors include upper-air variables on 8 isobaric layers and various surface variables at matching lead times (Table 1). Additionally, the upper-air and surface wind fields are processed by converting the zonal wind (u) and meridional wind (v) into wind speed and direction, which serve as both the correction target and predictors.

    Table 1. Predictors used in the RF and deep learning models
    Category (resolution) | Predictors | Pressure layer (hPa)
    Upper air (0.25° × 0.25°) | Divergence, geopotential height, potential vorticity, specific humidity, relative humidity, temperature, wind speed, wind direction, vertical velocity | 200, 300, 400, 500, 700, 850, 925, 1000
    Surface (0.125° × 0.125°) | Convective available potential energy, convective precipitation, 2-m temperature, 2-m dewpoint temperature, 0°C isothermal level, forecast albedo, total cloud cover, large-scale precipitation, 2-m minimum temperature in the last 6 hours, 2-m maximum temperature in the last 6 hours, mean sea level pressure, sea surface temperature, total cloud cover, total column water, total column water vapour, total precipitation, 10-m wind speed, 10-m wind direction, 100-m wind speed, 100-m wind direction | –
    Terrain (1 km × 1 km) | Altitude, slope, aspect | –

    The selected surface geographic data include Digital Elevation Model (DEM) data (Fig. 1) and slope and aspect (Table 1). The DEM data come from the Shuttle Radar Topography Mission (SRTM) at 1-km resolution. The slope and aspect of the study area are calculated from the DEM.

    Fig  1.  Topography distribution and grid points in the study area (28.125°–44°N, 109°–124.875°E).

    This study evaluates gridded wind speed forecasts using both the MAE and the accuracy of wind force categories based on the Beaufort scale. The Beaufort scale is a widely used system for measuring wind speed. Here, it classifies wind force into four categories: 1–3, 4, 5, and 6 or higher (Table 2). Winds reaching or exceeding force 6 are classified as strong winds.

    Table 2. The Beaufort wind force scale
    Force | 1–3 | 4 | 5 | 6 or higher
    Wind speed (m s−1) | 0.3–5.4 | 5.5–7.9 | 8.0–10.7 | ≥10.8
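    The category boundaries in Table 2 map directly to code. A minimal NumPy sketch (the integer category labels 0–3 are my own convention, not the paper's):

```python
import numpy as np

# Beaufort bin edges from Table 2 (m s-1); categories 0..3 stand for
# forces 1-3, 4, 5, and 6 or higher
BEAUFORT_EDGES = [5.5, 8.0, 10.8]

def wind_force_category(speed):
    """Map wind speed (m s-1) to the paper's four force categories."""
    return np.digitize(speed, BEAUFORT_EDGES)

cats = wind_force_category(np.array([3.0, 6.0, 9.0, 12.0]))  # -> [0, 1, 2, 3]
```

`np.digitize` uses half-open bins, so the boundary 5.5 m s−1 falls into force 4, matching the table.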

    The selected study area covers 28.125°–44°N, 109°–124.875°E (Fig. 1), including North China, the Huaihe and Jianghuai regions, the middle–lower reaches of the Yangtze River, and adjacent marine areas of China. The spatial resolution of the 10-m wind speed statistical post-processing model is 0.125° × 0.125°, with a total of 16,384 (128 × 128) grid points. The HRES upper-air forecasts, ERA5 10-m wind speed, and geographic data are interpolated to a uniform 0.125° × 0.125° grid using bilinear interpolation.

    Since the NWP model shows independent systematic biases at different forecast lead times, independent wind speed statistical post-processing models and datasets must be created for each lead time. Here, seven models and datasets are constructed for the study area, one for each 24-h interval from 24 to 168 h (1–7 days), using data from 2015 to 2020. Each dataset consists of three parts: raw 10-m wind speed forecasts from HRES, 10-m wind speed forecast errors, and predictors. The forecast error is the difference between the HRES raw wind speed forecast and the ERA5 wind speed at the same spatiotemporal grid points.

    In Table 1, 95 variables are selected as predictors, including 9 predictors at 8 upper-air layers of HRES, 20 surface predictors, and 3 geographic predictors. As the predictors have different dimensions, data standardization is required to eliminate magnitude differences.

    $$\hat{x}_i(t)=\frac{x_i(t)-\bar{x}_i(t)}{\sigma_i(t)}, \quad (1)$$

    where $x_i(t)$ is the predictor at the $i$-th grid point at lead time $t$, $\bar{x}_i(t)$ is the mean value of the predictor at the $i$-th grid point at lead time $t$, and $\sigma_i(t)$ is the corresponding standard deviation. Seven independent RF and UNet models are built for the 24–168-h lead times, and for each lead time the mean and standard deviation are calculated separately.
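    The per-grid-point standardization of Eq. (1) can be sketched in NumPy (the array layout, samples × grid points, is an assumption; in the paper the statistics are computed per grid point and per lead time on the training set):

```python
import numpy as np

def standardize(x, mean, std):
    """Eq. (1): remove magnitude differences between predictors.

    x has shape (n_samples, n_grid); mean/std are per-grid-point
    training statistics broadcast across samples.
    """
    return (x - mean) / std

# hypothetical toy predictor field: 4 samples over 3 grid points
x = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, 12.0]])
x_hat = standardize(x, x.mean(axis=0), x.std(axis=0))
```

After standardization each grid-point column has zero mean and unit standard deviation.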

    The DL model divides the dataset into training, validation, and test sets. The training set trains the model, the validation set validates model training, and the test set evaluates model performance. For each dataset, 2015–2018 data are the training set, 2019 is the validation set, and 2020 is the test set.

    Common statistical post-processing methods for NWP forecasts establish a relationship between predictors and target quantities through regression equations. However, the probability density distribution of wind speed samples is non-Gaussian, so these methods may fit most samples well but struggle with outliers in imbalanced samples. To address the imbalance caused by using wind speed directly, this study feeds the forecast error into the correction modeling. The formula is:

    $$\varepsilon_i(t)=F_i(t)-O_i(t), \quad (2)$$

    where the forecast error $\varepsilon_i(t)$ at each lead time $t$ and grid point $i$ is the difference between the forecast $F_i(t)$ and the actual wind speed $O_i(t)$.
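    A toy example of the error target of Eq. (2) and the corresponding correction step (all values hypothetical; note the sign convention, under which a predicted error is subtracted from the raw forecast):

```python
import numpy as np

# Eq. (2): forecast error = HRES forecast minus ERA5 reference
hres = np.array([4.0, 6.5, 11.2])   # hypothetical raw forecasts (m s-1)
era5 = np.array([3.5, 7.0, 12.0])   # matching ERA5 wind speeds
error = hres - era5                  # training target for the models

# at inference, the predicted error is subtracted from the raw forecast
predicted_error = np.array([0.4, -0.6, -0.7])  # hypothetical model output
corrected = hres - predicted_error
```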

    This scheme based on forecast error has three main advantages. First, it alleviates the severe imbalance caused by the skewed wind speed distribution, as the forecast error distribution is closer to normal. Second, the forecast error is relatively stable in time and space, eliminating instability from seasonal and temporal changes; as a result, a unified model can be built for the entire year using random forests and DL. Third, NWP models have independent systematic biases at different initialization times, and traditional methods typically correct forecasts from different initialization times separately, whereas forecast errors are consistent across initialization times (Kirkwood et al., 2021). Therefore, the dataset based on forecast error includes forecasts with the same lead time but different NWP initialization times, significantly increasing the available data. Previous studies have used forecast errors for statistical post-processing of NWP forecasts with good results (Dabernig et al., 2017; Kirkwood et al., 2021).

    Figure 2 shows the ERA5 wind speed samples and the wind speed forecast errors at the 24-h lead time in the study area from 2015 to 2020. The probability density of the wind speed samples is significantly skewed (Fig. 2a), ranging mainly from 0 to 8 m s−1 with a mean of 3.25 m s−1 and a variance of 2.40 m s−1. Only 1.5% of the wind speed samples are strong winds. The forecast error distribution is closer to normal, lying mainly between −5 and 5 m s−1, with a mean of 0.2 m s−1 and a variance of 0.81 m s−1. Figure 2b shows the forecast error characteristics by wind force. The error means are near zero for each force, but the variance increases significantly with wind force: the error variances for forces 1–3, 4, 5, and 6 or higher are 0.89, 1.10, 1.13, and 1.26 m s−1, respectively. As wind force increases, more large errors occur. As shown in Table 3, during 2015–2020 the mean forecast error remains around 0.22 m s−1 for total wind speed at 24–168-h lead times. However, the error variance gradually increases with lead time, indicating more large errors at longer lead times. For strong winds, both the mean and variance grow markedly in magnitude with lead time, suggesting greater imbalance in strong wind errors at longer lead times. Therefore, the forecast error scheme reduces the wind speed sample imbalance, but large errors persist at longer lead times and higher wind speeds, especially for strong winds.

    Fig  2.  Sample of wind speed data from ERA5 and the forecast error for 24-h lead time during 2015–2020 for (a) the probability density distribution and (b) a box plot of the forecast error distribution for wind forces on the Beaufort scale.
    Table  3.  Statistics of the wind speed forecast error for total wind speed and strong wind samples for 24–168-h lead times from 2015 to 2020
    Forecast lead time (h) Total wind speed sample Strong wind sample
    Mean value (m s−1) Variance (m s−1) Mean value (m s−1) Variance (m s−1)
    24 0.21 0.94 −0.07 1.26
    48 0.22 1.11 −0.41 1.72
    72 0.23 1.26 −0.70 2.12
    96 0.23 1.44 −1.00 2.38
    120 0.24 1.60 −1.45 2.62
    144 0.25 1.79 −1.96 3.00
    168 0.22 1.97 −2.86 3.27
     | Show Table
    DownLoad: CSV

    KF is a widely used statistical post-processing method in meteorological applications (Cui et al., 2012). In this study, KF serves as the baseline correction method. It computes the bias of the NWP at different initialization times and forecast lead times and is also known as the decaying-average method:

    $$B_i(t)=(1-\omega)\times B_i(t-1)+\omega\times\big(F_i(t)-O_i(t)\big), \quad (3)$$
    $$\hat{F}_i(t)=F_i(t)-B_i(t), \quad (4)$$

    where $B_i(t)$ is the bias at the $i$-th grid point for lead time $t$, $B_i(t-1)$ is the bias at the $i$-th grid point for the previous lead time, $F_i(t)$ is the forecast at the same initial time and lead time for the $i$-th grid point, $O_i(t)$ is the observation at the $i$-th grid point, and $\hat{F}_i(t)$ is the corrected forecast. The weight coefficient $\omega$ determines the influence of recent versus older biases on the current correction; a larger $\omega$ weights recent biases more strongly.

    The KF method directly corrects wind speed bias without requiring predictors. It does not apply the forecast error scheme, nor account for systematic differences at initialization times. Here, KF corrects 24–168-h winds for 0000 and 1200 UTC initializations in the test set, averaging the results. Sensitivity tests determine an optimal ω of 0.02.
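    The decaying-average update of Eqs. (3)–(4) can be sketched as follows (a minimal NumPy sketch; running the bias update forward in time and the samples × grid-points layout are assumptions about the implementation):

```python
import numpy as np

def decaying_average_correction(forecasts, observations, omega=0.02):
    """Eqs. (3)-(4): running per-grid-point bias estimate.

    forecasts, observations: shape (n_times, n_grid), in time order.
    Returns the bias-corrected forecasts.
    """
    bias = np.zeros(forecasts.shape[1])
    corrected = np.empty_like(forecasts)
    for t in range(forecasts.shape[0]):
        # blend the previous bias with the newest forecast error
        bias = (1 - omega) * bias + omega * (forecasts[t] - observations[t])
        corrected[t] = forecasts[t] - bias
    return corrected

# hypothetical constant +1 m/s bias over 3 grid points, 20 time steps
f = np.full((20, 3), 3.0)
o = np.full((20, 3), 2.0)
c = decaying_average_correction(f, o, omega=0.5)
```

With a constant bias, the estimate converges to the true bias and the corrected forecast approaches the observations; the paper's much smaller ω = 0.02 simply makes that convergence slower and smoother.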

    RF is a machine learning ensemble method combining multiple decision tree “learners” for improved prediction and generalization (Breiman, 2001). Ensemble learning uses multiple learners, either serially with dependencies or in parallel independently. RF is a parallel ensemble of multiple decision trees. Each tree is independent, with the RF output averaging all trees. RF creates each tree by: 1) random sampling with replacement from the dataset, and 2) random sampling of feature predictors to split nodes. RF avoids overfitting through predictor sampling, unlike traditional decision trees.

    RF models the 10-m wind speed forecast error. The inputs are the dataset predictors, and the target is the forecast error. As RF performs single-point modeling, one model is built per grid point. Using the 2015–2018 training set, an error correction model is built for each grid point at each lead time from 24 to 168 h and evaluated on the 2020 test set. Sensitivity tests determine the key RF parameters: the Gini index for feature selection, log2 of the feature count as the maximum number of features per split, 200 estimators, and default values for the other parameters.
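    A single-grid-point error model along these lines might look like the following scikit-learn sketch (synthetic data; note the paper reports the Gini index for feature selection, which scikit-learn only provides for classifiers, so the default squared-error criterion is used here for the regression target):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 95))                        # 95 predictors at one grid point
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=200)   # synthetic forecast error target

# settings from the paper: 200 trees, log2 max features per split
rf = RandomForestRegressor(n_estimators=200, max_features="log2", random_state=0)
rf.fit(X, y)
pred = rf.predict(X[:5])   # predicted forecast errors for 5 samples
```

One such model per grid point (16,384 in total per lead time) is what makes single-point methods costly at grid scale, as noted in the introduction.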

    A UNet CNN model performs the statistical post-processing of 10-m wind speeds. Figure 3 shows the DL bias correction flowchart. UNet conducts pixel-wise semantic segmentation by region. For the study area, the model input is the predictors (128 × 128 × 95), and the target is the 10-m wind speed forecast error (128 × 128) for each lead time. Seven models are built for the 24–168-h lead times. On the 2020 test set, the model outputs predict the errors, which are subtracted from the raw HRES forecast, following Eq. (2), to obtain the corrected wind speeds.

    Fig  3.  The flowchart of UNet bias correction model employed in this study.
    Fig  4.  Relationship between the weight coefficient of MAE_MR and the accuracy of wind speed forecasting with an average lead time ranging from 24 to 168 h. The error bars represent the 95% confidence intervals of the results.
    Fig  5.  (a) MAE (m s−1) and (b) MAE improvement percentage (%) of wind speed forecasts for the KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h lead times, compared to ERA5 wind speed in 2020.

    UNet has an encoder–decoder structure (Fig. 3). The encoder extracts features from the input predictors using convolutional modules, max-pooling layers, and dropout. The decoder upsamples and concatenates channels to reconstruct the feature-map size. In the encoder, predictors pass through convolutional modules, each with two convolutional layers, batch normalization, and ReLU activation. The convolutional layers use 3 × 3 kernels. After each module, a max-pooling layer downsamples the features by half (stride 2), and a dropout layer (rate 0.3) prevents overfitting. The convolutional modules have 64, 128, 256, and 512 filters for the four UNet layers. After encoding, the 128 × 128 × 95 input becomes an 8 × 8 × 1024 feature map. In the decoder, the features are upsampled by deconvolution and concatenated with those of the matching encoder layer. The combined features pass through a convolutional module and ReLU activation to output the 128 × 128 × 1 error prediction. The predicted error is then subtracted from the raw HRES wind speed forecast to correct the wind speed.
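    The encoder arithmetic described above can be verified with a small sketch (pure Python; the layer behavior is summarized rather than implemented, and the bottleneck width is taken from the stated 8 × 8 × 1024 feature map):

```python
def encoder_shapes(h=128, w=128, c_in=95, filters=(64, 128, 256, 512)):
    """Trace feature-map shapes through the UNet encoder described above.

    Each convolutional module (padded 3x3 convs) keeps the spatial size;
    the stride-2 max pooling that follows halves it.
    """
    shapes = [(h, w, c_in)]
    for f in filters:
        h, w = h // 2, w // 2          # pooling after the module
        shapes.append((h, w, f))
    shapes.append((h, w, filters[-1] * 2))  # bottleneck doubles the channels
    return shapes

shapes = encoder_shapes()
# (128, 128, 95) -> ... -> (8, 8, 512) -> bottleneck (8, 8, 1024)
```

Four halvings take 128 down to 8, which is why exactly four encoder layers are needed to reach the stated 8 × 8 × 1024 bottleneck.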

    UNet training runs for up to 100 epochs, with early stopping after 10 epochs without validation-loss improvement. Optimization uses Adam (Kingma and Ba, 2015) with a learning rate of 0.001 and ReduceLROnPlateau, which decreases the rate by a factor of 0.7 after 5 epochs without improvement. For model robustness, the training data are randomly shuffled and the results are averaged over 10 runs.
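    The training schedule above can be sketched as a minimal stand-in for the framework callbacks (a sketch only; how the LR-decay counter interacts with the early-stopping counter is an assumption):

```python
class PlateauSchedule:
    """Minimal sketch of the schedule described above: ReduceLROnPlateau-style
    decay (factor 0.7 after 5 stalled epochs) plus early stopping after 10
    epochs without validation-loss improvement."""

    def __init__(self, lr=1e-3, factor=0.7, lr_patience=5, stop_patience=10):
        self.lr, self.factor = lr, factor
        self.lr_patience, self.stop_patience = lr_patience, stop_patience
        self.best = float("inf")
        self.stalled = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return False when training should stop."""
        if val_loss < self.best:
            self.best, self.stalled = val_loss, 0
        else:
            self.stalled += 1
            if self.stalled % self.lr_patience == 0:
                self.lr *= self.factor   # decay on every 5th stalled epoch
        return self.stalled < self.stop_patience
```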

    To comprehensively assess the efficacy of the various bias-correction techniques, the MAE is used to quantify the error between the total wind speed forecast samples and ERA5. The correction effect on wind forces on the Beaufort scale is evaluated using three metrics: wind speed accuracy (SC), false alarm rate (FAR), and miss ratio (MR). Wind speed accuracy is defined as the ratio of the number of grid points whose forecast and actual wind forces agree to the total number of grid points. A lower MAE and a higher accuracy indicate a more accurate wind speed forecast.

    The formula for calculating MAE is:

    $$\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}|F_i-O_i|, \quad (5)$$

    where $F_i$ is the wind speed forecast at the $i$-th grid point, $O_i$ is the actual wind speed at the $i$-th grid point, and $N$ is the total number of grid points.

    The formula for calculating wind speed accuracy, false alarm rate, and miss ratio are:

    $$\mathrm{SC}=\frac{SC_{si}}{N}, \quad (6)$$
    $$\mathrm{FAR}=\frac{N_B}{N_A+N_B}, \quad (7)$$
    $$\mathrm{MR}=\frac{N_C}{N_A+N_C}, \quad (8)$$

    where $SC_{si}$ is the number of grid points whose forecast and actual wind forces on the Beaufort scale agree, $N$ is the total number of grid points, $N_A$ is the number of grid points with correctly forecasted wind force, $N_B$ is the number of false-alarm grid points, and $N_C$ is the number of missed grid points.
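    Eqs. (6)–(8) can be sketched in NumPy (the per-category treatment of FAR and MR, with SC computed over all categories at once, is an assumption consistent with Table 5):

```python
import numpy as np

def accuracy(fcst_cat, obs_cat):
    """Eq. (6): fraction of grid points with matching wind force category."""
    return np.mean(fcst_cat == obs_cat)

def category_scores(fcst_cat, obs_cat, k):
    """Eqs. (7)-(8) for one Beaufort category k.

    N_A = hits, N_B = false alarms, N_C = misses for category k.
    """
    hits = np.sum((fcst_cat == k) & (obs_cat == k))
    false_alarms = np.sum((fcst_cat == k) & (obs_cat != k))
    misses = np.sum((fcst_cat != k) & (obs_cat == k))
    far = false_alarms / (hits + false_alarms)
    mr = misses / (hits + misses)
    return far, mr

fc = np.array([0, 1, 1, 2])   # hypothetical forecast categories
ob = np.array([0, 1, 2, 2])   # hypothetical observed categories
```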

    The UNet-based wind speed correction model is posed as a supervised regression problem, using common loss functions such as MAE and MSE. However, UNet trained with MAE or MSE effectively corrects lighter winds but struggles with strong wind biases. This results from high miss ratios when predicting strong winds, attributable to the large errors in the strong wind data: to minimize MAE or MSE, UNet tends to flatten the peaks of large-error predictions toward the mean.

    Fig  6.  Accuracy of wind forces (a) 1–3, (b) 4, (c) 5, and (d) 6 or higher compared between HRES raw forecasts and corrected wind using KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h lead times in 2020. The bar charts represent the mean values obtained from 10 independent training runs, with the short vertical lines indicating the standard deviation.
    Table  4.  MAE of wind speed forecasts using the KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h lead times in 2020 compared to ERA5 wind speed
    Forecast result Lead time (h)
    24 48 72 96 120 144 168
    HRES 0.70 0.82 0.93 1.04 1.15 1.28 1.41
    KF 0.65 0.77 0.88 1.00 1.11 1.24 1.37
    RF 0.56 0.67 0.79 0.89 1.09 1.19 1.23
    UNet (MSE) 0.53 0.64 0.76 0.87 1.05 1.10 1.18
    UNet (MAE) 0.52 0.63 0.75 0.85 1.02 1.12 1.19
    UNet (WMAE) 0.51 0.63 0.74 0.86 0.98 1.08 1.16
    UNet (MAE_MR) 0.51 0.60 0.71 0.82 0.93 1.02 1.13
     | Show Table
    DownLoad: CSV
    Table  5.  SC, FAR, and MR of wind forces 1–3, 4, 5, and 6 or higher compared between HRES raw forecasts and corrected wind using KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h forecast lead times. Bold indicates the best result among the methods
    Lead time (h) SC FAR MR
    Wind force 1–3 4 5 ≥6 1–3 4 5 ≥6 1–3 4 5 ≥6
    24 HRES 0.621 0.487 0.533 0.541 0.010 0.209 0.231 0.279 0.011 0.128 0.137 0.206
    KF 0.637 0.514 0.552 0.553 0.009 0.168 0.193 0.235 0.029 0.146 0.159 0.235
    RF 0.697 0.552 0.545 0.525 0.011 0.114 0.140 0.159 0.001 0.187 0.219 0.325
    UNet (MSE) 0.721 0.551 0.563 0.528 0.011 0.097 0.115 0.122 0.002 0.177 0.217 0.356
    UNet (MAE) 0.731 0.565 0.572 0.526 0.011 0.098 0.112 0.112 0.002 0.169 0.220 0.366
    UNet (WMAE) 0.712 0.568 0.576 0.53 0.011 0.107 0.120 0.132 0.001 0.156 0.207 0.348
    UNet (MAE_MR) 0.727 0.577 0.60 0.589 0.011 0.101 0.113 0.117 0.002 0.163 0.216 0.323
    48 HRES 0.576 0.42 0.462 0.469 0.010 0.247 0.275 0.308 0.011 0.165 0.197 0.278
    KF 0.589 0.439 0.465 0.471 0.009 0.210 0.241 0.270 0.030 0.183 0.220 0.313
    RF 0.653 0.463 0.469 0.434 0.011 0.153 0.168 0.160 0.001 0.223 0.295 0.431
    UNet (MSE) 0.665 0.465 0.473 0.421 0.011 0.144 0.156 0.142 0.002 0.216 0.296 0.467
    UNet (MAE) 0.677 0.473 0.479 0.431 0.011 0.144 0.157 0.138 0.001 0.208 0.283 0.436
    UNet (WMAE) 0.636 0.461 0.480 0.433 0.011 0.120 0.118 0.087 0.003 0.247 0.354 0.537
    UNet (MAE_MR) 0.664 0.489 0.515 0.497 0.011 0.146 0.161 0.144 0.000 0.203 0.232 0.368
    72 HRES 0.54 0.367 0.395 0.384 0.010 0.280 0.323 0.385 0.011 0.202 0.253 0.339
    KF 0.551 0.382 0.404 0.385 0.010 0.243 0.289 0.352 0.030 0.222 0.277 0.372
    RF 0.606 0.394 0.392 0.316 0.011 0.181 0.216 0.266 0.001 0.279 0.369 0.487
    UNet (MSE) 0.619 0.392 0.363 0.296 0.011 0.157 0.165 0.155 0.002 0.287 0.426 0.595
    UNet (MAE) 0.618 0.393 0.363 0.285 0.011 0.143 0.162 0.167 0.001 0.302 0.423 0.593
    UNet (WMAE) 0.601 0.392 0.368 0.293 0.011 0.156 0.170 0.168 0.002 0.287 0.411 0.589
    UNet (MAE_MR) 0.605 0.418 0.427 0.413 0.011 0.161 0.183 0.203 0.000 0.174 0.255 0.402
    96 HRES 0.506 0.322 0.346 0.330 0.010 0.321 0.363 0.454 0.011 0.237 0.297 0.391
    KF 0.515 0.33 0.359 0.335 0.010 0.287 0.335 0.402 0.031 0.260 0.321 0.422
    RF 0.556 0.352 0.326 0.257 0.011 0.217 0.236 0.267 0.001 0.327 0.445 0.589
    UNet (MSE) 0.576 0.356 0.271 0.220 0.011 0.186 0.178 0.136 0.002 0.369 0.553 0.719
    UNet (MAE) 0.584 0.359 0.295 0.214 0.011 0.195 0.202 0.154 0.001 0.340 0.505 0.685
    UNet (WMAE) 0.569 0.360 0.312 0.256 0.011 0.210 0.216 0.219 0.001 0.321 0.483 0.676
    UNet (MAE_MR) 0.572 0.367 0.361 0.347 0.011 0.233 0.259 0.313 0.000 0.203 0.325 0.458
    120 HRES 0.477 0.282 0.286 0.277 0.011 0.350 0.412 0.501 0.011 0.277 0.367 0.450
    KF 0.484 0.299 0.299 0.281 0.010 0.321 0.386 0.418 0.032 0.298 0.390 0.469
    RF 0.502 0.31 0.226 0.208 0.011 0.293 0.311 0.405 0.001 0.391 0.564 0.700
    UNet (MSE) 0.513 0.292 0.279 0.190 0.011 0.292 0.325 0.364 0.009 0.324 0.465 0.586
    UNet (MAE) 0.523 0.285 0.266 0.181 0.010 0.263 0.295 0.291 0.011 0.362 0.512 0.682
    UNet (WMAE) 0.481 0.293 0.282 0.213 0.009 0.241 0.255 0.302 0.045 0.423 0.618 0.713
    UNet (MAE_MR) 0.498 0.327 0.313 0.287 0.011 0.289 0.321 0.392 0.002 0.229 0.368 0.494
    144 HRES 0.445 0.24 0.24 0.244 0.011 0.400 0.470 0.562 0.012 0.330 0.413 0.490
    KF 0.449 0.244 0.244 0.248 0.010 0.373 0.448 0.538 0.033 0.351 0.433 0.505
    RF 0.481 0.264 0.184 0.128 0.011 0.366 0.373 0.361 0.001 0.413 0.643 0.834
    UNet (MSE) 0.512 0.255 0.195 0.086 0.011 0.279 0.278 0.215 0.001 0.426 0.649 0.859
    UNet (MAE) 0.520 0.259 0.205 0.106 0.011 0.285 0.292 0.252 0.001 0.417 0.618 0.831
    UNet (WMAE) 0.477 0.256 0.231 0.181 0.011 0.305 0.344 0.346 0.003 0.398 0.560 0.739
    UNet (MAE_MR) 0.487 0.284 0.264 0.25 0.011 0.322 0.353 0.393 0.001 0.294 0.468 0.602
    168 HRES 0.419 0.208 0.201 0.174 0.011 0.443 0.531 0.671 0.011 0.373 0.463 0.588
    KF 0.422 0.214 0.210 0.179 0.010 0.419 0.508 0.645 0.033 0.398 0.492 0.608
    RF 0.464 0.228 0.163 0.101 0.011 0.320 0.346 0.418 0.001 0.488 0.681 0.845
    UNet (MSE) 0.481 0.234 0.161 0.060 0.011 0.304 0.287 0.327 0.004 0.506 0.737 0.900
    UNet (MAE) 0.488 0.236 0.175 0.078 0.011 0.273 0.259 0.172 0.002 0.567 0.785 0.924
    UNet (WMAE) 0.458 0.231 0.199 0.146 0.011 0.319 0.325 0.442 0.002 0.429 0.588 0.762
    UNet (MAE_MR) 0.465 0.253 0.25 0.194 0.011 0.308 0.392 0.417 0.001 0.334 0.513 0.683

    To address this issue, the weighted MAE (WMAE) is employed as the loss function, applying a different weight to each wind force. The formula for WMAE is:

    $$\mathrm{WMAE}=\frac{1}{\sum_{i=1}^{N}w_i}\sum_{i=1}^{N}w_i\,|F_i-O_i|, \quad (9)$$

    where $F_i$ is the wind speed forecast calculated by the UNet model, $O_i$ is the ERA5 wind speed, $N$ is the total number of grid points, and $w_i$ is the weight coefficient for the wind force category (1–3, 4, 5, or 6 or higher) at the $i$-th grid point. Sensitivity experiments are conducted to determine the weight coefficients. To improve strong wind correction, larger weights are applied to forces 5 and 6 or higher; however, increasing the weights on strong wind errors can reduce the accuracy at low wind speeds. After extensive testing, weights of 1, 2, 5, and 10 are selected for forces 1–3, 4, 5, and 6 or higher, balancing the two effects.
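    Eq. (9) might be implemented as follows (a sketch: assigning the weight by the observed category, and normalizing by the weight sum, are both assumptions about the published formula):

```python
import numpy as np

def wmae(forecast, obs,
         weights=(1.0, 2.0, 5.0, 10.0),   # paper's weights for forces 1-3, 4, 5, >=6
         edges=(5.5, 8.0, 10.8)):          # Beaufort bin edges from Table 2
    """Weighted MAE per Eq. (9), with per-sample weights chosen by
    the observed Beaufort category."""
    w = np.asarray(weights)[np.digitize(obs, edges)]
    return np.sum(w * np.abs(forecast - obs)) / np.sum(w)

# hypothetical pair: one light-wind point (w=1), one force-4 point (w=2)
loss = wmae(np.array([3.0, 6.0]), np.array([4.0, 7.0]))
```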

    A new loss function called MAE_MR is proposed to address UNet’s inability to correctly forecast strong winds with conventional losses. The underlying idea of this loss function is to augment the weight assigned to missed samples, i.e., samples that are under-forecasted, during model training.

    \mathrm{MAE\_MR}=\dfrac{1}{N}\sum_{i=1}^{N}\left|\hat{\varepsilon}_i-\varepsilon_i\right|+\alpha\times\dfrac{1}{m}\sum_{j=1}^{m}\left|\hat{F}_j-O_j\right|, \quad j\in\{i\mid O_i>F_i\}, \qquad (10)
    \hat{F}_j=F_j-\hat{\varepsilon}_j. \qquad (11)

    MAE_MR consists of two components: a standard MAE computed over all samples and an additional term applied only to samples where the actual wind speed exceeds the forecast wind speed. Here, ε̂_i is the wind speed forecast error predicted by the UNet model, ε_i is the original wind speed forecast error, and m is the number of points where the actual wind speed is higher than the forecast value, i.e., under-forecast (missed) points. The corrected wind speed forecast F̂_j is obtained by subtracting the predicted forecast error ε̂_j from the raw HRES wind speed forecast F_j (Eq. 11). O_j is the actual wind speed from ERA5, and α is a weight coefficient.
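    A minimal NumPy sketch of Eqs. (10)–(11) makes the two terms concrete. This is an illustrative reconstruction rather than the authors' implementation; the function name `mae_mr` and the array interface are assumptions.

```python
import numpy as np

def mae_mr(eps_hat, F, O, alpha=1.0):
    """MAE_MR loss: MAE on the predicted error plus a penalty on missed
    (under-forecast) points. eps_hat: error predicted by the model,
    F: raw HRES forecast, O: ERA5 wind speed, alpha: penalty weight."""
    eps_hat = np.asarray(eps_hat, float).ravel()
    F = np.asarray(F, float).ravel()
    O = np.asarray(O, float).ravel()
    eps = F - O                   # original forecast error
    F_hat = F - eps_hat           # corrected forecast (Eq. 11)
    base = np.mean(np.abs(eps_hat - eps))   # standard MAE term
    missed = O > F                # points the raw forecast under-predicts
    if missed.any():              # extra penalty only on missed points
        base += alpha * np.mean(np.abs(F_hat[missed] - O[missed]))
    return base
```

    Note that when the model predicts the error perfectly (ε̂ = ε), the corrected forecast equals the observation and both terms vanish.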

    The weight coefficient α plays a crucial role in the correction effect of MAE_MR. Sensitivity tests are conducted on the UNet model using different values of α (Fig. 4). Averaged over the lead times of 24–168 h, as the weight coefficient increases, the accuracy for total wind speed gradually declines but remains above that of the raw HRES. In contrast, the forecast accuracy for strong winds exhibits a clear upward trend: once the coefficient reaches 0.8, it begins to surpass the HRES strong wind accuracy, and it continues to improve as the coefficient increases further. Thus, MAE_MR significantly improves strong wind forecasts at the cost of some total wind speed accuracy. A weight coefficient of 1.0 yields the best balance between total and strong wind accuracy after correction, so this value is used.

    The corrected wind speeds using KF, RF, and UNet with the MSE, MAE, WMAE, and MAE_MR loss functions are evaluated. MAE quantifies total wind speed forecast accuracy. Figure 5 compares the statistical post-processing outcomes for the six methods across 24–168-h lead times in terms of MAE and the percentage improvement over raw HRES in 2020. Table 4 presents the MAE of the wind speed forecast for each method. UNet (MAE_MR) achieves the lowest MAE at all lead times, outperforming the other methods. Averaged over 24–168 h, UNet (MAE_MR) improves MAE by 22.8% versus raw HRES. UNet (MAE) and UNet (WMAE) both have average improvements of 18.9%. For 24–96 h, UNet (MAE) has lower MAE than UNet (WMAE), while UNet (WMAE) performs better at 120–168 h. UNet (MSE) and RF improve MAE by 17.9% and 13.8% on average, respectively, while KF shows the weakest improvement at 4.3%. All methods demonstrate larger improvements at shorter lead times. Compared to raw HRES, KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) reduce the 24-h forecast MAE by 7.1%, 20.0%, 24.3%, 25.7%, 22.8%, and 27.1%, respectively. For the 168-h forecast, the improvements are 2.8%, 12.8%, 16.3%, 15.6%, 18.4%, and 19.9%.

    Wind force accuracy on the Beaufort scale is evaluated by SC, FAR, and MR. Figure 6 compares the raw HRES and corrected forecast accuracy for forces 1–3, 4, 5, and 6 or higher across 24–168 h. UNet (MAE_MR) significantly outperforms the other methods, achieving the highest accuracy across all forces and lead times. Despite the weakest overall correction, KF still improves the accuracy for all forces. For lighter winds, RF performs well, improving forces 1–3 and 4 by 10.3% and 10.0% on average. However, RF shows negative correction for force 5 after 72 h and force 6 or higher at all lead times, with declining accuracy as lead time increases.
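    The per-class verification above can be sketched as follows: Beaufort binning per Table 2 plus the contingency-based FAR and MR. This is an illustrative sketch assuming the standard definitions FAR = false alarms / (hits + false alarms) and MR = misses / (hits + misses); the accuracy score SC follows the paper's earlier definition and is not reproduced here.

```python
import numpy as np

# Lower bounds (m/s) of forces 4, 5, and >=6 from Table 2.
EDGES = [5.5, 8.0, 10.8]

def to_force_bin(speed):
    """0: force 1-3, 1: force 4, 2: force 5, 3: force >=6."""
    return np.digitize(np.asarray(speed, float), EDGES)

def far_mr(forecast, observed, force_bin):
    """False-alarm ratio and miss ratio for one Beaufort class."""
    f = to_force_bin(forecast) == force_bin
    o = to_force_bin(observed) == force_bin
    hits = np.sum(f & o)
    fa = np.sum(f & ~o)      # forecast in class, observation not
    miss = np.sum(~f & o)    # observation in class, forecast not
    far = fa / (hits + fa) if hits + fa else 0.0
    mr = miss / (hits + miss) if hits + miss else 0.0
    return far, mr
```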

    The UNet (MSE) and UNet (MAE) corrections are similar, with UNet (MAE) performing slightly better, and both significantly improve the accuracy for forces 1–3 and 4 at 24–168 h: UNet (MAE) increases the average accuracy for forces 1–3 and 4 by 13.9% and 15.5%, and UNet (MSE) by 9.1% and 9.9%, respectively. However, like RF, UNet (MAE) and UNet (MSE) show negative correction for force 5 after 72 h and for force 6 or higher at all lead times. In comparison, UNet (WMAE) has the lowest accuracy for forces 1–3 and 4 among the UNet models but performs better for forces 5 and 6 or higher, though still with negative correction versus raw HRES. The UNet (MAE_MR) model achieves higher accuracy than raw HRES for forces 1–3, averaging 11.7% higher, albeit slightly less than UNet (MAE). For forces 4 and 5, UNet (MAE_MR) improves accuracy over both raw HRES and UNet (MAE) at all lead times, averaging 16.9% and 11.6% higher than HRES. Critically, UNet (MAE_MR) significantly enhances the accuracy for force 6 or higher, outperforming raw HRES at all lead times and averaging 6.4% higher.

    Table 5 shows the SC, FAR, and MR results for wind forces at 24–168-h lead times. All methods reduce the FAR versus raw HRES for each force. However, for force 5 or higher, UNet (MSE) and UNet (MAE) produce substantially higher MR than raw HRES, increasing rapidly with lead time. Although UNet (WMAE) improves the accuracy for force 5 or higher somewhat, it still shows negative correction versus HRES. In contrast, UNet (MAE_MR) effectively mitigates the high miss ratios for strong winds caused by the MAE and MSE loss functions at all lead times. For instance, at 72 h, the strong wind MR values for HRES, UNet (MAE), and UNet (MAE_MR) are 0.339, 0.593, and 0.402; at 168 h, they are 0.588, 0.924, and 0.683.

    In summary, UNet (MAE_MR) overcomes limitations of UNet models with conventional loss functions for correcting strong winds. It demonstrates enhanced accuracy across 24–168-h lead times compared to raw HRES forecasts.

    Figure 7 displays the probability density distribution of wind speeds, including ERA5, raw HRES, and corrected forecasts. Hypothesis testing assesses performance differences between correction methods using the Friedman test. The results demonstrate that there are significant differences among the six approaches overall according to the Friedman test (p < 0.05). Further pairwise analyses reveal that the combination of KF, RF, and UNet (MSE) methods yields a Friedman test p-value of 0.86, exceeding 0.05, signifying no statistically significant differences among these three approaches. In contrast, for methods based on the UNet model, namely UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR), the Friedman test p-value is approximately 0, substantially below 0.05, indicating significant performance variances. Subsequent post-hoc Nemenyi tests provide further discernment of individual method efficacy. The results demonstrate the Nemenyi test p-values for UNet (MSE) and UNet (WMAE) are 0.4, above 0.05, showing no significant differences. The Nemenyi test p-value for UNet (MAE) and UNet (MAE_MR) is 0.001, under 0.05, signifying a statistically significant distinction between these two methods.
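    The Friedman test described above is straightforward to reproduce with SciPy. The sketch below uses synthetic matched absolute errors for three illustrative methods; the data and the method subset are invented for demonstration, and the Nemenyi post-hoc step (available in third-party packages such as scikit-posthocs via `posthoc_nemenyi_friedman`) is omitted to keep the example dependency-free.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n = 200                              # matched samples (cases/grid points)
base = rng.gamma(2.0, 1.0, size=n)   # stand-in absolute errors
errors = {                           # one column per correction method
    "KF":   base * 0.96,
    "RF":   base * 0.86,
    "UNet": base * 0.77 + rng.normal(0, 0.05, n),
}
# Friedman test: nonparametric comparison of >=3 matched groups
stat, p = friedmanchisquare(*errors.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3g}")
```

    With clearly rank-ordered errors as here, the p-value is far below 0.05, mirroring the significant overall difference reported in the text.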

    Fig  7.  Probability density distribution of wind speeds at a 24-h forecast lead time in the study area in 2020, comparing ERA5 and HRES raw wind speed forecast, as well as corrected wind by KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods.

    The analysis of the spatial distribution focuses primarily on the KF, RF, and UNet (MAE_MR) methods, as the corrected wind speed forecasts using the UNet (MSE), UNet (MAE), and UNet (WMAE) methods have comparable spatial distributions. Figure 8 illustrates the spatial distribution of MAE for HRES, KF, RF, and UNet (MAE_MR) at a 24-h lead time versus ERA5 in 2020. The HRES wind speed forecast (Fig. 8a) has an MAE ranging from 0.75 to 1.50 m s−1 over the sea and from 1.25 to 1.75 m s−1 near the coastline. The MAE of wind speed forecasts on land is unevenly distributed, with larger values of 0.75–2 m s−1 in northwest Hebei, Shanxi, and central Inner Mongolia, and smaller values mostly below 0.75 m s−1 in the southeast of North China, the Huang–Huai region, and the middle–lower reaches of the Yangtze River. This uneven distribution is largely due to the influence of topography: the Taihang Mountains separate regions of differing terrain, with the complex topography to their west leading to larger MAE and the mostly flat plains to their east showing smaller MAE. The KF method (Fig. 8b) improves on the raw HRES forecast in some areas, particularly near the coast and in northwest Hebei and Shanxi, where the MAE is reduced to below 0.75 and 1 m s−1, respectively. The RF method (Fig. 8c) has a more noticeable correction effect in complex terrain areas such as northwest Hebei and Shanxi, reducing the MAE there to below 0.75 m s−1. The UNet (MAE_MR) method (Fig. 8d) has the best correction effect among the three methods, significantly reducing the MAE in complex terrain areas and over the sea to below 0.5 m s−1; in the southeast coastal area, the MAE is reduced to below 0.75 m s−1.

    Fig  8.  Spatial distributions of MAE for wind speed forecasts from (a) HRES and corrected wind using (b) KF, (c) RF, and (d) UNet (MAE_MR) methods at a 24-h forecast lead time compared to ERA5 wind speed in 2020.

    On 29 and 30 December 2020, a large-scale cold wave with strong winds occurred in central and eastern China. Figures 9 and 10 show the ERA5 wind speeds during this period, along with the 72-h HRES forecasts and corrected forecasts from various methods.

    Fig  9.  (a) ERA5 wind speeds (m s−1) in the study area at 0000 UTC 29 December 2020, and 72-h wind speed forecasts initialized at 0000 UTC 26 December from (b) HRES and corrected wind speed using (c) KF, (d) RF, (e) UNet (MAE), and (f) UNet (MAE_MR) methods.
    Fig  10.  (a) ERA5 wind speeds (m s−1) in the study area at 0000 UTC 30 December 2020, and 72-h wind speed forecasts initialized at 0000 UTC 27 December from (b) HRES and corrected wind speed using (c) KF, (d) RF, (e) UNet (MAE), and (f) UNet (MAE_MR) methods.

    At 0000 UTC 29 December, the ERA5 wind speed in most of the study area exceeded 5.5 m s−1, reaching 10.8 m s−1 in the northwest Bohai Sea and Yellow Sea and exceeding 13.9 m s−1 in some areas (Fig. 9a). Figures 9b–f present the 72-h HRES forecast initialized on 26 December, along with the corrected forecasts using KF, RF, UNet (MAE), and UNet (MAE_MR). Compared to ERA5, the region-average MAE is 1.17 m s−1 for HRES, 1.16 m s−1 for KF, 1.14 m s−1 for RF, 0.85 m s−1 for UNet (MAE), and 0.73 m s−1 for UNet (MAE_MR). The HRES wind speed forecast (Fig. 9b) deviates substantially from the ERA5 wind speed in two regions. First, winds exceed 5.5 m s−1 in northwest Hebei and southeast Inner Mongolia, while HRES forecasts lower speeds. Second, speeds exceed 13.9 m s−1 (force 7) in the northwest Bohai and Yellow Seas, but HRES only forecasts force 7 near Liaodong. The KF and RF methods do not significantly improve the HRES raw wind speed forecast. The UNet (MAE) method shows good correction performance for light winds (Fig. 9e), correcting HRES raw forecasts of force 3 in northwest Hebei, southeast Inner Mongolia, and central Shandong to force 4. However, for strong winds over the sea, the UNet (MAE) method has a noticeable smoothing effect, reducing the corrected wind speeds in the northwest Bohai Sea and Yellow Sea to force 6. The UNet (MAE_MR) method (Fig. 9f) shows good correction for all forces on the Beaufort scale: a clearly visible area of force 7 appears in the northwest Bohai Sea and Yellow Sea, more consistent with the observed area of strong winds, and there is also a noticeable correction of the force-5 area in northern and central Jiangsu.

    At 0000 UTC 30 December, ERA5 winds generally exceeded 10.8 m s−1, reaching 13.9 m s−1 in the Yellow and East China Seas and 17.2 m s−1 in eastern Zhejiang (Fig. 10a). Inland wind speeds above 5.5 m s−1 occurred mainly in northern Hebei, southeast Inner Mongolia, the Shandong Peninsula, and the eastern coasts of Jiangsu and Zhejiang. Figures 10b–f present the 72-h HRES forecast initialized on 27 December and the corrected forecasts. Compared to the ERA5 wind speed, the HRES forecast has an average MAE of 0.85 m s−1, while the corrected forecasts based on KF, RF, UNet (MAE), and UNet (MAE_MR) have MAEs of 0.79, 0.71, 0.65, and 0.63 m s−1, respectively. The HRES forecast (Fig. 10b) is generally consistent with the ERA5 wind speed, but there are larger deviations near the coastline, where the HRES wind speed is slightly too low. The KF and RF methods do not significantly improve the HRES forecast. The UNet (MAE) method reduces the intensity of strong winds in parts of the Yellow Sea and the East China Sea, resulting in a smaller area of strong winds than in the HRES raw forecast. The UNet (MAE_MR) method effectively corrects the strong wind areas offshore and also corrects the underestimation in the HRES raw forecast north of the Shandong Peninsula and west of the Liaodong Peninsula. Additionally, the area of wind speeds over 17.2 m s−1 in eastern Zhejiang is more accurately captured by the UNet (MAE_MR) method.

    The method for analyzing the importance of predictors, proposed by Breiman (2001) for random forests, is used to conduct sensitivity experiments on the predictors of the deep learning model. These experiments aim to analyze the relative importance of these predictors in the statistical post-processing of wind speed.

    The method for the sensitivity experiments on the predictors of the deep learning model is based on the work of Zhou et al. (2022). Using the UNet (MAE_MR) model, the performance difference (ΔP) between the prediction made with all predictors and the prediction made after removing a single predictor x is calculated. The larger the ΔP, the more important the predictor x. The relative importance of a single predictor is evaluated using the wind speed forecast accuracy. The UNet (MAE_MR) model is run five times and the results are averaged to reduce randomness. The formula for the relative importance of each predictor is as follows:

    \Delta P_x=\dfrac{SC-SC_x}{SC}\times 100\%, \qquad (12)

    where SC is accuracy with all predictors and SCx is accuracy with predictor x removed.
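    Equation (12) amounts to a leave-one-out importance loop, which can be sketched as follows. Here `evaluate` is a hypothetical placeholder for training and scoring the UNet (MAE_MR) model on a given predictor set; the five-run averaging mirrors the procedure described above.

```python
import numpy as np

def relative_importance(evaluate, predictors):
    """Rank predictors by Delta P_x = (SC - SC_x) / SC * 100 (Eq. 12).

    evaluate(predictor_list) -> accuracy SC; called five times per
    configuration and averaged to reduce randomness."""
    sc_full = np.mean([evaluate(predictors) for _ in range(5)])
    ranking = {}
    for x in predictors:
        kept = [p for p in predictors if p != x]       # remove predictor x
        sc_x = np.mean([evaluate(kept) for _ in range(5)])
        ranking[x] = (sc_full - sc_x) / sc_full * 100.0
    # most important predictors first
    return dict(sorted(ranking.items(), key=lambda kv: -kv[1]))
```

    In practice `evaluate` would retrain the network, so the loop is expensive; the averaging over five runs is what keeps the ranking stable.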

    Figure 11 ranks the top 20 predictors by importance for each Beaufort force. Three key findings emerge: (1) the 10- and 100-m wind speeds are the most important predictors for all forces on the Beaufort scale; (2) as the wind force increases, the importance of the 10-m wind speed decreases significantly, while that of the 100-m wind speed gradually increases; and (3) the importance of upper-air predictors decreases with height (i.e., with decreasing pressure), the main ones being the wind speed, vertical velocity, divergence, and potential vorticity of the lower levels. For wind forces 1–3 (Fig. 11a), the 10-m speed dominates at over 25% importance, while the 100-m speed is around 6%; other predictors are below 2%. For wind force 4 (Fig. 11b), the 10- and 100-m speeds are at 15% and 7%, and key factors also include the 1000- and 925-hPa winds and vertical velocity at 1.5%–4% importance. For wind force 5 (Fig. 11c), the 10- and 100-m speeds are equally important at about 10%, with other key inputs being the low-level winds, geopotential height, and vorticity at 1%–2%. For wind forces 6 and higher (Fig. 11d), the 100-m speed leads at 10%, followed by the 10-m speed at 8%, indicating that stronger winds are related to larger scale systems; other notable predictors are the 1000-, 925-, and 850-hPa winds, geopotential height, and total column water at 1.5%–2.5% importance.

    Fig  11.  Ranking of the importance of predictors in UNet model for wind forces (a) 1–3, (b) 4, (c) 5, and (d) 6 or higher.

    The selection of predictors is a critical factor in determining the performance of a model. Whether including weakly ranked predictors improves an RF model depends on their correlation with the target and their independence from the other predictors. Sensitivity tests on the RF wind correction model examine the effect of removing lower-ranked predictors. Removing the bottom 10 or 20 predictors leaves the MAE virtually unchanged relative to using all 95 factors. However, eliminating the bottom 30 or 50 increases the MAE by 3.1% and 13.8%, respectively. Hence, analyzing predictor importance can inform RF model development, preventing overfitting and conserving resources by excluding irrelevant variables.

    This study presents improvements to 24–168-h ECMWF HRES wind speed forecasts in East China using the KF, RF, and UNet methods, which are, respectively, a traditional statistical post-processing technique, a machine learning method, and a deep learning method. To address the non-Gaussian distribution of wind speeds, forecast errors are used as the target variable for correction modeling, since the errors have a more nearly normal distribution and are relatively stable spatially and temporally. A new UNet loss function called MAE_MR incorporates a physical constraint to enhance strong wind correction. UNet is evaluated with the MSE, MAE, WMAE, and MAE_MR loss functions. The key findings are:

    (1) Comparing KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR), UNet (MAE_MR) demonstrates the most significant improvements in wind speed forecasting for lead times ranging from 24 to 168 h. On average, the wind speed forecast by the UNet (MAE_MR) model exhibits an improvement in MAE of 22.8% compared to the raw HRES. The UNet (MAE), UNet (WMAE), UNet (MSE), RF, and KF methods exhibit average improvements in MAE of 18.9%, 18.9%, 17.9%, 13.8%, and 4.3%, respectively.

    (2) In comparison to the raw HRES, UNet (MAE_MR) improves the accuracy of wind speed forecasts by 11.7%, 16.9%, 11.6%, and 6.4% for wind forces 1–3, 4, 5, and 6 or higher, respectively, on average over 24–168 h. KF shows the weakest correction. RF, UNet (MSE), UNet (MAE), and UNet (WMAE) can improve the accuracy of wind speed forecasts for wind forces 1–3 and 4, but show negative correction for force 5 at longer lead times and for force 6 or higher.

    (3) During the strong wind event that occurred in East China on 29 and 30 December 2020, the UNet (MAE_MR) method is found to be the most effective in correcting wind speed forecasts, especially for strong winds, outperforming KF, RF, and UNet (MAE).

    (4) Sensitivity analysis reveals that 10- and 100-m speeds are the most important predictors in UNet wind speed forecasting, along with low-level upper-air winds, vertical velocity, divergence, and vorticity.

    This study explores the use of DL to correct non-Gaussian forecast elements such as wind speed, incorporating physical constraints into the DL model and handling imbalanced data. Two measures are employed. The first is a statistical post-processing scheme based on forecast error, which helps to normalize the distribution of wind speed forecast errors and reduce skew. The second, proposed in this study, incorporates the MR as a physical constraint into a deep learning loss function called MAE_MR. The UNet (MAE_MR) model performs well in correcting wind speed forecasts at all lead times, though with some loss in accuracy for wind forces 1–3 compared to the UNet (MAE) model.

    In this paper, various methods including KF, RF, and UNet models with different loss functions including MSE, MAE, WMAE, and MAE_MR are employed to correct the gridded wind speed forecasts. Although the KF method is able to correct the systematic bias of the NWP model, its correction effect is relatively weak compared to the other methods. However, it shows potential in correcting strong wind forecasts. Both RF and DL methods are sensitive to imbalanced data. The RF method employs an ensemble of decision trees and typically handles imbalanced data through data resampling. The loss function of RF is computed as the average of the results from each decision tree. The DL model incorporates a physical constraint term into the loss function to improve interpretability and significantly enhance the forecast accuracy for strong winds. In the field of DL, there are various approaches to dealing with imbalanced data, and there has been significant research on “long-tailed learning” (Liu et al., 2019; Zhong et al., 2021). Long-tailed learning divides imbalanced data into head and tail categories, representing the majority and minority sample distributions, respectively. Several works have reviewed the development and latest progress of long-tailed learning (Zhang et al., 2021) and summarized the methods for DL to handle imbalanced data, including data augmentation, loss function design, and model design. Research has shown that when the loss function is modified to address data imbalance, DL models can improve the learning ability of minority class samples while slightly decreasing the prediction ability of head class samples (Liu et al., 2019). This is consistent with the results in this study, where the accuracy of corrected wind forces 1–3 by the UNet (MAE_MR) model is relatively lower than that by the UNet (MAE) model. 
As many meteorological elements, such as wind speed and precipitation, are imbalanced data, it is important to consider the balance of samples when applying AI techniques to non-Gaussian forecast elements for statistical post-processing.

    Fig.  1.   Topography distribution and grid points in the study area (28.125°–44°N, 109°–124.875°E).

    Fig.  2.   Sample of wind speed data from ERA5 and the forecast error for 24-h lead time during 2015–2020 for (a) the probability density distribution and (b) a box plot of the forecast error distribution for wind forces on the Beaufort scale.

    Fig.  3.   The flowchart of UNet bias correction model employed in this study.

    Fig.  4.   Relationship between the weight coefficient of MAE_MR and the accuracy of wind speed forecasting with an average lead time ranging from 24 to 168 h. The error bars represent the 95% confidence intervals of the results.

    Fig.  5.   (a) MAE (m s−1) and (b) MAE improvement percentage (%) of wind speed forecasts for the KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h lead times, compared to ERA5 wind speed in 2020.

    Fig.  6.   Accuracy of wind forces (a) 1–3, (b) 4, (c) 5, and (d) 6 or higher compared between HRES raw forecasts and corrected wind using KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h lead times in 2020. The bar charts represent the mean values obtained from 10 independent training runs, with the short vertical lines indicating the standard deviation.


    Table  1   Predictors used in the RF and deep learning models

    Category Predictor Pressure layer (hPa)
    Upper air
    (0.25° × 0.25°)
    Divergence, geopotential height, potential vorticity, specific humidity, relative humidity, temperature, wind speed, wind direction, vertical velocity 200, 300, 400, 500, 700, 850, 925, 1000
    Surface
    (0.125° × 0.125°)
    Convective available potential energy, convective precipitation, 2-m temperature, 2-m dewpoint temperature, 0°C isothermal level, forecast albedo, total cloud cover, large-scale precipitation, 2-m minimum temperature in the last 6 hours, 2-m maximum temperature in the last 6 hours, mean sea level pressure, sea surface temperature, total cloud cover, total column water, total column water vapour, total precipitation, 10-m wind speed, 10-m wind direction, 100-m wind speed, 100-m wind direction
    Terrain
    (1 km × 1 km)
    Altitude, slope, aspect

    Table  2   The Beaufort wind force scale

    Force 1–3 4 5 6 or higher
    Wind speed (m s−1) 0.3–5.4 5.5–7.9 8.0–10.7 ≥10.8

    Table  3   Statistics of the wind speed forecast error for total wind speed and strong wind samples for 24–168-h lead times from 2015 to 2020

    Forecast lead time (h) Total wind speed sample Strong wind sample
    Mean value (m s−1) Variance (m s−1) Mean value (m s−1) Variance (m s−1)
    24 0.21 0.94 −0.07 1.26
    48 0.22 1.11 −0.41 1.72
    72 0.23 1.26 −0.70 2.12
    96 0.23 1.44 −1.00 2.38
    120 0.24 1.60 −1.45 2.62
    144 0.25 1.79 −1.96 3.00
    168 0.22 1.97 −2.86 3.27

    Table  4   MAE of wind speed forecasts using the KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h lead times in 2020 compared to ERA5 wind speed

    Forecast result Lead time (h)
    24 48 72 96 120 144 168
    HRES 0.70 0.82 0.93 1.04 1.15 1.28 1.41
    KF 0.65 0.77 0.88 1.00 1.11 1.24 1.37
    RF 0.56 0.67 0.79 0.89 1.09 1.19 1.23
    UNet (MSE) 0.53 0.64 0.76 0.87 1.05 1.10 1.18
    UNet (MAE) 0.52 0.63 0.75 0.85 1.02 1.12 1.19
    UNet (WMAE) 0.51 0.63 0.74 0.86 0.98 1.08 1.16
    UNet (MAE_MR) 0.51 0.60 0.71 0.82 0.93 1.02 1.13

    Table  5   SC, FAR, and MR of wind forces 1–3, 4, 5, and 6 or higher compared between HRES raw forecasts and corrected wind using KF, RF, UNet (MSE), UNet (MAE), UNet (WMAE), and UNet (MAE_MR) methods at 24–168-h forecast lead times. Bolding indicates the best results of several methods

    Lead time (h) SC FAR MR
    Wind force 1–3 4 5 ≥6 1–3 4 5 ≥6 1–3 4 5 ≥6
    24 HRES 0.621 0.487 0.533 0.541 0.010 0.209 0.231 0.279 0.011 0.128 0.137 0.206
    KF 0.637 0.514 0.552 0.553 0.009 0.168 0.193 0.235 0.029 0.146 0.159 0.235
    RF 0.697 0.552 0.545 0.525 0.011 0.114 0.140 0.159 0.001 0.187 0.219 0.325
    UNet (MSE) 0.721 0.551 0.563 0.528 0.011 0.097 0.115 0.122 0.002 0.177 0.217 0.356
    UNet (MAE) 0.731 0.565 0.572 0.526 0.011 0.098 0.112 0.112 0.002 0.169 0.220 0.366
    UNet (WMAE) 0.712 0.568 0.576 0.53 0.011 0.107 0.120 0.132 0.001 0.156 0.207 0.348
    UNet (MAE_MR) 0.727 0.577 0.60 0.589 0.011 0.101 0.113 0.117 0.002 0.163 0.216 0.323
    48 HRES 0.576 0.42 0.462 0.469 0.010 0.247 0.275 0.308 0.011 0.165 0.197 0.278
    KF 0.589 0.439 0.465 0.471 0.009 0.210 0.241 0.270 0.030 0.183 0.220 0.313
    RF 0.653 0.463 0.469 0.434 0.011 0.153 0.168 0.160 0.001 0.223 0.295 0.431
    UNet (MSE) 0.665 0.465 0.473 0.421 0.011 0.144 0.156 0.142 0.002 0.216 0.296 0.467
    UNet (MAE) 0.677 0.473 0.479 0.431 0.011 0.144 0.157 0.138 0.001 0.208 0.283 0.436
    UNet (WMAE) 0.636 0.461 0.480 0.433 0.011 0.120 0.118 0.087 0.003 0.247 0.354 0.537
    UNet (MAE_MR) 0.664 0.489 0.515 0.497 0.011 0.146 0.161 0.144 0.000 0.203 0.232 0.368
    72 HRES 0.54 0.367 0.395 0.384 0.010 0.280 0.323 0.385 0.011 0.202 0.253 0.339
    KF 0.551 0.382 0.404 0.385 0.010 0.243 0.289 0.352 0.030 0.222 0.277 0.372
    RF 0.606 0.394 0.392 0.316 0.011 0.181 0.216 0.266 0.001 0.279 0.369 0.487
    UNet (MSE) 0.619 0.392 0.363 0.296 0.011 0.157 0.165 0.155 0.002 0.287 0.426 0.595
    UNet (MAE) 0.618 0.393 0.363 0.285 0.011 0.143 0.162 0.167 0.001 0.302 0.423 0.593
    UNet (WMAE) 0.601 0.392 0.368 0.293 0.011 0.156 0.170 0.168 0.002 0.287 0.411 0.589
    UNet (MAE_MR) 0.605 0.418 0.427 0.413 0.011 0.161 0.183 0.203 0.000 0.174 0.255 0.402
    96 HRES 0.506 0.322 0.346 0.330 0.010 0.321 0.363 0.454 0.011 0.237 0.297 0.391
    KF 0.515 0.33 0.359 0.335 0.010 0.287 0.335 0.402 0.031 0.260 0.321 0.422
    RF 0.556 0.352 0.326 0.257 0.011 0.217 0.236 0.267 0.001 0.327 0.445 0.589
    UNet (MSE) 0.576 0.356 0.271 0.220 0.011 0.186 0.178 0.136 0.002 0.369 0.553 0.719
    UNet (MAE) 0.584 0.359 0.295 0.214 0.011 0.195 0.202 0.154 0.001 0.340 0.505 0.685
    UNet (WMAE) 0.569 0.360 0.312 0.256 0.011 0.210 0.216 0.219 0.001 0.321 0.483 0.676
    UNet (MAE_MR) 0.572 0.367 0.361 0.347 0.011 0.233 0.259 0.313 0.000 0.203 0.325 0.458
    120 HRES 0.477 0.282 0.286 0.277 0.011 0.350 0.412 0.501 0.011 0.277 0.367 0.450
    KF 0.484 0.299 0.299 0.281 0.010 0.321 0.386 0.418 0.032 0.298 0.390 0.469
    RF 0.502 0.31 0.226 0.208 0.011 0.293 0.311 0.405 0.001 0.391 0.564 0.700
    UNet (MSE) 0.513 0.292 0.279 0.190 0.011 0.292 0.325 0.364 0.009 0.324 0.465 0.586
    UNet (MAE) 0.523 0.285 0.266 0.181 0.010 0.263 0.295 0.291 0.011 0.362 0.512 0.682
    UNet (WMAE) 0.481 0.293 0.282 0.213 0.009 0.241 0.255 0.302 0.045 0.423 0.618 0.713
    UNet (MAE_MR) 0.498 0.327 0.313 0.287 0.011 0.289 0.321 0.392 0.002 0.229 0.368 0.494
    144 HRES 0.445 0.24 0.24 0.244 0.011 0.400 0.470 0.562 0.012 0.330 0.413 0.490
    KF 0.449 0.244 0.244 0.248 0.010 0.373 0.448 0.538 0.033 0.351 0.433 0.505
    RF 0.481 0.264 0.184 0.128 0.011 0.366 0.373 0.361 0.001 0.413 0.643 0.834
    UNet (MSE) 0.512 0.255 0.195 0.086 0.011 0.279 0.278 0.215 0.001 0.426 0.649 0.859
    UNet (MAE) 0.520 0.259 0.205 0.106 0.011 0.285 0.292 0.252 0.001 0.417 0.618 0.831
    UNet (WMAE) 0.477 0.256 0.231 0.181 0.011 0.305 0.344 0.346 0.003 0.398 0.560 0.739
    UNet (MAE_MR) 0.487 0.284 0.264 0.25 0.011 0.322 0.353 0.393 0.001 0.294 0.468 0.602
    168 HRES 0.419 0.208 0.201 0.174 0.011 0.443 0.531 0.671 0.011 0.373 0.463 0.588
    KF 0.422 0.214 0.210 0.179 0.010 0.419 0.508 0.645 0.033 0.398 0.492 0.608
    RF 0.464 0.228 0.163 0.101 0.011 0.320 0.346 0.418 0.001 0.488 0.681 0.845
    UNet (MSE) 0.481 0.234 0.161 0.060 0.011 0.304 0.287 0.327 0.004 0.506 0.737 0.900
    UNet (MAE) 0.488 0.236 0.175 0.078 0.011 0.273 0.259 0.172 0.002 0.567 0.785 0.924
    UNet (WMAE) 0.458 0.231 0.199 0.146 0.011 0.319 0.325 0.442 0.002 0.429 0.588 0.762
    UNet (MAE_MR) 0.465 0.253 0.25 0.194 0.011 0.308 0.392 0.417 0.001 0.334 0.513 0.683