Prediction of Primary Climate Variability Modes at the Beijing Climate Center

  • 1. Laboratory for Climate Studies, National Climate Center, China Meteorological Administration, Beijing 100081, China
  • 2. CMA–NJU Joint Laboratory for Climate Prediction Studies, Institute for Climate and Global Change Research, School of Atmospheric Sciences, Nanjing University, Nanjing 210093, China
  • 3. Department of Atmospheric Sciences, University of Hawaii, Honolulu 96822, U.S.A.
  • 4. Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science & Technology, Nanjing 210044, China
  • Corresponding author: Hong-Li REN, renhl@cma.gov.cn
  • Funds:

    Supported by the National (Key) Basic Research and Development (973) Program of China (2015CB453203), China Meteorological Administration Special Public Welfare Research Fund (GYHY201506013 and GYHY201406022), National Natural Science Foundation of China (41205058, 41375062, 41405080, 41505065, 41606019, and 41605116), US National Science Foundation (AGS-1406601), US Department of Energy (DOE) (DE-SC000511), and also partly supported by the UK–China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund

  • doi: 10.1007/s13351-017-6097-3


Abstract: Climate variability modes, usually known as primary climate phenomena, are well recognized as the most important predictability sources in subseasonal–interannual climate prediction. This paper begins by reviewing the research and development carried out, and the recent progress made, at the Beijing Climate Center (BCC) in predicting some primary climate variability modes. These include the El Niño–Southern Oscillation (ENSO), Madden–Julian Oscillation (MJO), and Arctic Oscillation (AO), on global scales, as well as the sea surface temperature (SST) modes in the Indian Ocean and North Atlantic, western Pacific subtropical high (WPSH), and the East Asian winter and summer monsoons (EAWM and EASM, respectively), on regional scales. Based on its latest climate and statistical models, the BCC has established a climate phenomenon prediction system (CPPS) and completed a hindcast experiment for the period 1991–2014. The performance of the CPPS in predicting such climate variability modes is systematically evaluated. The results show that skillful predictions have been made for ENSO, MJO, the Indian Ocean basin mode, the WPSH, and partly for the EASM, whereas less skillful predictions were made for the Indian Ocean Dipole (IOD) and North Atlantic SST Tripole, and no clear skill at all for the AO, subtropical IOD, and EAWM. Improvements in the prediction of these climate variability modes with low skill need to be achieved by improving the BCC’s climate models, developing physically based statistical models as well as correction methods for model predictions. Some of the monitoring/prediction products of the BCC-CPPS are also introduced in this paper.
  • Fig. 1.  Schematic diagrams of the BCC’s (a) CPPS, (b) SEMAP2.1, and (c) IMPRESS2.0.

    Fig. 2.  TCC skill scores for (a) the Niño3.4 index in BCC_CSM1.1(m), ADEPS (analogue–dynamical ENSO prediction sub-system), the statistical model, and their MME-mean predictions in SEMAP2.1, and (b) those for the Niño3, Niño3.4, Niño CT, and Niño WP indices in the MME-mean predictions, based on the independent validations during 1996–2015. Panels (c–f) are the MME-mean TCC scores of the four indices as a function of lead month (x-axis) and initial month (y-axis), and panels (g–j) are the same but with target month as the y-axis.

    Fig. 3.  Plume plot of Niño3.4 index results (K) based on observation (black curve) and the updated SEMAP2.1 MME (multi-method ensemble)-mean predictions, as initiated in each month since January 2014. The predictions initiated in boreal winter, spring, summer, and autumn are indicated by the blue, green, orange, and purple lines.

    Fig. 4.  Skill scores for the RMM (real-time multivariate MJO) indices predicted by IMPRESS2.0, as initialized in spring (SPR), summer (SUM), autumn (AUT), winter (WIN), and all seasons (ALL): (a) bivariate correlation coefficient (COR), (b) root-mean-square error (RMSE), and (c) mean-square skill score (MSSS). The dashed lines denote reference values of 0.5 in (a), $\sqrt 2$ in (b), and 0 in (c).

    Fig. 5.  TCC skill scores of BCC_CSM1.1(m) for the (a, d, g) IOBM (Indian Ocean basin mode), (b, e, h) IOD (Indian Ocean Dipole), and (c, f, i) SIOD (subtropical IOD) indices. (a–c) TCCs for all months, (d–f) dependence of the TCCs on initial calendar months, and (g–i) dependence of the TCCs on target calendar months.

    Fig. 6.  The (a) IOBM (Indian Ocean basin mode), (b) IOD (Indian Ocean Dipole), and (c) SIOD (subtropical IOD) indices (K) in observations (blue bars and black dots) and the predictions (red bars) made by BCC_CSM1.1(m), as initiated in October 2015.

    Fig. 7.  Temporal correlation coefficient (TCC) skill scores of NASTI (North Atlantic SST Tripole index) prediction by BCC_CSM1.1(m) for (a) all months, and the dependence of the TCCs on (b) the initial calendar month and (c) the target month, where the dashed line denotes statistical significance at the 99% confidence level based on the Student’s t-test.

    Fig. 8.  NASTI (North Atlantic SST Tripole index) results from the (a) April–May mean and (b) June–August mean hindcasts of BCC_CSM1.1(m), as initiated in March (red lines), and from OISSTv2 (black lines). The TCC (temporal correlation coefficient, corr) between the hindcast and observed NAST indices is given in the top-right corner of each panel. The monitoring and prediction products of the NASTI, as initiated in October 2015, are shown in (c), in which the solid bars represent the monitoring of the past 12 months using OISSTv2, the hatched bars are the forecasts for the future 12 months according to BCC_CSM1.1(m), and the black dots are the observations from OISSTv2.

    Fig. 9.  TCC (temporal correlation coefficient) skill scores of (a) monthly mean AO (Arctic Oscillation) index prediction by BCC_CSM1.1(m) as a function of lead month and initial calendar month, and (b) winter-mean AO index prediction by the same model as a function of lead month. The dashed line denotes statistical significance at the 95% confidence level based on the Student’s t-test.

    Fig. 10.  The monitoring and prediction products of the AO (Arctic Oscillation) index (AOI) as initiated in October 2015, in which the solid bars are the monitoring of the past 12 months using the NCEP data (NRA1), the hatched bars are the forecasts for the future 12 months by BCC_CSM1.1(m), and the black dots are the observations from NRA1.

    Fig. 11.  (a) TCCs (temporal correlation coefficients) of the JJA (June–August)-mean WPSHI (western Pacific subtropical high index) between BCC_CSM1.1(m) predictions and observation (NCEP data). The grey solid line represents statistical significance at the 95% confidence level. (b) The observational summer WPSHI (solid bars) and the corresponding hindcasts (black dots), as well as the prediction of the WPSHI in 2015 summer, as initiated in March 2015 (hatched bar), and the corresponding observation (green dot). COR is the correlation between the observation and prediction.

    Fig. 12.  TCCs (temporal correlation coefficients) for (a) the two EASMIs (East Asian summer monsoon indices) and (b) the two EAWMIs (East Asian winter monsoon indices) between BCC_CSM1.1(m) predictions and observations (NCEP data). EASMI-S96 is the EASMI of Shi et al. (1996), EASMI-Z03 is the EASMI of Zhang et al. (2003), EAWMI-S96 is the EAWMI of Shi et al. (1996), and EAWMI-G94 is the EAWMI of Guo (1994).

    Fig. 13.  As in Fig. 11b, but for (a) EASMI-S96 [the East Asian summer monsoon index (EASMI) of Shi et al. (1996)] and (b) EASMI-Z03 [the EASMI of Zhang et al. (2003)].

    Fig. 14.  As in Fig. 11b, but for (a) EAWMI-S96 [the EAWMI (East Asian winter monsoon index) of Shi et al. (1996)] and (b) EAWMI-G94 [the EAWMI of Guo (1994)], as initiated in October 2015.

    Table 1.  Definitions of the indices of the primary climate modes

    Climate mode | Index definition
    ENSO | Niño3.4 SST index: SST anomalies averaged over (5°S–5°N, 170°–120°W); other Niño indices are defined in Section 3
    MJO | RMM indices of Wheeler and Hendon (2004): time series of the first two principal components of the multivariate EOF of 850-/200-hPa zonal wind and OLR
    IOBM | Area-averaged SST anomalies over (20°S–20°N, 40°–110°E)
    IOD | Difference between the area-averaged SST anomalies over (10°S–10°N, 50°–70°E) and (10°S–0°, 90°–110°E) (Saji et al., 1999)
    SIOD | Difference between the area-averaged SST anomalies over (45°–30°S, 45°–75°E) and (25°–15°S, 80°–100°E) (Behera and Yamagata, 2001)
    NAST | Difference between the SST anomalies averaged over (34°–44°N, 72°–62°W) and the sum of the area-averaged SST anomalies over (0°–18°N, 46°–24°W) and (44°–56°N, 40°–24°W) (Zuo et al., 2012)
    AO | Normalized principal component of the leading EOF of monthly mean SLP north of 20°N (Thompson and Wallace, 2000)
    WPSH | Normalized summer-mean 850-hPa geopotential height anomalies over (15°–25°N, 115°–150°E) (Wang et al., 2013)
    EASM | (1) Difference in the normalized summer-mean SLP between 110°E and 160°E over 20°–50°N, at 5° intervals (Shi et al., 1996); (2) difference between the summer-mean 850-hPa zonal wind anomalies averaged over (10°–20°N, 100°–150°E) and (25°–30°N, 100°–150°E) (Zhang et al., 2003)
    EAWM | (1) As per the EASM index of Shi et al. (1996), but for the winter mean; (2) winter-mean SLP anomalies averaged at (60°N, 100°E), (60°N, 90°E), and (50°N, 100°E) (Guo, 1994)
    Notes: RMM, real-time multivariate MJO; IOBM, Indian Ocean basin mode; IOD, Indian Ocean Dipole; SIOD, subtropical IOD; NAST, North Atlantic SST Tripole; AO, Arctic Oscillation; WPSH, western Pacific subtropical high; EASM, East Asian summer monsoon; and EAWM, East Asian winter monsoon.
    [1]

    Ahn, J. B., and H. J. Kim, 2014: Improvement of 1-month lead predictability of the wintertime AO using a realistically varying solar constant for a CGCM. Meteor. Appl., 21, 415–418. doi:  10.1002/met.1372.
    [2]

    Ashok, K., S. K. Behera, S. A. Rao, et al., 2007: El Niño Modoki and its possible teleconnection. J. Geophys. Res., 112, C11007.
    [3]

    Barnston, A. G., M. K. Tippett, M. L. L'Heureux, et al., 2012: Skill of real-time seasonal ENSO model predictions during 2002–11: Is our capability increasing? Bull. Amer. Meteor. Soc., 93, 631–651.
    [4]

    Behera, S. K., and T. Yamagata, 2001: Subtropical SST dipole events in the southern Indian Ocean. Geophys. Res. Lett., 28, 327–330.
    [5]

    Brönnimann, S., 2007: The impact of El Niño–Southern Oscillation on European climate. Rev. Geophys., 45, RG3003. doi:  10.1029/2006RG000199.
    [6]

    Cane, M. A., S. E. Zebiak, and S. C. Dolan, 1986: Experimental forecasts of El Niño. Nature, 321, 827–832.
    [7]

    Capotondi, A., A. T. Wittenberg, M. Newman, et al., 2015: Understanding ENSO diversity. Bull. Amer. Meteor. Soc., 96, 921–938.
    [8]

    Chang, C.-P., 2004: East Asian Monsoon (World Scientific Series on Meteorology of East Asia). World Scientific Publishing, Singapore, 1–572.
    [9]

    Chen, D. K., S. E. Zebiak, A. J. Busalacchi, et al., 1995: An improved procedure for El Niño forecasting: Implications for predictability. Science, 269, 1699–1702.
    [10]

    Cheng, Y. B., H.-L. Ren, and G. R. Tan, 2016: Empirical orthogonal function-analogue correction of extra-seasonal dynamical prediction of East Asian summer monsoon. J. Appl. Meteor. Sci., 27, 285–292. (in Chinese).
    [11]

    Cheng, Y. J., Y. M. Tang, X. B. Zhou, et al., 2010: Further analysis of singular vector and ENSO predictability in the Lamont model—Part I: Singular vector and the control factors. Climate Dyn., 35, 807–826.
    [12]

    Christensen, J. H., K. K. Kumar, E. Aldrian, et al., 2013: Chapter 14: Climate phenomena and their relevance for future regional climate change. IPCC WGI Fifth Assessment Report. Stocker, T. F., D. Qin, G.-K. Plattner, et al., Eds., Cambridge University Press, Cambridge, United Kingdom, New York, NY.
    [13]

    Cohen, J., D. Salstein, and K. Saito, 2002: A dynamical framework to understand and predict the major Northern Hemisphere mode. Geophys. Res. Lett., 29, 51-1–51-4.
    [14]

    Derome, J., H. Lin, and G. Brunet, 2005: Seasonal forecasting with a simple general circulation model: Predictive skill in the AO and PNA. J. Climate, 18, 597–609.
    [15]

    Ding, R. Q., K. J. Ha, and J. P. Li, 2010: Interdecadal shift in the relationship between the East Asian summer monsoon and the tropical Indian Ocean. Climate Dyn., 34, 1059–1071.
    [16]

    Ding, Y. H., and J. C. L. Chan, 2005: The East Asian summer monsoon: An overview. Meteor. Atmos. Phys., 89, 117–142. doi:  10.1007/s00703-005-0125-z.
    [17]

    Ding, Y. H., Y. M. Liu, Y. J. Song, et al., 2002: Research and experiments of the dynamical model system for short-term climate prediction. Climatic Environ. Res., 7, 236–246. (in Chinese).
    [18]

    Duan, W. S., and J. Y. Hu, 2016: The initial errors that induce a significant "spring predictability barrier" for El Niño events and their implications for target observation: Results from an earth system model. Climate Dyn., 46, 3599–3615. doi: 10.1007/s00382-015-2789-5.
    [19]

    Duan, W. S., P. Zhao, J. Y. Hu, et al., 2016: The role of nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" for ENSO. J. Meteor. Res., 30, 853–866. doi: 10.1007/s13351-016-6011-4.
    [20]

    Fan, K., B. Q. Tian, and H. J. Wang, 2016: New approaches for the skillful prediction of the winter North Atlantic Oscillation based on coupled dynamic climate models. Int. J. Climatol., 36, 82–94.
    [21]

    Feng, J., L. Wang, W. Chen, et al., 2010: Different impacts of two types of Pacific Ocean warming on Southeast Asian rainfall during boreal winter. J. Geophys. Res., 115, D24122. doi:  10.1029/2010JD014761.
    [22]

    García-Serrano, J., and C. Frankignoul, 2014: High predictability of the winter Euro–Atlantic climate from cryospheric variability. Nat. Geosci., 7, E1. doi: 10.1038/ngeo2118.
    [23]

    Griffies, S. M., A. Gnanadesikan, K. W. Dixon, et al., 2005: Formulation of an ocean model for global climate simulations. Ocean Science, 1, 45–79.
    [24]

    Guo, Q. Y., 1994: Relationship between the variations of East Asian winter monsoon and temperature anomalies in China. Quart. J. Appl. Meteor., 5, 218–225. (in Chinese).
    [25]

    Ham, Y.-G., J.-S. Kug, and I.-S. Kang, 2009: Optimal initial perturbations for El Niño ensemble prediction with ensemble Kalman filter. Climate Dyn., 33, 959–973.
    [26]

    Hendon, H. H., E. Lim, G. M. Wang, et al., 2009: Prospects for predicting two flavors of El Niño. Geophys. Res. Lett., 36, L19713.
    [27]

    Hu, K. M., G. Huang, and R. H. Huang, 2011: The impact of tropical Indian Ocean variability on summer surface air temperature in China. J. Climate, 24, 5365–5377.
    [28]

    Hu, Z.-Z., A. Kumar, B. Huang, et al., 2017: Interdecadal variations of ENSO around 1999/2000. J. Meteor. Res., 31, 73–81. doi:  10.1007/s13351-017-6074-x.
    [29]

    Imada, Y., H. Tatebe, M. Ishii, et al., 2015: Predictability of two types of El Niño assessed using an extended seasonal prediction system by MIROC. Mon. Wea. Rev., 143, 4597–4617.
    [30]

    Izumo, T., J. Vialard, M. Lengaigne, et al., 2010: Influence of the state of the Indian Ocean Dipole on the following year’s El Niño. Nature Geoscience, 3, 168–172.
    [31]

    Jeong, H.-I., D. Y. Lee, K. Ashok, et al., 2012: Assessment of the APCC coupled MME suite in predicting the distinctive climate impacts of two flavors of ENSO during boreal winter. Climate Dyn., 39, 475–493.
    [32]

    Jeong, H.-I., J.-B. Ahn, J.-Y. Lee, et al., 2015: Interdecadal change of interannual variability and predictability of two types of ENSO. Climate Dyn., 44, 1073–1091.
    [33]

    Jia, X. L., and C. Y. Li, 2005: Dipole oscillation in the southern Indian Ocean and its impacts on climate. Chinese J. Geophy., 48, 1238–1249. (in Chinese).
    [34]

    Jia, X. L., L. J. Chen, F. M. Ren, et al., 2011: Impacts of the MJO on winter rainfall and circulation in China. Adv. Atmos. Sci., 28, 521–533.
    [35]

    Jin, E. K., J. L. Kinter III, B. Wang, et al., 2008: Current status of ENSO prediction skill in coupled ocean–atmosphere models. Climate Dyn., 31, 647–664.
    [36]

    Kalnay, E., M. Kanamitsu, R. Kistler, et al., 1996: The NCEP/NCAR 40-year reanalysis project. Bull. Amer. Meteor. Soc., 77, 437–472.
    [37]

    Kang, D., M. I. Lee, J. Im, et al., 2014: Prediction of the Arctic Oscillation in boreal winter by dynamical seasonal forecasting systems. Geophys. Res. Lett., 41, 3577–3585.
    [38]

    Kao, H. Y., and J. Y. Yu, 2009: Contrasting eastern-Pacific and central-Pacific types of ENSO. J. Climate, 22, 615–632. doi:  10.1175/2008JCLI2309.1.
    [39]

    Kim, H.-J., and J.-B. Ahn, 2015: Improvement in prediction of the Arctic Oscillation with a realistic ocean initial condition in a CGCM. J. Climate, 28, 8951–8967.
    [40]

    Kirtman, P. B., 2003: The COLA anomaly coupled model: Ensemble ENSO prediction. Mon. Wea. Rev., 131, 2324–2341.
    [41]

    Klein, S. A., B. J. Soden, and N.-C. Lau, 1999: Remote sea surface temperature variations during ENSO: Evidence for a tropical atmospheric bridge. J. Climate, 12, 917–932.
    [42]

    Kug, J.-S., F.-F. Jin, and S.-I. An, 2009: Two types of El Niño events: Cold tongue El Niño and warm pool El Niño. J. Climate, 22, 1499–1515.
    [43]

    Kumar, A., M. Y. Chen, Y. Xue, et al., 2015: An analysis of the temporal evolution of ENSO prediction skill in the context of the equatorial Pacific Ocean observing system. Mon. Wea. Rev., 143, 3204–3213.
    [44]

    Latif, M., D. Anderson, T. Barnett, et al., 1998: A review of the predictability and prediction of ENSO. J. Geophys. Res., 103, 14375–14393.
    [45]

    Li, C. Y., and M. Q. Mu, 2001: The influence of the Indian Ocean dipole on atmospheric circulation and climate. Adv. Atmos. Sci., 18, 831–843.
    [46]

    Li, S. L., J. Lu, G. Huang, et al., 2008: Tropical Indian Ocean basin warming and East Asian summer monsoon: A multiple AGCM study. J. Climate, 21, 6080–6088.
    [47]

    Li, T., 2014: Recent advance in understanding the dynamics of the Madden–Julian oscillation. J. Meteor. Res., 28, 1–33.
    [48]

    Lin, H., G. Brunet, and J. Derome, 2008: Forecast skill of the Madden–Julian oscillation in two Canadian atmospheric models. Mon. Wea. Rev., 136, 4130–4149.
    [49]

    Liu, X. W., T. W. Wu, S. Yang, et al., 2015: Performance of the seasonal forecasting of the Asian summer monsoon by BCC_CSM1.1(m). Adv. Atmos. Sci., 32, 1156–1172.
    [50]

    Lu, B., and H.-L. Ren, 2016: Improving ENSO periodicity simulation by adjusting cumulus entrainment in BCC_CSMs. Dyn. Atmos. Oceans, 76, 127–140. doi:  10.1016/j.dynatmoce.2016.10.005.
    [51]

    Luo, J. J., S. Masson, S. Behera, et al., 2005: Seasonal climate predictability in a coupled OAGCM using a different approach for ensemble forecasts. J. Climate, 18, 4474–4497.
    [52]

    Luo, J. J., S. Masson, S. Behera, et al., 2007: Experimental forecasts of the Indian Ocean dipole using a coupled OAGCM. J. Climate, 20, 2178–2190.
    [53]

    Luo, J. J., S. Behera, Y. Masumoto, et al., 2008: Successful prediction of the consecutive IOD in 2006 and 2007. Geophys. Res. Lett., 35, L14S02.
    [54]

    MacLachlan, C., A. Arribas, K. A. Peterson, et al., 2015: Global seasonal forecast system version 5 (GloSea5): A high-resolution seasonal forecast system. Quart. J. Roy. Meteor. Soc., 141, 1072–1084.
    [55]

    McPhaden, M. J., 2003: Tropical Pacific Ocean heat content variations and ENSO persistence barriers. Geophys. Res. Lett., 30, 1480.
    [56]

    North, G. R., F. J. Moeng, T. L. Bell, et al., 1982: The latitude dependence of the variance of zonally averaged quantities. Mon. Wea. Rev., 110, 319–326.
    [57]

    Qi, Y. J., and R. H. Zhang, 2015: A review of the intraseasonal oscillation associated with rainfall over eastern China and its operational application. J. Trop. Meteor., 31, 566–576. (in Chinese).
    [58]

    Qian, Z. L., H. J. Wang, and J. Q. Sun, 2011: The hindcast of winter and spring Arctic and Antarctic Oscillation with the coupled climate models. Acta Meteor. Sinica, 25, 340–354.
    [59]

    Rashid, H. A., H. H. Hendon, M. C. Wheeler, et al., 2011: Prediction of the Madden–Julian oscillation with the POAMA dynamical prediction system. Climate Dyn., 36, 649–661.
    [60]

    Rasmusson, E. M., and T. H. Carpenter, 1982: Variations in tropical sea surface temperature and surface wind fields associated with the Southern Oscillation/El Niño. Mon. Wea. Rev., 110, 354–384.
    [61]

    Ren, H.-L., and F.-F. Jin, 2011: Niño indices for two types of ENSO. Geophys. Res. Lett., 38, L04704. doi:  10.1029/2010GL046031.
    [62]

    Ren, H.-L., and F.-F. Jin, 2013: Recharge oscillator mechanisms in two types of ENSO. J. Climate, 26, 6506–6523.
    [63]

    Ren, H.-L., and Y. Y. Shen, 2016: A new look at impacts of MJO on weather and climate in China. Adv. Meteor. Sci. Tech., 6, 97–105. (in Chinese).
    [64]

    Ren, H.-L., F.-F. Jin, J.-S. Kug, et al., 2009: A kinematic mechanism for positive feedback between synoptic eddies and NAO. Geophys. Res. Lett., 36, L11709. doi:  10.1029/2009GL037294.
    [65]

    Ren, H.-L., F.-F. Jin, J.-S. Kug, et al., 2011: Transformed eddy-PV flux and positive synoptic eddy feedback onto low-frequency flow. Climate Dyn., 36, 2357–2370.
    [66]

    Ren, H.-L., F.-F. Jin, and L. Gao, 2012: Anatomy of synoptic eddy-NAO interaction through eddy structure decomposition. J. Atmos. Sci., 69, 2171–2191.
    [67]

    Ren, H.-L., F.-F. Jin, M. F. Stuecker, et al., 2013: ENSO regime change since the late 1970s as manifested by two types of ENSO. J. Meteor. Soc. Japan, 91, 835–842.
    [68]

    Ren, H.-L., Y. Liu, F.-F. Jin, et al., 2014: Application of the analogue-based correction of errors method in ENSO prediction. Atmos. Oceanic Sci. Lett., 7, 157–161.
    [69]

    Ren, H.-L., J. Wu, C. B. Zhao, et al., 2015: Progresses of MJO prediction researches and developments. J. Appl. Meteor. Sci., 26, 658–668. (in Chinese).
    [70]

    Ren, H.-L., F.-F. Jin, B. Tian, et al., 2016a: Distinct persistence barriers in two types of ENSO. Geophys. Res. Lett., 43, 10973–10979. doi:  10.1002/2016GL071015.
    [71]

    Ren, H.-L., J. Wu, C. B. Zhao, et al., 2016b: MJO ensemble prediction in BCC_CSM1.1(m) using different initialization schemes. Atmos. Oceanic Sci. Lett., 9, 60–65.
    [72]

    Ren, H.-L., Y. Liu, J. Q. Zuo, et al., 2016c: The new generation of ENSO prediction system in Beijing Climate Centre and its predictions for 2014/2016 super El Niño event. Meteor. Mon., 42, 521–531, doi: 10.7519/j.issn.1000-0526.2016.05.001. (in Chinese).
    [73]

    Ren, H.-L., J. Q. Zuo, F.-F. Jin, et al., 2016d: ENSO and annual cycle interaction: The combination mode representation in CMIP5 models. Climate Dyn., 46, 3753–3765. doi:  10.1007/s00382-015-2802-z.
    [74]

    Reynolds, R. W., N. A. Rayner, T. M. Smith, et al., 2002: An improved in situ and satellite SST analysis for climate. J. Climate, 15, 1609–1625.
    [75]

    Riddle, E. E., A. H. Butler, J. C. Furtado, et al., 2013: CFSv2 ensemble prediction of the wintertime Arctic Oscillation. Climate Dyn., 41, 1099–1116.
    [76]

    Saha, S., S. Moorthi, X. R. Wu, et al., 2014: The NCEP climate forecast system version 2. J. Climate, 27, 2185–2208. doi:  10.1175/JCLI-D-12-00823.1.
    [77]

    Saji, N. H., and T. Yamagata, 2003: Possible impacts of Indian Ocean Dipole Mode events on global climate. Climate Res., 25, 151–169.
    [78]

    Saji, N. H., B. N. Goswami, P. N. Vinayachandran, et al., 1999: A dipole mode in the tropical Indian Ocean. Nature, 401, 360–363.
    [79]

    Scaife, A. A., A. Arribas, E. Blockley, et al., 2014: Skillful long-range prediction of European and North American winters. Geophys. Res. Lett., 41, 2514–2519.
    [80]

    Shi, L., H. H. Hendon, O. Alves, et al., 2012: How predictable is the Indian Ocean Dipole? Mon. Wea. Rev., 140, 3867–3884.
    [81]

    Shi, N., J. J. Lu, and Q. G. Zhu, 1996: East Asian winter/summer monsoon intensity indices with their climatic change in 1873–1989. J. Nanjing Inst. Meteor., 19, 168–177. (in Chinese).
    [82]

    Stuecker, M. F., A. Timmermann, F.-F. Jin, et al., 2013: A combination mode of the annual cycle and the El Niño/Southern Oscillation. Nature Geoscience, 6, 540–544.
    [83]

    Sun, J. Q., and J.-B. Ahn, 2015: Dynamical seasonal predictability of the Arctic Oscillation using a CGCM. Int. J. Climatol., 35, 1342–1353.
    [84]

    Thompson, D. W. J., and J. M. Wallace, 1998: The Arctic oscillation signature in the wintertime geopotential height and temperature fields. Geophys. Res. Lett., 25, 1297–1300.
    [85]

    Thompson, D. W. J., and J. M. Wallace, 2000: Annular modes in the extratropical circulation. Part I: Month-to-month variability. J. Climate, 13, 1000–1016.
    [86]

    Thompson, D. W. J., and J. M. Wallace, 2001: Regional climate impacts of the Northern Hemisphere annular mode. Science, 293, 85–89.
    [87]

    Visbeck, M. H., J. W. Hurrell, L. Polvani, et al., 2001: The North Atlantic oscillation: Past, present, and future. Proc. Natl. Acad. Sci. USA, 98, 12876–12877.
    [88]

    Vitart, F., 2014: Evolution of ECMWF sub-seasonal forecast skill scores. Quart. J. Roy. Meteor. Soc., 140, 1889–1899.
    [89]

    Waliser, D., K. Weickmann, R. Dole, et al., 2006: The experimental MJO prediction project. Bull. Amer. Meteor. Soc., 87, 425–431.
    [90]

    Wallace, J. M., 2000: North Atlantic Oscillation/annular mode: Two paradigms–one phenomenon. Quart. J. Roy. Meteor. Soc., 126, 791–805.
    [91]

    Wang, B., J.-Y. Lee, I.-S. Kang, et al., 2009: Advance and prospectus of seasonal prediction: Assessment of the APCC/CliPAS 14-model ensemble retrospective seasonal prediction (1980–2004). Climate Dyn., 33, 93–117.
    [92]

    Wang, B., B. Q. Xiang, and J.-Y. Lee, 2013: Subtropical high predictability establishes a promising way for monsoon and tropical storm predictions. Proc. Natl. Acad. Sci. USA, 110, 2718–2722.
    [93]

    Wang, H. J., K. Fan, J. Q. Sun, et al., 2015: A review of seasonal climate prediction research in China. Adv. Atmos. Sci., 32, 149–168.
    [94]

    Wang, R., and H.-L. Ren, 2017: The linkage of two ENSO types/modes with the interdecadal changes of ENSO around the year 2000. Atmos. Oceanic Sci. Lett., 10, 168–174. doi:  10.1080/16742834.2016.1258952.
    [95]

    Wang, S. W., Z. C. Zhao, D. Y. Gong, et al., 2005: An Introduction to Modern Climate Science. China Meteorological Press, Beijing, 1–241. (in Chinese).
    [96]

    Wang, W. Q., M. P. Hung, S. J. Weaver, et al., 2014: MJO prediction in the NCEP Climate Forecast System version 2. Climate Dyn., 42, 2509–2520.
    [97]

    Wang, X. J., Z. H. Zheng, G. L. Feng, et al., 2015: Summer prediction of sea surface temperatures in key areas in BCC_CSM model. Chinese J. Atmos. Sci., 39, 271–288. (in Chinese).
    [98]

    Webster, P. J., and S. Yang, 1992: Monsoon and ENSO: Selectively interactive systems. Quart. J. Roy. Meteor. Soc., 118, 877–926.
    [99]

    Webster, P. J., A. M. Moore, J. P. Loschnigg, et al., 1999: Coupled ocean–atmosphere dynamics in the Indian Ocean during 1997–98. Nature, 401, 356–360.
    [100]

    Weng, H. Y., K. Ashok, S. K. Behera, et al., 2007: Impacts of recent El Niño Modoki on dry/wet conditions in the Pacific Rim during boreal summer. Climate Dyn., 29, 113–129. doi:  10.1007/s00382-007-0234-0.
    [101]

    Wheeler, M. C., and H. H. Hendon, 2004: An all-season real-time multivariate MJO index: Development of an index for monitoring and prediction. Mon. Wea. Rev., 132, 1917–1932.
    [102]

    Winton, M., 2000: A reformulated three-layer sea ice model. J. Atmos. Ocean. Tech., 17, 525–531.
    [103]

    Wu, J., H.-L. Ren, C. B. Zhao, et al., 2016a: Research and application of operational MJO monitoring and prediction products in Beijing Climate Center. J. Appl. Meteor. Sci., 27, 641–653. (in Chinese) doi:  10.11898/1001-7313.20160601.
    [104]

    Wu, J., H.-L. Ren, J. Q. Zuo, et al., 2016b: MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model. Dyn. Atmos. Oceans, 75, 78–90. doi:  10.1016/j.dynatmoce.2016.06.001.
    [105]

    Wu, T. W., R. C. Yu, F. Zhang, et al., 2010: The Beijing Climate Center atmospheric general circulation model: Description and its performance for the present-day climate. Climate Dyn., 34, 123–147.
    [106]

    Wu, T. W., W. P. Li, J. J. Ji, et al., 2013: Global carbon budgets simulated by the Beijing Climate Center climate system model for the last century. J. Geophys. Res., 118, 4326–4347. doi:  10.1002/jgrd.50320.
    [107]

    Wu, T. W., L. C. Song, W. P. Li, et al., 2014: An overview of BCC climate system model development and application for climate change studies. J. Meteor. Res., 28, 34–56.
    [108]

    Wu, Z. W., B. Wang, J. P. Li, et al., 2009: An empirical seasonal prediction model of the East Asian summer monsoon using ENSO and NAO. J. Geophys. Res., 114, D18120. doi:  10.1029/2009JD011733.
    [109]

    Xiang, B. Q., B. Wang, and T. Li, 2013: A new paradigm for the predominance of standing central Pacific warming after the late 1990s. Climate Dyn., 41, 327–340. doi:  10.1007/s00382-012-1427-8.
    [110]

    Xiao, Z. N., H. M. Yan, and C. Y. Li, 2002: The relationship between Indian Ocean SSTA dipole index and the precipitation and temperature over China. J. Trop. Meteor., 18, 335–344. (in Chinese).
    [111]

    Xie, S.-P., K. M. Hu, J. Hafner, et al., 2009: Indian Ocean capacitor effect on Indo–western Pacific climate during the summer following El Niño. J. Climate, 22, 730–747.
    [112]

    Xue, F., Q. C. Zeng, R. H. Huang, et al., 2015: Recent advances in monsoon studies in China. Adv. Atmos. Sci., 32, 206–229.
    [113]

    Yang, J. L., Q. Y. Liu, and Z. Y. Liu, 2010: Linking observations of the Asian monsoon to the Indian Ocean SST: Possible roles of Indian Ocean basin mode and dipole mode. J. Climate, 23, 5889–5902.
    [114]

    Yang, J. L., Q. Y. Liu, S.-P. Xie, et al., 2007: Impact of the Indian Ocean SST basin mode on the Asian summer monsoon. Geophys. Res. Lett., 34, L02708.
    [115]

    Yang, J. S., and X. W. Jiang, 2014: Prediction of eastern and central Pacific ENSO events and their impacts on East Asian climate by the NCEP climate forecast system. J. Climate, 27, 4451–4472. doi:  10.1175/JCLI-D-13-00471.1.
    [116]

    Yang, M. Z., and Y. H. Ding, 2007: A study of the impact of South Indian Ocean Dipole on the summer rainfall in China. Chinese J. Atmos. Sci., 31, 685–694. (in Chinese).
    [117]

    Yang, M. Z., Y. H. Ding, W. J. Li, et al., 2007: Leading mode of Indian ocean SST and its impacts on Asian summer monsoon. Acta Meteor. Sinica, 65, 527–536. (in Chinese).
    [118]

    Yang, Q. M., 2006: Indian Ocean subtropical dipole and variations of global circulations and rainfall in China. Acta Oceanologica Sinica, 28, 47–56. (in Chinese).
    [119]

    Yu, J. Y., and S. T. Kim, 2010: Identification of central-Pacific and eastern-Pacific types of ENSO in CMIP3 models. Geophys. Res. Lett., 37, L15705.
    [120]

    Yuan, Y., H. Yang, W. Zhou, et al., 2008: Influences of the Indian Ocean dipole on the Asian summer monsoon in the following year. Int. J. Climatol., 28, 1849–1859. doi:  10.1002/joc.1678.
    [121]

    Zebiak, S. E., and M. A. Cane, 1987: A model El Niño–Southern Oscillation. Mon. Wea. Rev., 115, 2262–2278.
    [122]

    Zhai, P. M., R. Yu, Y. J. Guo, et al., 2016: The strong El Niño of 2015/16 and its dominant impacts on global and China’s climate. J. Meteor. Res., 30, 283–297.
    [123]

    Zhang, C.-D., 2013: Madden–Julian Oscillation: Bridging weather and climate. Bull. Amer. Meteor. Soc., 94, 1849–1870.
    [124]

    Zhang, Q. Y., S. Y. Tao, and L. T. Chen, 2003: The inter-annual variability of East Asian summer monsoon indices and its association with the pattern of general circulation over East Asia. Acta Meteor. Sinica, 61, 559–568. (in Chinese).
    [125]

    Zhang, R., A. Sumi, and M. Kimoto, 1996: Impact of El Niño on the East Asian monsoon: A diagnostic study of the ’86/87 and ’91/92 events. J. Meteor. Soc. Japan, 74, 49–62.
    [126]

    Zhang, W. J., F.-F. Jin, J. P. Li, et al., 2011: Contrasting impacts of two-type El Niño over the western North Pacific during boreal autumn. J. Meteor. Soc. Japan, 89, 563–569.
    [127]

    Zhang, W. J., F.-F. Jin, H.-L. Ren, et al., 2012: Differences in teleconnection over the North Pacific and rainfall shift over the USA associated with two types of El Niño during boreal autumn. J. Meteor. Soc. Japan, 90, 535–552.
    [128]

    Zhao, C. B., T. J. Zhou, L. C. Song, et al., 2014: The boreal summer intraseasonal oscillation simulated by four Chinese AGCMs participating in CMIP5 project. Adv. Atmos. Sci., 31, 1167–1180.
    [129]

    Zhao, C. B., H.-L. Ren, L. C. Song, et al., 2015: Madden–Julian Oscillation simulated in BCC climate models. Dyn. Atmos. Oceans, 72, 88–101.
    [130]

    Zheng, F., J. Zhu, R.-H. Zhang, et al., 2006: Ensemble hindcasts of SST anomalies in the tropical Pacific using an intermediate coupled model. Geophys. Res. Lett., 33, L19604. doi:  10.1029/2006GL026994.
    [131]

    Zhou, T. J., and L. W. Zou, 2010: Understanding the predictability of East Asian summer monsoon from the reproduction of land–sea thermal contrast change in AMIP-Type simulation. J. Climate, 23, 6009–6026.
    [132]

    Zhou, W., M. Y. Chen, W. Zhuang, et al., 2016: Evaluation of the tropical variability from the Beijing Climate Center’s real-time operational global ocean data assimilation system. Adv. Atmos. Sci., 33, 208–220. doi:  10.1007/s00376-015-4282-9.
    [133]

    Zuo, J. Q., W. J. Li, H.-L. Ren, et al., 2012: Change of the relationship between spring NAO and East Asian summer monsoon and its possible mechanism. Chinese J. Geophys., 55, 384–395. (in Chinese).
    [134]

    Zuo, J. Q., W. J. Li, C. H. Sun, et al., 2013: Impact of the North Atlantic sea surface temperature tripole on the East Asian summer monsoon. Adv. Atmos. Sci., 30, 1173–1186.
    [135]

    Zuo, J. Q., H.-L. Ren, and W. L. Li, 2015: Contrasting impacts of the Arctic Oscillation on surface air temperature anomalies in southern China between early and middle-to-late winter. J. Climate, 28, 4015–4026.
    [136]

    Zuo, J. Q., H.-L. Ren, J. Wu, et al., 2016a: Subseasonal variability and predictability of the Arctic Oscillation/North Atlantic Oscillation in BCC_AGCM2.2. Dyn. Atmos. Oceans, 75, 33–45.
    [137]

    Zuo, J. Q., H.-L. Ren, W. J. Li, et al., 2016b: Interdecadal variations in the relationship between the winter North Atlantic Oscillation and temperature in south–central China. J. Climate, 29, 7477–7493. doi:  10.1175/JCLI-D-15-0873.1.


    1.   Introduction

      A number of climate modes exist in the climate system; they are intrinsic to atmospheric and oceanic variability on intraseasonal and interannual timescales. Usually, such climate variability modes, which are characterized by certain large-scale spatial patterns and specific attributes and impose prominent effects on global and regional climate, are also referred to as climate phenomena in routine operations and services (Christensen et al., 2013). As is known from climate monitoring, the major climate phenomena that strongly influence climate variations in China include the El Niño–Southern Oscillation (ENSO), Madden–Julian Oscillation (MJO), Arctic Oscillation (AO), the sea surface temperature (SST) modes in the Indian Ocean and North Atlantic, the western Pacific subtropical high (WPSH), and the East Asian winter and summer monsoons (EAWM and EASM, respectively). These phenomena have been recognized as the most important predictability sources on different timescales in short-term climate prediction.

      ENSO is a dominant interannual variability mode with remarkable climate impacts worldwide (e.g., Rasmusson and Carpenter, 1982; Zhang et al., 1996; Brönnimann, 2007; Zhai et al., 2016). Owing to considerable advancements and much-improved prediction skill over the past three decades (e.g., Cane et al., 1986; Zebiak and Cane, 1987; Chen et al., 1995; Kirtman, 2003; Luo et al., 2005; Zheng et al., 2006; Ham et al., 2009; Cheng et al., 2010; Ren et al., 2014), ENSO prediction has become an indispensable part of climate prediction. However, in real-time ENSO forecasts by models developed at key international research centers, the prediction skill of ENSO has shown a clear interdecadal decline since the year 2000 (e.g., Barnston et al., 2012). This decline is concurrent with the interdecadal change of ENSO around 2000 (Wang and Ren, 2017; Hu et al., 2017) and is thought to be associated with the more frequent occurrence of the central Pacific (CP) type of ENSO, which has SST anomalies centered in the equatorial CP and, compared with the eastern Pacific (EP) type, distinct mechanisms and impacts on the East Asian climate (e.g., Ashok et al., 2007; Weng et al., 2007; Kao and Yu, 2009; Kug et al., 2009; Ren and Jin, 2011, 2013; Xiang et al., 2013). Such ENSO diversity requires further understanding (Capotondi et al., 2015) and continued improvement of ENSO prediction techniques.

      The MJO is well known as a dominant mode of intraseasonal variability (ISV) and an important driver of subseasonal climate variations in the tropics and extratropics (Zhang, 2013; Li, 2014), with particularly notable impacts on climate in China (e.g., Jia et al., 2011; Qi and Zhang, 2015; Ren and Shen, 2016). Predicting the MJO has become a primary objective in subseasonal-to-seasonal forecasting (Waliser et al., 2006; Vitart, 2014; Wang et al., 2014). In recent years, substantial progress has been made in improving MJO prediction with both dynamical climate models and statistical models (e.g., Ren et al., 2015, and references therein). Useful skill in predicting the real-time multivariate MJO (RMM) indices proposed by Wheeler and Hendon (2004), measured as the lead time before the bivariate anomaly correlation between forecasts and observations drops below 0.5 or the root-mean-square error (RMSE) grows to the level of a climatological prediction, now generally reaches or exceeds 20 days. Further improvement in MJO prediction will depend on developments in high-quality data assimilation, ensemble methodologies, and enhanced model physics.

      The AO, also known as the Northern Annular Mode, is the dominant mode of mid–high latitude climate variability and controls a large portion of the climate variance on the interannual timescale (Thompson and Wallace, 1998, 2000; Wallace, 2000). A number of previous studies have revealed that changes in the AO tend to be accompanied by large-scale weather events and climate anomalies in the extratropical Northern Hemisphere during boreal winter (e.g., Thompson and Wallace, 1998, 2001; Zuo et al., 2015, and references therein). Skillful prediction of the AO is therefore important for mid–high latitude seasonal-mean and subseasonal-to-seasonal climate predictions (Zuo et al., 2016a). Predicting the AO is challenging: in early climate models, skill for the winter-mean AO index was low because the atmospheric initial states in the mid–high latitudes are so chaotic that they supply little information about future states (Cohen et al., 2002; Qian et al., 2011). However, Derome et al. (2005) showed statistically significant predictive skill for the winter AO index with a simple atmospheric general circulation model. Recently, some state-of-the-art climate models have shown significant improvements in predicting the winter-mean AO index, with useful skill at lead times of up to two months (Riddle et al., 2013; Ahn and Kim, 2014; Kang et al., 2014; Kim and Ahn, 2015; MacLachlan et al., 2015; Sun and Ahn, 2015).

      The major SST anomaly modes in the Indian Ocean include the Indian Ocean basin mode (IOBM), the Indian Ocean Dipole (IOD), and the subtropical IOD (SIOD) (e.g., Klein et al., 1999; Saji et al., 1999; Webster et al., 1999; Behera and Yamagata, 2001; Xie et al., 2009). Many studies have focused on the impacts of these modes on climate variations worldwide (e.g., Saji and Yamagata, 2003; Jia and Li, 2005; Yang J. et al., 2007, 2010; Yuan et al., 2008) and particularly in China (e.g., Li and Mu, 2001; Xiao et al., 2002; Yang, 2006; Yang and Ding, 2007; Yang M. et al., 2007; Li et al., 2008; Ding et al., 2010; Hu et al., 2011). Skillful IOD prediction is generally limited to a lead time of one season (Luo et al., 2007), although strong IOD events can be predicted three seasons ahead (Luo et al., 2008). Based on a selection of operational seasonal forecasting models, including POAMA-1.5 and -2.4, the NCEP CFSv1 and CFSv2, the ECMWF's ECSys3, and SINTEX-F, Shi et al. (2012) showed that eastern Indian Ocean SST is difficult to predict with these climate models, consistent with the limited prediction skill for the IOD.

      The North Atlantic SST Tripole (NAST) dominates the interannual SST variability of the North Atlantic and is strongly affected by the North Atlantic Oscillation (NAO) (e.g., Visbeck et al., 2001). Studies show that the NAST plays a key role in bridging the winter NAO and the EASM (Wu et al., 2009; Zuo et al., 2012, 2013). So far, little attention has been paid to predicting the NAST, and the predictability of its interannual variations during summer appears to be low (Wang X. et al., 2015), probably because of unrealistic local air–sea interaction in models.

      Rainfall and surface temperature anomalies in China are directly affected by the WPSH as well as by the EASM and EAWM, and a great number of studies have focused on these three systems (e.g., Chang, 2004; Ding and Chan, 2005; Wang et al., 2013; Xue et al., 2015). From the climate perspective, adequate prediction of the summer WPSH, the EASM, and the EAWM is of great importance for predicting rainfall and surface temperature anomalies in China in summer and winter, and is thus an indispensable part of short-term climate prediction (Wang et al., 2009; Zhou and Zou, 2010; Liu et al., 2015; Wang H. et al., 2015; Cheng et al., 2016).

      Anomalies in these primary climate phenomena usually have significant impacts on China's climate and may bring considerable predictability into local climate predictions. In other words, as the physical basis of short-term climate prediction, predictions of these major climate phenomena provide a crucial reference for the prediction of rainfall, air temperature, and other variables in China (e.g., Wang et al., 2005). Building on previous research, the world's main operational climate centers, such as the Climate Prediction Center of the National Oceanic and Atmospheric Administration (NOAA) and the UK Met Office Hadley Centre, routinely generate and issue a suite of monitoring, analysis, and prediction products for climate phenomena using climate models that are capable of reproducing these phenomena well (e.g., Saha et al., 2014; MacLachlan et al., 2015). In contrast, although the Beijing Climate Center (BCC) has constructed a relatively complete chain of operational monitoring products for these climate phenomena, the development of corresponding routine predictions has lagged behind, even though the first version of its ENSO prediction system was set up more than 15 years ago (Ding et al., 2002); indeed, in recent years that system has been unable to adequately capture the increasing complexity of ENSO behavior.

      Since the latter part of 2012, considerable time and effort have been invested at the BCC in developing a climate phenomenon prediction system (CPPS). The system covers the prediction of the aforementioned primary climate phenomena (ENSO, MJO, AO, IOBM/IOD/SIOD, WPSH, and EASM/EAWM), which are well known to affect climate anomalies in China directly or indirectly. A diagram illustrating the construction of the CPPS is given in Fig. 1a, including the components for predicting the climate phenomena with both a dynamical climate system model and physics-based statistical models. Two other closely related components, for studying climate phenomena and for climate prediction applications (Figs. 1b, c), are also illustrated in Fig. 1.

      Figure 1.  Schematic diagrams of the BCC’s (a) CPPS, (b) SEMAP2.1, and (c) IMPRESS2.0.

      In this paper, we first report recent progress in research and development of the CPPS at the BCC, including an introduction to the basic ideas, key methods, operational techniques and systems, and some of the main products and their applications. The performance of the CPPS in predicting climate phenomena is then systematically evaluated. The remainder of the paper is organized as follows. Data and verification methods are introduced in Section 2; Sections 3 through 7 describe the different components of the CPPS, along with evaluations of their skill and real-time applications of the prediction products. A summary and further discussion are provided in Section 8.

    2.   Data and verification method
    In this study, all observational SST indices are calculated by using the Optimum Interpolation Sea Surface Temperature, version 2 (OISSTv2) product (Reynolds et al., 2002), and the atmospheric indices are computed by using the NCEP–NCAR reanalysis (R1) data (Kalnay et al., 1996). To verify the MJO hindcasts, the observed daily RMM indices are calculated by using the NCEP–NCAR R1 zonal wind data and NOAA outgoing longwave radiation (OLR) data during the period 1994–2014, following the method of Wheeler and Hendon (2004). The winter mean is taken as the average of December–January–February (DJF) values and the summer mean as the average of June–July–August (JJA) values.

      The hindcast data for seasonal prediction used in this study are generated by the second-generation operational seasonal prediction system at the BCC/China Meteorological Administration (CMA), which is based on BCC_CSM1.1(m) [BCC Climate System Model, version 1.1 (moderate resolution)] (Wu et al., 2013, 2014). Its atmospheric component is the BCC AGCM, with T106 horizontal resolution and 26 vertical hybrid sigma/pressure levels (Wu et al., 2010), and its land component is the BCC Atmosphere and Vegetation Interaction Model, version 1.0. The ocean component of BCC_CSM1.1(m) is the Geophysical Fluid Dynamics Laboratory Modular Ocean Model, version 4, with 40 levels (Griffies et al., 2005), and the sea ice component is the Sea Ice Simulator (Winton, 2000). All of these components are coupled without flux adjustment. The hindcasts are initialized in each calendar month from 1991 to 2014, with a 13-month integration, and real-time forecasts started in 2015. Both the hindcasts and real-time forecasts, spanning January 1991 through August 2016, are used to evaluate the ENSO predictions. The atmosphere is initialized from the four-times-daily NCEP–NCAR R1 data, and the ocean from the ocean temperature of the Global Ocean Data Assimilation System (GODAS) using a nudging scheme with a two-day timescale; sea ice is not initialized. The BCC ensemble system includes 24 members: 9 are generated with an empirical singular vector scheme (Cheng et al., 2010), and 15 are generated with a lagged average forecasting scheme in which the atmospheric states on the first 5 days of each month and the ocean states on the first 3 days are combined to produce 15 perturbations for BCC_CSM1.1(m), as sketched below.
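      The lagged-average combinations can be made explicit with a short sketch. This is only an illustration of the counting (5 atmospheric start days paired with 3 oceanic start days gives 15 members); the day indices and dictionary keys are hypothetical and not the operational system's bookkeeping.

```python
from itertools import product

# Lagged average forecasting (LAF): atmospheric states from the first 5 days of
# the month are paired with ocean states from the first 3 days, giving
# 5 x 3 = 15 perturbed initial conditions.
atm_days = [1, 2, 3, 4, 5]
ocn_days = [1, 2, 3]

laf_members = [{"atm_day": a, "ocn_day": o} for a, o in product(atm_days, ocn_days)]
assert len(laf_members) == 15
```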

      The hindcast data for subseasonal prediction of the MJO and, more generally, ISV are based on BCC_AGCM2.2 and an upgraded version of BCC_CSM1.1(m), i.e., BCC_CSM1.2, in which only the number of vertical levels has been increased from 26 to 40. BCC_AGCM2.2 is initialized with NCEP reanalysis data for the period 1994–2014 and driven by SSTs held fixed at the OISSTv2 observations of each initial time. BCC_CSM1.2 is initialized with atmospheric conditions from the NCEP reanalysis data for 1994–99 and the NCEP FNL Operational Global Analysis data for 2000–14 (http://rda.ucar.edu/datasets/ds083.2/), and its ocean component is initialized directly from the BCC global ocean data assimilation system (Zhou et al., 2016). For both models, four member runs (0000, 0600, 1200, and 1800 UTC) are initialized each day, starting 1 January 1994, and each run lasts 60 days. The daily mean prediction is calculated as the mean of the eight ensemble members generated by the uncoupled and coupled models. The model outputs are interpolated to a 2.5° × 2.5° horizontal grid prior to verification.

      To verify the daily predictions, we first derive forecast anomalies by subtracting the model climatology from the raw forecast data, where the climatology is computed from the daily hindcast data during 1991–2010 and is a function of the initial calendar date and lead day. The interannual component of the daily prediction is removed by subtracting the mean of the previous 120 days, constructed by merging observations and forecasts prior to the target date. For the observational data, daily anomalies are derived by removing the observed climatology for 1981–2010, and the same base period is used for the verification of the seasonal predictions.
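      A compact sketch of this anomaly derivation is given below, assuming precomputed inputs; the array names and shapes are illustrative only and are not taken from the BCC system.

```python
import numpy as np

def daily_forecast_anomaly(forecast, model_clim, history_120d):
    """Derive the daily forecast anomaly as described in the text.

    forecast     : (n_lead,) raw forecast values for one start date.
    model_clim   : (n_lead,) model climatology for the same initial calendar
                   date, as a function of lead day (assumed precomputed from
                   the 1991-2010 daily hindcasts).
    history_120d : (n_lead, 120) for each target date, the 120 preceding daily
                   values merged from observations and earlier forecasts.
    """
    anom = forecast - model_clim              # remove lead-dependent model climatology
    anom -= history_120d.mean(axis=1)         # remove previous 120-day mean (interannual part)
    return anom
```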

      The monthly-mean predictions of the climate phenomena are verified by calculating the temporal correlation coefficient (TCC) skill score between the predicted and observed indices representing each phenomenon. The MJO predictions are verified with the bivariate correlation (COR), RMSE, and mean-square skill score (MSSS) of the RMM indices (Lin et al., 2008).
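      For reference, the bivariate RMM scores can be written in a few lines. The sketch below follows the commonly used bivariate definitions associated with Lin et al. (2008); the variable names are illustrative, and the MSSS here is computed against a zero-anomaly (climatological) reference forecast, which is an assumption made for this example.

```python
import numpy as np

def rmm_scores(f1, f2, o1, o2):
    """Bivariate RMM verification scores at one lead time.

    f1, f2 : forecast RMM1 and RMM2 over all forecast cases (1-D arrays).
    o1, o2 : the corresponding observed RMM1 and RMM2.
    Returns (COR, RMSE, MSSS).
    """
    num = np.sum(o1 * f1 + o2 * f2)
    cor = num / (np.sqrt(np.sum(o1**2 + o2**2)) * np.sqrt(np.sum(f1**2 + f2**2)))

    mse = np.mean((f1 - o1)**2 + (f2 - o2)**2)
    rmse = np.sqrt(mse)

    mse_clim = np.mean(o1**2 + o2**2)   # error of a forecast of zero anomalies
    msss = 1.0 - mse / mse_clim
    return cor, rmse, msss
```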

    3.   Prediction of ENSO
    At the BCC, a new-generation System of ENSO Monitoring, Analysis, and Prediction (SEMAP2.0) has been developed (Ren et al., 2016c), based on the latest climate system model [BCC_CSM1.1(m)] and on the dynamical mechanisms proposed for the two types of ENSO (e.g., Ren and Jin, 2013). SEMAP2.0 provides ENSO forecasts using three approaches: a physics-based statistical model, BCC_CSM1.1(m), and an analogue-based correction of the model's errors (this component is named the analogue–dynamical ENSO prediction sub-system, ADEPS) (Ren et al., 2014). In the statistical model, the predictors are the zonal-mean equatorial Pacific thermocline variation, the western tropical Pacific westerly anomaly, and the Indian Ocean Dipole mode, which enter a multiple linear regression equation. A multi-method ensemble (MME) mean is calculated as the arithmetic average of the forecasts from the three approaches, as illustrated below. Recently, the system was upgraded to SEMAP2.1, with many additional tropical atmosphere–ocean monitoring and diagnosis products (see Fig. 1b).
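      As a minimal illustration of the statistical component and the equal-weight MME mean, the sketch below fits a linear regression on the three predictors named above and averages the three forecasts; the function and variable names are placeholders and do not reproduce the operational SEMAP code.

```python
import numpy as np

def fit_statistical_model(predictors, nino34):
    """Least-squares fit of the regression-based statistical component.

    predictors : (n_years, 3) matrix whose columns stand for the zonal-mean
                 equatorial-Pacific thermocline variation, the western tropical
                 Pacific westerly anomaly, and the IOD mode (placeholder inputs).
    nino34     : observed Nino3.4 index for the training years.
    """
    design = np.column_stack([np.ones(len(predictors)), predictors])
    coefs, *_ = np.linalg.lstsq(design, nino34, rcond=None)
    return coefs

def mme_mean(fcst_csm, fcst_adeps, fcst_stat):
    # Equal-weight arithmetic average of the three SEMAP forecasts.
    return (fcst_csm + fcst_adeps + fcst_stat) / 3.0
```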

      First, we focus on the results of evaluating the ENSO predictions in SEMAP2.1. As direct indicators of ENSO behavior, tropical Pacific SST anomaly indices provide the basic statistics for operational ENSO monitoring and prediction. The traditional Niño3.4, Niño3, and Niño4 SST anomaly indices are defined as the SST anomalies averaged over the regions (5°S–5°N, 170°–120°W), (5°S–5°N, 150°–90°W), and (5°S–5°N, 160°E–150°W), respectively. The Niño cold-tongue index (NiñoCTI) and Niño warm-pool index (NiñoWPI) proposed by Ren and Jin (2011) are also used, representing the EP and CP ENSO types, respectively. They are defined as

      $$\left\{ \begin{array}{l} N_{\rm CT} = N_3 - \alpha N_4\\ N_{\rm WP} = N_4 - \alpha N_3 \end{array} \right., \qquad \alpha = \left\{ \begin{array}{ll} 2/5, & N_3 N_4 > 0\\ 0, & {\rm otherwise} \end{array} \right. \qquad (1)$$

      where $N_{\rm CT}$ and $N_{\rm WP}$ stand for NiñoCTI and NiñoWPI, and $N_3$ and $N_4$ for the Niño3 and Niño4 indices, respectively. In the new operational standard for monitoring ENSO events issued by the CMA in April 2016, the Niño3.4 index is used to define the start/end time, duration, and intensity of an ENSO event, while NiñoCTI and NiñoWPI are used to determine the event type.
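      Equation (1) maps directly onto a few lines of code; the sketch below (array names illustrative) computes NiñoCTI and NiñoWPI from Niño3 and Niño4 anomaly series.

```python
import numpy as np

def nino_ct_wp(nino3, nino4):
    """NinoCTI and NinoWPI of Ren and Jin (2011), following Eq. (1).

    nino3, nino4 : monthly Nino3 and Nino4 SST anomaly series (equal length).
    """
    alpha = np.where(nino3 * nino4 > 0, 2.0 / 5.0, 0.0)   # alpha = 2/5 when same sign, else 0
    n_ct = nino3 - alpha * nino4                           # cold-tongue (EP-type) index
    n_wp = nino4 - alpha * nino3                           # warm-pool (CP-type) index
    return n_ct, n_wp
```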

      A rolling independent validation is used for 1996–2015: in addition to the years before 1996, the number of years used to train the statistical model increases as the target year advances. Figure 2 presents the TCC skill for the different approaches and Niño indices. As seen in Fig. 2a, BCC_CSM1.1(m) shows relatively reliable prediction skill during 1996–2015, with TCCs of around 0.7 at the 6-month lead time, and its analogue-based correction is slightly more skillful than the original model prediction. The statistical model appears less skillful, because its initial values are generally half a month earlier than those of the two model-based approaches. Since the three approaches possess comparable skill, their MME-mean prediction is constructed simply with equal weights. It is clear from Fig. 2b that the MME mean in SEMAP2.1 is more skillful, with TCCs of 0.8, 0.75, and 0.71 for the Niño3.4, Niño3, and NiñoCTI indices at the 6-month lead time, based on the 20-yr independent validation during 1996–2015. Thus, the MME mean improves the prediction skill of the indices compared with the BCC model alone. In particular, for the Niño3.4 index, the period of useful prediction, with TCCs above 0.5, extends to around one year, and the TCC of the MME mean at the 6-month lead time is 0.8, a relatively high level of skill compared with other models (e.g., Latif et al., 1998; Jin et al., 2008; Wang et al., 2009; Barnston et al., 2012; Kumar et al., 2015).

      Figure 2.  TCC skill scores for (a) the Niño3.4 index in BCC_CSM1.1(m), ADEPS (analogue–dynamical ENSO prediction sub-system), the statistical model, and their MME-mean predictions in SEMAP2.1, and (b) those for the Niño3, Niño3.4, Niño CT, and Niño WP indices in the MME-mean predictions, based on the independent validations during 1996–2015. Panels (c–f) are the MME-mean TCC scores of the four indices as a function of lead month (x-axis) and initial month (y-axis), and panels (g–j) are the same but with target month as the y-axis.
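As a schematic of the verification procedure described above (not the operational SEMAP2.1 code), the equal-weight MME mean and its TCC against observations could be computed as follows; the forecast series here are synthetic placeholders.

```python
import numpy as np

def mme_mean(*member_forecasts):
    """Equal-weight multi-method ensemble mean of several forecast series."""
    return np.mean(np.stack(member_forecasts, axis=0), axis=0)

def tcc(forecast, observation):
    """Temporal correlation coefficient between forecast and observation."""
    return np.corrcoef(np.asarray(forecast, float),
                       np.asarray(observation, float))[0, 1]

# Hypothetical 20-yr series of 6-month-lead Nino3.4 predictions (K)
rng = np.random.default_rng(0)
obs        = rng.normal(size=20)
stat_model = obs + rng.normal(scale=0.8, size=20)
bcc_csm    = obs + rng.normal(scale=0.7, size=20)
adeps      = obs + rng.normal(scale=0.7, size=20)

mme = mme_mean(stat_model, bcc_csm, adeps)
print("TCC of the MME mean:", round(tcc(mme, obs), 2))
```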

Furthermore, for the first time, SEMAP2.1 provides separate forecasts of the two types of ENSO. The skill for the EP type appears higher than that for the CP type, and both are lower than those for the Niño3.4 and Niño3 indices. Recent efforts to evaluate the skill of dynamical models in predicting the two types of ENSO (Hendon et al., 2009; Jeong et al., 2012, 2015; Yang and Jiang, 2014; Imada et al., 2015) likewise indicate better performance for EP ENSO than for CP ENSO, and this conclusion is confirmed by the SEMAP2.1 results in this study.

The dependence of the MME-mean prediction skill of the four Niño indices upon the initial calendar month is shown in Figs. 2c–f. As is well known, the TCC skill of ENSO prediction is usually subject to a fast decline during boreal spring, traditionally called the spring predictability barrier (e.g., Webster and Yang, 1992; McPhaden, 2003; Duan and Hu, 2016; Duan et al., 2016). The prediction scores vary greatly with the initial calendar month, and clear barriers of skill are apparent, with the TCCs dropping sharply during springtime; the barrier is even clearer when the skill is considered as a function of the target month (Figs. 2g–j). However, such barriers appear to be much weaker for the MME-mean predictions than for the other methods in SEMAP2.0 (Ren et al., 2016c). Recently, by defining a new quantitative measure applicable to the predictability characteristics of dynamical predictions, Ren et al. (2016a) showed that the two ENSO types have distinct persistence barriers in terms of timing and intensity.
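Lead/initial-month skill maps of the kind shown in Figs. 2c–f can be assembled by binning the forecast cases by their initial calendar month and correlating each lead separately. The sketch below illustrates this with synthetic data; the array shapes and month coding are assumptions of the example only.

```python
import numpy as np

def tcc_by_initial_month(fcst, obs, init_months, max_lead=12):
    """TCC as a function of (lead month, initial calendar month).

    fcst, obs:    arrays (n_cases, max_lead + 1), one row per forecast case
    init_months:  array (n_cases,) of initial calendar months (1-12)
    Returns an array (max_lead + 1, 12) of TCC scores."""
    fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
    init_months = np.asarray(init_months)
    skill = np.full((max_lead + 1, 12), np.nan)
    for m in range(1, 13):
        sel = init_months == m
        if sel.sum() < 3:          # too few cases to correlate
            continue
        for lead in range(max_lead + 1):
            skill[lead, m - 1] = np.corrcoef(fcst[sel, lead], obs[sel, lead])[0, 1]
    return skill

# Hypothetical example: 20 years x 12 initial months, leads 0-12
rng = np.random.default_rng(6)
n_cases, n_leads = 240, 13
obs = rng.normal(size=(n_cases, n_leads))
fcst = obs + rng.normal(scale=1.0, size=(n_cases, n_leads))
init_months = np.tile(np.arange(1, 13), 20)
print(tcc_by_initial_month(fcst, obs, init_months).shape)   # (13, 12)
```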

To verify the performance of SEMAP2.1 in predicting recent ENSO events, Fig. 3 displays the MME-mean predictions produced at each initial month since January 2014, in comparison with observations. The SEMAP2.1 predictions capture the variations in ENSO behavior from early 2014, except that the fastest-growing stage of the SST index during summer 2015 is not predicted well. In particular, the predictions reflect the SST fluctuations in 2014 to some degree and, when initiated from July 2015, appropriately capture the peak time, the strong intensity in 2015, and the decay of this ENSO event. Overall, for the 2014–16 ENSO event, the SEMAP2.1 MME-mean predictions provide useful information, indicating that the evolution of this event can be reasonably predicted 3–6 months in advance. The strong intensity of this event during its mature phase is, however, difficult to forecast when initiated in June 2015 or earlier, a problem common to other operational prediction models (Ren et al., 2016c).

      Figure 3.  Plume plot of Niño3.4 index results (K) based on observation (black curve) and the updated SEMAP2.1 MME (multi-method ensemble)-mean predictions, as initiated in each month since January 2014. The predictions initiated in boreal winter, spring, summer, and autumn are indicated by the blue, green, orange, and purple lines.

    4.   Prediction of the MJO
• Prediction of the MJO with dynamical climate models is a challenging issue, depending strongly on the performance of the models in simulating the MJO's features, the quality of the data used to initialize the model, and the techniques for initializing the models and generating ensemble members (e.g., Ren et al., 2015). At the end of 2014, the BCC established version 1.0 of an ISV/MJO monitoring and prediction system (IMPRESS1.0), based on BCC_AGCM2.2 with the SST persisted from the initial time as the lower boundary condition; verification shows fair skill out to 16–17 days, at which point the RMM COR score drops below 0.5 (Ren et al., 2015; Wu et al., 2016a). Further development of the initialization schemes and ensemble method has led to improvements at the BCC in predicting the MJO (Ren et al., 2016b). Recently, this first version of the MJO prediction system was upgraded to version 2.0 (IMPRESS2.0) by constructing a new ensemble scheme that combines the forecasts initialized at four times per day from the atmosphere-only BCC_AGCM2.2 and the coupled BCC_CSM1.2 (see Fig. 1c). The new ensemble scheme can be expressed as follows:

$${\rm Ens\_fcst}\left( t_0 \right) = \frac{1}{6}\left[ \sum_{i = t_0 - 2}^{t_0} {\rm Ens}_{\rm BCC\_AGCM2.2}\left( i \right) + \sum_{i = t_0 - 2}^{t_0} {\rm Ens}_{\rm BCC\_CSM1.2}\left( i \right) \right],$$ (2)

where Ens_fcst is the combined ensemble forecast and ${\rm Ens}(i)$ is the ensemble mean of a model's forecasts initialized at the four times on day $i$, with $i = t_0-2$, $t_0-1$, and $t_0$ denoting the two days preceding, and the day of, the initial day $t_0$.
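A minimal sketch of the combined lagged ensemble in Eq. (2), assuming each model supplies one ensemble-mean forecast per initialization day (i.e., the mean over that day's four initial times); the dictionaries and array sizes below are placeholders, not IMPRESS2.0 data.

```python
import numpy as np

def lagged_ensemble(ens_agcm, ens_csm, t0):
    """Combined lagged-average ensemble of Eq. (2).

    ens_agcm, ens_csm: dicts mapping an initialization day to that day's
    ensemble-mean forecast (a 1-D array over forecast lead days).
    Returns the equal-weight average of the six members initialized on
    days t0-2, t0-1, and t0 from the two models."""
    members = [ens_agcm[i] for i in (t0 - 2, t0 - 1, t0)] + \
              [ens_csm[i] for i in (t0 - 2, t0 - 1, t0)]
    return np.mean(np.stack(members, axis=0), axis=0)

# Hypothetical example with 30-day forecasts initialized on days 8-10
rng = np.random.default_rng(1)
agcm = {d: rng.normal(size=30) for d in (8, 9, 10)}
csm = {d: rng.normal(size=30) for d in (8, 9, 10)}
print(lagged_ensemble(agcm, csm, t0=10).shape)
```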

Figure 4 shows the skill scores of the RMM indices predicted by this new ensemble scheme in IMPRESS2.0 over the period 1994–2014. Overall, taking 0.5 as the baseline of useful COR skill, the MJO can be predicted up to a lead time of 20 days. In addition, there is a clear seasonal dependence of the prediction skill. The highest COR score is in autumn (September–November), when the number of days with useful COR skill reaches 26, compared with only 17 days in summer (June–August); in winter (December–February) and spring (March–May), the useful COR skill extends to 20 days. In terms of the RMSE and MSSS, the useful prediction skill of the RMM indices extends beyond 30 days, taking $\sqrt 2 $ and 0 as the respective baselines (Lin et al., 2008). These results indicate that the MJO prediction skill of IMPRESS2.0 at the BCC is comparable with that of other models (see the review in the introduction of this study).

      Figure 4.  Skill scores for the RMM (real-time multivariate MJO) indices predicted by IMPRESS2.0, as initialized in spring (SPR), summer (SUM), autumn (AUT), winter (WIN), and all seasons (ALL). (a) Correlation coefficient (COR), (b) root-mean-square error (RMSE), and (c) mean-square skill score (MSSS). The dashed lines are the references 0.5 in (a), $\sqrt 2 $ in (b), and 0 in (c).
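The COR, RMSE, and MSSS shown in Fig. 4 are commonly computed as bivariate scores over the (RMM1, RMM2) pair (e.g., Lin et al., 2008). The sketch below follows that convention, with the zero-anomaly climatological forecast assumed as the MSSS reference; the input series are synthetic, and this is an illustration rather than the exact IMPRESS2.0 verification code.

```python
import numpy as np

def rmm_scores(obs_rmm1, obs_rmm2, fc_rmm1, fc_rmm2):
    """Bivariate COR, RMSE, and MSSS for RMM forecasts at one lead time,
    following the definitions commonly used for MJO verification
    (e.g., Lin et al., 2008)."""
    a1, a2 = np.asarray(obs_rmm1, float), np.asarray(obs_rmm2, float)
    b1, b2 = np.asarray(fc_rmm1, float), np.asarray(fc_rmm2, float)
    cor = np.sum(a1 * b1 + a2 * b2) / (
        np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(b1**2 + b2**2)))
    mse = np.mean((a1 - b1) ** 2 + (a2 - b2) ** 2)
    rmse = np.sqrt(mse)
    mse_clim = np.mean(a1**2 + a2**2)   # error of a zero-anomaly forecast
    msss = 1.0 - mse / mse_clim
    return cor, rmse, msss

# Synthetic example series for a single lead time
rng = np.random.default_rng(2)
o1, o2 = rng.normal(size=100), rng.normal(size=100)
f1 = o1 + rng.normal(scale=0.5, size=100)
f2 = o2 + rng.normal(scale=0.5, size=100)
print(rmm_scores(o1, o2, f1, f2))
```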

The marked seasonal dependence of the prediction skill in Fig. 4 is generally consistent with previous studies. For example, Rashid et al. (2011) showed higher prediction skill of the RMM indices in cold seasons and lower skill in warm seasons. However, why the best predictions are obtained in autumn rather than in winter, as found by Wu et al. (2016b), is worthy of further examination. Moreover, the evaluation by Wu et al. (2016b) of the real-time forecasts in 2015 by IMPRESS1.0 shows that the system successfully captured the initiation and evolution of the three to four strong MJO events during 2015, with a useful prediction skill of 18 days. Further evaluation of IMPRESS2.0 is needed.

    5.   Prediction of the SST modes of the Indian Ocean and North Atlantic
    • As is well-known, the IOBM is closely associated with simultaneous ENSO but the IOD mode tends to be related to ENSO with different lead times (e.g., Xie et al., 2009; Izumo et al., 2010). Developments at the BCC in predicting the SST modes of the Indian Ocean and North Atlantic started slightly later than for ENSO and have been based mainly on the outputs of the BCC_CSM1.1(m) seasonal prediction system.

• The prediction skill of BCC_CSM1.1(m) with respect to the Indian Ocean SST modes is evaluated in this subsection. The definitions of the IOBM, IOD, and SIOD indices are listed in Table 1, and the three indices are compared quantitatively between observations and the model predictions.

Climate mode    Index definition
ENSO            Niño3.4 SST index: SST anomaly averaged over the region (5°S–5°N, 170°–120°W); other Niño indices are defined in the text of Section 3
MJO             RMM indices of Wheeler and Hendon (2004): time series of the first two principal components of the multivariate EOF of 850-/200-hPa zonal wind and OLR
IOBM            Area-averaged SST anomalies over (20°S–20°N, 40°–110°E)
IOD             Area-averaged SST anomaly difference between (10°S–10°N, 50°–70°E) and (10°S–0°, 90°–110°E) (Saji et al., 1999)
SIOD            Area-averaged SST anomaly difference between (45°–30°S, 45°–75°E) and (25°–15°S, 80°–100°E) (Behera and Yamagata, 2001)
NAST            Difference between the SST anomalies averaged over (34°–44°N, 72°–62°W) and the sum of the regional-averaged SST anomalies over (0°–18°N, 46°–24°W) and (44°–56°N, 40°–24°W) (Zuo et al., 2012)
AO              Normalized principal component of the leading EOF of monthly mean SLP north of 20°N (Thompson and Wallace, 2000)
WPSH            Normalized summer-mean 850-hPa geopotential height anomalies over (15°–25°N, 115°–150°E) (Wang et al., 2013)
EASM            (1) Difference in the normalized summer-mean SLP over 20°–50°N, between 110°E and 160°E, at 5° intervals (Shi et al., 1996); (2) difference in the summer-mean 850-hPa zonal wind anomalies averaged over (10°–20°N, 100°–150°E) and (25°–30°N, 100°–150°E) (Zhang et al., 2003)
EAWM            (1) As per the EASM index definition of Shi et al. (1996), but for the winter mean; (2) winter-mean SLP anomalies averaged at (60°N, 100°E), (60°N, 90°E), and (50°N, 100°E) (Guo, 1994)
Notes: RMM, real-time multivariate MJO; IOBM, Indian Ocean basin mode; IOD, Indian Ocean Dipole; SIOD, subtropical IOD; NAST, North Atlantic SST Tripole; AO, Arctic Oscillation; WPSH, western Pacific subtropical high; EASM, East Asian summer monsoon; EAWM, East Asian winter monsoon.

      Table 1.  Definitions of the indices of the primary climate modes
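The area-averaged SST indices in Table 1 can be computed from a gridded SST anomaly field with latitude weighting. Below is a minimal sketch for the IOD index using the Saji et al. (1999) boxes listed above; the grid and anomaly field are synthetic, and the cosine-latitude weighting is an assumption of the example rather than a stated choice of the BCC system.

```python
import numpy as np

def box_mean(ssta, lat, lon, lat_bounds, lon_bounds):
    """Cosine-latitude-weighted mean of an SST anomaly field (lat, lon)
    over a rectangular box given as (south, north) and (west, east)."""
    lat_sel = (lat >= lat_bounds[0]) & (lat <= lat_bounds[1])
    lon_sel = (lon >= lon_bounds[0]) & (lon <= lon_bounds[1])
    sub = ssta[np.ix_(lat_sel, lon_sel)]
    w = np.cos(np.deg2rad(lat[lat_sel]))[:, None] * np.ones(lon_sel.sum())
    return np.sum(sub * w) / np.sum(w)

def iod_index(ssta, lat, lon):
    """IOD index: western box (10S-10N, 50-70E) minus eastern box
    (10S-0, 90-110E), as listed in Table 1 (Saji et al., 1999)."""
    west = box_mean(ssta, lat, lon, (-10, 10), (50, 70))
    east = box_mean(ssta, lat, lon, (-10, 0), (90, 110))
    return west - east

# Synthetic 1-degree SST anomaly field, for illustration only
lat = np.arange(-30, 31, 1.0)
lon = np.arange(40, 121, 1.0)
ssta = np.random.default_rng(3).normal(scale=0.5, size=(lat.size, lon.size))
print(round(iod_index(ssta, lat, lon), 3))
```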

      The IOBM is the first leading mode of Indian Ocean SST, explaining 38% of the total SST variance, and usually develops in boreal winter before reaching its peak phase in the following spring. As shown in Figs. 5a, 5d, and 5g, BCC_CSM1.1(m) is capable of predicting the IOBM, with a TCC higher than 0.5 for all lead months. Specifically, the IOBM prediction as initialized in boreal winter is the most reliable during its developing phase. The successful prediction of the IOBM is possibly related to the relatively good ENSO prediction skill of BCC_CSM1.1(m) (Fig. 2), due to the fact that the IOBM is strongly affected by ENSO (Klein et al., 1999).

      Figure 5.  TCC skill scores of BCC_CSM1.1(m) for the (a, d, g) IOBM (Indian Ocean basin mode), (b, e, h) IOD (Indian Ocean Dipole), and (c, f, i) SIOD (subtropical IOD) indices. (a–c) TCCs for all months, (d–f) dependence of the TCCs on initial calendar months, and (g–i) dependence of the TCCs on target calendar months.

The IOD, as the second leading mode of Indian Ocean SST, explains 11% of the total SST variance. Similar to ENSO, the IOD is characterized by pronounced phase locking, with its peak phase in boreal autumn. As seen in Figs. 5b, 5e, and 5h, the prediction skill for the IOD is much lower than that for the IOBM: the TCCs quickly drop below 0.5 after the first two lead months. Predicting the IOD is difficult, and Shi et al. (2012) showed that IOD prediction is unreliable when the lead time exceeds three months. IOD prediction becomes relatively more reliable if initialized in boreal summer, with TCCs above 0.5 during the first six months, consistent with the fact that the IOD usually develops in boreal summer and peaks in autumn. The IOD prediction skill is poor when initialized in boreal winter and, in fact, drops abruptly in boreal winter whenever the prediction is initialized after May, reflecting the so-called winter prediction barrier of the IOD (Luo et al., 2007).

      The SIOD is another major mode in the subtropical southern Indian Ocean, which usually develops in early boreal winter and then peaks during January–March. As seen in Figs. 5c, 5f, and 5i, the TCCs drop below 0.5 only after one month, indicating that the SIOD has a very short predictability period of around one month, and is difficult to predict skillfully using BCC_CSM1.1(m).

Figure 6 shows the real-time predictions of the three modes issued in October 2015. In comparison with observation, the IOBM prediction is successful: BCC_CSM1.1(m) adequately predicts a continuing positive IOBM phase from October 2015, which has been validated by the updated observations in recent months, as seen in Fig. 6. The decaying phase of the positive IOD event since September 2015 is also successfully predicted by the model, despite an overly fast decay rate in the prediction. It is interesting that BCC_CSM1.1(m) is able to properly predict the negative phase of the SIOD starting from November 2015. Overall, the real-time prediction of the Indian Ocean SST modes issued in October 2015 is encouraging. The strong ENSO event that had been developing rapidly since summer 2015 may have provided favorable background signals for predicting these modes.

      Figure 6.  The (a) IOBM (Indian Ocean basin mode), (b) IOD (Indian Ocean Dipole), and (c) SIOD (subtropical IOD) indices (K) in observations (blue bars and black dots) and the predictions (red bars) made by BCC_CSM1.1(m), as initiated in October 2015.

    • The NAST is the leading SST mode in the North Atlantic, with a typical tripole pattern. In its positive phase, positive SST anomalies exist in the Northwest Atlantic and negative anomalies in the subpolar and tropical Atlantic Ocean, and vice versa. Despite the importance of the NAST in climate prediction, the retrospective forecasts of global middle–low latitude SST during 1981–2007 in BCC_CSM1.1(m) show low skill with respect to summer NAST prediction (Wang X. et al., 2015). Figure 7 shows the results from evaluating the capability of BCC_CSM1.1(m) in predicting the NAST mode. We follow the definition of Zuo et al. (2012), which is listed in Table 1 and denoted as the NAST index (NASTI).

      Figure 7.  Temporal correlation coefficient (TCC) skill scores of NASTI (North Atlantic SST Tripole index) prediction by BCC_CSM1.1(m) for (a) all months, and the dependence of the TCCs on (b) the initial calendar month and (c) the target month, where the dashed line denotes statistical significance at the 99% confidence level based on the Student’s t-test.

As seen in Fig. 7a, BCC_CSM1.1(m) has a TCC of 0.78 between its prediction and observation at the 0-month lead time, which reduces to 0.59 at the 1-month lead time, and the TCCs drop below the 0.01 significance level at lead times of 7–8 months. In terms of the dependence of the TCCs upon the initial calendar month, Fig. 7b shows that the skill scores are relatively higher for initial months during January–July, for which useful predictions (TCC above 0.5) can be obtained up to 2 months in advance. Meanwhile, Fig. 7c shows that, in terms of target month, the NASTI predictions during March–September possess higher skill than in other months, whereas the model's prediction skill is lowest for target months in boreal autumn.
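For reference, significance thresholds for a TCC of the kind quoted here and in the figure captions can be estimated from the Student's t-test for a correlation coefficient. The sketch below assumes a two-sided test and independent yearly samples (serial correlation would reduce the effective sample size); it is illustrative only and not necessarily the exact procedure used for the figures.

```python
import numpy as np
from scipy.stats import t

def tcc_threshold(n, p=0.01):
    """Smallest TCC significant at the two-sided level p for n independent
    forecast-observation pairs, from the Student's t-test for correlation."""
    tcrit = t.ppf(1.0 - p / 2.0, df=n - 2)
    return tcrit / np.sqrt(n - 2 + tcrit ** 2)

# e.g., a 24-yr hindcast period, as used in this section
print(round(tcc_threshold(24, p=0.01), 2))
```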

      According to the requirements of the March prediction consultation meeting held at the BCC every year, we also evaluate the model hindcast skill of the spring and summer NASTI initiated in March. As shown in Figs. 8a and 8b, during the 24-yr period (1991–2014), the TCC of the April–May mean NASTI between the hindcast and observation is 0.74, and the TCC of the summer mean NASTI is relatively lower, at 0.50. This is still useful and valuable in terms of providing an outlook regarding the EASM.

      Figure 8.  NASTI (North Atlantic SST Tripole index) results from the (a) April–May mean and (b) June–August mean hindcasts of BCC_CSM1.1(m), as initiated in March (red lines), and from OISSTv2 (black lines). The TCC (temporal correlation coefficient, corr) between the hindcast and observed NAST indices is given in the top-right corner of each panel. The monitoring and prediction products of the NASTI, as initiated in October 2015, are shown in (c), in which the solid bars represent the monitoring of the past 12 months using OISSTv2, the hatched bars are the forecasts for the future 12 months according to BCC_CSM1.1(m), and the black dots are the observations from OISSTv2.

Figure 8c presents the real-time monitoring and prediction products of the NASTI issued in October 2015. Over the preceding year, the observed index remained in a positive NAST phase. The forecasts initiated in October 2015 with BCC_CSM1.1(m) show continued positive values in the following half-year, with weakly negative values appearing thereafter. As of April 2016, this prediction has verified well against the observed NASTI values, although the predicted amplitude during winter is somewhat weaker than observed.

    6.   Prediction of the AO
• The AO, the dominant mode of extratropical low-frequency variability in the Northern Hemisphere (NH), is typically characterized by a seesaw in pressure between the Arctic and the NH midlatitudes. The AO pattern is usually defined as the leading empirical orthogonal function (EOF) of monthly mean sea level pressure (SLP) north of 20°N (Thompson and Wallace, 2000). For the EOF analysis, the SLP is weighted by the square root of the cosine of latitude (North et al., 1982), and the AO index is defined as the normalized principal component time series. To evaluate the prediction skill for the AO, the model-predicted AO index is first calculated by projecting the monthly mean SLP anomalies of the hindcast onto the observed AO EOF pattern of the same period. The TCCs between the observed and predicted AO indices are then calculated to represent the AO prediction skill of BCC_CSM1.1(m).
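A schematic of this procedure, not the BCC's operational code: the leading EOF is taken from sqrt(cos(lat))-weighted observed SLP anomalies north of 20°N, and the forecast index is obtained by projecting the predicted anomalies onto that pattern. The toy grid, the normalization of the forecast index, and the EOF sign handling here are assumptions of the example.

```python
import numpy as np

def ao_pattern_and_index(slp_anom_obs, slp_anom_fcst, lat):
    """Leading EOF of observed SLP anomalies north of 20N (weighted by
    sqrt(cos(lat))), the normalized observed PC (AO index), and a forecast
    AO index from projecting predicted anomalies onto the observed EOF.

    slp_anom_*: arrays (time, npoints) already restricted to >= 20N
    lat:        array (npoints,) of grid-point latitudes (deg)"""
    w = np.sqrt(np.cos(np.deg2rad(lat)))
    xw = slp_anom_obs * w                      # weighted observed anomalies
    # SVD of the weighted anomaly matrix; the leading right singular
    # vector is the AO spatial pattern in weighted space.
    # Note: the EOF sign is arbitrary and would normally be fixed by convention.
    _, _, vt = np.linalg.svd(xw, full_matrices=False)
    eof1 = vt[0]
    pc_obs = xw @ eof1
    pc_obs = (pc_obs - pc_obs.mean()) / pc_obs.std()
    pc_fcst = (slp_anom_fcst * w) @ eof1
    pc_fcst = (pc_fcst - pc_fcst.mean()) / pc_fcst.std()
    return eof1, pc_obs, pc_fcst

# Synthetic example on a coarse grid north of 20N, for illustration only
rng = np.random.default_rng(4)
lat = np.repeat(np.arange(20, 91, 10.0), 36)
obs = rng.normal(size=(120, lat.size))
fcst = obs + rng.normal(scale=0.5, size=obs.shape)
_, ao_obs, ao_fcst = ao_pattern_and_index(obs, fcst, lat)
print(round(np.corrcoef(ao_obs, ao_fcst)[0, 1], 2))
```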

Figure 9a shows the prediction skill of the monthly AO index as a function of lead time and initial calendar month. Overall, BCC_CSM1.1(m) shows limited capability for AO prediction. The maximum skill is only 0.75, at the 0-month lead time for predictions initiated in June, and it drops sharply to 0.41 at the 1-month lead time. More specifically, the prediction skill is significant only for forecasts initiated in December, for which, taking a TCC of 0.2 as the minimum threshold, the acceptable prediction limit is about 3 months; there is almost no prediction skill for other initial months. Indeed, if we take 0.5 as the reference for useful prediction, this model has essentially no AO prediction skill.

      Figure 9.  TCC (temporal correlation coefficient) skill scores of (a) monthly mean AO (Arctic Oscillation) index prediction by BCC_CSM1.1(m) as a function of lead month and initial calendar month, and (b) winter-mean AO index prediction by the same model as a function of lead month. The dashed line denotes statistical significance at the 95% confidence level based on the Student’s t-test.

The prediction skill for the winter (DJF) mean AO index is also assessed, as shown in Fig. 9b. Note that this model makes rolling predictions covering lead times of 0–12 months from each initial month; thus, the winter AO predictions at lead times of 0–10 months are generated from the 11 initial months December, November, ..., February, respectively. It can be seen from Fig. 9b that the TCC scores drop rapidly for lead times exceeding 1 month, although relatively significant scores are visible at lead times of 0 and 3 months. Thus, objectively speaking, BCC_CSM1.1(m) has almost no significant skill in predicting the winter AO index, even at a 1-month lead. By contrast, as pointed out in previous studies, several dynamical seasonal forecasting systems exhibit valuable prediction skill, higher than persistence predictions based on observation (Riddle et al., 2013; Kang et al., 2014; Sun and Ahn, 2015).

Figure 10 presents an example of real-time monitoring and prediction of the AO index by this model, initiated in October 2015. As might be expected, the predicted amplitude is much weaker than observed, although, interestingly, the sharp transition of the AO in both sign and magnitude from early to late winter is partly captured by the prediction. Further efforts are therefore needed to improve the skill of AO prediction in BCC_CSM1.1(m). Attention will also be given to prediction of the NAO, whose index is highly correlated with the AO index during winter and for which rapid progress has recently been made (e.g., Scaife et al., 2014; Fan et al., 2016).

      Figure 10.  The monitoring and prediction products of the AO (Arctic Oscillation) index (AOI) as initiated in October 2015, in which the solid bars are the monitoring of the past 12 months using the NCEP data (NRA1), the hatched bars are the forecasts for the future 12 months by BCC_CSM1.1(m), and the black dots are the observations from NRA1.

    7.   Prediction of the East Asian climate phenomena
    • In East Asia, the WPSH and winter/summer monsoon are the most important climate phenomena due to their direct and strong influences on climate anomalies and even disasters. Products that enable routine predictions are being developed based on BCC_CSM1.1(m) outputs.

• The WPSH, as an important part of the East Asian climate system, has a close relationship with weather and climate variations over eastern China. Here, we use the WPSH index (WPSHI) of Wang et al. (2013) (see Table 1) to evaluate the prediction performance of BCC_CSM1.1(m) for the WPSH. Note that the summer mean is the average of June, July, and August, and the model makes rolling predictions covering lead times of 0–12 months from each initial month. Therefore, the summer WPSH predictions at lead times of 0–10 months are generated from the 11 initial months June, May, ..., January, and the previous December, November, ..., August, respectively, as illustrated by the small helper below.
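The lead-time convention just described, in which a JJA-mean forecast at lead N is the one initialized N months before June, can be spelled out with a small helper; this is purely illustrative bookkeeping, not part of the prediction system.

```python
def jja_initial_month(lead_months):
    """Return (year_offset, calendar_month) of the initialization that
    verifies the JJA mean at the given lead (0-10 months):
    lead 0 -> June of the target year, lead 10 -> August of the year before."""
    month = 6 - lead_months          # count back from June
    year_offset = 0
    while month < 1:
        month += 12
        year_offset -= 1
    return year_offset, month

for lead in (0, 1, 5, 10):
    print(lead, jja_initial_month(lead))
```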

The skill for the summer WPSHI is shown in Fig. 11a. The TCC scores decrease slowly as the lead time (in months) increases. Notably, the TCC score reaches 0.8 at the 1-month lead time and still exceeds the 95% confidence level at lead times of up to 9 months. This means that BCC_CSM1.1(m) can provide a useful prediction of the summer WPSHI, with an expected TCC skill of about 0.5, when initialized as early as September of the previous year.

      Figure 11.  (a) TCCs (temporal correlation coefficients) of the JJA (June–August)-mean WPSHI (western Pacific subtropical high index) between BCC_CSM1.1(m) predictions and observation (NCEP data). The grey solid line represents statistical significance at the 95% confidence level. (b) The observational summer WPSHI (solid bars) and the corresponding hindcasts (black dots), as well as the prediction of the WPSHI in 2015 summer, as initiated in March 2015 (hatched bar), and the corresponding observation (green dot). COR is the correlation between the observation and prediction.

      The real-time monitoring and prediction products of the summer WPSHI, as initiated in March 2015, are shown in Fig. 11b. It is clear that large positive values of the WPSHI mostly occur in the summer decay phase of strong El Niño events, e.g., 1983, 1998, and 2010. Actually, in these summers, the flooding areas were mainly located over the Yangtze River valley and southern China, primarily due to the fact that a strongly positive WPSH favors more moisture transport into these areas, forming positive rainfall anomalies. In summer 2015, the observed value of the WPSHI is 0.11 and the prediction of BCC_CSM1.1(m) issued in March 2015 is 0.48. That is, the model prediction gives the correct sign of the WPSHI in summer 2015 at the lead time of 3 months, consistent with the observation.

• There are many indices for representing the EASM and EAWM. To meet the needs of the BCC's climate prediction operations, here, as the first stage in the development of monsoon prediction, we choose the two EASM indices defined by Shi et al. (1996) and Zhang et al. (2003), denoted as EASMI-S96 and EASMI-Z03, respectively, and the two EAWM indices defined by Shi et al. (1996) and Guo (1994), denoted as EAWMI-S96 and EAWMI-G94, respectively. Details of the definitions of these indices can be found in Table 1 and the corresponding references; a computational sketch of EASMI-Z03 is given below. Alternative indices will also be considered later.
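Like the Indian Ocean indices, EASMI-Z03 is a difference of box-averaged anomalies, here of JJA-mean 850-hPa zonal wind over the two domains listed in Table 1. The sketch below is a minimal illustration with a synthetic field; the cosine-latitude weighting is an assumption of the example.

```python
import numpy as np

def area_mean(field, lat, lon, lat_box, lon_box):
    """Cosine-latitude-weighted mean over a lat/lon box (field: lat x lon)."""
    li = (lat >= lat_box[0]) & (lat <= lat_box[1])
    lj = (lon >= lon_box[0]) & (lon <= lon_box[1])
    sub = field[np.ix_(li, lj)]
    w = np.cos(np.deg2rad(lat[li]))[:, None]
    return float(np.sum(sub * w) / (np.sum(w) * lj.sum()))

def easmi_z03(u850_jja_anom, lat, lon):
    """EASMI of Zhang et al. (2003) as listed in Table 1: the difference of
    JJA-mean 850-hPa zonal wind anomalies between (10-20N, 100-150E) and
    (25-30N, 100-150E)."""
    south = area_mean(u850_jja_anom, lat, lon, (10, 20), (100, 150))
    north = area_mean(u850_jja_anom, lat, lon, (25, 30), (100, 150))
    return south - north

# Synthetic 2.5-degree anomaly field, for illustration only
lat = np.arange(0, 51, 2.5)
lon = np.arange(90, 161, 2.5)
u = np.random.default_rng(5).normal(size=(lat.size, lon.size))
print(round(easmi_z03(u, lat, lon), 3))
```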

Figure 12 gives the TCC skill scores of the four monsoon indices. Note that the figure is produced for the EAWMIs in the same way as Fig. 9b, and for the EASMIs in the same way as Fig. 11a. Generally speaking, the two EASMIs have much higher prediction skill than the two EAWMIs. In particular, EASMI-Z03 has slowly declining TCC scores that are above 0.7 at the 1-month lead time, and that exceed the 95% confidence level and remain higher than 0.5 at lead times of up to 8 months, indicating the high predictability of this index in BCC_CSM1.1(m). By comparison, the other EASMI shows relatively lower skill, with large fluctuations, although at the 1-month lead time its TCC score still exceeds the 95% confidence level. These results indicate that BCC_CSM1.1(m) is, to a certain extent, able to provide skillful predictions of the EASM, although the skill depends on the choice of EASM index, consistent with the conclusions of Liu et al. (2015) and Cheng et al. (2016).

      Figure 12.  TCCs (temporal correlation coefficients) for (a) the two EASMIs (East Asian summer monsoon indices) and (b) the two EAWMIs (East Asian winter monsoon indices) between BCC_CSM1.1(m) predictions and observations (NCEP data). EASMI-S96 is the EASMI of Shi et al. (1996), EASMI-Z03 is the EASMI of Zhang et al. (2003), EAWMI-S96 is the EAWMI of Shi et al. (1996), and EAWMI-G94 is the EAWMI of Guo (1994).

In contrast to the EASMIs, there is almost no useful skill in the prediction of the two EAWMIs by BCC_CSM1.1(m). This is presumably associated with the model's poor skill in predicting the AO, which is intimately linked with East Asian climate anomalies (e.g., Zuo et al., 2015, 2016b, and references therein). Two other EAWMIs were also evaluated, with similar results (figure omitted). Therefore, the current versions of the BCC's climate models have insufficient capability in capturing the variations of the extratropical leading modes, and a more in-depth evaluation of mid–high-latitude dynamics in the models is needed.

Figures 13 and 14 show the monitoring and real-time prediction products of the EASM and EAWM indices, respectively, from BCC_CSM1.1(m). As can be seen from Fig. 13, the predictions of the two EASMIs generated in March 2015 both fail, with signs opposite to the two observed indices, which are themselves not consistent with each other because of their different definitions. Slightly better prediction is found for the EAWM, as shown in Fig. 14: the model correctly predicts one of the two EAWMIs for winter 2015, as initiated in October 2015. That said, it is always difficult to judge model performance from a single case based on two simple plots, especially for events of weak amplitude.

      Figure 13.  As in Fig. 11b, but for (a) EASMI-S96 [the East Asian summer monsoon index (EASMI) of Shi et al. (1996)] and (b) EASMI-Z03 [the EASMI of Zhang et al. (2003)].

      Figure 14.  As in Fig. 11b, but for (a) EAWMI-S96 [the EAWMI (East Asian winter monsoon index) of Shi et al. (1996)] and (b) EAWMI-G94 [the EAWMI of Guo (1994)], as initiated in October 2015.

    8.   Summary and discussion
    • Nowadays, the key focus of short-term climate prediction is not only climate variable forecasting (e.g., rainfall and surface air temperature), but also the forecasting of major climate phenomena (i.e., climate variability modes), which have strong impacts on, and may provide extra predictability to, local climate on the subseasonal–interannual timescales. In this paper, we first briefly review the current status of research and development at the BCC relating to predictions of these primary climate phenomena, and then report recent prominent progress made at the BCC. The climate phenomena that we focus on include ENSO, MJO, AO, IOBM, IOD, SIOD, NAST, WPSH, EAWM, and EASM. A number of studies have revealed the impacts of these climate phenomena on China’s climate.

Our review shows that predictions of these climate phenomena have been performed at operational climate centers and scientific research institutes worldwide. The BCC, which serves as the National Climate Center of China and a Regional Climate Center of the World Meteorological Organization, has been pushing research and development to achieve an operational climate phenomenon prediction system (CPPS), based on its latest generation of climate system models and statistical models. Accordingly, a system of ENSO monitoring, analysis, and prediction (SEMAP2.1) has been developed and applied to real-time climate prediction, and shows prediction skill comparable to that of other operational forecast systems; the TCC of the Niño3.4 index between prediction and observation during 1996–2015 reaches 0.8 at a lead time of half a year. An encouraging result is that the IOBM is highly predictable in the BCC-CPPS, with TCC scores above 0.5 for more than 12 target months. For MJO prediction, the BCC has established an upgraded ISV/MJO monitoring and prediction system (IMPRESS2.0); relative to version 1.0, the 1994–2014 prediction skill of the RMM indices increases from 16 to 20 days. Prediction of the summer WPSH also shows a high TCC skill score of 0.8 at the 1-month lead time during 1991–2014, and the EASM can be skillfully predicted with a TCC score higher than 0.7 at the 1-month lead time, although the skill depends on the index used.

Compared with these high-skill predictions, relatively lower skill is found for the IOD and NAST: during 1991–2014, the former can be usefully predicted 2–3 months ahead and the latter 2 months ahead, taking a TCC of 0.5 as the reference. In particular, the TCC score of the summer NAST prediction initiated in March can reach 0.5. In addition, our results show almost no clear skill in the prediction of the AO, SIOD, and EAWM; however, this is a preliminary conclusion, and further verification, e.g., with more indices involved, is needed. In fact, relatively low prediction skill for these modes is also a common feature of other prediction systems, and it should be recognized that their potential predictability is relatively low, such that even a "perfect" model and initial conditions would not guarantee a "reasonable" level of prediction skill. Moreover, this paper also introduces some of the products generated from the BCC-CPPS, along with their application in the real-time monitoring and prediction of climate phenomena.

Based on the verification results, we discuss some possible causes of the low skill for the poorly predicted climate phenomena in the system. Improvements in predicting these phenomena can be explored by improving the BCC climate models and by developing physics-based statistical models or correction methods for the dynamical model predictions. In the following, we summarize some aspects of the further development of climate phenomenon prediction.

SEMAP2.1 demonstrates encouraging predictive skill in terms of the TCC, particularly given that the model predictability of ENSO during 2000–14 tends to be lower than in earlier decades (Barnston et al., 2012). Nevertheless, the performance of the BCC coupled models in reproducing ENSO needs to be further improved by optimizing the model physics; for example, the ENSO period in BCC_CSM1.1(m) can be improved by inflating cumulus entrainment (Lu and Ren, 2016). We also need to focus on predicting ENSO's impacts, besides ENSO itself, which have considerable implications for the prediction of rainfall and temperature in East Asia. For example, the so-called combination mode, which results from the nonlinear interaction between ENSO and the annual cycle, has potential impacts on the climate variations of East Asia and the western Pacific (Stuecker et al., 2013; Ren et al., 2016d).

      Two types of ENSO (CP and EP) coexist in the current climate system and play a key role in historical interdecadal ENSO regime changes (e.g., Ren et al., 2013). Since the prediction skill regarding the two types of ENSO, particularly the CP type, is still relatively low, and the two types have fairly distinct impacts on China’s climate (e.g., Feng et al., 2010; Zhang et al., 2011, 2012), we need to carry out more relevant research on their dynamical mechanisms and predictability. Considering that current climate models are unable to satisfactorily reproduce the differences between the two types of ENSO (e.g., Yu and Kim, 2010), new prediction methods that are more effective in this respect need to be developed, based on both dynamical and statistical models.

The MJO or, more generally, the ISV, is not reproduced sufficiently well in the current versions of the BCC's climate models (Zhao et al., 2014, 2015). For example, these models tend to generate too much precipitation over the western Pacific and too little over the tropical Indian Ocean, which yields a much weaker intensity of the intraseasonal spectra than observed in the MJO developing regions. Other MJO deficiencies in the models include a much shorter dominant periodicity and faster eastward propagation than observed, which are also found in the hindcasts (Wu et al., 2016b). These deficiencies are similar in the coupled and uncoupled versions, which may explain why the former fails to achieve better MJO prediction skill than the latter. There are also other factors that can influence the performance of the coupled prediction model, such as inconsistencies in the atmospheric initial conditions around 2000 and potential biases in the BCC's ocean assimilation data used as initial values.

The Indian Ocean and North Atlantic SST modes strongly affect China's climate, and thus more attention to these modes is needed in order to develop effective approaches for improving their prediction skill. For some of these modes, such as the SIOD and NAST, there are few papers addressing their prediction, very little is known about their mechanisms, and there has been barely any work on their precursors. These issues need to be comprehensively addressed, particularly using models.

Predicting the AO remains a challenging subject for the BCC's models. An in-depth evaluation is needed to systematically examine, against observations, how these models represent the AO-related atmospheric dynamics and maintenance mechanisms, which are well known to be driven by the feedback of synoptic eddies (e.g., Ren et al., 2009, 2011, 2012). Moreover, unrealistic sea–ice modeling and initialization in the BCC's coupled model may be another error source for AO prediction, because Arctic sea–ice variations in autumn are a good precursor for predicting winter AO variations (e.g., García-Serrano and Frankignoul, 2014). Furthermore, statistical models based on dynamics and mechanisms need to be developed as a complement to the predictions of dynamical models.

      In terms of East Asian monsoon prediction, the current model version may be subject to certain deficiencies in model physics, even though some skill can be found in BCC_CSM1.1(m) in this regard (Liu et al., 2015). It is often difficult to improve the performance of models in simulating monsoon. To improve monsoon prediction in dynamical models with low skill, statistical correction methods are worthy of further development (Cheng et al., 2016), which not only complement the dynamical prediction but also provide a benchmark for the improvement of dynamical models.

Finally, it is noted that the simple MME significantly improves the prediction skill for ENSO. Therefore, in the next steps of research and development for the BCC-CPPS, more effort needs to be devoted to MME-based prediction methodologies and techniques, in order to improve the prediction of climate phenomena in BCC_CSM1.1(m). In addition, uncertainty in the assessed prediction skill of climate phenomena remains a problem owing to uncertainties in the indices chosen, so it is highly necessary to form a comprehensive collection of indices for predicting the different climate phenomena.

Although evaluations of the BCC-CPPS have shown encouraging prediction skill for the primary climate modes relative to other operational and research systems worldwide, the hindcasts of the BCC's models used for most of these evaluations are initialized mainly from the historical NCEP–NCAR R1 and GODAS reanalyses. In real-time forecasting, however, the BCC is usually unable to obtain timely updates of these analysis fields and generally uses the CMA analysis to initialize the atmospheric and oceanic variables in the models. Besides the models' deficiencies, this mismatch of initial values between the hindcasts and real-time forecasts, as well as the incompatibility between the prediction models and the data assimilation used to produce the initial values, can substantially affect the performance of the BCC-CPPS and its application to short-term climate prediction in China. More effort is needed to solve these problems.

      Acknowledgments. The authors are grateful to the three anonymous reviewers for their insightful comments, which helped to improve the quality of the paper.
