Synthesis of True Color Images from the Fengyun Advanced Geostationary Radiation Imager


Corresponding author: Xiuzhen HAN, hanxz@cma.gov.cn

1. School of Remote Sensing & Geomatics Engineering, Nanjing University of Information Science & Technology, Nanjing 210044
2. National Satellite Meteorological Centre, China Meteorological Administration, Beijing 100081

Funds: Supported by the National Key Research and Development Program of China (2018YFC150650) and the National Satellite Meteorological Center Mountain Flood Geological Disaster Prevention Meteorological Guarantee Project 2020 Construction Project (IN_JS_202004)

doi: 10.1007/s13351-021-1138-3

Abstract: The production of true color images requires observational data in the red, green, and blue bands. The Advanced Geostationary Radiation Imager (AGRI) onboard China’s Fengyun-4 (FY-4) series of geostationary satellites has only blue and red bands, so a green band must be synthesized to produce RGB true color images. We used random forest regression and a conditional generative adversarial network to train green band models on Himawari-8 Advanced Himawari Imager data. The models were then used to simulate the green channel reflectance of the FY-4 AGRI. A single-scattering radiative transfer model was used to eliminate the contribution of Rayleigh scattering from the atmosphere and a logarithmic enhancement was applied to process the true color image. The conditional generative adversarial network outperformed random forest regression on the statistical metrics (e.g., a higher coefficient of determination, peak signal-to-noise ratio, and structural similarity index). The sharpness of the images improved significantly after the Rayleigh scattering correction and the images showed natural phenomena more vividly. The AGRI true color images can be used to monitor dust storms, forest fires, typhoons, volcanic eruptions, and other natural events.

1.   Introduction
• A true color image consists of red, green, and blue color components. Image synthesis is a balance between science and art: science extracts the key parameters from complex scenarios based on physical algorithms, whereas art maximizes the information in the product image and constructs a visually intuitive display (Miller et al., 2020). True color images of the Earth’s surface produced using data from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard NASA’s Terra and Aqua satellites show details of the land surface, coastal oceans, sea ice, and clouds. The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership and NOAA-20 satellites is used to generate composite true color imagery that clearly distinguishes vegetated from barren land surfaces and shows oceanic and atmospheric features (Hillger et al., 2013). After the Himawari-8 satellite was launched on October 7, 2014, Miller et al. (2016) created true color imagery with an enhanced green band signal derived from the native green band of the Advanced Himawari Imager (AHI). The resulting land and vegetation colors are much closer to reality, and smog and haze are detected better than in single-band grayscale visible images.

      Although the Advanced Baseline Imager (ABI) was launched on the GOES-R satellite without a green band, a number of algorithms have been developed for true color imaging. However, some of the simpler methods (such as linear regression of multispectral parameters) do not achieve satisfactory results as a result of the potential non-linearity of the data (Gladkova et al., 2011). Hillger et al. (2011) synthesized the missing green band using the spectrally adjacent available bands. MODIS RGB data were input into a look-up table to determine the relationship between the green band reflectance and other bands. The look-up table was then used to estimate the ABI green band reflectance from the measurements in the blue, red and near-infrared bands. Grossberg et al. (2011) reconstructed the ABI green band by combining multiple linear regression and K-means clustering with hyperspectral datasets. Their algorithm is applicable to other satellites that do not have a green band.

Similar to the ABI, the Advanced Geostationary Radiation Imager (AGRI) on China’s Fengyun-4 geostationary meteorological satellites also lacks the green band necessary for producing natural RGB images. We adapted two different methods, a random forest regression and a conditional generative adversarial network (CGAN), to generate a synthetic green band (Breiman, 2001; Mirza and Osindero, 2014). The random forest is among the best performing and most widely used models in machine learning (Aria et al., 2021), offering high predictive precision, flexibility, and speed. The CGAN is a deep generative model that can be used for image translation problems, such as translating grayscale to color images or daytime to night-time images (Isola et al., 2017). Both models have achieved great success in their fields and we therefore used them to reconstruct the green band.

After the green band has been generated, image processing is required to obtain a vivid true color image (Bah et al., 2018). The most widely used approach to image enhancement is function mapping, such as linear scaling and gamma correction (Huang et al., 2016; Jeong and Lee, 2021), which stretches the distribution of image values. Artificial intelligence methods are also used for image enhancement (Cai et al., 2018; Chen et al., 2018). We used an image enhancement algorithm based on a log function to improve the display of the image. Remote sensing images also require a correction for Rayleigh scattering, commonly applied via a pre-computed Rayleigh scattering look-up table (Lyapustin et al., 2011; Wang, 2016). Broomhall et al. (2019) used the atmospheric radiative transfer model MODTRAN to calculate the contribution of atmospheric Rayleigh scattering. We instead used a simplified single-scattering approximation based on the radiative transfer equation.

      The remainder of this article is organized as follows. Section 2 introduces the materials used in the research, Section 3 describes methods and Section 4 gives our results. Our discussion and conclusions are presented in Section 5.

    2.   Instruments and datasets
    • Fengyun 4 is a new generation of geostationary meteorological satellites launched by China. The two satellites, FY-4A and FY-4B, have a greatly improved ability to monitor and forecast high-impact weather events compared with the previous generation of geostationary satellites (Yang et al., 2017). FY-4A has been in orbit for nearly five years, whereas FY-4B was launched on 3 June 2021. Both satellites have onboard AGRI sensors with a spectral range from the visible to thermal infrared region. However, neither the AGRI nor the ABI has the green band essential for generating true color images (Table 1). We therefore used AGRI reflectance, satellite viewing angle and solar geometry data to generate the green band in true color images.

                 Himawari-8/AHI         FY-4A/AGRI             GEO-KOMPSAT-2A/AMI     GOES-R/ABI
                 Ch  λc (µm)  Res (km)  Ch   λc (µm)  Res (km) Ch  λc (µm)  Res (km)  Ch  λc (µm)  Res (km)
                 1   0.455    1         1    0.47     1        1   0.470    1         1   0.47     1
                 2   0.51     1         –    –        –        2   0.509    1         –   –        –
                 3   0.645    0.5       2    0.65     0.5      3   0.639    0.5       2   0.64     0.5
                 4   0.86     1         3    0.825    1        4   0.863    1         3   0.86     1
                 –   –        –         4    1.375    2        5   1.37     2         4   1.38     1
                 5   1.61     2         5    1.61     2        6   1.61     2         5   1.61     2
                 6   2.26     2         6    2.25     4        –   –        –         6   2.26     2
                 7   3.85     2         7&8  3.75     2&4      7   3.83     2         7   3.90     2
                 8   6.25     2         9    6.25     4        8   6.21     2         8   6.15     2
                 9   6.95     2         10   7.1      4        9   6.94     2         9   7.00     2
                 10  7.35     2         –    –        –        10  7.33     2         10  7.40     2
                 11  8.60     2         11   8.5      4        11  8.59     2         11  8.50     2
                 12  9.63     2         –    –        –        12  9.62     2         12  9.70     2
                 13  10.45    2         –    –        –        13  10.35    2         13  10.30    2
                 14  11.20    2         12   10.7     4        14  11.23    2         14  11.20    2
                 15  12.35    2         13   12.0     4        15  12.36    2         15  12.30    2
                 16  13.30    2         14   13.5     4        16  13.29    2         16  13.30    2

      Ch = channel; λc = center wavelength; Res = spatial resolution; – = no corresponding band.

      Table 1.  Comparison of the channel settings of the new generation of geostationary meteorological satellites: Himawari-8 in Japan, FY-4A in China, GEO-KOMPSAT-2A in South Korea, and GOES-R in the USA

• The AHI is onboard the Himawari-8 satellite, Japan’s new-generation geostationary meteorological satellite (Bessho et al., 2016). The AHI sensor is equipped with 16 multispectral channels, including three visible channels, three near-infrared channels, and 10 infrared channels (Table 1). The AHI scans the full disk of the Earth over the Eastern Hemisphere every 10 minutes and the target area every 2.5 minutes, with a spatial resolution of 0.5–2 km.

We used the AHI L1 full-disk data, which have a spatial resolution of 2 km and are distributed in netCDF4 format. We selected the AHI hourly data for the 23rd day of four months (March, June, September, and December) in 2020. These four days were chosen to take into account the seasonal changes in solar radiation and sun glint. The observational area of a geostationary satellite is fixed, but the revolution of the Earth around the Sun affects the satellite images, so both daily and annual changes should be considered when selecting samples. The subsolar point moves between the Tropic of Capricorn and the Tropic of Cancer over the year, producing the four seasons, so these four months were chosen to represent the four seasons. The annual change in the solar angle is < 50°, but the daily change spans a full 180°. We consider the daily changes more important, so we used 24 h of data for each day. The AHI data at 0400 UTC on the 15th day of each month in 2020 were used to validate the model.

    3.   Methodology
    • Figure 1 is a technical flow chart for true color image production. First, the AHI data are used to train and validate the model. After the model has been trained, the AGRI data are input into the model to obtain the reflectance of the green band. A green band at 0.51 µm is then adjusted to create a hybrid green. Rayleigh scattering correction and image enhancement techniques are used to optimize the image quality before synthesizing the true color image.

      Figure 1.  Flow chart of the synthesis of AGRI true color images.

• Random forest regression is a standard machine learning algorithm that can be used for filling missing values and for numerical prediction. It requires choosing the number of trees and the maximum number of features considered at each split. As a rule of thumb, the number of features tried per split is roughly the square root of the number of input variables.

We randomly selected 10 million sample points from the four days of AHI data. Each sample point contained eight features: the reflectance of AHI B1 (0.455 µm), B3 (0.645 µm), B4 (0.860 µm), and B5 (1.61 µm) and four angles (solar zenith angle, satellite zenith angle, solar azimuth angle, and satellite azimuth angle). The target value is the reflectance of AHI B2 (0.510 µm). Figure 2 is a schematic diagram of a decision tree in the random forest. Our random forest generated 30 GB of model data after one hour of training.
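To make the setup concrete, the following is a minimal sketch of the random forest configuration described above, written with scikit-learn. The file names, number of trees, and other hyperparameters are illustrative assumptions, not the exact configuration used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: (n_samples, 8) = reflectances of AHI B1 (0.455), B3 (0.645), B4 (0.860),
# and B5 (1.61 um) plus solar/satellite zenith and azimuth angles.
# y: (n_samples,) = AHI B2 (0.510 um) reflectance.
X = np.load("ahi_features.npy")      # hypothetical pre-extracted sample files
y = np.load("ahi_green.npy")

model = RandomForestRegressor(
    n_estimators=100,                # number of trees (assumed value)
    max_features="sqrt",             # ~sqrt(8) features per split, per the rule of thumb
    n_jobs=-1,
    random_state=0,
)
model.fit(X, y)

# Apply to matched AGRI inputs to estimate the missing green band.
green_pred = model.predict(np.load("agri_features.npy"))
```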

      Figure 2.  Schematic diagram of a decision tree, where B is the blue band reflectance, G is the green band reflectance, R is the red band reflectance, NIR is the near-infrared band reflectance, N represents the sample point, Φsun represents the solar azimuth angle and Φsat represents the satellite azimuth angle.

• CGAN is a type of image generation model with the ability to produce images that do not exist. When choosing a generative model, we need to consider its expressive power, the speed at which it generates images, and the clarity of the generated images. The CGAN is superior to other generative models, such as PixelCNN and the variational autoencoder: PixelCNN (van den Oord et al., 2016) generates images point by point and therefore takes a long time, and the images generated by a variational autoencoder tend to be slightly blurred (Kingma and Welling, 2014).

Mathematically, the CGAN learns a mapping function from an input image X to the output image Y (Fig. 3). The generator G is trained to produce an image close to the original, whereas the discriminator D is trained to identify the generated images (Jose and Francis, 2021). In terms of network architecture, we used U-Net (Ronneberger et al., 2015) for the generator and PatchGAN (Isola et al., 2017) for the discriminator. The loss function is expressed as:

Figure 3.  Schematic diagram of the CGAN model with U-Net as the generator and PatchGAN as the discriminator.

$$\begin{aligned} {L}_{\rm{pix2pix}}= & E\left[\log D\left(X,Y\right)\right]\\ + & E\left[\log \left(1-D\left(X,{Y}_{\rm{cgan}}\right)\right)\right]. \end{aligned}$$ (1)
$$ {L}_{1} = E\left[ {\left\| Y - {Y}_{\rm{cgan}} \right\|_{1}} \right]. $$ (2)
$$ {G^*} = \arg\mathop {{\rm{min}}}\limits_G \mathop {{\rm{max}}}\limits_D \left\{ {{L_{\rm{pix2pix}}}} \right\} + \lambda {L_1}. $$ (3)

where Ycgan is the image generated by G and λ is a weight parameter set according to the magnitude of the data. L1 is the standard L1 loss, which minimizes the distance between the generated image Ycgan and the true output image Y to reduce blurring.

When using the CGAN model, we regard the generation of the green band as an image translation problem: the false color image synthesized from the AHI B1, B3, and B4 bands is the input and the goal is to convert it into a grayscale image of the AHI green band reflectance. Data with a resolution of 2 km occupy a large amount of memory, which makes training difficult. To expand the training samples and improve the training efficiency, each picture is divided into four parts of the same size and each part is resampled to 1024 × 1024 pixels, giving a total of 384 samples. Figure 3 shows a schematic diagram of the model structure, where X is the false color image synthesized from the three AHI channels, Y is the grayscale image of the green channel and Ycgan is the image generated by G. F and R are the scores given by the discriminator to Ycgan and Y, respectively. The values of F and R cannot be used as evaluation indicators of the model because D is also being trained; its parameters are constantly updated, causing the values of F and R to oscillate.
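The following is a minimal sketch of one pix2pix-style training step implementing Eqs. (1)–(3) in PyTorch. UNetGenerator and PatchDiscriminator are assumed to be defined elsewhere (following Ronneberger et al., 2015 and Isola et al., 2017), and the learning rate and λ below are common pix2pix defaults rather than the values tuned in this study.

```python
import torch
import torch.nn as nn

G, D = UNetGenerator(), PatchDiscriminator()   # assumed architectures (not shown)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()                   # D is assumed to output logits
l1 = nn.L1Loss()
lambda_l1 = 100.0                              # pix2pix default, an assumption here

def train_step(x, y):
    """x: false-color input (B1/B3/B4); y: observed green-band image."""
    y_cgan = G(x)

    # Discriminator step: maximize log D(x, y) + log(1 - D(x, y_cgan)), Eq. (1).
    opt_d.zero_grad()
    real, fake = D(x, y), D(x, y_cgan.detach())
    loss_d = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool D and minimize the L1 term, Eqs. (2)-(3).
    opt_g.zero_grad()
    pred = D(x, y_cgan)
    loss_g = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(y_cgan, y)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```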

• We used the root-mean-square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) to evaluate the green band generation model (Wang et al., 2004; Hyndman and Koehler, 2006; Aldahdooh et al., 2018). R2 is used to evaluate the goodness of fit, whereas MAE and RMSE measure the accuracy of the model. PSNR and SSIM are image quality indexes. PSNR decreases as the difference between two images grows: the smaller the PSNR, the greater the difference between the images. The SSIM considers the three main characteristics of an image (brightness, contrast, and structure); two images are similar when the SSIM is close to 1. The calculation formulas are:

      $$ {\rm{RMSE}}=\sqrt{\frac{1}{n}\sum\nolimits_{i=1}^{n}{\left({R}_{f,i}-{R}_{r,i}\right)}^{2}}. $$ (4)
      $$ {\rm{MAE}}=\frac{1}{n}\sum\nolimits_{i=1}^{n}\left(\left|{R}_{f,i}-\right.\left.{R}_{r,i}\right|\right). $$ (5)
$${R^2} = 1 - \frac{{\displaystyle\mathop \sum \nolimits_{i = 1}^n {{\left( {{R_{f,i}} - {R_{r,i}}} \right)}^2}}}{{\displaystyle\mathop \sum \nolimits_{i = 1}^n {{\left( {{R_{r,i}} - {{\bar R}_r}} \right)}^2}}}.$$ (6)
$$ {\rm{PSNR}}=10\times {\log}_{10}\left(\frac{{R}_{\rm{max}}^{2}}{\dfrac{1}{n}\displaystyle\sum\nolimits_{i=1}^{n}{\left({R}_{f,i}-{R}_{r,i}\right)}^{2}}\right). $$ (7)
$$ {\rm {S\!SIM }}= \frac{{\left( {2{{\bar R}_f}{{\bar R}_r} + {c_1}} \right) \times \left( {2{\sigma _{f,r}} + {c_2}} \right)}}{{\left( {{{\bar R}_f}^2 + {{\bar R}_r}^2 + {c_1}} \right) \times \left( {\sigma _f^2 + \sigma _r^2 + {c_2}} \right)}}. $$ (8)

where n is the number of image pixels, Rf is the green band reflectance generated by the model, Rr is the green band observation, ${{{\bar R}_f}} $ and ${{{\bar R}_r}} $ are the averages of the modeled and observed green band reflectance, Rmax is the maximum possible reflectance value, σf,r is the covariance between the observed and simulated values, σf2 and σr2 are the variances of the modeled and observed green band reflectance, and c1 and c2 are small constants that stabilize the division.
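For reference, the metrics of Eqs. (4)–(8) can be computed directly with NumPy. This is a global (whole-image) sketch; the constants c1 and c2 are standard defaults for reflectance data scaled to [0, 1] and are assumptions rather than values stated by the authors.

```python
import numpy as np

def rmse(r_f, r_r):
    return np.sqrt(np.mean((r_f - r_r) ** 2))                        # Eq. (4)

def mae(r_f, r_r):
    return np.mean(np.abs(r_f - r_r))                                # Eq. (5)

def r2(r_f, r_r):
    return 1.0 - np.sum((r_f - r_r) ** 2) / np.sum((r_r - r_r.mean()) ** 2)  # Eq. (6)

def psnr(r_f, r_r, r_max=1.0):
    return 10.0 * np.log10(r_max ** 2 / np.mean((r_f - r_r) ** 2))   # Eq. (7)

def ssim(r_f, r_r, c1=1e-4, c2=9e-4):                                # Eq. (8), global form
    cov = np.mean((r_f - r_f.mean()) * (r_r - r_r.mean()))
    return ((2 * r_f.mean() * r_r.mean() + c1) * (2 * cov + c2)) / (
        (r_f.mean() ** 2 + r_r.mean() ** 2 + c1) * (r_f.var() + r_r.var() + c2))
```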

• Because the green band model in this study is based on AHI data, AGRI images may inherit some of the disadvantages of the AHI. For example, Figs. 4a and 4b show true color images of Australia from the VIIRS and the AHI; a comparison shows that the vegetated areas in the AHI image are brownish, whereas the Gobi Desert appears reddish. Figure 4c shows that the green band of the AHI is not aligned with the typical reflection peak of green vegetation. The green band with a central wavelength of 0.51 µm also has a lower reflectivity for minerals (such as quartz) than a band at 0.55 µm. To improve the display of AHI images, Miller et al. (2016) proposed a hybrid green solution; here we apply this solution to the AGRI images. The method is expressed as:

      Figure 4.  (a) VIIRS true color image (https://worldview.earthdata.nasa.gov). (b) AHI true color image (https://www.eorc.jaxa.jp/ptree). (c) Spectral response curves of the AHI, AGRI, and VIIRS and the reflection spectra of grass and quartz.

$$ {R}_{\rm{hybrid}}=\left(1-F\right)\times {R}_{g}+F\times {R}_{\rm{nir}}. $$ (9)

where F is a scale factor from 0 to 1, Rg is the green band reflectance generated by the model and Rnir is the reflectance of AGRI band 3 (0.83 μm), which lies in a highly reflective region for both vegetation and minerals (Fig. 4c).
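A sketch of the hybrid green blend of Eq. (9) is given below. The blend factor F = 0.1 is purely illustrative; the source does not state the factor tuned for the AGRI.

```python
import numpy as np

def hybrid_green(r_green, r_nir, f=0.1):
    """Eq. (9): blend the model green (0.51 um) with NIR (0.825 um) reflectance.
    f = 0.1 is an illustrative value only; 0 <= f <= 1."""
    return (1.0 - f) * np.asarray(r_green) + f * np.asarray(r_nir)
```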

• Rayleigh scattering affects image quality. Visible light is Rayleigh-scattered by the gaseous constituents of the atmosphere, and the interaction is strongest at the blue end of the spectrum, so uncorrected color imagery from space has an unwanted bluish haze (Broomhall et al., 2019). Our method to remove the influence of Rayleigh scattering is based on the single-scattering approximation of the radiative transfer equation:

$$\begin{aligned} R= & \frac{\omega }{4\left(1+\dfrac{{\mu }_{\rm{sat}}}{{\mu }_{\rm{sun}}}\right)}P\left({\mu }_{\rm{sat}},{\mathrm{\Phi }}_{\rm{sat}},-{\mu }_{\rm{sun}},{\mathrm{\Phi }}_{\rm{sun}}\right)\\ & \left\{1-\exp\left[-\tau \left(\frac{1}{{\mu }_{\rm{sun}}}+\frac{1}{{\mu }_{\rm{sat}}}\right)\right]\right\}. \end{aligned}$$ (10)
      $$ \tau =0.0088\left(\frac{{P}_{s}}{1013}\right){\lambda }^{-4.15+0.2\lambda }. $$ (11)

where ω is the single-scattering albedo; μsun and μsat are the cosines of the solar zenith angle and view zenith angle, respectively; Φsun and Φsat are the solar azimuth and the observation azimuth, respectively; τ is the Rayleigh optical thickness calculated from Eq. (11) (Bodhaine et al., 1999); Ps is the surface pressure; and λ is the wavelength in micrometers. The phase function P is expressed as:

      $$ P\left({\mu }_{\rm{sat}},{\mathrm{\Phi }}_{\rm{sat}},-{\mu }_{\rm{sun}},{\mathrm{\Phi }}_{\rm{sun}}\right)=\frac{3}{4}\left(1+{cos}^{2}\theta \right). $$ (12)
      $$\begin{aligned} \mathrm{cos}\theta = & {-\mu }_{\rm{sun}}{\mu }_{\rm{sat}}+{\left(1-{{\mu }_{\rm{sun}}}^{2}\right)}^{\frac{1}{2}}{\left(1-{{\mu }_{\rm{sat}}}^{2}\right)}^{\frac{1}{2}}\\ & \mathrm{cos}\left({\mathrm{\Phi }}_{\rm{sun}}-{\mathrm{\Phi }}_{\rm{sat}}\right). \end{aligned}$$ (13)

As a result of the curvature of the Earth, the calculation error of the Rayleigh scattering is large in the large-angle region, which may cause an overcorrection in the image at the edge of the Earth's disk. A simple linear attenuation is applied to improve the image: because the correction breaks down along the long atmospheric path at the edge of the Earth, uncorrected reflectance data are blended gradually into the overcorrected data between satellite zenith angles of 75° and 90° (Miller et al., 2016), as sketched below.
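The following sketch implements Eqs. (10)–(13) together with the linear limb blend just described. Setting ω = 1 (conservative Rayleigh scattering) is an assumption, and the functions expect per-pixel angle arrays in degrees.

```python
import numpy as np

def rayleigh_reflectance(sun_zen, sat_zen, sun_az, sat_az, wavelength_um, ps=1013.0):
    """Single-scattering Rayleigh reflectance, Eqs. (10)-(13). Angles in degrees."""
    mu_sun = np.cos(np.radians(sun_zen))
    mu_sat = np.cos(np.radians(sat_zen))
    # Eq. (11): Rayleigh optical thickness (Bodhaine et al., 1999).
    tau = 0.0088 * (ps / 1013.0) * wavelength_um ** (-4.15 + 0.2 * wavelength_um)
    # Eq. (13): cosine of the scattering angle.
    cos_theta = (-mu_sun * mu_sat + np.sqrt(1.0 - mu_sun ** 2)
                 * np.sqrt(1.0 - mu_sat ** 2) * np.cos(np.radians(sun_az - sat_az)))
    phase = 0.75 * (1.0 + cos_theta ** 2)          # Eq. (12)
    omega = 1.0                                    # conservative scattering (assumed)
    return (omega / (4.0 * (1.0 + mu_sat / mu_sun)) * phase
            * (1.0 - np.exp(-tau * (1.0 / mu_sun + 1.0 / mu_sat))))  # Eq. (10)

def corrected_reflectance(r_obs, r_ray, sat_zen):
    """Subtract the Rayleigh term, blending back to the uncorrected value
    between 75 and 90 deg satellite zenith angle to avoid limb overcorrection."""
    w = np.clip((90.0 - np.asarray(sat_zen)) / 15.0, 0.0, 1.0)
    return w * (r_obs - r_ray) + (1.0 - w) * r_obs
```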

• The reflectance of most surface features is usually low, so the true color composite image is dim, which makes visualization by the human eye difficult. Image enhancement is therefore needed. Linear scaling causes the brighter pixels (usually clouds) to saturate, losing some of the detail. Nonlinear transformations, such as the gamma transform and the log transform, give better results. We used a logarithmic normalization method to enhance the red, green, and blue channels. The calculation formula is:

      $$ {B}_{\rm{rescaled}}=\frac{{log}_{10}\left({B}_{\rm{original}}\right)-{log}_{10}\left({B}_{\rm{min}}\right)}{{log}_{10}\left({B}_{\rm{max}}\right)-{log}_{10}\left({B}_{\rm{min}}\right)}\times 255. $$ (14)

where Boriginal is the reflectance after the Rayleigh scattering correction and Bmin and Bmax are the chosen lower and upper reflectance bounds of the stretch.
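A minimal sketch of Eq. (14) follows. The default bounds use the Bmin = 0.04 and Bmax = 1 values quoted for Fig. 8 in Section 4; they may need retuning for other scenes.

```python
import numpy as np

def log_enhance(band, b_min=0.04, b_max=1.0):
    """Eq. (14): logarithmic stretch of reflectance to 8-bit display values."""
    band = np.clip(band, b_min, b_max)   # keep the logarithm well defined
    scaled = (np.log10(band) - np.log10(b_min)) / (np.log10(b_max) - np.log10(b_min))
    return (scaled * 255.0).astype(np.uint8)
```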

    4.   Results
• We used the AHI data at 0400 UTC on the 15th day of each month in 2020 to validate the model. Data from other times were not used for the statistical verification because some areas receive no sunlight at those times, resulting in very low reflectivity that would artificially lower the average statistical error. The AHI data used for validation were in an equal latitude–longitude projection rather than the nominal projection, so there are no black invalid values in the AHI images. Table 2 shows that the model errors of both the random forest regression and the CGAN are very low. The R2 and SSIM values of both models are > 0.99, which means that the images generated by the models are very close to the real images, both numerically and structurally. The PSNR of the CGAN is > 40 dB and higher than that of the random forest regression, which means that the CGAN generates higher quality images with fewer noise points and outliers. The CGAN outperforms the random forest regression on all five indicators. The random forest regression also produced a model larger than 30 GB containing a large number of repeated parameters, whereas the CGAN model is only about 200 MB. Because the CGAN is superior in both model size and statistical indicators, the green band of all subsequent true color images was generated by the CGAN model.

Model                      RMSE     MAE      R2       PSNR (dB)   SSIM
Random forest regression   0.0102   0.0065   0.9912   39.67       0.9934
CGAN                       0.0083   0.0061   0.9934   40.12       0.9946

      Table 2.  Comparison of the average statistical results for the 12 months in 2020 between the random forest regression and the CGAN

    • The AGRI data were input into the model to obtain the simulated green band reflectance. It is straightforward to generate true color images when red, green and blue images are available. A vivid true color image is obtained after image enhancement and hybrid green band processing.

We evaluated the accuracy of the model at 0400 UTC, but this alone cannot show whether the model is applicable over a full 24-h period (including morning and evening). The statistical evaluation is invalid when part of the Earth is in darkness, but we can compare the images themselves across times of day (for example, the noon image against the morning image). Figure 5 shows FY-4A AGRI true color images at four different times on 23 December 2020. These four images use the same image processing method and none has undergone the Rayleigh scattering correction. When sunlight is insufficient, the sea appears dark blue, whereas at noon it appears vivid blue. The colors of the four images are similar and the subtle changes are physically reasonable, which shows that the model also works at other times of day.

      Figure 5.  FY-4A AGRI true color images on 23 December 2020 using hybrid green band and log enhancement at (a) 2200 UTC, (b) 0000 UTC, (c) 0200 UTC and (d) 0400 UTC.

• The images in Fig. 6 reveal how a green channel with a central wavelength of 0.51 µm affects the true color image. Zooming in on Australia and the Indonesian archipelago reveals a clear difference in color between Figs. 6a and 6b. Without the hybrid green processing, the land surface is mostly brown and red, whereas the sea is greenish. This phenomenon is expected and solvable: after blending in the 0.825-µm reflectance, the image is much more realistic, as shown in Fig. 6b.

Figure 6.  FY-4A AGRI true color images at 0400 UTC in March 2021 with logarithmic enhancement and Rayleigh scattering correction (a) without the hybrid green band and (b) with the hybrid green algorithm.

• Most of the images that people see have been processed by image enhancement. Unlike photographs taken by cameras in space, unprocessed remote sensing images are mostly dim and not as beautiful as imagined. Figure 7a is the original image without enhancement and Fig. 7b shows the image after linear enhancement. Figure 7c is the result of gamma correction and Fig. 7d is the image processed by the log enhancement algorithm described here. The contrast in the original image is weak and the brightness is low. Linear stretching has little effect on the seawater and easily saturates the clouds. Gamma correction performs better than linear stretching, but not as well as the log enhancement method, which better matches the response of the human eye.

Figure 7.  Comparison of different image enhancement algorithms using FY-4A AGRI data at 0400 UTC in March 2021.

    • Figure 8 shows a set of FY-4A AGRI true color images at 0500 UTC on March 23, 2020. Figures 8a and 8b use the same image enhancement method [we set Bmax to 1 and Bmin to 0.04 in Eq. (14)], but no Rayleigh scattering correction was carried out in Fig. 8a. The images before and after Rayleigh scattering correction show clear visual differences. The image before Rayleigh scattering correction has a blue haze; the image is clearer with Rayleigh scattering correction.

Figure 8.  Comparison of FY-4A AGRI true color images using the hybrid green band and log enhancement at 0500 UTC on March 23, 2020 (a) without atmospheric correction and (b) with atmospheric correction.

      To show the change in image sharpness before and after Rayleigh scattering correction quantitatively, a Laplace variance algorithm (Pech-Pacheco et al., 2000) is used to calculate the image sharpness. The calculation formula is:

$$ {L}_{v}=\sum\limits_{m}^{M}\sum\limits_{n}^{N}{\left[\left|L\left(m,n\right)\right|-\bar{L}\right]}^{2}. $$ (15)
      $$ L=\frac{1}{6}\left(\begin{array}{ccc}0& -1& 0\\ -1& 4& -1\\ 0& -1& 0\end{array}\right). $$ (16)

where M and N are the numbers of rows and columns in the image, L is the Laplacian operator of Eq. (16), L(m, n) is the result of the Laplacian convolution and $ \bar L$ is the average of |L(m, n)|. Table 3 gives Lv before and after the Rayleigh scattering correction. The sharpness of the red, green, and blue bands increased by 22.2%, 46.7%, and 58.5%, respectively. The blue band improves the most because it is the most strongly affected by Rayleigh scattering.
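A minimal implementation of the sharpness metric of Eqs. (15)–(16) is sketched below, assuming a single-band image as a NumPy array.

```python
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]]) / 6.0            # Eq. (16)

def laplacian_sharpness(image):
    """Eq. (15): variance of the absolute Laplacian response; higher = sharper."""
    lap = np.abs(convolve(np.asarray(image, dtype=float), KERNEL))
    return np.sum((lap - lap.mean()) ** 2)
```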

                          Red      Green    Blue
Lv (before correction)    0.0733   0.0432   0.0241
Lv (after correction)     0.0896   0.0634   0.0382

      Table 3.  Lv for three bands before and after correction for Rayleigh scattering

• Figure 9 shows that the edge of the Earth's disk appears red because the blue and green light there is completely removed by the Rayleigh scattering correction. Overcorrection usually appears in the large-angle regions, so the angle information can be used to optimize the display of the image. The ultimate solution is to reduce the calculation error at large angles, which is very complex; the alternative method used here rapidly optimizes the image quality.

      Figure 9.  (a) True color image affected by overcorrection for Rayleigh scattering. (b) True color image after correction for Rayleigh scattering and optimization of overcorrection.

• True color maps are very effective for monitoring weather conditions on a large scale. Figure 10a shows volcanic ash punching through the clouds; the brown ash is very conspicuous against the white clouds. This is an AGRI true color image of the eruption of the Raikoke volcano, Russia. The powerful eruption pushed the plume to an altitude of 10 km.

      Figure 10.  Four examples of AGRI true color images explaining atmospheric features. (a) Volcanic ash from Raikoke Island, Russia (June 22, 2019); (b) large-scale forest fires in Australia (January 13, 2020); (c) a dust storm that swept over half of China (March 15, 2021); and (d) two cyclones in the Pacific Ocean (April 21, 2021).

The wildfires in Australia lasted for more than four months, from September 2019 to mid-January 2020. Figure 10b is a true color image of Australia on January 13, 2020. Smoke from the forest fires is clearly visible in the image and a tropical cyclone can be seen approaching Australia.

China was severely affected by sandstorms in the first half of 2021. Figure 10c shows the FY-4A AGRI true color image on March 15, 2021. There is a sharp contrast between the white clouds and the orange dust (the black circle marks the location of the dust storm). True color images are a convenient way to detect dust storms: their extent can be read directly from the images and the severity of the dust can be judged by its color.

Figure 10d shows two cyclones on April 21, 2021: a tropical cyclone over the Philippines and an extratropical cyclone over Japan. A polar-orbiting satellite cannot observe both cyclones at the same time.

    5.   Conclusions and discussion
• A mature satellite program should have good technology for the synthesis of true color images. We trained a random forest regression and a CGAN model to generate green band reflectance and thereby synthesize true color images for the China Fengyun satellites. The two models were trained and validated using Himawari-8 AHI data and both achieved excellent accuracy. However, the CGAN is more lightweight and takes up less memory than the random forest regression. In addition to its memory advantage, the CGAN also runs fast: it can be accelerated by a GPU, so a true color image can be produced in a very short time.

We used AHI data for training because the AHI has a green channel and because Himawari-8 and FY-4 are both geostationary meteorological satellites with similar spatiotemporal resolutions. A polar-orbiting satellite passes over a given point at roughly the same local time, so the illumination and viewing geometry at that location barely change between overpasses, whereas the solar angles seen by a geostationary satellite change markedly through the day.

The difference between the AGRI and AHI bands must be considered when applying the model to the FY satellites. The most rigorous approach would be an inter-satellite cross-calibration, but there have been few studies on cross-calibration between the AHI and AGRI. We were therefore only able to analyze the correlation between the AHI and AGRI bands. After matching the AHI and AGRI data in space and time, a correlation analysis was performed on the three input bands and one output band. The R2 values between the AHI and AGRI blue, red, and near-infrared channels were 0.797, 0.769, and 0.783, respectively. The R2 value between the AGRI green channel generated by the model and the AHI green channel was 0.767. Although the correlation is not high, this result is reasonable.

Using the AGRI red, synthetic green, and blue bands, a true color image can be generated with a quality close to that of other geostationary satellite imagers, such as the ABI and AHI. Rayleigh scattering significantly reduces the clarity and detail of geostationary true color images and therefore needs to be removed. A radiative transfer model based on a single-scattering approximation was developed to reduce the contribution of atmospheric molecular scattering. This method was tested on AGRI true color imagery and produced clean surface features; the sharpness of the red, green, and blue bands was greatly improved after the Rayleigh scattering correction. Our single-scattering approximation ignores the effect of multiple scattering, which keeps the computational cost low and the computation fast, but its accuracy is not as high as that of radiative transfer models that include multiple scattering. The atmospheric correction model could be refined to include multiple scattering and to consider variations in aerosols and atmospheric species, the band-integrated solar irradiance, and the solar illumination geometry in spherical coordinates.

      Unlike polar-orbiting satellites, geostationary meteorological satellites view large areas of the Earth at high frequency and dynamic monitoring of dust storms, forest fires, typhoons, volcanic eruptions and other disasters can be achieved through the rapid visualization of data from geostationary meteorological satellites.

      Acknowledgments. The authors thank the editors and reviewers for their helpful comments.

References

[1] Aldahdooh, A., E. Masala, G. Van Wallendael, et al., 2018: Framework for reproducible objective video quality research with case study on PSNR implementations. Dig. Signal Process., 77, 195–206. doi: 10.1016/j.dsp.2017.09.013.
[2] Aria, M., C. Cuccurullo, and A. Gnasso, 2021: A comparison among interpretative proposals for Random Forests. Mach. Learn. Appl., 6, 100094. doi: 10.1016/j.mlwa.2021.100094.
[3] Bah, M. K., M. M. Gunshor, and T. J. Schmit, 2018: Generation of GOES-16 true color imagery without a green band. Earth Space Sci., 5, 549–558. doi: 10.1029/2018EA000379.
[4] Bessho, K., K. Date, M. Hayashi, et al., 2016: An introduction to Himawari-8/9: Japan's new-generation geostationary meteorological satellites. J. Meteor. Soc. Japan Ser. II, 94, 151–183. doi: 10.2151/jmsj.2016-009.
[5] Bodhaine, B. A., N. B. Wood, E. G. Dutton, et al., 1999: On Rayleigh optical depth calculations. J. Atmos. Oceanic Technol., 16, 1854–1861. doi: 10.1175/1520-0426(1999)016<1854:ORODC>2.0.CO;2.
[6] Breiman, L., 2001: Random forests. Mach. Learn., 45, 5–32. doi: 10.1023/A:1010933404324.
[7] Broomhall, M. A., L. J. Majewski, V. O. Villani, et al., 2019: Correcting Himawari-8 Advanced Himawari Imager data for the production of vivid true-color imagery. J. Atmos. Oceanic Technol., 36, 427–442. doi: 10.1175/jtech-d-18-0060.1.
[8] Cai, J. R., S. H. Gu, and L. Zhang, 2018: Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process., 27, 2049–2062. doi: 10.1109/TIP.2018.2794218.
[9] Chen, C., Q. F. Chen, J. Xu, et al., 2018: Learning to see in the dark. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, UT, USA, 3291–3300. doi: 10.1109/CVPR.2018.00347.
[10] Gladkova, I., F. Shahriar, M. Grossberg, et al., 2011: Virtual green band for GOES-R. Proc. SPIE 8153, Earth Observing Systems XVI, SPIE, San Diego, CA, USA, 81531C. doi: 10.1117/12.893660.
[11] Grossberg, M. D., F. Shahriar, I. Gladkova, et al., 2011: Estimating true color imagery for GOES-R. Proc. SPIE 8048, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, SPIE, Orlando, FL, USA, 80481A. doi: 10.1117/12.884020.
[12] Hillger, D., T. Kopp, T. Lee, et al., 2013: First-light imagery from Suomi NPP VIIRS. Bull. Amer. Meteor. Soc., 94, 1019–1029. doi: 10.1175/bams-d-12-00097.1.
[13] Hillger, D. W., L. Grasso, S. D. Miller, et al., 2011: Synthetic advanced baseline imager true-color imagery. J. Appl. Remote Sens., 5, 053520. doi: 10.1117/1.3576112.
[14] Huang, Z. H., T. X. Zhang, Q. Li, et al., 2016: Adaptive gamma correction based on cumulative histogram for enhancing near-infrared images. Infrared Phys. Technol., 79, 205–215. doi: 10.1016/j.infrared.2016.11.001.
[15] Hyndman, R. J., and A. B. Koehler, 2006: Another look at measures of forecast accuracy. Int. J. Forecasting, 22, 679–688. doi: 10.1016/j.ijforecast.2006.03.001.
[16] Isola, P., J.-Y. Zhu, T. H. Zhou, et al., 2017: Image-to-image translation with conditional adversarial networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Honolulu, HI, USA, 5967–5976. doi: 10.1109/CVPR.2017.632.
[17] Jeong, I., and C. Lee, 2021: An optimization-based approach to gamma correction parameter estimation for low-light image enhancement. Multimed. Tools Appl., 80, 18027–18042. doi: 10.1007/s11042-021-10614-8.
[18] Jose, A., and A. Francis, 2021: Reversible colour density compression of images using cGANs. arXiv preprint, https://arxiv.org/abs/2106.10542.
[19] Kingma, D. P., and M. Welling, 2014: Auto-encoding variational Bayes. arXiv preprint, https://arxiv.org/abs/1312.6114.
[20] Lyapustin, A., J. Martonchik, Y. J. Wang, et al., 2011: Multiangle implementation of atmospheric correction (MAIAC): 1. Radiative transfer basis and look-up tables. J. Geophys. Res. Atmos., 116, D03210. doi: 10.1029/2010JD014985.
[21] Miller, S. D., T. L. Schmit, C. J. Seaman, et al., 2016: A sight for sore eyes: The return of true color to geostationary satellites. Bull. Amer. Meteor. Soc., 97, 1803–1816. doi: 10.1175/bams-d-15-00154.1.
[22] Miller, S. D., D. T. Lindsey, C. J. Seaman, et al., 2020: GeoColor: A blending technique for satellite imagery. J. Atmos. Oceanic Technol., 37, 429–448. doi: 10.1175/jtech-d-19-0134.1.
[23] Mirza, M., and S. Osindero, 2014: Conditional generative adversarial nets. arXiv preprint, https://arxiv.org/abs/1411.1784.
[24] Pech-Pacheco, J. L., G. Cristóbal, J. Chamorro-Martínez, et al., 2000: Diatom autofocusing in brightfield microscopy: A comparative study. Proc. 15th International Conference on Pattern Recognition, IEEE, Barcelona, Spain, 314–317. doi: 10.1109/ICPR.2000.903548.
[25] Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. Proc. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Munich, Germany. doi: 10.1007/978-3-319-24574-4_28.
[26] van den Oord, A., N. Kalchbrenner, O. Vinyals, et al., 2016: Conditional image generation with PixelCNN decoders. Proc. 30th International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA.
[27] Wang, M. H., 2016: Rayleigh radiance computations for satellite remote sensing: Accounting for the effect of sensor spectral response function. Opt. Express, 24, 12414–12429. doi: 10.1364/OE.24.012414.
[28] Wang, Z., A. C. Bovik, H. R. Sheikh, et al., 2004: Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process., 13, 600–612. doi: 10.1109/TIP.2003.819861.
[29] Yang, J., Z. Q. Zhang, C. Y. Wei, et al., 2017: Introducing the new generation of Chinese geostationary weather satellites, Fengyun-4. Bull. Amer. Meteor. Soc., 98, 1637–1658. doi: 10.1175/bams-d-16-0065.1.