Deep-Learning-Based Harmonization and Super-Resolution of Near-Surface Air Temperature from CMIP6 Models (1850–2100)

2021
Author(s):
Xikun Wei
Guojie Wang
Donghan Feng
Zheng Duan
Daniel Fiifi Tawia Hagan
...  

Abstract. Future global temperature change will have significant effects on society and ecosystems. Earth system models (ESMs) are the primary tools for exploring future climate change; however, ESMs still carry great uncertainty and often run at a coarse spatial resolution (about 2 degrees for the majority of ESMs). Accurate temperature data at high spatial resolution are needed to improve our understanding of temperature variation and for many applications. We innovatively apply deep-learning (DL) methods from super-resolution (SR) in computer vision to merge data from 31 ESMs; the proposed method performs data merging, bias correction, and spatial downscaling simultaneously. SR algorithms are designed to enhance image quality and substantially outperform traditional methods. The CRU TS (Climatic Research Unit gridded Time Series) is used as the reference data in the model training process. To find a suitable DL method for our work, we compare five SR methodologies with different structures using multiple evaluation metrics (mean square error (MSE), mean absolute error (MAE), and Pearson correlation coefficient (R)). The optimal model is selected and used to merge the monthly historical data during 1850–2014 and the monthly future scenario data (SSP1-2.6, SSP2-4.5, SSP3-7.0, SSP5-8.5) during 2015–2100 at a high spatial resolution of 0.5 degrees. Results show that the merged data perform considerably better than any individual ESM and the ensemble mean (EM) of all ESMs in both spatial and temporal terms. The MAE improves greatly, and its spatial distribution increases with latitude in the Northern Hemisphere, resembling a 'three-step staircase' pattern. The merged product also performs very well where the observed time series is smooth with few fluctuations. Additionally, this work demonstrates that the DL model can be successfully transferred to data merging, bias correction, and spatial downscaling when enough training data are available. Data can be accessed at https://doi.org/10.5281/zenodo.5746632 (Wei et al., 2021).
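As an illustrative note (not from the paper): the model comparison above rests on three standard metrics, MSE, MAE, and Pearson's R, computed between each candidate's merged temperature field and the CRU TS reference. A minimal Python sketch of that evaluation, with illustrative array names, might look as follows.

```python
# Minimal sketch (not the authors' code): the three evaluation metrics named
# in the abstract, computed between a merged/downscaled temperature field and
# the CRU TS reference on the 0.5-degree grid. Array names are illustrative.
import numpy as np

def evaluate(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Compare predicted and reference fields (same shape, e.g. time x lat x lon)."""
    pred, ref = pred.ravel(), ref.ravel()
    mask = ~(np.isnan(pred) | np.isnan(ref))   # skip masked/missing cells
    pred, ref = pred[mask], ref[mask]
    mse = np.mean((pred - ref) ** 2)           # mean square error
    mae = np.mean(np.abs(pred - ref))          # mean absolute error
    r = np.corrcoef(pred, ref)[0, 1]           # Pearson correlation coefficient
    return {"MSE": mse, "MAE": mae, "R": r}

# e.g. evaluate(merged_tas, cru_ts) for each candidate SR architecture,
# then keep the model with the lowest errors and the highest R.
```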

2021
Vol 13 (3)
pp. 364
Author(s):  
Han Gao
Jinhui Guo
Peng Guo
Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised cases where dense pixel-level ground-truth labels are required. In this work, we propose a new object-oriented deep learning framework that leverages residual networks with different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and to optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets covering both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods, with an excellent inference time (11.3 s/ha).
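A minimal sketch of the multibranch idea described above, assuming two torchvision ResNet backbones of different depths fed with co-centered patches at two neighboring scales; all names and sizes are illustrative, and this is not the authors' implementation.

```python
# Minimal sketch (assumptions: two branches, torchvision backbones) of the
# multibranch idea: residual networks of different depths consume co-centered
# patches at neighboring scales, and their features are fused for a
# per-object class prediction.
import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet50

class MultiBranchNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.small = resnet18(weights=None)   # shallow branch, small patch
        self.large = resnet50(weights=None)   # deeper branch, larger context patch
        feat = self.small.fc.in_features + self.large.fc.in_features
        self.small.fc = nn.Identity()         # keep pooled features only
        self.large.fc = nn.Identity()
        self.head = nn.Linear(feat, num_classes)

    def forward(self, patch_small: torch.Tensor, patch_large: torch.Tensor):
        f = torch.cat([self.small(patch_small), self.large(patch_large)], dim=1)
        return self.head(f)

# Usage: logits = MultiBranchNet(6)(x64, x128), where x64/x128 are batches of
# 64- and 128-pixel patches cropped around the same GEOBIA segment centroid.
```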


2021
Vol 13 (10)
pp. 1944
Author(s):  
Xiaoming Liu
Menghua Wang

The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite has been a reliable source of ocean color data products, including normalized water-leaving radiance spectra nLw(λ) at five moderate-resolution (M) bands and one imagery (I) band. The spatial resolutions of the M-band and I-band nLw(λ) are 750 m and 375 m, respectively. Using a convolutional neural network (CNN), the M-band nLw(λ) imagery can be super-resolved from 750 m to 375 m spatial resolution by leveraging the high-spatial-resolution features of the I1-band nLw(λ) data. However, it is also important to enhance the spatial resolution of the VIIRS-derived chlorophyll-a (Chl-a) concentration and the water diffuse attenuation coefficient at 490 nm (Kd(490)), as well as other biological and biogeochemical products. In this study, we describe our effort to derive high-resolution Kd(490) and Chl-a data based on super-resolved nLw(λ) images at the five VIIRS M-bands. To improve network performance over extremely turbid coastal oceans and inland waters, the networks are retrained with a training dataset that includes ocean color data from the Bohai Sea, Baltic Sea, and La Plata River Estuary, covering water types from clear open oceans to moderately and highly turbid waters. The evaluation results show that the super-resolved Kd(490) image is much sharper than the original and contains more detailed fine spatial structures. A similar enhancement of finer structures is found in the super-resolved Chl-a images: Chl-a filaments are much sharper and thinner, and some very fine spatial features that are not visible in the original images appear in the super-resolved Chl-a imagery. The networks are also applied to four other coastal and inland water regions. The results show that super-resolution occurs mainly on pixels with Chl-a and Kd(490) features, especially on feature edges and at locations with a large spatial gradient. The biases between the original M-band images and the super-resolved high-resolution images are small for both Chl-a and Kd(490) in moderately to extremely turbid coastal oceans and inland waters, indicating that the super-resolution process does not change the mean values of the original images.
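The closing claim, that super-resolution leaves the 750 m mean values unchanged, can be checked by block-averaging the 375 m output back to the native grid. A minimal sketch of such a bias check (illustrative, not the study's code), assuming each 750 m pixel corresponds to a 2 × 2 block of 375 m pixels:

```python
# Minimal sketch (illustrative): block-average the 375 m super-resolved field
# back to the native 750 m grid and compare with the original M-band product.
# Assumes a 2x2 block of 375 m pixels maps to one 750 m pixel.
import numpy as np

def block_mean_2x2(hi_res: np.ndarray) -> np.ndarray:
    """Aggregate a (2H, 2W) 375 m field to an (H, W) 750 m field."""
    h, w = hi_res.shape[0] // 2, hi_res.shape[1] // 2
    return hi_res[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def mean_bias(sr_375m: np.ndarray, orig_750m: np.ndarray) -> float:
    """Mean difference between the aggregated super-resolved and original fields."""
    diff = block_mean_2x2(sr_375m) - orig_750m
    return float(np.nanmean(diff))

# e.g. mean_bias(sr_kd490, kd490_m_band) should be near zero if super-resolution
# preserves the mean values of the original image, as reported above.
```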


2021
pp. 107949
Author(s):  
Yifan Fan
Xiaotian Ding
Jindong Wu
Jian Ge
Yuguo Li

Sensors
2021
Vol 21 (13)
pp. 4447
Author(s):  
Jisun Shin
Young-Heon Jo
Joo-Hyung Ryu
Boo-Keun Khim
Soo Mee Kim

Red tides caused by Margalefidinium polykrikoides occur continuously along the southern coast of Korea, where there are many aquaculture cages, so prompt monitoring of bloom water is required to prevent considerable damage. Satellite-based ocean-color sensors are widely used for detecting red tide blooms, but their low spatial resolution restricts coastal observations. In contrast, terrestrial sensors with a high spatial resolution are good candidates, despite their lower spectral resolution and lack of bands dedicated to red tide detection. In this study, we developed a U-Net deep learning model for detecting M. polykrikoides blooms along the southern coast of Korea from PlanetScope imagery with a high spatial resolution of 3 m. The U-Net model was trained on four different datasets constructed from randomly or non-randomly chosen patches with different ratios of red tide and non-red tide pixels. Qualitative and quantitative assessments of the conventional red tide index (RTI) and the four U-Net models suggest that the U-Net model trained on a dataset of non-randomly chosen patches, including non-red tide patches, outperformed the RTI in terms of sensitivity, precision, and F-measure, with increases of 19.84%, 44.84%, and 28.52%, respectively. The M. polykrikoides map derived from U-Net provides the most reasonable red tide patterns across all water areas. Combining high-spatial-resolution images with deep learning approaches represents a good solution for monitoring red tides over coastal regions.
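For reference, the three scores used above to compare the U-Net output with the RTI follow the usual confusion-matrix definitions. A minimal sketch over binary red-tide masks (illustrative, not the authors' code):

```python
# Minimal sketch (illustrative): sensitivity (recall), precision, and
# F-measure computed from binary red-tide masks.
import numpy as np

def scores(pred: np.ndarray, truth: np.ndarray) -> dict:
    """pred/truth: boolean arrays of the same shape, True = red tide pixel."""
    tp = np.sum(pred & truth)                  # true positives
    fp = np.sum(pred & ~truth)                 # false positives
    fn = np.sum(~pred & truth)                 # false negatives
    sensitivity = tp / (tp + fn)               # fraction of bloom pixels found
    precision = tp / (tp + fp)                 # fraction of detections correct
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "precision": precision, "F": f_measure}
```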


2016
Vol 36 (4)
pp. 0428001
Author(s):
刘大伟 Liu Dawei
韩玲 Han Ling
韩晓勇 Han Xiaoyong

Forests
2019
Vol 10 (11)
pp. 1047
Author(s):  
Ying Sun
Jianfeng Huang
Zurui Ao
Dazhao Lao
Qinchuan Xin

The monitoring of tree species diversity is important for forest and wetland ecosystem service maintenance and resource management. Remote sensing is an efficient alternative to traditional field work for mapping tree species diversity over large areas. Previous studies have used light detection and ranging (LiDAR) and imaging spectroscopy (hyperspectral or multispectral remote sensing) for species richness prediction. The recent development of very high spatial resolution (VHR) RGB imagery has enabled detailed characterization of canopies and forest structures. In this study, we developed a three-step workflow for mapping tree species diversity, aiming to advance deep-learning-based diversity assessment in a tropical wetland (Haizhu Wetland) in South China using VHR-RGB images and LiDAR points. First, individual trees were detected from a canopy height model (CHM, derived from the LiDAR points) with the local-maxima-based method in the FUSION software (Version 3.70, Seattle, USA). Then, tree species were identified at the individual tree level via a patch-based image input method, which cropped the RGB images into small patches (the individually detected trees) around the detected tree apexes. Three deep learning architectures (AlexNet, VGG16, and ResNet50) were modified to classify the tree species, as they make good use of spatial context information. Finally, four diversity indices, namely the Margalef richness index, the Shannon–Wiener diversity index, the Simpson diversity index, and the Pielou evenness index, were calculated within fixed 30 × 30 m subsets for assessment. In the classification phase, VGG16 performed best, with an overall accuracy of 73.25% for 18 tree species. Based on the classification results, the mapped tree species diversity showed reasonable agreement with field survey data (Margalef: R² = 0.4562, root-mean-square error (RMSE) = 0.5629; Shannon–Wiener: R² = 0.7948, RMSE = 0.7202; Simpson: R² = 0.7907, RMSE = 0.1038; Pielou: R² = 0.5875, RMSE = 0.3053). While challenges remain for individual tree detection and species classification, the deep-learning-based solution shows potential for mapping tree species diversity.
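For reference, the four indices named above have standard textbook forms. A minimal sketch computing them from per-species tree counts in one 30 × 30 m subset (the Gini–Simpson form 1 − Σp_i² is assumed for the Simpson index; the paper may use a variant):

```python
# Minimal sketch (standard textbook formulas, not the authors' code) of the
# four diversity indices, computed from per-species classified tree counts
# inside one 30 x 30 m subset.
import numpy as np

def diversity_indices(counts: np.ndarray) -> dict:
    """counts: number of classified trees per species in the subset."""
    counts = counts[counts > 0]
    n = counts.sum()                            # total trees N
    s = counts.size                             # species richness S
    p = counts / n                              # relative abundances p_i
    margalef = (s - 1) / np.log(n)              # (S - 1) / ln(N)
    shannon = -np.sum(p * np.log(p))            # H' = -sum(p_i ln p_i)
    simpson = 1.0 - np.sum(p ** 2)              # Gini-Simpson form assumed
    pielou = shannon / np.log(s) if s > 1 else 0.0  # J = H' / ln(S)
    return {"Margalef": margalef, "Shannon-Wiener": shannon,
            "Simpson": simpson, "Pielou": pielou}

# e.g. diversity_indices(np.array([12, 5, 3, 1])) for a subset with four species.
```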

