Estimation of PMx Concentrations from Landsat 8 OLI Images Based on a Multilayer Perceptron Neural Network

2019 ◽  
Vol 11 (6) ◽  
pp. 646 ◽  
Author(s):  
Bo Zhang ◽  
Meng Zhang ◽  
Jian Kang ◽  
Danfeng Hong ◽  
Jian Xu ◽  
...  

The estimation of PMx (incl. PM10 and PM2.5) concentrations from satellite observations is of great significance for detecting environmental issues in many urban areas of north China. Recently, aerosol optical depth (AOD) data have been used to estimate PMx concentrations through linear and/or nonlinear regression analysis methods. However, much of the AOD-based research published so far has shown limitations in estimating the spatial distribution of PMx concentrations with respect to estimation accuracy and spatial resolution. In this research, the Google Earth Engine (GEE) platform is employed to obtain the band reflectance (BR) data of a large number of Landsat 8 Operational Land Imager (OLI) remote sensing images. Combined with meteorological and time parameters and the latitude and longitude zone (LLZ) method proposed in this article, a new BR-PMx model based on a multilayer perceptron neural network is constructed to estimate PMx concentrations directly from Landsat 8 OLI remote sensing images. Beijing, China was used as the test area, and the experiments demonstrated that the BR-PMx model achieved satisfactory performance for PMx-concentration estimation. The coefficient of determination (R2) of the BR-PM2.5 and BR-PM10 models reached 0.795 and 0.773, respectively, and the root mean square error (RMSE) reached 20.09 μg/m3 and 31.27 μg/m3. Meanwhile, the estimation results were compared with results calculated by Kriging interpolation at the same time points, and the spatial distributions are consistent. Therefore, it can be concluded that the proposed BR-PMx model provides a promising new method for acquiring accurate PMx concentrations for various cities of China.
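A minimal sketch, not the authors' code, of a multilayer perceptron regression in the spirit of the BR-PMx model: band reflectance plus meteorological, time, and latitude/longitude-zone features are mapped to a PM2.5 concentration. The feature layout, network size, and placeholder data are assumptions for illustration only.

```python
# Sketch of an MLP regression for PMx estimation; all data below are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical feature table: 7 OLI band reflectances, temperature, humidity,
# wind speed, day-of-year, and an integer latitude/longitude-zone code.
rng = np.random.default_rng(0)
X = rng.random((500, 12))
y = rng.random(500) * 150  # placeholder PM2.5 values in ug/m^3

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2  :", r2_score(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```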

Nativa ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 370 ◽  
Author(s):  
Luís Flávio Pereira ◽  
Cecilia Fátima Carlos Ferreira ◽  
Ricardo Morato Fiúza Guimarães

MANAGEMENT, QUALITY AND DEGRADATION DYNAMICS OF PASTURES IN THE ATLANTIC FOREST BIOME, MINAS GERAIS, BRAZIL

Pastures under inefficient management practices become degraded, causing serious socio-environmental and economic problems. Understanding the dynamics of pastoral systems and their interactions with the physical environment is therefore essential in the search for sustainable alternatives for agriculture and livestock farming. Management, annual dynamics, and socio-environmental interactions were studied in pastures of a watershed in the Atlantic Forest biome, Minas Gerais, Brazil, during the 2016/2017 hydrological year, using field data, farmers' reports, and remote sensing based on LANDSAT 8 OLI images and Google Earth Pro®. A quality index was proposed for the region's pastures, which presented, on average, moderate quality. Degradation levels were high, varying quadratically (levels 2, 4, 5 and IDP) and as a power function (level 1) with precipitation (p < 0.01), which suggests that irrigation may be an efficient practice for controlling degradation. During the year, at least 51.27% of the pastures showed some sign of degradation, reaching 91.32% in the dry period. The results suggest poorer quality and higher degradation levels in pastures on elevated, steep land. Given the local socio-environmental conditions, agroecological silvopastoral systems are recommended for pasture management. Keywords: land use, remote sensing, soil-landscape relationships, Zona da Mata, quality index.
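A minimal sketch, assumed rather than taken from the paper, of fitting the quadratic and power-law relationships the abstract reports between degradation level and precipitation. The data points are placeholders, not the study's measurements.

```python
# Sketch: fit quadratic and power-law models of degradation vs. precipitation.
import numpy as np
from scipy.optimize import curve_fit

precip = np.array([10., 40., 80., 120., 180., 250.])  # monthly precipitation (mm), hypothetical
degr = np.array([70., 55., 40., 30., 22., 18.])        # % degraded pasture, hypothetical

quadratic = lambda x, a, b, c: a * x**2 + b * x + c
power_law = lambda x, a, b: a * np.power(x, b)

(qa, qb, qc), _ = curve_fit(quadratic, precip, degr)
(pa, pb), _ = curve_fit(power_law, precip, degr, p0=(100.0, -0.5))

print("quadratic fit coefficients:", qa, qb, qc)
print("power-law fit coefficients:", pa, pb)
```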


2020 ◽  
Vol 12 (8) ◽  
pp. 1263 ◽  
Author(s):  
Yingfei Xiong ◽  
Shanxin Guo ◽  
Jinsong Chen ◽  
Xinping Deng ◽  
Luyi Sun ◽  
...  

Detailed and accurate information on the spatial variation of land cover and land use is a critical component of local ecology and environmental research. For these tasks, high spatial resolution images are required. Considering the trade-off between high spatial and high temporal resolution in remote sensing images, many learning-based models (e.g., convolutional neural networks, sparse coding, Bayesian networks) have been established to improve the spatial resolution of coarse images in both the computer vision and remote sensing fields. However, the data for training and testing these learning-based methods are usually limited to a certain location and a specific sensor, resulting in a limited ability to generalize the model across locations and sensors. Recently, generative adversarial nets (GANs), a new learning model from the deep learning field, have shown many advantages for capturing high-dimensional nonlinear features over large samples. In this study, we test whether the GAN method, with some modification, can improve the generalization ability across locations and sensors and realize the idea of "train once, apply everywhere and to different sensors" for remote sensing images. This work is based on super-resolution generative adversarial nets (SRGANs); we modify the loss function and network structure of the SRGAN and propose the improved SRGAN (ISRGAN), which makes model training more stable and enhances the generalization ability across locations and sensors. In the experiment, the training and testing data were collected from two sensors (Landsat 8 OLI and Chinese GF 1) at different locations (Guangdong and Xinjiang in China). For the cross-location test, the model was trained in Guangdong with Chinese GF 1 (8 m) data and tested with GF 1 data in Xinjiang. For the cross-sensor test, the same model trained in Guangdong with GF 1 was tested on Landsat 8 OLI images in Xinjiang. The proposed method was compared with the neighbor-embedding (NE) method, the sparse representation method (SCSR), and the SRGAN. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were chosen for the quantitative assessment. The results showed that, in the cross-location test, the ISRGAN (PSNR = 35.816, SSIM = 0.988) is superior to the NE (PSNR: 30.999, SSIM: 0.944) and SCSR (PSNR: 29.423, SSIM: 0.876) methods, and to the SRGAN (PSNR: 31.378, SSIM: 0.952). A similar result was seen in the cross-sensor test, where the ISRGAN had the best result (PSNR: 38.092, SSIM: 0.988) compared to the NE (PSNR: 35.000, SSIM: 0.982) and SCSR (PSNR: 33.639, SSIM: 0.965) methods and the SRGAN (PSNR: 32.820, SSIM: 0.949). Meanwhile, we also tested the accuracy improvement for land cover classification before and after super-resolution by the ISRGAN. The results show that the accuracy of land cover classification after super-resolution was significantly improved; in particular, the impervious surface class (roads and buildings with high-resolution texture) improved by 15%.
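A minimal sketch of the PSNR/SSIM evaluation used here to compare super-resolved outputs against reference high-resolution imagery; the array shapes, noise level, and data range are assumptions, not details from the paper.

```python
# Sketch: compute PSNR and SSIM between a reference patch and a reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)  # placeholder HR patch
reconstructed = np.clip(
    reference + 0.01 * rng.standard_normal((256, 256)), 0, 1
).astype(np.float32)                                   # placeholder SR output

psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0)
print(f"PSNR: {psnr:.3f} dB, SSIM: {ssim:.3f}")
```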


2021 ◽  
Vol 13 (16) ◽  
pp. 3319
Author(s):  
Nan Ma ◽  
Lin Sun ◽  
Chenghu Zhou ◽  
Yawen He

Automatic cloud detection in remote sensing images is of great significance. Deep-learning-based methods can achieve cloud detection with high accuracy; however, network training relies heavily on a large number of labels. Manually labelling pixel-wise cloud and non-cloud annotations for many remote sensing images is laborious and requires expert-level knowledge. Different types of satellite images cannot share a set of training data, due to differences in spectral range and spatial resolution between them. Hence, labelled samples from each new satellite image are required to train a new deep-learning-based model. To overcome this limitation, a novel cloud detection algorithm based on a spectral library and a convolutional neural network (CD-SLCNN) is proposed in this paper. In this method, residual learning and a one-dimensional CNN (Res-1D-CNN) are used to accurately capture the spectral information of the pixels based on the prior spectral library, effectively preventing errors due to the uncertainties in thin clouds, broken clouds, and clear-sky pixels during remote sensing interpretation. Benefiting from data simulation, the method is suitable for cloud detection on different types of multispectral data. A total of 62 Landsat-8 Operational Land Imager (OLI), 25 Moderate Resolution Imaging Spectroradiometer (MODIS), and 20 Sentinel-2 satellite images acquired at different times and over different types of underlying surfaces, such as high vegetation coverage, urban areas, bare soil, water, and mountains, were used for cloud detection validation and quantitative analysis, and the cloud detection results were compared with the results from the function of mask (Fmask) algorithm, the MODIS cloud mask, a support vector machine, and a random forest. The comparison revealed that the CD-SLCNN method achieved the best performance, with a higher overall accuracy (95.6%, 95.36%, 94.27%) and mean intersection over union (77.82%, 77.94%, 77.23%) on the Landsat-8 OLI, MODIS, and Sentinel-2 data, respectively. The CD-SLCNN algorithm produced consistent results, with more accurate cloud contours on thick, thin, and broken clouds over diverse underlying surfaces, and had a stable performance on bright surfaces such as buildings, ice, and snow.
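A minimal sketch, assumed rather than the published CD-SLCNN code, of a residual one-dimensional CNN that classifies a single pixel's spectrum as cloud or non-cloud. The number of bands, layer sizes, and class count are illustrative.

```python
# Sketch of a Res-1D-CNN-style per-pixel spectral classifier in PyTorch.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual (skip) connection

class Res1DCNN(nn.Module):
    def __init__(self, n_bands=7, n_classes=2):
        super().__init__()
        self.stem = nn.Conv1d(1, 32, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResBlock1D(32), ResBlock1D(32))
        self.head = nn.Linear(32 * n_bands, n_classes)

    def forward(self, spectra):              # spectra: (batch, n_bands)
        x = self.blocks(self.stem(spectra.unsqueeze(1)))
        return self.head(x.flatten(1))

logits = Res1DCNN()(torch.rand(4, 7))        # 4 pixels, 7 OLI-like bands
print(logits.shape)                          # torch.Size([4, 2])
```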


2019 ◽  
Vol 9 (14) ◽  
pp. 2917 ◽  
Author(s):  
Yan Chen ◽  
Chengming Zhang ◽  
Shouyi Wang ◽  
Jianping Li ◽  
Feng Li ◽  
...  

Using satellite remote sensing has become a mainstream approach for extracting crop spatial distribution. Obtaining fine edges while extracting crop spatial distribution information from high-resolution remote sensing images with a convolutional neural network (CNN) remains a challenge. Based on the characteristics of the crop areas in Gaofen 2 (GF-2) images, this paper proposes an improved CNN to extract fine crop areas. The CNN comprises a feature extractor and a classifier. The feature extractor employs a spectral feature extraction unit to generate spectral features and five coding-decoding-pair units to generate features at five levels. A linear model is used to fuse the features of different levels, and the fusion result is up-sampled to obtain a feature map consistent with the structure of the input image. This feature map is used by the classifier to perform pixel-by-pixel classification. In this study, the SegNet and RefineNet models and 21 GF-2 images of Feicheng County, Shandong Province, China, were chosen for comparison experiments. Our approach had an accuracy of 93.26%, which is higher than those of the existing SegNet (78.12%) and RefineNet (86.54%) models. This demonstrates the superiority of the proposed method in extracting crop spatial distribution information from GF-2 remote sensing images.
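A minimal sketch, assumed rather than the authors' network, of the fusion idea described above: multi-level feature maps are linearly combined via 1x1 convolutions, upsampled to the input resolution, and classified pixel by pixel. Channel counts and feature shapes are illustrative.

```python
# Sketch: linear fusion of multi-level features followed by per-pixel classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearFusionClassifier(nn.Module):
    def __init__(self, level_channels=(16, 32, 64), n_classes=2):
        super().__init__()
        # one 1x1 convolution per level acts as the linear fusion model
        self.proj = nn.ModuleList(nn.Conv2d(c, n_classes, kernel_size=1) for c in level_channels)

    def forward(self, features, out_size):
        fused = 0
        for f, proj in zip(features, self.proj):
            # upsample each level to the input image size, then sum linearly
            fused = fused + F.interpolate(proj(f), size=out_size,
                                          mode="bilinear", align_corners=False)
        return fused                          # per-pixel class scores

# Hypothetical feature maps from three levels of a 128x128 input image.
feats = [torch.rand(1, 16, 128, 128), torch.rand(1, 32, 64, 64), torch.rand(1, 64, 32, 32)]
scores = LinearFusionClassifier()(feats, out_size=(128, 128))
print(scores.shape)                           # torch.Size([1, 2, 128, 128])
```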


Author(s):  
Sri Yulianto Joko Prasetyo ◽  
Kristoko Dwi Hartomo ◽  
Mila Chrismawati Paseleng ◽  
Dian Widiyanto Candra ◽  
Bistok Hasiholan Simanjuntak

2021 ◽  
Vol 13 (2) ◽  
pp. 239
Author(s):  
Zhenfeng Shao ◽  
Zifan Zhou ◽  
Xiao Huang ◽  
Ya Zhang

Automatic extraction of the road surface and road centerline from very high-resolution (VHR) remote sensing images has always been a challenging task in the field of feature extraction. Most existing road datasets are based on data with simple and clear backgrounds under ideal conditions, such as images derived from Google Earth. Therefore, studies on road surface extraction and road centerline extraction under complex scenes are insufficient. Meanwhile, most existing efforts have addressed these two tasks separately, without considering the possible joint extraction of road surface and centerline. With the introduction of multitask convolutional neural network models, it is possible to carry out these two tasks simultaneously by facilitating information sharing within a multitask deep learning model. In this study, we first design a challenging dataset using remote sensing images from the GF-2 satellite. The dataset contains complex road scenes with manually annotated images. We then propose a two-task, end-to-end convolutional neural network, termed the Multitask Road-related Extraction Network (MRENet), for road surface extraction and road centerline extraction. We take the features extracted for the road surface as the condition for centerline extraction, and the information transmission and parameter sharing between the two tasks compensate for the potential problem of insufficient road centerline samples. In the network design, we use atrous convolutions and a pyramid scene parsing pooling module (PSP pooling), aiming to expand the network's receptive field, integrate multilevel features, and obtain more abundant information. In addition, we use a weighted binary cross-entropy function to alleviate the background imbalance problem. Experimental results show that the proposed algorithm outperforms several comparative methods in terms of classification precision and visual interpretation.
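A minimal sketch of a weighted binary cross-entropy loss of the kind the abstract describes for countering the road/background imbalance; the weight value, tensor shapes, and positive-pixel ratio are assumptions.

```python
# Sketch: weighted binary cross-entropy that penalises missed road pixels more heavily.
import torch
import torch.nn.functional as F

def weighted_bce_loss(logits, targets, pos_weight=10.0):
    """Weight positive (road) pixels more strongly than background pixels."""
    return F.binary_cross_entropy_with_logits(
        logits, targets, pos_weight=torch.tensor(pos_weight, device=logits.device)
    )

logits = torch.randn(2, 1, 64, 64)                  # raw network outputs
targets = (torch.rand(2, 1, 64, 64) > 0.9).float()  # sparse road mask (~10% positive)
print(weighted_bce_loss(logits, targets).item())
```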


2018 ◽  
Vol 8 (10) ◽  
pp. 1981 ◽  
Author(s):  
Chengming Zhang ◽  
Shuai Gao ◽  
Xiaoxia Yang ◽  
Feng Li ◽  
Maorui Yue ◽  
...  

When extracting the winter wheat spatial distribution from Gaofen-2 (GF-2) remote sensing images using a convolutional neural network (CNN), accurate identification of edge pixels is the key to improving result accuracy. In this paper, an approach for extracting an accurate winter wheat spatial distribution based on a CNN is proposed. A hybrid structure convolutional neural network (HSCNN) was first constructed, which consists of two independent sub-networks of different depths. The deeper sub-network was used to extract the pixels in the interior of the winter wheat fields, whereas the shallower sub-network extracted the pixels at the edges of the fields. The model was trained by classification-based learning and used in image segmentation to obtain the distribution of winter wheat. Experiments were performed on 39 GF-2 images of Shandong Province captured during 2017–2018, with SegNet and DeepLab as comparison models. As shown by the results, the average accuracy of SegNet, DeepLab, and HSCNN was 0.765, 0.853, and 0.912, respectively. HSCNN was as accurate as DeepLab and superior to SegNet for identifying interior pixels, and its identification of edge pixels was significantly better than that of the two comparison models, which shows the superiority of HSCNN in identifying the winter wheat spatial distribution.
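A minimal sketch, an assumption rather than the published HSCNN, of the hybrid idea: a deeper branch scores field-interior pixels, a shallower branch scores edge pixels, and the two outputs are combined into one per-pixel prediction. Depths, channel counts, and the fusion rule are illustrative.

```python
# Sketch: two sub-networks of different depths fused into one per-pixel prediction.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class HybridNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=2):
        super().__init__()
        # deeper sub-network: larger receptive field for field-interior pixels
        self.deep = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 32),
                                  conv_block(32, 32), nn.Conv2d(32, n_classes, 1))
        # shallower sub-network: preserves detail for field-edge pixels
        self.shallow = nn.Sequential(conv_block(in_ch, 16), nn.Conv2d(16, n_classes, 1))

    def forward(self, x):
        return self.deep(x) + self.shallow(x)   # simple fusion of both branches

pred = HybridNet()(torch.rand(1, 4, 128, 128))  # 4-band GF-2-like patch
print(pred.shape)                               # torch.Size([1, 2, 128, 128])
```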


2019 ◽  
Vol 11 (15) ◽  
pp. 1786 ◽  
Author(s):  
Tianyang Dong ◽  
Yuqi Shen ◽  
Jian Zhang ◽  
Yang Ye ◽  
Jing Fan

High-resolution remote sensing images can not only help forestry administrative departments achieve high-precision forest resource surveys, wood yield estimations, and forest mapping, but also provide decision-making support for urban greening projects. Many scholars have studied ways to detect single trees from remote sensing images and have proposed many detection methods. However, existing single tree detection methods produce many errors of commission and omission in complex scenes, where the digital values of background and trees are close, canopy contours are unclear, and illumination shadows cause abnormal shapes. To solve these problems, this paper presents progressive cascaded convolutional neural networks for single tree detection with Google Earth imagery, adopting three progressive classification branches to train on and detect tree samples of different classification difficulties. In this method, the feature extraction modules of three CNN networks are progressively cascaded, and the network layers in the branches determine whether to filter samples and feed them back to the feature extraction module to improve the precision of single tree detection. In addition, a two-phase training mechanism is used to improve the efficiency of model training. To verify the validity and practicability of our method, three forest plots located in Hangzhou City, China, Phang Nga Province, Thailand, and Florida, USA were selected as test areas, and the tree detection results of different methods, including region-growing, template-matching, a convolutional neural network, and our progressive cascaded convolutional neural network, are presented. The results indicate that our method has the best detection performance. Our method not only has higher precision and recall but also shows good robustness to forest scenes with different complexity levels. The F1 measure over the three plots was 81.0%, an improvement of 14.5%, 18.9%, and 5.0%, respectively, compared with the other existing methods.
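A minimal sketch, assumed rather than the authors' implementation, of the progressive cascade idea: each classification branch handles samples of increasing difficulty, and a sample only moves to the next branch when the current one is not confident enough. Thresholds, feature dimensions, and the routing rule are illustrative.

```python
# Sketch: progressive cascade of classification branches over pre-extracted features.
import torch
import torch.nn as nn

class CascadedDetector(nn.Module):
    def __init__(self, feat_dim=128, thresholds=(0.9, 0.7)):
        super().__init__()
        self.branches = nn.ModuleList(nn.Linear(feat_dim, 2) for _ in range(3))
        self.thresholds = thresholds

    def forward(self, features):                 # features: (batch, feat_dim)
        scores = torch.zeros(features.shape[0], 2)
        remaining = torch.ones(features.shape[0], dtype=torch.bool)
        for i, branch in enumerate(self.branches):
            probs = branch(features[remaining]).softmax(dim=1)
            scores[remaining] = probs
            if i < len(self.thresholds):
                # confident samples stop here; harder ones go to the next branch
                confident = probs.max(dim=1).values >= self.thresholds[i]
                idx = remaining.nonzero(as_tuple=True)[0]
                remaining[idx[confident]] = False
            if not remaining.any():
                break
        return scores

print(CascadedDetector()(torch.rand(8, 128)).shape)   # torch.Size([8, 2])
```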


Author(s):  
D. C. Pu ◽  
J. Y. Sun ◽  
Q. Ding ◽  
Q. Zheng ◽  
T. T. Li ◽  
...  

Abstract. Urban information extraction from satellite-based remote sensing data can provide the basic scientific decision-making data for the construction and management of future cities. In particular, long-term satellite remote sensing such as the Landsat record provides a rich source of data for urban area mapping. Urban area mapping based on single-temporal Landsat observations is vulnerable to data quality issues (such as cloud coverage and striping), making it difficult to extract urban areas accurately. Compositing dense time series of Landsat observations can significantly reduce the effect of data quality on urban area mapping. The multidimensional array is currently an effective theory for geographic big data analysis and management, providing a theoretical basis for compositing dense time series of Landsat observations. Google Earth Engine (GEE) not only provides rich satellite remote sensing data for compositing dense time series but also has powerful massive-data analysis capabilities. In this study, we chose the Random Forest (RF) algorithm for urban area extraction owing to its stable performance, high classification accuracy, and feature importance evaluation. The study area is located in the central part of the city of Beijing, China, and our main data source is all Landsat 8 OLI images over Beijing (path/row: 123/32) in 2017. Based on the multidimensional-array theory for geographic big data and the GEE cloud computing platform, four commonly used reducer methods were selected to composite the annual dense time series of Landsat 8 OLI data. After collecting the training samples, the RF algorithm was used for supervised classification, feature importance evaluation, and accuracy verification of the urban area mapping. The results showed that: 1) compared with a single-temporal Landsat 8 OLI image, the quality of the annual composite image was obviously improved, especially for urban extraction in cloudy areas; 2) in the feature importance evaluation based on the RF algorithm, the Coastal, Blue, NIR, SWIR1, and SWIR2 bands were the more important characteristic bands, while the Green and Red bands were comparatively less important; 3) the annual composite images obtained by the ee.Reducer.min, ee.Reducer.max, ee.Reducer.mean, and ee.Reducer.median methods were classified and accuracy verification was carried out using the verification points; the overall accuracy of the urban area mapping reached 0.805, 0.820, 0.868, and 0.929, respectively. In summary, the ee.Reducer.median method is a suitable method for annual dense time series Landsat image compositing, as it improves data quality while preserving the differences between land cover features and yields the highest urban area mapping accuracy.
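A minimal sketch, with assumed parameters rather than the paper's script, showing with the Earth Engine Python API the general workflow described above: filter the 2017 Landsat 8 OLI scenes for the study path/row, composite them with ee.Reducer.median, and classify the composite with a Random Forest. The training FeatureCollection asset and its 'class' property are hypothetical placeholders.

```python
# Sketch: annual median composite and RF classification with the Earth Engine Python API.
import ee
ee.Initialize()

# All 2017 Landsat 8 OLI scenes for path/row 123/32, composited by median.
collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
              .filter(ee.Filter.eq('WRS_PATH', 123))
              .filter(ee.Filter.eq('WRS_ROW', 32))
              .filterDate('2017-01-01', '2018-01-01'))
composite = collection.reduce(ee.Reducer.median())

# Hypothetical labelled points: a FeatureCollection with a 'class' property.
training_points = ee.FeatureCollection('users/example/beijing_training')  # placeholder asset
bands = composite.bandNames()
samples = composite.sampleRegions(collection=training_points,
                                  properties=['class'], scale=30)

classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty='class', inputProperties=bands)
urban_map = composite.classify(classifier)

print(composite.bandNames().getInfo())   # inspect the composited band names
```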

