Next Generation Mapping: Combining Deep Learning, Cloud Computing, and Big Remote Sensing Data

2019 ◽  
Vol 11 (23) ◽  
pp. 2881 ◽  
Author(s):  
Leandro Parente ◽  
Evandro Taquary ◽  
Ana Silva ◽  
Carlos Souza ◽  
Laerte Ferreira

The rapid growth in the number of satellites orbiting the planet is generating massive amounts of data for Earth science applications. Concurrently, state-of-the-art deep-learning algorithms and cloud computing infrastructure have become available, with great potential to revolutionize the processing of satellite remote sensing imagery. Within this context, this study evaluated, based on thousands of PlanetScope images obtained over a 12-month period, the performance of three machine learning approaches (random forest, long short-term memory (LSTM), and U-Net). We applied these approaches to map pasturelands in a region of central Brazil. The deep learning algorithms were implemented using TensorFlow, while the random forest used the Google Earth Engine platform. The accuracy assessment yielded F1 scores for U-Net, LSTM, and random forest of, respectively, 96.94%, 98.83%, and 95.53% on the validation data, and 94.06%, 87.97%, and 82.57% on the test data, indicating better classification efficiency for the deep learning approaches. Although the use of deep learning algorithms requires a high investment in calibration samples and the generalization of these methods requires further investigation, our results suggest that the neural network architectures developed in this study can be used to map large geographic regions using a wide variety of satellite data (e.g., PlanetScope, Sentinel-2, Landsat-8).
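
To make the deep-learning side of this workflow concrete, below is a minimal Keras/TensorFlow sketch of a U-Net-style segmentation network for 4-band PlanetScope chips. The chip size, network depth, and filter counts are illustrative assumptions, not the configuration used by the authors.

from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def build_unet(chip_size=128, bands=4):
    inputs = layers.Input((chip_size, chip_size, bands))
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)   # encoder
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)                                      # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding='same')(c3)
    c4 = conv_block(layers.concatenate([u2, c2]), 64)             # decoder with skip connections
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c4)
    c5 = conv_block(layers.concatenate([u1, c1]), 32)
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(c5)       # pasture / non-pasture mask
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_chips, train_masks, validation_data=(val_chips, val_masks), epochs=50)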

Irriga ◽  
2020 ◽  
Vol 25 (1) ◽  
pp. 160-169
Author(s):  
Cesar De Oliveira Ferreira Silva

SUPERVISED CLASSIFICATION OF IRRIGATED AREA USING SPECTRAL INDICES FROM LANDSAT-8 IMAGES WITH GOOGLE EARTH ENGINE

Identifying irrigated areas from satellite images is a challenge for which cloud computing solutions such as the Google Earth Engine (GEE) show great potential, as GEE simplifies searching, filtering, and manipulating large volumes of remote sensing data without paid software or image downloads. This work presents a supervised classification of irrigated and rain-fed areas in the region of Sorriso and Lucas do Rio Verde/MT using the Classification and Regression Trees (CART) algorithm in the GEE environment, with bands 2-7 of Landsat-8 and the NDVI, NDWI, and SAVI indices. The accuracy of the supervised classification was 99.4% when using the NDWI, NDVI, and SAVI indices and 98.7% without them, both considered excellent. The average processing time, measured over 10 runs, was 52 seconds, covering the entire workflow from image filtering to the completed classification. The source code is provided in an appendix in order to disseminate and encourage the use of GEE for spatial intelligence studies in irrigation and drainage, given its usability and ease of manipulation. Keywords: cloud computing, remote sensing, hydrology, modeling.
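
A minimal sketch of this kind of CART workflow with the Earth Engine Python API is shown below. The region rectangle, date range, and the labeled-point asset (with its 'class' property) are hypothetical placeholders rather than the author's actual assets or code.

import ee
ee.Initialize()

# Approximate bounding box around Sorriso / Lucas do Rio Verde, MT (illustrative).
roi = ee.Geometry.Rectangle([-56.2, -13.3, -55.4, -12.3])

l8 = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
      .filterBounds(roi)
      .filterDate('2019-01-01', '2019-12-31')
      .median())

ndvi = l8.normalizedDifference(['B5', 'B4']).rename('NDVI')
ndwi = l8.normalizedDifference(['B3', 'B5']).rename('NDWI')
savi = l8.expression('1.5 * (N - R) / (N + R + 0.5)',
                     {'N': l8.select('B5'), 'R': l8.select('B4')}).rename('SAVI')

stack = (l8.select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7'])
         .addBands(ndvi).addBands(ndwi).addBands(savi))

# Hypothetical labeled points with a 'class' property (0 = rain-fed, 1 = irrigated).
samples = ee.FeatureCollection('users/example/irrigation_training_points')
training = stack.sampleRegions(collection=samples, properties=['class'], scale=30)

classifier = ee.Classifier.smileCart().train(
    features=training, classProperty='class', inputProperties=stack.bandNames())
classified = stack.classify(classifier)

# Resubstitution confusion matrix as a quick accuracy check.
print(classifier.confusionMatrix().accuracy().getInfo())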


Author(s):  
M. Amani ◽  
A. Ghorbanian ◽  
S. Mahdavi ◽  
A. Mohammadzadeh

Abstract. Land cover classification is important for various environmental assessments. The ability to image the Earth's surface makes remote sensing an efficient approach to land cover classification. The only country-wide land cover map of Iran was produced by the Iranian Space Agency (ISA) using low-spatial-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) imagery and a basic classification method. Thus, it is necessary to produce a more accurate map using advanced remote sensing and machine learning techniques. In this study, multi-temporal Landsat-8 data (1,321 images) were fed into a Random Forest (RF) algorithm to classify the land cover of the entire country into 13 categories. To this end, all steps, including pre-processing, classification, and accuracy assessment, were implemented on the Google Earth Engine (GEE) platform. The overall classification accuracy and Kappa coefficient obtained for the Iran-wide map were 74% and 0.71, respectively, indicating the high potential of the proposed method for large-scale land cover mapping.
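
A minimal Earth Engine Python sketch of this type of country-wide workflow is given below: seasonal Landsat-8 median composites are stacked as features and fed to a random forest, and held-out points give an error matrix with overall accuracy and kappa. The bounding box, date ranges, tree count, and the two sample assets (with a 'class' property holding the 13 categories) are illustrative assumptions.

import ee
ee.Initialize()

region = ee.Geometry.Rectangle([44.0, 25.0, 63.5, 40.0])  # rough bounding box of Iran (illustrative)
BANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']

def seasonal_median(start, end, prefix):
    # One multi-band median composite per season, renamed so bands stay distinct in the stack.
    comp = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
            .filterBounds(region).filterDate(start, end)
            .median().select(BANDS))
    return comp.rename([prefix + '_' + b for b in BANDS])

stack = seasonal_median('2019-03-01', '2019-06-01', 'spring') \
          .addBands(seasonal_median('2019-06-01', '2019-09-01', 'summer'))

samples = ee.FeatureCollection('users/example/iran_landcover_samples')  # 'class' in 0..12 (hypothetical)
training = stack.sampleRegions(collection=samples, properties=['class'], scale=30)

rf = ee.Classifier.smileRandomForest(100).train(
    features=training, classProperty='class', inputProperties=stack.bandNames())
landcover = stack.clip(region).classify(rf)

# Accuracy assessment on held-out points (also a hypothetical asset).
validation = ee.FeatureCollection('users/example/iran_validation_points')
matrix = landcover.sampleRegions(collection=validation, scale=30) \
                  .errorMatrix('class', 'classification')
print(matrix.accuracy().getInfo(), matrix.kappa().getInfo())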


Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shubham Bharti ◽  
Arun Kumar Yadav ◽  
Mohit Kumar ◽  
Divakar Yadav

Purpose: With the rise of social media platforms, an increasing number of cyberbullying cases has emerged. Every day, a large number of people, especially teenagers, become victims of cyber abuse. Cyberbullying can have a long-lasting impact on the victim's mind: the victim may develop social anxiety, engage in self-harm, fall into depression or, in extreme cases, be driven to suicide. This paper aims to evaluate techniques for automatically detecting cyberbullying in tweets using machine learning and deep learning approaches.

Design/methodology/approach: The authors applied machine learning algorithms and, after analyzing the experimental results, postulated that deep learning algorithms perform better for the task. Word-embedding techniques were used to represent words for model training. The pre-trained GloVe embedding was used to generate word embeddings, and different versions of GloVe were compared. A bi-directional long short-term memory (BLSTM) network was used for classification.

Findings: The dataset contains 35,787 labeled tweets. The GloVe840 word embedding together with the BLSTM provided the best results on the dataset, with an accuracy, precision and F1 measure of 92.60%, 96.60% and 94.20%, respectively.

Research limitations/implications: If a word is not present in the pre-trained embedding (GloVe), it may be given a random vector representation that does not correspond to its actual meaning. Out-of-vocabulary (OOV) words may therefore be represented poorly, which can affect the detection of cyberbullying tweets. This problem may be mitigated by using character-level word embeddings.

Practical implications: The findings may inspire entrepreneurs to leverage the proposed approach to build deployable systems that detect cyberbullying in different contexts, such as workplaces and schools, and may draw the attention of lawmakers and policymakers to create systemic tools to tackle the ills of cyberbullying.

Social implications: Effective detection of cyberbullying may spare victims various psychological problems, which in turn can lead society to a healthier and more productive life.

Originality/value: The proposed method outperforms state-of-the-art approaches for detecting cyberbullying in tweets. It uses a large dataset created by intelligently merging two publicly available datasets, and a comprehensive evaluation of the proposed methodology is presented.
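
A minimal Keras sketch of the GloVe-plus-BLSTM classifier described above follows. The vocabulary size, sequence length, LSTM width, and the zero-vector fallback for out-of-vocabulary words are illustrative assumptions; the GloVe file path and word_index (e.g., from a Keras Tokenizer) are placeholders.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_WORDS, MAX_LEN, EMB_DIM = 20000, 50, 300  # illustrative sizes

def load_glove_matrix(path, word_index):
    # Build an embedding matrix; OOV words keep a zero vector (see the limitation above).
    matrix = np.zeros((MAX_WORDS, EMB_DIM))
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            vec = np.asarray(parts[-EMB_DIM:], dtype='float32')
            word = ' '.join(parts[:-EMB_DIM])
            idx = word_index.get(word)
            if idx is not None and idx < MAX_WORDS:
                matrix[idx] = vec
    return matrix

def build_blstm(embedding_matrix):
    inputs = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(MAX_WORDS, EMB_DIM, trainable=False,
                         embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix))(inputs)
    x = layers.Bidirectional(layers.LSTM(128))(x)   # bi-directional LSTM over the tweet
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation='sigmoid')(x)  # bullying / not bullying
    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy', tf.keras.metrics.Precision()])
    return model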


Water ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 3115
Author(s):  
Hadi Farhadi ◽  
Mohammad Najafzadeh

Detecting the parameters that drive flood occurrence is an important issue that has drawn increasing attention in recent years. Remote Sensing (RS) and Geographical Information Systems (GIS) are two efficient means of producing spatial Flood Risk Maps (FRM). In this study, a web-based platform called Google Earth Engine (GEE) (Google Company, Mountain View, CA, USA) was used to obtain flood risk indices for the Galikesh River basin, Northern Iran. With the aid of Landsat 8 satellite imagery and the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM), 11 risk indices (Elevation (El), Slope (Sl), Slope Aspect (SA), Land Use (LU), Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), Topographic Wetness Index (TWI), River Distance (RD), Waterway and River Density (WRD), Soil Texture (ST), and Maximum One-Day Precipitation (M1DP)) were derived. In the next step, all of these indices were imported into ArcMap 10.8 (Esri, West Redlands, CA, USA) for index normalization and better visualization of the graphical output. Afterward, a machine learning model (Random Forest (RF)), a robust data mining technique, was used to compute the importance of each index and to obtain the flood hazard map. According to the results, the WRD, RD, M1DP, and El indices accounted for about 68.27 percent of the total flood risk. Among these, the WRD index, contributing about 23.8 percent of the total risk, has the greatest impact on flooding. According to the FRM, about 21 and 18 percent of the total area fell into the high-risk and highest-risk classes, respectively.
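
The index-ranking step can be sketched with scikit-learn as below: a random forest is fit on the 11 indices sampled at labeled points, and its feature importances give the relative contribution of each index. The CSV layout, column names, and hyperparameters are hypothetical, not the study's actual data or settings.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

INDICES = ['El', 'Sl', 'SA', 'LU', 'NDVI', 'NDWI', 'TWI', 'RD', 'WRD', 'ST', 'M1DP']

df = pd.read_csv('flood_samples.csv')          # one row per sample point (hypothetical file)
X, y = df[INDICES], df['flooded']              # 1 = flooded, 0 = not flooded
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Relative importance of each index (sums to 1), analogous to the WRD/RD/M1DP/El ranking above.
importance = pd.Series(rf.feature_importances_, index=INDICES).sort_values(ascending=False)
print(importance)
print('test accuracy:', rf.score(X_te, y_te))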


2021 ◽  
Vol 2021 (1) ◽  
pp. 1001-1011
Author(s):  
Dwi Wahyu Triscowati ◽  
Widyo Pura Buana ◽  
Arif Handoyo Marsuhandi

The availability of rapidly updated information on potential maize area is important to support economic recovery after COVID-19. Maize mapping is a particular challenge in agriculture because maize-growing areas lack distinctive features such as those of paddy fields, maize does not yet have a standard area map, and it can be planted in paddy fields as well as in dry land and forest areas. A further problem is that direct mapping or manual identification of maize requires substantial computing resources. In this study, potential maize areas in selected regencies of East Java were mapped automatically using machine learning on the Google Earth Engine (GEE) cloud computing platform. With GEE cloud computing, maize mapping can be carried out over large areas without being constrained by local computing capacity. The study used a pixel-based Random Forest (RF) machine learning algorithm with input data from the Landsat-8, Sentinel-1, and Sentinel-2 satellites. Reference data for training the classification model were taken from the results of the maize Area Sampling Frame (KSA) survey. The best machine learning accuracy came from the combination of Landsat-8 and Sentinel-2, with a mean accuracy of 0.79. The classification model was then applied to 10 regencies, with the best result in Banyuwangi Regency at an accuracy of 0.89. The potential maize area in Banyuwangi ranged from 22,256.82 to 58,992.3 ha, based on pixels predicted as maize at least 3 times per month. These results show that cloud computing can rapidly perform the computations for 10 regencies, both for model building and for prediction. Moreover, because cloud computing is used, satellite imagery can be exploited as soon as it is published/released, so predictions of potential maize area can be produced quickly and accurately. The study also highlights its shortcomings, namely the limited number of training samples and the limitations of the algorithm used, which can be addressed in future work.
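
As an illustration of turning a classified image into an area estimate like the per-regency figures above, the Earth Engine Python sketch below sums the pixel area of maize pixels inside one regency boundary. The classified-image asset is a hypothetical placeholder, and the GAUL boundary lookup is only one possible way to obtain the regency geometry.

import ee
ee.Initialize()

classified = ee.Image('users/example/maize_classification_banyuwangi')  # 1 = maize (hypothetical asset)
regency = ee.FeatureCollection('FAO/GAUL/2015/level2') \
            .filter(ee.Filter.eq('ADM2_NAME', 'Banyuwangi')).geometry()

maize_area = (classified.eq(1)
              .multiply(ee.Image.pixelArea())      # m^2 contributed by each maize pixel
              .reduceRegion(reducer=ee.Reducer.sum(),
                            geometry=regency, scale=30, maxPixels=1e13))

hectares = ee.Number(maize_area.values().get(0)).divide(1e4)
print('estimated maize area (ha):', hectares.getInfo())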


2020 ◽  
Vol 12 (9) ◽  
pp. 1444 ◽  
Author(s):  
Abolfazl Abdollahi ◽  
Biswajeet Pradhan ◽  
Nagesh Shukla ◽  
Subrata Chakraborty ◽  
Abdullah Alamri

One of the most challenging research subjects in remote sensing is feature extraction, such as the extraction of road features from remote sensing images. Such extraction supports multiple applications, including map updating, traffic management, emergency response, road monitoring, and others. Therefore, this study conducts a systematic review of deep learning techniques applied to common remote sensing benchmarks for road extraction. The review covers four main types of deep learning methods, namely GAN models, deconvolutional networks, FCNs, and patch-based CNN models. We also compare these deep learning models as applied to remote sensing datasets to show which methods perform well in extracting road parts from high-resolution remote sensing images, and we describe future research directions and research gaps. Results indicate that the largest reported performance record is related to deconvolutional nets applied to remote sensing images, and the F1 scores of the generative adversarial network model, the DenseNet method, and FCN-32 applied to UAV and Google Earth images are high: 96.08%, 95.72%, and 94.59%, respectively.
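
For reference, below is a minimal Keras sketch of one of the four model families discussed, an FCN-32-style road-extraction network on a VGG16 backbone. The input size and filter counts are illustrative (and reduced from the original FCN) and do not correspond to any specific model reviewed here.

from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_fcn32(input_shape=(256, 256, 3)):
    # ImageNet-pretrained VGG16 backbone; its feature map is 1/32 of the input size.
    backbone = VGG16(include_top=False, weights='imagenet', input_shape=input_shape)
    x = layers.Conv2D(512, 7, padding='same', activation='relu')(backbone.output)  # reduced from 4096
    x = layers.Conv2D(512, 1, padding='same', activation='relu')(x)
    x = layers.Conv2D(1, 1)(x)                                    # road / background score map
    x = layers.Conv2DTranspose(1, 64, strides=32, padding='same',
                               activation='sigmoid')(x)           # 32x upsampling back to input size
    return Model(backbone.input, x)

model = build_fcn32()
model.compile(optimizer='adam', loss='binary_crossentropy')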


2020 ◽  
Vol 12 (3) ◽  
pp. 1625-1648 ◽  
Author(s):  
Xiao Zhang ◽  
Liangyun Liu ◽  
Changshan Wu ◽  
Xidong Chen ◽  
Yuan Gao ◽  
...  

Abstract. The amount of impervious surface is an important indicator in the monitoring of the intensity of human activity and environmental change. The use of remote sensing techniques is the only means of accurately carrying out global mapping of impervious surfaces covering large areas. Optical imagery can capture surface reflectance characteristics, while synthetic-aperture radar (SAR) images can be used to provide information on the structure and dielectric properties of surface materials. In addition, nighttime light (NTL) imagery can detect the intensity of human activity and thus provide important a priori probabilities of the occurrence of impervious surfaces. In this study, we aimed to generate an accurate global impervious surface map at a resolution of 30 m for 2015 by combining Landsat 8 Operational Land Imager (OLI) optical images, Sentinel-1 SAR images and Visible Infrared Imaging Radiometer Suite (VIIRS) NTL images based on the Google Earth Engine (GEE) platform. First, the global impervious and nonimpervious training samples were automatically derived by combining the GlobeLand30 land-cover product with VIIRS NTL and MODIS enhanced vegetation index (EVI) imagery. Then, the local adaptive random forest classifiers, allowing for a regional adjustment of the classification parameters to take into account the regional characteristics, were trained and used to generate regional impervious surface maps for each 5°×5° geographical grid using local training samples and multisource and multitemporal imagery. Finally, a global impervious surface map, produced by mosaicking numerous 5°×5° regional maps, was validated by interpretation samples and then compared with five existing impervious products (GlobeLand30, FROM-GLC, NUACI, HBASE and GHSL). The results indicated that the global impervious surface map produced using the proposed multisource, multitemporal random forest classification (MSMT_RF) method was the most accurate of the maps, having an overall accuracy of 95.1 % and kappa coefficient (one of the most commonly used statistics to test interrater reliability; Olofsson et al., 2014) of 0.898 as against 85.6 % and 0.695 for NUACI, 89.6 % and 0.780 for FROM-GLC, 90.3 % and 0.794 for GHSL, 88.4 % and 0.753 for GlobeLand30, and 88.0 % and 0.745 for HBASE using all 15 regional validation datasets. Therefore, it is concluded that a global 30 m impervious surface map can accurately and efficiently be generated by the proposed MSMT_RF method based on the GEE platform. The global impervious surface map generated in this paper is available at https://doi.org/10.5281/zenodo.3505079 (Zhang and Liu, 2019).
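
The "local adaptive" idea can be sketched with the Earth Engine Python API as below: one random forest is trained per 5°×5° cell from the samples falling inside that cell, the cell is classified, and the per-cell results are mosaicked. The feature-stack and sample assets, the 'impervious' property, the tree count, and the chosen cells are hypothetical placeholders.

import ee
ee.Initialize()

features = ee.Image('users/example/landsat_s1_viirs_stack_2015')    # multisource feature stack (hypothetical)
samples = ee.FeatureCollection('users/example/impervious_samples')  # property 'impervious' in {0, 1}

def classify_cell(lon, lat):
    # Train a local random forest from the samples inside one 5x5 degree cell and classify that cell.
    cell = ee.Geometry.Rectangle([lon, lat, lon + 5, lat + 5])
    local_samples = samples.filterBounds(cell)
    training = features.sampleRegions(collection=local_samples,
                                      properties=['impervious'], scale=30)
    rf = ee.Classifier.smileRandomForest(100).train(
        features=training, classProperty='impervious',
        inputProperties=features.bandNames())
    return features.clip(cell).classify(rf)

# Classify a few neighbouring cells and mosaic them into one regional map.
cells = [classify_cell(lon, 30) for lon in range(100, 115, 5)]
regional_map = ee.ImageCollection(cells).mosaic()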


2020 ◽  
Vol 38 (4A) ◽  
pp. 510-514
Author(s):  
Tay H. Shihab ◽  
Amjed N. Al-Hameedawi ◽  
Ammar M. Hamza

In this paper, to make use of their complementary potential for LULC mapping, spatial data acquired in 2019 from the Landsat 8 OLI sensor were rectified, enhanced, and then classified using the Random Forest (RF) and artificial neural network (ANN) methods. The optical remote sensing images were used to obtain information on the status of LULC and to extract details; the required image processing, including geometric correction and image enhancement, was applied to the optical remote sensing data before LULC mapping. The classification of the satellite images was used to extract features and to analyse the LULC of the study area. The classification results showed that the artificial neural network method outperforms the random forest method: for the ANN, the overall accuracy was 0.91 and the kappa was 0.89 on the training dataset, while the overall accuracy and kappa on the test dataset were 0.89 and 0.87, respectively.
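
A minimal scikit-learn sketch of this RF-versus-ANN comparison, reporting overall accuracy and the kappa coefficient, is shown below. The per-pixel sample table, band column names, and model hyperparameters are hypothetical stand-ins for the study's actual data and settings.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

df = pd.read_csv('lulc_samples.csv')                 # per-pixel band values + LULC label (hypothetical)
X, y = df[['B2', 'B3', 'B4', 'B5', 'B6', 'B7']], df['lulc']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {'RF': RandomForestClassifier(n_estimators=300, random_state=0),
          'ANN': MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)}

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, 'overall accuracy:', accuracy_score(y_te, pred),
          'kappa:', cohen_kappa_score(y_te, pred))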


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract. Urban area mapping is an important application of remote sensing, aiming at both the estimation of and changes in land cover in urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is that highly vegetated urban areas and oriented urban targets appear very similar to actual vegetation, which leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in the field of SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it is shown that a pre-trained deep learning model, DeepLabv3+, outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% were achieved with DeepLabv3+, and Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively. The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
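
One way to sketch the transfer-learning setup described above is with the third-party segmentation_models_pytorch package, which provides a DeepLabv3+ with an ImageNet-pretrained encoder; this is an assumption for illustration, not necessarily the implementation used by the authors. The channel count, class list, and training loop are likewise illustrative.

import torch
import segmentation_models_pytorch as smp

# Pre-trained encoder weights are the transfer-learning part; PolSAR features are
# assumed to be rasterized into 3-channel chips (an assumption for this sketch).
model = smp.DeepLabV3Plus(encoder_name='resnet34',
                          encoder_weights='imagenet',
                          in_channels=3, classes=4)     # e.g. urban / vegetation / water / other

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(chips, masks):
    # chips: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor of class ids.
    optimizer.zero_grad()
    loss = loss_fn(model(chips), masks)
    loss.backward()
    optimizer.step()
    return loss.item()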


2021 ◽  
Vol 13 (11) ◽  
pp. 2220
Author(s):  
Yanbing Bai ◽  
Wenqi Wu ◽  
Zhengxin Yang ◽  
Jinze Yu ◽  
Bo Zhao ◽  
...  

Efficiently identifying permanent water and temporary water in flood disasters has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, and estimating the water type in flood disaster events from only post-flood remote sensing imagery remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, but this field has only begun to be explored, owing to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach by leveraging the large-scale, publicly available Sen1Flood11 dataset, consisting of roughly 4831 labelled Sentinel-1 SAR and Sentinel-2 optical image chips gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We use focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirmed the effectiveness of the various proposed designs. In comparison experiments, the proposed method is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Flood11 test set. On the Sen1Flood11 Bolivia test set, our model also achieves a very high mIoU (47.88%), IoU (76.74%), and OA (95.59%), showing good generalization ability.
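
Two ingredients highlighted above, a focal loss for the water/non-water imbalance and an IoU score for evaluation, can be sketched in TensorFlow as follows. The alpha and gamma values are common defaults, not necessarily the paper's settings.

import tensorflow as tf

def binary_focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    # Down-weights easy (well-classified) pixels so that rare water pixels matter more.
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    p_t = tf.where(tf.equal(y_true, 1.0), y_pred, 1.0 - y_pred)
    alpha_t = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
    return -tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))

def iou_score(y_true, y_pred, threshold=0.5):
    # Intersection over Union for a binary water mask.
    pred_mask = tf.cast(y_pred > threshold, tf.float32)
    intersection = tf.reduce_sum(y_true * pred_mask)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(pred_mask) - intersection
    return intersection / (union + 1e-7)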

