OIL PATTERN IDENTIFICATION ANALYSIS USING SEMANTIC DEEP LEARNING METHOD FROM PLEIADES-1B SATELLITE IMAGERY WITH ARCGIS PRO SOFTWARE (Case Study: Village “A”)

2021 ◽  
Vol 936 (1) ◽  
pp. 012021
Author(s):  
Novi Anita ◽  
Bangun Muljo Sukojo ◽  
Sondy Hardian Meisajiwa ◽  
Muhammad Alfian Romadhon

Abstract There are many petroleum mining activities scattered across developing countries such as Indonesia. Indonesia is one of the largest oil-producing countries in Southeast Asia, ranked 23rd worldwide, and has produced a very large amount of petroleum since the Dutch colonial era. One of the oil-producing areas is “A” Village, where an old well more than 100 years of age still actively produces petroleum and remains the main source of livelihood for the local community. This activity leaves an oil pattern around the old oil refinery which, over time, seeps into the ground. This study aims to analyze and identify the oil pattern around the old oil refinery in the “A” area. The data used are High-Resolution Satellite Imagery (CSRT), namely Pleiades-1B with a spatial resolution of 1.5 meters, identified using a semantic deep learning method of the convolutional neural network (CNN) family. The research area is limited to the administrative boundary of XX Regency at a scale of 1:25,000, used as supporting data when clipping the image. The study produced a land cover map classified into 3 categories: oil pattern areas, areas not affected by oil, and vegetation. To assess the classification results, an accuracy test was performed with the confusion matrix method, supported by thermal data taken from the field in the form of temperature values for each land cover class.
With this research, it is hoped that agencies can be helped in identifying the right method to detect oil in mainland areas.
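The confusion-matrix accuracy assessment the abstract describes can be sketched in a few lines. The labels below are hypothetical stand-ins for the study's three land-cover categories, not its actual data:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index reference classes, columns index predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Correctly classified samples (diagonal) over all samples."""
    return np.trace(cm) / cm.sum()

# Toy reference/classified labels for the 3 categories:
# 0 = oil pattern, 1 = area not affected by oil, 2 = vegetation
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 0, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
print(overall_accuracy(cm))  # 6 of 8 correct -> 0.75
```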

2019 ◽  
Vol 11 (9) ◽  
pp. 1006 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Dehai Zhu ◽  
Jiantao Liu ◽  
Hao Guo ◽  
...  

Coastal land cover classification is a significant yet challenging task in remote sensing because of the complex and fragmented nature of coastal landscapes. However, availability of multitemporal and multisensor remote sensing data provides opportunities to improve classification accuracy. Meanwhile, rapid development of deep learning has achieved astonishing results in computer vision tasks and has also been a popular topic in the field of remote sensing. Nevertheless, designing an effective and concise deep learning model for coastal land cover classification remains problematic. To tackle this issue, we propose a multibranch convolutional neural network (MBCNN) for the fusion of multitemporal and multisensor Sentinel data to improve coastal land cover classification accuracy. The proposed model leverages a series of deformable convolutional neural networks to extract representative features from each single-source dataset. Extracted features are aggregated through an adaptive feature fusion module to predict final land cover categories. Experimental results indicate that the proposed MBCNN shows good performance, with an overall accuracy of 93.78% and a Kappa coefficient of 0.9297. Inclusion of multitemporal data improves accuracy by an average of 6.85%, while multisensor data contributes a further 3.24% increase. Additionally, the feature fusion module in this study also increases accuracy by about 2% when compared with the feature-stacking method. Results demonstrate that the proposed method can effectively mine and fuse multitemporal and multisource Sentinel data, which improves coastal land cover classification accuracy.
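The adaptive feature fusion step can be illustrated with a minimal sketch: each branch's feature vector is weighted by a softmax-normalised score and summed. The scores would be learned in the actual MBCNN; here they are given directly, and the branch vectors are toy values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_fusion(branch_features, scores):
    """Weight each branch's feature vector by a softmax-normalised
    score and sum -- a minimal stand-in for an adaptive feature
    fusion module (scores would be learned in practice)."""
    w = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(branch_features)        # (n_branches, dim)
    return (w[:, None] * stacked).sum(axis=0)  # (dim,)

# Three hypothetical branches (e.g. two temporal stacks + one sensor)
f1 = np.array([1.0, 0.0])
f2 = np.array([0.0, 1.0])
f3 = np.array([1.0, 1.0])
fused = adaptive_fusion([f1, f2, f3], scores=[0.0, 0.0, 0.0])
print(fused)  # equal scores -> elementwise mean of the three vectors
```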


2021 ◽  
Vol 13 (19) ◽  
pp. 3953
Author(s):  
Patrick Clifton Gray ◽  
Diego F. Chamorro ◽  
Justin T. Ridge ◽  
Hannah Rae Kerner ◽  
Emily A. Ury ◽  
...  

The ability to accurately classify land cover in periods before appropriate training and validation data exist is a critical step towards understanding subtle long-term impacts of climate change. These trends cannot be properly understood and distinguished from individual disturbance events or decadal cycles using only a decade or less of data. Understanding these long-term changes in low-lying coastal areas, home to a huge proportion of the global population, is of particular importance. Relatively simple deep learning models that extract representative spatiotemporal patterns can lead to major improvements in temporal generalizability. To provide insight into major changes in low-lying coastal areas, our study (1) developed a recurrent convolutional neural network that incorporates spectral, spatial, and temporal contexts for predicting land cover class, (2) evaluated this model across time and space and compared this model to conventional Random Forest and Support Vector Machine methods as well as other deep learning approaches, and (3) applied this model to classify land cover across 20 years of Landsat 5 data in the low-lying coastal plain of North Carolina, USA. We observed striking changes related to sea level rise that support smaller-scale evidence of agricultural land and forests transitioning into wetlands and “ghost forests”. This work demonstrates that recurrent convolutional neural networks should be considered when a model is needed that can generalize across time, and that they can help uncover important trends necessary for understanding and responding to climate change in vulnerable coastal regions.
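The core idea of carrying temporal context across a pixel's time series can be sketched with a minimal recurrence. This is an Elman-style update on toy random weights, not the paper's recurrent convolutional architecture; shapes and the 5-step/4-band series are illustrative assumptions:

```python
import numpy as np

def recurrent_features(x_seq, W_x, W_h, b):
    """Fold a pixel's time series of spectral features into a single
    hidden state: each step mixes the current observation with the
    context carried from earlier acquisitions."""
    h = np.zeros(W_h.shape[0])
    for x in x_seq:
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h

rng = np.random.default_rng(0)
x_seq = rng.normal(size=(5, 4))      # 5 time steps, 4 spectral bands
W_x = rng.normal(size=(3, 4)) * 0.1  # hidden size 3
W_h = rng.normal(size=(3, 3)) * 0.1
b = np.zeros(3)
h = recurrent_features(x_seq, W_x, W_h, b)
print(h.shape)  # (3,) -- one context vector summarising the series
```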


Genes ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 862
Author(s):  
Tong Liu ◽  
Zheng Wang

We present a deep-learning package named HiCNN2 to learn the mapping between low-resolution and high-resolution Hi-C (a technique for capturing genome-wide chromatin interactions) data, which can enhance the resolution of Hi-C interaction matrices. The HiCNN2 package includes three methods each with a different deep learning architecture: HiCNN2-1 is based on one single convolutional neural network (ConvNet); HiCNN2-2 consists of an ensemble of two different ConvNets; and HiCNN2-3 is an ensemble of three different ConvNets. Our evaluation results indicate that HiCNN2-enhanced high-resolution Hi-C data achieve smaller mean squared error and higher Pearson’s correlation coefficients with experimental high-resolution Hi-C data compared with existing methods HiCPlus and HiCNN. Moreover, all of the three HiCNN2 methods can recover more significant interactions detected by Fit-Hi-C compared to HiCPlus and HiCNN. Based on our evaluation results, we would recommend using HiCNN2-1 and HiCNN2-3 if recovering more significant interactions from Hi-C data is of interest, and HiCNN2-2 and HiCNN if the goal is to achieve higher reproducibility scores between the enhanced Hi-C matrix and the real high-resolution Hi-C matrix.
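The two evaluation measures used above, mean squared error and Pearson's correlation against the experimental high-resolution matrix, can be sketched directly. The 4×4 "Hi-C" matrices here are toy data, not HiCNN2 output:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two interaction matrices."""
    return float(np.mean((a - b) ** 2))

def pearson(a, b):
    """Pearson correlation between flattened interaction matrices."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Toy 4x4 matrices: an "enhanced" prediction vs. the "real" one
real = np.arange(16, dtype=float).reshape(4, 4)
enhanced = real + 0.5  # constant offset: correlation stays perfect
print(mse(real, enhanced))      # 0.25
print(pearson(real, enhanced))  # ~1.0
```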


2021 ◽  
Vol 87 (8) ◽  
pp. 577-591
Author(s):  
Fengpeng Li ◽  
Jiabao Li ◽  
Wei Han ◽  
Ruyi Feng ◽  
Lizhe Wang

Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep learning representation methods need a considerable amount of labeled data to capture class-specific features, limiting the application of deep learning-based methods when only a few labeled training samples are available. An unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work to address this issue. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to set the space of each category, and then makes predictions in the testing procedure using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
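The pull-together/push-apart objective can be illustrated with an InfoNCE-style score on toy feature vectors. The vectors, temperature, and single-negative setup are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style loss: the anchor (one colour-channel view) should
    be more similar to its positive (another channel of the same image)
    than to negatives (channels of other images)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    p = np.exp(logits - logits.max())
    p = p / p.sum()
    return -float(np.log(p[0]))  # lower loss = positive pair dominates

anchor   = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])  # same image, different channel
negative = np.array([0.0, 1.0, 0.0])  # different image
loss_good = contrastive_loss(anchor, positive, [negative])
loss_bad  = contrastive_loss(anchor, negative, [positive])
print(loss_good < loss_bad)  # True: an aligned positive gives lower loss
```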


2018 ◽  
Vol 10 (9) ◽  
pp. 1461 ◽  
Author(s):  
Yongyang Xu ◽  
Zhong Xie ◽  
Yaxing Feng ◽  
Zhanlong Chen

The road network plays an important role in the modern traffic system, and as development occurs, the road structure changes frequently. Owing to the advancements in the field of high-resolution remote sensing and the success of deep learning-based semantic segmentation in computer vision, extracting the road network from high-resolution remote sensing imagery is becoming increasingly popular, and has become a new tool for updating the geospatial database. Considering that the training dataset of a deep convolutional neural network is clipped to a fixed size, which causes roads to run through each sample, and that different road types have different widths, this work provides a segmentation model designed on densely connected convolutional networks (DenseNet) that introduces local and global attention units. The aim of this work is to propose a novel road extraction method that can efficiently extract the road network from remote sensing imagery using local and global information. A dataset from Google Earth was used to validate the method, and experiments showed that the proposed deep convolutional neural network can extract the road network accurately and effectively. This method also achieves a harmonic mean of precision and recall higher than other machine learning and deep learning methods.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1033
Author(s):  
Qiaodi Wen ◽  
Ziqi Luo ◽  
Ruitao Chen ◽  
Yifan Yang ◽  
Guofa Li

By detecting the defect location in high-resolution insulator images collected by unmanned aerial vehicles (UAVs) in various environments, power failures can be detected in a timely manner and the resulting economic loss can be reduced. However, the accuracies of existing detection methods are greatly limited by complex background interference and small target detection. To solve this problem, two deep learning methods based on Faster R-CNN (faster region-based convolutional neural network) are proposed in this paper, namely Exact R-CNN (exact region-based convolutional neural network) and CME-CNN (cascade the mask extraction and exact region-based convolutional neural network). First, we propose an Exact R-CNN based on a series of advanced techniques including FPN (feature pyramid network), cascade regression, and GIoU (generalized intersection over union). RoI Align (region of interest align) is introduced to replace RoI pooling (region of interest pooling) to address the misalignment problem, and the depthwise separable convolution and linear bottleneck are introduced to reduce the computational burden. Second, a new pipeline is innovatively proposed to improve the performance of insulator defect detection, namely CME-CNN. In our proposed CME-CNN, an insulator mask image is first generated to eliminate the complex background using an encoder-decoder mask extraction network, and then the Exact R-CNN is used to detect the insulator defects. The experimental results show that our proposed method can effectively detect insulator defects, and its accuracy is better than the examined mainstream target detection algorithms.
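Of the techniques listed, GIoU has a compact closed form worth spelling out: it extends IoU with a penalty based on the smallest box enclosing both boxes, so even disjoint boxes get a useful (negative) gradient signal. A minimal sketch for axis-aligned boxes:

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes (x1, y1, x2, y2):
    GIoU = IoU - (|C| - |A U B|) / |C|, where C is the smallest
    box enclosing both A and B."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 for identical boxes
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # negative for disjoint boxes
```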


2020 ◽  
Vol 12 (4) ◽  
pp. 698 ◽  
Author(s):  
Duo Jia ◽  
Changqing Song ◽  
Changxiu Cheng ◽  
Shi Shen ◽  
Lixin Ning ◽  
...  

Spatiotemporal fusion is considered a feasible and cost-effective way to solve the trade-off between the spatial and temporal resolution of satellite sensors. Recently proposed learning-based spatiotemporal fusion methods can address the prediction of both phenological and land-cover change. In this paper, we propose a novel deep learning-based spatiotemporal data fusion method that uses a two-stream convolutional neural network. The method combines both forward and backward prediction to generate a target fine image, in which a temporal-change-based mapping and a spatial-information-based mapping are formed simultaneously, addressing the prediction of both phenological and land-cover changes with better generalization ability and robustness. Comparative experimental results for the test datasets with phenological and land-cover changes verified the effectiveness of our method. Compared to existing learning-based spatiotemporal fusion methods, our method is more effective in predicting phenological change and directly reconstructing the prediction with complete spatial details without the need for auxiliary modulation.
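The forward/backward combination can be illustrated with a toy weighting scheme: each stream's prediction for the target date is weighted by its inverse temporal distance. This is a stand-in for the learned two-stream fusion, not the paper's network; the dates and 2×2 "images" are made up:

```python
import numpy as np

def fuse_bidirectional(pred_forward, pred_backward, t, t0, t1):
    """Combine a forward prediction (from the earlier reference pair
    at t0) and a backward prediction (from the later pair at t1) for
    a target date t, trusting whichever reference is closer in time."""
    w_f = (t1 - t) / (t1 - t0)  # closer to t0 -> trust forward more
    w_b = (t - t0) / (t1 - t0)
    return w_f * pred_forward + w_b * pred_backward

fwd = np.full((2, 2), 10.0)  # hypothetical fine-image prediction
bwd = np.full((2, 2), 20.0)
mid = fuse_bidirectional(fwd, bwd, t=5, t0=0, t1=10)
print(mid)  # halfway in time -> elementwise average, all 15.0
```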

