Lithological Mapping Based on Fully Convolutional Network and Multi-Source Geological Data

2021 ◽  
Vol 13 (23) ◽  
pp. 4860
Author(s):  
Ziye Wang ◽  
Renguang Zuo ◽  
Hao Liu

Deep learning algorithms have found numerous applications in geological mapping to assist mineral exploration, benefiting from capabilities such as high-dimensional feature learning and processing through multi-layer networks. However, two challenges arise when identifying geological features using deep learning methods. On the one hand, a single type of data resource cannot diagnose the characteristics of all geological units; on the other hand, deep learning models are commonly designed to output a single class for the whole input rather than segmenting it into several parts, which is necessary for geological mapping tasks. To address these concerns, this study proposes a framework comprising multi-source data fusion and a fully convolutional network (FCN) model, aiming to improve classification accuracy for geological mapping. Multi-source data fusion is first applied to integrate geochemical, geophysical, and remote sensing data for comprehensive analysis. A semantic segmentation-based FCN model is then constructed to determine the lithological unit of each pixel by exploring the relationships among the multi-source data. The FCN is trained end-to-end and performs dense pixel-wise prediction on inputs of arbitrary size, which is ideal for targeting geological features such as lithological units. The framework is finally validated by a comparative study discriminating seven lithological units in the Cuonadong dome, Tibet, China. A total classification accuracy of 0.96 and a high mean intersection over union of 0.9 were achieved, indicating that the proposed model is an innovative alternative to traditional machine learning algorithms for geological feature mapping.
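The two scores reported above (overall accuracy and mean intersection over union) can be reproduced from flattened per-pixel label maps in a few lines of pure Python. This is a minimal sketch of the standard metric definitions, not code from the paper; the function names and toy labels are illustrative:

```python
def mean_iou(y_true, y_pred, n_classes):
    """Mean intersection-over-union over classes, from flat per-pixel label lists."""
    ious = []
    for c in range(n_classes):
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

def overall_accuracy(y_true, y_pred):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

For a full map, the 2-D label arrays would simply be flattened before being passed in.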

2021 ◽  
Vol 13 (11) ◽  
pp. 2220
Author(s):  
Yanbing Bai ◽  
Wenqi Wu ◽  
Zhengxin Yang ◽  
Jinze Yu ◽  
Bo Zhao ◽  
...  

Efficiently distinguishing permanent water from temporary water in flood disasters has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, and estimating the water type from only post-flood imagery remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, but this field has only been studied initially due to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach, leveraging the large-scale publicly available Sen1Flood11 dataset of 4831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We utilize focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirmed the effectiveness of the proposed designs. In comparison experiments, the proposed method is superior to other classical models, achieving a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Flood11 test set. On the Sen1Flood11 Bolivia test set, the model also achieves a high mIoU (47.88%), IoU (76.74%), and OA (95.59%), showing good generalization ability.
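The focal loss mentioned above down-weights well-classified pixels so that training focuses on hard (often minority-class) water pixels. The sketch below is the standard per-pixel binary form from Lin et al.; the defaults gamma=2 and alpha=0.25 are the common choices, not necessarily the settings used in this paper:

```python
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for one pixel.
    p: predicted probability of the water class; y: label (1 = water, 0 = non-water).
    The (1 - pt)**gamma factor shrinks the loss of easy, confident pixels."""
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    w = alpha if y == 1 else 1.0 - alpha   # class-balance weight
    return -w * (1.0 - pt) ** gamma * math.log(pt)
```

With gamma = 0 and alpha = 1 the expression reduces to ordinary cross-entropy, which makes the down-weighting effect easy to verify.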


2018 ◽  
Vol 10 (11) ◽  
pp. 1827 ◽  
Author(s):  
Ahram Song ◽  
Jaewan Choi ◽  
Youkyung Han ◽  
Yongil Kim

Hyperspectral change detection (CD) can be performed effectively using deep-learning networks. Although these approaches require qualified training samples, ground-truth data are difficult to obtain in the real world, and preserving spatial information during training is difficult due to structural limitations. To solve these problems, this study proposed a novel CD method for hyperspectral images (HSIs), comprising sample generation and a deep-learning network called the recurrent three-dimensional (3D) fully convolutional network (Re3FCN), which merges the advantages of a 3D fully convolutional network (FCN) and convolutional long short-term memory (ConvLSTM). Principal component analysis (PCA) and the spectral correlation angle (SCA) were used to generate training samples with high probabilities of being changed or unchanged; this strategy allowed the network to be trained with fewer, representative samples. The Re3FCN mainly comprises spectral–spatial and temporal modules. In particular, the spectral–spatial module with a 3D convolutional layer extracts spectral and spatial features from the HSIs simultaneously, while the temporal module with ConvLSTM records and analyzes multi-temporal HSI change information. The study first proposed a simple and effective method to generate samples for network training, which can be applied effectively to cases with no training samples. Re3FCN can perform end-to-end detection for binary and multiple changes, and can receive multi-temporal HSIs directly as input without separately learning the characteristics of multiple changes. Finally, the network extracts joint spectral–spatial–temporal features and preserves the spatial structure during learning through its fully convolutional structure. This study was the first to use a 3D FCN and ConvLSTM for remote-sensing CD. To demonstrate the effectiveness of the proposed CD method, we performed binary and multi-class CD experiments. Results revealed that the Re3FCN outperformed conventional methods such as change vector analysis, iteratively reweighted multivariate alteration detection, PCA-SCA, FCN, and the combination of 2D convolutional layers and fully connected LSTM.


Forests ◽  
2019 ◽  
Vol 10 (9) ◽  
pp. 818
Author(s):  
Yanbiao Xi ◽  
Chunying Ren ◽  
Zongming Wang ◽  
Shiqing Wei ◽  
Jialing Bai ◽  
...  

The accurate characterization of tree species distribution in forest areas can significantly reduce uncertainties in the estimation of ecosystem parameters and forest resources. Deep learning algorithms have become a hot topic in recent years, but they had so far not been applied to tree species classification. In this study, a one-dimensional convolutional neural network (Conv1D), a popular deep learning architecture, was proposed to automatically identify tree species using OHS-1 hyperspectral images, and a random forest (RF) classifier was applied for comparison with the deep learning algorithm. Based on our experiments, we drew three main conclusions. First, the OHS-1 hyperspectral images used in this study have high spatial resolution (10 m), which reduces the influence of the mixed-pixel effect and greatly improves classification accuracy. Second, given the limited amount of sample data, the Conv1D-based classifier does not need many layers to achieve high classification accuracy; the size of the convolution kernel, however, has a great influence on the classification accuracy. Finally, the accuracy of Conv1D (85.04%) is higher than that of the RF model (80.61%). Especially for broadleaf species with similar spectral characteristics, such as Manchurian walnut and aspen, the accuracy of the Conv1D-based classifier is significantly higher than that of the RF classifier (87.15% and 71.77%, respectively). Thus, the Conv1D-based deep learning framework combined with hyperspectral imagery can efficiently improve the accuracy of tree species classification and has great application prospects.
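A Conv1D layer slides a small kernel along a pixel's reflectance spectrum, which is why the kernel size matters: it sets how many adjacent bands are combined into each output feature. The sketch below shows the core operation only (single channel, no padding, cross-correlation as CNN frameworks implement it); it is illustrative, not the paper's network:

```python
def conv1d(spectrum, kernel, stride=1):
    """'Valid' 1-D cross-correlation, as a Conv1D layer applies it to a
    per-pixel reflectance spectrum. Output length is
    (len(spectrum) - len(kernel)) // stride + 1."""
    k = len(kernel)
    return [sum(spectrum[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(spectrum) - k + 1, stride)]
```

For example, a difference kernel [1, 0, -1] responds to the slope of the spectrum, a simple analogue of the learned band-edge features that help separate spectrally similar species.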


2021 ◽  
Vol 12 (2) ◽  
pp. 138
Author(s):  
Hashfi Fadhillah ◽  
Suryo Adhi Wibowo ◽  
Rita Purnamasari

Abstract: Combining the real world with the virtual world and then modeling the result in 3D is the aim of Augmented Reality (AR) technology. Using fingers for computer operations on multiple devices makes the system more interactive. Marker-based AR is one type of AR that uses markers for detection. This study designed an AR system that detects fingertips as markers. The system uses the Region-based Fully Convolutional Network (R-FCN) deep learning method, which builds on detection results obtained from the Fully Convolutional Network (FCN). Detection results are integrated with a computer pointer for basic operations. This study uses a predetermined training-step scheme to obtain the best IoU, precision, and accuracy, with three settings: 25K, 50K, and 75K steps. High precision keeps changes in the centroid point small, and high accuracy improves AR performance under rapid movement and imperfect finger conditions. The system was designed using a dataset of index-finger images, with 10,800 training images and 3,600 test images. The model was tested for each scheme using video at different distances, locations, and times. The best results were produced by the 25K-step scheme, with an IoU of 69%, precision of 5.56, and accuracy of 96%.

Keywords: Augmented Reality, Region-based Convolutional Network, Fully Convolutional Network, Pointer, Step training
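The IoU reported for fingertip detection is the standard bounding-box overlap measure between the predicted and ground-truth boxes. A minimal sketch of that measure, with illustrative corner coordinates (not values from the study):

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as correct when this value exceeds a threshold such as 0.5, which is how a mean IoU like the 69% above is interpreted.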


Author(s):  
So-Hyun Park ◽  
Sun-Young Ihm ◽  
Aziz Nasridinov ◽  
Young-Ho Park

This study proposes a method to reduce the playing-related musculoskeletal disorders (PRMDs) that often occur among pianists. Specifically, we present a feasibility test that evaluates several state-of-the-art deep learning algorithms for preventing pianist injuries. For this, we propose (1) a C3P dataset including various piano playing postures and show (2) the application of four learning algorithms, which have demonstrated their superiority in video classification, to the proposed C3P dataset. To our knowledge, this is the first study to apply the deep learning paradigm to reducing PRMDs in pianists. The experimental results demonstrate a classification accuracy of 80% on average, supporting the hypothesis that deep learning algorithms are effective for preventing pianist injuries.


2020 ◽  
pp. 147592172094295
Author(s):  
Homin Song ◽  
Yongchao Yang

Subwavelength defect imaging using guided waves has been known to be a difficult task, mainly due to the diffraction limit and dispersion of guided waves. In this article, we present a noncontact super-resolution guided wave array imaging approach based on deep learning to visualize subwavelength defects in plate-like structures. The proposed approach is a novel hierarchical multiscale imaging approach that combines two distinct fully convolutional networks. The first fully convolutional network, the global detection network, globally detects subwavelength defects in a raw low-resolution guided wave beamforming image. The subsequent second fully convolutional network, the local super-resolution network, then locally resolves subwavelength-scale fine structural details of the detected defects. We conduct a series of numerical simulations and laboratory-scale experiments, using a noncontact guided wave array enabled by a scanning laser Doppler vibrometer, on aluminum plates with various subwavelength defects. The results demonstrate that the proposed super-resolution guided wave array imaging approach not only locates subwavelength defects but also visualizes super-resolution fine structural details of these defects, thus enabling further estimation of the size and shape of the detected defects. We discuss several key aspects of the performance of our approach, compare it with an existing super-resolution algorithm, and make recommendations for its successful implementation.
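The raw low-resolution input to the global detection network is a conventional guided wave beamforming image, in which each pixel's intensity is the coherent sum of the array signals back-propagated to that point. The sketch below is plain delay-and-sum beamforming (not the paper's networks), with illustrative parameters and dispersion ignored for simplicity:

```python
import math

def delay_and_sum(signals, sensors, pixel, c, fs):
    """Delay-and-sum beamforming intensity at one image pixel.
    signals[i] is the sampled time series at sensor position sensors[i] (x, y);
    c is the assumed (non-dispersive) wave speed, fs the sampling rate."""
    out = 0.0
    for sig, (sx, sy) in zip(signals, sensors):
        d = math.hypot(pixel[0] - sx, pixel[1] - sy)
        idx = int(round(d / c * fs))        # travel-time delay in samples
        if idx < len(sig):
            out += sig[idx]                 # coherent sum at the focused delay
    return abs(out)
```

Evaluating this over a grid of pixels yields the beamforming image; a scatterer shows up where the delayed signals add coherently, smeared over roughly a wavelength, which is the resolution limit the super-resolution network then sharpens.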


Author(s):  
V. R. S. Mani

In this chapter, the author paints a comprehensive picture of the deep learning models used in different multi-modal image segmentation tasks. The chapter is an introduction for those new to the field, an overview for those working in it, and a reference for those searching for literature on a specific application. Methods are classified according to the different types of multi-modal images and the corresponding types of convolutional neural networks used in the segmentation task. The chapter starts with an introduction to CNN topology and describes various models such as Hyper Dense Net, Organ Attention Net, UNet, VNet, and the Dilated Fully Convolutional Network, as well as transfer learning approaches.


The Analyst ◽  
2021 ◽  
Vol 146 (1) ◽  
pp. 184-195
Author(s):  
Pavel Jahoda ◽  
Igor Drozdovskiy ◽  
Samuel J. Payler ◽  
Leonardo Turchi ◽  
Loredana Bessone ◽  
...  

Combining deep learning algorithms with data fusion from multi-method spectroscopy could drastically increase the accuracy of automatic mineral recognition compared to existing approaches.

