Deep learning for extracting micro-fracture: Pixel-level detection by convolutional neural network

2020 ◽  
Vol 205 ◽  
pp. 03007
Author(s):  
Yejin Kim ◽  
Seong Jun Ha ◽  
Tae sup Yun

Hydraulic stimulation has been a key technique in enhanced geothermal systems (EGS) and the recovery of unconventional hydrocarbon resources, used to artificially generate fractures in a rock formation. Previous experimental studies show that the pattern and aperture of the generated fractures vary with the propagation of the fracking pressure. The recent development of three-dimensional X-ray computed tomography allows the fractures to be visualized and their morphological features to be analysed further. However, a generated fracture is only a few pixels wide (e.g., 1-3 pixels), so accurate and quantitative extraction of micro-fractures is highly challenging. In addition, the high-frequency noise around the fracture and the weak contrast across it limit the applicability of conventional segmentation methods. In this study, we adopted a deep learning-based encoder-decoder convolutional neural network (CNN) for fast and precise detection of micro-fractures. Conventional image processing methods fail to extract continuous fractures and overestimate fracture thickness and aperture values, whereas the CNN-based approach successfully detects even barely visible fractures. Reconstruction of the 3D fracture surface and quantitative roughness analysis of the surfaces extracted by the different methods enable a comparison of each method's sensitivity (or robustness) to noise.
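As a rough illustration of the pixel-level approach described above, the following is a minimal sketch of an encoder-decoder segmentation CNN in PyTorch, assuming single-channel CT slices and a binary fracture mask; the layer widths, depths, and input size are illustrative assumptions, not the authors' architecture.

# Minimal encoder-decoder sketch for pixel-level fracture segmentation.
import torch
import torch.nn as nn

class FractureSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample while widening the feature maps.
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        # Decoder: upsample back to the input resolution.
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel fracture logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: one 128x128 grayscale CT slice -> per-pixel fracture logits.
logits = FractureSegNet()(torch.randn(1, 1, 128, 128))  # shape (1, 1, 128, 128)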

2020 ◽  
Vol 19 (6) ◽  
pp. 1884-1893
Author(s):  
Shekhroz Khudoyarov ◽  
Namgyu Kim ◽  
Jong-Jae Lee

Ground-penetrating radar is a typical sensor system for analyzing underground facilities such as pipelines and rebars. The technique can also be used to detect underground cavities, which are a potential sign of urban sinkholes. Multichannel ground-penetrating radar devices are widely used to detect underground cavities thanks to their capacity to provide informative three-dimensional data. Nevertheless, interpreting three-dimensional ground-penetrating radar data to recognize underground cavities is complicated because similar signals reflected from different underground objects are often mixed with those of the cavities. Because deep learning algorithms are known to be powerful at image classification, deep learning-based underground object detection techniques using two-dimensional ground-penetrating radar (GPR) radargrams have been researched in recent years. However, the spatial information of underground objects can be characterized better in three-dimensional ground-penetrating radar voxel data than in two-dimensional ground-penetrating radar images. Therefore, in this study, a novel underground object classification technique is proposed by applying a deep three-dimensional convolutional neural network to three-dimensional ground-penetrating radar data. First, a deep convolutional neural network architecture was developed using three-dimensional convolutional networks to recognize spatial underground objects such as pipes, cavities, manholes, and subsoil. The framework for applying the three-dimensional convolutional neural network to three-dimensional ground-penetrating radar data was then proposed and experimentally validated using real three-dimensional ground-penetrating radar data. To do so, three-dimensional ground-penetrating radar block data were used to train the developed three-dimensional convolutional neural network and to classify unclassified three-dimensional ground-penetrating radar data collected from urban roads in Seoul, South Korea. The validation results revealed that the four underground objects (pipe, cavity, manhole, and subsoil) were successfully classified, with an average classification accuracy of 97%. In addition, false alarms were rarely indicated.
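As a purely illustrative sketch (not the authors' network), a small 3-D CNN classifier over single-channel GPR voxel blocks might look like the following in PyTorch; the 32x32x32 block size, layer widths, and the four-class output are assumptions taken loosely from the abstract.

# Illustrative 3-D CNN classifier for GPR voxel blocks.
import torch
import torch.nn as nn

class Gpr3DClassifier(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, num_classes),  # logits for pipe / cavity / manhole / subsoil
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of two 32x32x32 GPR blocks -> class logits of shape (2, 4).
logits = Gpr3DClassifier()(torch.randn(2, 1, 32, 32, 32))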


Author(s):  
Chan Hee Park ◽  
Hyunjae Kim ◽  
Junmin Lee ◽  
Giljun Ahn ◽  
Myeongbaek Youn ◽  
...  

Abstract Motors, which are among the most widely used machines in the manufacturing field, play a key role in precision machining. Therefore, it is important to accurately estimate the health state of the motor, which affects the quality of the product. The research outlined in this paper aims to improve motor fault severity estimation by suggesting a novel deep learning method, specifically a feature-inherited hierarchical convolutional neural network (FI-HCNN). FI-HCNN consists of a fault diagnosis part and a severity estimation part, arranged hierarchically. The main novelty of the proposed FI-HCNN is the inherited structure between the hierarchical levels: the severity estimation part utilizes the latent features of the fault diagnosis part to exploit the fault-related representations learned in the diagnosis task. FI-HCNN can improve the accuracy of fault severity estimation because the level-specific abstraction is supported by the latent features. FI-HCNN is also easy to apply in practice because it is developed from stator current signals, which are usually acquired for control purposes. Experimental studies of mechanical motor faults, including eccentricity, broken rotor bars, and unbalanced conditions, are used to corroborate the high performance of FI-HCNN compared to both conventional methods and other hierarchical deep learning methods.
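The feature-inheritance idea can be sketched as follows: a shared backbone produces latent features, a diagnosis head classifies the fault type, and a severity head reuses those latent features together with the diagnosis output. This is a hedged illustration with an assumed input length, layer sizes, and class counts, not the published FI-HCNN configuration.

# Hierarchical sketch: severity branch inherits the diagnosis branch's latent features.
import torch
import torch.nn as nn

class HierarchicalMotorNet(nn.Module):
    def __init__(self, num_fault_types=4, num_severity_levels=3):
        super().__init__()
        # Backbone over a raw stator-current segment (1-D signal).
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),  # latent features shared by both branches
        )
        self.diagnosis_head = nn.Linear(32 * 8, num_fault_types)
        # Severity head takes the latent features plus the diagnosis logits,
        # mimicking the "inherited" connection between hierarchy levels.
        self.severity_head = nn.Sequential(
            nn.Linear(32 * 8 + num_fault_types, 64), nn.ReLU(),
            nn.Linear(64, num_severity_levels),
        )

    def forward(self, x):
        latent = self.backbone(x)
        fault_logits = self.diagnosis_head(latent)
        severity_logits = self.severity_head(torch.cat([latent, fault_logits], dim=1))
        return fault_logits, severity_logits

# Example: a batch of two 1-D current signals of length 1024.
fault, severity = HierarchicalMotorNet()(torch.randn(2, 1, 1024))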


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.
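For illustration only, a convolutional model that regresses a binding energy from a fixed-size matrix of per-neighbor features (a stand-in for the atomic and Voronoi-based neighbor information mentioned above) could be sketched as follows; the feature layout, neighbor count, and layer sizes are hypothetical and not taken from the paper.

# Hypothetical CNN regressor: per-neighbor features -> adsorbate binding energy.
import torch
import torch.nn as nn

class BindingEnergyCNN(nn.Module):
    def __init__(self, num_neighbors=12, num_features=8):
        super().__init__()
        # Treat the neighbor list as a 1-D "image": channels = per-atom features.
        self.net = nn.Sequential(
            nn.Conv1d(num_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),  # predicted binding energy (e.g., in eV)
        )

    def forward(self, neighbor_features):  # shape: (batch, features, neighbors)
        return self.net(neighbor_features)

# Example: 4 surface sites, 8 features for each of 12 Voronoi neighbors.
energies = BindingEnergyCNN()(torch.randn(4, 8, 12))  # shape (4, 1)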


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract Fast and accurate confirmation of metastasis on the frozen tissue section of an intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initializations, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated in the case of frozen section datasets with limited numbers.
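A hedged sketch of the three initialization strategies compared above (scratch, ImageNet, and CAMELYON16 pretraining) for a patch-level classifier, using a torchvision ResNet-18 as a stand-in backbone; the model choice, checkpoint path, and two-class setup are assumptions for illustration, not the authors' pipeline.

# Transfer-learning sketch: choose initial weights, then fine-tune on frozen-section patches.
import torch
import torch.nn as nn
from torchvision import models

def build_patch_classifier(init="camelyon16", num_classes=2):
    if init == "imagenet":
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    else:
        model = models.resnet18(weights=None)  # "scratch": random initialization
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # tumor vs. normal patch
    if init == "camelyon16":
        # Hypothetical checkpoint pretrained on CAMELYON16 patches (path is an assumption).
        state = torch.load("camelyon16_pretrained.pth", map_location="cpu")
        model.load_state_dict(state, strict=False)  # reuse compatible layers only
    return model

# Fine-tune the chosen initialization on the frozen-section patch dataset.
model = build_patch_classifier("camelyon16")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)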


2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

Available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with a large baseline. This undesired result is due to the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected with K nearest neighbors, and then generate highly discriminative deep learning-based descriptors with our multiple hard network structure (MTHardNets). Following this step, the conjugate features are produced by using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep learning transform-based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired by ground close-range and unmanned aerial vehicle (UAV) platforms verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) the proposed MTHardNets can generate high-quality descriptors; and (ii) the IHesAffNet can produce a substantial number of affine-invariant corresponding features with reliable transform parameters.
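Of the pipeline above, only the generic Euclidean distance-ratio matching step lends itself to a compact sketch; the learned detector (IHesAffNet), descriptors (MTHardNets), and DLT-LSM refinement are specific to the paper and are not reproduced here. The descriptor dimensionality and ratio threshold below are assumptions.

# Distance-ratio (Lowe-style) matching of descriptor sets from two images.
import torch

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match rows of desc_a to rows of desc_b with a Euclidean distance-ratio test."""
    dists = torch.cdist(desc_a, desc_b)            # pairwise Euclidean distances
    best2 = torch.topk(dists, k=2, largest=False)  # two nearest candidates per query
    nearest, second = best2.values[:, 0], best2.values[:, 1]
    keep = nearest < ratio * second                # accept only unambiguous matches
    idx_a = torch.nonzero(keep).squeeze(1)
    idx_b = best2.indices[keep, 0]
    return torch.stack([idx_a, idx_b], dim=1)      # (num_matches, 2) index pairs

# Example with random 128-D descriptors from the two images.
matches = ratio_match(torch.randn(500, 128), torch.randn(450, 128))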

