DEEP-LEARNING-BASED AUTOMATED SEDIMENTARY GEOMETRY CHARACTERIZATION FROM BOREHOLE IMAGES

2021 ◽  
Author(s):  
Marie Lefranc ◽  
Zikri Bayraktar ◽  
Morten Kristensen ◽  
Hedi Driss ◽  
...  

Sedimentary geometry on borehole images usually summarizes the arrangement of bed boundaries, erosive surfaces, crossbedding, sedimentary dip, and/or deformed beds. The interpretation, very often manual, requires a good level of expertise, is time consuming, can suffer from user bias, and becomes very challenging when dealing with highly deviated wells. Bedform geometry interpretation from crossbed data is rarely completed from a borehole image. The purpose of this study is to develop an automated method to interpret sedimentary structures, including the bedform geometry resulting from changes in flow direction, from borehole images. Automation is achieved in this unique interpretation methodology using deep learning (DL). The first task comprised the creation of a training dataset of 2D borehole images. This library of images was then used to train deep neural network models. Testing different convolutional neural network (CNN) architectures showed that the ResNet architecture gave the best performance for the classification of the different sedimentary structures, with a very high validation accuracy in the range of 93 to 96%. To test the developed method, additional logs of synthetic data were created as sequences of different sedimentary structures (i.e., classes) associated with different well deviations, with the addition of gaps. The model was able to predict the proper class in these composite logs and highlight the transitions accurately.
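The defining idea behind the ResNet architecture mentioned above is the residual (skip) connection, which lets very deep classifiers train stably. A minimal numpy sketch of one identity residual block (dense weights stand in for the paper's convolutional layers; all names are illustrative, not from the study):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity residual block: y = relu(x + W2 @ relu(W1 @ x)).
    The skip path adds the input back to the transformed signal, so
    gradients can bypass the transform during backpropagation."""
    h = relu(w1 @ x)
    return relu(x + w2 @ h)

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)
w1 = rng.normal(scale=0.1, size=(d, d))
w2 = rng.normal(scale=0.1, size=(d, d))
y = residual_block(x, w1, w2)
```

With zero weights the block degenerates to relu(x), i.e. it passes the input through unchanged; this "easy identity" is what makes stacking many such blocks effective for classification tasks like the one described.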



Author(s):  
Mubarak Muhammad ◽  
Sertan Serte

Real estate price forecasting, building age estimation, and classification of building type and design (villa, apartment, number of floors) are among the areas where AI studies have centered on developing models that provide real-time solutions for the real estate industry. Within machine learning (ML), deep learning (DL) is an emerging area in which interest increases every year; as a result, a growing number of DL studies appear in conferences and papers, and models for real estate have begun to emerge. In this study, we present a deep learning method for the classification of houses in Northern Cyprus using convolutional neural networks. The classification is based on house age, house price, number of floors, and house type (villa or apartment). The first category is the villa-versus-apartment class; based on a training dataset of 362 images, this class achieved an overall accuracy of 96.40%. The second category is split into two classes according to the age of the buildings, namely apartments 0 to 5 years old and apartments 6 to 10 years old; this classification achieved an accuracy of 87.42%. The third category, villas with a roof versus villas without a roof, achieved an overall accuracy of 87.60%. The fourth category, villas priced from 10,000 to 200,000 euros versus villas priced above 200,000 euros, achieved an accuracy of 81.84%. The last category consists of three classes, namely 2-floor versus 3-floor apartments, 2-floor versus 4-floor apartments, and 2-floor versus 5-floor apartments, which achieved accuracies of 83.54%, 82.48%, and 84.77%, respectively.
From the experiments carried out in this thesis and the results obtained, we conclude that its main aims and objectives, to use deep learning for the classification and detection of houses in Northern Cyprus and to test the performance of AlexNet for house classification, were achieved. This study will be very significant for the creation of smart cities and the digitization of the real estate sector as the world embraces the vast power of artificial intelligence, machine learning, and machine vision.


Author(s):  
M. Buyukdemircioglu ◽  
R. Can ◽  
S. Kocaman

Abstract. Automatic detection, segmentation and reconstruction of buildings in urban areas from Earth Observation (EO) data are still challenging for many researchers. The roof is one of the most important elements in a building model. Three-dimensional geographical information system (3D GIS) applications generally require the roof type and roof geometry for performing various analyses on the models, such as energy efficiency. Conventional segmentation and classification methods are often based on features such as corners, edges and line segments. In parallel to the developments in computer hardware and artificial intelligence (AI) methods, including deep learning (DL), image features can be extracted automatically. As a DL technique, convolutional neural networks (CNNs) can also be used for image classification tasks, but they require a large amount of high-quality training data to obtain accurate results. The main aim of this study was to generate a roof type dataset from very high-resolution (10 cm) orthophotos of Cesme, Turkey, and to classify the roof types using a shallow CNN architecture. The training dataset consists of 10,000 roof images and their labels. Six roof type classes, namely flat, hip, half-hip, gable, pyramid and complex roofs, were used for the classification in the study area. The prediction performance of the shallow CNN model used here was compared with the results obtained from fine-tuning three well-known pre-trained networks, i.e. VGG-16, EfficientNetB4 and ResNet-50. The results show that although our CNN has slightly lower overall accuracy, it is still acceptable for many applications using sparse data.
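The basic operation shared by the shallow CNN and the pre-trained networks compared above is the 2D convolution that extracts low-level features such as edges of roof outlines. A minimal numpy sketch of a single-channel "valid" convolution (a toy step-edge image and kernel, purely illustrative):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2D convolution (cross-correlation, as in CNN
    layers): slide the kernel over the image and take elementwise dot products."""
    kh, kw = kernel.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-gradient kernel responds where intensity changes left-to-right,
# the kind of low-level feature a first CNN layer learns automatically.
img = np.zeros((6, 6))
img[:, 3:] = 1.0                       # step edge between columns 2 and 3
edge_kernel = np.array([[-1.0, 1.0]])  # 1x2 gradient filter
resp = conv2d_valid(img, edge_kernel)
```

The response is nonzero only at the edge location, which is why stacks of such learned filters can separate roof classes by shape rather than raw pixel values.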


2020 ◽  
Author(s):  
Sassan Ostvar ◽  
Han Truong ◽  
Elisabeth R. Silver ◽  
Charles J. Lightdale ◽  
Chin Hur ◽  
...  

Esophageal adenocarcinoma (EAC) is a rare but lethal cancer with rising incidence in several global hotspots including the United States. The five-year survival rate for patients diagnosed with advanced disease can be as low as 5% in EAC, making early detection and preventive intervention crucial. The current standard of care for EAC targets patients with Barrett's esophagus (BE), the main precursor to EAC and a relatively common condition in adults with chronic acid reflux disease. Preventive care for EAC requires repeated surveillance endoscopies of BE patients with biopsy sampling, and can be intrusive, error-prone, and costly. The integration of minimally invasive subsurface tissue imaging into the current standard of care could reduce the need for exhaustive tissue sampling and improve the quality of life of BE patients. Effective adoption of subsurface imaging in EAC care can be facilitated by computer-aided detection (CAD) systems based on deep learning. Despite their recent successes in lung and breast cancer imaging, the development of deep neural networks for rare conditions like EAC remains challenging due to data scarcity, heavy bias in existing datasets toward non-cases, and uncertainty in image labels. Here we explore the use of synthetic datasets, specifically data derived from simulations of optical back-scattering during imaging, in the development of CAD systems based on deep learning. As a proof of concept, we studied the binary classification of esophageal OCT into normal squamous and glandular mucosae, typical of BE. We found that deep convolutional networks trained on synthetic data had improved performance over models trained on clinical datasets with uncertain labels. Model performance also improved with dataset size during training on synthetic data.
Our findings demonstrate the utility of transfer from simulations to real data in the context of medical imaging, especially in the severely data-poor regime and when significant uncertainty in labels is present, and motivate further development of transfer learning from simulations to aid the development of CAD for rare malignancies.
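The sim-to-real recipe described above, fit on clean simulated data and evaluate on real data with a domain gap, can be illustrated with a deliberately tiny toy model (a nearest-centroid classifier on 2D Gaussians; the data, the shift parameter, and the classifier are illustrative stand-ins, not the paper's OCT pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, shift=0.0):
    """Two-class 2D toy data; `shift` models the synthetic-to-real
    domain gap (a stand-in for simulated vs. clinical image features)."""
    x0 = rng.normal(loc=[-2, 0], scale=1.0, size=(n, 2)) + shift
    x1 = rng.normal(loc=[+2, 0], scale=1.0, size=(n, 2)) + shift
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Train" purely on synthetic data: store one centroid per class.
Xs, ys = make_data(200, shift=0.0)
centroids = np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

# Evaluate on "real" data drawn with a small domain shift.
Xr, yr = make_data(200, shift=0.3)
dists = np.linalg.norm(Xr[:, None, :] - centroids[None], axis=2)
pred = np.argmin(dists, axis=1)
acc = (pred == yr).mean()
```

As long as the shift is small relative to the class separation, the synthetically trained model transfers well, which is the regime the abstract argues for in data-poor medical imaging.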


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Leonard L Yeo ◽  
Melih Engin ◽  
Robin Lange ◽  
David K Tang ◽  
Sadaf Monajemi ◽  
...  

Purpose: B-mode ultrasound imaging is commonly used for the detection and measurement of atherosclerotic carotid plaques, which are an important cause of ischemic stroke. However, accurate interpretation of ultrasound can be difficult and subjective. Artificial intelligence (AI) models can assist in image interpretation, reducing subjectivity and speeding up the process of detection and measurement of carotid plaques. We evaluated the accuracy of a deep learning model for automatic detection of carotid plaques in B-mode ultrasound against expert interpretation of the images. Methods: We propose an automated method using convolutional neural networks to detect atherosclerotic plaques and measure intima-media thickness (IMT) in B-mode carotid images. In contrast to most existing methods, our goal was not only to measure IMT in healthy subjects (max IMT below 1.2 mm) but also to provide accurate detection of plaques and other vessel wall pathology. Given the B-mode longitudinal image as the input, the neural network first finds a region of interest (ROI) surrounding the artery and then segments both the near wall and the far wall of the artery. The network was trained and tested on two separate datasets obtained retrospectively from 3 stroke centers and 4 different ultrasound machine manufacturers. The training dataset comprised 1021 images. Results: The performance of the method was assessed on an independent dataset of 205 images not used for model development, to prevent bias; 54% (111 out of 205) of the images showed pathology. The ground truth was determined by an expert reader interpreting the images, and the Pearson coefficient (IMT correlation) and Bland-Altman analysis were used to assess the performance of the method. The obtained correlation coefficient was 0.93 and R-squared was 0.87, showing a strong correlation.
There was no significant over- or underestimation of IMT (bias = -0.002 mm, lower limit of agreement (LOA) = -0.246 mm, upper LOA = 0.242 mm). Conclusion: The results show that the proposed deep learning method can be used for accurate analysis and interpretation of carotid ultrasound scans in a clinical setting and can potentially reduce reporting time while increasing the objectivity of the reports.
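The two agreement statistics reported above, Pearson correlation and Bland-Altman bias with 95% limits of agreement, are straightforward to compute. A minimal numpy sketch (the IMT readings below are invented toy values, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement series
    (e.g. model IMT vs. expert IMT), plus the Pearson correlation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                       # mean difference
    sd = diff.std(ddof=1)                    # sample SD of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    r = np.corrcoef(a, b)[0, 1]              # Pearson correlation
    return bias, loa, r

# Toy paired IMT readings in mm (illustrative values only).
model  = [0.62, 0.71, 0.80, 1.05, 1.21, 0.95]
expert = [0.60, 0.74, 0.78, 1.08, 1.19, 0.97]
bias, (lo, hi), r = bland_altman(model, expert)
```

A bias near zero with narrow limits of agreement, as in the abstract's result (bias = -0.002 mm, LOA ± roughly 0.24 mm), indicates no systematic over- or underestimation.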


2022 ◽  
Author(s):  
Chandra Bhushan Kumar

<div>In this study, we propose SCL-SSC (Supervised Contrastive Learning for Sleep Stage Classification), a deep-learning-based framework for sleep stage classification that performs the task in two stages: 1) feature representation learning and 2) classification. The feature learner is trained separately to represent the raw EEG signals in a feature space such that the Euclidean distance between the embeddings of EEG signals from the same sleep stage is smaller than the distance between the embeddings of EEG signals from different sleep stages. On top of the feature learner, we train a classifier to perform the classification task. The distribution of sleep stages is not uniform in the PSG data: the wake (W) and N2 stages appear more frequently than the other sleep stages, which leads to an imbalanced dataset problem. This paper addresses the issue by using a weighted softmax cross-entropy loss function, and a dataset oversampling technique is utilized to produce synthetic data points for minority sleep stages, approximately balancing the number of sleep stages in the training dataset. The performance of our proposed model is evaluated on the publicly available PhysioNet Sleep-EDF datasets, 2013 and 2018 versions. We trained and evaluated our model on two EEG channels (Fpz-Cz and Pz-Oz) on these datasets separately. The evaluation results show that, to the best of our knowledge, SCL-SSC achieves the best annotation performance compared with existing state-of-the-art deep learning algorithms, with an overall accuracy of 94.1071%, a macro F1 score of 92.6416, and a Cohen's kappa coefficient (κ) of 0.9197. Our ablation studies on SCL-SSC show that both triplet-loss-based pre-training of the feature learner and oversampling of minority classes contribute to the better performance of the model.</div>
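The weighted softmax cross-entropy loss mentioned above scales each example's loss by a per-class weight, so rare stages (e.g. N1) count more than the over-represented W and N2 stages. A minimal numpy sketch with illustrative inverse-frequency weights (the toy logits, labels, and counts are assumptions, not the paper's values):

```python
import numpy as np

def weighted_softmax_xent(logits, labels, class_weights):
    """Mean softmax cross-entropy with each example's negative
    log-likelihood scaled by the weight of its true class."""
    z = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return (class_weights[labels] * nll).mean()

# Inverse-frequency weights for a toy 3-class problem: class 2 is rare.
counts = np.array([500, 400, 100])
weights = counts.sum() / (len(counts) * counts)

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.8, 0.3],
                   [0.1, 0.4, 1.5]])
labels = np.array([0, 1, 2])
loss_weighted = weighted_softmax_xent(logits, labels, weights)
loss_plain = weighted_softmax_xent(logits, labels, np.ones(3))
```

Because the rare class carries the largest weight, a misclassified minority-stage epoch produces a larger gradient, which is the mechanism that counteracts the imbalance alongside oversampling.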




2020 ◽  
Vol 12 (12) ◽  
pp. 2026 ◽  
Author(s):  
Ellen Bowler ◽  
Peter T. Fretwell ◽  
Geoffrey French ◽  
Michal Mackiewicz

Many wildlife species inhabit inaccessible environments, limiting researchers' ability to conduct essential population surveys. Recently, very high resolution (sub-metre) satellite imagery has enabled remote monitoring of certain species directly from space; however, manual analysis of the imagery is time-consuming, expensive and subjective. State-of-the-art deep learning approaches can automate this process; however, image datasets are often small, and uncertainty in ground truth labels can affect supervised training schemes and the interpretation of errors. In this paper, we investigate these challenges by conducting both manual and automated counts of nesting Wandering Albatrosses on four separate islands, captured by the 31 cm resolution WorldView-3 sensor. We collect counts from six observers and train a convolutional neural network (U-Net) using leave-one-island-out cross-validation and different combinations of ground truth labels. We show that (1) interobserver variation in manual counts is significant and differs between the four islands, (2) the small dataset can limit the network's ability to generalise to unseen imagery and (3) the choice of ground truth labels can have a significant impact on our assessment of network performance. Our final results show the network detects albatrosses as accurately as human observers for two of the islands, while on the other two, misclassifications are largely caused by the presence of noise, cloud cover and habitat not present in the training dataset. While the results show promise, we stress the importance of considering these factors in any study where data are limited and observer confidence is variable.
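The leave-one-island-out cross-validation used above holds out every image from one island per fold, so the network is always evaluated on an island it never saw during training. A minimal pure-Python sketch of the split logic (the island labels are a toy example):

```python
def leave_one_group_out(groups):
    """Yield (held_out_group, train_indices, test_indices) splits that
    hold out one entire group (here: one island) per fold."""
    for held_out in sorted(set(groups)):
        train = [i for i, g in enumerate(groups) if g != held_out]
        test = [i for i, g in enumerate(groups) if g == held_out]
        yield held_out, train, test

# Toy index: 7 images spread over 4 islands.
islands = ["A", "A", "B", "C", "C", "C", "D"]
folds = list(leave_one_group_out(islands))
```

Grouped splitting like this is what exposes the generalisation failures the authors report: a random per-image split would leak island-specific habitat and noise into training.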


2021 ◽  
Vol 45 (4) ◽  
pp. 233-238
Author(s):  
Lazar Kats ◽  
Marilena Vered ◽  
Johnny Kharouba ◽  
Sigalit Blumer

Objective: To apply the technique of transfer deep learning on a small dataset for the automatic classification of X-ray modalities in dentistry. Study design: To solve the classification problem, convolutional neural networks based on the VGG16, NASNetLarge and Xception architectures, pre-trained on an ImageNet subset, were used. In this research, we used an in-house dataset created within the School of Dental Medicine, Tel Aviv University. The training dataset contained 496 anonymized digital panoramic and cephalometric X-ray images for orthodontic examinations from a CS 8100 Digital Panoramic System (Carestream Dental LLC, Atlanta, USA). The models were trained using an NVIDIA GeForce GTX 1080 Ti GPU. The study was approved by the ethical committee of Tel Aviv University. Results: The test dataset contained 124 X-ray images from 2 different devices: the CS 8100 Digital Panoramic System and the Planmeca ProMax 2D (Planmeca, Helsinki, Finland). X-ray images in the test dataset were not pre-processed. The accuracy of all neural network architectures was 100%. Given this result of almost absolute accuracy, the other statistical metrics were not relevant. Conclusions: In this study, good results were obtained for the automatic classification of different modalities of X-ray images used in dentistry. The most promising direction for the development of this kind of application is transfer deep learning. Further studies on automatic classification of modalities, as well as sub-modalities, can greatly reduce occasional difficulties arising in this field in the daily practice of the dentist and, eventually, improve the quality of diagnosis and treatment.
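The transfer-learning recipe that makes 496 images sufficient is to freeze a pre-trained feature extractor and fit only a small head on top. A minimal numpy sketch of that division of labor (a fixed random projection stands in for the frozen ImageNet backbone, and the toy 2D data stands in for image features; all of it is illustrative, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)

# "Pre-trained" backbone: frozen weights (a random projection here,
# purely to illustrate the recipe of not updating the feature extractor).
W_frozen = rng.normal(scale=0.5, size=(16, 2))

def backbone(X):
    return np.maximum(X @ W_frozen.T, 0.0)  # frozen relu features

# Small labeled set: two toy classes in 2D.
X0 = rng.normal(loc=[-1.5, 0], size=(80, 2))
X1 = rng.normal(loc=[+1.5, 0], size=(80, 2))
X = np.vstack([X0, X1])
y = np.array([-1.0] * 80 + [1.0] * 80)

# Only the linear head is trained (ridge-regularized least squares),
# so few labeled examples are enough to fit its 16 parameters.
F = backbone(X)
head = np.linalg.solve(F.T @ F + 1e-2 * np.eye(16), F.T @ y)
acc = ((F @ head > 0) == (y > 0)).mean()
```

Because the backbone stays fixed, the number of trainable parameters collapses from millions to the head's few weights, which is why small dental datasets can still yield near-perfect modality classification.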


2019 ◽  
Author(s):  
Yosuke Toda ◽  
Fumio Okura ◽  
Jun Ito ◽  
Satoshi Okada ◽  
Toshinori Kinoshita ◽  
...  

Incorporating deep learning into the image analysis pipeline has opened the possibility of introducing precision phenotyping in the field of agriculture. However, to train the neural network, a sufficient amount of training data must be prepared, which requires a time-consuming manual data annotation process that often becomes the limiting step. Here, we show that an instance segmentation neural network (Mask R-CNN) aimed at phenotyping the barley seed morphology of various cultivars can be sufficiently trained purely on a synthetically generated dataset. Our approach is based on the concept of domain randomization, where a large number of images is generated by randomly orienting the seed objects on a virtual canvas. After training with such a dataset, recall and average precision on the real-world test dataset reached 96% and 95%, respectively. Applying our pipeline enables the extraction of morphological parameters at a large scale, enabling precise characterization of the natural variation of barley from a multivariate perspective. Importantly, we show that our approach is effective not only for barley seeds but also for various crops including rice, lettuce, oat, and wheat, thus supporting the fact that the performance benefits of this technique are generic. We propose that constructing and utilizing such synthetic data can be a powerful method to alleviate the human labor costs needed to prepare training datasets for deep learning in the agricultural domain.
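The domain-randomization step described above, pasting randomly oriented object templates onto a blank canvas so the annotations come for free, can be sketched in a few lines of numpy (the rectangular "seed" mask, canvas size, and 90-degree rotations are simplifying assumptions for illustration; the paper composites real seed images at arbitrary angles):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_synthetic_scene(canvas_size=64, n_seeds=5, seed_shape=(5, 9)):
    """Domain-randomization sketch: drop a binary 'seed' mask onto a blank
    canvas at random positions and random 90-degree orientations. Since we
    place the objects ourselves, the ground truth needs no manual labeling."""
    canvas = np.zeros((canvas_size, canvas_size))
    seed = np.ones(seed_shape)  # crude stand-in for a seed silhouette
    for _ in range(n_seeds):
        obj = np.rot90(seed, k=rng.integers(4))   # random orientation
        h, w = obj.shape
        r = rng.integers(canvas_size - h)          # random position
        c = rng.integers(canvas_size - w)
        canvas[r:r + h, c:c + w] = np.maximum(canvas[r:r + h, c:c + w], obj)
    return canvas

scene = make_synthetic_scene()
```

Generating thousands of such randomized scenes, with per-object masks retained as labels, is what replaces the manual annotation step when training the instance segmentation network.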

