Application of deep learning for seismic horizon interpretation

2019, Vol. 59 (1), pp. 426
Author(s): James Lowell, Jacob Smith

The interpretation of key horizons on seismic data is an essential but time-consuming part of the subsurface workflow. This is compounded when surfaces need to be re-interpreted on variations of the same data, such as angle stacks, 4D data, or reprocessed data. Deep learning networks, a subset of machine learning, have the potential to automate this reinterpretation process and significantly increase the efficiency of the subsurface workflow. This study investigates whether a deep learning network can learn from a single horizon interpretation in order to identify that event in a different version of the same data. The results were largely successful: the target horizon was correctly identified in an alternative offset stack and correctly repositioned in areas where the training data and the test data were misaligned.
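
The abstract does not disclose the network architecture. As a rough illustration of the general approach, the sketch below trains a small patch-based binary CNN on amplitude windows cut around the picked horizon and reapplies it to an alternative stack; every name, shape, and hyperparameter here is an assumption, not the authors' method:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH = 32  # assumed patch size around the picked horizon

def build_patch_classifier():
    # Small binary CNN: does this amplitude patch contain the target horizon?
    return models.Sequential([
        layers.Input(shape=(PATCH, PATCH, 1)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

model = build_patch_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train: patches from the interpreted stack; y_train: 1 = horizon present.
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
# At inference, the same model scans an alternative offset stack and the
# horizon is re-picked at the depth of maximum predicted probability per trace.
```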

2021, Vol. 11 (1), pp. 339-348
Author(s): Piotr Bojarczak, Piotr Lesiak

Abstract The article uses images from Unmanned Aerial Vehicles (UAVs) for rail diagnostics. The main advantage of this solution over traditional surveys performed with measuring vehicles is that it does not disrupt train traffic. The study is limited to the diagnosis of hazardous split defects in rails. An algorithm is proposed that detects them with an efficiency of about 81% for defects no smaller than 6.9% of the rail head width. It uses the FCN-8 deep learning network, implemented in the TensorFlow environment, to extract the rail head by image segmentation. Using this type of network for segmentation increases the algorithm's robustness to changes in the brightness of the recorded rail image, which is of fundamental importance under the variable conditions of image recording by UAVs. The defects are then detected in the rail head by an algorithm written in Python with the OpenCV library. To locate a defect, it uses the contour of the separated rail head together with a rectangle circumscribed around it. The use of UAVs together with artificial intelligence to detect split defects is an important element of novelty in this work.
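
The OpenCV post-processing stage described above (head contour plus circumscribed rectangle) could look roughly like the following sketch; the file path, threshold, and the way the 6.9%-width criterion is applied are assumptions:

```python
import cv2
import numpy as np

# mask: binary rail-head segmentation produced by the FCN-8 network
# (here loaded from disk; the path is an assumption).
mask = cv2.imread("rail_head_mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# The largest contour is taken to be the rail head.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
head = max(contours, key=cv2.contourArea)

# Rectangle circumscribed around the head, as in the paper.
x, y, w, h = cv2.boundingRect(head)

# Column-wise head width inside the rectangle; a local narrowing by at
# least 6.9% of the nominal width (our reading of the criterion) flags
# a possible split defect and gives its location.
widths = binary[y:y + h, x:x + w].sum(axis=0) / 255
nominal = np.median(widths)
defect_cols = np.where(widths < 0.931 * nominal)[0]
print("suspect columns (relative to head):", defect_cols)
```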


2016, Vol. 14 (03), pp. 1642002
Author(s): Bahar Akbal-Delibas, Roshanak Farhoodi, Marc Pomplun, Nurit Haspel

One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate, and their results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein–protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex with lower root-mean-square deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with a 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35,000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes, also produced by RosettaDock, for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose refinement candidates with lower RMSD values than the coarsely docked input structures.
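
As an illustration of the kind of regressor described, here is a minimal five-layer network mapping scoring-function terms to a predicted RMSD; the feature count, layer widths, and training settings are assumptions, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES = 40  # number of scoring-function terms; the exact count is assumed

# Five-layer fully connected regressor: scoring terms -> predicted RMSD (Å).
model = models.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),  # predicted RMSD
])
model.compile(optimizer="adam", loss="mae")  # MAE matches the reported Å error margin

# X: (n_decoys, N_FEATURES) scoring terms from RosettaDock decoys
# y: RMSD of each decoy with respect to the native structure
# model.fit(X, y, epochs=50, batch_size=256, validation_split=0.1)
```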


Author(s): Zainab Mushtaq

Abstract: Malware is routinely used for illegal purposes, and new malware variants are discovered every day. Computer vision in computer security is one of the most significant research disciplines today, and it has witnessed tremendous growth in the preceding decade owing to its efficacy. We employed machine-learning and deep-learning techniques such as Logistic Regression, ANN, CNN, transfer learning on CNN, and LSTM to arrive at our conclusions. We present analysis-based results from a range of classification models in the literature. InceptionV3 was trained using a transfer learning technique, which yielded reasonable results when compared with other methods such as LSTM. The transfer learning approach was about 98.76 percent accurate on the test dataset and around 99.6 percent accurate on the training dataset. Keywords: Malware, illegal activity, deep learning, network security
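
A minimal sketch of the InceptionV3 transfer learning setup the abstract describes might look like this in Keras; the class count, input pipeline, and classification head are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 25  # number of malware families; an assumption

# ImageNet-pretrained InceptionV3 as a frozen feature extractor;
# only the new classification head is trained on the malware images.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds: datasets of malware images rendered as 299x299 RGB.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```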


Author(s): Vijayarajan Rajangam, Sangeetha N., Karthik R., Kethepalli Mallikarjuna

Multimodal imaging systems assist medical practitioners with cost-effective diagnostic methods in clinical pathologies. Multimodal imaging of the same organ or region of interest reveals complementary anatomical and functional details. Multimodal image fusion algorithms integrate these complementary details into a composite image, reducing the clinician's time for effective diagnosis. Deep learning networks play a role in feature extraction for the fusion of multimodal images. This chapter analyzes the performance of a pre-trained VGG19 deep learning network that extracts features from the base and detail layers of the source images to construct a weight map for fusing the source image details. Maximum and averaging fusion rules are adopted for base layer fusion. The performance of the fusion algorithm for multimodal medical image fusion is analyzed by peak signal-to-noise ratio, structural similarity index, fusion factor, and figure of merit. Performance analysis of the fusion algorithms is also carried out for source images corrupted by impulse and Gaussian noise.
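
The following sketch illustrates the two-scale base/detail fusion scheme with the maximum and averaging base-layer rules; note that the chapter derives the detail-layer weight map from VGG19 features, whereas this simplified stand-in uses local detail activity instead:

```python
import cv2
import numpy as np

def decompose(img, ksize=31):
    # Two-scale decomposition: base = low-pass (box filter), detail = residual.
    base = cv2.blur(img.astype(np.float32), (ksize, ksize))
    return base, img.astype(np.float32) - base

def fuse(img1, img2, base_rule="average"):
    b1, d1 = decompose(img1)
    b2, d2 = decompose(img2)

    # Base-layer fusion: averaging or maximum rule, as in the chapter.
    if base_rule == "average":
        base = 0.5 * (b1 + b2)
    else:
        base = np.maximum(b1, b2)

    # Detail-layer fusion: the chapter builds the weight map from VGG19
    # feature activations; here local activity |detail| stands in as a
    # simplified proxy for those deep features.
    a1, a2 = np.abs(d1), np.abs(d2)
    w = a1 / (a1 + a2 + 1e-8)
    detail = w * d1 + (1 - w) * d2

    return np.clip(base + detail, 0, 255).astype(np.uint8)

# Hypothetical usage with two registered modalities of the same region:
# fused = fuse(cv2.imread("mri.png", 0), cv2.imread("pet.png", 0))
```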


2019, Vol. ahead-of-print (ahead-of-print)
Author(s): Hao Cao, Rong Mo, Neng Wan

Purpose The proposed method generates the 3D model of frame assemblies automatically from their topological model. This is a demanding task for which no appropriate automated method previously existed. Design/methodology/approach The proposed method includes two stages. The first stage is decisive: a deep learning network and the Chu–Liu–Edmonds algorithm are used to recognize contact relations among parts. Based on this recognition, the authors perform a geometrical computation in the second stage to finalize the 3D model. Findings The authors verify the feasibility of the proposed method in a case study and find that the classification rate of the deep learning network for part contact relations is higher than 75 per cent. Furthermore, more accurate results can be achieved after modification by the Chu–Liu–Edmonds algorithm. The proposed method has lower computational complexity than traditional heuristic methods, and its results are more consistent with existing designs. Research limitations/implications The paper introduces machine learning to the assembly modelling problem. The proposed method divides assembly modelling into two steps and solves the assembly-relation problem creatively. Practical implications Frame assemblies are fundamental to many areas. The proposed method can automate frame assembly modelling in a viable way and could benefit the design and manufacturing process significantly. Originality/value The proposed method expands the application of machine learning into a new field and would be more useful in industry than plain machine learning. It improves on general heuristic algorithms: it outputs identical results when the inputs are the same, and its worst-case algorithmic complexity is lower.
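
The Chu–Liu–Edmonds step can be illustrated with networkx, which implements the algorithm as a maximum spanning arborescence: given the classifier's pairwise contact confidences, it keeps one consistent parent relation per part and prunes contradictory low-confidence edges. The parts and scores below are invented for illustration:

```python
import networkx as nx

# Hypothetical output of the contact-relation classifier: for each ordered
# part pair, a confidence that part u supports/contacts part v.
scores = {
    ("frame", "beam_1"): 0.92,
    ("frame", "beam_2"): 0.88,
    ("beam_1", "bracket"): 0.75,
    ("beam_2", "bracket"): 0.40,
}

G = nx.DiGraph()
for (u, v), p in scores.items():
    G.add_edge(u, v, weight=p)

# Chu–Liu–Edmonds: the maximum-weight spanning arborescence retains a single
# consistent contact tree; the weaker beam_2 -> bracket edge is discarded.
arborescence = nx.maximum_spanning_arborescence(G)
print(sorted(arborescence.edges(data="weight")))
```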


2020, Vol. 10 (18), pp. 6502
Author(s): Shinjin Kang, Jong-in Choi

On the game screen, the UI provides key information for gameplay. A vision-based deep learning network exploits pure pixel information from the screen; if we separately extract the information provided by the UI and feed it as an additional input, we can enhance the learning efficiency of deep learning networks. To this end, UI components such as buttons, image icons, and gauge bars on the game screen must be segmented effectively so that only the relevant images can be analyzed separately. In this paper, we propose a methodology that segments UI components in a game by using synthetic game images created with a game engine. We developed a tool that approximately detects the UI areas of the game screen and used it to generate a large amount of synthetic labeled data. By training a Pix2Pix network on this data, we applied UI segmentation. The network trained in this way can segment the UI areas of the target game regardless of the position of the corresponding UI components. Our methodology helps analyze the game screen without applying data augmentation to it, and can assist vision researchers who need to extract semantic information from game image data.
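
Because the game engine knows every UI component's bounding box, paired training data for Pix2Pix can be painted exactly. A minimal sketch of such synthetic label generation, where component classes, colours, and layouts are all assumptions, is:

```python
import numpy as np

# One label colour per UI component class (assumed palette).
CLASS_COLOURS = {"button": (255, 0, 0), "icon": (0, 255, 0), "gauge": (0, 0, 255)}

def make_label(screen_hw, components):
    # components: list of (class_name, x, y, w, h) taken from the
    # engine's UI tree, so the label mask is pixel-exact.
    h, w = screen_hw
    label = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, x, y, cw, ch in components:
        label[y:y + ch, x:x + cw] = CLASS_COLOURS[cls]
    return label

# Each (screenshot, label) pair then trains a Pix2Pix generator so that,
# at test time, real screenshots are translated into segmentation maps.
ui = [("button", 20, 400, 80, 30), ("gauge", 10, 10, 200, 16)]
mask = make_label((480, 640), ui)
```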


Author(s): Ashwan A. Abdulmunem, Zinah Abdulridha Abutiheen, Hiba J. Aleqabie

Coronavirus disease (COVID-19) has had an incredible influence over the last few months, causing thousands of deaths around the world and prompting a rapid research response to the new virus. In computer science, many technical studies have been conducted to tackle it using image processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from x-ray images. Our results are encouraging for distinguishing infected patients from normal ones. We conduct our experiments on a recent dataset, the Kaggle COVID-19 X-ray image dataset, using the ResNet50 deep learning network with 5- and 10-fold cross-validation. The experimental results show that 5 folds give more effective results than 10 folds, with an accuracy rate of 97.28%.
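
A sketch of the ResNet50 k-fold evaluation described above, using scikit-learn for the folds and Keras for the model; the epoch count, classification head, and preprocessing are assumptions:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_model():
    # ResNet50 backbone with a binary head (COVID-19 vs. normal).
    base = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3))
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])

def cross_validate(X, y, n_splits=5):
    # X: (N, 224, 224, 3) x-ray images; y: (N,) binary labels.
    accs = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        accs.append(acc)
    return np.mean(accs)

# print("5-fold accuracy:", cross_validate(X, y, 5))
# print("10-fold accuracy:", cross_validate(X, y, 10))
```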


2020, Vol. 145, pp. 104609
Author(s): Shulin Pan, Kai Chen, Jingyi Chen, Ziyu Qin, Qinghui Cui, ...

2020, Vol. 14, pp. 174830262097352
Author(s): Anis Theljani, Ke Chen

Different from image segmentation, developing a deep learning network for image registration is less straightforward, because training data cannot be prepared or supervised by humans unless it is trivial (e.g. pre-designed affine transforms). One approach for an unsupervised deep learning model is to self-train the deformation fields by a network based on a loss function with an image similarity metric and a regularisation term, just as in traditional variational methods. Such a function consists of a smoothing constraint on the derivatives and a constraint on the determinant of the transformation, in order to obtain a spatially smooth and plausible solution. Although any variational model may be used with a deep learning algorithm, the challenge lies in achieving robustness. The proposed algorithm is first trained on a new and robust variational model and tested on synthetic and real mono-modal images. The results show that it handles large-deformation registration problems and leads to a real-time solution with no folding. It is then generalised to multi-modal images. Experiments and comparisons with learning and non-learning models demonstrate that this approach can deliver good performance while generating an accurate diffeomorphic transformation.
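
A loss of the kind described, combining a similarity metric, a derivative-smoothing term, and a determinant constraint that discourages folding, might be sketched in PyTorch as follows; the weights and the exact penalty form are assumptions rather than the paper's variational model:

```python
import torch
import torch.nn.functional as F

def registration_loss(moved, fixed, flow, lam=0.01, mu=0.1):
    """Unsupervised registration loss: image similarity + smoothing of the
    displacement derivatives + a Jacobian-determinant penalty against
    folding. The weights lam and mu are assumed, not from the paper."""
    # Similarity term: mean squared intensity difference (mono-modal case).
    sim = F.mse_loss(moved, fixed)

    # Smoothness term: squared finite differences of the displacement
    # field flow, shaped (B, 2, H, W).
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    smooth = (dx ** 2).mean() + (dy ** 2).mean()

    # Determinant term: the transform is x + u(x), so its Jacobian is
    # I + grad(u); penalise wherever det falls below zero (folding).
    ux_x = flow[:, 0, :-1, 1:] - flow[:, 0, :-1, :-1]
    uy_x = flow[:, 1, :-1, 1:] - flow[:, 1, :-1, :-1]
    ux_y = flow[:, 0, 1:, :-1] - flow[:, 0, :-1, :-1]
    uy_y = flow[:, 1, 1:, :-1] - flow[:, 1, :-1, :-1]
    det = (1 + ux_x) * (1 + uy_y) - uy_x * ux_y
    fold = F.relu(-det).mean()  # positive only where the map folds

    return sim + lam * smooth + mu * fold
```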

