DEEP LEARNING-BASED NUMERICAL DISPERSION MITIGATION IN SEISMIC MODELLING

2021 ◽  
Vol 2 (2) ◽  
pp. 17-25
Author(s):  
Kseniia A. Gadylshina ◽  
Kirill G. Gadylshin ◽  
Vadim V. Lisitsa ◽  
Dmitry M. Vishnevsky

Seismic modelling is the most computationally intensive and time-consuming part of seismic processing and imaging algorithms. Indeed, generation of a typical seismic dataset requires approximately 10 core-hours on a standard CPU-based cluster. Such a high demand for resources is due to the use of fine spatial discretizations to achieve a low level of numerical dispersion (numerical error). This paper presents an original approach to seismic modelling where the wavefields for all sources (right-hand sides) are simulated inaccurately using coarse meshes. A small number of wavefields are generated on computationally intensive fine meshes and then used as a training dataset for a deep learning algorithm, the Numerical Dispersion Mitigation network (NDM-net). Once trained, the NDM-net is applied to suppress the numerical dispersion of the entire seismic dataset.
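
A minimal sketch of how such a dispersion-mitigation network could be set up, assuming a simple residual 1D CNN that maps coarse-mesh seismograms to their fine-mesh counterparts; layer sizes, trace lengths, and training details are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class NDMNet(nn.Module):
    """Illustrative 1D CNN producing a residual correction of a coarse-mesh trace."""
    def __init__(self, channels=32, kernel=9):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel, padding=pad),
        )

    def forward(self, x):          # x: (batch, 1, n_samples)
        return x + self.net(x)     # coarse trace plus learned correction

# Hypothetical training pairs: a few sources simulated on both meshes.
coarse = torch.randn(16, 1, 2048)  # coarse-mesh seismograms (with dispersion)
fine = torch.randn(16, 1, 2048)    # fine-mesh reference seismograms

model = NDMNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(coarse), fine)
    loss.backward()
    opt.step()

# Once trained, the network is applied to the remaining coarse-mesh traces.
```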

2021 ◽  
Vol 13 (9) ◽  
pp. 1779
Author(s):  
Xiaoyan Yin ◽  
Zhiqun Hu ◽  
Jiafeng Zheng ◽  
Boyong Li ◽  
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in the occluded area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, namely the echo intensity at the 0.5° elevation in the unblocked area, and from input features, namely the intensities in a cube spanning multiple elevations and gates corresponding to the location of the bottom labels. Two loss functions are applied to compile the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Considering that the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train different models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are presented to compare the effect of the echo-filling model under the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve the data quality in the occlusion area, with better results for strong echoes when the self-defined loss function is used.
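
A plausible form of such a self-defined loss is a weighted MSE that up-weights pixels whose target reflectivity exceeds a threshold. The sketch below is an assumption about the general shape of this loss, not the paper's exact formula; the threshold and weight values are placeholders.

```python
import torch

def weighted_mse(pred, target, threshold=35.0, strong_weight=4.0):
    """Illustrative self-defined loss: MSE with extra weight on strong echoes.
    `threshold` (dBZ) and `strong_weight` are assumed values, not from the paper."""
    weight = torch.where(target > threshold,
                         torch.full_like(target, strong_weight),
                         torch.ones_like(target))
    return torch.mean(weight * (pred - target) ** 2)

# Usage with hypothetical reflectivity tensors (dBZ):
pred = torch.rand(8, 1, 64) * 60
target = torch.rand(8, 1, 64) * 60
loss = weighted_mse(pred, target)
```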


Author(s):  
Usman Ahmed ◽  
Jerry Chun-Wei Lin ◽  
Gautam Srivastava

Deep learning methods have led to state-of-the-art medical applications such as image classification and segmentation. Data-driven deep learning applications can help stakeholders collaborate. However, the limited amount of labelled data restricts the ability of deep learning algorithms to generalize from one domain to another. To handle this problem, meta-learning helps models learn from a small set of data. We propose a meta-learning-based image segmentation model that combines the learning of a state-of-the-art model and then uses it to achieve domain adaptation and high accuracy. We also propose a preprocessing algorithm to increase the usability of the segmented parts and remove noise from new test images. The proposed model achieves a precision of 0.94 and a recall of 0.92, an improvement of 3.3% over state-of-the-art algorithms.
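
For reference, pixel-wise precision and recall for binary segmentation masks can be computed as in the sketch below; the masks here are random placeholders, not data from the study.

```python
import numpy as np

def precision_recall(pred_mask, true_mask):
    """Pixel-wise precision and recall for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical predicted and ground-truth masks:
pred = np.random.rand(128, 128) > 0.5
true = np.random.rand(128, 128) > 0.5
p, r = precision_recall(pred, true)
```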


GEOMATICA ◽  
2021 ◽  
pp. 1-23
Author(s):  
Roholah Yazdan ◽  
Masood Varshosaz ◽  
Saied Pirasteh ◽  
Fabio Remondino

Automatic detection and recognition of traffic signs from images is an important topic in many applications. At first, we segmented the images using a classification algorithm to delineate the areas where the signs are more likely to be found. In this regard, shadows, objects with similar colours, and extreme illumination changes can significantly affect the segmentation results. We propose a new shape-based algorithm to improve the accuracy of the segmentation. The algorithm works by incorporating the sign geometry to filter out the wrong pixels from the classification results. We performed several tests to compare the performance of our algorithm against those obtained by popular techniques such as Support Vector Machine (SVM), K-Means, and K-Nearest Neighbours. In these tests, to overcome the unwanted illumination effects, the images were transformed into the Hue-Saturation-Intensity (HSI), YUV, normalized RGB, and Gaussian colour spaces. Among the traditional techniques used in this study, the best results were obtained with SVM applied to the images transformed into the Gaussian colour space. The comparison results also suggested that by adding the geometric constraints proposed in this study, the quality of sign image segmentation is improved by 10%–25%. We also compared the SVM classifier enhanced by incorporating the geometry of signs with a U-shaped deep learning algorithm. Results suggested that the performance of both techniques is very close. The deep learning results could perhaps be improved if a more comprehensive dataset were provided.
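
One common way to apply such a geometric constraint is to keep only connected regions whose area and circularity are consistent with a sign shape. The sketch below is a simplified assumption (circular signs only, arbitrary thresholds) and is not the paper's algorithm, which handles the full range of sign geometries.

```python
import cv2
import numpy as np

def filter_by_shape(binary_mask, min_area=200, min_circularity=0.6):
    """Illustrative geometric filter: keep only blobs whose size and circularity
    are compatible with a circular traffic sign. Thresholds are assumed values."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    keep = np.zeros_like(binary_mask, dtype=np.uint8)
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area < min_area or perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / (perimeter ** 2)
        if circularity >= min_circularity:
            cv2.drawContours(keep, [c], -1, 1, thickness=-1)  # keep this blob
    return keep

# Usage with a hypothetical classifier output (binary mask of candidate pixels):
mask = (np.random.rand(480, 640) > 0.99).astype(np.uint8)
cleaned = filter_by_shape(mask)
```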


CONVERTER ◽  
2021 ◽  
pp. 598-605
Author(s):  
Zhao Jianchao

Behind the rapid development of the Internet industry, Internet security has become a hidden danger. In recent years, the outstanding performance of deep learning in classification and behaviour prediction based on massive data has prompted research into how to apply deep learning technology. Therefore, this paper attempts to apply deep learning to intrusion detection to learn and classify network attacks. Using the NSL-KDD dataset, the paper first applies traditional classification methods and several different deep learning algorithms. It then analyses in depth the relationships among the datasets, the characteristics of the algorithms, and the experimental classification results, and identifies which deep learning algorithm performs relatively well. Finally, a normalized coding algorithm is proposed. The experimental results show that the algorithm can improve the detection accuracy and reduce the false alarm rate.
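
The paper does not spell out the "normalized coding" scheme; a typical preprocessing of NSL-KDD-style records, shown below as an assumption, is to one-hot encode the categorical fields and min-max normalise the numeric ones. The toy records are placeholders.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# A few hypothetical NSL-KDD-style records:
df = pd.DataFrame({
    "duration": [0, 2, 45],
    "protocol_type": ["tcp", "udp", "tcp"],
    "service": ["http", "domain_u", "ftp"],
    "flag": ["SF", "SF", "REJ"],
    "src_bytes": [181, 239, 5450],
})

categorical = ["protocol_type", "service", "flag"]
numeric = [c for c in df.columns if c not in categorical]

encoded = pd.get_dummies(df, columns=categorical)                   # one-hot encoding
encoded[numeric] = MinMaxScaler().fit_transform(encoded[numeric])   # scale numerics to [0, 1]
```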


2021 ◽  
Author(s):  
Sidhant Idgunji ◽  
Madison Ho ◽  
Jonathan L. Payne ◽  
Daniel Lehrmann ◽  
Michele Morsilli ◽  
...  

The growing digitization of fossil images has vastly improved and broadened the potential application of big data and machine learning, particularly computer vision, in paleontology. Recent studies show that machine learning is capable of approaching human abilities of classifying images, and with the increase in computational power and visual data, it stands to reason that it can match human ability but at much greater efficiency in the near future. Here we demonstrate this potential of using deep learning to identify skeletal grains at different levels of the Linnaean taxonomic hierarchy. Our approach was two-pronged. First, we built a database of skeletal grain images spanning a wide range of animal phyla and classes and used this database to train the model. We used a Python-based method to automate image recognition and extraction from published sources. Second, we developed a deep learning algorithm that can attach multiple labels to a single image. Conventionally, deep learning is used to predict a single class from an image; here, we adopted a Branch Convolutional Neural Network (B-CNN) technique to classify multiple taxonomic levels for a single skeletal grain image. Using this method, we achieved over 90% accuracy for both the coarse, phylum-level recognition and the fine, class-level recognition across diverse skeletal grains (6 phyla and 15 classes). Furthermore, we found that image augmentation improves the overall accuracy. This tool has potential applications in geology ranging from biostratigraphy to paleo-bathymetry, paleoecology, and microfacies analysis. Further improvement of the algorithm and expansion of the training dataset will continue to narrow the efficiency gap between human expertise and machine learning.
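
A minimal sketch of a multi-output classifier in this spirit: one shared convolutional trunk with two heads, one for the coarse (phylum) label and one for the fine (class) label, trained on the sum of the two cross-entropy losses. The label counts (6 phyla, 15 classes) follow the abstract; all layer sizes and the loss weighting are assumptions, and this is not the B-CNN architecture itself.

```python
import torch
import torch.nn as nn

class TwoLevelCNN(nn.Module):
    """Illustrative two-headed CNN for coarse (phylum) and fine (class) labels."""
    def __init__(self, n_phyla=6, n_classes=15):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.phylum_head = nn.Linear(32, n_phyla)
        self.class_head = nn.Linear(32, n_classes)

    def forward(self, x):
        features = self.trunk(x)
        return self.phylum_head(features), self.class_head(features)

# Joint loss over both taxonomic levels, with placeholder images and labels:
model = TwoLevelCNN()
images = torch.randn(4, 3, 128, 128)
phyla, classes = torch.randint(0, 6, (4,)), torch.randint(0, 15, (4,))
p_logits, c_logits = model(images)
loss = nn.functional.cross_entropy(p_logits, phyla) + \
       nn.functional.cross_entropy(c_logits, classes)
```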


2021 ◽  
pp. bjophthalmol-2020-318107
Author(s):  
Kenichi Nakahara ◽  
Ryo Asaoka ◽  
Masaki Tanito ◽  
Naoto Shibata ◽  
Keita Mitsuhashi ◽  
...  

Background/aims: To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone. Methods: A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was evaluated by its prediction of glaucoma or normal over the test datasets, using images from both an ordinary fundus camera and a smartphone. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). Results: The AROC was 98.9% with a fundus camera and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < −12 dB, N=26), the AROC was 99.3% with a fundus camera and 90.0% with a smartphone. There were significant differences between these AROC values obtained with the different cameras. Conclusion: The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had a considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
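
For reference, the AROC metric reported here is the standard area under the ROC curve; a minimal sketch of how it is computed from binary labels and predicted scores is shown below, with the test-set sizes from the abstract and placeholder predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Labels: 1 for glaucoma, 0 for normal; scores: predicted probability of glaucoma.
labels = np.array([1] * 73 + [0] * 89)   # 73 glaucoma eyes, 89 normative eyes
scores = np.random.rand(len(labels))     # placeholder network outputs
aroc = roc_auc_score(labels, scores)
print(f"AROC = {aroc:.3f}")
```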


2021 ◽  
Vol 8 ◽  
Author(s):  
Olle Holmberg ◽  
Tobias Lenz ◽  
Valentin Koch ◽  
Aseel Alyagoob ◽  
Léa Utsch ◽  
...  

Background: Optical coherence tomography is a powerful modality to assess atherosclerotic lesions, but detecting lesions in high-resolution OCT is challenging and requires expert knowledge. Deep-learning algorithms can be used to automatically identify atherosclerotic lesions, facilitating identification of patients at risk. We trained a deep-learning algorithm (DeepAD) with co-registered, annotated histopathology to predict atherosclerotic lesions in optical coherence tomography (OCT). Methods: Two datasets were used for training DeepAD: (i) a histopathology dataset from 7 autopsy cases with 62 OCT frames and co-registered histopathology for high-quality manual annotation and (ii) a clinical dataset from 51 patients with 222 OCT frames in which manual annotations were based on clinical expertise only. A U-net based deep convolutional neural network (CNN) ensemble was employed as an atherosclerotic lesion prediction algorithm. Results were analyzed using intersection over union (IOU) for segmentation. Results: DeepAD showed good performance regarding the prediction of atherosclerotic lesions, with a median IOU of 0.68 ± 0.18 for segmentation of atherosclerotic lesions. Detection of calcified lesions yielded an IOU = 0.34. When training the algorithm without histopathology-based annotations, a performance drop of >0.25 IOU was observed. The practical application of DeepAD was evaluated retrospectively in a clinical cohort (n = 11 cases), showing high sensitivity as well as specificity and similar performance when compared to manual expert analysis. Conclusion: Automated detection of atherosclerotic lesions in OCT is improved using a histopathology-based deep-learning algorithm, allowing accurate detection in the clinical setting. An automated decision-support tool based on DeepAD could help in risk prediction and guide interventional treatment decisions.
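
The IOU reported here is the usual intersection-over-union of the predicted and annotated lesion masks; a minimal sketch of the computation, with placeholder masks rather than OCT data, is shown below.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over union for binary lesion masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, true).sum() / union

# Hypothetical predicted and annotated masks for one OCT frame:
pred = np.random.rand(256, 256) > 0.5
true = np.random.rand(256, 256) > 0.5
print(iou(pred, true))
```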


2019 ◽  
Vol 28 (12) ◽  
pp. 1950153 ◽  
Author(s):  
Jing Tan ◽  
Chong-Bin Chen

We use a deep learning algorithm to learn the Reissner–Nordström (RN) black hole metric by building a deep neural network. A large amount of data is specified on the AdS boundary and propagated to the black hole horizon through the AdS metric and the equation of motion (e.o.m.). We label these data according to their values near the horizon, and together with the initial data they constitute a dataset. We then construct the corresponding deep neural network and train it on this dataset to obtain the Reissner–Nordström (RN) black hole metric. Finally, we discuss the effects of the learning rate, batch size, and initialization on the training process.
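
For reference, a textbook form of the asymptotically AdS Reissner–Nordström metric that such a network is trained to reproduce is given below; the conventions and normalisations used in the paper may differ.

```latex
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2,
\qquad
f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2} + \frac{r^2}{L^2},
```

where $M$ is the mass, $Q$ the charge, and $L$ the AdS radius; the horizon radius is the largest root of $f(r) = 0$.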


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1140
Author(s):  
Jeong-Hee Lee ◽  
Jongseok Kang ◽  
We Shim ◽  
Hyun-Sang Chung ◽  
Tae-Eung Sung

Building a pattern detection model using a deep learning algorithm for data collected from manufacturing sites is an effective way for enterprises to perform decision-making and assess business feasibility, by providing the results and implications of pattern analysis of the big data generated at those sites. Identifying the threshold of an abnormal pattern requires collaboration between data analysts and manufacturing process experts, but this is practically difficult and time-consuming. This paper suggests how to derive the threshold setting of the abnormal pattern without manual labelling by process experts, and offers an algorithm to predict potential future failures in advance using a hybrid Convolutional Neural Network (CNN)–Long Short-Term Memory (LSTM) algorithm and the Fast Fourier Transform (FFT) technique. We found that, after preprocessing the dataset with the FFT, it is easier to detect abnormal patterns that cannot be found in the time domain. Our study shows that both training loss and test loss converged to near zero, with the lowest loss rate compared to existing models such as LSTM. Our proposed model and data preprocessing method greatly help in understanding the abnormal patterns of unlabelled big data produced at manufacturing sites, and can be a strong foundation for detecting the thresholds of abnormal patterns in big data occurring at manufacturing sites.
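
A minimal sketch of the overall pipeline shape, assuming a magnitude spectrum from the FFT is fed into a small CNN-LSTM that outputs a failure probability; window length, layer sizes, and the output interpretation are assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

# Illustrative preprocessing: move one sensor window into the frequency domain.
signal = np.random.randn(512)                        # hypothetical sensor window
spectrum = np.abs(np.fft.rfft(signal))               # magnitude spectrum (257 bins)
x = torch.tensor(spectrum, dtype=torch.float32).reshape(1, 1, -1)

class CNNLSTM(nn.Module):
    """Illustrative hybrid CNN-LSTM over FFT-preprocessed input."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)                  # probability of a future failure

    def forward(self, x):                             # x: (batch, 1, n_bins)
        h = self.conv(x).transpose(1, 2)              # (batch, n_bins, 8)
        out, _ = self.lstm(h)
        return torch.sigmoid(self.head(out[:, -1]))

model = CNNLSTM()
print(model(x))
```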


2021 ◽  
Vol 36 (1) ◽  
pp. 698-703
Author(s):  
Krushitha Reddy ◽  
D. Jenila Rani

Aim: The aim of this research work is to determine the presence of hyperthyroidism using modern algorithms and to compare the accuracy rate between deep learning algorithms and in vivo monitoring. Materials and methods: A data collection containing ultrasound images from Kaggle's website was used in this research. Samples were taken as N=23 for the deep learning algorithm and N=23 for in vivo monitoring, in accordance with the total sample size calculated using clinical.com. The accuracy was calculated by using DPLA with a standard dataset. Results: The comparison of accuracy rates was done by an independent-samples test using SPSS software. The difference between the deep learning algorithm and in vivo monitoring was not statistically significant. The deep learning algorithm (87.89%) showed better results in comparison to in vivo monitoring (83.32%). Conclusion: Deep learning algorithms appear to give better accuracy than in vivo monitoring in predicting hyperthyroidism.
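
The independent-samples comparison described here is commonly run as a two-sample t-test; the sketch below uses the group sizes from the abstract (N=23 each) with randomly generated placeholder accuracies, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder per-run accuracy values (%) for each group of N = 23:
deep_learning = np.random.normal(87.89, 2.0, 23)
in_vivo = np.random.normal(83.32, 2.0, 23)

t_stat, p_value = stats.ttest_ind(deep_learning, in_vivo)
print(t_stat, p_value)
```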

