Tracking 3D seismic horizons with a new hybrid tracking algorithm

2020
Vol 8 (4)
pp. SQ39-SQ45
Author(s):
Rahul Gogia
Raman Singh
Paul de Groot
Harshit Gupta
Seshan Srirangarajan
...  

We have developed a new algorithm for tracking 3D seismic horizons. The algorithm combines an inversion-based, seismic-dip flattening technique with conventional, similarity-based autotracking. The inversion part of the algorithm aims to minimize the error between horizon dips and computed seismic dips. After each cycle in the inversion loop, more seeds are added to the horizon by the similarity-based autotracker. In the example data set, the algorithm is first used to quickly track a set of framework horizons, each guided by a small set of user-picked seed positions. Next, the intervals bounded by the framework horizons are infilled in a similar manner, under the supervision of a human interpreter, to generate a dense set of horizons known as a HorizonCube. The results show that the algorithm performs better than unconstrained flattening techniques in intervals with trackable events. Inversion-based algorithms generate continuous horizons with no holes to be filled posttracking with a gridding algorithm and no loop skips (jumps to the wrong event) that need to be edited, as is standard practice with autotrackers. Because editing is a time-consuming process, creating horizons with inversion-based algorithms tends to be faster than conventional autotracking. Horizons created with the hybrid algorithm follow seismic events more closely than horizons generated with the inversion-only algorithm, and the fault crossings are sharper.
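The alternation the abstract describes (dip inversion, then similarity-based seed growth) can be sketched along a single seismic line as follows. This is a minimal illustration, not the authors' implementation: `dips` is assumed to come from a precomputed seismic-dip attribute, `data` is an amplitude section, and all names and weights are hypothetical.

```python
import numpy as np

def dip_inversion_cycle(dips, seeds, n_traces, w_seed=1e3):
    """One inversion cycle along a single line: find horizon depths z whose
    trace-to-trace differences best match the computed seismic dips, while
    honoring the current seeds via heavily weighted equations."""
    n_eq = (n_traces - 1) + len(seeds)
    A = np.zeros((n_eq, n_traces))
    b = np.zeros(n_eq)
    for i in range(n_traces - 1):               # dip equations: z[i+1] - z[i] = dip[i]
        A[i, i], A[i, i + 1] = -1.0, 1.0
        b[i] = dips[i]
    for j, (k, zk) in enumerate(seeds.items()):  # seed equations: z[k] = picked depth
        r = (n_traces - 1) + j
        A[r, k] = w_seed
        b[r] = w_seed * zk
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

def grow_seeds(data, z, seeds, win=5, min_corr=0.9):
    """Similarity-based autotracking step: promote neighbors of existing
    seeds whose waveform around the current horizon correlates strongly."""
    def window(trace, depth):
        i = int(round(depth))
        i = min(max(i, win), data.shape[1] - win - 1)  # clamp to valid samples
        return data[trace, i - win:i + win + 1]
    for k in list(seeds):
        for nb in (k - 1, k + 1):
            if 0 <= nb < len(z) and nb not in seeds:
                c = np.corrcoef(window(k, seeds[k]), window(nb, z[nb]))[0, 1]
                if c > min_corr:
                    seeds[nb] = z[nb]
    return seeds

# Alternate the two steps, as in the hybrid loop described above:
# for _ in range(n_cycles):
#     z = dip_inversion_cycle(dips, seeds, n_traces)
#     seeds = grow_seeds(amplitudes, z, seeds)
```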

Geophysics
2019
Vol 84 (6)
pp. KS173-KS182
Author(s):
Andrew Poulin
Ron Weir
David Eaton
Nadine Igonin
Yukuan Chen
...  

Focal-time analysis is a straightforward data-driven method to obtain robust stratigraphic depth control for microseismicity or induced seismic events. The method eliminates the necessity to build an explicit, calibrated velocity model for hypocenter depth estimation, although it requires multicomponent 3D seismic data that are colocated with surface or near-surface microseismic observations. Event focal depths are initially expressed in terms of zero-offset focal time (two-way P-P reflection time) to facilitate registration and visualization with 3D seismic data. Application of the focal-time method requires (1) high-quality P- and S-wave time picks, which are extrapolated to zero offset, and (2) registration of correlative P-P and P-S reflections to provide [Formula: see text] and [Formula: see text] time-depth control. We determine the utility of this method by applying it to a microseismic and induced-seismicity data set recorded with a shallow-borehole monitoring array in Alberta, Canada, combined with high-quality multicomponent surface seismic data. The calculated depth distribution of events is in good agreement with hypocenter locations obtained independently using a nonlinear global-search method. Our results reveal that individual event clusters have distinct depth distributions that can provide important clues about the mechanisms of fault activation.
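Under the simplifying assumption of an effective homogeneous medium above the event, the zero-offset focal time follows directly from the extrapolated zero-offset P and S times and a Vp/Vs ratio obtained from P-P/P-S registration. A minimal sketch (names are illustrative, not from the paper):

```python
def focal_time(t_p0, t_s0, vp_vs):
    """Zero-offset two-way P-P focal time from zero-offset P and S pick
    times, assuming an effective homogeneous medium above the event:
    with one-way times t_p0 = z/Vp and t_s0 = z/Vs, the difference is
    t_s0 - t_p0 = z*(1/Vs - 1/Vp), so the two-way P time 2*z/Vp equals
    2*(t_s0 - t_p0)/(Vp/Vs - 1)."""
    return 2.0 * (t_s0 - t_p0) / (vp_vs - 1.0)

# Example: t_p0 = 0.80 s, t_s0 = 1.40 s, Vp/Vs = 1.75
# -> focal time = 2 * 0.60 / 0.75 = 1.60 s, so the event plots at
#    1.6 s two-way time in the migrated P-P volume.
```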


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, the estimation of the probability density function and the cumulative distribution function of the generalized inverted exponential distribution is considered with five different estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, a real data set is analyzed to illustrate the results.
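For reference, a minimal sketch of the density and distribution functions and a numerical ML fit, assuming the common parameterization F(x) = 1 − (1 − e^(−λ/x))^α for x > 0; the paper's UMVU, LS, WLS, and PC estimators are not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

def gie_pdf(x, alpha, lam):
    """Density of the generalized inverted exponential distribution."""
    u = np.exp(-lam / x)
    return (alpha * lam / x**2) * u * (1.0 - u) ** (alpha - 1.0)

def gie_cdf(x, alpha, lam):
    """Distribution function: F(x) = 1 - (1 - exp(-lam/x))**alpha."""
    return 1.0 - (1.0 - np.exp(-lam / x)) ** alpha

def fit_ml(x):
    """ML estimates of (alpha, lam) by direct minimization of the
    negative log-likelihood; log-parameters keep both positive."""
    nll = lambda p: -np.sum(np.log(gie_pdf(x, np.exp(p[0]), np.exp(p[1]))))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)  # (alpha_hat, lam_hat)
```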


2020
Vol 27 (4)
pp. 329-336
Author(s):
Lei Xu
Guangmin Liang
Baowen Chen
Xu Tan
Huaikun Xiang
...  

Background: Cell lytic enzymes are highly evolved proteins that can destroy the cell structure and kill bacteria. Unlike antibiotics, cell lytic enzymes do not cause the serious problem of drug resistance in pathogenic bacteria, which makes them a good choice for curing bacterial infections; the study of cell wall lytic enzymes therefore aims at finding an efficient way to cure such infections. Cell lytic enzymes include endolysins and autolysins, which differ in the purpose for which they break the cell wall, so identifying the type of a cell lytic enzyme is meaningful for the study of cell wall enzymes. Objective: Our motivation is to predict the type of a cell lytic enzyme. Because detecting the type experimentally is time consuming, we propose an efficient computational method for this prediction task. Method: We propose a computational method for discriminating endolysins from autolysins. First, a data set containing 27 endolysins and 41 autolysins is built. Each protein is then represented by its tripeptide composition, and the features with the largest confidence degree are selected. Finally, a support vector machine (SVM) classifier is trained on the labeled feature vectors and used to predict the type of a cell lytic enzyme. Results: The experimental results show that the overall accuracy reaches 97.06% when 44 features are selected, improving on Ding's method by nearly 4.5% ((97.06 − 92.9)/92.9). The performance of the proposed method is stable when the number of selected features ranges from 40 to 70. The overall accuracy with the tripeptide optimal feature set is 94.12%, compared with 76.2% for Chou's amphiphilic PseAAC method, an improvement of nearly 18%. Conclusion: The paper proposes an efficient SVM-based method for identifying endolysins and autolysins. With the tripeptide optimal feature set, the overall accuracy is 94.12%, which is better than some existing methods, and the selected 44 features further improve the overall accuracy for identifying the type of a cell lytic enzyme. The SVM performs better than other classifiers when using the selected feature set on the benchmark data set.
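A sketch of the feature extraction and classifier described in the Method section, using scikit-learn; the confidence-degree feature selection is not specified in the abstract, so `selected_44_features` below is a hypothetical placeholder:

```python
import itertools
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
TRIPEPTIDES = ["".join(t) for t in itertools.product(AA, repeat=3)]  # 8000 features
INDEX = {t: i for i, t in enumerate(TRIPEPTIDES)}

def tripeptide_composition(seq):
    """Normalized counts of the 8000 possible tripeptides in a protein."""
    v = np.zeros(len(TRIPEPTIDES))
    for i in range(len(seq) - 2):
        tri = seq[i:i + 3]
        if tri in INDEX:               # skip tripeptides with nonstandard residues
            v[INDEX[tri]] += 1.0
    return v / max(len(seq) - 2, 1)

# Hypothetical usage with sequences and 0/1 labels (endolysin / autolysin):
# X = np.array([tripeptide_composition(s) for s in sequences])
# clf = SVC(kernel="rbf").fit(X[:, selected_44_features], labels)
```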


1995
Vol 3 (3)
pp. 133-142
Author(s):
M. Hana
W.F. McClure
T.B. Whitaker
M. White
D.R. Bahler

Two artificial neural network models were used to estimate the nicotine in tobacco: (i) a back-propagation network and (ii) a linear network. The back-propagation network consisted of an input layer, an output layer and one hidden layer. The linear network consisted of an input layer and an output layer. Both networks used the generalised delta rule for learning. The performance of both networks was compared to the multiple linear regression (MLR) calibration method. The nicotine content in tobacco samples was estimated for two different data sets. Data set A contained 110 near infrared (NIR) spectra, each consisting of reflected energy at eight wavelengths. Data set B consisted of 200 NIR spectra, each having 840 spectral data points. The fast Fourier transform was applied to data set B in order to compress each spectrum into 13 Fourier coefficients. For data set A, the linear regression model gave the best results, followed by the back-propagation network and then the linear network. The true performance of the linear regression model was better than the back-propagation and linear networks by 14.0% and 18.1%, respectively. For data set B, the back-propagation network gave the best result, followed by MLR and the linear network; the linear network and MLR models gave almost the same results. The true performance of the back-propagation network model was better than the MLR and linear network by 35.14%.
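The compression step for data set B can be sketched with NumPy; the paper does not state whether real, imaginary, or magnitude coefficients were retained, so this sketch simply keeps the 13 lowest-order complex terms:

```python
import numpy as np

def compress_spectrum(spectrum, n_coeff=13):
    """Compress an NIR spectrum to its first n_coeff Fourier coefficients,
    as done for data set B (840 points -> 13 coefficients)."""
    return np.fft.rfft(spectrum)[:n_coeff]

def reconstruct(coeffs, n_points):
    """Approximate the original spectrum from the retained coefficients."""
    full = np.zeros(n_points // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=n_points)
```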


Genetics
2001
Vol 157 (3)
pp. 1369-1385
Author(s):
Z W Luo
C A Hackett
J E Bradshaw
J W McNicol
D Milbourne

Abstract This article presents methodology for the construction of a linkage map in an autotetraploid species, using either codominant or dominant molecular markers scored on two parents and their full-sib progeny. The steps of the analysis are as follows: identification of parental genotypes from the parental and offspring phenotypes; testing for independent segregation of markers; partition of markers into linkage groups using cluster analysis; maximum-likelihood estimation of the phase, recombination frequency, and LOD score for all pairs of markers in the same linkage group using the EM algorithm; ordering the markers and estimating distances between them; and reconstructing their linkage phases. The information from different marker configurations about the recombination frequency is examined and found to vary considerably, depending on the number of different alleles, the number of alleles shared by the parents, and the phase of the markers. The methods are applied to a simulated data set and to a small set of SSR and AFLP markers scored in a full-sib population of tetraploid potato.
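As a toy illustration of the pairwise linkage step, the LOD computation in the simplest fully informative diploid case compares the likelihood at the estimated recombination frequency with the likelihood under independence (r = 0.5); the paper's EM algorithm generalizes this to the many partially informative autotetraploid marker configurations:

```python
import numpy as np

def two_point_lod(n_rec, n_total):
    """LOD score for linkage between two markers from recombinant counts,
    in the fully informative diploid case (a simplification of the
    autotetraploid EM estimation described in the paper)."""
    r = max(n_rec / n_total, 1e-9)          # ML recombination frequency
    if r >= 0.5:
        return 0.0                          # no evidence of linkage
    loglik_r = n_rec * np.log10(r) + (n_total - n_rec) * np.log10(1 - r)
    loglik_half = n_total * np.log10(0.5)   # likelihood under independence
    return loglik_r - loglik_half

# e.g. 12 recombinants out of 100 gametes -> r = 0.12, LOD ≈ 14.2
```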


1994
Vol 37 (3)
Author(s):
M. Rizescu
E. Popescu
V. Oancea
D. Enescu

The paper presents our attempts to improve the locations obtained for local seismic events by using refined lithospheric structure models. The location program (based on Geiger's method) assumes a known velocity model. The program is run for several seismic sequences that occurred in different regions of the Romanian territory, using three velocity models for each sequence: 1) seven layers of constant seismic wave velocity, representing an average lithospheric structure for the whole territory; 2) a site-dependent structure (below each station), based on geophysical and geological information on the crust; 3) curves describing the dependence of propagation velocity on depth in the lithosphere, characterizing the seven structural units delineated on the Romanian territory. The results obtained with the different velocity models are compared, and station corrections are computed for each data set. Finally, the locations determined for some quarry blasts are compared with the true ones.
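The core of Geiger's method is a linearized least-squares update of the hypocenter and origin time. A minimal sketch for a homogeneous half-space (the paper uses layered and site-dependent models, so this only shows the iteration itself):

```python
import numpy as np

def geiger_step(m, stations, t_obs, v):
    """One Gauss-Newton iteration of Geiger's method for a homogeneous
    half-space of velocity v; m = (x, y, z, t0), stations is (n, 3)."""
    dx = m[:3] - stations                        # hypocenter-to-station vectors
    d = np.linalg.norm(dx, axis=1)               # source-station distances
    r = t_obs - (m[3] + d / v)                   # travel-time residuals
    G = np.column_stack([dx / (v * d[:, None]),  # d(t)/d(x, y, z)
                         np.ones(len(d))])       # d(t)/d(t0)
    dm, *_ = np.linalg.lstsq(G, r, rcond=None)
    return m + dm                                # updated (x, y, z, t0)

# Iterate geiger_step from a starting guess until the residuals stop
# shrinking; station corrections can be subtracted from t_obs beforehand.
```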


F1000Research
2020
Vol 8
pp. 2024
Author(s):
Joshua P. Zitovsky
Michael I. Love

Allelic imbalance occurs when the two alleles of a gene are differentially expressed within a diploid organism and can indicate important differences in cis-regulation and epigenetic state across the two chromosomes. Because of this, the ability to accurately quantify the proportion at which each allele of a gene is expressed is of great interest to researchers. This becomes challenging in the presence of small read counts and/or sample sizes, which can cause estimators for allelic expression proportions to have high variance. Investigators have traditionally dealt with this problem by filtering out genes with small counts and samples. However, this may inadvertently remove important genes that have truly large allelic imbalances. Another option is to use pseudocounts or Bayesian estimators to reduce the variance. To this end, we evaluated the accuracy of four different estimators, the latter two of which are Bayesian shrinkage estimators: maximum likelihood, adding a pseudocount to each allele, approximate posterior estimation of GLM coefficients (apeglm) and adaptive shrinkage (ash). We also wrote C++ code to quickly calculate ML and apeglm estimates and integrated it into the apeglm package. The four methods were evaluated on two simulations and one real data set. Apeglm consistently performed better than ML according to a variety of criteria, and generally outperformed use of pseudocounts as well. Ash also performed better than ML in one of the simulations, but in the other performance was more mixed. Finally, when compared to five other packages that also fit beta-binomial models, the apeglm package was substantially faster and more numerically reliable, making our package useful for quick and reliable analyses of allelic imbalance. Apeglm is available as an R/Bioconductor package at http://bioconductor.org/packages/apeglm.
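The contrast between the ML estimator and the simplest shrinkage option (pseudocounts) can be illustrated in a few lines; apeglm and ash fit richer models, so this sketch only shows why shrinkage helps at low counts:

```python
import numpy as np

def ml_proportion(a_counts, total_counts):
    """ML estimate of the allelic proportion: simply a / (a + b)."""
    return a_counts / total_counts

def pseudocount_proportion(a_counts, total_counts, pc=0.5):
    """Shrunken estimate: add a pseudocount to each allele, pulling
    low-count estimates toward 0.5 and reducing their variance."""
    return (a_counts + pc) / (total_counts + 2 * pc)

# With a = 3 of n = 4 reads: ML gives 0.75, pseudocounts give 3.5/5 = 0.70,
# trading a little bias for lower variance at small counts.
```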


2019
Vol 8 (2S11)
pp. 3523-3526

This paper describes an efficient algorithm for classification on large data sets. While many classification algorithms exist, they are not suitable for large and diverse data sets. Various ELM algorithms are available in the literature for working with large data sets. However, the existing algorithms use a fixed activation function, which can be a deficiency when working with large data. In this paper, we propose a novel ELM that employs the sigmoid activation function. The experimental evaluations demonstrate that our ELM-S algorithm performs better than ELM, SVM, and other state-of-the-art algorithms on large data sets.
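A generic ELM with a sigmoid hidden layer can be sketched as follows; the specifics of the proposed ELM-S are not given in the abstract, so names and sizes here are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_elm(X, T, n_hidden=100, seed=0):
    """Basic ELM: random input weights, sigmoid hidden layer, and output
    weights fit in closed form with the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = sigmoid(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```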


2011
Vol 48-49
pp. 102-105
Author(s):
Guo Zhen Cheng
Dong Nian Cheng
He Lei

Detecting network traffic anomalies is very important for network security. However, existing detectors suffer from high false-alarm rates and low detection rates, and cannot perform real-time detection in the backbone very well, because network traffic is nonlinear, nonstationary, and self-similar. We therefore propose a novel detection method, EMD-DS, and show that it can efficiently reduce the mean error rate of anomaly detection after empirical mode decomposition (EMD). On the KDD CUP 1999 intrusion detection evaluation data set, this detector detects 85.1% of attacks at a low false-alarm rate, which is better than some other systems.
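The abstract does not specify the EMD-DS detector itself, so the following is only a generic sketch of EMD-based preprocessing followed by a simple threshold detector, using the PyEMD package (installed as EMD-signal on pip):

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def emd_denoise(traffic, drop=2):
    """Decompose a traffic time series into IMFs and discard the first
    few (highest-frequency) modes, leaving a smoother signal in which
    anomalies stand out against the nonstationary background."""
    imfs = EMD().emd(np.asarray(traffic, dtype=float))
    return imfs[drop:].sum(axis=0)

def flag_anomalies(signal, k=3.0):
    """Simple k-sigma detector applied to the EMD-smoothed signal."""
    mu, sigma = signal.mean(), signal.std()
    return np.abs(signal - mu) > k * sigma
```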


Author(s):  
Usman Ahmed
Jerry Chun-Wei Lin
Gautam Srivastava

Deep learning methods have led to state-of-the-art medical applications, such as image classification and segmentation, and data-driven deep learning applications can help stakeholders to collaborate. However, a limited labelled data set restricts the ability of a deep learning algorithm to generalize from one domain to another. To handle this problem, meta-learning makes it possible to learn from a small set of data. We propose a meta-learning-based image segmentation model that combines the learning of state-of-the-art models and then uses it to achieve domain adaptation and high accuracy. We also propose a preprocessing algorithm to increase the usability of the segmented parts and to remove noise from new test images. The proposed model achieves 0.94 precision and 0.92 recall, an improvement of 3.3% over state-of-the-art algorithms.
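The reported metrics can be computed from binary masks as follows (a generic pixel-wise definition; the paper's exact evaluation protocol is not given):

```python
import numpy as np

def precision_recall(pred_mask, true_mask):
    """Pixel-wise precision and recall for binary segmentation masks,
    the metrics the paper reports (0.94 precision, 0.92 recall)."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()        # true-positive pixels
    precision = tp / max(pred.sum(), 1)          # of predicted foreground
    recall = tp / max(true.sum(), 1)             # of actual foreground
    return precision, recall
```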

