Prediction of DNA from context using neural networks

2021 ◽  
Author(s):  
Christian Grønbæk ◽  
Yuhu Liang ◽  
Desmond Elliott ◽  
Anders Krogh

One way to better understand the structure in DNA is by learning to predict the sequence. Here, we train a model to predict the missing base at any given position, given its left and right flanking contexts. Our best-performing model is a neural network that obtains an accuracy close to 54% on the human genome, which is 2 percentage points better than modelling the data using a Markov model. In likelihood-ratio tests, we show that the neural network is significantly better than any of the alternative models by a large margin. We report on where the accuracy is obtained, observing first that the performance appears to be uniform over the chromosomes. The models perform best in repetitive sequences, as expected, although they remain far from random performance in the more difficult coding sections, with accuracies of roughly 70% and 40%, respectively. Exploring the sources of the accuracy further, Fourier transforming the predictions reveals weak but clear periodic signals. In the human genome the characteristic periods hint at connections to nucleosome positioning. Following up on this, we find similar periodic signals in the GC/AT content of the human genome, which to the best of our knowledge have not been reported before. Similarly high accuracy is found on other large genomes, while lower predictive accuracy is observed on smaller genomes. Only in mouse did we see periodic signals in the same range as in human, though weaker and of a different type. Interestingly, applying a model trained on the mouse genome to the human genome results in a performance far below that of the human model, except in the difficult coding regions. Despite the clear outcomes of the likelihood-ratio tests, the superiority of the neural network methods over the Markov model is currently modest. We expect, however, that there is great potential for better modelling of DNA using different neural network architectures.
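
For readers who want a concrete picture of the task, the sketch below shows a minimal context-to-base predictor in Python; the flank size, hidden layers, toy random "genome", and the scikit-learn classifier are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (not the authors' code): predict a masked base from its
# flanking context with a small feed-forward network.
import numpy as np
from sklearn.neural_network import MLPClassifier

BASES = "ACGT"
BASE_TO_IDX = {b: i for i, b in enumerate(BASES)}

def encode_context(seq, pos, flank=5):
    """One-hot encode the `flank` bases to the left and right of `pos`."""
    window = seq[pos - flank:pos] + seq[pos + 1:pos + 1 + flank]
    one_hot = np.zeros((2 * flank, 4))
    for i, base in enumerate(window):
        one_hot[i, BASE_TO_IDX[base]] = 1.0
    return one_hot.ravel()

def make_dataset(seq, flank=5):
    """Every interior position of the sequence becomes one example."""
    X, y = [], []
    for pos in range(flank, len(seq) - flank):
        X.append(encode_context(seq, pos, flank))
        y.append(BASE_TO_IDX[seq[pos]])
    return np.array(X), np.array(y)

# A random toy sequence stands in for a genome chunk.
rng = np.random.default_rng(0)
toy_seq = "".join(rng.choice(list(BASES), size=2000))
X, y = make_dataset(toy_seq)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50, random_state=0)
clf.fit(X, y)
# Chance level is 0.25 on random DNA; real genomes are far more predictable.
print("training accuracy:", clf.score(X, y))
```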

1995 ◽  
Vol 06 (05) ◽  
pp. 681-692
Author(s):  
R. ODORICO

A Neural Network trigger for [Formula: see text] events based on the SVT microvertex processor of experiment CDF at Fermilab is presented. It exploits correlations among track impact parameters and azimuths calculated by the SVT from the SVX microvertex detector data. The neural trigger is meant for implementation on the systolic Siemens microprocessor MA16, which has already been used in a neural-network trigger for experiment WA92 at CERN. A suitable set of input variables is found, which allows a viable solution for the preprocessing task using standard electronic components. The response time of the neural-network stage of the trigger, including preprocessing, can be estimated at ~10 μs. Its precise value depends on the quantitative specifications of the output signals of the SVT, which is still under development. The performance of the neural-network trigger is found to be significantly better than that of a conventional trigger exclusively based on impact parameter data.
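
The sketch below is a toy software analogue of such a trigger: synthetic per-event impact parameters and azimuths are preprocessed into correlation-sensitive features and fed to a small feed-forward classifier. The event model, feature choice, and network size are assumptions for illustration, not the CDF/SVT design.

```python
# Toy software analogue only, not the CDF/SVT trigger.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
N_TRACKS = 4  # assumed fixed number of SVT tracks per event

def fake_events(n, signal):
    """Signal-like events get larger impact parameters than background."""
    d0 = rng.normal(0.1 if signal else 0.0, 0.05, size=(n, N_TRACKS))  # impact parameters
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n, N_TRACKS))            # track azimuths
    # Simple preprocessing: sorted |d0| plus azimuthal gaps, mimicking the
    # idea of exploiting correlations between impact parameters and azimuths.
    return np.hstack([np.sort(np.abs(d0), axis=1),
                      np.diff(np.sort(phi, axis=1), axis=1)])

X = np.vstack([fake_events(2000, True), fake_events(2000, False)])
y = np.array([1] * 2000 + [0] * 2000)

trigger = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=1)
trigger.fit(X, y)
print("classification accuracy on the training sample:", trigger.score(X, y))
```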


Author(s):  
Soha Abd Mohamed El-Moamen ◽  
Marghany Hassan Mohamed ◽  
Mohammed F. Farghally

The need to track and evaluate patients in real time has increased interest in recognizing people's actions in order to enhance care facilities. Deep learning is well suited both to rapidly processing large volumes of healthcare data and to making accurate predictions for early lung cancer detection. In this paper, we propose a constructive deep neural network with Apache Spark to classify images and levels of lung cancer. We developed a binary classification model that uses a threshold technique to classify nodules as benign or malignant. In the proposed framework, training of the neural network models, defined using the Keras API, is performed with BigDL on a distributed Spark cluster. The proposed algorithm achieves an AUC of 0.9810, and its misclassification rate shows that the suggested classifiers perform better than other classifiers.
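
A single-machine sketch of the classification step follows; the distributed BigDL-on-Spark training described above is omitted, and the input shape, layer sizes, and 0.5 decision threshold are illustrative assumptions.

```python
# Single-machine sketch of the binary classification step only.
import numpy as np
import tensorflow as tf

# Stand-in data: flattened nodule image patches with benign/malignant labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64 * 64)).astype("float32")
y = rng.integers(0, 2, size=500).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64 * 64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Threshold technique: map predicted probabilities to benign/malignant labels.
probs = model.predict(X, verbose=0).ravel()
labels = np.where(probs >= 0.5, "malignant", "benign")
print(labels[:10])
```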


2021 ◽  
Vol 2083 (3) ◽  
pp. 032010
Author(s):  
Rong Ma

Abstract The traditional BP neural network has difficulty achieving the target accuracy in predicting waterway cargo turnover. In order to improve the accuracy of waterway cargo turnover forecasts, a forecasting model was created in which a genetic algorithm optimizes the neural network parameters. The genetic algorithm avoids the trap that general iterative methods easily fall into, namely the "endless loop" of becoming stuck at a local minimum, while keeping the computation time short and the robustness high. The BP neural network optimized by the genetic algorithm was used to predict waterway cargo turnover, and an empirical analysis of the forecasts was carried out. The results show that the genetic-algorithm-optimized neural network predicts waterway cargo turnover more accurately than the traditional BP neural network, and that the optimized model can analyze the long-term characteristics of changes in waterway cargo turnover, with a prediction effect far better than that of traditional neural networks.
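
The sketch below illustrates the general idea of a genetic algorithm searching BP-network settings for a one-step-ahead forecast; the series, the genome (hidden units and learning rate), and the GA parameters are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of the idea only: a tiny genetic algorithm (elitism plus mutation)
# tuning a BP network for a one-step-ahead turnover forecast.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(1.0, 0.3, size=200))  # stand-in turnover series
X = np.column_stack([series[0:-3], series[1:-2], series[2:-1]])  # three lags
y = series[3:]

def fitness(genome):
    """Negative validation MSE of a BP network built from the genome."""
    hidden, lr = int(genome[0]), float(genome[1])
    net = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                       max_iter=500, random_state=0)
    net.fit(X[:150], y[:150])
    return -np.mean((net.predict(X[150:]) - y[150:]) ** 2)

population = [(rng.integers(4, 32), rng.uniform(1e-3, 1e-1)) for _ in range(8)]
for _ in range(5):
    parents = sorted(population, key=fitness, reverse=True)[:4]  # elitism
    children = [(max(4, int(h + rng.integers(-4, 5))),            # mutate size
                 abs(lr * rng.normal(1.0, 0.3)) + 1e-4)           # mutate rate
                for h, lr in parents]
    population = parents + children

best_hidden, best_lr = max(population, key=fitness)
print("best genome: hidden units =", int(best_hidden), ", learning rate =", float(best_lr))
```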


2007 ◽  
Vol 11 (6) ◽  
pp. 1883-1896 ◽  
Author(s):  
A. Piotrowski ◽  
S. G. Wallis ◽  
J. J. Napiórkowski ◽  
P. M. Rowiński

Abstract. The prediction of temporal concentration profiles of a transported pollutant in a river is still a subject of ongoing research efforts worldwide. The present paper is aimed at studying the possibility of using Multi-Layer Perceptron Neural Networks to evaluate the whole concentration versus time profile at several cross-sections of a river under various flow conditions, using as little information about the river system as possible. In contrast with the earlier neural networks based work on longitudinal dispersion coefficients, this new approach relies more heavily on measurements of concentration collected during tracer tests over a range of flow conditions, but fewer hydraulic and morphological data are needed. The study is based upon 26 tracer experiments performed in a small river in Edinburgh, UK (Murray Burn) at various flow rates in a 540 m long reach. The only data used in this study were concentration measurements collected at 4 cross-sections, the distances between the cross-sections and the injection site, time, as well as flow rate and water velocity, obtained from the data measured at the 1st and 2nd cross-sections. The four main features of concentration versus time profiles at a particular cross-section, namely the peak concentration, the arrival time of the peak at the cross-section, and the shapes of the rising and falling limbs of the profile, are modeled, with a separately designed neural network used for each of them. A variant was also investigated in which the conservation of the injected mass was assured by adjusting the predicted peak concentration. The neural network methods were compared with the unit peak attenuation curve concept. In general, the neural networks predicted the main features of the concentration profiles satisfactorily. The predicted peak concentrations were generally better than those obtained using the unit peak attenuation method, and the method with mass-conservation assured generally performed better than the method that did not account for mass-conservation. Predictions of peak travel time were also better using the neural networks than the unit peak attenuation method. Including more data in the neural network training set clearly improved the prediction of the shapes of the concentration profiles. Similar improvements in peak concentration were less significant, and the travel time prediction appeared to be largely unaffected.
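
As a rough illustration of the modelling setup, the sketch below trains one small network per profile feature on synthetic data; the input variables, targets, and network sizes are assumptions, not the paper's calibrated models.

```python
# Illustrative sketch on synthetic data: one separately trained network per
# profile feature, mirroring the four dedicated networks described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([rng.uniform(50, 540, n),    # distance to injection site (m)
                     rng.uniform(0.05, 1.0, n),  # flow rate (m3/s)
                     rng.uniform(0.1, 1.0, n)])  # water velocity (m/s)

# Stand-in targets for the four profile features.
targets = {
    "peak_concentration": 100.0 * X[:, 1] / X[:, 0],
    "peak_arrival_time": X[:, 0] / X[:, 2],
    "rising_limb_shape": rng.uniform(0.5, 2.0, n),
    "falling_limb_shape": rng.uniform(1.0, 4.0, n),
}

models = {}
for name, target in targets.items():
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                     random_state=0))
    net.fit(X, target)
    models[name] = net

new_site = np.array([[300.0, 0.4, 0.5]])  # hypothetical cross-section conditions
print({name: float(m.predict(new_site)[0]) for name, m in models.items()})
```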


1999 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
K. T. Yang ◽  
Rodney L. McClain

Abstract In the present study we apply an artificial neural network to predict the operation of a humid air-water fin-tube compact heat exchanger. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Published experimental data, corresponding to humid air flowing over the heat exchanger tubes and water flowing inside them, are used to train the neural network. After training with known experimental values of the humid-air flow rates, dry-bulb and wet-bulb inlet temperatures for various geometrical configurations, the j-factor and heat transfer rate predictions of the network were tested against the experimental values. Comparisons were made with published predictions of power-law correlations which were obtained from the same data. The results demonstrate that the neural network is able to predict the performance of this heat exchanger much better than the correlations.
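
A minimal sketch of the described setup follows, with synthetic stand-in data rather than the published measurements: a feed-forward network with sigmoid activations trained by backpropagation to map inlet conditions to a heat-transfer rate.

```python
# Sketch only; the inputs, target, and network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
air_flow = rng.uniform(0.1, 1.0, n)   # humid-air flow rate (kg/s)
t_dry = rng.uniform(20.0, 45.0, n)    # dry-bulb inlet temperature (degC)
t_wet = rng.uniform(15.0, 30.0, n)    # wet-bulb inlet temperature (degC)
X = np.column_stack([air_flow, t_dry, t_wet])

# Stand-in target with a rough power-law flavour plus noise.
q = 5.0 * air_flow ** 0.6 * (t_dry - t_wet) + rng.normal(0.0, 0.5, n)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8, 8), activation="logistic",
                                 max_iter=5000, random_state=0))
net.fit(X, q)
print("R^2 on the training data:", net.score(X, q))
```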


2020 ◽  
Vol 9 (2) ◽  
pp. 285
Author(s):  
Putu Wahyu Tirta Guna ◽  
Luh Arida Ayu Ayu Rahning Putri

Not many people know that endek cloth has four known variants. Nowadays, computing power and classification algorithms can be applied to solve classification problems that take feature data as input. We can use this computing power to digitize these endek patterns. The feature extraction algorithm used in this research is GLCM, and the extracted features act as input for the neural network model. There are many optimizer algorithms that can be used in the backpropagation phase. In this research we use Adam, one of the newest and most popular optimizer algorithms, and compare its performance with SGD, an older but still popular optimizer. We find that the Adam algorithm yields 33% accuracy, which is better than the 23% accuracy given by the SGD algorithm. Training for more epochs also affects the overall model accuracy.
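
The sketch below reproduces the shape of this pipeline on random stand-in images: GLCM texture features feed a small classifier trained once with Adam and once with SGD. The GLCM settings, network size, and data are illustrative assumptions.

```python
# Sketch of the GLCM-features-plus-classifier pipeline on synthetic images.
import numpy as np
import tensorflow as tf
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]

def glcm_features(image):
    """Grey-level co-occurrence statistics for one 8-bit image."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

# Synthetic data set: 4 pattern classes, 40 images each.
images = rng.integers(0, 256, size=(160, 32, 32), dtype=np.uint8)
labels = np.repeat(np.arange(4), 40)
X = np.array([glcm_features(img) for img in images])

def final_accuracy(optimizer):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(len(PROPS),)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(X, labels, epochs=30, verbose=0)
    return history.history["accuracy"][-1]

print("adam:", final_accuracy("adam"), " sgd:", final_accuracy("sgd"))
```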


2004 ◽  
Vol 43 (11) ◽  
pp. 1783-1790 ◽  
Author(s):  
Craig G. Carmichael ◽  
William A. Gallus ◽  
Bradley R. Temeyer ◽  
Mark K. Bryden

Abstract Winter roadway maintenance budget data for the state of Iowa have been combined with available climate data for a 6-yr period to create a winter weather index that provides a useful assessment of winter severity. The weather index can be combined with measures of transportation department infrastructure within a region to estimate expenses for a given time period in the region. The index was developed using artificial neural network techniques that are nonlinear and perceive patterns in the input data. Winter weather severity as diagnosed by the index correlates well with Iowa Department of Transportation roadway treatment expenses. The neural network–based index is shown to perform better than the Strategic Highway Research Program (SHRP) index and an index developed using linear regression techniques.
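
Schematically, such an index can be built as in the sketch below: a small network maps seasonal climate summaries to a severity score calibrated against maintenance expense. The variables and values are synthetic stand-ins, not the Iowa DOT data.

```python
# Schematic sketch with synthetic stand-ins for the climate and expense data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120  # e.g. region-season combinations
snowfall = rng.uniform(0.0, 150.0, n)     # seasonal snowfall (cm)
freezing_days = rng.integers(10, 120, n)  # days below freezing
ice_events = rng.integers(0, 15, n)       # freezing-rain events
X = np.column_stack([snowfall, freezing_days, ice_events])

# Stand-in expense used as the calibration target for the index.
expense = 2.0 * snowfall + 0.5 * freezing_days + 5.0 * ice_events + rng.normal(0, 20, n)

index_model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
index_model.fit(X, expense)
severity_index = index_model.predict(X)
print("correlation with expense:", np.corrcoef(severity_index, expense)[0, 1])
```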


2015 ◽  
Vol 738-739 ◽  
pp. 191-196
Author(s):  
Yun Jie Li ◽  
Hui Song

In this paper, several data mining techniques were discussed and analyzed in order to achieve the objective of recognizing human daily activities from a continuous sensing data set. The data mining techniques of Decision Tree, Naïve Bayes and Neural Network were successfully applied to the data set. The paper also proposes combining the Neural Network with the Decision Tree; the results show that the combined model works much better than either the typical Neural Network or the typical Decision Tree model.
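
The abstract does not specify how the two models are combined; the sketch below shows one plausible hybrid, assumed here for illustration, in which a decision tree is trained on the raw features augmented with the neural network's class probabilities.

```python
# One plausible hybrid (an assumption, not the paper's method): a decision
# tree trained on sensor features plus the neural network's probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for windows of continuous activity-sensor data.
X, y = make_classification(n_samples=1000, n_features=20, n_classes=4,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
nn.fit(X_tr, y_tr)

# Hybrid: append the network's probability outputs to the original features.
tree = DecisionTreeClassifier(max_depth=8, random_state=0)
tree.fit(np.hstack([X_tr, nn.predict_proba(X_tr)]), y_tr)

print("neural network alone:", nn.score(X_te, y_te))
print("hybrid tree:         ",
      tree.score(np.hstack([X_te, nn.predict_proba(X_te)]), y_te))
```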


The proposed work extensively evaluates whether a user is depressed based on his or her Tweets on Twitter. With the omnipresence of social media, this method should help in identifying depression among users. We propose an Optimized Hybrid Neural Network model that evaluates user Tweets on Twitter to analyze whether a user is depressed. The Neural Network is trained on Tweets to predict their polarity, so that at any point, when presented with a Tweet, the model outputs the polarity associated with it. In addition, a user-friendly GUI is provided that loads the trained neural network quickly and can be used to analyze a user's state of depression. The aim of this research work is to provide an algorithm that evaluates users' sentiment on Twitter better than all other existing techniques.
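
As a baseline sketch of the polarity-prediction step only (not the proposed Optimized Hybrid Neural Network), the code below maps tweets to a polarity label with TF-IDF features and a small feed-forward classifier; the tiny labelled tweets are illustrative placeholders.

```python
# Baseline sketch: TF-IDF features plus a small feed-forward classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

tweets = [
    "feeling hopeless and tired of everything",
    "cannot sleep again, everything feels pointless",
    "had a wonderful day with friends",
    "so excited about the new project",
]
polarity = [0, 0, 1, 1]  # 0 = negative, 1 = positive

clf = make_pipeline(TfidfVectorizer(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
clf.fit(tweets, polarity)
print(clf.predict(["everything feels pointless today"]))  # expected: negative
```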


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Mengyue Li ◽  
Biwen Li ◽  
Yuan Wan

This study is devoted to investigating the stabilization to exponential input-to-state stability (ISS) of a class of neural networks with time delay and external disturbances under observer-based aperiodic intermittent control (APIC). In contrast with general neural networks, the state of the neural network investigated is not fully available. Correspondingly, an observer-based APIC is constructed; moreover, neither the observer nor the controller requires information about the time delay. Then, stabilization to exponential ISS of the neural network is achieved separately by the observer-based time-triggered APIC (T-APIC) and the observer-based event-triggered APIC (E-APIC), and corresponding criteria are given. Furthermore, the minimum activation time rate (MATR) of the observer-based T-APIC and the observer-based E-APIC is estimated, respectively. Finally, a numerical example is given, which not only verifies the effectiveness of our results but also shows that the observer-based E-APIC is superior to the observer-based T-APIC and the observer-based periodic intermittent control (PIC) in terms of control times and the minimum activation time rate, and that the observer-based T-APIC also performs better than the observer-based PIC.
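
For orientation, the block below sketches a standard model shape for this problem class; the matrices C, A, B, the activation f, the observer gain K, and the switching instants t_k, s_k are illustrative assumptions, not the paper's exact formulation.

```latex
% Hedged sketch of a typical delayed neural network with disturbance w(t),
% an aperiodic intermittent feedback u(t) built from an observer state
% \hat{x}(t), and the exponential ISS property being targeted.
\begin{aligned}
\dot{x}(t) &= -Cx(t) + Af\big(x(t)\big) + Bf\big(x(t-\tau(t))\big) + u(t) + w(t),\\
u(t) &=
\begin{cases}
K\hat{x}(t), & t \in [t_k,\, s_k) \ \text{(activation intervals)},\\
0,           & t \in [s_k,\, t_{k+1}) \ \text{(rest intervals)},
\end{cases}\\
\|x(t)\| &\le \beta\big(\|x_0\|_{\tau}\big)\,e^{-\lambda t}
           + \gamma\Big(\sup_{0\le s\le t}\|w(s)\|\Big)
\qquad \text{(exponential ISS)}.
\end{aligned}
```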

