NEURAL NETWORKS TO COMPUTE MOLECULAR DYNAMICS

1994 ◽  
Vol 02 (02) ◽  
pp. 193-228 ◽  
Author(s):  
LARRY S. LIEBOVITCH ◽  
NIKITA D. ARNOLD ◽  
LEV Y. SELECTOR

Large molecules such as proteins have many of the properties of neural networks. Hence, neural networks may serve as a natural and thus efficient method to compute the time-dependent structural changes of large molecules. We describe how to encode the spatial conformation and energy structure of a molecule in a neural network. The dynamics of the molecule can then be computed from the dynamics of the corresponding neural network. As a detailed example, we formulated a Hopfield network to compute the molecular dynamics of a small molecule, cyclohexane. We used this network to determine the distribution of times spent in the twist and chair conformational states as the cyclohexane thermally switches between these two states.
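The mapping can be illustrated with a minimal Hopfield-style sketch. The binary encoding of the two conformations below is invented purely for illustration; the paper's actual mapping of cyclohexane geometry onto network states is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32

# Hypothetical binary encodings of the two conformational states
# (illustrative only; not the paper's actual encoding of cyclohexane)
chair = rng.choice([-1, 1], size=N)
twist = -chair  # maximally distinct second state, for this sketch

# Hebbian weights storing both patterns; zero diagonal as in a standard Hopfield net
W = (np.outer(chair, chair) + np.outer(twist, twist)) / N
np.fill_diagonal(W, 0)

def recall(state, steps=500):
    """Asynchronous deterministic updates: the network descends its
    energy landscape to the nearest stored conformation."""
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Start near 'chair' with a few flipped bits and let the dynamics relax
noisy = chair.copy()
noisy[:4] *= -1
result = recall(noisy)
print(np.array_equal(result, chair))  # → True
```

Thermal switching between the two attractors would additionally require stochastic (finite-temperature) updates rather than the deterministic rule above.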

1996 ◽  
Vol 8 (4) ◽  
pp. 843-854 ◽  
Author(s):  
Peter M. Williams

Neural network outputs are interpreted as parameters of statistical distributions. This allows us to fit conditional distributions in which the parameters depend on the inputs to the network. We exploit this in modeling multivariate data, including the univariate case, in which there may be input-dependent (e.g., time-dependent) correlations between output components. This provides a novel way of modeling conditional correlation that extends existing techniques for determining input-dependent (local) error bars.
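A minimal numerical illustration of the idea, with invented toy data: a conditional Gaussian whose error bar depends on the input assigns the data a lower negative log-likelihood than the best constant error bar:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy heteroscedastic data: the noise level depends on the input
x = rng.uniform(-1.0, 1.0, 5000)
sig_true = 0.1 + 0.4 * np.abs(x)
y = 2.0 * x + rng.normal(0.0, sig_true)

def gaussian_nll(mu, sig):
    # Mean negative log-likelihood under N(mu(x), sig(x)^2),
    # omitting the constant 0.5*log(2*pi)
    return np.mean(np.log(sig) + 0.5 * ((y - mu) / sig) ** 2)

# Conventional fit: one global error bar (the best constant sigma)
sig_const = np.sqrt(np.mean((y - 2.0 * x) ** 2))
nll_const = gaussian_nll(2.0 * x, np.full_like(x, sig_const))

# Williams-style fit: the "network" outputs an input-dependent sigma;
# here we plug in the generating sigma(x) to show the likelihood gain
nll_local = gaussian_nll(2.0 * x, sig_true)

print(nll_local < nll_const)  # → True: local error bars fit better
```

In the paper's setting the mean and (log-)variance would be network outputs trained by minimizing exactly this negative log-likelihood.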


Author(s):  
Olga RUZAKOVA

The article presents a methodological approach to assessing the investment attractiveness of an enterprise based on the mathematical apparatus of the Hopfield neural network. An extended set of evaluation parameters for the investment process has been compiled, and an algorithm for formalizing the decision-making process regarding the investment attractiveness of an enterprise, based on the mathematical apparatus of neural networks, has been developed. The proposed approach takes into account constantly changing sets of quantitative and qualitative parameters and identifies the appropriate level of investment attractiveness with minimal expense of money and time, by selecting the stored Hopfield-network standard most similar to the pattern that characterizes the enterprise's activity. The resulting formalization of the investment process supports investment decisions under incomplete and heterogeneous information, based on the methodological tools of neural networks.


2020 ◽  
Author(s):  
Xian Wang ◽  
Anshuman Kumar ◽  
Christian Shelton ◽  
Bryan Wong

Inverse problems continue to garner immense interest in the physical sciences, particularly in the context of controlling desired phenomena in non-equilibrium systems. In this work, we utilize a series of deep neural networks for predicting time-dependent optimal control fields, <i>E(t)</i>, that enable desired electronic transitions in reduced-dimensional quantum dynamical systems. To solve this inverse problem, we investigated two independent machine learning approaches: (1) a feedforward neural network for predicting the frequency and amplitude content of the power spectrum in the frequency domain (i.e., the Fourier transform of <i>E(t)</i>), and (2) a cross-correlation neural network approach for directly predicting <i>E(t)</i> in the time domain. Both of these machine learning methods give complementary approaches for probing the underlying quantum dynamics and also exhibit impressive performance in accurately predicting both the frequency and strength of the optimal control field. We provide detailed architectures and hyperparameters for these deep neural networks as well as performance metrics for each of our machine-learned models. From these results, we show that machine learning approaches, particularly deep neural networks, can be employed as a cost-effective statistical approach for designing electromagnetic fields to enable desired transitions in these quantum dynamical systems.
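The frequency-domain parameterization in approach (1) can be sketched as follows; the "predicted" frequency and amplitude are placeholders standing in for actual network outputs:

```python
import numpy as np

# Suppose the feedforward network has predicted a dominant frequency and
# amplitude for the control field's power spectrum (values are made up);
# the time-domain field is then recovered as a cosine at that frequency.
n, dt = 1000, 0.01
t = np.arange(n) * dt
freq_pred, amp_pred = 5.0, 0.3          # hypothetical network outputs (Hz, a.u.)

E = amp_pred * np.cos(2 * np.pi * freq_pred * t)   # reconstructed E(t)

# Check: the power spectrum of the reconstructed field peaks at freq_pred
spectrum = np.abs(np.fft.rfft(E)) ** 2
freqs = np.fft.rfftfreq(n, dt)
print(freqs[np.argmax(spectrum)])  # → 5.0
```

A realistic field would be a superposition of several such components, one per predicted peak in the power spectrum.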


2019 ◽  
Vol 8 (2) ◽  
pp. 4928-4937 ◽  

Odia character and digit recognition is a vital problem in present-day computer vision. This paper discusses a Hopfield neural network designed to recognize printed Odia characters. Optical Character Recognition (OCR) is the conversion of images of handwritten, printed, or typewritten text into a machine-encoded text version. An Artificial Neural Network (ANN) was trained as a classifier following the rules of the Hopfield network, using code written in MATLAB. All data preprocessing (image acquisition, binarization, skeletonization, skew detection and correction, image cropping, resizing, and digitization) was likewise carried out in MATLAB. The well-known stages of segmentation, feature extraction, and classification for Odia character recognition are reviewed and outlined with their relative strengths and weaknesses, so the paper should also serve readers who wish to work in the field of Odia character recognition. Recognition of printed Odia characters, numerals, and machine characters finds valuable applications in banks, industries, and offices. In the proposed work we develop an efficient and robust mechanism in which Odia characters are recognized by a Hopfield Neural Network (HNN).
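Two of the preprocessing stages named above, binarization and image cropping, can be sketched as follows (the image is synthetic; a real system would operate on scanned glyphs):

```python
import numpy as np

# A bright "character" on a dark background stands in for a scanned glyph
img = np.zeros((10, 12))
img[3:7, 4:9] = 200.0

binary = (img > 127).astype(int)             # binarization

rows = np.flatnonzero(binary.any(axis=1))    # image cropping: bounding box
cols = np.flatnonzero(binary.any(axis=0))
cropped = binary[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

print(cropped.shape)  # → (4, 5)
```

After resizing to a fixed grid, the 0/1 image would be mapped to a ±1 vector to serve as a Hopfield network state.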


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 726 ◽  
Author(s):  
Giorgio Gosti ◽  
Viola Folli ◽  
Marco Leonetti ◽  
Giancarlo Ruocco

In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost always disallowed, in both artificial and biological neural networks. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows how, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound 0.14N, as it is for networks without autapses. More precisely, it describes how, as the number of stored patterns increases well over the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories that far exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, a network with autapses in this high-storage regime has basins of attraction that shrink to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern leads the system to a stationary state associated with a different memory state, limiting the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome this limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy forms what we call an absorbing neighborhood of states surrounding each stored memory: a set of states within a given Hamming distance of a network state, absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set.
We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing neighborhood of exponentially growing size.
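The role of the diagonal weights can be seen in a small sketch (plain Hebbian learning, not the paper's full construction): retaining W_ii gives each neuron an autapse, and at low load the stored patterns remain fixed points in both variants:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights; keeping the diagonal (W_ii != 0) gives each neuron an autapse
W = patterns.T @ patterns / N          # autapses retained
W_no = W.copy()
np.fill_diagonal(W_no, 0)              # conventional network: autapses removed

def step(W, s):
    # One synchronous update of the Hopfield dynamics
    return np.where(W @ s >= 0, 1, -1)

# At this low load (P/N = 0.05 << 0.14) every stored pattern
# is a fixed point of both networks
s = patterns[0]
print(np.array_equal(step(W, s), s), np.array_equal(step(W_no, s), s))
```

The differences the abstract describes appear only in the high-storage regime (P much greater than N), which this toy load does not reach.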


Author(s):  
Hanlin Gu ◽  
Wei Wang ◽  
Siqin Cao ◽  
Ilona Christy Unarta ◽  
Yuan Yao ◽  
...  

The Markov State Model (MSM) is a powerful tool for modeling long-timescale dynamics from numerous short molecular dynamics (MD) simulation trajectories, which makes it a useful tool for...


1995 ◽  
Vol 06 (03) ◽  
pp. 317-357 ◽  
Author(s):  
M.B. SUKHASWAMI ◽  
P. SEETHARAMULU ◽  
ARUN K. PUJARI

The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different “hands” in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
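The MNNAM idea of splitting storage across parallel networks can be sketched as follows (synthetic patterns and a simple best-overlap selection rule of our own devising; the paper's scheme is more detailed):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
patterns = rng.choice([-1, 1], size=(12, N))   # synthetic "characters"

def hebbian(pats):
    W = pats.T @ pats / N
    np.fill_diagonal(W, 0)
    return W

# Split the patterns across several Hopfield networks working in parallel,
# so each stays well under its ~0.14N capacity
banks = [hebbian(patterns[i:i + 4]) for i in range(0, 12, 4)]

def recall(probe, steps=64 * 20):
    best, best_overlap = None, -np.inf
    for W in banks:
        s = probe.copy()
        for _ in range(steps):                 # asynchronous relaxation
            i = rng.integers(N)
            s[i] = 1 if W[i] @ s >= 0 else -1
        overlap = abs(s @ probe)               # pick the bank whose attractor
        if overlap > best_overlap:             # agrees best with the probe
            best, best_overlap = s, overlap
    return best

# Recover a stored "character" from a probe with ~10% of its bits corrupted
probe = patterns[5].copy()
flip = rng.choice(N, size=6, replace=False)
probe[flip] *= -1
recovered = recall(probe)
print(np.array_equal(recovered, patterns[5]))  # → True
```

Only the bank that actually stores the probed character relaxes to a state close to the probe; the others drift to unrelated attractors with low overlap.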


Author(s):  
Panagiotis Kouvaros ◽  
Alessio Lomuscio

We introduce an efficient method for the complete verification of ReLU-based feed-forward neural networks. The method implements branching on the ReLU states on the basis of a notion of dependency between the nodes. This results in dividing the original verification problem into a set of sub-problems whose MILP formulations require fewer integrality constraints. We evaluate the method on all of the ReLU-based fully connected networks from the first competition for neural network verification. The experimental results obtained show 145% performance gains over the present state-of-the-art in complete verification.
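The connection between ReLU states and integrality constraints can be sketched with plain interval bound propagation (a standard technique, not the paper's dependency analysis): any ReLU whose pre-activation interval excludes zero is stable, and needs no binary variable in the MILP encoding:

```python
import numpy as np

rng = np.random.default_rng(5)
W, b = rng.normal(size=(6, 4)), rng.normal(size=6)  # arbitrary one-layer net

lo, hi = -0.1 * np.ones(4), 0.1 * np.ones(4)   # input box to verify over

# Interval arithmetic for the affine layer: split W into +/- parts
Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
pre_lo = Wp @ lo + Wn @ hi + b
pre_hi = Wp @ hi + Wn @ lo + b

# A ReLU whose pre-activation interval excludes 0 has a fixed state
# over the whole input box; only the rest require branching
stable = (pre_lo >= 0) | (pre_hi <= 0)
print(int(stable.sum()), "of", len(stable), "ReLUs need no branching")
```

The paper's contribution goes further: it exploits dependencies between the remaining unstable ReLUs so that branching on one node can fix the state of others.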


1995 ◽  
Vol 09 (10) ◽  
pp. 1159-1169 ◽  
Author(s):  
VARSHA BANERJEE ◽  
SANJAY PURI

We present a continuous-time neural network model which consists of neurons with a continuous input-output relation. We use a computationally efficient discrete-time equivalent of this model to study its time-dependent properties. Detailed numerical results are presented for the behavior of the relaxation time to a target pattern as a function of the storage capacity of the network.
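A model of this kind can be sketched with an Euler discretization (our choice of discretization and parameters, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 50
target = rng.choice([-1.0, 1.0], size=N)       # a single stored pattern

W = np.outer(target, target) / N               # Hebbian couplings
np.fill_diagonal(W, 0)

# Discrete-time equivalent of the continuous dynamics
#   dx/dt = -x + W g(x),  with continuous input-output relation g(x) = tanh(beta*x),
# via an Euler step of size dt
beta, dt = 4.0, 0.1
x = 0.1 * target + 0.05 * rng.normal(size=N)   # start weakly correlated

overlaps = []
for _ in range(200):
    x = x + dt * (-x + W @ np.tanh(beta * x))
    overlaps.append(target @ np.sign(x) / N)   # overlap with the target

print(round(overlaps[-1], 2))  # relaxes to (near) full overlap with the target
```

The relaxation time to the target pattern would then be read off as the number of steps needed for the overlap to cross a chosen threshold, measured as a function of the storage load.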

