Microcomputer-based in-vivo liver differentiation using artificial neural networks

Author(s):  
N. Botros ◽  
D. Zatari
Author(s):  
Klaus-Jürgen Schapper ◽  
Michael Wiese ◽  
Reinhold Dieter ◽  
Peter Emig ◽  
Jürgen Engel ◽  
...  

Author(s):  
Abdelkader A Metwally ◽  
Amira A Nayel ◽  
Rania M Hathout

In silico prediction of the in vivo efficacy of siRNA ionizable-lipid nanoparticles is desirable yet has not been achieved before. This study aims to computationally predict the in vivo efficacy of siRNA nanoparticles, saving time and resources. A data set of 120 entries was prepared by combining molecular descriptors of the ionizable lipids with two nanoparticle formulation characteristics. Input descriptor combinations were selected by an evolutionary algorithm. Artificial neural networks, support vector machines, and partial least squares regression were used for QSAR modeling. Depending on how the data set was split, two training sets and two external validation sets were prepared; the training and validation sets contained 90 and 30 entries, respectively. The results showed successful prediction of the validation-set log(dose), with R2val = 0.86–0.89 and 0.75–0.80 for validation sets one and two, respectively. Artificial neural networks gave the best R2val for both validation sets. For predictions with high bias, R2val improved from 0.47 to 0.96 when the training set was restricted to lipids lying within the applicability domain. In conclusion, the in vivo performance of siRNA nanoparticles was successfully predicted by combining cheminformatics with machine learning techniques.
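
The sketch below is a rough, hedged illustration of the workflow this abstract describes: a descriptor matrix combined with formulation features, a 90/30 training/validation split, ANN, SVM, and PLS regressors, and scoring by R2 on the external validation set. The synthetic data, feature counts, and model settings are assumptions for runnability, not the authors' actual pipeline or dataset.

```python
# Hedged sketch (not the authors' code): QSAR-style modeling with synthetic
# stand-in data for the lipid descriptors and formulation characteristics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in data: 120 entries x (molecular descriptors + 2 formulation features).
X = rng.normal(size=(120, 12))                            # hypothetical descriptor matrix
y = X[:, :4].sum(axis=1) + 0.1 * rng.normal(size=120)     # hypothetical log(dose) response

# 90/30 split into training and external validation sets, as in the abstract.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=30, random_state=1)

scaler = StandardScaler().fit(X_tr)
X_tr_s, X_val_s = scaler.transform(X_tr), scaler.transform(X_val)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1),
    "SVM": SVR(kernel="rbf", C=10.0),
    "PLS": PLSRegression(n_components=4),
}

for name, model in models.items():
    model.fit(X_tr_s, y_tr)
    pred = np.ravel(model.predict(X_val_s))
    print(f"{name}: R2_val = {r2_score(y_val, pred):.2f}")
```

In the study, descriptor combinations were additionally chosen by an evolutionary algorithm; in a sketch like this, that step would replace the fixed 12-feature matrix with a search over descriptor subsets scored by cross-validated R2.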


2017 ◽  
Author(s):  
D. Flores

Artificial neural networks (ANNs) are a computational method widely used to solve complex problems and to make predictions for nonlinear systems. Multilayer perceptron artificial neural networks (MLPANNs) were used to predict the physiological response obtained by adding a specific concentration of digoxin to Tivela stultorum hearts; this organism is a model for testing cardiac drugs intended for use in humans. The MLPANN inputs were the weight, volume, length, and width of the heart; the digoxin concentration and the volume used to dilute it; and the maximum contraction, minimum contraction, filling time, and heart rate before adding digoxin. The outputs were the maximum contraction, minimum contraction, filling time, and heart rate obtained after adding digoxin to the heart. The ANNs were trained, validated, and tested with the results of the in vivo experiments, and the optimal network was chosen as the one with the smallest mean squared error. The perceptrons achieved high performance and high correlation between predicted and measured values, except for the filling-time output. Accurate predictions of the cardioactivity of T. stultorum clams after adding a specific concentration of digoxin were obtained using ANNs with one hidden layer; this could serve as a tool to facilitate laboratory experiments testing the effects of digoxin.
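
A minimal sketch of the setup described here, assuming synthetic stand-in data for the clam-heart measurements: a single-hidden-layer multilayer perceptron mapping the ten inputs listed above to the four post-digoxin outputs, with the network selected by smallest mean squared error. The data, layer sizes, and training settings are illustrative assumptions, not the authors' experiment.

```python
# Hedged sketch (not the authors' code): multi-output MLP regression with
# fabricated data laid out like the inputs/outputs named in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# 10 inputs: heart weight, volume, length, width, digoxin concentration,
# dilution volume, and pre-digoxin max contraction, min contraction,
# filling time, and heart rate.
X = rng.normal(size=(200, 10))
# 4 outputs: post-digoxin max contraction, min contraction, filling time, heart rate.
Y = X @ rng.normal(size=(10, 4)) + 0.1 * rng.normal(size=(200, 4))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=1)
scaler = StandardScaler().fit(X_tr)

# Model selection by smallest mean squared error: try a few single-hidden-layer
# sizes and keep the best one.
best_mse, best_net = np.inf, None
for n_hidden in (4, 8, 16):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000, random_state=1)
    net.fit(scaler.transform(X_tr), Y_tr)
    mse = mean_squared_error(Y_te, net.predict(scaler.transform(X_te)))
    if mse < best_mse:
        best_mse, best_net = mse, net

print(f"best hidden layer: {best_net.hidden_layer_sizes}, MSE = {best_mse:.3f}")
```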


2015 ◽  
Author(s):  
Vinicius Pegorini ◽  
Leandro Zen Karam ◽  
Christiano S. Rocha Pitta ◽  
Richardson Ribeiro ◽  
Tangriani Simioni Assmann ◽  
...  

2021 ◽  
Author(s):  
Kyle Aitken ◽  
Marina Garrett ◽  
Shawn Olsen ◽  
Stefan Mihalas

Neurons in sensory areas encode and represent stimuli. Surprisingly, recent studies have suggested that, even while performance remains stable, these representations are not fixed and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous findings that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli that are behaviorally relevant. Across experiments, the drift most often occurs along directions with the most variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks whose representations are updated by continual learning in the presence of dropout, i.e. random masking of nodes or weights, but not other types of noise. We therefore conclude that the representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
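
A minimal sketch of the proposed mechanism, assuming a toy setup rather than the paper's analysis code: a small network keeps learning the same task with dropout, its hidden-layer representation of fixed probe stimuli drifts across "sessions", and a linear readout trained on the first session is then tested on later ones. The network size, stimuli, and training schedule are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): dropout-driven representational drift
# under continual learning on an unchanging task.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

# Fixed stimuli and labels (two classes), analogous to repeated presentations across days.
X = torch.randn(200, 20)
y = (X[:, 0] > 0).long()

model = nn.Sequential(nn.Linear(20, 100), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(100, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def hidden(x):
    # Hidden-layer representation, with dropout switched off at "recording" time.
    model.eval()
    with torch.no_grad():
        return torch.relu(model[0](x))

reps = []
for session in range(5):
    model.train()
    for _ in range(200):  # continual learning on the same task, with dropout active
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    reps.append(hidden(X))

# Representations keep changing across sessions even though the task is unchanged...
for t in range(1, 5):
    drift = (reps[t] - reps[0]).norm() / reps[0].norm()
    print(f"session {t}: relative drift from session 0 = {drift:.2f}")

# ...so check whether a linear decoder fit on session-0 representations still
# classifies the drifted session-4 representations.
clf = LogisticRegression(max_iter=1000).fit(reps[0].numpy(), y.numpy())
print("session-0 decoder accuracy on session 4:", clf.score(reps[4].numpy(), y.numpy()))
```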

