Dynamic Processes in a Superconducting Adiabatic Neuron with Non-Shunted Josephson Contacts

Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1735
Author(s):  
Marina Bastrakova ◽  
Anastasiya Gorchavkina ◽  
Andrey Schegolev ◽  
Nikolay Klenov ◽  
Igor Soloviev ◽  
...  

We investigated the dynamic processes in a superconducting neuron based on Josephson contacts without resistive shunting (SC-neuron). Such a cell is a key element of perceptron-type neural networks that operate in both classical and quantum modes. Analysis of the results allowed us to identify the mode in which the transfer characteristic of the element implements the "sigmoid" activation function. A numerical treatment of the equations of motion, combined with the Monte Carlo method, revealed the influence of inertia (capacitances), dissipation, and temperature on the dynamic characteristics of the neuron.
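For readers unfamiliar with the terminology, the sigmoid activation function mentioned above can be sketched in a few lines. This is a generic illustration of the function's shape, not the paper's flux-based neuron model; the input range and units are arbitrary:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: smooth, monotonic, saturating at 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

# Transfer characteristic over a normalized input range (illustrative
# units; in the paper the neuron maps input flux to output flux, and
# the working mode is tuned so that mapping takes this shape).
x = np.linspace(-6.0, 6.0, 121)
y = sigmoid(x)
```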

2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV ◽  

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the borrower's rating function from a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when neural networks are used, there is no need to specify a general form for the rating function. Instead, a particular neural network architecture is chosen and its parameters are calculated from the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and the absence of a universal algorithm for determining the optimal network architecture. As an example of the use of neural networks to determine a borrower's rating, a model system is considered in which the rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons respectively and using a sigmoid activation function, is used for the modeling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
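The architecture described (two inner layers of three and two sigmoid neurons) can be sketched as a forward pass. This is a minimal illustration with random weights and invented feature values, not the fitted model from the article; in practice the weights would be trained on the bank's loan data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RatingMLP:
    """Perceptron with two inner layers of 3 and 2 sigmoid neurons,
    mirroring the architecture in the abstract. Weights are random
    placeholders here, not parameters fitted to real loan data."""
    def __init__(self, n_inputs):
        self.W1, self.b1 = rng.normal(size=(n_inputs, 3)), np.zeros(3)
        self.W2, self.b2 = rng.normal(size=(3, 2)), np.zeros(2)
        self.W3, self.b3 = rng.normal(size=(2, 1)), np.zeros(1)

    def rating(self, x):
        h1 = sigmoid(x @ self.W1 + self.b1)   # first inner layer (3 neurons)
        h2 = sigmoid(h1 @ self.W2 + self.b2)  # second inner layer (2 neurons)
        return sigmoid(h2 @ self.W3 + self.b3)  # rating in (0, 1)

net = RatingMLP(n_inputs=4)
borrower = np.array([0.3, 0.7, 0.1, 0.9])  # hypothetical borrower features
score = net.rating(borrower)
```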


1997 ◽  
Vol 9 (5) ◽  
pp. 1109-1126
Author(s):  
Zhiyu Tian ◽  
Ting-Ting Y. Lin ◽  
Shiyuan Yang ◽  
Shibai Tong

With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important for their diagnosis, repair, reconfiguration, and reliable application. This article studies the behavior of feedforward neural networks with a hard-limiting activation function under stuck-at faults. It is shown that stuck-at-M faults degrade the network's performance more than mixed stuck-at faults, which in turn degrade it more than stuck-at-0 faults. Furthermore, for the same percentage of faulty interconnections, the fault-tolerant ability of the network decreases as its size increases. The results of our analysis are validated by Monte Carlo simulations.
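The kind of Monte Carlo fault-injection experiment described can be sketched as follows. This is a toy single-layer network with a hard-limiting activation, whereas the article analyzes multilayer networks; the fault fraction, fault value, and network size here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def hard_limit(x):
    """Hard-limiting (threshold) activation: outputs +1 or -1."""
    return np.where(x >= 0, 1.0, -1.0)

def forward(W, x):
    return hard_limit(W @ x)

def inject_stuck_at(W, fraction, value, rng):
    """Force a random fraction of interconnection weights to a fixed
    value: 0 models stuck-at-0 faults, +/-M models stuck-at-M faults."""
    Wf = W.copy()
    idx = rng.choice(W.size, size=int(fraction * W.size), replace=False)
    Wf.flat[idx] = value
    return Wf

# Monte Carlo estimate of the output error rate under stuck-at-0 faults.
W = rng.normal(size=(5, 8))   # 5 hard-limiting neurons, 8 inputs
trials, errors = 200, 0
for _ in range(trials):
    x = rng.normal(size=8)
    Wf = inject_stuck_at(W, fraction=0.2, value=0.0, rng=rng)
    errors += np.any(forward(W, x) != forward(Wf, x))
error_rate = errors / trials
```

Repeating the experiment with `value=M` for various `M` and network sizes would reproduce the kind of comparison the article validates.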


2012 ◽  
Vol 628 ◽  
pp. 324-329
Author(s):  
F. García Fernández ◽  
L. García Esteban ◽  
P. de Palacios ◽  
A. García-Iruela ◽  
R. Cabedo Gallén

Artificial neural networks have become a powerful modeling tool. However, although they can produce very accurate outputs, they provide no information about the uncertainty of those outputs or their coverage intervals. This study describes the application of the Monte Carlo method to obtain the output uncertainty and coverage intervals of a particular type of artificial neural network: the multilayer perceptron.
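The approach can be sketched as follows: sample the inputs from their assumed uncertainty distribution, push each sample through the fixed network, and read the coverage interval off the empirical percentiles of the outputs. This is a generic Monte Carlo propagation illustration with an untrained toy perceptron and assumed input uncertainties, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A fixed one-hidden-layer perceptron (random weights stand in for a
# trained model).
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
w2, b2 = rng.normal(size=3), 0.0

def mlp(x):
    return w2 @ sigmoid(W1 @ x + b1) + b2

# Monte Carlo propagation: sample inputs, collect outputs, then take
# the standard deviation (output standard uncertainty) and percentiles
# (coverage interval) of the empirical output distribution.
x_mean = np.array([0.5, -0.2])   # assumed input estimate
x_std = np.array([0.05, 0.05])   # assumed input standard uncertainties
samples = [mlp(x_mean + x_std * rng.normal(size=2)) for _ in range(5000)]
u = np.std(samples)                           # output standard uncertainty
lo, hi = np.percentile(samples, [2.5, 97.5])  # 95 % coverage interval
```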


2019 ◽  
Author(s):  
Vladimír Kunc ◽  
Jiří Kléma

Motivation: Gene expression profiling was made cheaper by the NIH LINCS program, which profiles only ~1,000 selected landmark genes and uses them to reconstruct the whole profile. The D-GEX method employs neural networks to infer the whole profile; however, the original D-GEX can be significantly improved.

Results: We analyzed the D-GEX method and determined that the inference can be improved by using a logistic sigmoid activation function instead of the hyperbolic tangent. Moreover, we propose a novel transformative adaptive activation function that improves gene expression inference even further and generalizes several existing adaptive activation functions. Our improved neural network achieves an average mean absolute error of 0.1340, a significant improvement over our reimplementation of the original D-GEX, which achieves an average mean absolute error of 0.1637.
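One relevant identity: the logistic sigmoid and the hyperbolic tangent are affinely related, so the two activations have the same representational power and the reported gain presumably comes from differences in output range and gradient scaling during training. A small sketch of the identity (generic; this is not the D-GEX code and does not reproduce the proposed adaptive activation function):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_via_tanh(x):
    # The affine relation: logistic(x) = (tanh(x / 2) + 1) / 2
    return (np.tanh(x / 2.0) + 1.0) / 2.0

x = np.linspace(-5.0, 5.0, 11)
same = np.allclose(logistic(x), logistic_via_tanh(x))  # True
```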


2020 ◽  
Vol 1471 ◽  
pp. 012010 ◽  
Author(s):  
Heny Pratiwi ◽  
Agus Perdana Windarto ◽  
S. Susliansyah ◽  
Ririn Restu Aria ◽  
Susi Susilowati ◽  
...  
