delta rule
Recently Published Documents

TOTAL DOCUMENTS: 92 (last five years: 11)
H-INDEX: 13 (last five years: 1)

2021, Vol. 258, pp. 06024
Author(s): Anatoly Nevelev, Alfiya Kamaletdinova

The sustainability of an industrial region is rooted in the sustainability of the production process itself. Industrial production is strictly focused on the product, which plays a crucial role in the structure of production. The instability of production is associated with the delta-result, which should be a subject of scientific research. The delta rule defines the difference between the result and the product; it captures, in categorical terms, the source of instability in the region’s production and in its industrial development as a whole. The interaction of material production with science, understood as an ideal form of production, provides optimal conditions for managing the complete result of an industrial region’s life activity. The article presents the rationale for using the delta-result assessment methodology in the analysis of a region’s sustainable development, including its goal-setting processes. The results can be used to improve the efficiency of management in both industrial enterprises and institutions carrying out sectoral or territorial regulation.


Author(s): Emad Abdel-Salam, Mohamed Nouh, Yosry Azzam

The helium burning phase is the second stage in which a star consumes the nuclear fuel in its interior; in this stage, the three elements carbon, oxygen, and neon are synthesized. The aim of the present paper is twofold. First, we develop an analytical solution to the system of conformable fractional differential equations describing the helium burning network, using the series expansion method to obtain recurrence relations for the product abundances, i.e. helium, carbon, oxygen, and neon. Using four different initial abundances, we calculated 44 gas models covering a range of values of the fractional parameter. We found that the effects of the fractional parameter on the product abundances are small, which coincides with the results obtained by a previous study. Second, we introduce a mathematical model of a neural network (NN) and develop a neural network algorithm to simulate the helium burning network, using a feed-forward model trained by the backpropagation (BP) gradient descent delta rule. A comparison between the NN and the analytical models revealed very good agreement for all gas models. We conclude that the NN is a powerful tool for solving and modelling nuclear burning networks and could be applied to other stellar nuclear burning networks.
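As a minimal sketch of the training scheme named above (not the authors' code), the following NumPy example trains a small feed-forward network with the backpropagation gradient-descent delta rule on a toy regression problem; the layer sizes, learning rate, and sine target are illustrative assumptions.

```python
# Hedged sketch: feed-forward net trained by the backpropagation
# gradient-descent delta rule. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the burning-network abundances: y = sin(x).
X = np.linspace(0, np.pi, 50).reshape(-1, 1)
Y = np.sin(X)

W1 = rng.normal(scale=0.5, size=(1, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.1

for epoch in range(5000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)                     # hidden activations
    Yhat = H @ W2 + b2                           # linear output

    # Backward pass: each delta is (error term) * (activation derivative).
    delta_out = Yhat - Y                         # output delta (identity unit)
    delta_hid = (delta_out @ W2.T) * (1 - H**2)  # hidden delta (tanh')

    # Gradient-descent updates (the generalized delta rule).
    W2 -= lr * H.T @ delta_out / len(X)
    b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * X.T @ delta_hid / len(X)
    b1 -= lr * delta_hid.mean(axis=0)

print("final MSE:", float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean()))
```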


2020, Vol. 70(2), pp. 14-20
Author(s): D.B. Amirkhan, A.B. Shansharkhanov
The article discusses a method for forecasting exchange rates, with artificial neural networks acting as the forecasting tool. For the numerical testing of the proposed approach, the US dollar (USD), valued in rubles and tenge, was chosen as the most widely used currency in the world, together with the oil price in dollars. The data cover the period from 2000 to 2019. In the course of the study, the exchange-rate indicators were aligned with one another by day. To determine the dollar exchange rate with a single-layer neural network, the ADALINE algorithm and the generalized delta rule were used. The prediction algorithm was implemented in Python. The quality of the trained neural network indicates that it can be used for further prediction of exchange-rate dynamics.
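As a hedged illustration of the update the abstract names (not the article's code), the sketch below trains an ADALINE with the generalized delta rule on a synthetic lagged series; the window length, learning rate, and random-walk data stand in for the real daily exchange-rate data.

```python
# Hedged sketch: ADALINE trained with the generalized delta rule on a
# lagged time series. The series, window k, and lr are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "exchange rate": a random walk standing in for daily data.
rate = np.cumsum(rng.normal(0, 0.3, 500)) + 100.0

# Supervised pairs: predict the next value from the previous k days.
k = 5
X = np.array([rate[i:i + k] for i in range(len(rate) - k)])
y = rate[k:]

# Standardize inputs so one learning rate behaves well across lags.
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(k)
b = y.mean()
lr = 0.01

for epoch in range(50):
    for xi, target in zip(X, y):
        output = xi @ w + b        # ADALINE: linear activation
        error = target - output    # delta rule error term
        w += lr * error * xi       # w <- w + lr * (t - o) * x
        b += lr * error

print(f"last prediction: {X[-1] @ w + b:.2f}, actual: {y[-1]:.2f}")
```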


2020, Vol. 32(5), pp. 1018-1032
Author(s): Noah Frazier-Logue, Stephen José Hanson

Multilayer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech, and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (e.g., saddle points, collinearity, feature discovery) is called Dropout. The Dropout algorithm removes hidden units according to a binomial random variable with probability p prior to each update, creating random “shocks” to the network that are averaged over updates (thus creating weight sharing). In this letter, we reestablish an older parameter search method and show that Dropout is a special case of this more general model, the stochastic delta rule (SDR), published originally in 1990. Unlike Dropout, SDR redefines each weight in the network as a random variable with mean μ_w and standard deviation σ_w. Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights (accumulated in the mean values). Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight, converging in the limit to a Bayes-optimal network. We run tests on standard benchmarks (CIFAR and ImageNet) using a modified version of DenseNet and show that SDR outperforms standard Dropout in top-5 validation error by approximately 13% with DenseNet-BC 121 on ImageNet, and we find various validation error improvements in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 40 epochs, as well as improvements in training error by as much as 80%.
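The core mechanism described above can be sketched as follows (a hedged single-layer illustration, not the authors' DenseNet implementation); the learning rates and the noise-annealing constant are assumptions.

```python
# Hedged sketch of the stochastic delta rule (SDR): each weight is a
# Gaussian random variable, sampled on every forward pass; its mean and
# standard deviation are both updated from the prediction-error gradient.
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out = 4, 1
mu = rng.normal(scale=0.1, size=(n_in, n_out))   # weight means
sigma = np.full((n_in, n_out), 0.05)             # weight std deviations
lr_mu, lr_sigma, zeta = 0.05, 0.01, 0.999        # zeta anneals the noise

X = rng.normal(size=(200, n_in))
y = X @ np.array([[1.0], [-2.0], [0.5], [0.0]])  # "true" weights to recover

for epoch in range(100):
    for xi, ti in zip(X, y):
        eps = rng.standard_normal((n_in, n_out))
        w = mu + sigma * eps                     # sample one shared-weight net
        err = xi @ w - ti                        # prediction error

        grad_w = np.outer(xi, err)               # dE/dw for this sample
        mu -= lr_mu * grad_w                     # update means
        sigma -= lr_sigma * grad_w * eps         # dw/dsigma = eps (chain rule)
        sigma = np.abs(sigma) * zeta             # keep positive, anneal to 0

print("learned means:\n", mu.round(2))
```

As sigma shrinks, the sampled networks collapse onto the mean network, which is the limiting behavior the letter describes.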


Author(s): Mattias Forsgren, Peter Juslin, Ronald van den Berg

Extensive research in the behavioural sciences has addressed people’s ability to learn probabilities of stochastic events, typically assuming them to be stationary (i.e., constant over time). Only recently have there been attempts to model the cognitive processes whereby people learn – and track – non-stationary probabilities, reviving the old debate on whether learning occurs trial-by-trial or by occasional shifts between discrete hypotheses. Trial-by-trial updating models – such as the delta-rule model – have been popular in describing human learning in various contexts, but it has been argued that they are inadequate for explaining how humans update beliefs about non-stationary probabilities. Specifically, it has been claimed that these models cannot account for the discrete, stepwise updating that characterises response patterns in experiments where participants tracked a non-stationary probability based on observed outcomes. Here, we demonstrate that the rejection of trial-by-trial models was premature for two reasons. First, our experimental data suggest that the stepwise behaviour depends on details of the experimental paradigm. Hence, discreteness in response data does not necessarily imply discreteness in internal belief updating. Second, previous studies have dismissed trial-by-trial models mainly based on qualitative arguments rather than quantitative model comparison. To evaluate the models more rigorously, we performed a likelihood-based model comparison between stepwise and trial-by-trial updating models. Across eight datasets collected in three different labs, human behaviour is consistently best described by trial-by-trial updating models. Our results suggest that trial-by-trial updating plays a prominent role in the cognitive processes underlying learning of non-stationary probabilities.
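A minimal version of the trial-by-trial (delta-rule) model discussed above, tracking a non-stationary Bernoulli probability, might look like the sketch below; the change points and learning rate are illustrative assumptions, not fitted values.

```python
# Hedged sketch: delta-rule tracking of a non-stationary probability.
import numpy as np

rng = np.random.default_rng(3)

# True probability changes abruptly every 100 trials.
p_true = np.repeat([0.2, 0.8, 0.5], 100)
outcomes = rng.random(p_true.size) < p_true

alpha = 0.1     # learning rate (a free parameter when fit to behaviour)
p_hat = 0.5     # initial belief
estimates = []
for o in outcomes:
    p_hat += alpha * (o - p_hat)   # nudge the estimate toward the outcome
    estimates.append(p_hat)

print("final estimate:", round(estimates[-1], 2), "true:", p_true[-1])
```

Because every trial moves the estimate a little, the model's raw trajectory is smooth; discrete-looking response data therefore need not imply discrete internal updating.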


Decision, 2020, Vol. 7(1), pp. 55-66
Author(s): Sangil Lee, Joshua I. Gold, Joseph W. Kable

2019
Author(s): Steffen A. Herff, Shanshan Zhen, Rongjun Yu, Kat Rose Agres

Statistical learning (SL) is the ability to generate predictions based on probabilistic dependencies in the environment, an ability that is present throughout life. The effect of aging on SL is still unclear. Here, we explore statistical learning in healthy adults (40 younger and 40 older). The novel paradigm tracks learning trajectories and shows age-related differences in overall performance, yet similarities in learning rates. Bayesian models reveal further differences between younger and older adults in dealing with uncertainty in this probabilistic SL task. We test computational models of three different learning strategies: (1) Win-Stay, Lose-Shift; (2) Delta Rule Learning; and (3) Information Weights, to explore whether they capture age-related differences in performance and learning in the present task. A likely candidate mechanism emerges in the form of age-dependent differences in information weights, in which young adults more readily change their behavior but also show disproportionately strong reactions to erroneous predictions. With lower but more balanced information weights, older adults show slower behavioral adaptation but eventually arrive at more stable and accurate representations of the underlying transitional probability matrix.
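For concreteness, hedged minimal versions of two of the three strategies named above (illustrative forms, not the authors' fitted models):

```python
# Hedged sketch: update rules for strategies (1) and (2) above, for a
# binary choice between options "A" and "B". Parameters are assumptions.

def win_stay_lose_shift(last_choice: str, last_reward: bool) -> str:
    """(1) Repeat a rewarded choice, switch after a failure."""
    if last_reward:
        return last_choice
    return "B" if last_choice == "A" else "A"

def delta_rule_update(value: float, outcome: float, alpha: float = 0.1) -> float:
    """(2) Move the option's value estimate toward the observed outcome."""
    return value + alpha * (outcome - value)

print(win_stay_lose_shift("A", True))          # -> "A"
print(round(delta_rule_update(0.5, 1.0), 2))   # -> 0.55
```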


Author(s): David D. Nolte

Individual neurons are modelled as nonlinear oscillators that rely on bistability and homoclinic orbits to produce spiking potentials. Simplified mathematical models, like the FitzHugh–Nagumo and NaK models, capture successively more sophisticated behavior of individual neurons, such as thresholds and spiking. Artificial neurons are introduced that are composed of three simple features: summation of inputs, comparison against a threshold, and a saturating output. Artificial networks of neurons are defined through specific network architectures that include the perceptron, feedforward networks with hidden layers trained using the Delta Rule, and recurrent networks with feedback. A prominent example of a recurrent network is the Hopfield network, which performs operations such as associative recall. The dynamic trajectories of the Hopfield network have basins of attraction in state space that correspond to stored memories.
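The associative recall the chapter attributes to the Hopfield network can be sketched in a few lines (an illustrative example, not taken from the chapter); the two stored patterns and network size are assumptions.

```python
# Hedged sketch: Hopfield network storing two patterns with the Hebbian
# outer-product rule, then recalling one from a corrupted cue.
import numpy as np

rng = np.random.default_rng(4)

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weight matrix: symmetric, zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Corrupt the first stored pattern by flipping two units.
state = patterns[0].copy()
state[[0, 3]] *= -1

# Asynchronous updates descend the energy toward the nearest attractor,
# i.e. the basin of attraction of a stored memory.
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recalled first pattern:", np.array_equal(state, patterns[0]))
```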

