Diffusion learning algorithms for feedforward neural networks

2013 ◽  
Vol 49 (3) ◽  
pp. 334-346 ◽  
Author(s):  
B. A. Skorohod

Risks ◽ 
2021 ◽  
Vol 9 (4) ◽  
pp. 58 ◽ 
Author(s):  
Banghee So ◽  
Jean-Philippe Boucher ◽  
Emiliano A. Valdez

This article describes the techniques employed in producing a synthetic dataset of driver telematics, emulated from a similar real insurance dataset. The generated synthetic dataset has 100,000 policies that include observations of each driver's claims experience, together with associated classical risk variables and telematics-related variables. The aim of this work is to produce a resource that can be used to advance models for assessing risks in usage-based insurance. It follows a three-stage process using machine learning algorithms. In the first stage, a synthetic portfolio of the space of feature variables is generated by applying an extended SMOTE algorithm. The second stage simulates values for the number of claims as multiple binary classifications using feedforward neural networks. The third stage simulates values for the aggregated amount of claims as a regression using feedforward neural networks, with the number of claims included in the set of feature variables. The resulting dataset is evaluated by comparing the synthetic and real datasets when Poisson and gamma regression models are fitted to the respective data. Other visualizations and data summaries produce remarkably similar statistics between the two datasets. We hope that researchers interested in obtaining telematics datasets to calibrate models or learning algorithms will find our work to be valuable.
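The three-stage pipeline lends itself to a compact sketch. The following is a minimal illustration under stated assumptions, not the authors' implementation: plain SMOTE-style interpolation stands in for their extended SMOTE, a chain of binary classifiers is one way to realize the "multiple binary classifications" for claim counts, and all function names, network sizes, and data layouts (X_real, n_claims, amounts) are hypothetical.

```python
# Minimal sketch of the three-stage pipeline described above. Plain
# SMOTE-style interpolation stands in for the paper's extended SMOTE,
# and every name, network size, and data layout here is an assumption.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

def smote_like(X, n_new, k=5):
    """Stage 1: synthesize feature rows by interpolating between a
    sampled real row and one of its k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = rng.integers(0, len(X), size=n_new)
    _, neigh = nn.kneighbors(X[idx])          # column 0 is the row itself
    pick = rng.integers(1, k + 1, size=n_new)
    mates = X[neigh[np.arange(n_new), pick]]
    gap = rng.random((n_new, 1))              # random point on the segment
    return X[idx] + gap * (mates - X[idx])

def simulate_counts(X_real, n_claims, X_syn, max_claims=3):
    """Stage 2: a chain of binary classifiers, each estimating
    P(N >= j | N >= j - 1, x); sampling every link in turn yields a
    synthetic claim count per policy."""
    counts = np.zeros(len(X_syn), dtype=int)
    alive = np.ones(len(X_syn), dtype=bool)
    for j in range(1, max_claims + 1):
        mask = n_claims >= j - 1
        y = (n_claims[mask] >= j).astype(int)
        if len(np.unique(y)) < 2:             # degenerate level: stop
            break
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        clf.fit(X_real[mask], y)
        p = clf.predict_proba(X_syn)[:, 1]
        alive &= rng.random(len(X_syn)) < p   # sample this binary link
        counts += alive
    return counts

def simulate_amounts(X_real, n_claims, amounts, X_syn, syn_counts):
    """Stage 3: regress the aggregate claim amount on the features plus
    the claim count, then predict for synthetic policies with claims."""
    reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
    reg.fit(np.column_stack([X_real, n_claims]), amounts)
    out = np.zeros(len(X_syn))
    has = syn_counts > 0
    if has.any():
        out[has] = reg.predict(np.column_stack([X_syn[has],
                                                syn_counts[has]]))
    return out
```

In use, smote_like builds the synthetic feature portfolio, simulate_counts draws a claim count per synthetic policy by sampling each binary link in turn, and simulate_amounts predicts the aggregate claim amount with the sampled count appended to the features, mirroring the third stage described above.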


1993 ◽  
Vol 115 (1) ◽  
pp. 38-43 ◽  
Author(s):  
H. S. M. Beigi ◽  
C. J. Li

Previous studies have suggested that, for moderately sized neural networks, classical Quasi-Newton methods yield the best convergence properties among the state-of-the-art methods [1]. This paper describes a set of even better learning algorithms based on a class of Quasi-Newton optimization techniques called Self-Scaling Variable Metric (SSVM) methods. One characteristic of SSVM methods is that they provide search directions that are invariant under scaling of the objective function. Using an XOR benchmark and an encoder benchmark, simulations of the SSVM algorithms for the training of general feedforward neural networks were carried out to study their performance. It is shown that the SSVM methods reduce the number of iterations required for convergence to 40 to 60 percent of that required by classical Quasi-Newton methods, which in general converge two to three orders of magnitude faster than steepest-descent techniques.
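To make the self-scaling idea concrete, here is a minimal NumPy sketch of one representative SSVM update: a self-scaling BFGS step in the Oren-Luenberger form. This is a sketch under assumptions, not the paper's exact algorithm; the variant studied there may differ, and the line search and network gradient code are assumed rather than implemented.

```python
# A minimal sketch of one self-scaling BFGS step (a member of the SSVM
# family). Line search, convergence tests, and the network-specific
# gradient computation are assumed, not implemented.
import numpy as np

def ssvm_step(H, x, grad, grad_fn, alpha):
    """One self-scaling quasi-Newton update of the inverse Hessian
    approximation H. grad is the gradient at x, grad_fn(x) returns the
    gradient of the loss, and alpha is a step length that a proper
    line search would normally choose."""
    d = -H @ grad                       # quasi-Newton search direction
    s = alpha * d                       # step actually taken
    x_new = x + s
    grad_new = grad_fn(x_new)
    y = grad_new - grad                 # change in the gradient
    sy = s @ y
    if sy <= 1e-12:                     # curvature condition failed:
        return H, x_new, grad_new       # skip the update this iteration
    gamma = sy / (y @ H @ y)            # self-scaling factor
    rho = 1.0 / sy
    I = np.eye(len(x))
    V = I - rho * np.outer(s, y)
    # Standard BFGS inverse update applied to the rescaled gamma * H
    H_new = V @ (gamma * H) @ V.T + rho * np.outer(s, s)
    return H_new, x_new, grad_new

# Toy usage: a few steps on a badly scaled quadratic f(x) = x.T A x / 2
A = np.diag([1.0, 10.0])
grad_fn = lambda x: A @ x
x, H = np.array([1.0, 1.0]), np.eye(2)
g = grad_fn(x)
for _ in range(20):
    H, x, g = ssvm_step(H, x, g, grad_fn, alpha=0.05)
```

The factor gamma rescales the inverse-Hessian approximation so that multiplying the objective by a constant leaves the search directions unchanged, which is the scale-invariance property the abstract refers to.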


2020 ◽  
Vol 53 (2) ◽  
pp. 1108-1113 ◽ 
Author(s):  
Magnus Malmström ◽  
Isaac Skog ◽  
Daniel Axehill ◽  
Fredrik Gustafsson
