Using the Stochastic Gradient Descent Optimization Algorithm on Estimating of Reactivity Ratios

Materials
2021
Vol 14 (16)
pp. 4764
Author(s):  
Iosif Sorin Fazakas-Anca ◽  
Arina Modrea ◽  
Sorin Vlase

This paper describes an improved method of calculating reactivity ratios by applying gradient descent, the optimization algorithm used in training neural networks. The presented method is an integral method and has been compared to the following existing methods: Fineman–Ross, Tidwell–Mortimer, Kelen–Tüdös, extended Kelen–Tüdös and the Error in Variables Method. The reactivity ratios obtained at different levels of conversion were compared on the basis of the Fisher criterion. The new calculation method shows better results than these other methods.
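The idea of estimating reactivity ratios by gradient descent can be sketched as follows. This is a minimal illustration, not the authors' integral method: it assumes the Mayo–Lewis terminal model for instantaneous copolymer composition, synthetic composition data, and finite-difference gradients; all function names are hypothetical.

```python
import numpy as np

def mayo_lewis(f1, r1, r2):
    # Instantaneous copolymer composition F1 from monomer feed fraction f1
    # under the Mayo-Lewis terminal model.
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

def fit_reactivity_ratios(f1, F1, lr=0.05, steps=20000):
    # Plain gradient descent on the squared-error loss; gradients are
    # approximated by central finite differences for simplicity.
    r = np.array([1.0, 1.0])  # initial guess for (r1, r2)
    eps = 1e-6
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):
            rp, rm = r.copy(), r.copy()
            rp[i] += eps
            rm[i] -= eps
            loss_p = np.sum((mayo_lewis(f1, *rp) - F1) ** 2)
            loss_m = np.sum((mayo_lewis(f1, *rm) - F1) ** 2)
            grad[i] = (loss_p - loss_m) / (2 * eps)
        r -= lr * grad
    return r

# Synthetic composition data generated with known r1 = 0.5, r2 = 2.0
f1 = np.linspace(0.1, 0.9, 9)
F1 = mayo_lewis(f1, 0.5, 2.0)
r1_est, r2_est = fit_reactivity_ratios(f1, F1)
```

In practice one would use analytic gradients and experimental (noisy) composition data; the sketch only shows how a descent-type optimizer replaces the linearizations used by Fineman–Ross or Kelen–Tüdös.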

Author(s):  
Arnulf Jentzen ◽  
Benno Kuckuck ◽  
Ariel Neufeld ◽  
Philippe von Wurstemberger

Abstract Stochastic gradient descent (SGD) optimization algorithms are key ingredients in a series of machine learning applications. In this article we perform a rigorous strong error analysis for SGD optimization algorithms. In particular, we prove for every arbitrarily small $\varepsilon \in (0,\infty)$ and every arbitrarily large $p \in (0,\infty)$ that the considered SGD optimization algorithm converges in the strong $L^p$-sense with order $1/2-\varepsilon$ to the global minimum of the objective function of the considered stochastic optimization problem under standard convexity-type assumptions on the objective function and relaxed assumptions on the moments of the stochastic errors appearing in the employed SGD optimization algorithm. The key ideas in our convergence proof are, first, to employ techniques from the theory of Lyapunov-type functions for dynamical systems to develop a general convergence machinery for SGD optimization algorithms based on such functions, then, to apply this general machinery to concrete Lyapunov-type functions with polynomial structures and, thereafter, to perform an induction argument along the powers appearing in the Lyapunov-type functions in order to achieve for every arbitrarily large $p \in (0,\infty)$ strong $L^p$-convergence rates.
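The strong $L^p$-convergence rate of order $1/2-\varepsilon$ can be observed numerically in a toy case. The sketch below (not the article's analysis) minimizes the stochastic quadratic objective $\mathbb{E}[(x-Z)^2]/2$ with $Z \sim \mathrm{Uniform}(-1,1)$ and step sizes $\gamma_n = 1/n$; here the SGD iterate equals the running sample mean of the $Z_n$, so the strong $L^2$ error after $N$ steps is $\sqrt{\operatorname{Var}(Z)/N} \approx N^{-1/2}$, matching the theoretical rate.

```python
import random

def sgd(num_steps, seed):
    # SGD for E[(x - Z)^2]/2 with Z ~ Uniform(-1, 1); global minimum x* = 0.
    # Update: x_{n+1} = x_n - gamma_n * (x_n - Z_n) with gamma_n = 1/n.
    # Note gamma_1 = 1, so the first step discards the initial point.
    rng = random.Random(seed)
    x = 5.0  # arbitrary starting point
    for n in range(1, num_steps + 1):
        z = rng.uniform(-1.0, 1.0)
        x -= (1.0 / n) * (x - z)  # unbiased stochastic gradient step
    return x

# Monte Carlo estimate of the strong L^2 error E[|x_N - x*|^2]^(1/2)
errors = [abs(sgd(10000, seed)) for seed in range(200)]
l2_error = (sum(e * e for e in errors) / len(errors)) ** 0.5
```

For $N = 10000$ the predicted error is $\sqrt{(1/3)/N} \approx 0.0058$, and the Monte Carlo estimate lands close to it; the same experiment with larger $p$ illustrates the strong $L^p$ rates the article proves.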


2019
Vol 63 (2)
pp. 267-282
Author(s):  
Avinash Ratre

Abstract Crowd emotion understanding is an interesting research area that helps security personnel read the emotion/activity of the crowd in a locality. Most traditional methods rely on low-level visual features to understand crowd emotions, which widens the gap between the low-level features and the high-level semantics. With the aim of developing an automatic method for emotion recognition, this paper utilizes a deep convolutional neural network (deep CNN). For effective emotion recognition, key frames of the video are first selected using the wavelet-based Bhattacharyya distance. The key frames are fed to the space-time interest points descriptor, which extracts the features and forms the input vector to the classifier. The deep CNN is trained using the proposed Stochastic Gradient Descent–Whale Optimization Algorithm, which unifies the standard stochastic gradient descent algorithm with the whale optimization algorithm. The proposed classifier recognizes crowd emotions such as angry, escape, fight, happy, normal, running/walking and violence. The analysis showed that the proposed approach achieved a maximal accuracy, specificity and sensitivity of 0.9693, 0.9936 and 0.9675, respectively.
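The key-frame selection step can be illustrated with a minimal sketch. This is not the paper's wavelet-based variant: it compares plain intensity histograms of consecutive frames with the Bhattacharyya distance and keeps a frame whenever it differs sufficiently from the last selected key frame; the threshold and function names are illustrative assumptions.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    # Distance between two normalized histograms p and q.
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))

def select_key_frames(frames, threshold=0.1, bins=16):
    # Keep a frame as a key frame when its intensity histogram differs
    # sufficiently (in Bhattacharyya distance) from the last key frame.
    def hist(frame):
        h, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)

    keys = [0]
    ref = hist(frames[0])
    for i in range(1, len(frames)):
        h = hist(frames[i])
        if bhattacharyya_distance(ref, h) > threshold:
            keys.append(i)
            ref = h
    return keys

# Toy video: five dark frames followed by five bright frames; only the
# first frame and the first bright frame should be selected.
frames = [np.full((8, 8), 0.2)] * 5 + [np.full((8, 8), 0.8)] * 5
keys = select_key_frames(frames)
```

A real pipeline would compute the histograms on wavelet subbands, as the paper does, before the distance comparison.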

