Development of a local scour prediction model clustered by soil class

Author(s):  
M. Annad ◽  
A. Lefkir ◽  
M. Mammar-kouadri ◽  
I. Bettahar

Abstract Several studies have assessed local scour formulas in order to select the most appropriate one. Confronted with the limits of these formulas, further studies have proposed new ones. Generalizing a single scour formula to all soil classes seems too coarse for such a complex phenomenon, which depends on several parameters, and may lead to considerable uncertainties in scour estimation. This study proposes several new scour formulas for different granulometric classes of the streambed by exploiting a large field database. The new scour formulas are based on multiple non-linear regression (MNLR) models. Supervised learning is used as an optimization tool to fit the hyper-parameters of each new equation via the gradient descent algorithm. The results show that the new formulas proposed in this study perform better than several empirical formulas chosen for comparison. The results are presented as seven new formulas, together with design charts (abacuses) for the calculation of local scour by soil class.
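As an illustration of the optimization step described in this abstract, the following minimal sketch fits the hyper-parameters of a toy power-law scour model by gradient descent on synthetic data. The model form `ds = a * Fr**b`, the variable names, and all values are hypothetical stand-ins, not the authors' actual formulas or field database.

```python
import numpy as np

# Hypothetical power-law scour model ds = a * Fr**b, fitted by gradient
# descent on the mean squared error (synthetic data, illustrative only).
rng = np.random.default_rng(0)
Fr = rng.uniform(0.2, 2.0, 200)      # synthetic Froude numbers
ds_obs = 1.5 * Fr ** 0.7             # synthetic "observed" scour depths

a, b = 1.0, 1.0                      # initial hyper-parameters
lr = 0.05                            # learning rate
for _ in range(10_000):
    pred = a * Fr ** b
    err = pred - ds_obs
    grad_a = np.mean(2.0 * err * Fr ** b)                    # dMSE/da
    grad_b = np.mean(2.0 * err * a * Fr ** b * np.log(Fr))   # dMSE/db
    a -= lr * grad_a
    b -= lr * grad_b
```

In this noiseless toy setting the loop recovers the generating coefficients; a separate model would be fitted per granulometric class.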

2021 ◽  
Author(s):  
Nishchal J ◽  
Neel Bhandari

Information is mounting exponentially, and the world increasingly mines knowledge from Big Data. Machine learning uses labelled data for automated learning and data analysis. Linear regression is a statistical method for predictive analysis; gradient descent iteratively follows the gradient of a cost function, here the mean squared error, to minimize it. This work presents an insight into three variants of the gradient descent algorithm, namely batch gradient descent, stochastic gradient descent, and mini-batch gradient descent, implements them on a linear regression dataset, and examines their computational complexity together with the factors, such as learning rate, batch size, and number of iterations, that affect the efficiency of each algorithm.
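The three variants compared in this work differ only in how many samples feed each parameter update. A minimal sketch on a synthetic linear-regression task (all names and values are illustrative, not the paper's dataset):

```python
import numpy as np

# Synthetic linear-regression task: y = 3*x + 2, no label noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 2.0

def fit(batch_size, lr=0.05, epochs=200):
    """Gradient descent on MSE; batch_size selects the variant."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)                 # reshuffle each epoch
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            xb, yb = X[sel, 0], y[sel]
            err = w * xb + b - yb
            w -= lr * np.mean(2.0 * err * xb)    # dMSE/dw on this batch
            b -= lr * np.mean(2.0 * err)         # dMSE/db on this batch
    return w, b

batch_wb = fit(batch_size=200)   # batch GD: one update per epoch
sgd_wb = fit(batch_size=1)       # stochastic GD: one sample per update
mini_wb = fit(batch_size=32)     # mini-batch GD: a compromise
```

All three recover the same line here; they trade off updates per epoch (and hence cost per epoch) against gradient noise.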


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xiaoman Bian ◽  
Rushi Lan ◽  
Xiaoqin Wang ◽  
Chen Chen ◽  
Zhenbing Liu ◽  
...  

In recent years, hashing learning has received increasing attention in supervised video retrieval. However, most existing supervised video hashing approaches design hash functions based on pairwise similarity or triple relationships and focus on local information, which results in low retrieval accuracy. In this work, we propose a novel supervised framework called discriminative codebook hashing (DCH) for large-scale video retrieval. The proposed DCH encourages samples within the same category to converge to the same code word and maximizes the mutual distances among different categories. Specifically, we first construct the discriminative codebook via a predefined distance among code words, with Bernoulli distributions handling each hash bit. Then, we use the composite Kullback–Leibler (KL) divergence to align the neighborhood structures between the high-dimensional space and the Hamming space. The proposed DCH is optimized via the gradient descent algorithm. Experimental results on three widely used video datasets verify that the proposed DCH outperforms several state-of-the-art methods.
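As a rough illustration of the KL-alignment idea (not the authors' implementation), the snippet below compares a soft neighborhood distribution built from feature-space distances with one built from Hamming distances between hash codes; the helper names, bandwidth, and distance values are all hypothetical:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (eps for stability)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def neighborhood_dist(dists, sigma=1.0):
    """Soft neighborhood probabilities from one row of pairwise distances."""
    w = np.exp(-dists / sigma)
    return w / w.sum()

d_high = np.array([0.1, 0.4, 2.0, 3.0])   # distances in the feature space
d_ham = np.array([1.0, 1.0, 3.0, 4.0])    # Hamming distances of hash codes
p = neighborhood_dist(d_high)
q = neighborhood_dist(d_ham)
loss = kl(p, q)   # the quantity a KL-alignment objective would drive down
```

Gradient descent on such a loss pulls the Hamming-space neighborhood structure toward the feature-space one; the loss is zero only when the two distributions match.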


Author(s):  
Marco Mele ◽  
Cosimo Magazzino ◽  
Nicolas Schneider ◽  
Floriana Nicolai

Abstract Although the literature on the relationship between economic growth and CO2 emissions is extensive, the use of machine learning (ML) tools remains in its early stages. In this paper, we assess this nexus for Italy using innovative algorithms, with yearly data for the 1960–2017 period. We develop three distinct models: batch gradient descent (BGD), stochastic gradient descent (SGD), and the multilayer perceptron (MLP). Although Italy experienced a phase of low economic growth, the predictive model reveals that CO2 emissions increased. Compared to the observed statistical data, the algorithm shows a correlation between low growth and a higher CO2 increase, which contradicts the main strand of the literature. Based on this outcome, adequate policy recommendations are provided.


Photonics ◽  
2021 ◽  
Vol 8 (5) ◽  
pp. 165
Author(s):  
Shiqing Ma ◽  
Ping Yang ◽  
Boheng Lai ◽  
Chunxuan Su ◽  
Wang Zhao ◽  
...  

For a high-power slab solid-state laser, high output power and high output beam quality are the most important performance indicators. Adaptive optics systems can significantly improve beam quality by compensating for the phase distortions of the laser beams. In this paper, we developed an improved algorithm called Adaptive Gradient Estimation Stochastic Parallel Gradient Descent (AGESPGD) for beam cleanup of a solid-state laser. A second-order gradient of the search point was introduced to modify the gradient estimation and, together with an adaptive gain coefficient, was incorporated into the classical Stochastic Parallel Gradient Descent (SPGD) algorithm. The improved algorithm accelerates convergence of the search and prevents it from falling into a local extremum. Simulation and experimental results show that this method reduces the number of iterations by 40%, and that its stability is improved compared with the original SPGD method.
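For reference, the classical SPGD update that AGESPGD builds upon perturbs all control channels simultaneously and correlates the measured change in a quality metric with the applied perturbation. A toy sketch with a synthetic metric (the actuator count, gains, and the metric itself are illustrative, not the paper's optical setup):

```python
import numpy as np

# Toy SPGD loop: maximize a quality metric J by two-sided random
# perturbations of a control vector u (e.g. deformable-mirror voltages).
rng = np.random.default_rng(2)
target = rng.normal(size=8)              # hypothetical optimal control vector

def metric(u):
    # Stand-in for a measured beam-quality metric (peak at u == target).
    return -np.sum((u - target) ** 2)

u = np.zeros(8)
gain, amp = 0.5, 0.05                    # update gain, perturbation amplitude
for _ in range(2000):
    du = amp * rng.choice([-1.0, 1.0], size=8)   # Bernoulli perturbation
    dJ = metric(u + du) - metric(u - du)         # two-sided metric change
    u += gain * dJ * du                          # classical SPGD update
```

The adaptive-gain and second-order-gradient modifications described in the abstract replace the fixed `gain` and the simple two-sided difference used here.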


1995 ◽  
Vol 3 (3) ◽  
pp. 133-142 ◽  
Author(s):  
M. Hana ◽  
W.F. McClure ◽  
T.B. Whitaker ◽  
M. White ◽  
D.R. Bahler

Two artificial neural network models were used to estimate the nicotine content of tobacco: (i) a back-propagation network and (ii) a linear network. The back-propagation network consisted of an input layer, one hidden layer and an output layer; the linear network consisted of an input layer and an output layer. Both networks used the generalised delta rule for learning. The performance of both networks was compared to that of the multiple linear regression (MLR) method of calibration. The nicotine content in tobacco samples was estimated for two different data sets. Data set A contained 110 near infrared (NIR) spectra, each consisting of reflected energy at eight wavelengths. Data set B consisted of 200 NIR spectra, each with 840 spectral data points. The fast Fourier transform was applied to data set B in order to compress each spectrum into 13 Fourier coefficients. For data set A, the linear regression model gave the best results, followed by the back-propagation network and then the linear network; its true performance exceeded that of the back-propagation and linear networks by 14.0% and 18.1%, respectively. For data set B, the back-propagation network gave the best result, followed by MLR and the linear network, with the linear network and MLR models giving almost the same results. The true performance of the back-propagation network exceeded that of the MLR and linear network models by 35.14%.
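The FFT compression step for data set B can be sketched as follows, using a synthetic 840-point signal in place of real NIR data; reading "13 Fourier coefficients" as the 13 lowest-frequency complex terms is an assumption of this sketch:

```python
import numpy as np

# Synthetic stand-in for an 840-point NIR spectrum (two smooth components).
n = 840
t = np.arange(n) / n
spectrum = np.sin(2 * np.pi * 2 * t) + 0.3 * np.cos(2 * np.pi * 5 * t)

# Compress: keep only the 13 lowest-frequency Fourier coefficients.
coeffs = np.fft.rfft(spectrum)[:13]
features = np.concatenate([coeffs.real, coeffs.imag])   # real-valued inputs

# Reconstruct from the retained coefficients to gauge information loss.
full = np.zeros(n // 2 + 1, dtype=complex)
full[:13] = coeffs
recon = np.fft.irfft(full, n=n)
```

For a smooth spectrum whose energy sits in the low frequencies, the truncated reconstruction is nearly exact, which is what makes such aggressive compression viable as network input.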

