Gradient Descent Methods
Recently Published Documents


TOTAL DOCUMENTS: 38 (FIVE YEARS: 15)

H-INDEX: 9 (FIVE YEARS: 1)

eLife, 2021, Vol 10
Author(s): Jakob Jordan, Maximilian Schmidt, Walter Senn, Mihai A Petrovici

Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
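
The paper's search machinery is not reproduced here, but the core loop it describes, scoring compact symbolic weight-update expressions on a task and keeping the fittest, can be sketched in a few lines. Everything below (the primitive set, the toy reward task, the evolutionary schedule) is an illustrative assumption, not the authors' setup:

```python
import random
import numpy as np

# Primitive expressions available to a candidate rule; r is the reward.
PRIMS = ["r*pre*post", "r*pre", "pre*post", "pre", "w", "r*w"]

def random_rule():
    # A rule is a weighted sum of two primitives, e.g. a*(r*pre) + b*w.
    return (random.choice(PRIMS), random.choice(PRIMS),
            random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))

def apply_rule(rule, pre, post, w, r):
    t1, t2, a, b = rule
    env = {"pre": pre, "post": post, "w": w, "r": r}
    return a * eval(t1, {}, env) + b * eval(t2, {}, env)

def fitness(rule, eta=0.1):
    # Toy reward-driven task: a thresholded unit sees binary patterns and
    # is rewarded when its output matches a fixed hidden labelling.
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=(100, 5)).astype(float)
    labels = (x @ np.array([1.0, -1.0, 1.0, 0.0, 0.0])) > 0
    w = np.zeros(5)
    total_reward = 0.0
    for xi, li in zip(x, labels):
        post = float(xi @ w > 0)
        r = 1.0 if post == li else -1.0
        w = w + eta * apply_rule(rule, xi, post, w, r)
        total_reward += r
    return total_reward

random.seed(1)
pop = [random_rule() for _ in range(50)]
for gen in range(20):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [random_rule() for _ in range(40)]  # elites + fresh rules
best = max(pop, key=fitness)
print(f"best rule: {best[2]:+.2f}*({best[0]}) {best[3]:+.2f}*({best[1]})")
```

Because the discovered rule is a short symbolic expression rather than a weight table, it can be read off and interpreted directly, which is the interpretability point the abstract emphasizes.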


2021, Vol 2021, pp. 1-16
Author(s): Zuowen Tan, Haohan Zhang, Peiyi Hu, Rui Gao

The Internet of Things (IoT) is one of the latest evolutions of the internet. Cloud computing is a key technique for meeting the computational demands of large numbers of distributed IoT devices and sensors by employing various machine learning models. Gradient descent methods are widely used in cloud computing to find the optimal coefficients of a machine learning model. Commonly, the data are distributed among multiple data owners, while the target function is held by the model owner. The model owner can train its model over the data owners' data and provide predictions. However, the confidentiality of the dataset or of the target function may not be preserved during these computations, which raises security threats and privacy risks. To address these data and model privacy concerns, we present two new outsourced privacy-preserving gradient descent (OPPGD) schemes, over horizontally and vertically partitioned data among multiple parties, respectively. Compared to previously proposed solutions, our methods are more comprehensive and apply in a more general setting. Data privacy and model privacy are preserved throughout the entire learning and prediction procedure. In addition, the execution performance evaluation demonstrates that our schemes help the model owner optimize its target function and provide exact predictions with high efficiency and accuracy.
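
The cryptographic machinery of the OPPGD schemes is beyond a short sketch, but the underlying computation pattern over horizontally partitioned data, where each data owner computes a gradient on its own rows and the model owner aggregates, can be illustrated in plaintext. The linear model, data sizes, and learning rate below are assumptions; the paper's schemes additionally protect every exchanged value:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])

# Three data owners, each holding a horizontal (row-wise) partition.
partitions = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    partitions.append((X, y))

def local_gradient(w, X, y):
    # Least-squares gradient, computed locally by one data owner on its
    # own rows; only this gradient would be shared (in the paper, in
    # protected form) with the model owner.
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
for step in range(200):
    grads = [local_gradient(w, X, y) for X, y in partitions]
    w -= 0.1 * np.mean(grads, axis=0)   # model owner aggregates and updates

print("recovered coefficients:", np.round(w, 3))
```

In the vertically partitioned case each party holds a column slice instead, so the parties must jointly compute the inner products; that coordination is exactly where the paper's privacy-preserving protocol does its work.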


2021
Author(s): Brice Tsakam

We present a novel system identification method that overcomes the local minima problem and converges to accurate system parameters. In the first stage, the algorithm minimizes the Wasserstein distance by means of an optimal transport solver, without resorting to a gradient of the Wasserstein cost. In the second stage, the algorithm achieves fast convergence using conventional gradient descent methods on the Euclidean distance cost.
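
The abstract does not spell out the solver, so the following is only a toy illustration of the two-stage idea: a frequency-identification problem whose squared-error landscape has many local minima, a derivative-free sweep of a discrete optimal-transport cost (exact assignment between the two signals viewed as 2D point clouds) for stage one, and plain gradient descent on the squared error for stage two. The signal model, grid, and step sizes are all assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
true_omega = 2.3
y_obs = np.sin(true_omega * t) + 0.02 * rng.normal(size=t.size)

def model(omega):
    return np.sin(omega * t)

def graph_wasserstein(y_a, y_b):
    # Treat each signal as a 2D point cloud {(t_i, y_i)} and solve the
    # exact optimal assignment between the two clouds: a small discrete
    # optimal transport problem, no gradient of the cost needed.
    pa = np.column_stack([t, y_a])
    pb = np.column_stack([t, y_b])
    cost = cdist(pa, pb, "sqeuclidean")
    row, col = linear_sum_assignment(cost)
    return cost[row, col].mean()

# Stage 1: derivative-free search over the transport cost. The squared-error
# landscape in omega is riddled with local minima; the transport cost is
# much better behaved, so a coarse sweep lands near the global optimum.
candidates = np.linspace(0.5, 5.0, 40)
omega = candidates[np.argmin([graph_wasserstein(model(c), y_obs)
                              for c in candidates])]

# Stage 2: plain gradient descent on the Euclidean (squared-error) cost,
# which converges quickly once started inside the right basin.
for _ in range(200):
    residual = model(omega) - y_obs
    grad = 2.0 * np.mean(residual * t * np.cos(omega * t))
    omega = omega - 0.01 * grad

print("estimated omega:", round(float(omega), 3))
```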


2020, Vol 11 (4), pp. 131-141
Author(s): L. L. Titova, O. V. Nadtochiy

This article analyzes the optimization methods most commonly used in practical engineering problems of finding the extremum of multidimensional functions and, based on the identified properties, forms recommendations for choosing the best method for different data sets. The analysis considers various implementations of gradient descent methods, momentum methods, adaptive methods, and quasi-Newton methods, and summarizes the advantages and problems of each method in use. A computer program implementing all of the considered methods was developed. A computational experiment performed on three test functions showed that the zero-order methods of Rosenbrock and Powell proved to be the most effective.
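
The article's program is not available, but a comparable experiment is easy to run with SciPy's built-in optimizers, which include both gradient-based methods (CG, the quasi-Newton BFGS) and the zero-order Powell and Nelder-Mead methods; the test function and starting point below are assumptions:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0, -1.2, 1.0])
for method in ["CG", "BFGS", "Powell", "Nelder-Mead"]:
    # CG and BFGS use the analytic gradient; Powell and Nelder-Mead are
    # zero-order (derivative-free) methods.
    jac = rosen_der if method in ("CG", "BFGS") else None
    res = minimize(rosen, x0, jac=jac, method=method,
                   options={"maxiter": 20000})
    print(f"{method:12s} f*={res.fun:.2e}  nfev={res.nfev}  "
          f"success={res.success}")
```

Comparing the final cost against the number of function evaluations per method reproduces the kind of trade-off the article reports: zero-order methods need no derivatives but typically pay for it in evaluations.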


2020, Vol 75 (1), pp. 81-102
Author(s): Polycarp Omondi Okock, Jozef Urbán, Karol Mikula

This paper presents an efficient 3D shape registration method using distance maps and the stochastic gradient descent method. The proposed algorithm aims to find the optimal affine transformation parameters (translation, scaling, and rotation) that map two distance maps onto each other. These distance maps represent the shapes as interfaces, and we apply level set methods to calculate the signed distance to these interfaces. To maximize the similarity between the two distance maps, we use the sum of squared differences (SSD) as the cost and minimize it with gradient descent methods. To address the main shortcoming of the standard gradient descent method, namely the many iterations required to reach the minimum, we implemented the stochastic gradient descent method. The outcomes of the two methods are compared to show the advantages of using stochastic gradient descent. In addition, we implement computational optimizations such as parallelization to speed up the registration process.
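
A full 3D affine registration is too long to sketch here, but the core idea, minimizing the SSD between two signed distance maps with stochastic gradient descent, carries over to a 2D, translation-only toy version. The shapes, sample size, and step size below are assumptions, not the paper's configuration:

```python
import numpy as np

n = 128
yy, xx = np.mgrid[0:n, 0:n].astype(float)

def sdf_circle(cx, cy, radius):
    # Signed distance to a circle: negative inside, positive outside.
    return np.hypot(xx - cx, yy - cy) - radius

fixed = sdf_circle(79.0, 54.0, 20.0)        # target shape
rng = np.random.default_rng(3)

def moving(tx, ty):
    # Moving shape under the current translation estimate.
    return sdf_circle(64.0 + tx, 64.0 + ty, 20.0)

theta = np.array([0.0, 0.0])                # translation estimate (tx, ty)
for it in range(300):
    mov = moving(*theta)
    gy, gx = np.gradient(mov)               # spatial gradients of the map
    # Stochastic step: estimate the SSD gradient from a random subset of
    # grid points instead of the full grid.
    iy = rng.integers(0, n, 160)
    ix = rng.integers(0, n, 160)
    r = mov[iy, ix] - fixed[iy, ix]
    # Shifting the map by tx moves its values against the spatial x-axis,
    # hence the minus signs: d(mov)/d(tx) = -d(mov)/dx.
    grad = np.array([np.mean(-2.0 * r * gx[iy, ix]),
                     np.mean(-2.0 * r * gy[iy, ix])])
    theta = theta - 0.5 * grad

print("recovered translation:", np.round(theta, 2))  # close to [15, -10]
```

Sampling a small random subset of grid points per iteration is what makes the stochastic variant cheaper per step than full-grid gradient descent, which is the speed-up the paper measures.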

