AN ERROR ANALYSIS OF LAVRENTIEV REGULARIZATION IN LEARNING THEORY

2009 · Vol. 02 (01) · pp. 129–140
Author(s): J. K. Sahoo, Arindama Singh

In this paper, we study how Lavrentiev regularization can be used in the context of learning theory, especially in regularization networks, which are closely related to support vector machines. We briefly discuss formulations of learning from examples in the context of ill-posed inverse problems and regularization. We then study the interplay between Lavrentiev regularization of the continuous and of the discretized ill-posed inverse problems. As the main result of this paper, we give an improved probabilistic bound for regularization networks, or least-squares algorithms, which allows the regularization parameter to be chosen from a larger interval.
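The abstract above does not give the scheme explicitly, but the standard Lavrentiev (simplified) regularization it refers to replaces Tikhonov's normal equations (AᵀA + αI)x = Aᵀy with the direct system (A + αI)x = y for a symmetric positive semi-definite operator A, avoiding the squaring of the condition number. A minimal NumPy sketch on a synthetic ill-conditioned operator (all matrices and parameter values here are illustrative assumptions, not from the paper):

```python
import numpy as np

def lavrentiev_solve(A, y, alpha):
    """Lavrentiev regularization: solve (A + alpha*I) x = y for SPD A."""
    return np.linalg.solve(A + alpha * np.eye(A.shape[0]), y)

rng = np.random.default_rng(0)
# Synthetic ill-conditioned symmetric positive-definite operator with
# rapidly decaying eigenvalues (a stand-in for the integral operators
# that arise in learning theory).
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigenvalues = 10.0 ** -np.linspace(0.0, 8.0, 50)
A = Q @ np.diag(eigenvalues) @ Q.T
x_true = rng.standard_normal(50)
y = A @ x_true + 1e-6 * rng.standard_normal(50)  # noisy data

x_reg = lavrentiev_solve(A, y, alpha=1e-4)  # alpha chosen for illustration
```

Note that only one linear solve in A is needed per value of the regularization parameter α, which is what makes studying its admissible interval (the paper's main concern) computationally cheap.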

2021 · Vol. 39 (4) · pp. 1190–1197
Author(s): Y. Ibrahim, E. Okafor, B. Yahaya

Manual grid-search tuning of machine learning hyperparameters is very time-consuming. To curb this problem, we propose using a genetic algorithm (GA) to select optimal hyperparameters of a radial-basis-function support vector machine (RBF-SVM): the regularization (cost) parameter C and the kernel parameter γ. The resulting optimal parameters were used during the training of face recognition models. To train the models, we independently extracted features from the ORL face image dataset using local binary patterns (handcrafted) and deep learning architectures (pretrained variants of VGGNet). The resulting features were passed as input to either a linear SVM or the optimized RBF-SVM. The results show that models built from the optimized RBF-SVM combined with deep-learning or handcrafted features outperform those built from the linear SVM with the same features in most of the data splits. The study demonstrates that it is profitable to optimize the hyperparameters of an SVM to obtain the best classification performance.

Keywords: Face Recognition, Feature Extraction, Local Binary Patterns, Transfer Learning, Genetic Algorithm, Support Vector Machines.
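A GA search over (C, γ) of the kind the abstract describes can be sketched as follows. The fitness function here is a hypothetical smooth stand-in for cross-validated RBF-SVM accuracy (in the paper it would be the actual validation score of a trained SVM); the search box, population size, and operators are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(log_c, log_gamma):
    # Hypothetical surrogate for cross-validated RBF-SVM accuracy.
    # Peak placed at log10(C)=2, log10(gamma)=-3 purely for illustration.
    return np.exp(-((log_c - 2.0) ** 2 + (log_gamma + 3.0) ** 2) / 4.0)

def ga_search(pop_size=30, generations=40, mut_sigma=0.3):
    lo = np.array([-2.0, -6.0])   # search box for (log10 C, log10 gamma)
    hi = np.array([4.0, 1.0])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    best, best_fit = pop[0].copy(), -np.inf
    for _ in range(generations):
        fit = np.array([fitness(c, g) for c, g in pop])
        i = int(np.argmax(fit))
        if fit[i] > best_fit:                    # elitist bookkeeping
            best, best_fit = pop[i].copy(), fit[i]
        # binary-tournament selection of parents
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # arithmetic crossover between paired parents
        w = rng.random((pop_size, 1))
        children = w * parents + (1.0 - w) * parents[::-1]
        # Gaussian mutation, clipped back into the search box
        children += rng.normal(0.0, mut_sigma, children.shape)
        pop = np.clip(children, lo, hi)
    return best, best_fit

best, best_fit = ga_search()
C_opt, gamma_opt = 10.0 ** best[0], 10.0 ** best[1]
```

Searching in log-space is the usual choice for SVM hyperparameters, since useful values of C and γ span several orders of magnitude.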


PLoS ONE · 2021 · Vol. 16 (10) · e0257901
Author(s): Yanjing Bi, Chao Li, Yannick Benezeth, Fan Yang

Phoneme pronunciation is usually considered a basic skill in learning a foreign language. Practicing pronunciation in a computer-assisted way is helpful in self-directed or distance-learning environments. Recent research indicates that machine learning is a promising approach to building high-performance computer-assisted pronunciation training systems. Many data-driven classification models, such as support vector machines, back-propagation networks, deep neural networks, and convolutional neural networks, are increasingly widely used for this task. Yet the acoustic waveforms of phonemes are essentially modulated from the base vibrations of the vocal cords, which makes the predictors collinear and distorts the classification models. A commonly used solution is to suppress the collinearity of the predictors via the partial least squares (PLS) regression algorithm, which yields high-quality predictor weightings through analysis of the predictor relationships. However, as a linear regressor, a classifier of this type has a very simple topology, constraining its universality. To address this, the paper presents a heterogeneous phoneme recognition framework that further benefits phoneme pronunciation diagnostic tasks by combining partial least squares with support vector machines. A French phoneme data set containing 4830 samples is established for the evaluation experiments. The experiments demonstrate that the new method improves the accuracy of the phoneme classifiers by 0.21–8.47% compared with the state of the art at different training-data densities.
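The collinearity-suppression step the abstract describes can be illustrated with a minimal NIPALS-style PLS extraction: collinear predictors are projected onto a few mutually orthogonal latent scores guided by the response, and those scores would then feed the SVM stage. The data below is a synthetic stand-in (a shared base signal plus noise, mimicking collinear acoustic features), not the paper's French phoneme set:

```python
import numpy as np

def pls_scores(X, y, n_comp):
    """NIPALS-style PLS: extract mutually orthogonal latent scores T
    from collinear predictors X, guided by the response y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    scores = []
    for _ in range(n_comp):
        w = X.T @ y
        w = w / np.linalg.norm(w)        # weight vector
        t = X @ w                        # latent score
        p = X.T @ t / (t @ t)            # loading
        X = X - np.outer(t, p)           # deflate predictors
        y = y - t * (t @ y) / (t @ t)    # deflate response
        scores.append(t)
    return np.column_stack(scores)

rng = np.random.default_rng(7)
base = rng.standard_normal(200)          # shared underlying signal
# Six nearly collinear features: base vibration plus small perturbations.
X = np.column_stack([base + 0.05 * rng.standard_normal(200) for _ in range(6)])
y = (base > 0).astype(float)             # binary class labels
T = pls_scores(X, y, n_comp=2)           # decorrelated inputs for the SVM stage
```

Because each deflation removes the component already captured, the extracted scores are orthogonal by construction, which is precisely what removes the collinearity that distorts the downstream classifier.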


2021 · Vol. 263 (5) · pp. 1029–1040
Author(s): Pierangelo Libianchi, Finn T. Agerkvist, Elena Shabalina

In sound field control, a set of control sources is used to match the pressure field generated by noise sources, but with opposite phase, to reduce the total sound pressure level in a defined area commonly referred to as the dark zone. This is usually an ill-posed problem. The approach presented here employs a subspace iterative method in which the number of iterations acts as the regularization parameter and controls unwanted side radiation, i.e. side lobes. More iterations mean less regularization and more side lobes. The number of iterations is controlled by problem-specific stopping criteria. Simulations show increased lobing as the number of iterations grows. The solutions are analysed through projections onto the basis provided by the source-strength modes corresponding to the right singular vectors of the transfer-function matrix. These projections show how higher-order pressure modes (left singular vectors) become dominant at larger numbers of iterations. Furthermore, an active-set-type method enforces constraints on the amplitude of the solution, which is not possible with the conjugate-gradient least-squares (CGLS) algorithm alone.
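The iteration-as-regularization idea can be sketched with a plain CGLS loop: stopping early keeps the solution in the well-conditioned subspace, while more iterations admit the poorly conditioned (side-lobe) components. The transfer-function matrix below is a synthetic ill-conditioned stand-in, not an acoustic model from the paper:

```python
import numpy as np

def cgls(A, b, n_iter):
    """Conjugate-gradient least squares for min ||A x - b||.
    The iteration count plays the role of the regularization
    parameter: fewer iterations mean stronger regularization."""
    x = np.zeros(A.shape[1])
    r = b.copy()            # residual b - A x (x starts at zero)
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(3)
# Synthetic ill-conditioned "transfer-function" matrix (stand-in for the
# acoustic transfer functions between control sources and the dark zone).
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = U @ np.diag(10.0 ** -np.linspace(0.0, 6.0, 40)) @ V.T
b = A @ rng.standard_normal(40) + 1e-4 * rng.standard_normal(40)

x_early = cgls(A, b, n_iter=3)    # heavily regularized solution
x_late = cgls(A, b, n_iter=15)    # less regularized, larger amplitude
```

Because CGLS minimizes the residual over a growing Krylov subspace, the residual shrinks monotonically while the solution norm grows with the iteration count, which is why amplitude constraints need the separate active-set step the abstract mentions.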

