regularization parameter — Recently Published Documents

Total documents: 806 (last five years: 341)
H-index: 36 (last five years: 10)

Author(s): Tamilarasi Suresh, Tsehay Admassu Assegie, Subhashni Rajkumar, Napa Komal Kumar

Heart disease is one of the most widespread and deadliest diseases in the world. In this study, we propose a hybrid model for heart disease prediction that employs a random forest and a support vector machine. With the random forest, iterative feature elimination is carried out to select the heart disease features that most improve the predictive outcome of the support vector machine. Experiments on a test set show that the proposed hybrid model performs better than the individual random forest and support vector machine. Overall, we have developed a more accurate and computationally efficient model for heart disease prediction, with an accuracy of 98.3%. Moreover, we analyze the effect of the regularization parameter (C) and gamma on the performance of the support vector machine; the experimental results reveal that the support vector machine is very sensitive to both C and gamma.
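The pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic dataset, feature counts, and hyperparameters are invented, and scikit-learn's `RFE` stands in for the iterative feature elimination driven by forest importances.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data; the paper uses a clinical heart disease dataset.
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Iterative (recursive) feature elimination ranked by forest importances.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8).fit(X_tr, y_tr)

# SVM trained only on the selected features; C and gamma strongly affect it.
svm = SVC(C=1.0, gamma="scale").fit(selector.transform(X_tr), y_tr)
acc = svm.score(selector.transform(X_te), y_te)
```

Varying `C` and `gamma` in the final `SVC` call reproduces the sensitivity analysis the abstract mentions.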


2022, Vol 12 (1), pp. 114
Author(s): Frank Neugebauer, Marios Antonakakis, Kanjana Unnwongse, Yaroslav Parpaley, Jörg Wellmer, ...

MEG and EEG source analysis is frequently used for the presurgical evaluation of pharmacoresistant epilepsy patients. The source localization of the epileptogenic zone depends, among other aspects, on the selected inverse and forward approaches and their respective parameter choices. In this validation study, we compare the standard dipole scanning method with two beamformer approaches for the inverse problem, and we investigate the influence of the covariance estimation method and the strength of regularization on the localization performance for EEG, MEG, and combined EEG and MEG. For forward modelling, we investigate the difference between calibrated six-compartment and standard three-compartment head modelling. In a retrospective study of two patients with focal epilepsy due to focal cortical dysplasia type IIb and seizure freedom following lesionectomy or radiofrequency-guided thermocoagulation (RFTC), we used the distance from the localization of interictal epileptic spikes to the resection cavity or RFTC lesion as the reference for good localization. We found that beamformer localization can be sensitive to the choice of the regularization parameter, which has to be individually optimized. Estimation of the covariance matrix with averaged spike data yielded more robust results across the modalities. MEG was the dominant modality and provided a good localization in one case, while it was EEG for the other. When combining the modalities, the good results of the dominant modality were mostly not spoiled by the weaker modality. For appropriate regularization parameter choices, the beamformer localized better than the standard dipole scan. Compared to the importance of an appropriate regularization, the sensitivity of the localization to the head modelling was smaller, due to similar skull conductivity modelling and the fixed source space without orientation constraint.
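A minimal numpy sketch (not the study's pipeline) shows where the beamformer regularization parameter enters: the sensor covariance C is loaded with a scaled identity before inversion, C_reg = C + λ·tr(C)/n·I, and the resulting unit-gain (LCMV-style) weights depend on λ. The sensor count, data, and leadfield here are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 32, 500
data = rng.standard_normal((n_sensors, n_samples))
C = data @ data.T / n_samples  # sample sensor covariance

def lcmv_weights(C, lf, lam):
    """Unit-gain beamformer weights for a source with leadfield vector lf.
    lam scales the identity loading relative to the mean sensor variance."""
    C_reg = C + lam * np.trace(C) / C.shape[0] * np.eye(C.shape[0])
    Ci = np.linalg.inv(C_reg)
    return Ci @ lf / (lf @ Ci @ lf)

lf = rng.standard_normal(n_sensors)
w_small = lcmv_weights(C, lf, lam=0.01)  # weak regularization
w_large = lcmv_weights(C, lf, lam=10.0)  # strong regularization
```

Both weight vectors satisfy the unit-gain constraint w·lf = 1, but their spatial filtering behavior differs with λ, which is the sensitivity the abstract refers to.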


2022, Vol 0 (0)
Author(s): Chen Xu, Ye Zhang

Abstract How to obtain adsorption isotherms is a fundamental open problem in competitive chromatography. A modern technique for estimating adsorption isotherms is to solve a nonlinear inverse problem for a partial differential equation so that the simulated batch separation coincides with actual experimental results. However, this identification process is usually ill-posed in the sense that the uniqueness of the adsorption isotherms cannot be guaranteed; moreover, small noise in the measured response can lead to large fluctuations in the traditional estimate of the adsorption isotherms. The conventional mathematical method for solving this problem is variational regularization, formulated as a non-convex minimization problem with a regularized objective functional. However, in this method, the choice of the regularization parameter and the design of a convergent solution algorithm are quite difficult in practice. Moreover, due to the restricted number of injection profiles in experiments, the types of measured data are extremely limited, which may lead to a biased estimation. To overcome these difficulties, in this paper we develop a new inversion method: the virtual injection promoting double feed-forward neural network (VIP-DFNN). In this approach, the training data contain various types of artificial injections and synthetic noisy measurements at the outlet, generated by a conventional physics model, a time-dependent convection-diffusion system. Numerical experiments with both artificial and real data from laboratory experiments show that the proposed VIP-DFNN is an efficient and robust algorithm.
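The ill-posedness and parameter-choice difficulty that motivate the paper can be illustrated on a toy linear problem (not the chromatography model): a smoothing operator is inverted with Tikhonov regularization, min ‖Ax − b‖² + α‖x‖², and the reconstruction quality hinges on α. The operator, true solution, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Gaussian-kernel smoothing operator: rapidly decaying singular values,
# a standard model of a severely ill-posed problem.
A = np.array([[np.exp(-((i - j) / 5.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)  # 1% measurement noise

def tikhonov(A, b, alpha):
    """Solve min ||Ax - b||^2 + alpha ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

err_no_reg = np.linalg.norm(tikhonov(A, b, 1e-12) - x_true)
err_reg = np.linalg.norm(tikhonov(A, b, 1e-2) - x_true)
# Essentially unregularized inversion amplifies the small data noise
# enormously; a moderate alpha keeps the error small.
```

This is the "large fluctuation from small noise" behavior the abstract describes, and why the choice of α matters so much in the variational approach.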


2022, Vol 0 (0)
Author(s): Santhosh George, C. D. Sreedeep, Ioannis K. Argyros

Abstract In this paper, we study a secant-type iteration for nonlinear ill-posed equations involving 𝑚-accretive mappings in Banach spaces. We prove that the proposed iterative scheme has a convergence order of at least 2.20557, using assumptions only on the first Fréchet derivative of the operator. Further, using a general Hölder-type source condition, we obtain an optimal error estimate. We also use the adaptive parameter choice strategy proposed by Pereverzev and Schock (2005) for choosing the regularization parameter.
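For orientation, the classical scalar secant iteration below illustrates the derivative-free idea behind secant-type schemes; the paper's method is a regularized generalization for m-accretive operators with a higher convergence order (at least 2.20557, versus roughly 1.618 for the textbook version shown here).

```python
def secant(F, x0, x1, tol=1e-12, max_iter=50):
    """Classical secant iteration:
    x_{k+1} = x_k - F(x_k) (x_k - x_{k-1}) / (F(x_k) - F(x_{k-1}))."""
    for _ in range(max_iter):
        f0, f1 = F(x0), F(x1)
        if f1 == f0:  # divided difference degenerates; iterates coincide
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: solve x^3 - 2 = 0 without evaluating any derivative.
root = secant(lambda x: x**3 - 2.0, 1.0, 2.0)
```

Only function values are needed, which is the practical appeal of secant-type methods over Newton-type methods for operators whose derivative is expensive or unavailable.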


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract This chapter gives details of the linear multiple regression model, including its assumptions, some pros and cons, and maximum likelihood estimation. Gradient descent methods are described for learning the parameters of this model. Penalized linear multiple regression is derived under the Ridge and Lasso penalties, with emphasis on the estimation of the regularization parameter, which is important for successful implementation. Examples are given for both penalties (Ridge and Lasso) as well as for the non-penalized multiple regression framework, illustrating the circumstances in which the penalized versions should be preferred. Finally, the fundamentals of penalized and non-penalized logistic regression are presented within a gradient descent framework, together with examples of logistic regression. Each example comes with the corresponding R code to facilitate quick understanding and use.
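The chapter's examples are in R; as a language-neutral sketch of the same idea, the snippet below fits ridge-penalized linear regression by gradient descent on the objective (1/n)‖y − Xβ‖² + λ‖β‖², and checks the result against the closed-form solution. The data and λ are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge_gd(X, y, lam, lr=0.05, steps=2000):
    """Gradient descent on (1/n)||y - Xb||^2 + lam * ||b||^2."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = -2 * X.T @ (y - X @ b) / len(y) + 2 * lam * b
        b -= lr * grad
    return b

b_hat = ridge_gd(X, y, lam=0.1)
# Closed-form ridge solution for comparison: (X'X/n + lam I)^{-1} X'y/n
b_closed = np.linalg.solve(X.T @ X / n + 0.1 * np.eye(p), X.T @ y / n)
```

The Lasso penalty λ‖β‖₁ has no closed form and is typically handled with proximal (soft-thresholding) steps instead of plain gradient descent, which is part of why the regularization machinery differs between the two penalties.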


Author(s): A.N. Grekov, A.A. Kabanov, S.Yu. Alekseev, ...

The paper discusses improving the accuracy of an inertial navigation system built from MEMS sensors using machine learning (ML) methods. As input data for the classifier, we used information obtained from a developed laboratory setup with MEMS sensors on a sealed platform whose tilt angles can be adjusted. To assess the effectiveness of the models, test curves were constructed for different values of the model parameters for each kernel: linear, polynomial, and radial basis function. The inverse regularization parameter was used as the varied parameter. The proposed ML-based algorithm demonstrated its ability to classify correctly in the presence of noise typical of MEMS sensors, with good classification results obtained at optimal hyperparameter values.
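The kind of test curves described, accuracy per kernel as the inverse regularization parameter C varies, can be sketched as below. This is an illustration on synthetic data, not the paper's MEMS classifier or its hyperparameter grid.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Noisy synthetic data standing in for MEMS sensor features.
X, y = make_classification(n_samples=300, n_features=6, flip_y=0.1,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

# One test curve per kernel, swept over the inverse regularization
# parameter C (small C = strong regularization, large C = weak).
curves = {}
for kernel in ("linear", "poly", "rbf"):
    curves[kernel] = [SVC(kernel=kernel, C=C).fit(X_tr, y_tr)
                      .score(X_te, y_te)
                      for C in (0.01, 1.0, 100.0)]
```

Plotting each list against C reproduces the test-curve comparison across kernels that the abstract describes.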


2021, Vol 8 (1), pp. 1
Author(s): Francesca Bevilacqua, Alessandro Lanza, Monica Pragliola, Fiorella Sgallari

The effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on the suitable selection of the regularization parameter balancing the effect of the regularization term(s) and the generalized Kullback–Leibler divergence data term. One approach still commonly used today for choosing the parameter is the discrepancy principle proposed by Zanella et al. in a seminal work. It relies on imposing that the data term be approximately equal to its expected value, and it works well for mid- and high-count Poisson noise corruptions. However, the series-truncation approximation used in the theoretical derivation of the expected value leads to poor performance for low-count Poisson noise. In this paper, we highlight the theoretical limits of the approach and then propose a nearly exact version of it based on Monte Carlo simulation and weighted least-squares fitting. Several numerical experiments are presented, demonstrating that in the low-count Poisson regime the proposed modified, nearly exact discrepancy principle performs far better than the original, approximated one by Zanella et al., while working similarly or slightly better in the mid- and high-count regimes.
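The breakdown of the classical approximation can be checked numerically. Below, a Monte Carlo estimate of the per-pixel expected generalized KL divergence between Poisson data and its mean is computed for a high-count and a low-count intensity (this is an illustration of the idea, not the authors' weighted least-squares procedure; the intensities and sample sizes are invented).

```python
import numpy as np

rng = np.random.default_rng(4)

def expected_kl(mu, n_pixels=2000, n_draws=200):
    """Monte Carlo estimate of E[KL(y, mu)] per pixel for y ~ Pois(mu),
    with KL(y, mu) = y log(y/mu) - y + mu (and y log y := 0 at y = 0)."""
    m = np.full(n_pixels, mu)
    total = 0.0
    for _ in range(n_draws):
        y = rng.poisson(m).astype(float)
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(y > 0, y * np.log(y / m), 0.0)
        total += np.sum(t - y + m)
    return total / (n_draws * n_pixels)

high = expected_kl(mu=100.0)  # close to 1/2: classical approximation holds
low = expected_kl(mu=0.2)     # well below 1/2: low-count regime
```

The discrepancy principle sets the regularization parameter so that the data term matches this expected value; using the Monte Carlo estimate instead of the constant 1/2 per pixel is what makes the corrected principle "nearly exact" at low counts.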


2021, Vol 14 (12), pp. 7909-7928
Author(s): Markus D. Petters

Abstract. Tikhonov regularization is a tool for reducing noise amplification during data inversion. This work introduces RegularizationTools.jl, a general-purpose software package for applying Tikhonov regularization to data. The package implements well-established numerical algorithms and is suitable for systems of up to ~1000 equations. Included is an abstraction to systematically categorize specific inversion configurations and their associated hyperparameters. A generic interface translates arbitrary linear forward models defined by a computer function into the corresponding design matrix. This obviates the need to explicitly write out and discretize the Fredholm integral equation, thus facilitating fast prototyping of new regularization schemes associated with measurement techniques. Example applications include the inversion involving data from scanning mobility particle sizers (SMPSs) and humidified tandem differential mobility analyzers (HTDMAs). Inversion of SMPS size distributions reported in this work builds upon the freely available software DifferentialMobilityAnalyzers.jl. The speed of inversion is improved by a factor of ~200, now requiring between 2 and 5 ms per SMPS scan when using 120 size bins. Previously reported occasional failure to converge to a valid solution is reduced by switching from the L-curve method to generalized cross-validation as the metric to search for the optimal regularization parameter. Higher-order inversions resulting in smooth, denoised reconstructions of size distributions are now included in DifferentialMobilityAnalyzers.jl. This work also demonstrates that an SMPS-style matrix-based inversion can be applied to find the growth factor frequency distribution from raw HTDMA data while also accounting for multiply charged particles.
The outcome of the aerosol-related inversion methods is showcased by inverting multi-week SMPS and HTDMA datasets from ground-based observations, including SMPS data obtained at Bodega Marine Laboratory during the CalWater 2/ACAPEX campaign and co-located SMPS and HTDMA data collected at the US Department of Energy observatory located at the Southern Great Plains site in Oklahoma, USA. Results show that the proposed approaches are suitable for unsupervised, nonparametric inversion of large-scale datasets as well as inversion in real time during data acquisition on low-cost reduced-instruction-set architectures used in single-board computers. The included software implementation of Tikhonov regularization is freely available, general, and domain-independent and thus can be applied to many other inverse problems arising in atmospheric measurement techniques and beyond.
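The package itself is Julia, but the parameter-selection metric it switched to, generalized cross-validation (GCV), is easy to sketch in a domain-independent way. The toy kernel, true solution, and noise level below are invented; GCV picks the λ minimizing n‖(I − H(λ))b‖² / tr(I − H(λ))², where H(λ) is the Tikhonov influence matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
# Toy smoothing kernel standing in for an instrument forward model.
A = np.array([[np.exp(-((i - j) / 4.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.exp(-((np.arange(n) - 20.0) / 6.0) ** 2)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def gcv_score(lam):
    """GCV(lam) = n ||(I - H)b||^2 / tr(I - H)^2 with
    H = A (A'A + lam I)^{-1} A' the Tikhonov influence matrix."""
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
    resid = (np.eye(n) - H) @ b
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

lams = np.logspace(-8, 1, 40)
lam_opt = lams[np.argmin([gcv_score(lam) for lam in lams])]
x_hat = np.linalg.solve(A.T @ A + lam_opt * np.eye(n), A.T @ b)
```

Unlike the L-curve method, GCV always returns a minimizer over the searched grid, which is consistent with the reduced failure-to-converge rate reported in the abstract.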


Author(s): Markku Kuismin, Fatemeh Dodangeh, Mikko J Sillanpää

Abstract We introduce a new model selection criterion for sparse complex gene network modeling, where gene co-expression relationships are estimated from data. This is a novel formulation of the gap statistic, and it can be used for the optimal choice of the regularization parameter in graphical models. Our criterion favors a gene network structure that differs from a trivial gene interaction structure obtained entirely at random. We call the criterion the gap-com statistic (gap community statistic). The idea of the gap-com statistic is to examine the difference between the observed and expected counts of communities (clusters), where the expected counts are evaluated using either data permutations or reference-graph resampling (from the Erdős–Rényi graph). The latter represents a trivial gene network structure determined by chance. We put emphasis on complex network inference because the structure of gene networks is usually non-trivial; for example, some genes can be clustered together or act as hub genes. We evaluate the performance of the gap-com statistic in graphical model selection and compare it to some existing methods using simulated data and a real biological data example.
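The core comparison, observed structure versus Erdős–Rényi expectation at matched edge density, can be illustrated with a toy graph. This sketch uses connected components as a crude stand-in for the community detection the paper actually uses; the graph, density, and resample count are all invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def n_components(adj):
    """Count connected components of an undirected graph via union-find."""
    n = adj.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Observed graph: two dense, disconnected clusters (non-trivial structure).
n = 20
adj = np.zeros((n, n), dtype=bool)
adj[:10, :10] = rng.random((10, 10)) < 0.6
adj[10:, 10:] = rng.random((10, 10)) < 0.6
adj &= ~np.eye(n, dtype=bool)
adj = adj | adj.T

obs = n_components(adj)
p = adj.sum() / (n * (n - 1))  # matched edge density
# Expected count under Erdős–Rényi resampling at the same density.
expected = np.mean([
    n_components((rng.random((n, n)) < p) & ~np.eye(n, dtype=bool))
    for _ in range(50)])
gap = obs - expected  # crude analogue of the observed-vs-trivial gap
```

In the paper, this gap is computed over community counts from a proper community detection algorithm and evaluated across a grid of regularization parameters; the parameter maximizing the departure from the trivial reference structure is selected.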


2021
Author(s): Frank Neugebauer, Marios Antonakakis, Kanjana Unnwongse, Yaroslav Parpaley, Jörg Wellmer, ...

Abstract MEG and EEG source analysis is frequently used for the presurgical evaluation of pharmacoresistant epilepsy patients. The source localization of the epileptogenic zone depends, among other aspects, on the selected inverse and forward approaches and their respective parameter choices. In this validation study, we compare for the inverse problem the standard dipole scanning method with two beamformer approaches, and we investigate the influence of the covariance estimation method and the strength of regularization on the localization performance for EEG, MEG, and combined EEG and MEG. For forward modeling, we investigate the difference between calibrated six-compartment and standard three-compartment head modeling. In a retrospective study of two patients with focal epilepsy due to focal cortical dysplasia type IIb and seizure freedom following lesionectomy or radiofrequency-guided thermocoagulation, we used the distance from the localization of interictal epileptic spikes to the resection cavity or radiofrequency lesion as the reference for good localization. We found that beamformer localization can be sensitive to the choice of the regularization parameter, which has to be individually optimized. Estimation of the covariance matrix with averaged spike data yielded more robust results across the modalities. MEG was the dominant modality and provided a good localization in one case, while it was EEG for the other. When combining the modalities, the good results of the dominant modality were mostly not spoiled by the weaker modality. For appropriate regularization parameter choices, the beamformer localized better than the standard dipole scan. Compared to the importance of an appropriate regularization, the sensitivity of the localization to the head modeling was smaller, due to similar skull conductivity modeling and the fixed source space without orientation constraint.

