shrinkage methods
Recently Published Documents

TOTAL DOCUMENTS: 67 (five years: 29)
H-INDEX: 8 (five years: 2)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 515
Author(s):  
Alireza Salimy ◽  
Imene Mitiche ◽  
Philip Boreham ◽  
Alan Nesbitt ◽  
Gordon Morison

Fault signals in high-voltage (HV) power plant assets are captured using the electromagnetic interference (EMI) technique. The extracted EMI signals are recorded under different conditions, which introduce varying noise levels. This work addresses the varying noise levels found in captured EMI fault signals using a deep-residual-shrinkage-network (DRSN) that applies shrinkage with learned thresholds to de-noise signals for classification, together with a time-frequency signal decomposition method for feature engineering of the raw time-series signals. Several alternative DRSN architectures are trained and validated on previously expertly labeled EMI fault signals and then tested on unseen data; the test signals are first de-noised, after which controlled amounts of noise are added at various levels. Architectures are assessed by their test accuracy under these controlled noise levels. Results show that DRSN architectures using the newly proposed residual-shrinkage-building-unit-2 (RSBU-2) outperform residual-shrinkage-building-unit-1 (RSBU-1) architectures at low signal-to-noise ratios. The findings show that thresholding methods perform well in noisy environments and with real-world EMI fault signals, making them suitable for real-world EMI fault classification and condition monitoring.
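The abstract does not include code; as a minimal sketch, the shrinkage (soft-thresholding) operation for which a DRSN learns per-channel thresholds can be written as follows (the threshold value here is illustrative, not learned):

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: shrink values toward zero by tau, zeroing small entries."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

signal = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
denoised = soft_threshold(signal, tau=0.5)  # small entries become exactly zero
```

In a DRSN, tau is produced by a small attention sub-network per feature map, so noisier inputs get stronger shrinkage.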


2021 ◽  
pp. 4847-4858
Author(s):  
Emad Sh. M. Haddad ◽  
Feras Sh. M. Batah

The stress-strength model is one of the models used to compute reliability. In this paper, we derive mathematical formulas for the reliability of a stress-strength model that follows the Rayleigh Pareto (Rayl.-Par) distribution. The model has a single component whose strength Y is subjected to a stress X, characterized by the moment, the reliability function, restricted behavior, and order statistics. Several estimation methods are used: maximum likelihood, ordinary least squares, and two shrinkage methods, in addition to a newly suggested method for weighting the shrinkage. The performance of these estimators is studied empirically through simulation experiments covering a variety of sample sizes for stress and strength. The most interesting finding is the superiority of the proposed shrinkage estimation method.
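The abstract defines the stress-strength reliability R = P(X < Y) but gives no closed forms; a Monte Carlo sketch of that quantity, with stand-in exponential distributions in place of the Rayleigh Pareto (an assumption for illustration only), looks like this:

```python
import numpy as np

# Monte Carlo estimate of stress-strength reliability R = P(X < Y).
# Stand-in exponentials are used here; the paper's Rayleigh-Pareto forms differ.
rng = np.random.default_rng(0)
n = 100_000
stress = rng.exponential(scale=1.0, size=n)    # X: applied stress
strength = rng.exponential(scale=2.0, size=n)  # Y: component strength
R_hat = (stress < strength).mean()
# For these two exponentials the exact value is 2/3.
```

The same simulation scaffold, with the target distributions and estimators swapped in, is how the empirical comparison of estimation methods in such papers is typically run.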


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Faridoon Khan ◽  
Amena Urooj ◽  
Kalim Ullah ◽  
Badr Alnssyan ◽  
Zahra Almaspoor

This work compares Autometrics with dual penalization techniques such as the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD) under asymmetric error distributions such as exponential, gamma, and Frechet, with varying sample sizes as well as predictors. Comprehensive simulations, based on a wide variety of scenarios, reveal that the methods considered show improved performance for increased sample size. In the case of low multicollinearity, these methods show good performance in terms of potency, but in gauge, shrinkage methods collapse, and higher gauge leads to overspecification of the models. High levels of multicollinearity adversely affect the performance of Autometrics. In contrast, shrinkage methods are robust in the presence of high multicollinearity in terms of potency, but they tend to select a massive set of irrelevant variables. Moreover, we find that expanding the data rapidly mitigates the adverse impact of high multicollinearity on Autometrics and gradually corrects the gauge of shrinkage methods. For empirical application, we use gold price data spanning 1981 to 2020. To compare the forecasting performance of all selected methods, we divide the data into two parts: data over 1981-2010 are taken as training data, and those over 2011-2020 are used as testing data. All methods are trained on the training data and then assessed on the testing data. Based on root-mean-square error and mean absolute error, Autometrics remains the best at capturing the gold price trend and produces better forecasts than MCP and SCAD.
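The MCP and SCAD penalties compared in this abstract have standard closed forms; a minimal numpy sketch follows (the tuning defaults gamma = 3 for MCP and a = 3.7 for SCAD are conventional choices, not values taken from the paper):

```python
import numpy as np

def mcp_penalty(beta, lam, gamma=3.0):
    """Minimax concave penalty, elementwise: quadratic taper, then flat."""
    b = np.abs(beta)
    return np.where(b <= gamma * lam,
                    lam * b - b**2 / (2 * gamma),
                    0.5 * gamma * lam**2)

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, elementwise: linear, then quadratic taper, then flat."""
    b = np.abs(beta)
    linear = lam * b
    quad = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    flat = lam**2 * (a + 1) / 2
    return np.where(b <= lam, linear, np.where(b <= a * lam, quad, flat))

pens = scad_penalty(np.array([-1.0, 0.2, 2.0]), lam=0.5)
```

Both penalties apply LASSO-like shrinkage near zero but level off for large coefficients, which is why they avoid the over-shrinkage of large true effects while still zeroing out small ones.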


2021 ◽  
Vol 8 (3) ◽  
pp. 477-484
Author(s):  
Alaa M. Hamad ◽  
Bareq B. Salman

The Lomax distribution, a heavy-tailed probability distribution used in industry, economics, actuarial science, queueing theory, and Internet traffic modeling, is among the most important distributions in reliability theory. In this paper, the reliability of the restricted exponentiated Lomax distribution is estimated in two cases: a single component with strength X and stress Y, where R = P(Y < X), and a system of two components in series with their strengths under stress Y, using different estimation methods such as maximum likelihood, least squares, and shrinkage methods. A comparison of the results of the applied methods is carried out based on mean square error (MSE) to identify the best method, and the obtained results are displayed using the MATLAB software package.


2021 ◽  
Vol 62 ◽  
pp. 46-61
Author(s):  
Mingmian Cheng ◽  
Norman R. Swanson ◽  
Xiye Yang

2021 ◽  
Vol 10 (3) ◽  
pp. 32
Author(s):  
Xiaoting Wu ◽  
Min Zhang ◽  
Ruyun Jin ◽  
Gary L. Grunkemeier ◽  
Charles Maynard ◽  
...  

During hospital quality improvement activities, statistical approaches are critical for assessing hospital performance for benchmarking. Current statistical approaches are used primarily for research and reimbursement purposes. In this multi-institutional study, these established statistical methods were evaluated for quality-improvement applications. Leveraging a dataset of 42,199 patients who underwent coronary artery bypass grafting surgery from 2014 to 2016 across 90 hospitals, six statistical approaches were applied. The non-shrinkage methods were: (1) indirect standardization without a hospital effect; (2) indirect standardization with a hospital fixed effect; (3) direct standardization with a hospital fixed effect. The shrinkage methods were: (4) indirect standardization with a hospital random effect; (5) direct standardization with a hospital random effect; (6) a Bayesian method. Hospital performance on operative mortality and on major morbidity or mortality was compared across methods based on variation in adjusted rates, rankings, and performance outliers. Method performance was evaluated across procedure-volume terciles: small (< 96 cases/year), medium (96 to 171), and large (> 171). Shrinkage methods reduced inter-hospital variation (min to max) for mortality (observed: 0%-10%; adjusted: 1.5%-2.4%) and for major morbidity or mortality (observed: 2.6%-35%; adjusted: 6.9%-17.5%), shrinking hospital rates toward the group mean. Direct standardization with a hospital random effect, compared with a fixed effect, changed the quintile mortality ranking of 16.7%-38.9% of hospitals. Indirect standardization with a hospital random effect identified no performance outliers among small and medium hospitals for mortality, while the logistic and fixed-effect methods identified one small and three medium outlier hospitals. The choice of statistical method greatly impacts hospital ranking and performance-outlier status.
These findings should be considered when benchmarking hospital performance for hospital quality improvement activities.
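The indirect standardization that several of the compared methods build on reduces to an observed-over-expected (O/E) ratio scaled by the overall rate; a minimal sketch with hypothetical per-patient data (the numbers below are illustrative, not from the study):

```python
import numpy as np

def indirect_standardized_rate(observed, expected, overall_rate):
    """Classic indirectly standardized rate: (O / E) * overall population rate."""
    return observed.sum() / expected.sum() * overall_rate

# Hypothetical single hospital: observed deaths and model-predicted patient risks.
obs = np.array([1, 0, 0, 1, 0])
exp_risk = np.array([0.1, 0.05, 0.2, 0.3, 0.15])
rate = indirect_standardized_rate(obs, exp_risk, overall_rate=0.03)
```

The shrinkage variants in the study replace the raw O/E ratio with an empirical-Bayes or random-effect estimate, pulling small hospitals' rates toward the group mean.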


2021 ◽  
Author(s):  
Nima Nikvand

In this thesis, the problem of data denoising is studied, and two new denoising approaches are proposed. Using statistical properties of the additive noise, the methods provide adaptive, data-dependent soft-thresholding techniques to remove the additive noise. The proposed methods, Point-wise Noise Invalidating Soft Thresholding (PNIST) and Accumulative Noise Invalidation Soft Thresholding (ANIST), are based on Noise Invalidation. The invalidation exploits basic properties of the additive noise in order to remove the noise effects as much as possible. There are similarities and differences between ANIST and PNIST. While PNIST performs better in the case of additive white Gaussian noise, ANIST can be used with both Gaussian and non-Gaussian additive noise. As part of the denoising technique, a new noise-variance estimation method is also proposed. The thresholds proposed by the NIST approaches are comparable to those of shrinkage methods, and our simulation results indicate that the new methods can outperform existing approaches in various applications. We also explore image denoising as one of the main applications of data denoising and extend the proposed approaches to two-dimensional applications. Simulations show that the proposed methods outperform common shrinkage methods and are comparable to the well-known BayesShrink method in terms of mean square error and visual quality.
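The thesis proposes its own noise-variance estimator, whose details are not in the abstract; the standard baseline such estimators are compared against is the median-absolute-deviation (MAD) estimator used in wavelet shrinkage, sketched here as a stand-in:

```python
import numpy as np

def mad_sigma(coeffs):
    """Robust noise-std estimate: median absolute deviation scaled for Gaussian noise."""
    return np.median(np.abs(coeffs - np.median(coeffs))) / 0.6745

# Synthetic check: recover the std of pure Gaussian noise.
rng = np.random.default_rng(1)
noisy = rng.normal(loc=0.0, scale=2.0, size=200_000)
sigma_hat = mad_sigma(noisy)
```

The 0.6745 constant makes the MAD consistent for a Gaussian; the estimated sigma then sets the soft threshold (e.g. the universal threshold sigma * sqrt(2 ln n)) that shrinkage methods apply.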


BMC Urology ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Satoshi Funada ◽  
Yan Luo ◽  
Takashi Yoshioka ◽  
Kazuya Setoh ◽  
Yasuharu Tabara ◽  
...  

Abstract
Background: An accurate prediction model could identify subjects in the general population at high risk of incident overactive bladder (OAB) and enable early prevention, which may reduce the related medical costs. However, no efficient model has been developed for predicting incident OAB. In this study, we will develop a model for predicting the onset of OAB at 5 years in a general population setting.
Methods: Data will be obtained from the Nagahama Cohort Project, a longitudinal, general population cohort study. The baseline characteristics were measured between Nov 28, 2008 and Nov 28, 2010, and follow-up was performed every 5 years. From the total of 9,764 participants (male: 3,208; female: 6,556) at baseline, we will exclude participants who could not attend the follow-up assessment and those who were defined as having OAB at baseline. The outcome will be incident OAB, defined using the Overactive Bladder Symptom Score (OABSS) at the follow-up assessment. Baseline questionnaires (demographics, health behavior, comorbidities, and OABSS) and blood test data will be included as predictors. We will develop a logistic regression model utilizing shrinkage methods (the LASSO penalization method). Model performance will be evaluated by discrimination and calibration. Net benefit will be evaluated by decision curve analysis. We will perform an internal validation and a temporal validation of the model. We will develop a web-based application to visualize the prediction model and facilitate its use in clinical practice.
Discussion: This will be the first study to develop a model to predict the incidence of OAB.
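The planned LASSO-penalized logistic regression can be sketched with scikit-learn on synthetic stand-in data (the predictors and outcome below are simulated, not the Nagahama cohort; C is the inverse shrinkage strength):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for the cohort predictors (demographics, OABSS, labs).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# L1 (LASSO) penalized logistic regression; smaller C means stronger shrinkage.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
n_selected = int(np.sum(model.coef_ != 0))  # predictors kept by the penalty
```

In the actual study, C would be chosen by cross-validation, and the retained coefficients would define the risk score that the discrimination, calibration, and decision-curve analyses then evaluate.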

