robust fitting
Recently Published Documents

TOTAL DOCUMENTS: 115 (five years: 21)
H-INDEX: 11 (five years: 3)

2021 ◽  
Vol 922 (2) ◽  
pp. 115
Author(s):  
Kshitij Aggarwal ◽  
Devansh Agarwal ◽  
Evan F. Lewis ◽  
Reshma Anna-Thomas ◽  
Jacob Cardinal Tremblay ◽  
...  

Abstract We present an analysis of a densely repeating sample of bursts from the first repeating fast radio burst, FRB 121102. We reanalyzed the data used by Gourdji et al. and detected 93 additional bursts using our single-pulse search pipeline. In total, we detected 133 bursts in three hours of data at a center frequency of 1.4 GHz using the Arecibo telescope, and we developed robust modeling strategies to constrain the spectro-temporal properties of all of the bursts in the sample. Most of the burst profiles show a scattering tail, and the burst spectra are well modeled by a Gaussian with a median width of 230 MHz. We find a lack of emission below 1300 MHz, consistent with previous studies of FRB 121102. We also find that the peak of the log-normal distribution of wait times decreases from 207 to 75 s using our larger sample of bursts, as compared to that of Gourdji et al. Our observations favor neither a Poissonian nor a Weibull distribution for the burst rate. We searched for periodicity in the bursts using multiple techniques but did not detect any significant period. The cumulative burst energy distribution exhibits a broken power-law shape, with lower- and higher-energy slopes of −0.4 ± 0.1 and −1.8 ± 0.2 and a break at (2.3 ± 0.2) × 10³⁷ erg. We provide our burst fitting routines as a Python package, burstfit (https://github.com/thepetabyteproject/burstfit), which can be used to model the spectrogram of any complex fast radio burst or pulsar pulse using robust fitting techniques. All of the other analysis scripts and results are publicly available at https://github.com/thepetabyteproject/FRB121102.
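The spectral modeling described above amounts to fitting a Gaussian to each burst's spectrum with a loss that tolerates outlying channels. Below is a minimal sketch of such a robust Gaussian fit using SciPy's least_squares with a soft-L1 loss; it does not use the burstfit API, and the synthetic channel grid, noise level, and starting values are assumptions for illustration only.

```python
# Hedged sketch: robust Gaussian fit to a burst spectrum (not the burstfit API).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
freqs = np.linspace(1150, 1750, 64)              # MHz; hypothetical channel grid
amp0, mu0, sig0 = 10.0, 1450.0, 230.0 / 2.355    # sigma from a 230 MHz FWHM
spec = amp0 * np.exp(-0.5 * ((freqs - mu0) / sig0) ** 2)
spec += rng.normal(0.0, 0.5, freqs.size)         # channel noise
spec[5] += 8.0                                   # an RFI-like outlier channel

def residuals(p, x, y):
    amp, mu, sigma = p
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) - y

# The soft_l1 loss downweights outlier channels relative to plain least squares.
fit = least_squares(residuals, x0=(5.0, 1400.0, 80.0),
                    loss="soft_l1", f_scale=1.0, args=(freqs, spec))
amp, mu, sigma = fit.x
print(f"centre = {mu:.0f} MHz, FWHM = {2.355 * sigma:.0f} MHz")
```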


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Dost Muhammad Khan ◽  
Muhammad Ali ◽  
Zubair Ahmad ◽  
Sadaf Manzoor ◽  
Sundus Hussain

Robust regression is an important iterative procedure that seeks to analyze data sets contaminated with outliers and unusual observations and to reduce their impact on the regression coefficients. Robust estimation methods have been introduced to deal with outliers and to provide efficient and stable estimates in their presence. Various robust estimators have been developed in the literature to restrict the unbounded influence of outliers or leverage points on the model estimates. Here, a new redescending M-estimator is proposed using a novel objective function, with the prime focus on obtaining highly robust and efficient estimates. The results show that, for normal and clean data, the proposed estimator is almost as efficient as the ordinary least squares method, yet it becomes highly resistant to outliers when used on contaminated datasets. A simulation study is carried out to assess the performance of the proposed redescending M-estimator under different data generation scenarios, including normal, t, and double exponential distributions with different levels of outlier contamination, and the results are compared with existing estimators, e.g., the Huber, Tukey biweight, Hampel, and Andrews' sine functions. The performance of the proposed estimator was also checked in real-life data applications, where it gave promising results compared to the existing estimators.
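As a rough illustration of how a redescending M-estimator behaves, the sketch below runs iteratively reweighted least squares (IRLS) with Tukey's biweight standing in for the paper's proposed objective, which is not reproduced here; the contamination level, tuning constant, and data generation are illustrative assumptions.

```python
# Hedged sketch: IRLS with a redescending (Tukey biweight) weight function.
import numpy as np

def tukey_weights(r, c=4.685):
    """Redescending weights: zero for scaled residuals beyond c."""
    u = r / c
    w = (1 - u**2) ** 2
    w[np.abs(u) > 1] = 0.0
    return w

def irls(X, y, n_iter=50, tol=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        w = tukey_weights(r / max(s, 1e-12))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.5, size=100)
y[:10] += 15.0                                            # 10% outlier contamination
print(irls(X, y))                                         # should stay close to [2, 3]
```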


2021 ◽  
Author(s):  
Giang Truong ◽  
Huu Le ◽  
David Suter ◽  
Erchuan Zhang ◽  
Syed Zulqarnain Gilani

Stats ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 454-471
Author(s):  
Luca Greco ◽  
Giovanni Saraceno ◽  
Claudio Agostinelli

In this work, we deal with the robust fitting of a wrapped normal model to multivariate circular data. Robust estimation is meant to mitigate the adverse effects of outliers on inference. Furthermore, the use of a proper robust method leads to the definition of effective outlier detection rules. Robust fitting is achieved by a suitable modification of a classification-expectation-maximization algorithm that has been developed to perform maximum likelihood estimation of the parameters of a multivariate wrapped normal distribution. The modification concerns the use of complete-data estimating equations that involve a set of data-dependent weights aimed at downweighting the effect of possible outliers. Several robust techniques are considered to define the weights. The finite-sample behavior of the proposed methods is investigated through numerical studies and real data examples.
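A heavily simplified sketch of the weighting idea follows: estimating-equation-style mean and covariance updates where each observation's weight depends on its Mahalanobis distance, with hard-rejection weights standing in for the several robust weighting schemes the authors consider. This is not their classification-EM for the multivariate wrapped normal; in particular, the wrapping over unobserved winding numbers is omitted, and the cutoff and data below are assumptions.

```python
# Hedged sketch: weighted mean/covariance updates with data-dependent weights.
import numpy as np
from scipy.stats import chi2

def weighted_estimates(X, n_iter=20):
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    cutoff = chi2.ppf(0.975, df=p)                 # hard-rejection threshold
    for _ in range(n_iter):
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        w = (d2 <= cutoff).astype(float)           # zero weight for likely outliers
        mu = (w[:, None] * X).sum(0) / w.sum()
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / w.sum()
    return mu, cov, w

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
X[:20] += 7.0                                      # 10% shifted outliers
mu, cov, w = weighted_estimates(X)
print(mu)                                          # near [0, 0]; outliers downweighted
```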


2021 ◽  
Vol 11 (11) ◽  
pp. 4831
Author(s):  
Marco Furlan Tassara ◽  
Kyriakos Grigoriadis ◽  
Georgios Mavros

Up-to-date predictive rubber friction models require viscoelastic modulus information; thus, an accurate representation of the storage and loss modulus components is fundamental. This study presents two separate empirical formulations for the complex moduli of viscoelastic materials such as rubber. The majority of complex modulus models found in the literature are based on tabulated dynamic testing data. A wide range of experimentally obtained rubber moduli are used in this study, including SBR (styrene-butadiene rubber), SBR reinforced with filler particles, and typical passenger car tyre rubber. The proposed formulations offer significantly faster computation than tabulated/interpolated data and an accurate reconstruction of the viscoelastic frequency response. They also link the model coefficients to critical features of the data, such as the gradient of the storage modulus slope or the peak values of the loss tangent and loss modulus. One of the models is based on piecewise polynomial fitting and offers versatility: the number of polynomial functions can be increased to achieve a better fit, at the cost of additional pre-processing time. The other model uses a pair of logistic-bell functions and provides robust fitting capability and the fastest identification, as it requires fewer parameters. Both models correlate well with measured data, and their computational efficiency was demonstrated by implementation in Persson's friction model.
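The exact functional forms and coefficients of the paper's models are not given in the abstract, so the sketch below only illustrates the general shape of a logistic-bell parameterization: a logistic transition in log-frequency for the storage modulus plateau-to-plateau rise, and a bell curve for the loss-modulus peak. All function forms, parameter names, and starting values here are assumptions, not the authors' formulation.

```python
# Hedged sketch: logistic + bell-curve shapes for complex modulus components.
import numpy as np
from scipy.optimize import curve_fit

def log_storage_modulus(logf, logE_low, logE_high, logf0, k):
    """Logistic transition of log10(E') from rubbery to glassy plateau."""
    return logE_low + (logE_high - logE_low) / (1 + np.exp(-k * (logf - logf0)))

def loss_modulus(logf, peak, logf_peak, width):
    """Bell curve centred on the loss-modulus peak frequency."""
    return peak * np.exp(-0.5 * ((logf - logf_peak) / width) ** 2)

# Fitting against tabulated master-curve data (hypothetical arrays
# logf_data, logEp_data, Epp_data from dynamic testing) might look like:
# p_storage, _ = curve_fit(log_storage_modulus, logf_data, logEp_data,
#                          p0=(6.0, 9.0, 4.0, 1.0))
# p_loss, _ = curve_fit(loss_modulus, logf_data, Epp_data,
#                       p0=(1e8, 4.0, 2.0))
```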


Author(s):  
Tat-Jun Chin ◽  
David Suter ◽  
Shin-Fang Ch’ng ◽  
James Quach

Author(s):  
Meng Wang ◽  
Lihua Jiang ◽  
Ruiqi Jian ◽  
Joanne Y Chan ◽  
Qing Liu ◽  
...  

Abstract Motivation Data normalization is an important step in processing proteomics data generated in mass spectrometry experiments; it aims to reduce sample-level variation and facilitate comparisons between samples. Previously published normalization methods primarily depend on the assumption that the distribution of protein expression is similar across all samples. However, this assumption fails when the protein expression data are generated from heterogeneous samples, such as various tissue types. This led us to develop a novel data-driven method for improved normalization that corrects systematic bias while maintaining the underlying biological heterogeneity. Results To robustly correct the systematic bias, we used the density-power-weight method to downweight outliers and extended the one-dimensional robust fitting method described in previous work to our structured data. We then constructed a robustness criterion and developed a new normalization algorithm, called RobNorm. In simulation studies and in an analysis of real data from the Genotype-Tissue Expression (GTEx) project, we compared and evaluated the performance of RobNorm against other normalization methods. We found that the RobNorm approach exhibits the greatest reduction in systematic bias while maintaining across-tissue variation, especially for datasets from highly heterogeneous samples. Availability and implementation https://github.com/mwgrassgreen/RobNorm. Supplementary information Supplementary data are available at Bioinformatics online.
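In one dimension, the density-power-weight idea can be sketched as follows: each observation is weighted by its model density raised to a power gamma, so points in low-density regions (likely outliers) contribute little to the estimate. This is a simplified illustration of the weighting principle only, not the RobNorm algorithm or its implementation; the scale is held fixed at a MAD estimate for brevity.

```python
# Hedged sketch: density-power weighting for a robust location estimate.
import numpy as np

def dpw_location(x, gamma=0.5, n_iter=50):
    mu = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - mu))     # robust scale, held fixed
    for _ in range(n_iter):
        # Normal density raised to the power gamma, up to a constant:
        w = np.exp(-gamma * (x - mu) ** 2 / (2 * sigma ** 2))
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 90), rng.normal(8, 1, 10)])  # 10% outliers
print(dpw_location(x))   # near 0, largely ignoring the contaminated cluster
```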

