penalized likelihood
Recently Published Documents


TOTAL DOCUMENTS: 434 (five years: 59)

H-INDEX: 47 (five years: 3)

Tomography ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 158-174
Author(s):  
Xue Ren ◽  
Ji Eun Jung ◽  
Wen Zhu ◽  
Soo-Jin Lee

In this paper, we present a new regularized image reconstruction method for positron emission tomography (PET), where an adaptive weighted median regularizer is used in the context of a penalized-likelihood framework. The motivation of our work is to overcome the limitation of the conventional median regularizer, which has proven useful for tomographic reconstruction but suffers from the negative effect of removing fine details in the underlying image when the edges occupy less than half of the window elements. The crux of our method is inspired by the well-known non-local means denoising approach, which exploits the measure of similarity between the image patches for weighted smoothing. However, our method is different from the non-local means denoising approach in that the similarity measure between the patches is used for the median weights rather than for the smoothing weights. As the median weights, in this case, are spatially variant, they provide adaptive median regularization achieving high-quality reconstructions. The experimental results indicate that our similarity-driven median regularization method not only improves the reconstruction accuracy, but also has great potential for super-resolution reconstruction for PET.
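The core idea above, a weighted median whose weights come from non-local-means-style patch similarity, can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function names, the Gaussian similarity kernel, and the bandwidth `h` are assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight
    reaches half the total, i.e. a minimizer of sum_i w_i * |v - x_i|."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, 0.5)]

def similarity_weights(center_patch, neighbor_patches, h=0.5):
    """NLM-style weights: a Gaussian kernel on squared patch distances.
    The bandwidth h is a user-chosen parameter (assumed here)."""
    d2 = np.sum((neighbor_patches - center_patch) ** 2, axis=(1, 2))
    return np.exp(-d2 / (h ** 2))
```

Because the weights vary with the local patch content, pixels whose neighborhoods resemble the center patch dominate the median, which is what makes the regularization spatially adaptive.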


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jun Ma ◽  
Dominique-Laurent Couturier ◽  
Stephane Heritier ◽  
Ian C. Marschner

Abstract
This paper considers the problem of semi-parametric proportional hazards model fitting where observed survival times contain event times and also interval, left and right censoring times. Although this is not a new topic, many existing methods suffer from poor computational performance. In this paper, we adopt a more versatile penalized likelihood method to estimate the baseline hazard and the regression coefficients simultaneously. The baseline hazard is approximated using basis functions such as M-splines. A penalty is introduced to regularize the baseline hazard estimate and also to ease dependence of the estimates on the knots of the basis functions. We propose a Newton–MI (multiplicative iterative) algorithm to fit this model. We also present novel asymptotic properties of our estimates, allowing for the possibility that some parameters of the approximate baseline hazard may lie on the parameter space boundary. Comparisons of our method against other similar approaches are made through an intensive simulation study. Results demonstrate that our method is very stable and encounters virtually no numerical issues. A real data application involving melanoma recurrence is presented and an R package ‘survivalMPL’ implementing the method is available on R CRAN.
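The penalized-likelihood objective being maximized can be sketched numerically. This is a simplified illustration, not the survivalMPL implementation: it assumes right censoring only, uses a piecewise-constant basis as a stand-in for M-splines, and takes a quadratic roughness penalty of the form θᵀRθ; all names are illustrative.

```python
import numpy as np

def penalized_loglik(theta, beta, t, delta, X, knots, lam, R):
    """Penalized log-likelihood for a proportional hazards model with a
    basis-function baseline hazard h0(t) = sum_k theta_k * B_k(t).
    theta >= 0: basis weights; beta: regression coefficients;
    t: observed times; delta: event indicators (1 = event, 0 = censored);
    lam: smoothing parameter; R: roughness penalty matrix (assumed form)."""
    # Piecewise-constant basis: h0(t) is the weight of the segment containing t.
    idx = np.clip(np.searchsorted(knots, t, side="right") - 1, 0, len(theta) - 1)
    h0 = theta[idx]
    # Cumulative baseline hazard H0(t): exposure time in each segment times its weight.
    edges = np.concatenate([knots, [np.inf]])
    seg = np.minimum(t[:, None], edges[1:][None, :]) - edges[:-1][None, :]
    H0 = np.clip(seg, 0, None) @ theta
    eta = X @ beta
    loglik = np.sum(delta * (np.log(h0) + eta)) - np.sum(H0 * np.exp(eta))
    return loglik - lam * theta @ R @ theta
```

The penalty term shrinks the baseline hazard toward smoothness, which is what reduces the estimate's sensitivity to the placement of the knots.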


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Yao Liu ◽  
Mei-jia Gao ◽  
Jie Zhou ◽  
Fan Du ◽  
Liang Chen ◽  
...  

Abstract
Background: To compare the changes in quantitative parameters and in the size and degree of 18F-fluorodeoxyglucose ([18F]FDG) uptake of malignant tumor lesions between Bayesian penalized-likelihood (BPL) and non-BPL reconstruction algorithms.
Methods: Positron emission tomography/computed tomography images of 86 malignant tumor lesions were reconstructed using the algorithms of ordered subset expectation maximization (OSEM), OSEM + time of flight (TOF), OSEM + TOF + point spread function (PSF), and BPL. The [18F]FDG parameters maximum standardized uptake value (SUVmax), SUVmean, metabolic tumor volume (MTV), total lesion glycolysis (TLG), and signal-to-background ratio (SBR) of these lesions were measured. Quantitative parameters were compared between the different reconstruction algorithms, and correlations between parameter variation and lesion size or degree of [18F]FDG uptake were analyzed.
Results: After BPL reconstruction, SUVmax, SUVmean, and SBR increased significantly, while MTV decreased significantly. The difference values %ΔSUVmax, %ΔSUVmean, and %ΔSBR and the absolute value of %ΔMTV between BPL and OSEM + TOF were 40.00%, 38.50%, 33.60%, and 33.20%, respectively, significantly higher than those between BPL and OSEM + TOF + PSF. Similar results were observed in the comparison of OSEM and OSEM + TOF + PSF with BPL. The %ΔSUVmax, %ΔSUVmean, and %ΔSBR were all significantly negatively correlated with the size and degree of [18F]FDG uptake in the lesions, whereas significant positive correlations were observed for %ΔMTV and %ΔTLG.
Conclusion: The BPL reconstruction algorithm significantly increased SUVmax, SUVmean, and SBR and decreased MTV of tumor lesions, especially in small or relatively hypometabolic lesions.
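The percentage differences compared in the study can be computed as below. The exact formula is not spelled out in the abstract, so the definition 100 × (BPL − reference) / reference is an assumption, and the sample values in the usage note are hypothetical.

```python
def pct_change(bpl_value, ref_value):
    """Relative difference (%) of a BPL-reconstructed parameter versus the same
    parameter under a reference reconstruction (e.g. OSEM + TOF).
    Assumed definition: 100 * (BPL - ref) / ref."""
    return 100.0 * (bpl_value - ref_value) / ref_value
```

For example, a hypothetical lesion with SUVmax 10.0 under OSEM + TOF and 14.0 under BPL gives %ΔSUVmax = 40%, on the same scale as the differences reported; a negative value (e.g. for MTV) indicates the parameter decreased under BPL.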


Ecology ◽  
2021 ◽  
Author(s):  
Hannah L. Clipp ◽  
Amber L. Evans ◽  
Brin E. Kessinger ◽  
Kenneth Kellner ◽  
Christopher T. Rota

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11997
Author(s):  
Liam J. Revell

In recent years it has become increasingly popular to use phylogenetic comparative methods to investigate heterogeneity in the rate or process of quantitative trait evolution across the branches or clades of a phylogenetic tree. Here, I present a new method for modeling variability in the rate of evolution of a continuously-valued character trait on a reconstructed phylogeny. The underlying model of evolution is stochastic diffusion (Brownian motion), but one in which the instantaneous diffusion rate (σ2) also evolves by Brownian motion on a logarithmic scale. Unfortunately, it is not possible to simultaneously estimate the rates of evolution along each edge of the tree and the rate of evolution of σ2 itself using Maximum Likelihood. As such, I propose a penalized-likelihood method in which the penalty term is equal to the log-transformed probability density of the rates under a Brownian model, multiplied by a ‘smoothing’ coefficient, λ, selected by the user. λ determines the magnitude of the penalty applied to rate variation between edges. Lower values of λ penalize rate variation relatively little, whereas larger values of λ result in minimal rate variation among edges of the tree in the fitted model, eventually converging on a single value of σ2 for all branches of the tree. In addition to presenting this model here, I have also implemented it as part of my phytools R package in the function multirateBM. Using different values of the penalty coefficient, λ, I fit the model to simulated data with: Brownian rate variation among edges (the model assumption); uncorrelated rate variation; rate changes that occur in discrete places on the tree; and no rate variation at all among the branches of the phylogeny. I then compare the estimated values of σ2 to their known true values. In addition, I use the method to analyze a simple empirical dataset of body mass evolution in mammals. Finally, I discuss the relationship between this method and other models from the phylogenetic comparative methods and finance literature, as well as some applications and limitations of the approach.
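A minimal numerical sketch of such a penalized likelihood follows. It is not the multirateBM implementation: the tree encoding (each edge as a length plus the set of tips it subtends), the parent-edge bookkeeping, and the simplified Gaussian penalty on parent-child log-rate differences standing in for the full Brownian log-density of the rates are all illustrative assumptions.

```python
import numpy as np

def bm_covariance(edges, n_tips, log_rates):
    """Covariance of tip values under Brownian motion with edge-specific
    rates sigma2_e = exp(log_rate_e). `edges` is a list of
    (length, descendant_tip_indices) tuples (simplified tree encoding)."""
    C = np.zeros((n_tips, n_tips))
    for (length, desc), lr in zip(edges, log_rates):
        d = np.array(desc)
        # Tips sharing this edge on their root path share length * rate of covariance.
        C[np.ix_(d, d)] += length * np.exp(lr)
    return C

def penalized_loglik(x, edges, log_rates, parent, lam):
    """Penalized log-likelihood (sketch): multivariate-normal BM likelihood
    of tip data x, minus lam times a penalty on how much log sigma2 changes
    between a parent edge and its child edges. parent[i] is the index of
    edge i's parent edge, or -1 at the root."""
    C = bm_covariance(edges, len(x), log_rates)
    _, logdet = np.linalg.slogdet(C)
    ll = -0.5 * (x @ np.linalg.solve(C, x) + logdet + len(x) * np.log(2 * np.pi))
    penalty = sum((log_rates[i] - log_rates[p]) ** 2 / edges[i][0]
                  for i, p in enumerate(parent) if p >= 0)
    return ll - lam * penalty
```

As in the abstract, larger `lam` forces neighboring edges toward a common rate, and in the limit all edges share a single σ2.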


2021 ◽  
Vol 5 (1) ◽  
pp. 2
Author(s):  
Emmanouil-Nektarios Kalligeris ◽  
Alex Karagrigoriou ◽  
Christina Parpoula

Regime switching combined with penalized likelihood techniques can be a robust tool for modelling the dynamic behaviour of consultation rate data. In this work we propose a methodology that combines these techniques, and we test its performance and capabilities in a real application.


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Rasaki Olawale Olanrewaju

A Gamma-distributed response is fitted by penalized likelihood regression using the Least Absolute Shrinkage and Selection Operator (LASSO) and the Minimax Concave Penalty (MCP) within the Generalized Linear Model (GLM) framework. The Gamma-related disturbance controls the influence of skewness and spread on the corrected path solutions of the regression coefficients.
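The two penalties can be sketched on a Gamma GLM objective as below. This assumes a log link and a deviance-style Gamma term; the function names, the convention of leaving the intercept unpenalized, and the MCP shape parameter γ = 3 are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gamma_lasso_objective(beta, X, y, lam):
    """Penalized negative log-likelihood for a Gamma GLM with log link (sketch).
    Gamma term y/mu + log(mu) (up to dispersion), plus an L1 (LASSO) penalty;
    the intercept beta[0] is left unpenalized by assumed convention."""
    mu = np.exp(X @ beta)
    nll = np.mean(y / mu + np.log(mu))
    return nll + lam * np.sum(np.abs(beta[1:]))

def mcp_penalty(beta, lam, gamma=3.0):
    """Minimax Concave Penalty, applied elementwise: quadratic near zero,
    flat (constant 0.5 * gamma * lam^2) beyond gamma * lam."""
    b = np.abs(beta)
    return np.sum(np.where(b <= gamma * lam,
                           lam * b - b ** 2 / (2 * gamma),
                           0.5 * gamma * lam ** 2))
```

The contrast the abstract relies on is visible here: LASSO penalizes large coefficients at a constant rate, whereas MCP tapers off, so large true effects are shrunk less along the solution path.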

