Profile likelihood
Recently Published Documents

TOTAL DOCUMENTS: 217 (five years: 50)
H-INDEX: 24 (five years: 3)

2021 ◽  
Vol 81 (10) ◽  
Author(s):  
D. Baxter ◽  
I. M. Bloch ◽  
E. Bodnia ◽  
X. Chen ◽  
J. Conrad ◽  
...  

Abstract: The field of dark matter detection is a highly visible and highly competitive one. In this paper, we propose recommendations for presenting dark matter direct detection results, particularly suited to weak-scale dark matter searches, although we believe the spirit of the recommendations applies more broadly to searches for other dark matter candidates, such as very light dark matter or axions. To translate experimental data into a final published result, direct detection collaborations must make a series of choices in their analysis, ranging from how to model astrophysical parameters to how to make statistical inferences based on observed data. While many collaborations follow a standard set of recommendations in some areas, for example the expected flux of dark matter particles (based largely on a 1995 paper by Lewin and Smith), in other areas, particularly statistical inference, they have taken different approaches, often varying from result to result within the same collaboration. We set out a number of recommendations on how to apply the now commonly used Profile Likelihood Ratio method to direct detection data. In addition, we provide updated recommendations for the Standard Halo Model astrophysical parameters and the relevant neutrino fluxes. The authors of this note include members of the DAMIC, DarkSide, DARWIN, DEAP, LZ, NEWS-G, PandaX, PICO, SBC, SENSEI, SuperCDMS, and XENON collaborations, and these collaborations provided input to the recommendations laid out here. Widespread adoption of these recommendations will make it easier to compare and combine future dark matter results.
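As a concrete illustration of the statistic these recommendations centre on, here is a minimal sketch of a profile likelihood ratio for a one-bin counting experiment with a background nuisance parameter constrained by a control region. All counts and rates below are invented for illustration and do not correspond to any experiment.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

n, m = 7, 20          # observed counts: signal region, control region (invented)
s, tau = 3.0, 4.0     # signal expectation per unit mu; control/signal background ratio

def nll(mu, b):
    """Joint negative log-likelihood of signal and control regions."""
    return -(poisson.logpmf(n, mu * s + b) + poisson.logpmf(m, tau * b))

def profile_nll(mu):
    """Profile out the background nuisance b at fixed signal strength mu."""
    return minimize_scalar(lambda b: nll(mu, b), bounds=(1e-9, 50.0),
                           method="bounded").fun

mus = np.linspace(0.0, 3.0, 301)
prof = np.array([profile_nll(mu) for mu in mus])
q = 2.0 * (prof - prof.min())            # profile likelihood ratio test statistic
i = int(np.argmin(np.abs(mus - 1.0)))
print(f"mu_hat ~ {mus[int(prof.argmin())]:.2f}, q(mu=1) = {q[i]:.2f}")
```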


2021 ◽  
pp. oemed-2021-107405
Author(s):  
David G Goldfarb ◽  
Rachel Zeig-Owens ◽  
Dana Kristjansson ◽  
Jiehui Li ◽  
Robert M Brackbill ◽  
...  

Background: The World Trade Center (WTC) attacks on 11 September 2001 created a hazardous environment with known and suspected carcinogens. Previous studies have identified an increased risk of prostate cancer in responder cohorts compared with the general male population. Objectives: To estimate the length of time to prostate cancer among WTC rescue/recovery workers by determining specific time periods during which the risk was significantly elevated. Methods: Person-time accruals began 6 months after enrolment into a WTC cohort and ended at death or 31 December 2015. Cancer data were obtained through linkages with 13 state cancer registries. New York State was the comparison population. We used Poisson regression to estimate hazard ratios and 95% CIs; change points in rate ratios were estimated using profile likelihood. Results: The analytic cohort included 54 394 male rescue/recovery workers. We observed 1120 incident prostate cancer cases. During 2002–2006, no association with WTC exposure was detected. Beginning in 2007, a 24% increased risk (HR: 1.24, 95% CI 1.16 to 1.32) was observed among WTC rescue/recovery workers compared with New York State. Comparing those who arrived earliest at the disaster site, on the morning of 11 September 2001 or any time on 12 September 2001, with those who first arrived later, we observed a positive, monotonic dose-response association in both the early (2002–2006) and late (2007–2015) periods. Conclusions: Risk of prostate cancer was significantly elevated beginning in 2007 in the combined WTC rescue/recovery cohort. While unique exposures at the disaster site might have contributed to the observed effect, screening practices, including routine prostate-specific antigen screening, cannot be discounted.
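The change-point estimation mentioned in the Methods can be sketched generically: for each candidate change year, the post-change rate ratio is profiled out at its closed-form maximum likelihood value, and the year maximizing the profile likelihood is selected. The rates, person-time and counts below are synthetic placeholders, not the cohort's data.

```python
import numpy as np
from scipy.stats import poisson

years = np.arange(2002, 2016)
py = np.full(years.size, 5.0e4)            # person-years at risk (synthetic)
base_rate = 2.0e-3                         # comparison-population rate (synthetic)
rng = np.random.default_rng(1)
cases = rng.poisson(base_rate * py * np.where(years >= 2007, 1.24, 1.0))

def profile_loglik(k):
    """Log-likelihood at change point k; the post-k rate ratio is profiled
    out at its closed-form MLE, observed / expected."""
    post = years >= k
    hr_hat = cases[post].sum() / (base_rate * py[post]).sum()
    mu = base_rate * py * np.where(post, hr_hat, 1.0)
    return poisson.logpmf(cases, mu).sum()

ks = years[1:-1]                           # interior candidate change points
ll = np.array([profile_loglik(k) for k in ks])
print("profile likelihood change point:", ks[int(ll.argmax())])
```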


2021 ◽  
Vol 2021 (9) ◽  
Author(s):  
Forrest Flesher ◽  
Katherine Fraser ◽  
Charles Hutchison ◽  
Bryan Ostdiek ◽  
Matthew D. Schwartz

Abstract: One of the key tasks of any particle collider is measurement. In practice, this is often done by fitting data to a simulation, which depends on many parameters. Sometimes, when the effects of varying different parameters are highly correlated, a large ensemble of data may be needed to resolve parameter-space degeneracies. An important example is measuring the top-quark mass, where other physical and unphysical parameters in the simulation must be profiled when fitting the top-quark mass parameter. We compare four methodologies for top-quark mass measurement: a classical histogram fit, similar to one commonly used in experiment, augmented by soft-drop jet grooming; a 2D profile likelihood fit with a nuisance parameter; a machine-learning method called DCTR; and a linear regression approach, either using a least-squares fit or a dense linearly-activated neural network. Despite the fact that individual events are totally uncorrelated, we find that the linear regression methods work most effectively when we input an ensemble of events sorted by mass, rather than training on individual events. Although all methods provide robust extraction of the top-quark mass parameter, the linear network does marginally best and is remarkably simple. For the top study, we conclude that the Monte-Carlo-based uncertainty on current extractions of the top-quark mass from LHC data can be reduced significantly (by perhaps a factor of 2) using networks trained on sorted event ensembles. More generally, machine learning from ensembles for parameter estimation has broad potential for collider physics measurements.
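The "sorted ensemble" idea can be illustrated with a toy regression problem: each training example is a whole ensemble of events sorted by a mass-like observable, and a linear least-squares fit maps the sorted vector to the parameter. This sketch uses a Gaussian toy observable, not top-quark simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_evt = 2000, 100
theta = rng.uniform(170.0, 176.0, n_ens)              # toy "mass" label per ensemble
X = rng.normal(theta[:, None], 10.0, (n_ens, n_evt))  # one observable per event
X.sort(axis=1)                                        # sort events within each ensemble

# Linear least-squares fit from the sorted vector to the parameter.
A = np.hstack([X, np.ones((n_ens, 1))])               # add an intercept column
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)

# Evaluate on fresh ensembles.
theta_test = rng.uniform(170.0, 176.0, 500)
Xt = np.sort(rng.normal(theta_test[:, None], 10.0, (500, n_evt)), axis=1)
pred = np.hstack([Xt, np.ones((500, 1))]) @ coef
print("RMS error:", np.sqrt(np.mean((pred - theta_test) ** 2)))
```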


2021 ◽  
Author(s):  
Khandoker Mohammad

In this thesis, we investigate the efficiency of profile likelihood in the estimation of parameters of the Cox Proportional Hazards (PH) cure model and the joint model of longitudinal and survival data. For the profile likelihood approach in the joint model of longitudinal and survival data, Hsieh et al. (2006) stated: "No distributional or asymptotic theory is available to date, and even the standard errors (SEs), defined as the standard deviations of the parametric estimators, are difficult to obtain." The difficulty arises because the estimator of the baseline hazard involves an implicit function in the profile likelihood estimation (Hirose and Liu, 2020). Hence, finding the estimated SE of the parametric estimators from the Cox PH cure model and the joint model with the profile likelihood approach is a great challenge, and the bootstrap method has therefore been suggested for obtaining estimated standard errors under this approach (Hsieh et al., 2006).

To overcome this difficulty, we expand the profile likelihood function directly, without assuming the derivative of the profile likelihood score function, and obtain an explicit form of the SE estimator in terms of the profile likelihood score function. Our proposed alternative approach not only gives analytical insight into profile likelihood estimation, but also provides a closed-form formula for the standard error of the profile maximum likelihood estimator. To show the advantage of our approach in medical and clinical studies, we analyse simulated and real-life data and compare our results with the output of the smcure, JM (method: 'Cox-PH-GH') and joineRML R packages. The outputs suggest that the bootstrap method and our proposed approach give similar, comparable results, while the average computation time of our approach is much lower than that of the above-mentioned R packages.
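The quantity at issue, the SE of a profile likelihood estimator, can be illustrated numerically on a simple model where the nuisance parameter has a closed-form profile: the SE is read off the curvature of the profile log-likelihood at its maximum. This normal-model sketch only illustrates the general construction, not the thesis's closed-form derivation for the Cox PH cure and joint models.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.normal(2.0, 1.5, 200)                 # toy data; not survival data

def profile_nll(mu):
    """Negative log-likelihood with sigma^2 profiled out in closed form."""
    s2_hat = np.mean((x - mu) ** 2)           # MLE of the variance at fixed mu
    return 0.5 * x.size * (np.log(2 * np.pi * s2_hat) + 1.0)

mu_hat = minimize_scalar(profile_nll, bounds=(0.0, 4.0), method="bounded").x

# SE from the curvature of the profile NLL (central second difference).
h = 1e-4
curv = (profile_nll(mu_hat + h) - 2 * profile_nll(mu_hat)
        + profile_nll(mu_hat - h)) / h**2
print(f"mu_hat = {mu_hat:.3f}, SE = {1.0 / np.sqrt(curv):.3f}")  # ~ s / sqrt(n)
```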


2021 ◽  
Vol 11 (8) ◽  
pp. 818
Author(s):  
Léo Adenis ◽  
Stéphane Plaszczynski ◽  
Basile Grammaticos ◽  
Johan Pallud ◽  
Mathilde Badoual

Diffuse low-grade gliomas are slowly growing tumors that always recur after treatment. In this paper, we revisit the modeling of the evolution of the tumor radius before and after the radiotherapy process and propose a novel model that is simple yet biologically motivated and that remedies some shortcomings of previously proposed ones. We confront this model with clinical data consisting of time series of tumor radii from 43 patient records, using a stochastic optimization technique, and obtain very good fits in all cases. Since our model describes the evolution of a tumor from the very first glioma cell, it gives access to the possible age of the tumor. Using the technique of profile likelihood to extract all of the information from the data, we build confidence intervals for the tumor birth age and confirm that low-grade gliomas seem to appear in the late teenage years. Moreover, an approximate analytical expression of the temporal evolution of the tumor radius allows us to explain the correlations observed in the data.
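The profile likelihood confidence interval construction used here is generic: scan the parameter of interest, re-fit the remaining parameters at each grid point, and keep the region where twice the drop in log-likelihood stays below the chi-squared quantile. A minimal sketch with a stand-in model (not the glioma growth model):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
x = rng.normal(17.0, 4.0, 43)            # stand-in "birth age" data, one per record

def profile_nll(mu):
    """Toy profile NLL: the variance nuisance has a closed-form profile."""
    s2 = np.mean((x - mu) ** 2)
    return 0.5 * x.size * (np.log(2 * np.pi * s2) + 1.0)

grid = np.linspace(14.0, 20.0, 601)
prof = np.array([profile_nll(m) for m in grid])
inside = 2.0 * (prof - prof.min()) <= chi2.ppf(0.95, df=1)
print(f"95% profile CI: [{grid[inside].min():.2f}, {grid[inside].max():.2f}]")
```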


2021 ◽  
Vol 81 (8) ◽  
Author(s):  
T. Abrahão ◽  
H. Almazan ◽  
J. C. dos Anjos ◽  
S. Appel ◽  
J. C. Barriere ◽  
...  

Abstract: We present a search for signatures of mixing of electron anti-neutrinos with additional hypothetical sterile neutrino flavors using the Double Chooz experiment. The search is based on data from 5 years of operation of Double Chooz, including 2 years in the two-detector configuration. The analysis is based on a profile likelihood, i.e. it compares the data to the model prediction of disappearance in a data-to-data comparison of the two respective detectors. The analysis is optimized for a model of three active and one sterile neutrino. It is sensitive in the typical mass range $5 \times 10^{-3}\,\mathrm{eV}^2 \lesssim \Delta m^2_{41} \lesssim 3 \times 10^{-1}\,\mathrm{eV}^2$ for mixing angles down to $\sin^2 2\theta_{14} \gtrsim 0.02$. No significant disappearance beyond the conventional disappearance related to $\theta_{13}$ is observed, and correspondingly exclusion bounds on the sterile mixing parameter $\theta_{14}$ as a function of $\Delta m^2_{41}$ are obtained.
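Schematically, an exclusion bound of this kind is traced by evaluating a profiled test statistic on a grid in $(\Delta m^2_{41}, \sin^2 2\theta_{14})$ and marking, at each mass splitting, the smallest mixing amplitude that crosses the chosen threshold. The test statistic below is an invented stand-in for the real two-detector likelihood, shown only to make the raster-scan pattern concrete.

```python
import numpy as np
from scipy.stats import chi2

dm2_grid = np.logspace(-3, 0, 60)            # Delta m^2_41 grid, eV^2
s22t_grid = np.logspace(-3, 0, 200)          # sin^2(2 theta_14) grid

def toy_q(dm2, s22t):
    """Invented profiled test statistic: grows with the oscillation
    amplitude, with sensitivity loss at small Delta m^2."""
    sens = dm2 / (dm2 + 5e-3)
    return (s22t * sens / 0.01) ** 2

threshold = chi2.ppf(0.95, df=1)
limits = np.array([s22t_grid[int(np.argmax(toy_q(dm2, s22t_grid) > threshold))]
                   for dm2 in dm2_grid])
i = int(np.argmin(np.abs(dm2_grid - 0.1)))
print(f"toy 95% limit near dm2 = 0.1 eV^2: sin^2(2 theta_14) < {limits[i]:.3f}")
```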


2021 ◽  
Author(s):  
Lukas Refisch ◽  
Fabian Lorenz ◽  
Torsten Riedlinger ◽  
Hannes Taubenböck ◽  
Martina Fischer ◽  
...  

Background: The COVID-19 pandemic has led to a high interest in mathematical models describing and predicting the diverse aspects and implications of the virus outbreak. Model results represent an important part of the information base for the decision process on different administrative levels. The Robert Koch Institute (RKI) initiated a project whose main goal is to predict COVID-19-specific occupation of beds in intensive care units: Steuerungs-Prognose von Intensivmedizinischen COVID-19 Kapazitäten (SPoCK). The incidence of COVID-19 cases is a crucial predictor for this occupation. Methods: We developed a model based on ordinary differential equations for the COVID-19 spread, with a time-dependent infection rate described by a spline. Furthermore, the model explicitly accounts for weekday-specific reporting and adjusts for reporting delay. The model is calibrated in a purely data-driven manner by a maximum likelihood approach, and uncertainties are evaluated using the profile likelihood method. The uncertainty about the appropriate modeling assumptions can be accounted for by including and merging results of different modeling approaches. Results: The model is calibrated on daily incident cases and provides daily predictions of incident COVID-19 cases for the upcoming three weeks, including uncertainty estimates, for Germany and its subregions. Derived quantities such as cumulative counts and 7-day incidences with corresponding uncertainties can be computed. The estimation of the time-dependent infection rate leads to an estimated reproduction factor that oscillates around one. Data-driven estimation of the dark figure (the number of unreported cases) purely from incident cases is not feasible. Conclusions: We successfully implemented a procedure to forecast near-future COVID-19 incidences for diverse subregions in Germany, which are made available to various decision makers via an interactive web application. Results of the incidence modeling are also used as a predictor for forecasting the need for intensive care units.
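The modelling pattern described, an ODE with a time-dependent infection rate represented by a spline and purely data-driven maximum likelihood calibration, can be sketched as follows. This toy uses a piecewise-linear rate on a coarse knot grid and synthetic case counts; weekday-specific reporting and reporting delay are omitted, and nothing here is RKI data or the SPoCK code.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
from scipy.stats import poisson

N, days = 1e6, 84
knots = np.linspace(0, days, 7)               # coarse grid for beta(t)

def simulate(log_betas):
    """Daily incidence from an SI-type ODE with piecewise-linear beta(t)."""
    beta = lambda t: np.interp(t, knots, np.exp(log_betas))
    def rhs(t, y):
        S, I = y
        inc = beta(t) * S * I / N
        return [-inc, inc - I / 7.0]          # 7-day infectious period
    sol = solve_ivp(rhs, (0, days), [N - 100, 100], t_eval=np.arange(days + 1))
    return np.maximum(-np.diff(sol.y[0]), 1e-9)   # daily incidence = -dS

rng = np.random.default_rng(5)
true_lb = np.log([0.35, 0.3, 0.2, 0.15, 0.2, 0.25, 0.2])
cases = rng.poisson(simulate(true_lb))        # synthetic reported cases

def nll(log_betas):
    return -poisson.logpmf(cases, simulate(log_betas)).sum()

fit = minimize(nll, np.full(7, np.log(0.25)), method="Nelder-Mead",
               options={"maxiter": 2000})
print("estimated beta at knots:", np.exp(fit.x).round(3))
```

Profile likelihood uncertainties for each knot would then be obtained by re-optimizing `nll` with one knot fixed at a grid of values, as in the earlier sketches.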


Author(s):  
Matthew J. Simpson ◽  
Alexander P. Browning ◽  
Christopher Drovandi ◽  
Elliot J. Carr ◽  
Oliver J. Maclaren ◽  
...  

We compute profile likelihoods for a stochastic model of diffusive transport motivated by experimental observations of heat conduction in layered skin tissues. This process is modelled as a random walk in a layered one-dimensional material, where each layer has a distinct particle hopping rate. Particles are released at some location, and the duration of time taken for each particle to reach an absorbing boundary is recorded. To explore whether these data can be used to identify the hopping rates in each layer, we compute various profile likelihoods using two methods: first, an exact likelihood is evaluated using a relatively expensive Markov chain approach; and, second, we form an approximate likelihood by assuming the distribution of exit times is given by a Gamma distribution whose first two moments match the moments from the continuum limit description of the stochastic model. Using the exact and approximate likelihoods, we construct various profile likelihoods for a range of problems. In cases where parameter values are not identifiable, we make progress by re-interpreting those data with a reduced model with a smaller number of layers.
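The approximate likelihood is straightforward to write down: given the mean and variance supplied by the continuum-limit description, match them to the shape and scale of a Gamma distribution and evaluate the log-density of the observed exit times. The moment formulas below are placeholders for illustration, not the paper's continuum-limit expressions.

```python
import numpy as np
from scipy.stats import gamma

def gamma_loglik(exit_times, mean, var):
    """Log-likelihood under a Gamma matching the given first two moments."""
    shape = mean**2 / var                 # k = m^2 / v
    scale = var / mean                    # theta = v / m
    return gamma.logpdf(exit_times, a=shape, scale=scale).sum()

# Toy usage: pretend the continuum limit predicts these moments as a
# function of a single hopping rate D (placeholder formulas).
L = 1.0
mean_fn = lambda D: L**2 / (2 * D)
var_fn = lambda D: L**4 / (6 * D**2)

rng = np.random.default_rng(2)
data = rng.gamma(3.0, 0.5, 500)           # stand-in "exit time" sample
Ds = np.linspace(0.1, 2.0, 200)
ll = [gamma_loglik(data, mean_fn(D), var_fn(D)) for D in Ds]
print("approximate-likelihood estimate of D:", Ds[int(np.argmax(ll))])
```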


2021 ◽  
Vol 31 (4) ◽  
Author(s):  
Samuel M. Fischer ◽  
Mark A. Lewis

Abstract: Profile likelihood confidence intervals are a robust alternative to Wald's method if the asymptotic properties of the maximum likelihood estimator are not met. However, the constrained optimization problem defining profile likelihood confidence intervals can be difficult to solve in these situations, because the likelihood function may exhibit unfavorable properties. As a result, existing methods may be inefficient and yield misleading results. In this paper, we address this problem by computing profile likelihood confidence intervals via a trust-region approach, where steps computed from local approximations are constrained to regions where those approximations are sufficiently precise. As our algorithm also accounts for numerical issues arising if the likelihood function is strongly non-linear or parameters are not estimable, the method is applicable in many scenarios where earlier approaches are shown to be unreliable. To demonstrate its potential in applications, we apply our algorithm to benchmark problems and compare it with six existing approaches for computing profile likelihood confidence intervals. Our algorithm consistently achieved higher success rates than any competitor while also being among the quickest methods. As our algorithm can be applied to compute both confidence intervals of parameters and model predictions, it is useful in a wide range of scenarios.
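The constrained optimization problem mentioned in the opening sentences can be made concrete: a CI endpoint extremizes the parameter of interest subject to the log-likelihood staying above the chi-squared cutoff. The sketch below solves that formulation with SciPy's generic trust-region constrained solver on a toy normal model; it illustrates the problem the authors' specialized algorithm addresses, not the algorithm itself.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint
from scipy.stats import chi2, norm

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, 100)

def loglik(p):
    mu, log_sigma = p
    return norm.logpdf(x, mu, np.exp(log_sigma)).sum()

# Unconstrained MLE, then the log-likelihood cutoff for a 95% CI.
mle = minimize(lambda p: -loglik(p), [0.0, 0.0], method="Nelder-Mead")
l_thresh = -mle.fun - 0.5 * chi2.ppf(0.95, df=1)
con = NonlinearConstraint(loglik, l_thresh, np.inf)

# CI endpoints for mu: extremize p[0] subject to the likelihood constraint.
upper = minimize(lambda p: -p[0], mle.x, method="trust-constr", constraints=[con])
lower = minimize(lambda p: p[0], mle.x, method="trust-constr", constraints=[con])
print(f"95% profile CI for mu: [{lower.x[0]:.3f}, {upper.x[0]:.3f}]")
```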

