nominal level
Recently Published Documents

TOTAL DOCUMENTS: 61 (FIVE YEARS: 22)
H-INDEX: 8 (FIVE YEARS: 1)

Author(s):  
Michael J. Adjabui ◽  
Jakperik Dioggban ◽  
Nathaniel K. Howard

We propose a new stepwise confidence set procedure for toxicity studies based on ratios of mean differences. Statistical approaches that properly control the familywise error rate (FWER) for differences of means between treatments and a control already exist for evaluating toxicity studies. However, in some therapeutic areas the ratio of mean differences is the more natural effect measure. We therefore construct a stepwise confidence procedure based on Fieller's confidence intervals for multiple ratios of mean differences, without multiplicity adjustment, for toxicological evaluation. A simulation study revealed that the FWER is well controlled at the prespecified nominal level α. Moreover, the power of our approach increases with increasing sample size and ratio of mean differences.
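
For intuition, Fieller's theorem can be sketched for the simplest case, a ratio of two independent sample means: the confidence limits are the roots of a quadratic in the ratio. This is a generic illustration under an independence assumption, not the authors' stepwise procedure; the function name is ours.

```python
import numpy as np
from scipy import stats

def fieller_ci(x, y, alpha=0.05):
    """Fieller confidence interval for mean(x) / mean(y), assuming
    independent samples. The limits are the roots of
    (mx - theta*my)^2 = t^2 * (vx + theta^2 * vy)."""
    mx, my = x.mean(), y.mean()
    vx = x.var(ddof=1) / len(x)   # variance of the numerator mean
    vy = y.var(ddof=1) / len(y)   # variance of the denominator mean
    df = len(x) + len(y) - 2
    t2 = stats.t.ppf(1 - alpha / 2, df) ** 2
    a = my**2 - t2 * vy
    b = -2 * mx * my
    c = mx**2 - t2 * vx
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        # denominator mean not significantly nonzero: interval unbounded
        return (-np.inf, np.inf)
    return ((-b - np.sqrt(disc)) / (2 * a),
            (-b + np.sqrt(disc)) / (2 * a))
```

Unlike a naive delta-method interval, this interval is exact under normality and need not be symmetric about the point estimate of the ratio.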


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Wei Bai ◽  
Mei Dong ◽  
Longhai Li ◽  
Cindy Feng ◽  
Wei Xu

Abstract Background For differential abundance analysis, zero-inflated generalized linear models, typically zero-inflated negative binomial (NB) models, have been increasingly used to model microbiome and other sequencing count data. A common assumption in estimating the false discovery rate (FDR) is that the p values are uniformly distributed under the null hypothesis, which demands that the postulated model fit the count data adequately. Mis-specification of the distribution of the count data may lead to excess false discoveries. Model checking is therefore critical to controlling the FDR at a nominal level in differential abundance analysis. A growing number of studies show that randomized quantile residuals (RQRs) perform well in diagnosing count regression models. However, the performance of RQRs in diagnosing zero-inflated generalized linear mixed models (GLMMs) for sequencing count data has not been extensively investigated in the literature. Results We conduct large-scale simulation studies to investigate the performance of RQRs for zero-inflated GLMMs. The simulation studies show that the type I error rates of the goodness-of-fit (GOF) tests with RQRs are very close to the nominal level; in addition, scatter plots and Q–Q plots of RQRs are useful in discerning good and bad models. We also apply RQRs to diagnose six GLMMs fitted to a real microbiome dataset. The results show that the OTU counts at the genus level of this dataset (after a truncation treatment) can be modelled well by zero-inflated and zero-modified NB models. Conclusion RQR is an excellent tool for diagnosing GLMMs for zero-inflated count data, particularly the sequencing count data arising in microbiome studies. In the supplementary materials, we provide two generic R functions for calculating RQRs from the fitting outputs of the corresponding R package.
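
The randomized quantile residual idea (due to Dunn and Smyth) can be sketched for a plain Poisson fit: for a discrete response, draw a uniform point between the CDF just below the observation and the CDF at the observation, then map it through the standard normal quantile function. This is an illustrative sketch, not the paper's R functions, and the zero-inflated GLMM case only changes which fitted CDF is plugged in.

```python
import numpy as np
from scipy import stats

def rqr_poisson(y, mu, rng=None):
    """Randomized quantile residuals for a Poisson model with fitted
    means mu. If the model is correct, the residuals are i.i.d.
    standard normal, which is what Q-Q plots and GOF tests check."""
    rng = np.random.default_rng(rng)
    f_lo = stats.poisson.cdf(y - 1, mu)   # CDF just below the observed count
    f_hi = stats.poisson.cdf(y, mu)       # CDF at the observed count
    u = rng.uniform(f_lo, f_hi)           # randomize within the jump
    return stats.norm.ppf(u)
```

Under a correctly specified model the residuals should pass normality checks; systematic curvature in their Q–Q plot signals the kind of mis-specification the abstract warns about.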


Author(s):  
A. Sai Ram

Abstract: Across the world, in day-to-day practice, we come across various medical inaccuracies caused by unreliable patient recollection. Statistically, communication problems are the most significant factor hampering the diagnosis of patients' diseases. This paper therefore presents a theoretical solution for achieving patient care in the most adequate way. During the pandemic, the communication gap between patient and physician has begun to decline to a nominal level. This paper demonstrates a viable solution and a stepping stone toward the complete digitalization of the client's illness catalogue. To attain the solution in a specified manner, we use diverse pre-existing technologies such as data warehousing, database management systems, cloud computing, and big data. We also persistently maintain a secure, impenetrable infrastructure protecting the client's data privacy. Keywords: illness catalogue, cloud computing, data warehousing, database management systems, big data.


2021 ◽  
Author(s):  
Yves G Berger

Abstract An empirical likelihood test is proposed for parameters of models defined by conditional moment restrictions, such as models with non-linear endogenous covariates, with or without heteroscedastic errors, or non-separable transformation models. The number of empirical likelihood constraints is given by the size of the parameter, unlike in alternative semi-parametric approaches. We show that the empirical likelihood ratio test is asymptotically pivotal, without explicit studentisation. A simulation study shows that the observed size is close to the nominal level, unlike for alternative empirical likelihood approaches. The method also offers a major advantage over two-stage least squares, because the relationship between the endogenous and instrumental variables does not need to be known. An empirical likelihood model specification test is also proposed.
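
For intuition about why empirical likelihood ratio tests are asymptotically pivotal, the scalar-mean version (Owen, 1988) can be sketched: maximize the product of observation weights subject to the moment constraint, which reduces to a one-dimensional search over a Lagrange multiplier. The conditional-moment setting of the paper is considerably more general; the function below is only the textbook special case.

```python
import numpy as np
from scipy.optimize import brentq

def el_ratio_mean(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.
    Asymptotically chi-square with 1 degree of freedom under H0,
    with no explicit studentisation needed."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data
    n = len(x)
    # lambda must keep every implied weight positive: 1 + lam*d_i > 1/n
    lo = (1 / n - 1) / d.max() + 1e-10
    hi = (1 / n - 1) / d.min() - 1e-10
    score = lambda lam: np.sum(d / (1 + lam * d))  # first-order condition
    lam = brentq(score, lo, hi)
    return 2 * np.sum(np.log1p(lam * d))
```

Comparing the statistic to a chi-square(1) critical value (3.84 at the 5% level) gives a test whose size approaches the nominal level without any variance estimate.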


Econometrics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 35
Author(s):  
Michael Creel

This paper studies method of simulated moments (MSM) estimators that are implemented using Bayesian methods, specifically Markov chain Monte Carlo (MCMC). Motivation and theory for the methods are provided by Chernozhukov and Hong (2003). The paper shows, experimentally, that confidence intervals using these methods may have coverage far from the nominal level, a result with parallels in the literature on overidentified GMM estimators. A neural network may be used to reduce the dimension of an initial set of moments to the minimum number that maintains identification, as in Creel (2017). When MSM-MCMC estimation and inference are based on such moments, using a continuously updated criterion function, confidence intervals have statistically correct coverage in all cases studied. The methods are illustrated by application to several test models, including a small DSGE model, and to a jump-diffusion model for returns of the S&P 500 index.
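
The quasi-Bayes idea from Chernozhukov and Hong (2003) can be sketched on a toy model: the negative MSM criterion plays the role of a log-likelihood inside a random-walk Metropolis sampler. This is a minimal illustration with an assumed toy model (a normal location family with one simulated moment), not the paper's DSGE or jump-diffusion applications.

```python
import numpy as np

def msm_mcmc(data, n_draws=4000, rng=None):
    """Toy quasi-Bayes MSM: random-walk Metropolis over theta, where
    the 'log-likelihood' is -0.5 * n * g^2 and g is the gap between
    the data mean and the mean of fresh simulations at theta."""
    rng = np.random.default_rng(rng)
    n = len(data)
    ybar = data.mean()

    def log_post(theta):
        sims = rng.normal(theta, 1.0, size=5 * n)  # simulate the model
        g = ybar - sims.mean()                     # simulated moment gap
        return -0.5 * n * g**2                     # quasi log-likelihood

    theta, lp = 0.0, log_post(0.0)
    draws = []
    for _ in range(n_draws):
        prop = theta + rng.normal(0, 0.3)          # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept step
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws[n_draws // 2:])          # discard burn-in
```

Quantiles of the retained draws give the confidence intervals whose coverage the paper scrutinizes; the overidentified case, where coverage breaks down, corresponds to stacking more moments into g than parameters.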


2021 ◽  
Author(s):  
Gang Chen ◽  
Paul A Taylor ◽  
Joel Stoddard ◽  
Robert W Cox ◽  
Peter A Bandettini ◽  
...  

Neuroimaging relies on separate statistical inferences at tens of thousands of spatial locations. Such massively univariate analysis typically requires adjustment for multiple testing in an attempt to maintain the family-wise error rate at a nominal level of 5%. We discuss how this approach is associated with substantial information loss because of an implicit but questionable assumption about the effect distribution across spatial units. To improve inference efficiency, predictive accuracy, and generalizability, we propose a Bayesian multilevel modeling framework. In addition, we make four actionable suggestions to alleviate information waste and improve reproducibility: (1) abandon strict dichotomization; (2) report full results; (3) quantify effects; and (4) model the data hierarchy.
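
The arithmetic behind the multiple-testing adjustment being critiqued is simple to verify: with m independent tests each run at level alpha, the family-wise error rate is 1 - (1 - alpha)^m, and the Bonferroni remedy tests at alpha/m. The independence assumption is an idealization here; spatially correlated voxels are what make real corrections, and the paper's alternative, more involved.

```python
# Family-wise error rate (FWER) for m independent tests at level alpha,
# uncorrected vs. Bonferroni-corrected (per-test level alpha/m).
m, alpha = 10_000, 0.05
fwer_uncorrected = 1 - (1 - alpha) ** m        # at least one false positive
fwer_bonferroni = 1 - (1 - alpha / m) ** m     # capped just under alpha

print(fwer_uncorrected)  # essentially 1.0
print(fwer_bonferroni)   # roughly 1 - exp(-alpha), just under 0.05
```

The cost of driving the FWER back to 5% is a per-test threshold of 0.000005, which is the severe loss of sensitivity that motivates the proposed multilevel alternative.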


Author(s):  
Chénagnon Frédéric Tovissodé ◽  
Romain Lucas Glèlè Kakaï

Most existing flexible count regression models allow only approximate inference. Balanced discretization is a simple method for producing a mean-parametrizable flexible count distribution from a continuous probability distribution. This simplifies the definition of flexible count regression models allowing exact inference under various types of dispersion (equi-, under- and overdispersion). This study describes maximum likelihood (ML) estimation and inference in count regression based on the balanced discrete gamma (BDG) distribution, and introduces a likelihood ratio based latent equidispersion (LE) test to identify the parsimonious dispersion model for a particular dataset. A series of Monte Carlo experiments was carried out to assess the performance of the ML estimates and the LE test in the BDG regression model, as compared to the popular Conway–Maxwell–Poisson (CMP) model. The results show that the two evaluated models recover population effects even under misspecification of dispersion-related covariates, with coverage rates of the asymptotic 95% confidence interval approaching the nominal level as the sample size increases. The BDG regression approach nevertheless outperforms CMP regression in very small samples (n = 15–30), mostly in overdispersed data. The LE test proves appropriate for detecting latent equidispersion, with rejection rates converging to the nominal level as the sample size increases. Two applications to real data illustrate the use of the proposed approach to count regression analysis.
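
The mean-preserving discretization idea can be sketched by stochastic rounding: round a continuous draw down, then add one with probability equal to the fractional part, so the integer variable keeps the continuous mean exactly. This is only our reading of the balanced-discretization principle applied to gamma draws; the paper works with the resulting probability mass function directly for exact likelihood inference, and its exact construction may differ.

```python
import numpy as np

def stochastic_round_gamma(shape, scale, size, rng=None):
    """Mean-preserving integer version of Gamma(shape, scale) draws:
    Y = floor(X) + Bernoulli(X - floor(X)), so E[Y] = E[X] = shape*scale.
    Illustrative sketch only; function name is ours, not the paper's."""
    rng = np.random.default_rng(rng)
    x = rng.gamma(shape, scale, size)
    floor = np.floor(x)
    add_one = rng.random(size) < (x - floor)   # fractional part as probability
    return (floor + add_one).astype(int)
```

Because the gamma family covers both under- and overdispersion relative to its mean, a count distribution built this way inherits that flexibility, which is what the dispersion comparisons in the abstract exploit.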


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kuaikuai Duan ◽  
Wenhao Jiang ◽  
Kelly Rootes-Murdy ◽  
Gido H. Schoenmacker ◽  
Alejandro Arias-Vasquez ◽  
...  

Abstract Attention-deficit/hyperactivity disorder (ADHD) is a childhood-onset neuropsychiatric disorder that may persist into adulthood. Working memory and attention deficits have been reported to persist from childhood to adulthood. How the neuronal underpinnings of these deficits differ between adolescence and adulthood is not clear. In this study, we investigated gray matter in two cohorts, 486 adults and 508 adolescents, each including participants from ADHD and healthy-control families. Both cohorts presented significant attention and working memory deficits in individuals with ADHD. Independent component analysis was applied to the gray matter of each cohort separately to extract cohort-inherent networks. We then identified gray matter networks associated with inattention or working memory in each cohort and projected them onto the other cohort for comparison. Two components in the inferior and middle/superior frontal regions identified in adults, and one component in the insula and inferior frontal region identified in adolescents, were significantly associated with working memory in both cohorts. One component in the bilateral cerebellar tonsil and culmen identified in adults, and one component in the left cerebellar region identified in adolescents, were significantly associated with inattention in both cohorts. All of these components showed a significant or nominal level of gray matter reduction for ADHD participants in adolescents, but only one showed a nominal reduction in adults. Our findings suggest that although gray matter reduction in these regions may not be indicative of the persistence of ADHD, their persistent associations with inattention or working memory indicate an important role for these regions in the mechanism of persistence or remission of the disorder.


2021 ◽  
Vol 13 (1) ◽  
pp. 62-70
Author(s):  
Jeanne Asteria Wawolangi ◽  
Anita Permatasari

A micro business is a marginal business type characterized by the use of relatively simple technology, a relatively small nominal level of capital, low access to credit, and an orientation toward local market needs. A general problem in managing this type of business is the calculation of product costs, which are usually computed simply from the purchase price of raw materials, while direct production costs are overlooked, because what matters most to the owner is the smooth sale of the products. This study aims to assist micro-entrepreneurs in determining product costs, making it easier for business actors to set the selling price of their products. This study uses a qualitative method.


2021 ◽  
Vol 11 (1) ◽  
pp. 121-134
Author(s):  
D.V. Kashirskiy ◽  
Olga V. Staroseltseva

The article presents the results of an examination of personality specifics in individuals charged with particularly serious crimes. The sample consisted of 59 men aged 18–60; the average age was 33.7 years. A comparison group of 54 men with socially normative behavior and no criminal record was also examined. We used the following instruments: the "Value Spectrum" technique by D.A. Leontyev; the "Who am I?" test by M. Kuhn & T. McPartland (adapted by T.V. Rumyantseva); and the Motivational Induction method by Joseph R. Nuttin. It was established that in persons under investigation on charges of particularly serious crimes, personality values have been appropriated only at the nominal level and do not effectively regulate behavior or activity. This category of persons is distinguished by the following: a narrowed time perspective of one year; problematic and nonadaptive self-identity; and less prominent (compared with the norm group) moral and educational needs and need for creativity. Persons charged with particularly serious crimes have psychological self-protection and autonomy as their prevalent motivations. The results of the research can be used by experts in investigative actions with such persons both at the stage of pre-trial investigation and during judicial proceedings, and can be taken into account in forensic psychological examinations.

