A Fast and Robust Way to Estimate Overlap of Niches, and Draw Inference

Author(s):  
Judith H. Parkinson ◽  
Raoul Kutil ◽  
Jonas Kuppler ◽  
Robert R. Junker ◽  
Wolfgang Trutschnig ◽  
...  

Abstract The problem of quantifying the overlap of Hutchinsonian niches has received much attention lately, particularly in quantitative ecology, where the concept originates. The niche concept, however, also has the potential to be useful in many other application areas, for example in economics. We present a fully nonparametric, robust solution to this problem, along with exact shortcut formulas based on rank statistics and a rather intuitive probabilistic interpretation. Furthermore, by deriving the asymptotic sampling distribution of the estimators, we propose the first asymptotically valid inference method, providing confidence intervals for the niche overlap. The theoretical considerations are supplemented by simulation studies and a real data example.
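The abstract does not reproduce the paper's exact shortcut formulas. As a hedged illustration of the kind of rank-based, distribution-free quantity involved, the sketch below computes the classic Mann-Whitney estimate of P(X < Y); the function name and choice of statistic are illustrative, not the authors' overlap estimator.

```python
import numpy as np

def prob_x_less_than_y(x, y):
    """Nonparametric estimate of P(X < Y) + 0.5 * P(X = Y) from two samples.

    This is the Mann-Whitney U statistic divided by n*m. It depends on the
    data only through pairwise comparisons (i.e., ranks), which makes it
    robust to outliers and invariant under monotone transforms -- the same
    flavor of rank-based, probabilistically interpretable estimation the
    abstract describes, though not the paper's actual estimator.
    """
    x = np.asarray(x, dtype=float)[:, None]   # column vector
    y = np.asarray(y, dtype=float)[None, :]   # row vector
    return (x < y).mean() + 0.5 * (x == y).mean()

print(prob_x_less_than_y([1, 2, 3], [4, 5, 6]))  # fully separated samples → 1.0
print(prob_x_less_than_y([1, 2, 3], [1, 2, 3]))  # identical samples give 0.5
```

Because the estimate uses only comparisons, contaminating one observation with an arbitrarily large outlier moves the estimate by at most one pair's worth of weight.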

Author(s):  
Eric Post

This chapter discusses the niche concept. One of the earliest applications of niche theory in quantitative ecology addressed the seemingly simple question of the extent to which the niches of two species can overlap while still allowing co-occurrence or coexistence of the species. This question grew out of the then recent development of the notions of limiting similarity and niche packing, according to which coexistence among species with similar resource requirements was assumed to be promoted through minimization of niche overlap, via divergence in habitat utilization patterns or character displacement. The answer is highly relevant in the context of climate change, or of any environmental change in general. Fluctuation in abiotic conditions such as mean annual temperature may be just as important to the persistence or maintenance of a tolerable degree of niche overlap for co-occurring species, if not more so, than the trend in abiotic conditions itself.


Author(s):  
Guanghao Qi ◽  
Nilanjan Chatterjee

Abstract Background Previous studies have often evaluated methods for Mendelian randomization (MR) analysis using simulations that do not adequately reflect the data-generating mechanisms in genome-wide association studies (GWAS), and there are often discrepancies in the performance of MR methods between simulations and real data sets. Methods We use a simulation framework that generates data on full GWAS for two traits under a realistic model for the effect-size distribution, coherent with the heritability, co-heritability and polygenicity typically observed for complex traits. We further use recent data generated from GWAS of 38 biomarkers in the UK Biobank and perform down-sampling to investigate trends in estimates of the causal effects of these biomarkers on the risk of type 2 diabetes (T2D). Results Simulation studies show that the weighted mode and MRMix are the only two methods that maintain the correct type I error rate in a diverse set of scenarios. Between the two methods, MRMix tends to be more powerful for larger GWAS, whereas the opposite is true for smaller sample sizes. Among the other methods, random-effect IVW (inverse-variance weighted method), MR-Robust and MR-RAPS (robust adjusted profile score) tend to perform best in maintaining a low mean-squared error when the InSIDE assumption is satisfied, but can produce large bias when InSIDE is violated. In the real-data analysis, some biomarkers showed major heterogeneity across the different methods in estimates of their causal effects on the risk of T2D, and estimates from many methods trended in one direction with increasing sample size, with patterns similar to those observed in the simulation studies. Conclusion The relative performance of different MR methods depends heavily on the sample sizes of the underlying GWAS, the proportion of valid instruments and the validity of the InSIDE assumption. Down-sampling analysis can be used in large GWAS for the possible detection of bias in MR methods.


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2164
Author(s):  
Héctor J. Gómez ◽  
Diego I. Gallardo ◽  
Karol I. Santoro

In this paper, we present an extension of the truncated positive normal (TPN) distribution for modeling positive data with high kurtosis. The new model is defined as the quotient of two random variables: a TPN-distributed variable (numerator) and a power of a standard uniform variable (denominator). The resulting model has greater kurtosis than the TPN distribution. We study some properties of the distribution, such as moments, asymmetry, and kurtosis. Parameter estimation is based on the method of moments, and maximum likelihood estimation uses the expectation-maximization algorithm. We performed simulation studies to assess parameter recovery and illustrate the model with a real data application related to body weight. The computational implementation of this work is included in the tpn package for the R software.
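A minimal simulation sketch of the quotient construction described above, assuming the ratio takes the form Z = X / U**q with X truncated positive normal and U standard uniform; the abstract does not fix the parametrization of the power, so q (and the rejection sampler) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_tpn(mu, sigma, size, rng):
    """Draw from the truncated positive normal by simple rejection sampling
    (fine for illustration when P(N(mu, sigma) > 0) is not tiny)."""
    out = np.empty(0)
    while out.size < size:
        draws = rng.normal(mu, sigma, 2 * size)
        out = np.concatenate([out, draws[draws > 0]])
    return out[:size]

def sample_ratio(mu, sigma, q, size, rng):
    """Z = X / U**q with X ~ TPN(mu, sigma) and U ~ Uniform(0, 1).

    Since U**q lies in (0, 1], every Z is at least as large as its X;
    occasional very small U values stretch the right tail, which is how
    the quotient produces higher kurtosis than the TPN itself.
    """
    x = sample_tpn(mu, sigma, size, rng)
    u = rng.uniform(size=size)
    return x, x / u**q

x, z = sample_ratio(mu=1.0, sigma=1.0, q=0.5, size=1000, rng=rng)
```

The elementwise inequality Z >= X holds by construction, which makes the tail-stretching mechanism easy to verify on any sample.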


Author(s):  
Xiaozhou Wang ◽  
Xi Chen ◽  
Qihang Lin ◽  
Weidong Liu

The performance of clustering depends on an appropriately defined similarity between two items. When similarity is measured by human perception, human workers are often employed to estimate similarity scores between items in order to support clustering, a procedure called crowdsourced clustering. Since a monetary reward is paid to a worker for each similarity score, and since the similarities between pairs and the workers' reliabilities vary widely, under a limited budget it is critical to wisely assign pairs of items to different workers to optimize the clustering result. We model this budget allocation problem as a Markov decision process in which item pairs are dynamically assigned to workers based on the similarity scores they have provided so far. We propose an optimistic knowledge gradient policy in which the assignment of items at each stage is based on the minimum-weight K-cut defined on a similarity graph. We provide simulation studies and real data analysis to demonstrate the performance of the proposed method.


Stats ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 111-120 ◽  
Author(s):  
Dewi Rahardja

We construct point and interval estimates using a Bayesian approach for the difference of two population proportions, based on two independent samples of binomial data subject to one type of misclassification. Specifically, we derive an easy-to-implement, closed-form algorithm for drawing from the posterior distributions. For illustration, we apply our algorithm to a real data example. Finally, we conduct simulation studies to demonstrate the efficiency of our algorithm for Bayesian inference.
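A hedged sketch of the posterior-sampling idea, simplified to perfectly classified data: the paper's closed-form algorithm additionally corrects for one type of misclassification, which is omitted here, and the priors, counts, and function name below are illustrative assumptions.

```python
import numpy as np

def posterior_diff_draws(x1, n1, x2, n2, a=1.0, b=1.0, draws=20_000, seed=0):
    """Monte Carlo draws from the posterior of p1 - p2 under independent
    Beta(a, b) priors and two independent binomial samples.

    With conjugate Beta priors and *perfectly classified* data, each
    posterior is Beta(a + successes, b + failures), so drawing from the
    posterior of the difference is a one-liner per arm.
    """
    rng = np.random.default_rng(seed)
    p1 = rng.beta(a + x1, b + n1 - x1, draws)
    p2 = rng.beta(a + x2, b + n2 - x2, draws)
    return p1 - p2

d = posterior_diff_draws(45, 100, 30, 100)
point = d.mean()                            # Bayes point estimate of p1 - p2
lo, hi = np.quantile(d, [0.025, 0.975])     # 95% equal-tailed credible interval
```

Misclassification would replace the two Beta posteriors with posteriors over both the true proportions and the classification error rate, which is where the paper's closed-form derivation comes in.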


2020 ◽  
Vol 44 (5) ◽  
pp. 362-375
Author(s):  
Tyler Strachan ◽  
Edward Ip ◽  
Yanyan Fu ◽  
Terry Ackerman ◽  
Shyh-Huei Chen ◽  
...  

As a method to derive a “purified” measure along a dimension of interest from response data that are potentially multidimensional in nature, the projective item response theory (PIRT) approach requires first fitting a multidimensional item response theory (MIRT) model to the data before projecting onto a dimension of interest. This study aims to explore how accurate the PIRT results are when the estimated MIRT model is misspecified. Specifically, we focus on using a (potentially misspecified) two-dimensional (2D)-MIRT for projection because of its advantages, including interpretability, identifiability, and computational stability, over higher dimensional models. Two large simulation studies (I and II) were conducted. Both studies examined whether the fitting of a 2D-MIRT is sufficient to recover the PIRT parameters when multiple nuisance dimensions exist in the test items, which were generated, respectively, under compensatory MIRT and bifactor models. Various factors were manipulated, including sample size, test length, latent factor correlation, and number of nuisance dimensions. The results from simulation studies I and II showed that the PIRT was overall robust to a misspecified 2D-MIRT. Smaller third and fourth simulation studies were done to evaluate recovery of the PIRT model parameters when the correctly specified higher dimensional MIRT or bifactor model was fitted with the response data. In addition, a real data set was used to illustrate the robustness of PIRT.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Nachatchapong Kaewsompong ◽  
Paravee Maneejuk ◽  
Woraphon Yamaka

We propose a high-dimensional copula to model the dependence structure of the seemingly unrelated quantile regression. Because the conventional model rests on the strong assumptions of a multivariate normal distribution and a linear dependence structure, we apply a multivariate exchangeable copula function to relax them. As many parameters must be estimated, we adopt a Bayesian Markov chain Monte Carlo approach to estimate the parameters of interest in the model. Four simulation studies are conducted to assess the performance of our proposed model and of the Bayesian estimation. The satisfactory simulation results suggest the good performance and reliability of the Bayesian method used in our proposed model. A real data analysis is also provided, and the empirical comparison indicates that our proposed model outperforms the conventional models at all considered quantile levels.


2020 ◽  
Vol 45 (5) ◽  
pp. 569-597
Author(s):  
Kazuhiro Yamaguchi ◽  
Kensuke Okada

In this article, we propose a variational Bayes (VB) inference method for the deterministic input noisy AND gate (DINA) model of cognitive diagnostic assessment. The proposed method, which applies an iterative algorithm for optimization, is derived from the optimal variational posteriors of the model parameters. The proposed VB inference enables much faster computation than the existing Markov chain Monte Carlo (MCMC) method, while still offering the benefits of a full Bayesian framework. A simulation study revealed that the proposed VB estimation adequately recovered the parameter values. Moreover, an example using real data revealed that the proposed VB inference method provided estimates similar to those of MCMC estimation, with much faster computation.


2016 ◽  
Vol 40 (1) ◽  
pp. 318-330 ◽  
Author(s):  
Amirhossein Amiri ◽  
Reza Ghashghaei ◽  
Mohammad Reza Maleki

In this paper, we investigate the misleading effect of measurement errors on simultaneous monitoring of the multivariate process mean and variability. For this purpose, we incorporate the measurement errors into a hybrid method based on the generalized likelihood ratio (GLR) and exponentially weighted moving average (EWMA) control charts. After that, we propose four remedial methods to decrease the effects of measurement errors on the performance of the monitoring procedure. The performance of the monitoring procedure as well as the proposed remedial methods is investigated through extensive simulation studies and a real data example.
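The paper's hybrid GLR/EWMA scheme is multivariate; as a hedged, univariate illustration of the EWMA side and of how additive measurement error distorts such a chart, the sketch below implements the classic EWMA recursion with its asymptotic control limits. Names and defaults are illustrative, not the authors' procedure.

```python
import numpy as np

def ewma_chart(x, lam=0.2, mu0=0.0, sigma=1.0, L=3.0):
    """Classic univariate EWMA: z_t = lam * x_t + (1 - lam) * z_{t-1},
    with asymptotic control limits mu0 +/- L * sigma * sqrt(lam / (2 - lam)).

    If observations carry additive measurement error e ~ N(0, sigma_m^2),
    their true standard deviation is sqrt(sigma**2 + sigma_m**2); limits
    built from the error-free sigma are then too narrow and inflate the
    false-alarm rate -- the kind of misleading effect the remedial
    methods are designed to reduce.
    """
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam))
    return z, mu0 - half_width, mu0 + half_width
```

An in-control series keeps the statistic at mu0, while a sustained mean shift drives the EWMA geometrically toward the shifted mean until it crosses a limit.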


2021 ◽  
Vol 20 ◽  
pp. 134-143
Author(s):  
A. S. Al-Moisheer ◽  
A. F. Daghestani ◽  
K. S. Sultan

In this paper, we study a mixture of the one-parameter Lindley and inverse Weibull distributions (MLIWD). First, we introduce and discuss the MLIWD. Then, we study the main statistical properties of the proposed mixture and provide some graphs of both the density and the associated hazard rate functions. After that, we estimate the unknown parameters of the proposed mixture via two estimation methods, namely the generalized method of moments and maximum likelihood. In addition, we compare the two estimation methods via simulation studies to assess their efficiency. Finally, we evaluate the performance and behavior of the proposed mixture with different numerical examples and a real data application in survival analysis.
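A minimal sketch of the mixture density, assuming the standard one-parameter Lindley and two-parameter inverse Weibull parametrizations; the abstract does not fix either parametrization, so the forms below and the mixing weight p are assumptions for illustration.

```python
import numpy as np

def lindley_pdf(x, theta):
    """One-parameter Lindley density: theta^2/(theta+1) * (1+x) * exp(-theta*x)."""
    return theta**2 / (theta + 1.0) * (1.0 + x) * np.exp(-theta * x)

def inv_weibull_pdf(x, alpha, beta):
    """Inverse Weibull density: beta * alpha^beta * x^(-beta-1) * exp(-(alpha/x)^beta)."""
    return beta * alpha**beta * x**(-beta - 1.0) * np.exp(-((alpha / x) ** beta))

def mliwd_pdf(x, p, theta, alpha, beta):
    """Two-component mixture: p * Lindley + (1 - p) * inverse Weibull."""
    return p * lindley_pdf(x, theta) + (1.0 - p) * inv_weibull_pdf(x, alpha, beta)

# Sanity check: the mixture should integrate to (almost) one on (0, inf).
x = np.linspace(1e-6, 500.0, 500_001)
f = mliwd_pdf(x, p=0.4, theta=1.0, alpha=1.0, beta=2.0)
mass = np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0  # trapezoidal rule
```

Mixing a light-tailed component (Lindley) with a heavy-tailed one (inverse Weibull) is what lets the model track both the body and the tail of positive lifetime data.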

