size measure
Recently Published Documents

TOTAL DOCUMENTS: 98 (five years: 35)
H-INDEX: 11 (five years: 2)

2022 ◽  
Vol 20 ◽  
pp. 331-343
Author(s):  
Wang Jianhong ◽  
Ricardo A. Ramirez-Mendoza

In this paper, an interval prediction model is studied for a model predictive control (MPC) strategy subject to unknown but bounded noise. After introducing the family of models and some basic notions, computational results are presented for constructing the interval predictor model, using a linear regression structure whose regression parameters are contained in a sphere parameter set. A size measure is used to scale the average amplitude of the predictor interval, and the optimal model minimizing this size measure is computed efficiently by solving a linear programming problem. The active-set approach is applied to solve this linear program, and from the resulting optimization variables, the predictor interval of the considered model with the sphere parameter set can be constructed directly. For the choice of a fixed non-negative number in the given size measure, a better choice is proposed using the Karush-Kuhn-Tucker (KKT) optimality conditions. To apply the interval prediction model within model predictive control, the midpoint of the interval is substituted into an inequality-constrained quadratic optimization problem to obtain the optimal control input. After formulating this as a standard quadratic program and deriving its dual form, the Gauss-Seidel algorithm is applied to solve the dual problem, and convergence of the Gauss-Seidel algorithm is established. Finally, simulation examples confirm the theoretical results.
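As a rough illustration of the size-measure minimization (not the paper's exact construction, which uses a sphere parameter set and an active-set LP solver), the following sketch fits a one-parameter interval predictor y ∈ [θx − r, θx + r] by minimizing the half-width r over the data. Since r(θ) is convex and piecewise linear, a ternary search stands in for the linear program here; the data values are made up.

```python
# Minimal sketch: the smallest-width interval predictor for a scalar
# regression y ≈ theta*x with unknown-but-bounded noise is
#   min_theta max_i |y_i - theta*x_i|
# The paper solves the analogous problem as a linear program; r(theta)
# is convex, so a ternary search suffices for this one-parameter toy.

def half_width(theta, xs, ys):
    """Smallest r such that every sample lies inside [theta*x - r, theta*x + r]."""
    return max(abs(y - theta * x) for x, y in zip(xs, ys))

def fit_interval_predictor(xs, ys, lo=-100.0, hi=100.0, iters=200):
    """Ternary search over theta; valid because r(theta) is convex."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if half_width(m1, xs, ys) < half_width(m2, xs, ys):
            hi = m2
        else:
            lo = m1
    theta = (lo + hi) / 2
    return theta, half_width(theta, xs, ys)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]   # roughly y = 2x with bounded noise
theta, r = fit_interval_predictor(xs, ys)
```

The returned midpoint line θx is what an MPC scheme like the one described would substitute into its quadratic program.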


2021 ◽  
Author(s):  
Mirka Henninger ◽  
Rudolf Debelak ◽  
Carolin Strobl

To detect differential item functioning (DIF), Rasch trees search for optimal split points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF effects as significant in larger samples, which leads to larger trees that split the sample into more subgroups. What would be more desirable is an approach driven by effect size rather than sample size. To achieve this, we suggest implementing an additional stopping criterion: the popular ETS classification scheme based on the Mantel-Haenszel odds ratio. This criterion helps us evaluate whether a split in a Rasch tree is based on a substantial or an ignorable difference in item parameters, and it allows the Rasch tree to stop growing when DIF between the identified subgroups is small. Furthermore, it supports identifying DIF items and quantifying DIF effect sizes in each split. Based on simulation results, we conclude that the Mantel-Haenszel effect size further reduces unnecessary splits in Rasch trees under the null hypothesis, or when the sample size is large but DIF effects are negligible. To make the stopping criterion easy to use for applied researchers, we have implemented the procedure in the statistical software R. Finally, we discuss how DIF effects between different nodes in a Rasch tree can be interpreted, and we emphasize the influence of purification strategies for the Mantel-Haenszel procedure on tree stopping and DIF item classification.
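A minimal sketch of the stopping quantity described above: the Mantel-Haenszel common odds ratio across matched ability levels is converted to the ETS delta scale (ΔMH = −2.35 ln αMH) and mapped to the A/B/C classes. The |Δ| thresholds follow the usual ETS scheme, but the significance-test component of the B/C rules is omitted here for brevity.

```python
import math

def mantel_haenszel_delta(tables):
    """tables: list of (ref_correct, ref_wrong, foc_correct, foc_wrong)
    counts, one 2x2 table per matched ability level.
    Returns (alpha_MH, delta_MH) with delta_MH = -2.35 * ln(alpha_MH)."""
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    alpha = num / den
    return alpha, -2.35 * math.log(alpha)

def ets_class(delta):
    """Simplified ETS A/B/C labels from |delta_MH| alone (the full
    scheme additionally requires statistical tests, omitted here)."""
    if abs(delta) < 1.0:
        return "A"   # negligible DIF
    if abs(delta) < 1.5:
        return "B"   # moderate DIF
    return "C"       # large DIF
```

In the proposed Rasch-tree criterion, a split would only be retained if at least one item's classification exceeds the negligible category.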


2021 ◽  
Vol 6 (3) ◽  
pp. 74-75
Author(s):  
Soudabeh Hamedi-Shahraki ◽  
Farshad Amirkhizi

Statistical significance does not necessarily mean clinical significance. A P value less than 0.05 does not guarantee the clinical effectiveness of a treatment. To assess the clinical value of a treatment, the effect size must be calculated. The number needed to treat (NNT) is an example of an effect size measure that can be very helpful in determining the clinical significance of a treatment. Therefore, all researchers and physicians are advised to look beyond the P value and calculate the NNT when assessing the clinical significance of therapeutic measures and agents.
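The NNT the abstract refers to is simple enough to state directly: it is the reciprocal of the absolute risk reduction, conventionally rounded up to a whole number of patients. The event rates below are made-up values for illustration.

```python
import math

def number_needed_to_treat(control_event_rate, treated_event_rate):
    """NNT = 1 / absolute risk reduction (ARR), rounded up to a whole
    number of patients who must be treated to prevent one event."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return math.ceil(1.0 / arr)

# Example: events in 20% of controls vs 15% of treated patients
# -> ARR = 0.05, so 20 patients must be treated to prevent one event.
nnt = number_needed_to_treat(0.20, 0.15)
```

Note how a tiny ARR can be statistically significant in a large trial yet yield an NNT in the hundreds, which is exactly the statistical-versus-clinical distinction the abstract draws.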


2021 ◽  
Vol VI (III) ◽  
pp. 71-78
Author(s):  
Muhammad Naveed Khalid ◽  
Farah Shafiq ◽  
Shehzad Ahmed

Differential item functioning (DIF) is a procedure to identify whether an item favours a particular group of respondents once they are matched on their respective ability levels. Numerous procedures for detecting DIF are reported in the literature, but the Mantel-Haenszel (MH), Standardized Proportion Difference (SPD), and BILOG-MG procedures are those most frequently used to ensure the fairness of assessments. The aim of the present study was to compare these procedures' characteristics using empirical data. We found that Mantel-Haenszel and standardized proportion difference provide comparable results, while BILOG-MG flagged a large number of items whose magnitude of DIF was trivial from a test development perspective. The results also showed that the Mantel-Haenszel and standardized proportion difference indices provide an effect size measure of DIF, which facilitates further necessary actions, especially by item writers and practitioners.
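A sketch of the SPD index under its usual formulation, a focal-group-weighted average of the proportion-correct differences across matched score levels. The weighting choice is the standard STD P-DIF convention, not necessarily the exact variant used in this study.

```python
def std_p_dif(levels):
    """Standardized proportion difference for one item.
    levels: list of (n_focal, p_focal_correct, p_ref_correct) per
    matched score level. Weighted by focal-group counts; negative
    values indicate the item disadvantages the focal group."""
    num = sum(n * (p_foc - p_ref) for n, p_foc, p_ref in levels)
    den = sum(n for n, _, _ in levels)
    return num / den
```

Because the result stays on the proportion-correct scale, values near zero are directly readable as negligible DIF, which is what makes SPD useful as an effect size alongside a significance test.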


2021 ◽  
Author(s):  
Katharina Groskurth ◽  
Matthias Bluemke ◽  
Clemens M. Lechner ◽  
Tenko Raykov

When scalar invariance does not hold, which is often the case in applications, the bias due to non-invariance may or may not be consequential for observed mean comparisons. So far, only a few attempts have been made to quantify the extent of bias due to measurement non-invariance. Building on Millsap and Olivera-Aguilar (2012), we derived a new effect size measure, the Measurement Invariance Violation Index (MIVI), from first principles. MIVI assumes only partial scalar invariance for a set of items forming a scale and quantifies the intercept difference of one non-invariant item (at the item-score level) or several non-invariant items (at the scale-score level) as the share (i.e., proportion) of the total observed scale-score difference between groups. By using directional rather than absolute terms, one can inspect cancellation effects of item bias at the scale-score level. We provide computational code and exemplify MIVI in simulated contexts.
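Reading MIVI directly from the description above (item intercept differences as a share of the observed between-group scale-score difference), a hypothetical sketch might look as follows. The function name and inputs are illustrative, not the authors' code.

```python
def mivi(intercept_diffs, scale_score_diff):
    """Hypothetical sketch of MIVI as described in the abstract.
    intercept_diffs: signed intercept differences (group B - group A)
    of the non-invariant items, under partial scalar invariance.
    scale_score_diff: total observed scale-score difference between
    groups. Returns per-item shares and the scale-level share."""
    item_level = [d / scale_score_diff for d in intercept_diffs]
    scale_level = sum(intercept_diffs) / scale_score_diff
    return item_level, scale_level
```

Keeping the shares signed (directional) is what exposes the cancellation effect the abstract mentions: two items biased in opposite directions can yield a near-zero scale-level share despite sizeable item-level shares.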


2021 ◽  
Vol 14 (3) ◽  
pp. 205979912110559
Author(s):  
Johnson Ching-Hong Li ◽  
Marcello Nesca ◽  
Rory Michael Waisman ◽  
Yongtian Cheng ◽  
Virginia Man Chung Tze

A common research question in psychology entails examining whether significant group differences (e.g. male and female) can be found in a list of numeric variables that measure the same underlying construct (e.g. intelligence). Researchers often use a multivariate analysis of variance (MANOVA), which is based on conventional null-hypothesis significance testing (NHST). Recently, a number of quantitative researchers have suggested reporting an effect size measure (ES) in this research scenario because of the perceived shortcomings of NHST. Thus, a number of MANOVA ESs have been proposed (e.g. generalized eta squared, η²_G; generalized omega squared, ω²_G), but they rely on two key assumptions (multivariate normality and homogeneity of covariance matrices) that are frequently violated in psychological research. To solve this problem, we propose a non-parametric (assumption-free) ES (A_w) for MANOVA. The new ES is developed on the basis of the non-parametric A in ANOVA. To test A_w, we conducted a Monte Carlo simulation. The results showed that A_w was accurate (robust) across the manipulated conditions, including non-normal distributions, unequal covariance matrices between groups, total sample sizes, sample size ratios, true ES values, and numbers of dependent variables, thereby providing empirical evidence supporting the use of A_w, particularly when key assumptions are violated. Implications of the proposed A_w for psychological research and other disciplines are also discussed.
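The univariate non-parametric A that A_w builds on is the probability that a randomly drawn score from one group exceeds a randomly drawn score from the other, with ties counted half. A direct sketch, leaving aside the paper's multivariate weighting:

```python
def vargha_delaney_A(group1, group2):
    """Non-parametric effect size A = P(X > Y) + 0.5 * P(X = Y) for
    X drawn from group1 and Y from group2. A = 0.5 means stochastic
    equality; 0 and 1 mean complete separation. Distribution-free,
    so it needs neither normality nor equal variances."""
    wins = ties = 0
    for x in group1:
        for y in group2:
            if x > y:
                wins += 1
            elif x == y:
                ties += 1
    return (wins + 0.5 * ties) / (len(group1) * len(group2))
```

An A_w-style statistic would aggregate this quantity across the dependent variables; the exact aggregation is defined in the paper and is not reproduced here.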


Bio-Research ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 1237-1245
Author(s):  
Ikenna Bruno Aguh ◽  
Zurmi Rabiu Sani ◽  
Lynda Chinanu Ohaleme ◽  
Andover Alfred Agba

Body mass index (BMI) has traditionally been used as an indicator of body size and composition, although other measures of abdominal adiposity, such as waist circumference (WC), waist-hip ratio (WHR), and neck circumference (NC), have been suggested as superior to BMI in predicting disease outcome. This study was designed to compare different anthropometric variables in terms of their ability to predict type 2 diabetes mellitus (T2DM). This was a case-control study of 240 participants, comprising 120 verified T2DM cases and 120 non-diabetic controls. Age, gender, and anthropometric data were collected from each participant. Logistic regression models with areas under the receiver operating characteristic curve (AROC) were used to compare the variables' predictive statistics. The AROC of WHR for identifying T2DM patients was 0.678 (P<0.05), with a sensitivity of 62.5% and a specificity of 60.8%. The AROC for the average arm circumference (AAC) model was 0.649, with a sensitivity of 55.8%, followed by the BMI model (AROC 0.635) and the WC model (AROC 0.600) (P<0.05). The hip circumference (HC) (AROC 0.508) and NC (AROC 0.492) models were not significant predictors of T2DM. Subjects aged ≥60 years, or with AAC ≥32.6 cm, BMI ≥30 kg/m2, or WHR ≥0.93, were at significantly (P<0.05) higher odds of developing T2DM than subjects with lower values. There were no significant differences (P>0.05) in the mean HC and NC values between the diabetic and non-diabetic subjects. The non-diabetic subjects had a significantly higher mean height than the diabetic subjects. Measures of generalized and central obesity were significantly associated with increased risk of developing T2DM. This study revealed that WHR predicts type 2 diabetes mellitus risk more accurately than the other anthropometric measures and can thus be helpful in identifying patients at risk of future diabetes and providing necessary interventions.
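The cutoff-based figures above (e.g. WHR ≥ 0.93) reduce to simple proportions over cases and controls. A sketch with made-up values, assuming a "value at or above cutoff predicts T2DM" rule:

```python
def sens_spec(values_cases, values_controls, cutoff):
    """Sensitivity and specificity of the rule 'value >= cutoff
    predicts T2DM'. Sensitivity: fraction of true cases flagged;
    specificity: fraction of controls correctly left unflagged."""
    sens = sum(v >= cutoff for v in values_cases) / len(values_cases)
    spec = sum(v < cutoff for v in values_controls) / len(values_controls)
    return sens, spec

# Illustrative WHR values only, not the study's data:
sens, spec = sens_spec([0.95, 0.90, 1.00], [0.85, 0.92, 0.80], 0.93)
```

The AROC reported in the study summarizes this trade-off over all possible cutoffs rather than at one threshold.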


2021 ◽  
pp. 1-12
Author(s):  
Carel P. van Schaik ◽  
Zegni Triki ◽  
Redouan Bshary ◽  
Sandra A. Heldstab

Both absolute and relative brain sizes vary greatly among and within the major vertebrate lineages. Scientists have long debated how larger brains in primates and hominins translate into greater cognitive performance, and in particular how to control for the relationship between the noncognitive functions of the brain and body size. One solution to this problem is to establish the slope of cognitive equivalence, i.e., the line connecting organisms with an identical bauplan but different body sizes. The original approach to estimate this slope through intraspecific regressions was abandoned after it became clear that it generated slopes that were too low by an unknown margin due to estimation error. Here, we revisit this method. We control for the error problem by focusing on highly dimorphic primate species with large sample sizes and fitting a line through the mean values for adult females and males. We obtain the best estimate for the slope of circa 0.27, a value much lower than those constructed using all mammal species and close to the value expected based on the genetic correlation between brain size and body size. We also find that the estimate of cognitive brain size based on cognitive equivalence fits empirical cognitive studies better than the encephalization quotient, which should therefore be avoided in future studies on primates and presumably mammals and birds in general. The use of residuals from the line of cognitive equivalence may change conclusions concerning the cognitive abilities of extant and extinct primate species, including hominins.
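The two-point estimator described above reduces to a slope in log-log space per species, averaged across species. A sketch with illustrative values (the paper's actual estimate of circa 0.27 comes from real data on highly dimorphic primates):

```python
import math

def cognitive_equivalence_slope(species):
    """species: list of ((body_f, brain_f), (body_m, brain_m)) pairs of
    adult female and male mean body and brain sizes. For each species,
    the slope of the log-log line through the two sex means; returns
    the across-species average, the estimator described in the abstract."""
    slopes = []
    for (body_f, brain_f), (body_m, brain_m) in species:
        slopes.append((math.log(brain_m) - math.log(brain_f)) /
                      (math.log(body_m) - math.log(body_f)))
    return sum(slopes) / len(slopes)

# Synthetic check: if brain scaled exactly as body**0.27 within a
# species, the estimator should recover 0.27.
slope = cognitive_equivalence_slope([((10.0, 10.0 ** 0.27),
                                      (20.0, 20.0 ** 0.27))])
```

Because both sexes share a bauplan, this within-species slope is what the authors treat as the line of cognitive equivalence, in contrast to the steeper across-species allometric slope.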


2021 ◽  
Vol 1 (1) ◽  
pp. 20-27
Author(s):  
Letnan Kolonel Elektronika Imat Rakhmat Hidayat, S.T., M.Eng

Prime numbers play a growing role in computer science and number theory, and there is a real need for tools that can generate regular prime-number sequence patterns effectively and efficiently. A bit-array structure, which represents an aggregate of data elements of equal type with one entry per candidate, can be used both to generate such sequences and to store the resulting numbers. Prime numbers are very useful as the basis of public-key cryptographic algorithms, and hash table algorithms work best with prime sizes in order to minimize collisions. Determining the pattern of a very large prime-number sequence is not easy, so the problem becomes finding the fastest way to generate very large prime-number sequences. Serial use of a single processor for this search is inefficient, given the long computing time required, while using multiple processors raises cost problems and requires new software. A prime-number generator based on a bit-array structure is therefore expected to overcome the difficulty of finding large prime-number sequence patterns without resorting to multiple processors, while also minimizing time complexity. The execution-time savings are visible in the research data: on the input 676,999,999, the Atkin algorithm took 4235747.00 seconds to execute, while the bit-array algorithm took 13955.00 seconds, a difference of 4221792.00 seconds.
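The bit-array idea can be illustrated with a sieve of Eratosthenes over a compact flag array. This sketch uses a Python bytearray (one byte per candidate) rather than true one-bit packing, and it is not the Atkin comparison algorithm from the study:

```python
def primes_bit_array(limit):
    """Sieve of Eratosthenes on a compact flag array: flags[i] == 1
    means i is still a prime candidate. A true bit-per-candidate
    layout would cut memory a further 8x at the cost of bit twiddling."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"                      # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            # Clear every multiple of p starting at p*p in one slice write.
            flags[p * p :: p] = bytes(len(flags[p * p :: p]))
    return [i for i in range(limit + 1) if flags[i]]
```

The O(1) array indexing and bulk slice clearing are what make the flag-array approach fast on a single processor, which is the efficiency argument the abstract makes.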


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
David Reiffen ◽  
Bruce Tuckman

Purpose
Many recently enacted financial regulations exempt smaller entities. While the literature on systemic risk provides efficiency justifications for certain exemptions, the efficiency rationale depends on measuring size appropriately. This paper argues that notional amount, the metric used in derivatives regulations, is a flawed measure of an entity's contribution to systemic risk. It discusses an alternative size measure, entity-netted notionals (ENNs), which better reflects risk exposure as discussed in that literature, and provides empirical evidence on the two metrics.

Design/methodology/approach
The study first discusses the relationship between the systemic risk literature and size-based exemptions, then describes the current metric and the risk-based alternative. Finally, it presents regulatory data on US interest rate swaps (IRS) and uses these data to characterize some features of risk exposure.

Findings
The unique data set provides empirical insight into how well the size metric used in current regulations corresponds to the more theoretically oriented measure. The relationship between the metrics is fairly weak for entities whose size-based exemption will soon be ending, and the data provide an empirical basis for understanding why the metrics differ. The study also provides evidence on the correlation of risk within this group of entities.

Practical implications
The paper has important implications for the regulation of derivatives and of financial markets more generally. To the extent that exemptions for small entities make good policy, having the appropriate metric is critical, and as such the metric could be a valuable tool for regulators.

Originality/value
This paper examines the likely objectives of size-based exemptions from financial regulations and relates them to the systemic risk literature. It provides a unique empirical description of IRS positions, which allows an examination of the relationship between the regulators' metric and the alternative.
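The netting idea behind ENNs can be sketched for a single entity: offsetting long and short positions with the same counterparty cancel, unlike gross notional. Real ENNs also convert positions to five-year risk equivalents and net within currency, which is omitted here, and the data layout is hypothetical:

```python
from collections import defaultdict

def entity_netted_notional(positions):
    """Sketch of the ENNs idea from the abstract, for one entity.
    positions: list of (counterparty, signed_notional) where positive
    means receiving fixed and negative means paying fixed. Long and
    short exposure to the same counterparty nets out; gross notional
    does not. Returns (enn, gross_notional)."""
    net = defaultdict(float)
    for counterparty, notional in positions:
        net[counterparty] += notional
    enn = sum(abs(v) for v in net.values())
    gross = sum(abs(n) for _, n in positions)
    return enn, gross

# A hedged book looks large by gross notional but small by ENNs:
enn, gross = entity_netted_notional([("A", 100.0), ("A", -60.0), ("B", 50.0)])
```

This gap between the two numbers is precisely why the paper finds the relationship between notional amount and risk exposure to be weak for some entities.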

