State of the art in selection of variables and functional forms in multivariable analysis—outstanding issues

Author(s):  
Willi Sauerbrei ◽  
Aris Perperoglou ◽  
Matthias Schmid ◽  
Michal Abrahamowicz ◽  
...  
Policy Papers ◽  
2006 ◽  
Vol 2006 (64) ◽  
Author(s):  

The Board of Governors, in a Resolution adopted on September 18, requested that the Executive Board reach agreement on a new quota formula, starting discussions soon after the Annual Meetings in Singapore. According to the Resolution, this work should be completed by the Annual Meetings in 2007, and no later than the IMFC Meeting in the Spring of 2008. The Resolution states that the new formula should provide a simpler and more transparent means of capturing members’ relative positions in the world economy. This new formula would provide the basis for a second round of ad hoc quota increases, as part of the program of quota and voice reform to be completed by the Annual Meetings in 2007, and no later than the Annual Meetings of 2008. This paper explores key issues related to a new quota formula as background for an informal Board seminar. This seminar is the first opportunity for the Board to discuss the new formula since the adoption of the Resolution. The paper first reviews the broad considerations and principles that should guide the design of a new quota formula, taking as a starting point the roles of quotas in the Fund. The paper then considers more specific issues in that light, such as the selection of variables and possible functional forms for the new formula. In examining these issues, the paper draws on the extensive discussion of the quota formulas in recent years, taking up questions raised both within the Board and in other fora.


2021 ◽  
pp. 026553222110361
Author(s):  
Chao Han

Over the past decade, testing and assessing spoken-language interpreting has garnered increasing attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because, in these fields, assessment results provide a critical evidential basis for high-stakes decisions, such as the selection of prospective students, the certification of interpreters, and the confirmation or refutation of research hypotheses. However, few reviews exist that comprehensively map the relevant practice and research. The present article therefore aims to offer a state-of-the-art review, summarizing the existing literature and identifying potential lacunae. In particular, the article first provides an overview of interpreting ability/competence and relevant research, followed by the main testing and assessment practices (e.g., assessment tasks, assessment criteria, scoring methods, specificities of scoring operationalization), with a focus on operational diversity and psychometric properties. Second, the review describes a limited yet steadily growing body of empirical research that examines rater-mediated interpreting assessment, and casts light on automatic assessment as an emerging research topic. Third, the review discusses epistemological, psychometric, and practical challenges facing interpreting testers. Finally, it identifies future directions that could address the challenges arising from fast-changing pedagogical, educational, and professional landscapes.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Saba Moeinizade ◽  
Ye Han ◽  
Hieu Pham ◽  
Guiping Hu ◽  
Lizhi Wang

Multiple trait introgression is the process by which multiple desirable traits are transferred from a donor to a recipient cultivar through backcrossing and selfing. The goal of this procedure is to recover all the attributes of the recipient cultivar, with the addition of the specified desirable traits. A crucial step in this process is the selection of parents to form new crosses. In this study, we propose a new selection approach that estimates the genetic distribution of the progeny of backcrosses after multiple generations using information on recombination events. Our objective is to select the most promising individuals for further backcrossing or selfing. To demonstrate the effectiveness of the proposed method, a case study was conducted using maize data, comparing our method with state-of-the-art approaches. Simulation results suggest that the proposed method, look-ahead Monte Carlo, achieves a higher probability of success than existing approaches. Our proposed selection method can assist breeders in efficiently designing trait introgression projects.
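To make the selection criterion concrete, here is a minimal sketch of the look-ahead Monte Carlo idea, not the authors' implementation: it assumes a single chromosome of biallelic markers, a homozygous recipient, a simple crossover model, and hypothetical parameter values throughout. Each candidate parent is scored by simulating future backcross generations and counting how often the ideal genotype is recovered.

```python
import numpy as np

rng = np.random.default_rng(42)

N_LOCI = 20          # hypothetical marker count
TARGETS = [3, 11]    # loci where the donor allele (1) must be introgressed
R = 0.1              # recombination probability between adjacent loci
RECIPIENT = np.zeros((2, N_LOCI), dtype=int)   # recipient: all-0 homozygote

def gamete(genome):
    """Sample one gamete: walk along the chromosome, switching strands
    between adjacent loci with probability R (a simple crossover model)."""
    strand = rng.integers(2)
    out = np.empty(N_LOCI, dtype=int)
    for i in range(N_LOCI):
        if i > 0 and rng.random() < R:
            strand = 1 - strand
        out[i] = genome[strand, i]
    return out

def backcross(genome):
    """Cross an individual with the recurrent (recipient) parent."""
    return np.stack([gamete(genome), gamete(RECIPIENT)])

def is_success(genome):
    """Success: donor allele present at every target locus and the
    recipient background fully recovered everywhere else."""
    has_targets = all(genome[:, t].any() for t in TARGETS)
    background = np.delete(genome, TARGETS, axis=1)
    return has_targets and not background.any()

def prob_success(genome, generations=3, n_sims=500):
    """Look-ahead Monte Carlo score: how often does repeated backcrossing
    of this individual reach the ideal genotype within the horizon?"""
    wins = 0
    for _ in range(n_sims):
        g = genome
        for _ in range(generations):
            g = backcross(g)
        wins += is_success(g)
    return wins / n_sims

# Score a hypothetical BC1 population and pick the most promising parent.
f1 = np.stack([np.ones(N_LOCI, dtype=int), np.zeros(N_LOCI, dtype=int)])
candidates = [backcross(f1) for _ in range(10)]
scores = [prob_success(c) for c in candidates]
best = int(np.argmax(scores))
print(f"select candidate {best} (estimated success probability {scores[best]:.3f})")
```

In the actual method, the look-ahead horizon and the success criterion would be tied to the breeding plan (deadline generation, background-recovery threshold); here they are fixed for brevity.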


Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2837
Author(s):  
Saykat Dutta ◽  
Sri Srinivasa Raju M ◽  
Rammohan Mallipeddi ◽  
Kedar Nath Das ◽  
Dong-Gyu Lee

In multi/many-objective evolutionary algorithms (MOEAs), numerous modified dominance relationships have been proposed to alleviate the degraded convergence pressure of Pareto dominance as the number of objectives increases. Recently, the strengthened dominance relation (SDR) was proposed, where the dominance area of a solution is determined by its convergence degree and a niche size (θ̄). Later, in controlled SDR (CSDR), θ̄ and an additional parameter (k) associated with the convergence degree are dynamically adjusted depending on the iteration count. Depending on the problem characteristics and the distribution of the current population, different situations require different values of k, rendering the linear reduction of k based on the generation count ineffective. This is because a particular value of k biases the dominance relationship towards a particular region of the Pareto front (PF). For the same reason, using SDR or CSDR in the environmental selection cannot preserve the diversity of solutions required to cover the entire PF. Therefore, we propose an MOEA, referred to as NSGA-III*, where (1) a modified SDR (MSDR)-based mating selection with an adaptive ensemble of the parameter k prioritizes parents from specific sections of the PF depending on k, and (2) the traditional weight vector and non-dominated sorting-based environmental selection of NSGA-III protects the solutions corresponding to the entire PF. The performance of NSGA-III* compares favourably with state-of-the-art MOEAs on the DTLZ and WFG test suites with up to 10 objectives.
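For readers unfamiliar with SDR, the following is a schematic Python rendering of the published dominance test that CSDR and the proposed MSDR build on. It assumes objective vectors are already normalized to a common scale; the niche angle theta_bar (θ̄) is a free parameter here, and the extra CSDR/MSDR parameter k would further scale the convergence comparison.

```python
import numpy as np

def convergence(f):
    """Convergence degree Con(f): sum of the normalized objective values."""
    return float(np.sum(f))

def angle(f1, f2):
    """Acute angle between two objective vectors in objective space."""
    cos = np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sdr_dominates(f1, f2, theta_bar):
    """True if f1 SDR-dominates f2: inside the niche (angle < theta_bar)
    the better-converged solution wins; outside it, f1's convergence
    degree is penalized in proportion to the angular distance."""
    t = angle(f1, f2)
    if t < theta_bar:
        return convergence(f1) < convergence(f2)
    return convergence(f1) * t / theta_bar < convergence(f2)

# Example: two normalized bi-objective vectors and a 0.2-radian niche.
print(sdr_dominates(np.array([0.2, 0.3]), np.array([0.6, 0.5]), 0.2))
```

The penalty term is what enlarges a solution's dominance area relative to plain Pareto dominance; sweeping k (and hence the effective penalty) is what lets the proposed mating selection favour different sections of the PF.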


2020 ◽  
Vol V (IV) ◽  
pp. 1-9
Author(s):  
Aftab Anwar ◽  
Muhammad Masood Anwar ◽  
Ghulam Yahya Khan

Inflation and trade openness are considered critical measures of an economy's health. This article analyzes the relationship of economic growth with investment, inflation, and trade openness in Pakistan over 1970-2019. The study used ARDL bounds testing to establish long-run relationships and an unrestricted error-correction model to discover short-run interrelations among the selected variables. The results reveal that economic performance is negatively related to inflation and positively linked to investment and trade openness. The policy guidelines from the analysis include promoting policies that increase investment and trade openness in the short and long term; in particular, the findings suggest the government should focus more on investment-friendly policies in the country.
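As a pointer for readers who want to reproduce this kind of analysis, here is a minimal sketch using the ARDL/UECM tools in statsmodels; the series below are synthetic stand-ins for the paper's data, and the lag orders and deterministic case are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import UECM

# Synthetic stand-ins for the paper's annual series (1970-2019).
rng = np.random.default_rng(1)
idx = pd.period_range("1970", "2019", freq="Y")
data = pd.DataFrame({
    "gdp_growth": rng.normal(4, 2, 50),
    "investment": rng.normal(15, 3, 50),
    "inflation":  rng.normal(8, 4, 50),
    "openness":   rng.normal(30, 5, 50),
}, index=idx)

# Unrestricted error-correction model; the bounds test checks for a
# long-run (cointegrating) relationship among the variables.
uecm = UECM(data["gdp_growth"], lags=1,
            exog=data[["investment", "inflation", "openness"]], order=1)
res = uecm.fit()
print(res.bounds_test(case=3))  # F-statistic vs. Pesaran et al. bounds
```

With real data, one would first check the integration order of each series and select lag orders by information criteria before reading the bounds-test F-statistic against the critical bounds.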


Author(s):  
Weixiang Xu ◽  
Xiangyu He ◽  
Tianli Zhao ◽  
Qinghao Hu ◽  
Peisong Wang ◽  
...  

Large neural networks are difficult to deploy on mobile devices because of their intensive computation and storage requirements. To alleviate this, we study ternarization, a balance between efficiency and accuracy that quantizes both weights and activations into ternary values. In previous ternarized neural networks, a hard threshold Δ is introduced to determine the quantization intervals. Although the selection of Δ greatly affects the training results, previous works estimate Δ via an approximation or treat it as a hyper-parameter, which is suboptimal. In this paper, we present Soft Threshold Ternary Networks (STTN), which enable the model to determine quantization intervals automatically instead of depending on a hard threshold. Concretely, we replace the original ternary kernel with the addition of two binary kernels at training time, where ternary values are determined by the combination of the two corresponding binary values. At inference time, we add up the two binary kernels to obtain a single ternary kernel. Our method dramatically outperforms the current state of the art, narrowing the performance gap between full-precision networks and extreme low-bit networks. Experiments on ImageNet achieve new state-of-the-art results with AlexNet (55.6% Top-1) and ResNet-18 (66.2% Top-1).
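The core trick, a ternary kernel expressed as the sum of two binary kernels, fits in a few lines. Below is a PyTorch-style sketch of that idea, not the paper's code: the straight-through estimator, the 0.5 scaling, and the layer shape are illustrative choices, and activation quantization is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftThresholdTernaryConv(nn.Module):
    """Sketch of the STTN idea: at training time the ternary kernel is the
    sum of two binary (sign) kernels, so the quantization intervals emerge
    from training instead of a hard threshold delta."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        self.w2 = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    @staticmethod
    def _binarize(w):
        # Straight-through estimator: forward pass uses sign(w),
        # backward pass treats the op as identity.
        return w + (torch.sign(w) - w).detach()

    def forward(self, x):
        # sign() + sign() lands in {-2, 0, +2}; scaling by 0.5 gives a
        # ternary kernel in {-1, 0, +1}. At inference the sum can be
        # folded once into a single ternary weight tensor.
        w_ternary = 0.5 * (self._binarize(self.w1) + self._binarize(self.w2))
        return F.conv2d(x, w_ternary, padding=1)

layer = SoftThresholdTernaryConv(16, 32)
out = layer(torch.randn(1, 16, 8, 8))   # trains end-to-end via the STE
```

A zero ternary value arises wherever the two binary kernels disagree in sign, which is how the quantization interval is learned implicitly rather than set by a threshold.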


2015 ◽  
Author(s):  
Dorothy V Bishop ◽  
Paul A Thompson

Background: The p-curve is a plot of the distribution of p-values below .05 reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication (p-hacking). We argue that binomial tests on the p-curve are not robust enough to be used for this purpose.

Methods: P-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking.

Results: We first show that a p-curve suggestive of p-hacking can be obtained if researchers misapply parametric tests to data that depart from normality, even when no p-hacking occurs. We go on to show that when there is ghost p-hacking, the shape of the p-curve depends on whether the dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when the simulated variables are intercorrelated. The way p-curves vary according to features of the underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers.

Conclusions: A significant bump in the p-curve just below .05 is not necessarily evidence of p-hacking, and the lack of a bump is not indicative of a lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis.
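The ghost-variable simulation is easy to re-create. Here is a minimal Python version (the authors used R); the sample size, number of dependent variables, and correlation are illustrative, and the "hacked" report simply takes the smallest p-value across the variables.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def ghost_hacked_p(n=30, n_dvs=5, rho=0.0):
    """One null 'experiment' with n_dvs dependent variables correlated at
    rho: a ghost-variable p-hacker tests every DV on the same two groups
    and reports only the smallest p-value."""
    cov = np.full((n_dvs, n_dvs), rho) + (1.0 - rho) * np.eye(n_dvs)
    grp_a = rng.multivariate_normal(np.zeros(n_dvs), cov, size=n)
    grp_b = rng.multivariate_normal(np.zeros(n_dvs), cov, size=n)
    return min(stats.ttest_ind(grp_a[:, j], grp_b[:, j]).pvalue
               for j in range(n_dvs))

ps = np.array([ghost_hacked_p(rho=0.0) for _ in range(5000)])
sig = ps[ps < 0.05]                              # the p-curve's input
counts, _ = np.histogram(sig, bins=np.linspace(0, 0.05, 6))
print(counts / counts.sum())  # ~flat, no bump just below .05 for rho=0
```

Re-running with a nonzero rho changes the shape of the significant-p distribution, which is the paper's point: the p-curve's form depends on features of the underlying data, not only on whether p-hacking occurred.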

