Testing overidentifying restrictions with many instruments and heteroskedasticity using regularized Jackknife IV

2021 ◽  
Author(s):  
Marine Carrasco ◽  
Mohamed Doukali

This paper proposes a new overidentifying restrictions test in a linear model when the number of (possibly weak) instruments may be smaller or larger than the sample size n, or even infinite, in a heteroskedastic framework. The proposed J test combines two techniques: the jackknife method and a regularization technique that stabilizes the projection matrix. We show theoretically that our new test achieves the asymptotically correct size in the presence of many instruments. The simulation results demonstrate that our modified J test statistic has better empirical properties in small samples than existing J tests. We also propose a regularized F-test to assess the strength of the instruments, which is robust to heteroskedasticity and many instruments.
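For orientation, the sketch below implements only the standard, unregularized jackknife IV estimator (JIVE1, often attributed to Angrist, Imbens and Krueger), to illustrate the leave-one-out idea the test builds on; the regularization of the projection matrix and the corrected J statistic proposed in the paper are not implemented here, and all data and parameter choices are hypothetical.

```python
import numpy as np

def jive1(y, X, Z):
    """Standard (unregularized) jackknife IV estimator, JIVE1.
    Leave-one-out fitted regressors remove the own-observation bias
    that 2SLS suffers from with many instruments. The paper's
    regularization of the projection matrix is NOT done here."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)                    # projection matrix P = Z(Z'Z)^{-1}Z'
    h = np.diag(P)                                           # leverages P_ii
    X_hat = (P @ X - h[:, None] * X) / (1.0 - h)[:, None]    # leave-one-out fitted X
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

# Hypothetical example: 30 instruments, one endogenous regressor
rng = np.random.default_rng(0)
n, L = 400, 30
Z = rng.standard_normal((n, L))
v = rng.standard_normal(n)
x = 0.5 * Z[:, 0] + v                          # only the first instrument is relevant
y = 1.0 * x + 0.7 * v + rng.standard_normal(n) # endogeneity via the shared error v
print(jive1(y, x[:, None], Z))                 # should be close to the true coefficient 1.0
```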


2010 ◽  
Vol 27 (2) ◽  
pp. 427-441 ◽  
Author(s):  
Stanislav Anatolyev ◽  
Nikolay Gospodinov

This paper studies the asymptotic validity of the Anderson–Rubin (AR) test and the J test for overidentifying restrictions in linear models with many instruments. When the number of instruments increases at the same rate as the sample size, we establish that the conventional AR and J tests are asymptotically incorrect. Some versions of these tests, which are developed for situations with moderately many instruments, are also shown to be asymptotically invalid in this framework. We propose modifications of the AR and J tests that deliver asymptotically correct sizes. Importantly, the corrected tests are robust to the numerosity of the moment conditions in the sense that they are valid for both few and many instruments. The simulation results illustrate the excellent properties of the proposed tests.
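For reference, here is a minimal sketch of the conventional heteroskedasticity-robust two-step GMM J test in a linear IV model, i.e. the baseline statistic whose size distortion under many instruments motivates the corrections above; the corrected AR and J tests themselves are not reproduced, and the simulated data are hypothetical.

```python
import numpy as np

def j_test(y, X, Z):
    """Conventional two-step GMM J test of overidentifying restrictions
    (heteroskedasticity-robust). Valid for a fixed number of instruments;
    this is the statistic shown to be asymptotically invalid when the
    number of instruments grows with n."""
    n, L = Z.shape
    k = X.shape[1]
    # First step: 2SLS
    W1 = np.linalg.inv(Z.T @ Z)
    beta1 = np.linalg.solve(X.T @ Z @ W1 @ Z.T @ X, X.T @ Z @ W1 @ Z.T @ y)
    u = y - X @ beta1
    # Second step: efficient weighting with the robust moment variance
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / n
    W2 = np.linalg.inv(S)
    beta2 = np.linalg.solve(X.T @ Z @ W2 @ Z.T @ X, X.T @ Z @ W2 @ Z.T @ y)
    g = Z.T @ (y - X @ beta2) / n
    J = n * g @ W2 @ g            # asymptotically chi-squared with L - k d.o.f. (fixed L)
    return J, L - k

# Hypothetical data: one endogenous regressor, five valid instruments
rng = np.random.default_rng(0)
n = 500
Z = rng.standard_normal((n, 5))
v = rng.standard_normal(n)
x = Z @ np.full(5, 0.3) + v
y = 1.0 * x + 0.8 * v + rng.standard_normal(n)
print(j_test(y, x[:, None], Z))
```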



Author(s):  
Les Beach

To test the efficacy of the Personal Orientation Inventory in assessing growth in self-actualization in relation to encounter groups, and to provide a more powerful measure of such changes, pre- and posttest data from 3 highly comparable encounter groups (N = 43) were combined for analysis. Results indicated that the Personal Orientation Inventory is a sensitive instrument for assessing personal growth in encounter groups and that a larger total sample size provides more significant results than those reported for small samples (e.g., fewer than 15 participants).



2017 ◽  
Vol 23 (5) ◽  
pp. 644-646 ◽  
Author(s):  
Maria Pia Sormani

The calculation of the sample size needed for a clinical study is the challenge most frequently put to statisticians, and it is one of the most relevant issues in study design. A correctly sized study uses the minimum number of patients needed to detect the smallest treatment effect that is clinically relevant. Minimizing the sample size of a study has the advantages of reducing costs and enhancing feasibility, and it also has ethical implications. In this brief report, I will explore the main concepts on which the sample size calculation is based.
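As a worked illustration of these concepts (a standard normal-approximation formula, not a calculation taken from the report itself), the per-group sample size for a two-arm comparison of means can be computed as follows.

```python
import math
from scipy.stats import norm

def two_group_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm trial comparing means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2,
    where delta is the minimum clinically relevant difference and
    sigma the common standard deviation (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Detect a 0.5-point difference with SD 1.0, 5% two-sided alpha, 80% power
print(two_group_sample_size(delta=0.5, sigma=1.0))   # 63 patients per group
```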



2011 ◽  
Vol 6 (2) ◽  
pp. 252-277 ◽  
Author(s):  
Stephen T. Ziliak

Student's exacting theory of errors, both random and real, marked a significant advance over ambiguous reports of plant life and fermentation asserted by chemists from Priestley and Lavoisier down to Pasteur and Johannsen, working at the Carlsberg Laboratory. One reason seems to be that William Sealy Gosset (1876–1937) aka “Student” – he of Student's t-table and test of statistical significance – rejected artificial rules about sample size, experimental design, and the level of significance, and took instead an economic approach to the logic of decisions made under uncertainty. In his job as Apprentice Brewer, Head Experimental Brewer, and finally Head Brewer of Guinness, Student produced small samples of experimental barley, malt, and hops, seeking guidance for industrial quality control and maximum expected profit at the large scale brewery. In the process Student invented or inspired half of modern statistics. This article draws on original archival evidence, shedding light on several core yet neglected aspects of Student's methods, that is, Guinnessometrics, not discussed by Ronald A. Fisher (1890–1962). The focus is on Student's small sample, economic approach to real error minimization, particularly in field and laboratory experiments he conducted on barley and malt, 1904 to 1937. Balanced designs of experiments, he found, are more efficient than random and have higher power to detect large and real treatment differences in a series of repeated and independent experiments. Student's world-class achievement poses a challenge to every science. Should statistical methods – such as the choice of sample size, experimental design, and level of significance – follow the purpose of the experiment, rather than the other way around? (JEL classification codes: C10, C90, C93, L66)



PEDIATRICS ◽  
1989 ◽  
Vol 83 (3) ◽  
pp. A72-A72
Author(s):  
Student

The believer in the law of small numbers practices science as follows: 1. He gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high. He overestimates power. 2. He has undue confidence in early trends (e.g., the data of the first few subjects) and in the stability of observed patterns (e.g., the number and identity of significant results). He overestimates significance. 3. In evaluating replications, his or others', he has unreasonably high expectations about the replicability of significant results. He underestimates the breadth of confidence intervals. 4. He rarely attributes a deviation of results from expectations to sampling variability, because he finds a causal "explanation" for any discrepancy. Thus, he has little opportunity to recognize sampling variation in action. His belief in the law of small numbers, therefore, will forever remain intact.
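A small illustrative simulation (numbers assumed, not drawn from the text) makes the first point concrete: with a modest true effect and small groups, the power of a conventional test is far lower than the believer in the law of small numbers expects.

```python
import numpy as np
from scipy.stats import ttest_ind

# With n = 10 per group and a modest true effect (d = 0.5), estimate the power
# of a two-sided t-test by Monte Carlo. Intuition often expects ~0.8; the
# simulation gives roughly 0.2, so most such studies would "fail" by chance alone.
rng = np.random.default_rng(0)
n, d, reps = 10, 0.5, 10_000
sig = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)
    if ttest_ind(a, b).pvalue < 0.05:
        sig += 1
print(f"estimated power at n={n}, d={d}: {sig / reps:.2f}")
```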



2018 ◽  
Vol 15 (2) ◽  
pp. 93 ◽  
Author(s):  
Muhammad Fajar ◽  
Ony Arifianto

The autopilot of an aircraft is developed according to the aircraft's modes of motion, i.e. longitudinal and lateral-directional motion. In this paper, an autopilot is designed in the lateral-directional mode for the LSU-05 aircraft. The autopilot is designed over a range of aircraft operating speeds of 15 m/s, 20 m/s, 25 m/s, and 30 m/s at an altitude of 1000 m. The autopilots designed are Roll Attitude Hold, Heading Hold, and Waypoint Following. The autopilot is designed based on a linear model in state-space form. The controller used is a Proportional-Integral-Derivative (PID) controller. Simulation results show that the overshoot/undershoot does not exceed 5% and the settling time is less than 30 seconds when a step command is given.
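As a hedged illustration of the control structure described above (not the LSU-05 model or the paper's gains), a minimal PID roll-attitude-hold loop on a toy linear roll model can be simulated as follows; the stability derivatives and gains are assumed for the sketch.

```python
import numpy as np

# Toy roll dynamics (illustrative, not the LSU-05 state-space model):
#   phi_dot = p,   p_dot = Lp * p + Lda * delta_a
Lp, Lda = -4.0, 20.0
Kp, Ki, Kd = 2.0, 0.5, 0.3            # hypothetical PID gains
dt, T = 0.01, 30.0
phi_cmd = 1.0                          # unit step roll command (rad)
phi, p, integ, prev_err = 0.0, 0.0, 0.0, phi_cmd
history = []
for _ in range(int(T / dt)):
    err = phi_cmd - phi
    integ += err * dt
    deriv = (err - prev_err) / dt
    delta_a = Kp * err + Ki * integ + Kd * deriv   # PID aileron command
    prev_err = err
    p += (Lp * p + Lda * delta_a) * dt             # Euler integration of the plant
    phi += p * dt
    history.append(phi)

overshoot = (max(history) - phi_cmd) / phi_cmd * 100
outside = [i for i, v in enumerate(history) if abs(v - phi_cmd) > 0.02]
settling = (outside[-1] + 1) * dt if outside else 0.0
print(f"overshoot: {overshoot:.1f} %, 2% settling time: {settling:.2f} s")
```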



Author(s):  
Ken Kobayashi ◽  
Naoki Hamada ◽  
Akiyoshi Sannai ◽  
Akinori Tanaka ◽  
Kenichi Bannai ◽  
...  

Multi-objective optimization problems require simultaneously optimizing two or more objective functions. Many studies have reported that the solution set of an M-objective optimization problem often forms an (M − 1)-dimensional topological simplex (a curved line for M = 2, a curved triangle for M = 3, a curved tetrahedron for M = 4, etc.). Since the dimensionality of the solution set increases as the number of objectives grows, an exponentially large sample size is needed to cover the solution set. To reduce the required sample size, this paper proposes a Bézier simplex model and its fitting algorithm. These techniques can exploit the simplex structure of the solution set and decompose a high-dimensional surface fitting task into a sequence of low-dimensional ones. An approximation theorem of Bézier simplices is proven. Numerical experiments with synthetic and real-world optimization problems demonstrate that the proposed method achieves an accurate approximation of high-dimensional solution sets with small samples. In practice, such an approximation will be conducted in the postoptimization process and enable a better trade-off analysis.
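To make the object concrete, the sketch below evaluates a Bézier simplex: a degree-d map from the (M − 1)-simplex defined by Bernstein basis polynomials and control points indexed by multi-indices. The paper's fitting algorithm is not reproduced; the control points here are random placeholders.

```python
import itertools
import math
import numpy as np

def bernstein(alpha, t):
    """Bernstein basis on the simplex for multi-index alpha at barycentric t."""
    d = sum(alpha)
    coef = math.factorial(d) / math.prod(math.factorial(a) for a in alpha)
    return coef * np.prod([ti ** ai for ti, ai in zip(t, alpha)])

def bezier_simplex(control_points, t):
    """Evaluate a Bezier simplex with control points {alpha: point} at barycentric t."""
    return sum(bernstein(alpha, t) * np.asarray(p) for alpha, p in control_points.items())

# Example: a degree-2 Bezier triangle (M = 3 objectives) mapped into 3-D objective space.
d, M = 2, 3
alphas = [a for a in itertools.product(range(d + 1), repeat=M) if sum(a) == d]
rng = np.random.default_rng(0)
ctrl = {a: rng.random(3) for a in alphas}      # placeholder control points
t = np.array([0.2, 0.3, 0.5])                  # barycentric coordinates, sum to 1
print(bezier_simplex(ctrl, t))
```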



2017 ◽  
Vol 17 (9) ◽  
pp. 1623-1629 ◽  
Author(s):  
Berry Boessenkool ◽  
Gerd Bürger ◽  
Maik Heistermann

Abstract. High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
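The contrast between empirical and parametric quantile estimates can be sketched as follows, assuming a zero threshold and Hosking's parameterization of the GPD fitted from the first two sample L-moments; the data are synthetic stand-ins, not the station observations used in the study.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments (Hosking)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    return b0, 2.0 * b1 - b0

def gpd_quantile_lmom(x, p):
    """p-quantile of a GPD (threshold 0) fitted by L-moments,
    Hosking's k-parameterization: x(F) = (alpha/k) * (1 - (1-F)^k)."""
    lam1, lam2 = sample_l_moments(x)
    k = lam1 / lam2 - 2.0          # shape
    alpha = lam1 * (1.0 + k)       # scale
    return (alpha / k) * (1.0 - (1.0 - p) ** k)

# Hypothetical comparison: 20 rainfall intensities, 0.99 quantile. The empirical
# quantile cannot represent return periods beyond the sample size, while the
# fitted GPD extrapolates into the tail.
rng = np.random.default_rng(1)
sample = rng.gamma(shape=0.8, scale=5.0, size=20)   # stand-in for rainfall data
print("empirical 0.99 quantile   :", np.quantile(sample, 0.99))
print("GPD/L-moment 0.99 quantile:", gpd_quantile_lmom(sample, 0.99))
```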



1997 ◽  
Vol 26 (4) ◽  
pp. 839-851 ◽  
Author(s):  
Keith E. Muller ◽  
Virginia B. Pasour


2018 ◽  
Vol 7 (3) ◽  
pp. 1257
Author(s):  
Khalil Azha Mohd Annuar ◽  
Nik Azran Ab. Hadi ◽  
Mohamad Haniff Harun ◽  
Mohd Firdaus Mohd Ab. Halim ◽  
Siti Nur Suhaila Mirin ◽  
...  

Overhead gantry crane systems are extensively used in harbours and factories for the transportation of heavy loads. The crane acceleration required for motion always induces undesirable load swing. This paper presents the dynamic modelling of a 3D overhead gantry crane system based on closed-form equations of motion. Using the Lagrange technique, a nonlinear dynamic model of the 3D overhead gantry crane system is derived. A linearization is then performed to obtain a linear dynamic model. Finally, simulated responses of the derived nonlinear and linear models are presented, showing the accuracy and performance of both models.
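A minimal sketch of the linearization step is shown below, using a numerical Jacobian of a generic nonlinear state-space model about an equilibrium; the toy dynamics are a simple pendulum-like payload swing, not the paper's 3D gantry crane model.

```python
import numpy as np

def linearize(f, x_eq, u_eq, eps=1e-6):
    """Numerically linearize x_dot = f(x, u) about (x_eq, u_eq) via central
    differences, returning the state matrix A and input matrix B."""
    n, m = len(x_eq), len(u_eq)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_eq + dx, u_eq) - f(x_eq - dx, u_eq)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_eq, u_eq + du) - f(x_eq, u_eq - du)) / (2 * eps)
    return A, B

# Toy stand-in dynamics: planar payload swing driven by trolley acceleration u.
def f(x, u):
    theta, theta_dot = x
    g, L = 9.81, 1.0
    return np.array([theta_dot, -(g / L) * np.sin(theta) - (u[0] / L) * np.cos(theta)])

A, B = linearize(f, np.zeros(2), np.zeros(1))
print(A)
print(B)
```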


