The Impact of Misdiagnosing a Structural Break on Standard Unit Root Tests: Monte Carlo Results for Small Sample Size and Power

2003 ◽  
Vol 27 (1) ◽  
pp. 57-74
Author(s):  
E Moolman ◽  
S K McCoskey
2018 ◽  
Vol 36 (1) ◽  
pp. 17-30 ◽  
Author(s):  
Nabila Jones ◽  
Hannah Bartlett

The aim of this review was to evaluate the literature that has investigated the impact of visual impairment on nutritional status. We identified relevant articles through a multi-staged systematic approach. Fourteen articles were identified as meeting the inclusion criteria. The sample size of the studies ranged from 9 to 761 participants. It was found that visual impairment significantly affects nutritional status. The studies reported that visually impaired people have an abnormal body mass index (BMI); a higher prevalence of both obesity and malnutrition was reported. Visually impaired people find it difficult to shop for, prepare, and eat meals. Most studies had a small sample size, and some did not include a control group for comparison. These limitations suggest that the findings are not conclusive enough to be attributed specifically to visual impairment. Further studies with larger sample sizes are required, with the aim of developing interventions.


2012 ◽  
Vol 30 (15_suppl) ◽  
pp. e15032-e15032
Author(s):  
Mihai Vasile Marinca ◽  
Irina Draga Caruntu ◽  
Ludmila Liliac ◽  
Simona Eliza Giusca ◽  
Andreea Marinca ◽  
...  

e15032 Background: The 1997 IGCCCG Consensus classification provides clinicians with enough information to efficiently choose between treatment options for most GCT patients. Nevertheless, therapy is ineffective in 5-10% of cases (even more in less developed countries), and a similar proportion of patients experience severe side effects. This exploratory study aims to assess the impact of more rigorous and detailed pathology examination on improving the assignment of these patients to prognostic groups and, consequently, on making optimal therapeutic decisions. Methods: Predefined features were reviewed on histology slides from 39 GCT patients followed up for a median of 48.28 months. We designed a uniform pathology protocol focused on identifying potential new prognostic factors. Categorical and continuous variables were quantified using light microscopy and computer-aided morphometry and, because of the small sample size, their statistical correlations were analyzed by exact tests and Spearman’s rho, respectively. Significant (2-sided p-value <0.05, under sample size reserve) coefficient values were entered into hierarchical cluster analysis (HCA). Results: Favorable IGCCCG group, presence of seminoma, glandular tissue pattern, and the presence and histoarchitecture of lymphocytic infiltrate were associated with better survival rates and a lower risk of progression. Invasion of the epididymis and spermatic cord, presence of teratoma, choriocarcinoma and yolk-sac elements, papillary pattern, and cell pleomorphism predicted poorer outcomes. HCA yielded 2 significantly distinct patient groups in terms of overall survival (p=0.018) and time to progression (p=0.080), but not disease-free survival (p=0.614). Conclusions: Quantification of tumor subtypes and other histology features of GCTs (e.g. necrosis, tissue patterns, inflammation) is feasible and, if standardized and performed by an experienced pathologist, may prove useful in the optimal selection of risk groups.
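A minimal sketch of the kind of analysis this abstract describes, assuming a placeholder feature matrix and outcome score rather than the authors' morphometric data: Spearman's rho for feature-outcome correlation, followed by hierarchical cluster analysis of patients into two groups.

```python
# Illustrative only (not the authors' pipeline): Spearman correlations of
# histology features with an outcome score, then hierarchical clustering.
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_patients = 39                      # sample size reported in the abstract
X = rng.random((n_patients, 5))      # placeholder morphometric features
outcome = rng.random(n_patients)     # placeholder outcome score

# Spearman's rho for each continuous feature (exact tests would be used for
# the categorical variables in the actual protocol).
for j in range(X.shape[1]):
    rho, p = spearmanr(X[:, j], outcome)
    print(f"feature {j}: rho={rho:+.2f}, p={p:.3f}")

# Hierarchical cluster analysis on the feature matrix, cut into two groups
# as in the study's survival comparison.
Z = linkage(X, method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(groups))
```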


Author(s):  
Zhigang Wei ◽  
Limin Luo ◽  
Burt Lin ◽  
Dmitri Konson ◽  
Kamran Nikbin

Good durability/reliability performance of products can be achieved by properly constructing and implementing design curves, which are usually obtained by analyzing test data such as fatigue S-N data. A good design curve construction approach should consider sample size, failure probability, and confidence level, and these features are especially critical when the test sample size is small. The authors have developed a design S-N curve construction method based on the tolerance limit concept. However, recent studies have shown that the analytical solutions based on the tolerance limit approach may not be accurate for very small sample sizes because of the assumptions and approximations introduced in the analytical approach. In this paper a Monte Carlo simulation approach is used to construct design curves for test data with an assumed underlying normal (or lognormal) distribution. The difference in the factor K, which reflects the confidence level of the test data, between the analytical solution and the Monte Carlo simulation solutions is compared. Finally, the design curves constructed with these methods are demonstrated and compared using fatigue S-N data with a small sample size.
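As a hedged illustration of the comparison the abstract describes (not the authors' implementation), the sketch below estimates a one-sided tolerance factor K for normally distributed data by Monte Carlo simulation and compares it with the noncentral-t expression, for an arbitrarily chosen small sample size, proportion, and confidence level.

```python
# Minimal sketch: Monte Carlo vs. noncentral-t estimate of the one-sided
# tolerance factor K used to offset a design curve from the mean fit.
import numpy as np
from scipy import stats

n = 6            # small sample size (illustrative)
p = 0.95         # proportion of the population the design curve should bound
gamma = 0.95     # confidence level
z_p = stats.norm.ppf(p)

# Noncentral-t expression for the exact one-sided tolerance factor.
k_analytical = stats.nct.ppf(gamma, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

# Monte Carlo: draw many samples of size n from a standard normal and find the
# K that bounds the p-th quantile with confidence gamma.
rng = np.random.default_rng(1)
samples = rng.standard_normal((200_000, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)
k_mc = np.quantile((xbar + z_p) / s, gamma)

print(f"K (noncentral t): {k_analytical:.3f}")
print(f"K (Monte Carlo) : {k_mc:.3f}")
# A design S-N curve would then be drawn at mean(log N) - K * std(log N).
```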


2014 ◽  
Vol 27 (9) ◽  
pp. 3393-3404 ◽  
Author(s):  
Michael K. Tippett ◽  
Timothy DelSole ◽  
Anthony G. Barnston

Abstract Regression is often used to calibrate climate model forecasts with observations. Reliability is an aspect of forecast quality that refers to the degree of correspondence between forecast probabilities and observed frequencies of occurrence. While regression-corrected climate forecasts are reliable in principle, the estimated regression parameters used in practice are affected by sampling error. The low skill and small sample sizes typically encountered in climate prediction imply substantial sampling error in the estimated regression parameters. Here the reliability of regression-corrected climate forecasts is analyzed for the case of joint-Gaussian distributed ensemble forecasts and observations with regression parameters estimated by least squares. Hypothesis testing of the regression parameters provides direct information about the skill and reliability of the uncorrected ensemble-based probability forecasts. However, the regression-corrected probability forecasts with estimated parameters are systematically “overconfident” because sampling error causes a positive bias in the regression forecast signal variance, despite the fact that the estimates of the regression parameters are themselves unbiased. An analytical description of the reliability diagram of a generic regression-corrected climate forecast is derived and is shown to depend on sample size and population correlation skill, with small sample size and low skill being factors that increase overconfidence. The analytical reliability estimate is shown to capture the effect of sampling error in synthetic data experiments and in a 29-yr dataset of NOAA Climate Forecast System version 2 predictions of seasonal precipitation totals over the Americas. The impact of sampling error on the reliability of regression-corrected forecasts has been previously unrecognized and affects all regression-based forecasts. The use of regression parameters estimated by shrinkage methods such as ridge regression substantially reduces overconfidence.
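A rough synthetic-data sketch of the overconfidence mechanism described above, with an assumed correlation skill and training sample size (not the article's experiment): the squared least-squares slope inflates the estimated forecast signal variance relative to its population value, even though the slope estimate itself is unbiased.

```python
# Synthetic demonstration that small-sample regression calibration yields a
# positively biased forecast signal variance (overconfident forecasts).
import numpy as np

rng = np.random.default_rng(42)
rho = 0.4          # assumed population correlation skill (low, as in seasonal forecasting)
n = 30             # assumed training sample size (roughly a 29-yr hindcast record)
n_trials = 20_000

signal_var_true = rho**2           # variance of the ideal regression forecast
signal_var_est = np.empty(n_trials)

for i in range(n_trials):
    f = rng.standard_normal(n)                                   # standardized ensemble-mean forecast
    o = rho * f + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # observation
    b = np.polyfit(f, o, 1)[0]                                   # least-squares slope
    signal_var_est[i] = b**2 * f.var(ddof=1)                     # estimated forecast signal variance

print(f"true signal variance   : {signal_var_true:.3f}")
print(f"mean estimated variance: {signal_var_est.mean():.3f}  (biased high)")
```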


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Zhengyuan Xu ◽  
Yu Liu ◽  
Mingquan Ye ◽  
Lei Huang ◽  
Hao Yu ◽  
...  

In recent years, sparse representation based classification (SRC) has emerged as a popular technique in face recognition. Traditional SRC focuses on the role of the l1-norm but ignores the impact of collaborative representation (CR), which employs all the training examples over all the classes to represent a test sample. Due to issues such as expression, illumination, pose, and small sample size, face recognition remains a challenging problem. In this paper, we propose a patch-based collaborative representation method for face recognition via Gabor features and a measurement matrix. By using patch-based collaborative representation, the method addresses the inaccuracy of linear representation under small sample sizes. Compared with holistic features, multiscale and multidirection Gabor features are more robust. A measurement matrix is used to reduce the large data volume produced by the Gabor features. Experimental results on several popular face databases, including Extended Yale B, CMU_PIE, and LFW, indicate that the proposed method is more competitive in robustness and accuracy than conventional SR- and CR-based methods.
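The following is a minimal sketch of plain collaborative representation based classification (regularized least squares over all training samples, classification by class-wise residual), omitting the patch decomposition, Gabor feature extraction, and measurement-matrix compression described in the abstract; the feature dimensions and labels are placeholders.

```python
# Minimal collaborative representation classifier (CRC) sketch, not the
# paper's full patch/Gabor/measurement-matrix pipeline.
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Represent test sample y over ALL training columns of D (d x n), then
    assign the class whose columns give the smallest reconstruction residual."""
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)   # ridge-regularized coding
    residuals = {}
    for c in np.unique(labels):
        idx = labels == c
        recon = D[:, idx] @ alpha[idx]
        residuals[c] = np.linalg.norm(y - recon) / (np.linalg.norm(alpha[idx]) + 1e-12)
    return min(residuals, key=residuals.get)

# Toy usage with random "face" features (two classes, few samples per class).
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 8))                # 100-dim features, 8 training samples
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = D[:, 5] + 0.1 * rng.standard_normal(100)     # noisy copy of a class-1 sample
print(crc_classify(D, labels, y))                # expected: 1
```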


1992 ◽  
Vol 20 (1) ◽  
pp. 73-78
Author(s):  
Jacqueline M. Atkinson ◽  
Denise A. Coia

Using an ABA design, the impact of the unexpected delivery of Irn Bru to an out-patient clinic for depressed men was investigated using the Montgomery-Åsberg scale for depression. A significant improvement in both behaviour and affect was seen immediately, with some benefit still evident at one-month follow-up. The effect of the procedure on the multidisciplinary team is also discussed. Some methodological issues, including the small sample size, are explored. Despite the methodological problems, the serious element of the study points to the important impact of unexpected, non-therapeutic elements on clinical behaviour, possibly as a result of the challenge to the therapist-patient relationship.


2011 ◽  
Vol 103 ◽  
pp. 366-371 ◽  
Author(s):  
Wei Hong Zhong ◽  
Xiu Shui Ma ◽  
Ying Dao Li ◽  
Yuan Li

In a contact measurement process, the coordinate measuring machine (CMM) probe introduces dynamic measurement error; therefore, dynamic calibration of the probe tip effective diameter should be done at different probing speeds, and the calibration uncertainty should be given. The Monte Carlo (MC) method for uncertainty evaluation suffers from slow convergence and instability. In this paper, a Quasi-Monte Carlo (QMC) method is presented for evaluating the uncertainty of the probe tip effective diameter. At a fixed positioning speed and approach distance, experimental tests of the probe tip effective diameter are performed at varying probing speeds. The MC and QMC methods are applied to the uncertainty evaluation, and the results are compared and analyzed. The simulation shows that QMC can be used for dynamic uncertainty evaluation of the CMM probe tip. Compared with MC, QMC achieves better stability and precision for small sample sizes and higher computing speed for large sample sizes.
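As an illustration of the MC-versus-QMC comparison (using an assumed toy measurement model, not the authors' calibration setup), the sketch below propagates two normally distributed error terms through a simple effective-diameter model with plain Monte Carlo sampling and with scrambled-Sobol quasi-random sampling.

```python
# Toy uncertainty propagation: plain Monte Carlo vs. Quasi-Monte Carlo (Sobol).
import numpy as np
from scipy.stats import norm, qmc

def effective_diameter(u):
    """Assumed toy model: nominal tip diameter plus two normally distributed
    error terms, driven by uniform variates u in [0, 1)^2."""
    pre_travel = norm(loc=0.0, scale=0.002).ppf(u[:, 0])     # mm, probing-speed dependent
    form_error = norm(loc=0.0, scale=0.001).ppf(u[:, 1])     # mm
    return 3.000 + pre_travel + form_error                   # mm

n = 2**10
rng = np.random.default_rng(7)

u_mc = rng.random((n, 2))                                  # plain Monte Carlo
u_qmc = qmc.Sobol(d=2, scramble=True, seed=7).random(n)    # Quasi-Monte Carlo

for name, u in [("MC ", u_mc), ("QMC", u_qmc)]:
    d = effective_diameter(u)
    print(f"{name}: mean={d.mean():.4f} mm, standard uncertainty={d.std(ddof=1):.4f} mm")
```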

