Prediction of Achievement of Doctoral Students in Education

1992
Vol 74 (2)
pp. 419-423
Author(s):
Raymond C. Kluever
Kathy E. Green

The purpose of this study was to analyze associations among Graduate Record Examination scores, graduate grade point average, and faculty ratings of success for 311 doctoral students in a School of Education. Moderate correlations were obtained between examination scores and the criterion variables of faculty ratings and graduate GPAs. Students in the lowest 10% of the examination score range had significantly lower criterion grades and ratings than those in the highest 10%. Examination scores and score combinations serve as a useful gross screening indicator of potential success for doctoral study in education.
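
A minimal sketch (not the authors' code) of the kind of analysis this abstract describes: zero-order correlations between examination scores and the criterion measures, plus a contrast of the lowest and highest 10% of scorers. The data file and column names (gre_total, grad_gpa, faculty_rating) are hypothetical.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("doctoral_students.csv")  # hypothetical data file

# Zero-order correlations between the examination score and each criterion
for criterion in ["grad_gpa", "faculty_rating"]:
    r, p = stats.pearsonr(df["gre_total"], df[criterion])
    print(f"GRE vs {criterion}: r = {r:.2f}, p = {p:.3f}")

# Contrast students in the bottom vs top decile of the score range
low_cut, high_cut = df["gre_total"].quantile([0.10, 0.90])
low = df[df["gre_total"] <= low_cut]
high = df[df["gre_total"] >= high_cut]
t, p = stats.ttest_ind(high["grad_gpa"], low["grad_gpa"], equal_var=False)
print(f"Top vs bottom decile GPA: t = {t:.2f}, p = {p:.3f}")
```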

1993
Vol 18 (1)
pp. 91-107
Author(s):
Rebecca Zwick

A validity study was conducted to examine the degree to which GMAT scores and undergraduate grade-point average (UGPA) could predict first-year average (FYA) and final grade-point average in doctoral programs in business. A variety of empirical Bayes regression models, some of which took into account possible differences in regressions across schools and cohorts, were investigated for this purpose. Indexes of model fit showed that the most parsimonious model, which did not allow for school or cohort effects, was just as useful for prediction as the more complex models. The three preadmissions measures were found to be associated with graduate school grades, though to a lesser degree than in MBA programs. The prediction achieved using UGPA alone as a predictor tended to be more accurate than that obtained using GMAT verbal (GMATV) and GMAT quantitative (GMATQ) scores together. Including all three predictors was more effective than using only UGPA. The most likely explanation for the lower levels of prediction than in MBA programs is that doctoral programs tend to be more selective. Within-school means on GMATV, GMATQ, UGPA, and FYA were higher than those found in MBA validity studies; within-school standard deviations on FYA tended to be smaller. Among these very select, academically competent doctoral students, highly accurate prediction of grades may not be possible.
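
A minimal sketch (not Zwick's analysis) of how a pooled prediction model might be compared with one that allows school-level effects, in the spirit of the model comparison described above; a random-intercept mixed model stands in here for the empirical Bayes treatment of between-school differences. The data file and column names (fya, ugpa, gmat_v, gmat_q, school) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("doctoral_business_grades.csv")  # hypothetical data file

# Most parsimonious model: no school or cohort effects
pooled = smf.ols("fya ~ ugpa + gmat_v + gmat_q", data=df).fit()

# More complex model: random intercepts for school
by_school = smf.mixedlm("fya ~ ugpa + gmat_v + gmat_q", data=df,
                        groups=df["school"]).fit()

print(pooled.summary())
print(by_school.summary())
# Comparable predictive accuracy would suggest the simpler model suffices.
```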


2015
Vol 69 (Suppl. 1)
pp. 6911505120p1
Author(s):
Alaena Haber
Allie Fen
Katherine Perrine
Jessica Jin
Molly Bathje
...

1978
Vol 42 (3)
pp. 779-783
Author(s):
A. Marilyn Sime

This study examined the predictive validity of the Undergraduate Assessment Program (UP) Aptitude Test Verbal and Quantitative scores, Remote Associates Test scores, Formula Analysis Test scores, and undergraduate grade-point average (GPA). Criterion variables consisted of Master's GPAs and faculty ratings of six student characteristics. Subjects were 52 students enrolled in the Graduate School at the University of Minnesota majoring in Nursing. Undergraduate GPA correlated .45 with Master's GPA and .35 with one faculty rating. Formula Analysis Test scores correlated from .45 to .70 with all six faculty ratings. Remote Associates Test scores and UP Aptitude Test Verbal and Quantitative scores did not correlate at a statistically significant level with any criterion measure. Results show undergraduate GPA to be a valid predictor of graduate success. Although the Formula Analysis Test scores showed predictive ability for faculty ratings, this research instrument requires further study before use as a graduate admissions measure.
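
A small worked check (illustrative, not from the article) of whether a correlation of the size reported here clears conventional significance with n = 52, using the t transformation of r:

```python
import math
from scipy import stats

n, r = 52, 0.45                              # values reported in the abstract
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # t with n - 2 degrees of freedom
p = 2 * stats.t.sf(abs(t), df=n - 2)            # two-tailed p-value
print(f"r = {r}, n = {n}: t({n - 2}) = {t:.2f}, two-tailed p = {p:.4f}")
```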


2019
Vol 5 (1)
pp. eaat7550
Author(s):
Casey W. Miller
Benjamin M. Zwickl
Julie R. Posselt
Rachel T. Silvestrini
Theodore Hodapp

This study aims to understand the effectiveness of typical admissions criteria in identifying students who will complete the Physics Ph.D. Multivariate statistical analysis of roughly one in eight physics Ph.D. students from 2000 to 2010 indicates that the traditional admissions metrics of undergraduate grade point average (GPA) and the Graduate Record Examination (GRE) Quantitative, Verbal, and Physics Subject Tests do not predict completion as effectively as admissions committees presume. Significant associations with completion were found for undergraduate GPA in all models and for GRE Quantitative in two of four studied models; GRE Physics and GRE Verbal were not significant in any model. It is notable that completion changed by less than 10% for U.S. physics major test takers scoring in the 10th versus 90th percentile on the Quantitative test. Aside from these limitations in predicting Ph.D. completion overall, overreliance on GRE scores in admissions processes also selects against underrepresented groups.
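
A minimal sketch (not the authors' models) of a logistic regression of Ph.D. completion on undergraduate GPA and GRE scores, the kind of multivariate analysis the abstract describes, including a comparison of predicted completion at the 10th versus 90th percentile of the Quantitative score. The data file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("physics_phd_cohorts.csv")  # hypothetical data file
# completed: 1 if the student finished the Ph.D., 0 otherwise

model = smf.logit("completed ~ ugpa + gre_q + gre_v + gre_physics",
                  data=df).fit()
print(model.summary())

# Predicted completion probability at the 10th vs 90th percentile of GRE-Q,
# holding the other predictors at their sample means
at_means = df[["ugpa", "gre_q", "gre_v", "gre_physics"]].mean().to_frame().T
low_q, high_q = df["gre_q"].quantile([0.10, 0.90])
for label, q in [("10th pct GRE-Q", low_q), ("90th pct GRE-Q", high_q)]:
    row = at_means.assign(gre_q=q)
    p_hat = float(np.asarray(model.predict(row))[0])
    print(label, round(p_hat, 3))
```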


2007
Vol 87 (9)
pp. 1181-1193
Author(s):
Ralph R Utzman
Daniel L Riddle
Dianne V Jewell

Background and Purpose: The purpose of this study was to determine whether admissions data could be used to estimate physical therapist student risk for failing the National Physical Therapy Examination (NPTE). Subjects: A nationally representative sample of 20 physical therapist education programs provided data on 3,365 students. Methods: Programs provided data regarding demographic characteristics, undergraduate grade point average (uGPA), and quantitative and verbal Graduate Record Examination scores (qGRE, vGRE). The Federation of State Boards of Physical Therapy provided NPTE data. Data were analyzed using hierarchical logistic regression. Results: A prediction rule that included uGPA, vGRE, qGRE, and race or ethnicity was developed from the entire sample. Prediction rules for individual programs showed large variation. Discussion and Conclusion: Undergraduate grade point average, GRE scores, and race or ethnicity can be useful for estimating student risk for failing the NPTE. Programs should use GPA and GRE scores along with other data to calculate their own estimates of student risk.
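
A minimal sketch (not the published prediction rule) of an overall logistic model of NPTE failure plus program-by-program refits, illustrating how much program-level rules can vary; race or ethnicity is omitted here for simplicity. The data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pt_students.csv")  # hypothetical data file
# failed_npte: 1 if the student failed the NPTE, 0 otherwise

# Pooled model across all programs
overall = smf.logit("failed_npte ~ ugpa + gre_v + gre_q", data=df).fit(disp=0)
print("overall", overall.params.round(2).to_dict())

# Refit within each program to see how the coefficients vary
for program, grp in df.groupby("program"):
    fit = smf.logit("failed_npte ~ ugpa + gre_v + gre_q", data=grp).fit(disp=0)
    print(program, fit.params.round(2).to_dict())
```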


1992
Vol 71 (3)
pp. 1019-1022
Author(s):
J. Daniel House
James J. Johnson

This study investigated the predictive relationship between GRE scores, cumulative undergraduate grade point average, and the length of time (in semesters) from the initiation of graduate study until final completion of a master's degree. Records were evaluated for 291 graduate students in psychology who completed master's degrees during a six-year period. Higher cumulative undergraduate grade point averages were significantly correlated with fewer semesters required to complete the degree for the entire sample. No chi-squared values for the contrasts between program areas were significant, indicating that the correlations obtained can be considered estimates of the same population value.
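
A minimal sketch (not the authors' code) of the chi-squared homogeneity test alluded to above: Fisher r-to-z transforms of the per-program-area correlations between undergraduate GPA and semesters to degree, compared against their weighted mean. The data file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("masters_completion.csv")  # hypothetical data file

zs, ns = [], []
for area, grp in df.groupby("program_area"):
    r, _ = stats.pearsonr(grp["ugpa"], grp["semesters_to_degree"])
    zs.append(np.arctanh(r))        # Fisher r-to-z transform
    ns.append(len(grp))

zs, ns = np.array(zs), np.array(ns)
w = ns - 3                          # weights for the z's
z_bar = np.sum(w * zs) / np.sum(w)
chi2 = np.sum(w * (zs - z_bar) ** 2)   # homogeneity chi-square, df = k - 1
p = stats.chi2.sf(chi2, df=len(zs) - 1)
print(f"chi2({len(zs) - 1}) = {chi2:.2f}, p = {p:.3f}")
# A nonsignificant result is consistent with the correlations estimating
# the same population value across program areas.
```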

