Studies of Scholarly Productivity in Social Work Using Citation Analysis

1992, Vol 28 (3), pp. 291-299
Author(s): Waldo C. Klein, Martin Bloom

2002, Vol 90 (3), pp. 1051-1054
Author(s): John T. Pardeck

This study explored the scholarly productivity of editors of selected social work and psychology journals. Analysis indicated that editors of psychology journals had statistically significantly greater scholarly achievement than editors of social work journals. These findings suggest that scholarly achievement, as measured here, is of less importance in appointing editors to the five selected social work journals than in appointing editors to the five selected psychology journals.


1999, Vol 23 (1), pp. 67-83
Author(s): C. Dwayne Wilson, M. Anwar Hossain, Bernard Lubin, Mokone Malebo

1994, Vol 46 (9), pp. 225-230
Author(s): Yitzhak Berman, A. Solomon Eaglstein

2018, Vol 10 (1), pp. 87-99
Author(s): Thomas E. Smith, Tyler Edison Carter, Philip J. Osteen, Lisa S. Panisch

Purpose: This study builds on previous investigations of the scholarship of social work faculty using h-index scores. The purpose of this paper is to compare two methods of determining the excellence of social work doctoral programs.

Design/methodology/approach: This study compared rankings of 75 social work doctoral programs using the h-index vs the US News and World Report (USNWR) list. The accuracy of predicting scholarly productivity from USNWR rankings was determined by joint membership in the same quantile block. Information on USNWR rankings, h-index, years of experience, academic rank, and faculty gender was collected. Regression analysis was used to create a predictive model.

Findings: Only 39 percent of USNWR rankings accurately predicted which programs had their reputation and scholarly productivity in the same rating block. Conversely, 41 percent of programs had reputations in a higher block than their scholarly productivity would suggest. The regression model showed that while h-index was a strong predictor of USNWR rank (b=0.07, 95% CI: 0.05, 0.08), additional variance was explained by the unique contributions of faculty size (b=0.01, 95% CI: 0.01, 0.02), college age (b=0.002, 95% CI: <0.001, 0.003), and location in the southeast (b=−0.22, 95% CI: −0.39, −0.06).

Originality/value: For many programs, reputation and scholarly productivity coincide; other programs show markedly different results between the two ranking systems. Although mean program h-indices are the best predictor of USNWR rankings, caution should be used in making statements about inclusion in the "top 10" or "top 20" programs.
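The h-index underlying this comparison has a simple definition: an author (or a program, averaging across its faculty) has index h if h of their papers have at least h citations each. The sketch below is illustrative only, not the authors' actual analysis pipeline; the sample data, block count, and function names are hypothetical. It shows how an h-index might be computed from raw citation counts and how two rankings can be checked for joint membership in the same quantile block, the agreement criterion described in the abstract.

```python
# Illustrative sketch only: hypothetical data, not the study's actual code.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def quantile_block(rank, n_programs, n_blocks=4):
    """Assign a 1-based rank to one of n_blocks equal-sized quantile blocks."""
    block_size = n_programs / n_blocks
    return min(int((rank - 1) // block_size) + 1, n_blocks)

print(h_index([10, 8, 5, 4, 3]))  # -> 4

# Hypothetical programs: (USNWR rank, mean faculty h-index)
programs = [(1, 24), (2, 19), (3, 21), (4, 9), (5, 15), (6, 17), (7, 6), (8, 11)]
n = len(programs)

# Re-rank the same programs by mean h-index (higher h-index -> better rank).
by_h = sorted(programs, key=lambda p: -p[1])
h_rank = {usnwr: i + 1 for i, (usnwr, _) in enumerate(by_h)}

# "Accuracy" here means landing in the same quantile block under both rankings.
agree = sum(
    quantile_block(usnwr, n) == quantile_block(h_rank[usnwr], n)
    for usnwr, _ in programs
)
print(f"Programs in the same block under both rankings: {agree}/{n}")
```

In the study itself the blocks were applied to 75 programs; the toy list above just demonstrates the agreement check, not the reported 39 percent figure.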

