US academic libraries' staffing and expenditure trends (1996–2016)

2020 ◽  
Vol 41 (4/5) ◽  
pp. 247-268 ◽  
Author(s):  
Starr Hoffman ◽  
Samantha Godbey

Purpose: This paper explores trends over time in library staffing and staffing expenditures among two- and four-year colleges and universities in the United States.

Design/methodology/approach: Researchers merged and analyzed data from 1996 to 2016 from the National Center for Education Statistics for over 3,500 libraries at postsecondary institutions. This study is primarily descriptive in nature and addresses the research questions: How do staffing trends in academic libraries over this period of time relate to Carnegie classification and institution size? How do trends in library staffing expenditures over this period of time correspond to these same variables?

Findings: Across all institutions, on average, total library staff decreased from 1998 to 2012. Numbers of librarians declined at master's and doctoral institutions between 1998 and 2016. Numbers of students per librarian increased over time in each Carnegie and size category. Average inflation-adjusted staffing expenditures have remained steady for master's, baccalaureate and associate's institutions. Salaries as a percent of library budget decreased only among doctoral institutions and institutions with 20,000 or more students.

Originality/value: This is a valuable study of trends over time, which has been difficult without downloading and merging separate data sets from multiple government sources. As a result, few studies have taken such an approach to this data. Consequently, institutions and libraries are making decisions about resource allocation based on only a fraction of the available data. Academic libraries can use this study and the resulting data set to benchmark key staffing characteristics.
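The merge-then-benchmark approach described above can be sketched with pandas. The column names and figures below are illustrative placeholders, not the official NCES variable names or real values.

```python
import pandas as pd

# Two hypothetical yearly extracts in the spirit of the NCES library files;
# "unitid", "librarians" and "enrollment" are placeholder column names.
lib_1996 = pd.DataFrame({"unitid": [100, 200],
                         "librarians": [20, 8],
                         "enrollment": [10000, 2400],
                         "year": 1996})
lib_2016 = pd.DataFrame({"unitid": [100, 200],
                         "librarians": [16, 8],
                         "enrollment": [14000, 2600],
                         "year": 2016})

# Stack the separate yearly files into one longitudinal data set keyed by
# institution, then compute a benchmark from the study: students per librarian.
panel = pd.concat([lib_1996, lib_2016], ignore_index=True)
panel["students_per_librarian"] = panel["enrollment"] / panel["librarians"]
trend = panel.groupby("year")["students_per_librarian"].mean()
```

With the real data, each yearly file would be read from the separate government downloads the abstract mentions and aligned on the institution identifier before stacking.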

1998 ◽  
Vol 27 (3) ◽  
pp. 351-369 ◽  
Author(s):  
MICHAEL NOBLE ◽  
SIN YI CHEUNG ◽  
GEORGE SMITH

This article briefly reviews American and British literature on welfare dynamics and examines the concepts of welfare dependency and ‘dependency culture’ with particular reference to lone parents. Using UK benefit data sets, the welfare dynamics of lone mothers are examined to explore the extent to which they inform the debates. Evidence from Housing Benefits data shows that even over a relatively short time period, there is significant turnover in the benefits-dependent lone parent population, with movement in and out of income support as well as movement into other family structures. Younger lone parents and owner-occupiers tend to leave the data set while older lone parents and council tenants are most likely to stay. Some owner-occupier lone parents may be relatively well off and on income support for a relatively short time between separation and a financial settlement being reached. They may also represent a more highly educated and highly skilled group with easier access to the labour market than renters. Any policy moves paralleling those in the United States to time-limit benefits will disproportionately affect older lone parents.


2017 ◽  
Vol 7 ◽  
pp. 46-49 ◽  
Author(s):  
Michael F. Pesko ◽  
Johanna Catherine Maclean ◽  
Cameron M. Kaplan ◽  
Steven C. Hill

2020 ◽  
Vol 122 (11) ◽  
pp. 1-32
Author(s):  
Michael A. Gottfried ◽  
Vi-Nhuan Le ◽  
J. Jacob Kirksey

Background: It is of grave concern that kindergartners are missing more school than students in any other year of elementary school; therefore, documenting which students are absent and for how long is of utmost importance. Yet, doing so for students with disabilities (SWDs) has received little attention. This study addresses this gap by examining two cohorts of SWDs, separated by more than a decade, to document changes in attendance patterns.

Research Questions: First, for SWDs, have the number of school days missed or chronic absenteeism rates changed over time? Second, how are changes in the number of school days missed and chronic absenteeism rates related to changes in academic emphasis, presence of teacher aides, SWD-specific teacher training, and preschool participation?

Subjects: This study uses data from the Early Childhood Longitudinal Study (ECLS), a nationally representative data set of children in kindergarten. We rely on both ECLS data sets: the kindergarten classes of 1998–1999 and 2010–2011. Measures were identical in both data sets, making it feasible to compare children across the two cohorts. Given identical measures, we combined the data sets into a single data set with an indicator for being in the older cohort.

Research Design: This study examined two sets of outcomes: the first was number of days absent, and the second was likelihood of being chronically absent. These outcomes were regressed on a measure for being in the older cohort (our key measure for changes over time) and numerous control variables. The error term was clustered by classroom.

Findings: We found that SWDs are absent more often now than they were a decade earlier, and this growth in absenteeism was larger than what students without disabilities experienced. Absenteeism among SWDs was higher for those enrolled in full-day kindergarten, although having attended center-based care mitigates this disparity over time. Implications are discussed.

Conclusions: Our study calls for additional attention and supports to combat the increasing rates of absenteeism for SWDs over time. Understanding contextual shifts and trends in rates of absenteeism for SWDs in kindergarten is pertinent to crafting effective interventions and research geared toward supporting the academic and social needs of these students.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Tressy Thomas ◽  
Enayat Rajabi

Purpose: The primary aim of this study is to review studies of novel approaches proposed for data imputation, particularly in the machine learning (ML) area, along several dimensions, including the type of method, the experimentation setup and the evaluation metrics used. This ultimately provides an understanding of how well the proposed frameworks are evaluated and what types and ratios of missingness are addressed in the proposals. The review questions in this study are: (1) What ML-based imputation methods were studied and proposed during 2010–2020? (2) How are the experimentation setup, characteristics of the data sets and missingness employed in these studies? (3) What metrics were used for the evaluation of imputation methods?

Design/methodology/approach: The review process went through the standard identification, screening and selection process. The initial search of electronic databases for missing value imputation (MVI) based on ML algorithms returned a large number of papers, totaling 2,883. Most of the papers at this stage did not concern an MVI technique relevant to this study. Titles were first scanned for relevance, and 306 papers were identified as appropriate. Upon review of the abstracts, 151 papers not eligible for this study were dropped. This resulted in 155 research papers suitable for full-text review. From these, 117 papers were used in the assessment of the review questions.

Findings: This study shows that clustering- and instance-based algorithms are the most frequently proposed MVI methods. Percentage of correct prediction (PCP) and root mean square error (RMSE) are the most used evaluation metrics in these studies. For experimentation, the majority of the studies sourced their data sets from publicly available repositories. A common approach is to treat a complete data set as the baseline and evaluate the effectiveness of imputation on test data sets with artificially induced missingness. The data set size and missingness ratio varied across the experiments, while the missing-data type and mechanism bear on the capability of the imputation. Computational expense is a concern, and experimentation using large data sets appears to be a challenge.

Originality/value: It is understood from the review that there is no single universal solution to the missing data problem. Variants of ML approaches work well with particular patterns of missingness, depending on the characteristics of the data set. Most of the methods reviewed lack generalization with regard to applicability. Another concern related to applicability is the complexity of the formulation and implementation of the algorithm. Imputation based on k-nearest neighbors (kNN) and clustering algorithms, which are simple and easy to implement, is popular across various domains.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose: Current popular image processing technologies based on convolutional neural networks have the characteristics of large computation, high storage cost and low accuracy for tiny defect detection, which conflicts with the high real-time performance, high accuracy and limited computing and storage resources required by industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve the above problems.

Design/methodology/approach: On the one hand, this study performs multi-dimensional compression processing on the feature extraction network of YOLOv4 to simplify the model, and improves the feature extraction ability of the model through knowledge distillation. On the other hand, a prediction scale with a more detailed receptive field is added to optimize the model structure, which can improve the detection performance for tiny defects.

Findings: The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method can greatly improve recognition efficiency and accuracy while reducing the size and computation consumption of the model.

Originality/value: This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface defect detection, which is conducive to application in various industrial scenarios with limited storage and computing resources, and meets requirements for high real-time performance and precision.
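The paper's actual training code is not reproduced here; as a generic sketch, the knowledge distillation mentioned above trains the compressed student network to match the larger teacher's temperature-softened class probabilities via a KL-divergence loss:

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Mean KL divergence between temperature-softened teacher and
    student outputs; the temperature exposes the teacher's 'dark
    knowledge' about relative class similarities."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))) / len(p))
```

In practice this term is mixed with the ordinary detection or classification loss on the ground-truth labels; the loss is zero only when the student reproduces the teacher's softened distribution exactly.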


2017 ◽  
Vol 24 (4) ◽  
pp. 1052-1064 ◽  
Author(s):  
Yong Joo Lee ◽  
Seong-Jong Joo ◽  
Hong Gyun Park

Purpose: The purpose of this paper is to measure the comparative efficiency of 18 Korean commercial banks in the presence of negative observations and to examine performance differences among them by grouping them according to their market conditions.

Design/methodology/approach: The authors employ two data envelopment analysis (DEA) models that can handle negative data: the Banker, Charnes and Cooper (BCC) model and a modified slacks-based measure of efficiency (MSBM) model. The BCC model is proven to be translation invariant for inputs or outputs, depending on output or input orientation. The MSBM model is unit invariant in addition to being translation invariant. The authors compare results from both models and choose one for interpreting results.

Findings: Most Korean banks recovered from their worst performance in 2011 and showed similar performance in recent years. Among the three groups (national banks, regional banks and special banks), most of the special banks demonstrated superior performance across models and years. In particular, the performance difference between the special banks and the regional banks was statistically significant. The authors conclude that the high performance of the special banks was due to their nationwide market access and ownership type.

Practical implications: This study demonstrates how to analyze and measure the efficiency of entities when variables contain negative observations, using a data set for Korean banks. The authors tried two major DEA models that are able to handle negative data and propose a practical direction for future studies.

Originality/value: Although there are research papers measuring the performance of banks in Korea, all of the papers on the topic have studied efficiency or productivity using positive data sets. However, variables such as net incomes and growth rates frequently include negative observations in bank data sets. This is the first paper to investigate the efficiency of bank operations in the presence of negative data in Korea.
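The BCC model the authors use can be written as a small linear program. The sketch below is the input-oriented BCC (variable returns to scale) formulation solved with scipy; the MSBM model is not reproduced, and the toy bank figures, including a negative net-income output, are invented.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(inputs, outputs, k):
    """Input-oriented BCC efficiency of DMU k.

    inputs: (n, m) array, outputs: (n, s) array.
    Solves  min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
    sum_j lam_j y_j >= y_k,  sum(lam) = 1,  lam >= 0.
    The convexity constraint sum(lam) = 1 is what makes this BCC/VRS
    and translation invariant in outputs, so negative data are tolerated.
    """
    n, m = inputs.shape
    s = outputs.shape[1]
    c = np.zeros(1 + n)                  # decision vector: [theta, lam_1..lam_n]
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -inputs[k]             # sum_j lam_j x_ij - theta x_ik <= 0
    A_ub[:m, 1:] = inputs.T
    A_ub[m:, 1:] = -outputs.T            # -sum_j lam_j y_rj <= -y_rk
    b_ub[m:] = -outputs[k]
    A_eq = np.zeros((1, 1 + n))
    A_eq[0, 1:] = 1.0                    # convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return float(res.fun)

# Toy data: one input (e.g. operating cost) and one output (e.g. net income,
# which may be negative, as the abstract notes for real bank data).
X = np.array([[2.0], [4.0], [6.0]])
Y = np.array([[2.0], [4.0], [-1.0]])
scores = [bcc_input_efficiency(X, Y, k) for k in range(len(X))]
```

Dropping the convexity constraint would give the constant-returns (CCR) version, but it is exactly that constraint which lets the model accommodate the negative observations discussed above.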


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Denise Jackson ◽  
Ian Li

Purpose: There are ongoing concerns regarding whether university degree credentials lead to graduate-level employment. Tracking graduate underemployment is complicated by inconsistent measures and tendencies to report on outcomes soon after graduation. Our study explored the transition into graduate-level work beyond the short term, examining how determining factors change over time.

Design/methodology/approach: We considered both time-based underemployment (graduates working fewer hours than desired) and overqualification (skills in employment not matching education level/type). We used a national data set for 41,671 graduates of Australian universities in 2016 and 2017, surveyed at four months and three years post-graduation, to explore determining factors in the short and medium term. Descriptive statistical techniques and binary logistic regression were used to address our research aims.

Findings: Graduates' medium-term employment states were generally positive, with reduced unemployment and increased full-time job attainment. Importantly, most graduates who were initially underemployed had transitioned to full-time work at three years post-graduation. However, around one-fifth of graduates were overqualified in the medium term. While there was some evidence of the initially overqualified transitioning to matched employment, supporting career mobility theory, over one-third remained overqualified. Skills, personal characteristics and degree-related factors each influenced initial overqualification, while discipline was more important in the medium term.

Originality/value: Our study explores both time-based underemployment and overqualification over time, building on earlier work. Given the longer-term negative effects of mismatch on graduates' careers and wellbeing, the findings highlight the need for career learning strategies to manage underemployment and for consideration of future labour market policy for tertiary graduates.
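The binary logistic regression mentioned in the approach can be sketched as follows; the predictors, effect sizes and data are invented for illustration, not variables from the study's data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Invented predictors: relevant work experience during the degree (binary)
# and a generic skills score; neither is a real variable from the study.
work_experience = rng.integers(0, 2, n).astype(float)
skills = rng.normal(size=n)

# Simulated outcome: work experience lowers the odds of overqualification.
logit = -0.5 - 1.0 * work_experience + 0.3 * skills
p = 1.0 / (1.0 + np.exp(-logit))
overqualified = (rng.random(n) < p).astype(int)

# Fit the binary logistic regression and read off the experience effect.
X = np.column_stack([work_experience, skills])
model = LogisticRegression().fit(X, overqualified)
coef_experience = model.coef_[0][0]
```

With real survey data, the same model would be fitted separately at four months and three years to see how the coefficients shift between the short and medium term.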


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kolawole Ogundari

Purpose: The cyclical behavior of US crime rates reflects the dynamics of crime in the country. This paper aims to investigate club convergence of crime rates in the USA to provide insights into whether crime rates increased or decreased over time. The paper also analyzes the factors influencing the probability of states converging to a particular convergence club of crime.

Design/methodology/approach: The analysis is based on balanced panel data on violent and property crime rates from all 50 states and the District of Columbia covering 1976–2019. This yields a cross-state panel of 2,244 observations with 44 time periods and 51 groups. In addition, the author used a club clustering procedure to investigate the convergence hypothesis in the study.

Findings: The empirical results support population convergence of violent crime rates. However, no evidence is found to support population convergence of property crime rates. Further analysis using the club clustering procedure shows that property crime rates converge into three clubs. The existence of club convergence in property crime rates means that the variation in property crime rates tends to narrow among the states within each of the clubs identified in the study. Analysis based on an ordered probit model identifies economic, geographic and human capital factors that significantly drive a state's convergence club membership.

Practical implications: The central policy insight from these results is that crime rates grow slowly over time, as evidenced by the convergence of violent crime and the club convergence of property crime in the study. Moreover, the existence of club convergence of property crime is an indication that policies to mitigate property crime might need to target the states within each club. This includes efforts to use state rather than national crime-fighting policies.

Social implications: As crimes are committed at the local level, this study's primary limitation is the lack of community-level data on crime and the other factors considered. Analysis based on community-level data might provide a better representation of crime dynamics. However, the author hopes to consider this as less aggregated data become available for use in future research.

Originality/value: The paper provides new insights into the convergence of crime rates in the USA using the club convergence procedure. This is considered an improvement on the methods used in previous studies.
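Full club clustering (in the Phillips and Sul tradition) is more involved, but the log t regression it builds on can be sketched directly: a non-negative fitted slope supports convergence. The panels below are synthetic series constructed to converge and diverge, not crime data.

```python
import numpy as np

def log_t_slope(panel, trim=0.3):
    """Slope of the log t regression: regress log(H_1/H_t) - 2*log(log t)
    on log t over the later part of the sample. panel has shape (N, T);
    a non-negative slope supports convergence of the N series."""
    N, T = panel.shape
    h = panel / panel.mean(axis=0, keepdims=True)   # relative transition paths
    H = ((h - 1.0) ** 2).mean(axis=0)               # cross-sectional variance
    t = np.arange(1, T + 1)
    keep = t >= max(2, int(trim * T))               # discard the early sample
    y = np.log(H[0] / H[keep]) - 2.0 * np.log(np.log(t[keep]))
    return np.polyfit(np.log(t[keep]), y, 1)[0]

rng = np.random.default_rng(3)
T = 100
t = np.arange(1, T + 1)
a = rng.normal(0.0, 1.0, (20, 1))                   # unit-specific offsets

converging = 5.0 + a / t        # cross-sectional gaps shrink over time
diverging = 5.0 + a * t / T     # cross-sectional gaps widen over time
```

Club clustering then repeatedly applies this test to sorted subgroups of states, merging those for which a one-sided t-test on the slope does not reject convergence.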


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mehdi Barati ◽  
Hadiseh Fariditavana

Purpose: The purpose of this study is first to assess how the US healthcare financing system is influenced by income variation, and then to examine whether or not the impact of income variation is asymmetric.

Design/methodology/approach: For the analyses of this paper, the autoregressive distributed lag (ARDL) model is applied to a data set covering the period from 1960 to 2018.

Findings: The results provide evidence that the major funding sources of aggregate healthcare expenditure (HCE) respond differently to changes in income. The results also imply that the effect of income is not always symmetric.

Originality/value: Many studies have attempted to identify the relationship between income and HCE. A common feature of past studies is that they have focused only on aggregate HCE, while one might be interested in knowing how the major funders of aggregate HCE would be affected by changes in income. Another common feature of past studies is that they have assumed that the relationship between income and HCE is symmetric.
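The abstract does not spell out how asymmetry is modeled; a standard device in asymmetric (nonlinear) ARDL work is to decompose income changes into positive and negative partial sums, which then enter the model as separate regressors. A numpy sketch with invented figures:

```python
import numpy as np

def partial_sums(series):
    """Split a series' changes into cumulative positive and negative
    partial sums, the construction behind asymmetric ARDL regressors."""
    d = np.diff(series, prepend=series[0])
    pos = np.cumsum(np.maximum(d, 0.0))   # cumulative income increases
    neg = np.cumsum(np.minimum(d, 0.0))   # cumulative income decreases
    return pos, neg

income = np.array([100.0, 102.0, 101.0, 105.0, 104.0])
pos, neg = partial_sums(income)
```

By construction the two components add back to the series' total change; an asymmetric income effect then shows up as different estimated coefficients on `pos` and `neg`.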

