Assessment for Effective Intervention
Latest Publications

Total documents: 564 (five years: 58)
H-index: 24 (five years: 1)
Published by: Sage Publications
ISSN: 1938-7458, 1534-5084

2021, pp. 153450842110659
Author(s): Meaghan McKenna, Robert F. Dedrick, Howard Goldstein

This article describes the development of the Early Elementary Writing Rubric (EEWR), an analytic assessment designed to measure kindergarten and first-grade writing and inform educators’ instruction. Crocker and Algina’s (1986) approach to instrument development and validation was used as a guide to create and refine the writing measure. Study 1 describes the development of the 10-item measure (response scale ranges from 0 = Beginning of Kindergarten to 5 = End of First Grade). Educators participated in focus groups, expert panel review, cognitive interviews, and pretesting as part of the instrument development process. Study 2 evaluates measurement quality in terms of score reliability and validity. Writing samples produced by 634 students in kindergarten and first-grade classrooms were collected during pilot testing. An exploratory factor analysis was conducted to evaluate the psychometric properties of the EEWR. A one-factor model fit the data for all writing genres, and all scoring elements were retained, with loadings ranging from 0.49 to 0.92. Internal consistency reliability was high, ranging from .89 to .91. Interrater reliability between the researcher and participants varied from poor to good, with means ranging from 52% to 72%. First-grade students received higher scores than kindergartners on all 10 scoring elements. The EEWR holds promise as an acceptable, useful, and psychometrically sound measure of early writing. Further iterative development is needed to fully investigate its ability to accurately identify the present level of student performance and to determine sensitivity to developmental and instructional gains.


2021, pp. 153450842110635
Author(s): Trude Nergård-Nilssen, Oddgeir Friborg

This article describes the development and psychometric properties of a new Dyslexia Marker Test for Children (Dysmate-C). The test was designed to identify Norwegian students who need special instructional attention. The computerized test includes measures of letter knowledge, phoneme awareness, rapid automatized naming, working memory, decoding, and spelling skills. Data were collected from a sample of more than 1,100 students. Item response theory (IRT) was used for the psychometric evaluation, and principal component analysis for checking unidimensionality. IRT was further used to select and remove items, which significantly shortened the test battery without sacrificing reliability or discriminating ability. Cronbach’s alphas ranged between .84 and .95. Validity was established by examining how well the Dysmate-C identified students already diagnosed with dyslexia. Logistic regression and receiver operating characteristic (ROC) curve analyses indicated good to excellent accuracy in separating children with dyslexia from typically developing children (area under the curve [AUC] = .92). The Dysmate-C meets the standards for reliability and validity. The use of regression-based norms, voice-over instructions, easy scoring procedures, accurate timing, and automatic computation of scores makes the test a useful tool. It may be used as part of a screening procedure and as part of a diagnostic assessment. Limitations and practical implications are discussed.
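As context for the AUC statistic reported above, the area under the ROC curve can be read as the probability that a randomly chosen child with dyslexia receives a higher risk score than a randomly chosen typically developing child (the Mann-Whitney interpretation). The sketch below illustrates that computation with made-up scores; it is not the authors' analysis.

```python
# Illustration: AUC as the probability that a randomly chosen positive case
# outscores a randomly chosen negative case; ties count as half a win.

def auc(positives, negatives):
    """Rank-based AUC over all (positive, negative) score pairs."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Hypothetical risk scores (higher = more dyslexia-like performance)
dyslexia_scores = [0.9, 0.8, 0.75, 0.6]
typical_scores = [0.7, 0.5, 0.4, 0.3, 0.2]

print(auc(dyslexia_scores, typical_scores))  # prints 0.95
```

An AUC of .92, as reported for the Dysmate-C, therefore means the test orders a randomly drawn dyslexic/typical pair correctly about 92% of the time.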


2021, pp. 153450842110554
Author(s): Børge Strømgren, Kalliu Carvalho Couto

Norwegian schools are obliged to develop students’ social competences. The programs used are either school-wide (Positive Behavioral Interventions and Supports [PBIS]) or classroom-based, and aim to teach students social and emotional learning (SEL) skills in a broad sense. Several rating scales have been used to assess the effect of SEL programs on SEL skills. We explored the Norwegian version of the 12-item Social Emotional Assets and Resilience Scales–Child–Short Form (SEARS-C-SF). An exploratory factor analysis (EFA) suggested a one-factor solution, which was confirmed by a confirmatory factor analysis (CFA). The scale reliability of .84 (λ2), means and standard deviations, and Tier levels were compared with those of the original short form. Finally, concurrent, discriminant, and convergent validity with different Strengths and Difficulties Questionnaire (SDQ) subscales were demonstrated.


2021, pp. 153450842110402
Author(s): Benjamin G. Solomon, Ole J. Forsberg, Monelle Thomas, Brittney Penna, Katherine M. Weisheit

Bayesian regression has emerged as a viable alternative for the estimation of curriculum-based measurement (CBM) growth slopes. Preliminary findings suggest such methods may yield improved efficiency relative to other linear estimators and can be embedded into data management programs for high-frequency use. However, additional research is needed, as Bayesian estimators require multiple specifications of the prior distributions. The current study evaluates the accuracy of several combinations of prior values, including three distributions of the residuals, two values of the expected growth rate, and three possible values for the precision of slope when using Bayesian simple linear regression to estimate fluency growth slopes for reading CBM. We also included traditional ordinary least squares (OLS) as a baseline contrast. Findings suggest that the prior specification for the residual distribution had, on average, a trivial effect on the accuracy of the slope. However, specifications for growth rate and precision of slope were influential, and virtually all variants of Bayesian regression evaluated were superior to OLS. Converging evidence from both simulated and observed data now suggests Bayesian methods outperform OLS for estimating CBM growth slopes and should be strongly considered in research and practice.
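The core idea the abstract describes, shrinking a noisy slope estimate toward a prior expectation of growth, can be sketched with a conjugate normal prior on the slope and a known residual SD. The prior values below (expected growth, prior SD, residual SD) are illustrative assumptions, not the specifications evaluated in the study.

```python
import numpy as np

# Sketch: posterior mean of a CBM growth slope under a normal prior,
# assuming known residual variance. All prior settings are hypothetical.

def bayes_slope(weeks, scores, prior_mean=1.0, prior_sd=0.5, resid_sd=8.0):
    x = np.asarray(weeks, dtype=float)
    y = np.asarray(scores, dtype=float)
    xc = x - x.mean()
    sxx = np.sum(xc ** 2)
    b_ols = np.sum(xc * (y - y.mean())) / sxx   # ordinary least squares slope
    w_data = sxx / resid_sd ** 2                # precision carried by the data
    w_prior = 1.0 / prior_sd ** 2               # precision carried by the prior
    # Posterior mean = precision-weighted average of OLS slope and prior mean
    b_post = (w_data * b_ols + w_prior * prior_mean) / (w_data + w_prior)
    return b_ols, b_post

rng = np.random.default_rng(1)
weeks = np.arange(10)
scores = 40 + 1.5 * weeks + rng.normal(0, 8, size=10)  # true growth 1.5/week
b_ols, b_post = bayes_slope(weeks, scores)
# b_post lies between the noisy OLS slope and the prior mean: with few, noisy
# observations the prior dominates; as data accumulate, b_post approaches b_ols.
```

This shrinkage behavior is why the prior values for expected growth rate and slope precision were influential in the study, while short progress-monitoring series leave OLS alone at the mercy of measurement noise.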


2021, pp. 153450842110350
Author(s): Jillian Dawes, Benjamin Solomon, Daniel F. McCleary, Cutler Ruby, Brian C. Poncy

Research examining the precision of single-skill mathematics (SSM) curriculum-based measurements (CBMs) for progress monitoring is currently limited. Given the observed variance in administration conditions across current practice and research use, we examined potential differences in student responding and in the precision of slope when SSM-CBMs were administered individually versus in group (classroom) conditions. No differences in student performance or measure precision were observed between conditions, indicating flexibility in the practical and research use of SSM-CBMs across administration conditions. In addition, findings contributed to the literature examining the stability of SSM-CBM slopes of progress when used for instructional decision-making. Implications for the administration and interpretation of SSM-CBMs in practice are discussed.


2021, pp. 153450842110308
Author(s): Jacqueline Huscroft-D’Angelo, Jessica Wery, Jodie D. Martin-Gutel, Corey Pierce, Kara Loftin

The Scales for Assessing Emotional Disturbance Screener–Third Edition (SAED-3) is a standardized, norm-referenced measure designed to identify school-aged students at risk of emotional and behavioral problems. Four studies are reported to address the psychometric status of the SAED-3 Screener. Study 1 examined the internal consistency of the Screener using a sample of 1,430 students. Study 2 investigated the interrater reliability of the Screener results across 123 pairs of teachers who had worked with the student for at least 2 months. Study 3 assessed the extent to which the results from the Screener are consistent over time by examining test–retest reliability. Study 4 examined convergent validity by comparing the Screener to the Strengths and Difficulties Questionnaire (SDQ). Across all studies, samples were drawn from populations of students included in the nationally representative normative sample. The average coefficient alpha for the Screener was .88. The interrater reliability coefficient for the composite was .83. Test–retest reliability of the composite was .83. Correlations with the SDQ subscales ranged from .74 to .99, and the correlation of the Screener to the SDQ composite was .99. Limitations and implications for use of the Screener are discussed.
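The coefficient alpha reported above is the standard internal-consistency statistic, alpha = k/(k-1) × (1 − Σ item variances / variance of the total score). The sketch below computes it on simulated rating data; the item data are invented for demonstration and are not the SAED-3 norming data.

```python
import numpy as np

# Illustration of coefficient (Cronbach's) alpha for a k-item screener.

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 200 respondents, 6 items that all tap one shared construct
rng = np.random.default_rng(0)
trait = rng.normal(0, 1, size=200)
ratings = trait[:, None] + rng.normal(0, 1, size=(200, 6))  # item = trait + noise

alpha = cronbach_alpha(ratings)
# Items sharing half their variance with the construct yield alpha near .85.
```

Because alpha rises with both the number of items and the inter-item correlations, a short screener reaching .88, as here, implies items that correlate substantially with one another.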


2021, pp. 153450842110271
Author(s): Marika King, Anne L. Larson, Jay Buzhardt

Few, if any, reliable and valid screening tools exist to identify language delay in young Spanish–English speaking dual-language learners (DLLs). The Early Communication Indicator (ECI) is a brief, naturalistic measure of expressive communication development designed to inform intervention decision-making and progress monitoring for infants and toddlers at risk for language delays. We assessed the accuracy of the ECI as a language-screening tool for DLLs from Latinx backgrounds by conducting a classification accuracy analysis on 39 participants who completed the ECI and a widely used standardized reference, the Preschool Language Scales, Fifth Edition, Spanish (PLS-5 Spanish). Sensitivity of the ECI was high, but specificity was low, resulting in low classification accuracy overall. Given the limitations of using standalone assessments as a reference for DLLs, a subset of participants (n = 22) completed additional parent-report measures related to identification of language delay. When the ECI was combined with parent-report data, sensitivity remained high and specificity improved. Findings show preliminary support for the ECI as a language-screening tool, especially when combined with other information sources, and highlight the need for validated language assessments for DLLs from Latinx backgrounds.
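The two statistics driving that classification accuracy analysis are simple ratios over a 2×2 screening table: sensitivity is the proportion of true delays the screener flags, and specificity is the proportion of typically developing children it correctly passes. The counts below are hypothetical, chosen only to mirror the high-sensitivity/low-specificity pattern described above.

```python
# Illustration of screening classification accuracy from a 2x2 table.

def sensitivity(true_pos, false_neg):
    """Proportion of children with a delay that the screener flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of typically developing children the screener passes."""
    return true_neg / (true_neg + false_pos)

# Hypothetical table for 39 children: 8 of 10 delays flagged (high sensitivity),
# but 12 of 29 typically developing children also flagged (low specificity).
sens = sensitivity(true_pos=8, false_neg=2)    # 0.8
spec = specificity(true_neg=17, false_pos=12)  # 17/29, about 0.59
```

A pattern like this is why the abstract stresses combining the screener with other information sources: the low-specificity arm produces many false referrals if the screener is used alone.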


2021, pp. 153450842110149
Author(s): Allison R. Lombardi, Graham G. Rifenbark, Marcus Poppen, Kyle Reardon, Valerie L. Mazzotti, et al.

In this study, we examined the structural validity of the Secondary Transition Fidelity Assessment (STFA), a measure of secondary schools’ use of programs and practices demonstrated by research to lead to meaningful college and career outcomes for all students, including students at risk for or with disabilities and students from diverse backgrounds. Drawing from evidence-based practices endorsed by the National Technical Assistance Center for Transition and the Council for Exceptional Children’s Division on Career Development and Transition, the instrument development and refinement process was iterative and involved collecting stakeholder feedback and pilot testing. Responses from a national sample of educators (N = 1,515) were subjected to an exploratory factor analysis resulting in five measurable factors: (a) Adolescent Engagement, (b) Inclusive and Tiered Instruction, (c) School-Family Collaboration, (d) District-Community Collaboration, and (e) Professional Capacity. The five-factor model was then subjected to a confirmatory factor analysis, which resulted in good model fit. Invariance testing on the basis of geographical region strengthened validity evidence and showed a high level of variability with regard to implementing evidence-based transition services. Findings highlight the need for consistent and regular use of a robust, self-assessment fidelity measure of transition service implementation to support all students’ transition to college and career.


2021, pp. 153450842110149
Author(s): Martin T. Peters, Karin Hebbecker, Elmar Souvignier

Monitoring learning progress enables teachers to address students’ interindividual differences and to adapt instruction to students’ needs. We investigated whether using learning progress assessment (LPA), or a combination of LPA and prepared material to help teachers implement assessment-based differentiated instruction, resulted in improved reading skills for students. The study was conducted in second-grade classrooms in general primary education, and participants (N = 33 teachers and N = 619 students) were assigned to one of three conditions: a control group (CG); a first intervention group (LPA), which received LPA only; or a second intervention group (LPA-RS), which received a combination of LPA and material for differentiated reading instruction (the “reading sportsman”). At the beginning and the end of one school year, students’ reading fluency and reading comprehension were assessed. Compared with business-as-usual reading instruction (the CG), providing teachers with LPA or with both LPA and prepared material did not lead to higher gains in reading competence. Furthermore, no significant differences between the LPA and LPA-RS conditions were found. Corresponding analyses for lower- and higher-achieving students also revealed no differences between the treatment groups. Results are discussed regarding the implementation of LPA and reading instruction in general education.


2021, pp. 153450842199877
Author(s): Wilhelmina van Dijk, A. Corinne Huggins-Manley, Nicholas A. Gage, Holly B. Lane, Michael Coyne

In reading intervention research, implementation fidelity is assumed to be positively related to student outcomes, but the methods used to measure fidelity are often treated as an afterthought. Fidelity has been conceptualized and measured in many different ways, suggesting a lack of construct validity. One aspect of construct validity is the fidelity index of a measure. This methodological case study examined how different decisions in fidelity indices influence the relative rank ordering of individuals on the construct of interest and shape our perception of the relation between the construct and intervention outcomes. Data for this study came from a large state-funded project to implement multi-tiered systems of support for early reading instruction. Analyses were conducted to determine whether different fidelity indices are stable in their relative rank ordering of participants and whether fidelity indices of dosage and adherence data influence researcher decisions on model building within a multilevel modeling framework. Results indicated that the fidelity indices produced different relations to outcomes, with the most commonly used indices for both dosage and adherence performing worst. The choice of index should receive considerable thought during the design phase of an intervention study.
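The rank-stability question the abstract raises can be made concrete: two plausible "dosage" indices computed from the same session logs, a raw session count versus the proportion of intended sessions delivered, need not rank participants the same way. The sketch below checks agreement with a Spearman rank correlation; the data and the two index definitions are hypothetical, not those used in the study.

```python
import numpy as np

# Sketch: two candidate dosage fidelity indices from the same session logs.
sessions_delivered = np.array([40, 35, 22, 30, 18], dtype=float)
sessions_intended = np.array([40, 50, 25, 45, 20], dtype=float)

index_count = sessions_delivered                      # index 1: raw count
index_prop = sessions_delivered / sessions_intended   # index 2: proportion

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks
    (assumes no ties, as in the toy data above)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rho = spearman(index_count, index_prop)
# rho well below 1 means the two indices order the same participants
# differently, so downstream fidelity-outcome relations can diverge too.
```

Here the teacher who delivered the most sessions also had the most scheduled, so the count index rewards them while the proportion index does not, which is exactly the kind of indexing decision the study argues deserves attention at design time.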

