Measurement Invariance of a Direct Behavior Rating Multi-Item Scale across Occasions

2019 ◽  
Vol 8 (2) ◽  
pp. 46 ◽  
Author(s):  
Markus Gebhardt ◽  
Jeffrey M. DeVries ◽  
Jana Jungjohann ◽  
Gino Casale ◽  
Andreas Gegenfurtner ◽  
...  

Direct Behavior Rating (DBR) as a behavioral progress monitoring tool can be designed as a longitudinal assessment with only short intervals between measurement points. The reliability of these instruments has mostly been evaluated in observational studies with small samples based on generalizability theory. However, for standardized use in the pedagogical field, a larger and broader sample is required in order to assess measurement invariance between different participant groups and over time. Therefore, we constructed a DBR, the Questionnaire for Monitoring Behavior in Schools (QMBS), with multiple items to measure the occurrence of specific externalizing and internalizing student classroom behaviors on a Likert scale (1 = never to 7 = always). In a pilot study, two trained raters observed 16 primary education students and rated the student behavior over all items with satisfactory reliability. In the main study, 108 regular primary school students, 97 regular secondary school students, and 14 students in a clinical setting were rated daily over one week (five measurement points). Item response theory (IRT) analyses confirmed the technical adequacy of the instrument, and latent growth models demonstrated the instrument’s stability over time. Further development of the instrument and study designs to implement DBRs are discussed.
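The abstract reports latent growth modelling over the five daily measurement points. As a much-simplified stand-in for that analysis, the sketch below computes each student's ordinary least-squares slope across the five occasions and averages them; the ratings are invented for illustration, not study data.

```python
# Hedged sketch: a per-student OLS slope across five daily Likert
# ratings (1-7), averaged, as a simplified proxy for the growth
# component of a latent growth model. All ratings are hypothetical.

def ols_slope(ys):
    """Slope of a least-squares line through (0, y0) ... (n-1, y_{n-1})."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Five daily ratings for three hypothetical students.
ratings = {
    "student_a": [4, 4, 5, 4, 4],
    "student_b": [2, 3, 2, 2, 3],
    "student_c": [6, 6, 6, 5, 6],
}

slopes = {sid: ols_slope(ys) for sid, ys in ratings.items()}
mean_slope = sum(slopes.values()) / len(slopes)
print(f"mean slope per occasion: {mean_slope:.3f}")  # near 0 -> stable over time
```

A mean slope near zero is consistent with the stability over time the abstract reports; a real latent growth model would additionally estimate variance in intercepts and slopes across students.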


2019 ◽  
pp. 106342661988234 ◽  
Author(s):  
Brian Daniels ◽  
Amy M. Briesch ◽  
Robert J. Volpe ◽  
Julie Sarno Owens

Direct Behavior Rating (DBR) is an efficient method for monitoring changes in student behavior in response to intervention. Emerging research on DBR Multi-Item Scales (DBR-MIS) indicates that DBR-MIS has promising characteristics as a progress-monitoring assessment. Specifically, the multiple items within DBR-MIS allow stakeholders to measure specific behaviors at the item level, as well as global constructs at the scale level. In addition, studies have shown that fewer rating occasions are necessary to reach acceptable levels of dependability when using DBR-MIS as opposed to single-item scales (DBR-SIS). The purpose of the study was to develop and validate DBR-MIS problem behavior scales (Disruptive, Oppositional, Interpersonal Conflict, and Conduct Problems) that may be used to evaluate students’ response to social-emotional or behavioral intervention. In the first phase of development, item content was generated and evaluated by panels of researchers and school-based consumers. Exploratory factor analysis (EFA) was then used in the second phase to identify items that represented the strongest indicators of each construct. Teachers (N = 307) in Grades K–3, from 35 school districts across 13 states in the Northeastern, Midwestern, Southern, and Southwestern United States, each completed ratings for one randomly selected student (N = 307). Results of the EFA, using a starting pool of nine to 11 items for each DBR-MIS, initially indicated one-factor solutions for the Disruptive and Oppositional scales and a two-factor solution for the Interpersonal Conflict scale. Consequently, a new Conduct Problems scale was created from the items loading on the second factor. Implications for progress monitoring and future research are discussed.


2020 ◽  
pp. 105984052096364
Author(s):  
Tania M. Haag ◽  
Gabriela Calderon Velazquez ◽  
Tresa Wiggins ◽  
Paul Spin ◽  
Sara B. Johnson ◽  
...  

Glasses wearing at school remains low even when glasses are provided. This study investigated whether a classroom intervention to promote glasses wearing was associated with increased glasses wearing and improved classroom behavior. A pretest–posttest design was implemented with 44 students in Grades 1–4 at an urban public elementary school. Over 5 weeks, teachers encouraged eyeglass wearing through a classroom tracker, verbal reminders, and incentives. Glasses wearing and student behavior were monitored using the Direct Behavior Rating Scale of academic engagement and behavior for 13 weeks, including 4 weeks before and after the intervention. Glasses wearing increased from 56% to 73% (95% confidence interval [CI] = [0.08, 0.26]) in the first 2 weeks of the intervention, but not after a spring recess. Compared to baseline, the intervention was associated with significantly improved academic engagement (4.31%, 95% CI [2.17, 6.45]) and respect (3.55%, 95% CI [1.77, 5.34]) and significantly reduced disruption (−4.28%, 95% CI [−6.51, −2.06]). Higher academic engagement and lower disruption persisted 4 weeks after the intervention ended. A classroom-based glasses tracking and incentive system was associated with improved eyeglass wearing and classroom behavior among elementary students. A longer-term randomized trial is needed to confirm these promising results.
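The 56%-to-73% change can be sketched as a difference of two proportions. The sample size n = 44 comes from the abstract, but the sketch below assumes two independent samples, which is a deliberate simplification: because the study's repeated measurements on the same students are paired, the resulting interval is wider than the reported 95% CI of [0.08, 0.26].

```python
import math

# Hedged sketch: Wald 95% CI for the difference of two proportions,
# using the abstract's percentages and n = 44. The independence
# assumption does NOT match the study's paired design, so this
# interval is wider than the one the authors report.

def diff_proportion_ci(p1, n1, p2, n2, z=1.96):
    """Wald confidence interval for p2 - p1, assuming independent samples."""
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

lo, hi = diff_proportion_ci(0.56, 44, 0.73, 44)
print(f"increase: {0.73 - 0.56:.2f}, independent-samples 95% CI [{lo:.2f}, {hi:.2f}]")
```

The contrast between this interval and the reported one illustrates why paired (within-student) designs yield more precise estimates for the same sample size.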


2018 ◽  
Vol 44 (2) ◽  
pp. 123-134
Author(s):  
Stephen P. Kilgus ◽  
T. Chris Riley-Tillman ◽  
Janine P. Stichter ◽  
Alexander M. Schoemann ◽  
Sarah Owens

A line of research has supported the development and validation of Direct Behavior Rating–Single Item Scales (DBR-SIS) for use in progress monitoring. Yet, this research was largely conducted within the general education setting with typically developing children. It is unknown whether the tool may be defensibly used with students exhibiting more substantial concerns, including students with social competence difficulties. The purpose of this investigation was to examine the concurrent validity of DBR-SIS in a middle school sample of students exhibiting substantial social competence concerns (n = 58). Students were assessed using both DBR-SIS and systematic direct observation (SDO) across three target behaviors. Each student was enrolled in one of two conditions: the Social Competence Intervention or a business-as-usual control condition. Students were assessed across three time points: baseline, mid-intervention, and postintervention. A review of across-time correlations indicated small to moderate correlations between DBR-SIS and SDO data (r = .25–.45). Results further suggested that the relationships between DBR-SIS and SDO targets were small to large at baseline. Correlations attenuated over time, though differences across time points were not statistically significant, with the exception of academic engagement correlations, which remained moderate to high across all time points.
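Concurrent validity in this abstract is indexed by Pearson correlations between DBR-SIS ratings and SDO scores. A minimal sketch of that computation, using invented paired scores for illustration only:

```python
import math

# Hedged sketch: Pearson correlation between teacher DBR-SIS ratings
# and SDO scores (e.g., percentage of observed intervals engaged).
# All paired scores below are hypothetical, not study data.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

dbr_sis = [3, 5, 4, 6, 2, 5, 4, 7]          # hypothetical DBR-SIS ratings
sdo = [28, 55, 40, 70, 35, 45, 50, 80]      # hypothetical SDO % engaged
print(f"r = {pearson_r(dbr_sis, sdo):.2f}")
```

Values in the study's reported range (r = .25–.45) would count as small to moderate agreement between the two measures.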


2018 ◽  
Vol 28 (1) ◽  
pp. 29-42 ◽  
Author(s):  
Chelsea L. Hustus ◽  
Julie Sarno Owens ◽  
Robert J. Volpe ◽  
Amy M. Briesch ◽  
Brian Daniels

The primary goal of this study was to assess the treatment sensitivity of four newly developed Direct Behavior Rating–Multi-Item Scales (DBR-MIS) that assess the domains of academic engagement, disruptive behavior, organizational skills, and oppositional behavior in the context of a Daily Report Card (DRC) intervention. To achieve this goal, we first evaluated the integrity and effectiveness of the DRC intervention in this sample. Participants included six elementary school teachers, each of whom delivered a DRC intervention with one student from their classroom while completing daily DBR-MIS ratings for 2 months. Results confirmed the effectiveness of the DRC intervention (all DRC target behaviors improved, with at least half showing improvements that were moderate to large in magnitude) and revealed a positive relationship between DRC implementation integrity and student outcomes. We found strong evidence for the treatment sensitivity of the DBR-MIS assessing academic engagement, disruptive behavior, and organizational skills. Results for the treatment sensitivity of the DBR-MIS oppositional scale were inconclusive. Implications for progress monitoring using the recently developed DBR-MIS are discussed.


2009 ◽  
Vol 34 (4) ◽  
pp. 201-213 ◽  
Author(s):  
Theodore J. Christ ◽  
T. Chris Riley-Tillman ◽  
Sandra M. Chafouleas

2016 ◽  
Vol 42 (2) ◽  
pp. 119-126 ◽  
Author(s):  
Faith G. Miller ◽  
T. Chris Riley-Tillman ◽  
Sandra M. Chafouleas ◽  
Alyssa A. Schardt

The purpose of this study was to investigate the impact of two different Direct Behavior Rating–Single Item Scale (DBR-SIS) formats on rating accuracy. A total of 119 undergraduate students participated in one of two study conditions, each utilizing a different DBR-SIS scale format: one that included percentage-of-time anchors on the DBR-SIS scale and an explicit reference to the duration of the target behavior (percent group), and one that included neither percentage anchors nor a reference to the duration of the target behavior (no-percent group). Participants viewed nine brief video clips and rated student behavior using one of the two DBR-SIS formats. Rating accuracy was determined by calculating the absolute difference between participant ratings and two criterion measures: systematic direct observation scores and DBR-SIS expert ratings. Statistically significant differences between groups were found on only two occasions, both pertaining to ratings of academically engaged behavior. Limitations and directions for future research are discussed.
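The accuracy metric described above (absolute difference between a participant's rating and a criterion score) can be sketched directly. The ratings below are hypothetical, not study data:

```python
# Hedged sketch: rating accuracy as the mean absolute difference
# between one participant's DBR-SIS ratings of nine video clips and
# the expert-criterion ratings. All numbers are hypothetical.

def rating_accuracy(ratings, criterion):
    """Mean absolute difference; lower values indicate a more accurate rater."""
    diffs = [abs(r - c) for r, c in zip(ratings, criterion)]
    return sum(diffs) / len(diffs)

participant = [7, 5, 8, 6, 9, 4, 7, 6, 8]   # one rater, nine clips
expert = [8, 5, 7, 6, 10, 3, 7, 5, 8]       # expert criterion ratings
print(f"mean absolute difference: {rating_accuracy(participant, expert):.2f}")
```

Comparing this statistic between the percent and no-percent groups (e.g., with a t test) is one way the group difference reported in the abstract could be tested.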

