EcoEvo-MAPS: An Ecology and Evolution Assessment for Introductory through Advanced Undergraduates

2018 ◽  
Vol 17 (2) ◽  
pp. ar18 ◽  
Author(s):  
Mindi M. Summers ◽  
Brian A. Couch ◽  
Jennifer K. Knight ◽  
Sara E. Brownell ◽  
Alison J. Crowe ◽  
...  

A new assessment tool, Ecology and Evolution–Measuring Achievement and Progression in Science (EcoEvo-MAPS), measures student thinking in ecology and evolution during an undergraduate course of study. EcoEvo-MAPS targets foundational concepts in ecology and evolution and uses a novel approach that asks students to evaluate a series of predictions, conclusions, or interpretations as likely or unlikely to be true given a specific scenario. We collected evidence of validity and reliability for EcoEvo-MAPS through an iterative process of faculty review, student interviews, and analyses of assessment data from more than 3000 students at 34 associate’s-, bachelor’s-, master’s-, and doctoral-granting institutions. The 63 likely/unlikely statements range in difficulty and target student understanding of key concepts aligned with the Vision and Change report. This assessment provides departments with a tool to measure student thinking at different time points in the curriculum and provides data that can be used to inform curricular and instructional modifications.
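The likely/unlikely format lends itself to simple statement-level scoring. As an illustration only (the function and data below are hypothetical, not the published EcoEvo-MAPS scoring procedure), a minimal sketch:

```python
def score_statements(responses, key):
    """Fraction of likely/unlikely statements answered in line with the key.

    responses, key: lists of "likely"/"unlikely" strings, one per statement.
    """
    return sum(r == k for r, k in zip(responses, key)) / len(key)

# Hypothetical five-statement scenario (invented data)
key = ["likely", "unlikely", "unlikely", "likely", "likely"]
student = ["likely", "unlikely", "likely", "likely", "unlikely"]
print(score_statements(student, key))  # → 0.6
```

Because each scenario carries several statements, this yields a per-scenario score between 0 and 1 rather than a single right/wrong mark.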

2012 ◽  
Vol 9 (9) ◽  
pp. 10095-10113
Author(s):  
J. A. Marshall ◽  
A. J. Castillo ◽  
M. B. Cardenas

Abstract. Our objective is to characterize and assess upper division and graduate student thinking in hydrology. We accomplish this through development and testing of an assessment tool for a physical hydrology class. Students were asked to respond to two questions that probed understanding and one question that assessed their ability to apply their knowledge. Student and expert responses to the questions were then used to develop a rubric to score responses. Using the rubric, three researchers independently blind-coded the full set of pre- and post-artifacts, resulting in 89% inter-rater agreement on the pre-tests and 83% agreement on the post-tests. This result has limitations, including the small number of participants, who were all from one institution, and the fact that the rubric was still under development. Nevertheless, the high inter-rater agreement from a group of experts is significant; the rubric we developed is a potentially useful tool for assessment of learning and understanding in physical hydrology.


2013 ◽  
Vol 17 (2) ◽  
pp. 829-836 ◽  
Author(s):  
J. A. Marshall ◽  
A. J. Castillo ◽  
M. B. Cardenas

Abstract. Our objective is to devise a mechanism to characterize and assess upper division and graduate student thinking in hydrology. We accomplish this through development and testing of an assessment tool for a physical hydrology class. The instrument was piloted in two sections of a physical hydrology course. Students were asked to respond to two questions that probed understanding and one question that assessed their ability to apply their knowledge, both prior to and after the course. Student and expert responses to the questions were classified into broad categories to develop a rubric to score responses. Using the rubric, three researchers independently blind-coded the full set of pre- and post-artifacts, resulting in 89% inter-rater agreement on the pre-tests and 83% agreement on the post-tests. The majority of responses made by students at the beginning of the class were characterized as showing only recognition of hydrology concepts from a non-physical perspective; post surveys indicated that the majority had moved to a basic understanding of physical processes, with some students achieving expert understanding. Our study has limitations, including the small number of participants who were all from one institution and the fact that the rubric was still under development. Nevertheless, the high inter-rater agreement from a group of experts indicates that the process we undertook is potentially useful for assessment of learning and understanding physical hydrology.
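The inter-rater agreement figures reported above (89% pre, 83% post) are pairwise percent agreement across the three coders. A minimal sketch of that computation, with invented rubric codes standing in for the study's data:

```python
from itertools import combinations

def percent_agreement(ratings):
    """Pairwise percent agreement across raters.

    ratings: list of per-rater lists, one rubric code per artifact.
    Returns the fraction of (rater pair, artifact) comparisons that agree.
    """
    n_items = len(ratings[0])
    pairs = list(combinations(range(len(ratings)), 2))
    agree = sum(
        ratings[a][i] == ratings[b][i]
        for a, b in pairs
        for i in range(n_items)
    )
    return agree / (len(pairs) * n_items)

# Three raters coding four artifacts into hypothetical rubric levels 0-3
r1 = [2, 1, 3, 0]
r2 = [2, 1, 2, 0]
r3 = [2, 1, 3, 0]
print(round(percent_agreement([r1, r2, r3]), 2))  # → 0.83
```

Percent agreement is easy to interpret but does not correct for chance; a chance-corrected statistic such as Cohen's or Fleiss' kappa is often reported alongside it.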


2019 ◽  
Vol 43 (1) ◽  
pp. 15-27 ◽  
Author(s):  
Katharine Semsar ◽  
Sara Brownell ◽  
Brian A. Couch ◽  
Alison J. Crowe ◽  
Michelle K. Smith ◽  
...  

We describe the development of a new, freely available, online, programmatic-level assessment tool, Measuring Achievement and Progress in Science in Physiology, or Phys-MAPS ( http://cperl.lassp.cornell.edu/bio-maps ). Aligned with the conceptual frameworks of Core Principles of Physiology, and Vision and Change Core Concepts, Phys-MAPS can be used to evaluate student learning of core physiology concepts at multiple time points in an undergraduate physiology program, providing a valuable longitudinal tool to gain insight into student thinking and aid in the data-driven reform of physiology curricula. Phys-MAPS questions have a modified multiple true/false design and were developed using an iterative process, including student interviews and physiology expert review to verify scientific accuracy, appropriateness for physiology majors, and clarity. The final version of Phys-MAPS was tested with 2,600 students across 13 universities, has evidence of reliability, and has no significant statement biases. Over 90% of the physiology experts surveyed agreed that each Phys-MAPS statement was scientifically accurate and relevant to a physiology major. When testing each statement for bias, differential item functioning analysis demonstrated only a small effect size (<0.008) of any tested demographic variable. Regarding student performance, Phys-MAPS can also distinguish between lower and upper division students, both across different institutions (average overall scores increase with each level of class standing; two-way ANOVA, P < 0.001) and within each of three sample institutions (each ANOVA, P ≤ 0.001). Furthermore, at the level of individual concepts, only evolution and homeostasis do not demonstrate the typical increase across class standing, suggesting these concepts likely present consistent conceptual challenges for physiology students.


2020 ◽  
pp. bmjnph-2020-000134
Author(s):  
Emily A Johnston ◽  
Kristina S Petersen ◽  
Jeannette M Beasley ◽  
Tobias Krussig ◽  
Diane C Mitchell ◽  
...  

Introduction: Adherence to cardioprotective dietary patterns can reduce risk for developing cardiometabolic disease. Rates of diet assessment and counselling by physicians are low. Use of a diet screener that rapidly identifies individuals at higher risk due to suboptimal dietary choices could increase diet assessment and brief counselling in clinical care.
Methods: We evaluated the relative validity and reliability of a 9-item diet risk score (DRS) against the Healthy Eating Index (HEI)-2015, a comprehensive measure of diet quality calculated from a 160-item, validated food frequency questionnaire (FFQ). We hypothesised that the DRS (0 (low risk) to 27 (high risk)) would inversely correlate with the HEI-2015 score. Adults aged 35 to 75 years were recruited from a national research volunteer registry (ResearchMatch.org) and completed the DRS and FFQ in random order on one occasion. To measure reliability, participants repeated the DRS within 3 months.
Results: In total, 126 adults (87% female) completed the study. Mean HEI-2015 score was 63.3 (95% CI: 61.1 to 65.4); mean DRS was 11.8 (95% CI: 10.8 to 12.8). DRS and HEI-2015 scores were inversely correlated (r=−0.6, p<0.001; R2=0.36). The DRS ranked 37% (n=47) of subjects in the same quintile and 41% (n=52) within ±1 quintile of the HEI-2015 (weighted κ: 0.28). The DRS had high reliability (n=102, ICC: 0.83). Mean DRS completion time was 2 min.
Conclusions: The DRS is a brief diet assessment tool, validated against an FFQ, that can reliably identify patients reporting suboptimal intake. Future studies should evaluate the effectiveness of DRS-guided diet assessment in clinical care.
Trial registration: ClinicalTrials.gov (NCT03805373).
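The headline validity statistic here is a Pearson correlation between DRS and HEI-2015 scores (r = −0.6, R² = 0.36). A self-contained sketch of that statistic, with invented scores standing in for the study data (higher DRS risk should track lower HEI-2015 diet quality):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical participants: DRS (0-27, higher = riskier diet)
# versus HEI-2015 (0-100, higher = better diet quality)
drs = [4, 8, 11, 15, 20, 24]
hei = [78, 70, 66, 60, 52, 45]
r = pearson_r(drs, hei)
print(round(r, 2), round(r ** 2, 2))  # r and shared variance R^2
```

Squaring r gives the R² reported in the abstract: the share of variance in one score explained by the other.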


Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract
Objective: This study developed an assessment tool, based on objective structured assessment of technical skills principles, for evaluating surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for evaluation of surgical ability. The study also aimed to identify the best material and printing method for making a three-dimensional printed temporal bone model.
Methods: Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale.
Results: Two external assessors scored the candidates. The objective structured assessment of technical skill in temporal bone dissection tool demonstrated key aspects of validity and reliability and can be used in training and performance evaluation of technical skills in mastoid surgery.
Conclusion: Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technologies are of high value in simulation training, with several advantages over traditional teaching methods.


2018 ◽  
Vol 36 (2) ◽  
pp. 93-96 ◽  
Author(s):  
Sara Moradi Tuchayi ◽  
Hossein Alinia ◽  
Lucy Lan ◽  
Olabola Awosika ◽  
Abigail Cline ◽  
...  

2015 ◽  
Vol 23 (3) ◽  
pp. 485-498
Author(s):  
Martha R. Sleutel ◽  
Celestina Barbosa-Leiker ◽  
Marian Wilson

Background and Purpose: Evidence-based practice (EBP) is essential to optimal health care outcomes. Interventions to improve use of evidence depend on accurate assessments from reliable, valid, and user-friendly tools. This study reports psychometric analyses of a modified version of a widely used EBP questionnaire, the Information Literacy for Nursing Practice (ILNP). Methods: After content validity assessments by nurse researchers, a convenience sample of 2,439 nurses completed the revised 23-item questionnaire. We examined internal consistency and used factor analyses to assess the factor structure. Results: A modified 4-factor model demonstrated adequate fit to the data. Cronbach’s alpha was .80–.92 for the subscales. Conclusions: The shortened ILNP (renamed the Healthcare EBP Assessment Tool, or HEAT) demonstrated adequate content validity, construct validity, and reliability.
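The subscale reliabilities above (Cronbach's alpha .80–.92) come from the standard alpha formula: the number of items, the item variances, and the variance of the total score. A minimal sketch with invented Likert responses (not the HEAT data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of item-score lists, all over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 4-item subscale, five respondents, 1-5 Likert scores
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
    [4, 5, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # → 0.95
```

When items rise and fall together across respondents, the total-score variance dwarfs the summed item variances and alpha approaches 1.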


2021 ◽  
Vol 15 (12) ◽  
pp. 3505-3508
Author(s):  
Noor Ul Ain Fatima ◽  
Qurat-Ul- Ain ◽  
Fareeha Kausar ◽  
Mian Ali Raza ◽  
Misbah Waris ◽  
...  

Objective: To translate and validate the ABC Scale in the Urdu language to predict risk of falls in the older population. Study design: Cross-cultural translation and validation. Place and duration: The study was conducted in an older adult community in Sialkot from March 2020 to December 2020. Methodology: Translation of the ABC into Urdu was conducted using the Beaton et al. guidelines. Two bilingual translators translated the original version into Urdu stepwise, followed by a correction process. Two backward translations were then done by language experts. The translated version was reviewed by professionals, and the final version was piloted on 15 individuals. Its reliability and validity were tested on 60 older adults. Results: For test-retest reliability, the intraclass correlation coefficient (ICC) was 0.984, which indicates good test-retest reliability. Internal consistency of the ABC, calculated as Cronbach’s alpha for the total score, was 0.985. Content validity was good, with CVI values ranging from 0.767 to 0.955. To test discriminative validity, an independent t-test was used to show the difference between healthy and unhealthy adults. Factor analysis of the U-ABC showed a total variance of 81.277, which was also the cumulative variance. To calculate construct validity of the U-ABC, Pearson’s correlation coefficient was used and measured as 0.558. Conclusion: The Urdu version of the ABC (U-ABC) is a valid assessment tool for older adults with fear of falling. It has good content validity, construct validity, and reliability. Keywords: Activities-specific Balance Confidence scale, validation, Urdu translation, reliability, tool translation
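The abstract reports a test-retest ICC of 0.984 without specifying the ICC form; a common choice for test-retest designs is ICC(3,1), computed from a two-way ANOVA decomposition. A sketch under that assumption, with invented ABC scores (0-100 balance-confidence scale) rather than the study's data:

```python
def icc3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    data: list of per-subject score lists (one column per session).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between sessions
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical test-retest ABC scores for five older adults (two sessions)
scores = [[85, 83], [60, 62], [72, 70], [90, 91], [55, 58]]
print(round(icc3_1(scores), 3))
```

When subjects keep their rank order and scores barely shift between sessions, the error mean square is small relative to the between-subjects mean square and the ICC approaches 1, as in the reported 0.984.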


2018 ◽  
Vol 29 (2) ◽  
pp. 152-161 ◽  
Author(s):  
Britt F. Pados

Abstract. Children with CHD often experience difficulty with oral feeding, which contributes to growth faltering in this population. Few studies have explored symptoms of problematic feeding in children with CHD using valid and reliable measures of oral feeding. The purpose of this study was to describe symptoms of problematic feeding in children with CHD compared to healthy children without medical conditions, taking into account variables that may contribute to symptoms of problematic feeding. Oral feeding was measured by the Pediatric Eating Assessment Tool, a parent report assessment of feeding with evidence of validity and reliability. This secondary analysis used data collected from web-based surveys completed by parents of 1093 children between 6 months and 7 years of age who were eating solid foods by mouth. General linear models were used to evaluate the differences between 94 children with CHD and 999 children without medical conditions based on the Pediatric Eating Assessment Tool total score and four subscale scores. Covariates tested in the models included breathing tube duration, type of CHD, gastroesophageal reflux, genetic disorder, difficulty with breast- or bottle-feeding during infancy, cardiac surgery, and current child age. Children with CHD had significantly more symptoms of problematic feeding than healthy children on the Pediatric Eating Assessment Tool total score, with more physiologic symptoms, problematic mealtime behaviours, selective/restrictive eating, and oral processing dysfunction (p < 0.001 for all), when taking into account relevant covariates. Additional research is needed in children with CHD to improve risk assessment and develop interventions to optimise feeding and growth.


2021 ◽  
Vol 19 (55) ◽  
pp. 671-686
Author(s):  
Hussain Alkharusi ◽  
Said Aldhafri ◽  
Ibrahim Al-Harthy ◽  
Hafidha Albarashdi ◽  
Marwa Alrajhi ◽  
...  

Introduction. Homework is one of the daily assessment methods used by the classroom teacher. In the literature, there are many studies dealing with homework management from the perspectives of students and parents. However, studies concerning teachers' self-efficacy for homework management are scarce. This study aimed at developing and validating a scale for measuring teachers' self-efficacy for homework management. Method. A descriptive research design was employed. The participants were 127 teachers randomly selected from one educational governorate in the Sultanate of Oman. The literature was reviewed to construct 20 items reflecting various aspects of homework design and implementation. The items and responses were subjected to a validation process. Results. The factorial structure of the scale revealed three subscales: efficacy for planning and designing homework; efficacy for monitoring, assessing, and providing feedback on homework; and efficacy for considering individual differences in homework. The three subscales showed acceptable evidence of validity and reliability. Discussion and Conclusion. Psychometric analysis of the teachers' responses showed that the three subscales were reliable measures of teachers' self-efficacy for homework management. These results support the usefulness of the scale as an assessment tool for research purposes and for the professional development of teachers, and they present new knowledge about teachers' management of homework, with planning and designing being the salient factor.

