Assessing student understanding of physical hydrology

2013 ◽  
Vol 17 (2) ◽  
pp. 829-836 ◽  
Author(s):  
J. A. Marshall ◽  
A. J. Castillo ◽  
M. B. Cardenas

Abstract. Our objective is to devise a mechanism to characterize and assess upper division and graduate student thinking in hydrology. We accomplish this through development and testing of an assessment tool for a physical hydrology class. The instrument was piloted in two sections of a physical hydrology course. Students were asked to respond to two questions that probed understanding and one question that assessed their ability to apply their knowledge, both prior to and after the course. Student and expert responses to the questions were classified into broad categories to develop a rubric to score responses. Using the rubric, three researchers independently blind-coded the full set of pre- and post-artifacts, resulting in 89% inter-rater agreement on the pre-tests and 83% agreement on the post-tests. The majority of responses made by students at the beginning of the class were characterized as showing only recognition of hydrology concepts from a non-physical perspective; post surveys indicated that the majority had moved to a basic understanding of physical processes, with some students achieving expert understanding. Our study has limitations, including the small number of participants who were all from one institution and the fact that the rubric was still under development. Nevertheless, the high inter-rater agreement from a group of experts indicates that the process we undertook is potentially useful for assessment of learning and understanding physical hydrology.

2012 ◽  
Vol 9 (9) ◽  
pp. 10095-10113
Author(s):  
J. A. Marshall ◽  
A. J. Castillo ◽  
M. B. Cardenas

Abstract. Our objective is to characterize and assess upper division and graduate student thinking in hydrology. We accomplish this through development and testing of an assessment tool for a physical hydrology class. Students were asked to respond to two questions that probed understanding and one question that assessed their ability to apply their knowledge. Student and expert responses to the questions were then used to develop a rubric to score responses. Using the rubric, three researchers independently blind-coded the full set of pre- and post-artifacts, resulting in 89% inter-rater agreement on the pre-tests and 83% agreement on the post-tests. This result has limitations, including the small number of participants, who were all from one institution, and the fact that the rubric was still under development. Nevertheless, the high inter-rater agreement from a group of experts is significant; the rubric we developed is a potentially useful tool for assessment of learning and understanding in physical hydrology.
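The inter-rater agreement figures reported in both hydrology abstracts are percent agreement among three independent coders. As an illustrative sketch (not code from the study), the strict all-coders-agree form of this statistic can be computed as follows; the response categories are taken from the abstract, but the data are hypothetical:

```python
def percent_agreement(codings):
    """Fraction of items on which every coder assigned the same code.

    `codings` holds one list of codes per rater, aligned by item.
    This is the strict all-raters-agree form; averaging pairwise
    agreement is a common alternative.
    """
    n_items = len(codings[0])
    agreed = sum(len({c[i] for c in codings}) == 1 for i in range(n_items))
    return agreed / n_items

# Three hypothetical coders classifying four student responses
coders = [
    ["recognition", "basic", "expert", "basic"],
    ["recognition", "basic", "expert", "basic"],
    ["recognition", "expert", "expert", "basic"],
]
print(percent_agreement(coders))  # 0.75: the coders disagree on one of four items
```

Percent agreement is easy to interpret but does not correct for agreement expected by chance; chance-corrected statistics such as kappa are often reported alongside it.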


2018 ◽  
Vol 17 (2) ◽  
pp. ar18 ◽  
Author(s):  
Mindi M. Summers ◽  
Brian A. Couch ◽  
Jennifer K. Knight ◽  
Sara E. Brownell ◽  
Alison J. Crowe ◽  
...  

A new assessment tool, Ecology and Evolution–Measuring Achievement and Progression in Science or EcoEvo-MAPS, measures student thinking in ecology and evolution during an undergraduate course of study. EcoEvo-MAPS targets foundational concepts in ecology and evolution and uses a novel approach that asks students to evaluate a series of predictions, conclusions, or interpretations as likely or unlikely to be true given a specific scenario. We collected evidence of validity and reliability for EcoEvo-MAPS through an iterative process of faculty review, student interviews, and analyses of assessment data from more than 3000 students at 34 associate’s-, bachelor’s-, master’s-, and doctoral-granting institutions. The 63 likely/unlikely statements range in difficulty and target student understanding of key concepts aligned with the Vision and Change report. This assessment provides departments with a tool to measure student thinking at different time points in the curriculum and provides data that can be used to inform curricular and instructional modifications.


Homeopathy ◽  
2020 ◽  
Vol 109 (04) ◽  
pp. 191-197
Author(s):  
Chetna Deep Lamba ◽  
Vishwa Kumar Gupta ◽  
Robbert van Haselen ◽  
Lex Rutten ◽  
Nidhi Mahajan ◽  
...  

Abstract Objectives The objective of this study was to establish the reliability and content validity of the “Modified Naranjo Criteria for Homeopathy—Causal Attribution Inventory” as a tool for attributing a causal relationship between the homeopathic intervention and outcome in clinical case reports. Methods Purposive sampling was adopted for the selection of information-rich case reports using pre-defined criteria. Eligible case reports had to fulfil a minimum of nine items of the CARE Clinical Case Reporting Guideline checklist and a minimum of three of the homeopathic HOM-CASE CARE extension items. The Modified Naranjo Criteria for Homeopathy Inventory consists of 10 domains. Inter-rater agreement in the scoring of these domains was determined by calculating the percentage agreement and kappa (κ) values. A κ greater than 0.4, indicating fair agreement between raters, in conjunction with the absence of concerns regarding the face validity, was taken to indicate the validity of a given domain. Each domain was assessed by four raters for the selected case reports. Results Sixty case reports met the inclusion criteria. Inter-rater agreement/concordance per domain was “perfect” for domains 1 (100%, κ = 1.00) and 2 (100%, κ = 1.00); “almost perfect” for domain 8 (97.5%, κ = 0.86); “substantial” for domains 3 (96.7%, κ = 0.80) and 5 (91.1%, κ = 0.70); “moderate” for domains 4 (83.3%, κ = 0.60), 7 (67.8%, κ = 0.46) and 9 (99.2%, κ = 0.50); and “fair” for domain 10 (56.1%, κ = 0.38). For domains 6A (46.7%, κ = 0.03) and 6B (50.3%, κ = 0.18), there was “slight agreement” only. Thus, the validity of the Modified Naranjo Criteria for Homeopathy tool was established for each of its domains, except for the two that pertain to direction of cure (domains 6A and 6B). 
Conclusion The Modified Naranjo Criteria for Homeopathy—Causal Attribution Inventory was identified as a valid tool for assessing the likelihood of a causal relationship between a homeopathic intervention and clinical outcome. Improved wordings for several criteria have been proposed for the assessment tool, under the new acronym “MONARCH”. Further assessment of two MONARCH domains is required.
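The kappa (κ) values reported above correct raw percentage agreement for agreement expected by chance. As a minimal sketch (not the study's code), here is Cohen's κ for two raters; the study used four raters per domain, where a multi-rater generalization such as Fleiss' κ applies, but the two-rater form shows the principle. The data below are hypothetical:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed proportion of exact agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical raters scoring eight case reports on a yes/no domain
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "no", "yes", "yes", "yes"]
print(round(cohen_kappa(a, b), 2))  # 0.71: raw agreement is 7/8, kappa is lower
```

This illustrates why the abstract can report high percentage agreement alongside a much lower κ (as in domains 6A and 6B): when raters' marginal frequencies make chance agreement likely, κ discounts it heavily.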


1988 ◽  
Vol 4 (1) ◽  
pp. 61-70
Author(s):  
Corrinne A. Wiss ◽  
Wendy Burnett

The Boder Test of Reading-Spelling Patterns (Boder & Jarrico, 1982) is a widely used method for screening and defining reading problems at the level of the word. In order to apply this method in another language, in this case French, criteria for determining what constitutes a good phonetic equivalent for a misspelled word are required. It is essential to know which errors differentiate good and poor readers since errors that are commonly made by good readers are not diagnostic. This paper reports guidelines which have been developed by analyzing spelling errors in a sample of good and poor French immersion readers. These criteria for good phonetic equivalents can be applied, along with the method outlined in the Boder test manual, and used as an assessment tool for screening decoding and encoding problems in French immersion children. When used in conjunction with the English test, the assessment provides bilingual comparisons and guidelines for remedial programming.


2016 ◽  
Vol 14 (2) ◽  
pp. 158-177 ◽  
Author(s):  
Rhaine Borges Santos Pedreira ◽  
Saulo Vasconcelos Rocha ◽  
Clarice Alves dos Santos ◽  
Lélia Renata Carneiro Vasconcelos ◽  
Martha Cerqueira Reis

ABSTRACT Objective To assess the content validity of the Elderly Health Assessment Tool for elderly persons with low education. Methods The data collection instrument/questionnaire was prepared and submitted to an expert panel comprising four healthcare professionals experienced in research on the epidemiology of aging. The experts were allowed to suggest item inclusion/exclusion and were asked to rate the ability of individual items in questionnaire blocks to encompass target dimensions as "not valid", "somewhat valid" or "valid", using an interval scale. Percent agreement and the Content Validity Index were used as measurements of inter-rater agreement; the minimum acceptable inter-rater agreement was set at 80%. Results The mean instrument percent agreement rate was 86%, ranging from 63 to 99% between blocks and from 50 to 100% within blocks. The mean Content Validity Index score was 93.47%, ranging from 50 to 100% between individual items. Conclusion The instrument showed acceptable psychometric properties for application in geriatric populations with low levels of education. It enabled the identification of diseases and assisted in the choice of strategies related to the health of the elderly.
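The Content Validity Index used above can be sketched in a few lines (illustrative only, not the study's code; the category labels follow the "not valid" / "somewhat valid" / "valid" scale described, and the ratings are hypothetical). The item-level CVI is the fraction of experts rating an item as valid, and the scale-level average is the mean across items:

```python
def item_cvi(ratings, valid_labels=("valid",)):
    """I-CVI: fraction of experts who rate the item as content-valid."""
    return sum(r in valid_labels for r in ratings) / len(ratings)

def scale_cvi_ave(all_item_ratings):
    """S-CVI/Ave: mean of the item-level CVIs across the instrument."""
    cvis = [item_cvi(r) for r in all_item_ratings]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from a four-expert panel for two items
items = [
    ["valid", "valid", "valid", "valid"],
    ["valid", "valid", "somewhat valid", "valid"],
]
print(item_cvi(items[1]))    # 0.75
print(scale_cvi_ave(items))  # 0.875
```

Whether "somewhat valid" counts toward the index is a design choice; the sketch above counts only "valid", which is the stricter convention.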


2014 ◽  
Vol 22 (3) ◽  
pp. 212-225 ◽  
Author(s):  
Valerie Priscilla Goby ◽  
Catherine Nickerson

Purpose – This paper aims to focus on the successful efforts made at a university business school in the Gulf region to develop an assessment tool to evaluate the communication skills of undergraduate students as part of satisfying the Association to Advance Collegiate Schools of Business (AACSB) accreditation requirements. We do not consider the validity of establishing learning outcomes or meeting these according to AACSB criteria. Rather, we address ourselves solely to the design of a testing instrument that can measure the degree of student learning within the parameters of university-established learning outcomes. Design/methodology/approach – The testing of communication skills, as opposed to language, is notoriously complex, and we describe our identification of constituent items that make up the corpus of knowledge that business students need to attain. We discuss our development of a testing instrument which reflects the learning process of knowledge, comprehension and application. Findings – Our work acted as a valid indicator of the effectiveness of teaching and learning as well as a component of accreditation requirements. Originality/value – The challenge to obtain accreditation, supported by appropriate assessment procedures, is now a high priority for more and more universities in emerging, as well as in developed, economies. For business schools, the accreditation provided by AACSB remains perhaps the most sought after global quality assurance program, and our work illustrates how the required plotting and assessment of learning objectives can be accomplished.


2012 ◽  
Vol 11 (1) ◽  
pp. 47-57 ◽  
Author(s):  
Joyce M. Parker ◽  
Charles W. Anderson ◽  
Merle Heidemann ◽  
John Merrill ◽  
Brett Merritt ◽  
...  

We present a diagnostic question cluster (DQC) that assesses undergraduates' thinking about photosynthesis. This assessment tool is not designed to identify individual misconceptions. Rather, it is focused on students' abilities to apply basic concepts about photosynthesis by reasoning with a coordinated set of practices based on a few scientific principles: conservation of matter, conservation of energy, and the hierarchical nature of biological systems. Data on students' responses to the cluster items and uses of some of the questions in multiple-choice, multiple-true/false, and essay formats are compared. A cross-over study indicates that the multiple-true/false format shows promise as a machine-gradable format that identifies students who have a mixture of accurate and inaccurate ideas. In addition, interviews with students about their choices on three multiple-choice questions reveal the fragility of students' understanding. Collectively, the data show that many undergraduates lack both a basic understanding of the role of photosynthesis in plant metabolism and the ability to reason with scientific principles when learning new content. Implications for instruction are discussed.


2022 ◽  
Vol 7 (4) ◽  
pp. 703-706
Author(s):  
Prachi Nilraj Bakare ◽  
Rupali Maheshgauri ◽  
Deepaswi Bhavsar ◽  
Renu Magdum

Ophthalmic surgery demands very precise surgical skill, which is difficult to teach and even more cumbersome to assess in residents. There is therefore a pressing need to adopt newer tools for transferring as well as assessing surgical skill. With this concept in mind, the International Council of Ophthalmology (ICO) has developed various tools for assessing surgical skills. Using such a tool not only for learning but also to give constructive feedback on the surgical skills of resident doctors will help create competent ophthalmic surgeons and eventually benefit society in general. The objectives were: (1) to develop more standardized surgical training; (2) to assess the efficacy and feasibility of the new tool in improving the surgical skills of postgraduate (PG) students; and (3) to determine the effect of constructive feedback on surgical performance. Small-incision cataract surgery training was conducted using the rubric designed by the ICO (ICO-OSCAR). The same tool was used by different faculty members to assess video-recorded cataract surgeries of residents and thereby evaluate their surgical skill. The assessor simply circled the observed performance description at each step of the procedure, completing the ICO-OSCAR score. At the end of each case, the assessor immediately discussed the operated case with the student to provide timely, structured, specific performance feedback. OSCAR scores were recorded and analysed for inter-rater agreement. The OSCAR tool showed very good inter-rater agreement (0.96). Analysis of student and observer feedback indicates that the OSCAR tool is well suited both as a learning tool and as an assessment tool, and is easy to use. Recorded surgeries and constructive feedback from assessors helped postgraduate students improve surgically, resulting in the best outcome for patients in terms of good postoperative visual acuity.
If formative assessment of surgical skills becomes an integral part of the formal residency training framework, it would be in the interest of trainees and trainers to adopt the OSCAR tools for training and assessment. These tools can add immense value to residency as well as fellowship surgical training and may help create a generation of competent trainees. Formative assessment and constructive feedback in surgical training will improve the competency of new ophthalmic surgeons. Structured surgical training will be relatively easy to observe and perform, as trainees learn what is required to be competent. This will ultimately improve the overall quality of patient care.


2010 ◽  
Vol 25 (1) ◽  
pp. 52-58 ◽  
Author(s):  
Elena Savoia ◽  
Paul D. Biddinger ◽  
Jon Burstein ◽  
Michael A. Stoto

Abstract Introduction: As proxies for actual emergencies, drills and exercises can raise awareness, stimulate improvements in planning and training, and provide an opportunity to examine how different components of the public health system would combine to respond to a challenge. Despite these benefits, there remains a substantial need for widely accepted and prospectively validated tools to evaluate agencies' and hospitals' performance during such events. Unfortunately, to date, few studies have focused on addressing this need. The purpose of this study was to assess the validity and reliability of a qualitative performance assessment tool designed to measure hospitals' communication and operational capabilities during a functional exercise. Methods: The study population included 154 hospital personnel representing nine hospitals that participated in a functional exercise in Massachusetts in June 2008. A 25-item questionnaire was developed to assess the following three hospital functional capabilities: (1) inter-agency communication; (2) communication with the public; and (3) disaster operations. Analyses were conducted to examine internal consistency, associations among scales, the empirical structure of the items, and inter-rater agreement. Results: Twenty-two questions were retained in the final instrument, which demonstrated reliability with alpha coefficients of 0.83 or higher for all scales. A three-factor solution from the principal components analysis accounted for 57% of the total variance, and the factor structure was consistent with the original hypothesized domains. Inter-rater agreement between participants' self-reported scores and external evaluators' scores ranged from moderate to good. Conclusions: The resulting 22-item performance measurement tool reliably measured hospital capabilities in a functional exercise setting, with preliminary evidence of concurrent and criterion-related validity.
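The alpha coefficients reported above measure the internal consistency of each scale. A minimal sketch of Cronbach's alpha follows (illustrative only, not the study's analysis code); it takes one score list per questionnaire item, aligned across respondents, and the sample data are hypothetical:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency of a multi-item scale.

    `items` holds one list of scores per questionnaire item, aligned
    across respondents. Population variances are used throughout.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    # Total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Two hypothetical items answered by three respondents
print(round(cronbach_alpha([[2, 4, 6], [1, 2, 3]]), 2))  # 0.89
```

Alpha approaches 1 when items covary strongly relative to their individual variances, which is why values of 0.83 or higher are conventionally read as reliable scales.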

