Patient Simulation: A Literary Synthesis of Assessment Tools in Anesthesiology

Author(s):  
Alice A. Edler ◽  
Ruth G. Fanning ◽  
Michael. I. Chen ◽  
Rebecca Claure ◽  
Dondee Almazan ◽  
...  

High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency of knowledge and skill in patient simulation, but uniform methods for HFPS performance assessment (PA) have not yet been completely achieved. Anesthesiology as a field founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We used the systematic review method to examine anesthesiology literature referenced in PubMed and to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies have been published since 2000; more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included "modified Delphi techniques" for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations and has shown a significant improvement in reporting accuracy. However, the assessment of predictive validity has been low across the majority of studies. The usability and practicality of testing occasions and tools were only anecdotally reported. To more completely comply with the gold standards for PA design, both the shared experience of experts and adherence to test construction standards, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool, are required.
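The inter-rater agreement reported above is typically expressed as a chance-corrected statistic such as Cohen's kappa. A minimal sketch (the pass/fail ratings below are hypothetical, not data from the review):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail judgments from two simulation raters
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass"]
print(round(cohens_kappa(a, b), 3))
```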

2019 ◽  
Vol 11 (4) ◽  
pp. 422-429
Author(s):  
Jason A. Lord ◽  
Danny J. Zuege ◽  
Maria Palacios Mackay ◽  
Amanda Roze des Ordons ◽  
Jocelyn Lockyer

ABSTRACT Background Determining procedural competence requires psychometrically sound assessment tools. A variety of instruments are available to determine procedural performance for central venous catheter (CVC) insertion, but it is not clear which ones should be used in the context of competency-based medical education. Objective We compared several commonly used instruments to determine which should be preferentially used to assess competence in CVC insertion. Methods Junior residents completing their first intensive care unit rotation between July 31, 2006, and March 9, 2007, were video-recorded performing CVC insertion on task trainer mannequins. Between June 1, 2016, and September 30, 2016, 3 experienced raters judged procedural competence on the historical video recordings of resident performance using 4 separate tools, including an itemized checklist, Objective Structured Assessment of Technical Skills (OSATS), a critical error assessment tool, and the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE). Generalizability theory (G-theory) was used to compare the performance characteristics among the tools. A decision study predicted the optimal testing environment using the tools. Results At the time of the original recording, 127 residents rotated through intensive care units at the University of Calgary, Alberta, Canada. Seventy-seven of them (61%) met inclusion criteria, and 55 of those residents (71%) agreed to participate. Results from the generalizability study (G-study) demonstrated that scores from O-SCORE and OSATS were the most dependable. Dependability could be maintained for O-SCORE and OSATS with 2 raters. Conclusions Our results suggest that global rating scales, such as the OSATS or the O-SCORE tools, should be preferentially utilized for assessment of competence in CVC insertion.
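The decision study described above predicts how dependability changes as the number of raters varies, using variance components estimated in the G-study. A minimal sketch of the absolute-decision dependability (phi) coefficient for a persons-by-raters design; the variance components below are hypothetical, not the study's estimates:

```python
def dependability(var_person, var_rater, var_residual, n_raters):
    """Phi coefficient for absolute decisions in a persons-x-raters design:
    true-score (person) variance over person variance plus averaged error."""
    error = (var_rater + var_residual) / n_raters
    return var_person / (var_person + error)

# Hypothetical variance components from a G-study of a rating tool
var_p, var_r, var_res = 0.60, 0.05, 0.35
for n in (1, 2, 3):
    print(n, round(dependability(var_p, var_r, var_res, n), 2))
```

Dependability rises with each added rater, which is how a decision study identifies the smallest rater panel (here, for example, two raters) that keeps dependability above a chosen threshold.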


Author(s):  
Khamis Elessi ◽  
Shireen Abed ◽  
Tayseer Jamal Afifi ◽  
Rawan Utt ◽  
Mahmood Elblbessy ◽  
...  

Background: Neonates frequently experience pain as a result of diagnostic or therapeutic interventions or of a disease process. Neonates cannot verbalize their pain experience and depend on others to recognize, assess, and manage their pain. Neonates may suffer immediate or long-term consequences of unrelieved pain, so accurate assessment of pain is essential to provide adequate management. Observational scales, which include physiological and behavioral responses to pain, are available to aid consistent pain management. Pain assessment is considered the fifth vital sign. Objectives: The aims of the present study were (1) to compare two commonly cited neonatal pain assessment tools, the Neonatal Pain, Agitation and Sedation Scale (N-PASS) and the modified Pain Assessment Tool (mPAT), with regard to their psychometric qualities, (2) to explore intuitive clinicians' ratings by relating them to the tools' items, and (3) to ensure that neonates receive adequate pain control. Methods: Two coders applied both pain assessment tools to 850 neonates undergoing a painful or a stressful procedure. Each neonate was assessed before, during, and after the procedure. The evaluations before and after the procedure were done using N-PASS, while the pain score during the procedure was assessed with mPAT. Analyses of variance and regression analyses were used to investigate whether the tools could discriminate between the procedures and whether the tools' items were predictors of pain severity. Results: Internal consistency, reliability, and validity were high for both assessment tools. The N-PASS tool discriminated between painful and stressful situations better than mPAT. There was no relation between the age of the neonate and the pain score. Moreover, the differences between the mPAT score and the post-procedural assessment score, and between the pre- and post-procedural assessment scores, were statistically significant. Conclusion: Both assessment tools performed equally well regarding physiologic parameters. However, N-PASS makes it possible to assess pain during sedation. Gaps were noted between practitioner knowledge and attitudes regarding neonatal pain.


2019 ◽  
Vol 52 (02) ◽  
pp. 216-221
Author(s):  
Sheeja Rajan ◽  
Ranjith Sathyan ◽  
L. S. Sreelesh ◽  
Anu Anto Kallerey ◽  
Aarathy Antharjanam ◽  
...  

Abstract Microsurgical skill acquisition is an integral component of training in plastic surgery. Current microsurgical training is based on the subjective Halstedian model. An ideal microsurgery assessment tool should be able to deconstruct all the subskills of microsurgery and assess them objectively and reliably. For our study of the feasibility, reliability, and validity of microsurgical skill assessment, a video-based objective structured assessment of technical skills tool was chosen. Two blinded experts evaluated 40 videos of six residents performing microsurgical anastomosis for arteriovenous fistula surgery. The generic Reznick's global rating score (GRS) and the University of Western Ontario microsurgical skills acquisition/assessment (UWOMSA) instrument were used as checklists. Correlation coefficients of 0.75 to 0.80 (UWOMSA) and 0.71 to 0.77 (GRS) for interrater and intrarater reliability showed that the assessment tools were reliable. Convergent validity of the UWOMSA tool against the prevalidated GRS tool showed good agreement. The mean improvement of scores with years of residency was measured with analysis of variance. Both UWOMSA (p-value: 0.034) and GRS (p-value: 0.037) demonstrated significant improvement in scores from postgraduate year 1 (PGY1) to PGY2 and a less marked improvement from PGY2 to PGY3. We conclude that objective assessment of microsurgical skills in an actual clinical setting is feasible. Tools like UWOMSA are valid and reliable for microsurgery assessment and provide feedback to chart progression of learning. Acceptance and validation of such objective assessments will help to improve training and bring uniformity to microsurgery education.
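Interrater reliability coefficients like those reported above (0.71 to 0.80) can be computed as a correlation between two raters' checklist totals. A minimal Pearson-correlation sketch; the GRS totals below are hypothetical, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' checklist totals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical GRS totals from two blinded raters for six videos
rater1 = [18, 22, 25, 30, 27, 33]
rater2 = [17, 24, 26, 28, 29, 34]
print(round(pearson_r(rater1, rater2), 2))
```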


2015 ◽  
Vol 95 (1) ◽  
pp. 25-38 ◽  
Author(s):  
Michael G. O'Grady ◽  
Stacey C. Dusing

Background Play is vital for development. Infants and children learn through play. Traditional standardized developmental tests measure whether a child performs individual skills within controlled environments. Play-based assessments can measure skill performance during natural, child-driven play. Purpose The purpose of this study was to systematically review reliability, validity, and responsiveness of all play-based assessments that quantify motor and cognitive skills in children from birth to 36 months of age. Data Sources Studies were identified from a literature search using PubMed, ERIC, CINAHL, and PsycINFO databases and the reference lists of included papers. Study Selection Included studies investigated reliability, validity, or responsiveness of play-based assessments that measured motor and cognitive skills for children to 36 months of age. Data Extraction Two reviewers independently screened 40 studies for eligibility and inclusion. The reviewers independently extracted reliability, validity, and responsiveness data. They examined measurement properties and methodological quality of the included studies. Data Synthesis Four current play-based assessment tools were identified in 8 included studies. Each play-based assessment tool measured motor and cognitive skills in a different way during play. Interrater reliability correlations ranged from .86 to .98 for motor development and from .23 to .90 for cognitive development. Test-retest reliability correlations ranged from .88 to .95 for motor development and from .45 to .91 for cognitive development. Structural validity correlations ranged from .62 to .90 for motor development and from .42 to .93 for cognitive development. One study assessed responsiveness to change in motor development. Limitations Most studies had small and poorly described samples. Lack of transparency in data management and statistical analysis was common. 
Conclusions Play-based assessments have potential to be reliable and valid tools to assess cognitive and motor skills, but higher-quality research is needed. Psychometric properties should be considered for each play-based assessment before it is used in clinical and research practice.


2018 ◽  
Vol 3 (2) ◽  
Author(s):  
Liam Rooney

Background
Dementia is a disease affecting 55,000 Irish people. (1) It is characterised by progressive cognitive impairment, ranging from mild impairment, which may affect memory, to severe impairment, where the ability to communicate may be absent. These patients are at risk of having their pain under-assessed and under-managed. (2) A survey exploring Irish Paramedics' and Advanced Paramedics' views on the pain assessment tools currently available to them, and whether these tools are suitable for use with dementia patients, is proposed. Existing observational pain assessment tools used with dementia patients are examined and their suitability for pre-hospital use discussed.

Introduction
Adults with cognitive impairments, such as dementia, are at a much higher risk of not receiving adequate analgesia for their pain. (3) An estimated 40% to 80% of dementia patients regularly experience pain. (4) The pain assessment tools currently used pre-hospital in Ireland are the Numerical Rating Scale for patients >8 years, the Wong Baker Scale for paediatric patients, and the FLACC Scale for infants. There is no specific pain assessment tool for patients who are not capable of self-reporting their level of pain.

Objective
This research aimed to identify observational pain assessment tools used in this cohort, to identify the most consistently recommended tools, and to assess their suitability for the pre-hospital setting.

Findings
A literature review identified 29 observational pain assessment tools; literature relating to the pre-hospital setting is lacking. The American Geriatric Society (AGS) identified six pain behaviours in dementia patients: changes in facial expression, changes in activity patterns, changes in interpersonal relationships, changes in mental status, negative vocalisation, and changes in body language. These six criteria should be the foundation of any pain assessment tool. (5) The three most consistently recommended tools were:

Abbey Pain Scale: 6 items assessed; meets the AGS criteria; quick and easy to implement; moderate to good reliability and validity. (6)

Doloplus 2: 15 items assessed; meets 5 of 6 AGS criteria; requires observation over time and prior knowledge of the patient; moderate to good reliability and validity. (6)

PAINAD: 5 items assessed; meets 3 of 6 AGS criteria; takes less than 5 minutes to implement; may be influenced by psychological distress; good reliability and validity. (6)

Conclusion
The ability to self-report pain is deemed the "gold standard". Patients with mild to moderate disease, and indeed some with severe disease, may retain the ability to self-report. An observational tool is required when dementia has progressed to the point where the patient becomes unable to self-report or becomes non-verbal. It is in these patients that undetected, misinterpreted, or inaccurate assessment of pain becomes frequent. (7) The aim of any tool is a good assessment of pain; however, the pain scale used should be suitable to the clinical setting. The feasibility of an assessment tool is an important factor alongside reliability and validity. No one assessment tool could be recommended over another. Abbey and PAINAD have potential for pre-hospital use, but further research, clinical evaluation, and trial in an ambulance service are required.
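The entry above notes that PAINAD assesses 5 items; in the published scale each behavioural item is rated 0-2 by the observer and summed to a 0-10 score. A minimal scoring sketch (the observed ratings below are hypothetical):

```python
# Item names follow the published PAINAD scale; the example observation
# is invented for illustration, not taken from any patient record.
PAINAD_ITEMS = ("breathing", "negative_vocalisation",
                "facial_expression", "body_language", "consolability")

def painad_total(ratings):
    """Sum five 0-2 behavioural item ratings into a 0-10 pain score."""
    assert set(ratings) == set(PAINAD_ITEMS), "all five items are required"
    assert all(0 <= v <= 2 for v in ratings.values()), "each item is 0-2"
    return sum(ratings.values())

obs = {"breathing": 1, "negative_vocalisation": 2, "facial_expression": 1,
       "body_language": 2, "consolability": 1}
print(painad_total(obs))  # 7
```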


2014 ◽  
Vol 35 (4) ◽  
pp. 250-261 ◽  
Author(s):  
Matthias Ziegler ◽  
Arthur Poropat ◽  
Julija Mell

Short personality questionnaires are increasingly used in research and practice, with some scales including as few as two to five items per personality domain. Despite the frequency of their use, these short scales are often criticized on the basis of their reduced internal consistencies and their purported failure to assess the breadth of broad constructs, such as the Big 5 factors of personality. One reason for this might be the use of principles rooted in Classical Test Theory (CTT) during test construction. In this study, Generalizability Theory (GT) is used to compare psychometric properties of different scales based on the NEO-PI-R and BFI, two widely used personality questionnaire families. Applying both CTT and GT made it possible to identify the inner workings of test shortening. CTT-based analyses indicated that longer is generally better for reliability, while GT allowed differentiation between reliability for relative and absolute decisions and revealed how different variance sources affect test score reliability estimates. These variance sources differed with scale length, and only GT allowed a clear description of these internal consequences, allowing more effective identification of the advantages and disadvantages of shorter and longer scales. Most importantly, the findings highlight the potential error-proneness of focusing solely on reliability and scale length in test construction. Practical as well as theoretical consequences are discussed.
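Under CTT, the "longer is generally better" relationship between scale length and reliability is captured by the Spearman-Brown prophecy formula. A minimal sketch, with a hypothetical 10-item scale as the starting point:

```python
def spearman_brown(reliability, length_factor):
    """Predicted CTT reliability when a scale is lengthened (factor > 1)
    or shortened (factor < 1), assuming parallel items."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A hypothetical 10-item scale with reliability .80, cut to 2 items (factor 0.2)
print(round(spearman_brown(0.80, 0.2), 2))
# The same scale doubled to 20 items (factor 2)
print(round(spearman_brown(0.80, 2.0), 2))
```

The formula makes the criticism of very short scales concrete: cutting a .80-reliable 10-item scale to two items drops the predicted reliability below .50, while doubling it yields only a modest gain.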


Author(s):  
Kai T. Horstmann ◽  
Johanna Ziegler ◽  
Matthias Ziegler

The assessment of situations and especially situational perceptions is the focus of this chapter. Based on the ABC principles of test construction (Ziegler, 2014b) and the road map to the taxonomization of situations (Rauthmann, 2015), this chapter shows how situational taxonomies and their assessment tools can be developed. These principles are exemplified by presenting three recent situational taxonomies and the effect different approaches have on the resulting taxonomy. Similarities and differences to established taxonomies of personality traits (such as the Big Five) are discussed. Furthermore, a new taxonomy and assessment tool is presented that captures personality traits and situational perception at the same time. Finally, challenges of future situational taxonomization, especially the need to establish a nomological net of situational perception and other, related constructs and psychological processes, are discussed.


2016 ◽  
Vol 3 ◽  
pp. JMECD.S30035 ◽  
Author(s):  
Hirotaka Onishi

Case presentation is used as a teaching and learning tool in almost all clinical education, and it is also associated with clinical reasoning ability. Despite this, no specific assessment tool utilizing case presentations has yet been established. SNAPPS (summarize, narrow, analyze, probe, plan, and select) and the One-minute Preceptor are well-known educational tools for teaching how to improve consultations. However, these tools do not include a specific rating scale to determine the diagnostic reasoning level. The mini clinical evaluation exercise (Mini-CEX) and RIME (reporter, interpreter, manager, and educator) are comprehensive assessment tools with appropriate reliability and validity. The vague, structured, organized, and pertinent (VSOP) model, previously proposed in Japan and derived from the RIME model, is a tool for formative assessment and teaching of trainees through case presentations. Uses of the VSOP model in real settings are also discussed.


2021 ◽  
Author(s):  
Mah Parsa ◽  
Muhammad Raisul Alam ◽  
Alex Mihailidis

Abstract Objectives: The main objective of this paper is to propose an approach for developing an Artificial Intelligence (AI)-powered Language Assessment (LA) tool. Such tools can be used to assess language impairments associated with dementia in older adults. Machine Learning (ML) classifiers are the main parts of our proposed approach; therefore, to develop an accurate tool with high sensitivity and specificity, we consider different binary classifiers and evaluate their performance. We also assess the reliability and validity of our approach by comparing the impact of different types of language tasks, features, and recording media on the performance of the ML classifiers. Approach: Our approach includes the following steps: 1) collecting language datasets or getting access to available language datasets; 2) extracting linguistic and acoustic features from the speech of subjects with dementia (N=9) and subjects without dementia (N=13); 3) selecting the most informative features and using them to train ML classifiers; and 4) evaluating the performance of the classifiers in distinguishing subjects with dementia from subjects without dementia and selecting the most accurate classifier to be the basis of the AI tool. Results: Our results indicate that 1) we can find more predictive linguistic markers to distinguish language impairment associated with dementia in speech produced during the Picture Description (PD) language task than in the Story Recall (SR) task; and 2) phone-based recording interfaces provide higher-quality language datasets than web-based recording systems. Conclusion: Our results verify that tree-based classifiers, trained using the linguistic and acoustic features extracted from interview transcripts and audio, can be used to develop an AI-powered language assessment tool for detecting language impairment associated with dementia.
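Steps 3-4 above (train a classifier on selected features, then evaluate it) can be sketched with a single-feature decision stump, the simplest building block of the tree-based classifiers the paper favours. The feature names, values, and labels below are synthetic illustrations, not the study's dataset:

```python
def fit_stump(X, y):
    """Pick the (feature, threshold, direction) with best training accuracy.
    X is a list of feature vectors; y is a list of 0/1 labels."""
    best = None
    n_features = len(X[0])
    for j in range(n_features):
        for t in sorted(set(row[j] for row in X)):      # candidate thresholds
            for sign in (1, -1):                         # >= t or <= t
                pred = [1 if sign * (row[j] - t) >= 0 else 0 for row in X]
                acc = sum(p == label for p, label in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, j, t, sign)
    return best  # (train_accuracy, feature_index, threshold, direction)

# Synthetic feature rows: [speech_rate, pause_ratio]; label 1 = dementia
X = [[1.8, 0.40], [2.0, 0.35], [1.7, 0.45], [3.1, 0.10], [2.9, 0.15], [3.3, 0.12]]
y = [1, 1, 1, 0, 0, 0]
acc, j, t, sign = fit_stump(X, y)
print(acc, j, t, sign)
```

A full tree-based model (e.g. a random forest) stacks and averages many such stumps; the evaluation step in the paper would additionally use held-out data rather than training accuracy.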


2016 ◽  
Vol 156 (1) ◽  
pp. 61-69 ◽  
Author(s):  
Rishabh Sethia ◽  
Thomas F. Kerwin ◽  
Gregory J. Wiet

Objective The aim of this report is to provide a review of the current literature for assessment of performance for mastoidectomy, to identify the current assessment tools available in the literature, and to summarize the evidence for their validity. Data Sources The MEDLINE database was accessed via PubMed. Review Methods Inclusion criteria consisted of English-language published articles that reported use of a mastoidectomy performance assessment tool. Studies ranged from 2007 to November 2015 and were divided into 2 groups: intraoperative assessments and those performed with simulation (cadaveric laboratory or virtual reality). Studies that contained specific reliability analyses were also highlighted. For each publication, validity evidence data were analyzed and interpreted according to conceptual definitions provided in a recent systematic review on the modern framework of validity evidence. Conclusions Twenty-three studies were identified that met our inclusion criteria for review, including 4 intraoperative objective assessment studies, 5 cadaveric studies, 10 virtual reality simulation studies, and 4 that used both cadaveric assessment and virtual reality. Implications for Practice A review of the literature revealed a wide variety of mastoidectomy assessment tools and varying levels of reliability and validity evidence. The assessment tool developed at Johns Hopkins possesses the most validity evidence of those reviewed. However, a number of agreed-on specific metrics could be integrated into a standardized assessment instrument to be used nationally. A universally agreed-on assessment tool will provide a means for developing standardized benchmarks for performing mastoid surgery.

