narrative comments
Recently Published Documents

TOTAL DOCUMENTS: 40 (five years: 20)
H-INDEX: 9 (five years: 3)

Author(s):  
Timothy Chaplin ◽  
Heather Braund ◽  
Adam Szulewski ◽  
Nancy Dalgarno ◽  
Rylan Egan ◽  
...  

Background: The direct observation and assessment of learners’ resuscitation skills by an attending physician is challenging due to the unpredictable and time-sensitive nature of these events. Multisource feedback (MSF) may address this challenge and improve the quality of the assessments provided to learners. We aimed to describe the similarities and differences in the assessment rationale of attending physicians, registered nurses, and resident peers in the context of a simulation-based resuscitation curriculum. Methods: We conducted a qualitative content analysis of narrative MSF for medical residents in their first postgraduate year of training who were participating in a simulation-based resuscitation course at two Canadian institutions. Assessments included an entrustment score and narrative comments from attending physicians, registered nurses, and resident peers, in addition to a self-assessment. Narrative comments were transcribed and analyzed thematically using a constant comparative method. Results: All 87 residents (100%) participating in the 2017-2018 course provided consent. A total of 223 assessments were included in our analysis. Four themes emerged from the narrative data: 1) Communication, 2) Leadership, 3) Demeanor, and 4) Medical Expert. Relative to other assessor groups, feedback from nurses focused on patient-centred care and communication, while feedback from attending physicians focused on the medical expert theme. Peer feedback was the most positive. Self-assessments included comments within each of the four themes. Conclusions: In the context of a simulation-based resuscitation curriculum, MSF provided learners with different perspectives in their narrative assessment rationale and may offer a more holistic assessment of resuscitation skills within a competency-based medical education (CBME) program of assessment.


2021 ◽  
Author(s):  
Pia Liljamo ◽  
Anne Kuusisto ◽  
Timo Ukkola ◽  
Mikko Härkönen ◽  
Ulla-Mari Kinnunen

In Finland, the nationally unified and standardized nursing documentation model comprises the nursing process model and the Finnish Care Classification (FinCC). The aim of this study was to assess how well the further developed FinCC complies with actual nursing practice and how pragmatic and understandable it is. An e-questionnaire based on the revised version of the FinCC was sent to healthcare organizations (n=34) and Universities of Applied Sciences (n=14). Data were gathered and organized in Excel, and narrative comments were read and analyzed. The mean rating of the questions across the 17 components of both the FICND (nursing diagnoses) and the FICNI (nursing interventions) was over four (scale 1–5). The most substantial change in this revision of the FinCC is that rating scales and evidence-based research were used in developing the terminology. Based on the findings, revisions have been made, and the new version, FinCC 4.0, will be published at the end of 2019.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Bingjing Mao ◽  
Cong Li

Purpose: Narrative comments about dentists on physician review sites have been documented to increasingly influence people's selection of their dentists. From a communication standpoint, these comments are a type of narrative communication through which people share their experiences with dentists by telling stories. Based on the frameworks of rhetorical structure theory and the extended elaboration likelihood model, this study aimed to examine the effects of such storytelling from two perspectives: narrative structure and narrative focus.
Design/methodology/approach: A 4 (narrative structure) × 2 (narrative focus) between-subjects experiment was conducted to examine the proposed hypotheses and research questions.
Findings: The results showed that a one-sided comprehensive comment focusing on technical competence generated the strongest persuasion effects as measured by attitude and behavioral intention. These effects were mediated by perceived narrative credibility and enjoyment.
Originality/value: This study contributes to the extant literature in two ways. First, it extends previous studies of online narrative comments by showing which narrative structure and focus are deemed more persuasive when selecting a dentist. Second, it offers a test of two routes of information processing (i.e., cognitive and experiential) to understand the mechanism underlying the effects of narrative comments.
Peer review: The peer-review history for this article is available at: https://publons.com/publon/10.1108/OIR-08-2020-0359


Author(s):  
Aishwarya Roshan ◽  
Natalie Wagner ◽  
Anita Acai ◽  
Heather Emmerton-Coughlin ◽  
Ranil R. Sonnadara ◽  
...  

2021 ◽  
Author(s):  
Yusuf Yilmaz ◽  
Alma Jurado Nunez ◽  
Ali Ariaeinejad ◽  
Mark Lee ◽  
Jonathan Sherbino ◽  
...  

BACKGROUND Residents receive a numeric performance rating (e.g., on a 1-7 scoring scale) along with narrative (i.e., qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBAs can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches with qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study evaluates the accuracy of a decision support system for raters using natural language processing (NLP) and machine learning (ML). METHODS NLP was performed retrospectively on a complete dataset of narrative comments (i.e., text-based feedback given to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large residency program at McMaster University, Canada. Narrative comments were vectorized to quantitative ratings using the bag-of-n-grams technique with three input types: unigrams, bigrams, and trigrams. Supervised machine learning models using linear regression were trained for two outputs: the original ratings and dichotomized ratings (at risk or not). Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database consisted of 7,199 unique direct observation assessments, containing both narrative comments and a rating from 3 to 7 in an imbalanced distribution (3-5: 726 ratings; 6-7: 4,871 ratings). A total of 141 unique raters from five different hospitals and 45 unique residents participated over the course of five academic years. When diagnosing whether a trainee would be rated low (i.e., 1-5) or high (i.e., 6 or 7), accuracy was 87% for trigrams, 86% for bigrams, and 82% for unigrams. All three input types also had better prediction accuracy when using a binary cut (lower or higher) than when predicting performance along the full 7-point scale (50-52%). CONCLUSIONS The ML models can accurately identify underperforming residents via the narrative comments provided in workplace-based assessments. The words generated in WBAs can be a worthy dataset to augment human decisions for educators tasked with processing large volumes of narrative assessments. CLINICALTRIAL N/A
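To make the modelling approach above concrete, here is a minimal Python sketch (not the authors' code) of a bag-of-n-grams pipeline for the dichotomized output. The column names, example comments, and the 5/6 cut point are illustrative assumptions, and logistic regression stands in for the study's linear-regression models since the target here is binary:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical WBA narrative comments with their numeric ratings.
df = pd.DataFrame({
    "comment": ["Led the resuscitation calmly and delegated tasks clearly.",
                "Needed repeated prompting; unsure of drug dosing.",
                "Communicated the plan well; good closed-loop communication.",
                "Lost situational awareness and missed the rhythm change."],
    "rating": [7, 4, 6, 3],
})

# Dichotomize the rating: 1-5 = at risk (1), 6-7 = not at risk (0).
y = (df["rating"] <= 5).astype(int)

# Bag-of-n-grams features; ngram_range=(1, 3) spans the unigram,
# bigram, and trigram input types compared in the study.
vectorizer = CountVectorizer(ngram_range=(1, 3))
X = vectorizer.fit_transform(df["comment"])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Sensitivity, specificity, and accuracy from a confusion matrix
# (computed on the training data here purely for brevity).
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}",
      f"specificity={tn / (tn + fp):.2f}",
      f"accuracy={(tp + tn) / len(y):.2f}")

In practice a held-out test split and handling of the class imbalance noted above would matter far more than this toy shows; the point is only the shape of the pipeline: vectorize comments, train on dichotomized ratings, report confusion-matrix metrics.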


2020 ◽  
Vol 30 (2) ◽  
pp. 89-103
Author(s):  
Marie Brasholt ◽  
Brenda Van den Bergh ◽  
Erinda Bllaca ◽  
Alba Mejía ◽  
Marie My Warborg Larsen ◽  
...  

Introduction: Independent monitoring of places of detention is considered an effective way of preventing torture, but some reports have shown that detainees may face reprisals after engaging with monitors. This pilot study aims to further investigate the nature and extent of such reprisals. Methods: A cross-sectional survey among male prisoners in 4 prisons in Albania and 4 in Honduras was carried out using an interviewer-administered, structured questionnaire, with additional narrative comments collected. Strict ethical guidelines were followed, and follow-up visits took place to detect any sanctions following participation in the study. Results: 170 detainees were invited to participate, of whom 164 accepted. Most were aware of monitoring visits and found them helpful. More than one-third reported that authorities had made special arrangements, such as cleaning and painting, prior to the monitoring visits, and 34% of participants in Albania and 12% in Honduras had felt pressured to act in a specific way towards the monitors. One-fifth had experienced sanctions after the last monitoring visit, most often threats and humiliations. During the follow-up visits, the interviewees reported no incidents following their participation in the study. Discussion: This pilot study has shown that it is possible to collect information about detainees’ experience with monitoring visits through interviews while they are still detained. The fact that reprisals are reported both prior to and following monitoring visits points to the need to improve monitoring methodology to further lower the risk. Further research is needed to better understand the dynamics of the sanctions that take place, with the aim of reaching a deeper understanding of potential preventive measures.


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Eva K. Hennel ◽  
Ulrike Subotic ◽  
Christoph Berendonk ◽  
Daniel Stricker ◽  
Sigrid Harendza ◽  
...  

Abstract Background In medical settings, multisource feedback (MSF) is a recognised method of formative assessment. It collects feedback on a doctor’s performance from several perspectives in the form of questionnaires. Yet, no validated MSF questionnaire has been publicly available in German. Thus, we aimed to develop a German MSF questionnaire based on the CanMEDS roles and to investigate the evidence of its validity. Methods We developed a competency-based MSF questionnaire in German, informed by the literature and expert input. Four sources of validity evidence were investigated: (i) Content was examined based on MSF literature, blueprints of competency, and expert-team discussions. (ii) The response process was supported by analysis of a think-aloud study, narrative comments, “unable to comment” ratings, and evaluation data. (iii) The internal structure was assessed by exploratory factor analysis, and inter-rater reliability by generalisability analysis. Data were collected during two runs of MSF, in which 47 residents were evaluated once (first run) or several times (second and third run) on 81 occasions of MSF. (iv) To investigate consequences, we analysed the residents’ learning goals and their progress as reported via MSF. Results Our resulting MSF questionnaire (MSF-RG) consists of 15 items and one global rating, each rated on a scale and accompanied by a field for narrative comments, together covering a construct of a physician’s competence. Additionally, there are five open questions for further suggestions. Investigation of validity evidence revealed that: (i) the expert group agreed that the content comprehensively addresses clinical competence; (ii) the response processes indicated that the questions are understood as intended and supported the acceptance and usability; (iii) for the second run, factor analysis showed a one-factor solution, a Cronbach’s alpha of 0.951, and an inter-rater reliability of 0.797 with 12 raters; (iv) there are indications that residents benefitted, considering their individual learning goals and based on their ratings reported via MSF itself. Conclusions To support residency training with multisource feedback, we developed a German MSF questionnaire (MSF-RG), which is supported by four sources of validity evidence. This MSF questionnaire may be useful for implementing MSF in residency training in German-speaking regions.
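As a side note on the internal-consistency figure reported in (iii), Cronbach's alpha is straightforward to reproduce from a ratings matrix. The sketch below uses invented data (three completed questionnaires, four items), not the study's; the study's actual matrix would have the 15 scale items as columns:

import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = ratings.shape[1]                          # number of items on the scale
    item_var = ratings.var(axis=0, ddof=1)        # variance of each item across forms
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of per-form total scores
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Three hypothetical completed questionnaires rating four items on a 1-5 scale
# (rows = questionnaires, columns = items).
demo = np.array([[4, 5, 4, 4],
                 [3, 4, 3, 3],
                 [5, 5, 4, 5]])
print(f"Cronbach's alpha = {cronbach_alpha(demo):.3f}")

A high alpha across all items is what one would expect alongside the one-factor solution the authors report from their exploratory factor analysis.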


10.2196/18374 ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. e18374
Author(s):  
Stuart McLennan

Background Previous research internationally has only analyzed publicly available feedback on physician rating websites (PRWs). However, it appears that many PRWs are not publishing all the feedback they receive. Analysis of this rejected feedback could provide a better understanding of the types of feedback that are currently not published and whether this is appropriate. Objective The aim of this study was to examine (1) the number of patient feedback items rejected by the Swiss PRW Medicosearch, (2) the evaluation tendencies of the rejected feedback, and (3) the types of issues raised in the rejected narrative comments. Methods The Swiss PRW Medicosearch provided all the feedback it had rejected between September 16, 2008, and September 22, 2017. The feedback was analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues. Results Between September 16, 2008, and September 22, 2017, Medicosearch rejected a total of 2352 patient feedback items. The majority of the rejected feedback (1754/2352, 74.6%) had narrative comments in German. However, 11.9% (279/2352) of the rejected feedback provided only a quantitative rating with no narrative comment. Overall, 25% (588/2352) of the rejected feedback was positive, 18.7% (440/2352) was neutral, and 56% (1316/2352) was negative. The average rating of the rejected feedback was 2.8 (SD 1.4). In total, 44 subcategories addressing the physician (n=20), staff (n=9), and practice (n=15) were identified, and 3804 distinct issues were identified within these subcategories; 75% (2854/3804) of the issues related to the physician, 6.4% (242/3804) to the staff, and 18.6% (708/3804) to the practice. Frequently mentioned issues in the rejected feedback included (1) satisfaction with treatment (533/1903, 28%); (2) the overall assessment of the physician (392/1903, 20.6%); (3) recommending the physician (345/1903, 18.1%); (4) the physician’s communication (261/1903, 13.7%); (5) the physician’s caring attitude (220/1903, 11.6%); and (6) the physician’s friendliness (203/1903, 10.6%). Conclusions It is unclear why the majority of the feedback was rejected. This is problematic and raises concerns that online patient feedback is being inappropriately manipulated. If online patient feedback is going to be collected, there need to be clear policies and practices about how it is handled. It cannot be left to the whims of PRWs, who may have financial incentives to suppress negative feedback, to decide which feedback is or is not published online. Further research is needed to examine how many PRWs use criteria for determining which feedback is published, what those criteria are, and what measures PRWs are taking to address the manipulation of online patient feedback.



