P-OGC50 An Objective Evaluation of Youtube Videos on Oesophageal Cancer using the PEMAT Score

2021 ◽  
Vol 108 (Supplement_9) ◽  
Author(s):  
Aya Musbahi ◽  
Arul Immanuel

Abstract Background Studies of patient literature, particularly online video literature, are few across all fields. Scoring systems for video materials, such as the validated PEMAT (Patient Education Materials Assessment Tool), have been used before to assess video patient literature. The aim of this study was to use the PEMAT to evaluate the quality of YouTube patient literature on oesophageal cancer and to assess inter-rater reliability between lay and medical scorers. Methods A YouTube search was performed in April 2021 using the search terms “oesophageal cancer”, “esophageal cancer” and “gullet cancer”. Characteristic data collected included language, ratings (thumbs up), type of video, country of origin, presence of advertising and intended audience. The PEMAT, a validated instrument for rating patient video material, was used; a score of 70% is considered acceptable in the actionability and understandability domains. Cohen’s kappa coefficient was used to test inter-rater reliability between two lay raters and between two medical raters. Results Seven videos were rated as understandable by the medical raters on average, and 13 by the lay raters on average. Only two videos achieved the best-case scenario in which both medical raters, rather than their average, rated them as understandable. Twelve videos were rated as understandable by both lay raters. Actionability was rated more poorly, with only two videos rated actionable on average by the medical raters and seven on average by the lay raters. Conclusions YouTube videos on oesophageal cancer score poorly in terms of actionability and understandability.
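Cohen's kappa, used above for inter-rater reliability, compares observed agreement with the agreement expected by chance from each rater's marginal label frequencies. A minimal two-rater sketch in Python (the labels in the comment are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal labels)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative ratings, e.g. "U" = understandable, "N" = not understandable:
# cohens_kappa(["U", "U", "N", "N"], ["U", "N", "N", "N"]) gives 0.5
```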

Author(s):  
Rithvik Reddy ◽  
Horace Cheng ◽  
Nicholas Jufas ◽  
Nirmal Patel

Objectives: To assess the quality of the most popular cholesteatoma videos on YouTube using recognized scoring systems, and to determine whether video quality metrics correlate with video popularity based on likes and views. Design: Cross-sectional survey of available data. Setting: Metadata acquisition via YouTube searches from Australian IP addresses. Participants: Three independent neuro-otologists scoring the videos. Main outcome measures: Each video was viewed and scored by three independent assessors using both a novel tool rating the usefulness of the video and the validated DISCERN scoring tool. Quality scores were compared with video popularity metrics. Results: A total of 90 YouTube videos were analyzed, with an average of 55,292 views, 271 likes and 22 dislikes per video. Inter-rater agreement was moderate for the novel cholesteatoma scoring tool (Fleiss' kappa 0.42, p < 0.01), and the intraclass correlation coefficient for DISCERN scores was 0.78 (95% CI 0.58 to 0.90), indicating good reliability. Overall video quality was poor, with higher DISCERN scores found in videos uploaded by academic institutions. Conclusions: Informational videos on YouTube about cholesteatoma are of overall poor quality. Unclassified sources and more dislikes were associated with lower video quality. Given the increasing number of patients turning to the internet for information regarding their health conditions, otology and otolaryngology societies should be encouraged to publish high-quality YouTube videos on cholesteatoma and other ear conditions.
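Fleiss' kappa, reported above, generalises Cohen's kappa to three or more raters. A minimal sketch from the standard definition (the rating matrix in the comment is illustrative, not the study's data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an N-subjects x k-categories matrix of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)
    n = sum(counts[0])  # raters per subject
    k = len(counts[0])
    # Per-subject agreement P_i, then its mean over subjects.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P) / N
    # Category proportions p_j and chance agreement P_e.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Illustrative: 3 raters, 4 videos, 2 categories (e.g. useful / not useful):
# fleiss_kappa([[3, 0], [3, 0], [2, 1], [0, 3]]) gives 0.625
```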


Rheumatology ◽  
2020 ◽  
Vol 59 (Supplement_2) ◽  
Author(s):  
Kieran Murray ◽  
Timothy Murray ◽  
Candice Low ◽  
Anna O'Rourke ◽  
Douglas J Veale

Abstract Background Osteoarthritis is the most common cause of disability in people over 65 years old. The readability of online osteoarthritis information has never been assessed. A 2003 study found the quality of online osteoarthritis information to be poor. This study reviews the quality of online information regarding osteoarthritis in 2018 using three validated scoring systems. Readability is reviewed for the first time, again using three validated tools. Methods The term osteoarthritis was searched across the three most popular English-language search engines. The first 25 pages from each search engine were analysed. Duplicate pages, websites featuring paid advertisements, inaccessible pages (behind a paywall, or unavailable for geographical reasons) and non-text pages were excluded. Readability was measured using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL) and Gunning-Fog Index (GFI). Website quality was scored using the Journal of the American Medical Association (JAMA) benchmark criteria and the DISCERN criteria. Presence or absence of HONcode certification, age of content, content producer and author characteristics were noted. Results 37 unique websites were suitable for analysis. Readability varied by assessment tool from 8th- to 12th-grade level, compared with the recommended 7th- to 8th-grade level. One website (2.7%) met all four JAMA criteria. Mean DISCERN quality of information for OA websites was “fair”, comparing favourably with the “poor” grading of the 2003 study. HONcode-endorsed websites (43.2%) were of statistically significantly higher quality. Conclusion The quality of online health information for OA is “fair”. 2.7% of websites met the JAMA benchmark criteria for quality. Readability was at or above the recommended difficulty level. HONcode certification was indicative of higher quality, but not readability. Disclosures K. Murray None. T. Murray None. C. Low None. A. O'Rourke None. D.J. Veale None.
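The three readability tools named in the Methods are simple functions of word, sentence and syllable counts. A sketch using the standard published formulas (the counts passed in any example are illustrative, not taken from the study's corpus):

```python
def readability(words, sentences, syllables, complex_words):
    """FRES, FKGL and GFI from basic text counts.

    complex_words = number of words with three or more syllables
    (the 'complex word' count used by the Gunning-Fog Index).
    """
    wps = words / sentences   # average words per sentence
    spw = syllables / words   # average syllables per word
    # Flesch Reading Ease Score: higher = easier to read.
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    # Flesch-Kincaid Grade Level: approximate US school grade.
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    # Gunning-Fog Index: also expressed as a grade level.
    gfi = 0.4 * (wps + 100 * complex_words / words)
    return fres, fkgl, gfi
```

A grade level above 8 on FKGL or GFI exceeds the commonly recommended 7th- to 8th-grade reading level for patient materials.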


2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
Aleksander Dawidziuk ◽  
Rishikesh Gandhewar ◽  
Kamal Shah ◽  
Kalyan Vemulapalli

Abstract Aims To evaluate the understandability, actionability and quality of perioperative patient information conveyed by YouTube videos covering the safety of elective surgery during the COVID-19 pandemic. Methods The YouTube search strategy was optimised using a combination of the terms “COVID”, “safety” and “surgery”. Each video was screened by two independent reviewers. The search was conducted on 9 January 2021. The understandability and actionability of videos were evaluated using the validated Patient Education Materials Assessment Tool (PEMAT). The quality of perioperative patient information was determined with a novel 4-point checklist based on recommendations by the National Institute for Health and Care Excellence. The effect of video type on PEMAT and quality scores was assessed with the Kruskal-Wallis test. Scores were correlated with video metrics using Spearman's rank correlation. Results The primary search revealed 594 videos. After deduplication and exclusions, 108 materials were analysed. The majority of videos (n = 89) originated from the USA, with only 4 produced in the UK. Hospital-produced videos had the highest understandability scores [median (IQR): 83.33% (18.40%)] and patient testimonies the lowest [55.91% (33.24%)] (p = 0.002). Hospital materials were also the most actionable [2.25 (2.40)], with news reports scoring lowest [0.0 (0.8)] (p = 0.049). Social distancing, preoperative COVID-19 testing, and wearing face masks were mentioned in 46, 41 and 48 videos respectively. Only 9 materials recommended self-isolation before surgery. There was no significant correlation between video metrics (e.g., length) and scores. Conclusions Short UK-specific videos should be created to outline accurate patient instructions for elective surgery during the COVID-19 pandemic and to provide reassurance to help reduce the surgical backlog.
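Spearman's rank correlation, used above to relate scores to video metrics, is the Pearson correlation of the rank-transformed data. A minimal dependency-free sketch in which tied values share their average rank:

```python
def _ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j to cover the whole run of equal values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5
```

Any strictly increasing relationship (e.g. longer videos always scoring higher) yields rho = 1 regardless of the shape of the curve.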


2020 ◽  
pp. archdischild-2019-318664
Author(s):  
Calvin Heal ◽  
Sarah Cotterill ◽  
Andrew Graeme Rowland ◽  
Natalie Garratt ◽  
Tony Long ◽  
...  

Objective The Paediatric Admission Guidance in the Emergency Department (PAGE) score is an assessment tool currently in development that helps predict hospital admission using components including patient characteristics, vital signs (heart rate, temperature, respiratory rate and oxygen saturation) and clinical features (eg, breathing, behaviour and nurse judgement). It aims to assist safe admission and discharge decision making in environments such as emergency departments and urgent care centres. Determining the inter-rater reliability of scoring tools such as PAGE can be difficult. The aim of this study was to determine the inter-rater reliability of seven clinical components of the PAGE score. Design Inter-rater reliability was measured by each patient having their clinical components recorded by two separate raters in succession. The first rater was the assessing nurse, and the second rater was a research nurse. Setting Two emergency departments and one urgent care centre in the North West of England. Measurements were recorded over 1 week; data were collected for half a day at each of the three sites. Patients A convenience sample of 90 paediatric attendees (aged 0–16 years), 30 from each of the three sites. Main outcome measures Two independent measures for each child were compared using kappa or prevalence-adjusted bias-adjusted kappa (PABAK). Bland-Altman plots were also constructed for continuous measurements. Results Inter-rater reliability ranged from moderate (weighted kappa 0.62, 95% CI 0.48 to 0.74) to very good (weighted kappa 0.98, 95% CI 0.95 to 0.99) for all measurements except ‘nurse judgement’, for which agreement was fair (PABAK 0.30, 95% CI 0.09 to 0.50). Complete information from both raters on all the clinical components of the PAGE score was available for 73 children (81%). These total scores showed good inter-rater reliability (weighted kappa 0.64, 95% CI 0.53 to 0.74). Conclusions Our findings suggest different nurses would demonstrate good inter-rater reliability when collecting the acute assessments needed for the PAGE score, reinforcing the applicability of the tool. The importance of determining reliability in scoring systems is highlighted and a suitable methodology is presented.
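PABAK, used above for the ‘nurse judgement’ component, replaces the chance-agreement term of Cohen's kappa by fixing both marginals at 0.5, so it reduces to a function of observed agreement alone. A one-function sketch (the counts in the comment are illustrative):

```python
def pabak(agreements, total):
    """Prevalence-adjusted bias-adjusted kappa for two raters.

    PABAK = 2 * p_o - 1, where p_o is the observed proportion of
    agreement; unlike plain kappa it is not depressed when one
    category dominates (high prevalence) in the sample.
    """
    p_o = agreements / total
    return 2 * p_o - 1

# Illustrative: raters agreeing on 80 of 90 children gives PABAK of about 0.78
```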


Author(s):  
Vinaya Manchaiah ◽  
Monica L. Bellon-Harn ◽  
Marcella Michaels ◽  
Vinay Swarnalatha Nagaraj ◽  
Eldré W. Beukes

Abstract Background Increasingly, people access Internet-based health information about various chronic conditions, including hearing loss and hearing aids. YouTube is one media source that has gained much popularity in recent years. Purpose The current study examines the source, content, understandability, and actionability of YouTube videos related to hearing aids. Research Design Cross-sectional design, analyzing the videos at a single point in time. Study Sample The 100 most frequently viewed videos on YouTube. Intervention Not applicable. Data Collection and Analysis The 100 most-viewed English-language videos targeting individuals seeking information regarding hearing aids were identified and manually coded. Data collection included general information about the video (e.g., source, title, authorship, date of upload, duration of video), popularity-driven measures (e.g., number of views, likes, dislikes), and the video source (consumer, professional, or media). The video content was analyzed to examine what pertinent information the videos contained relative to a predetermined fact sheet. Understandability and actionability of the videos were examined using the Patient Education Materials Assessment Tool for Audiovisual Materials. Results Of the 100 most-viewed videos, 11 were consumer-based, 80 were created by professionals, and the remaining 9 were media-based. General information about hearing aids, hearing aid types, and handling and maintenance of hearing aids were the most frequently discussed content categories, with over 50% of all videos commenting on these areas. Differences were noted between source types in several content categories. The overall understandability score for videos from all sources was 74%, which was considered adequate; however, the actionability score for all the videos was 68%, which is considered inadequate. Conclusion YouTube videos about hearing aids focused on a range of issues, and some differences were found between source types. The poor actionability of these videos may result in incongruous consumer actions. The content and quality of the information in hearing aid YouTube videos need to be improved with input from professionals.
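PEMAT domain scores like the 74% and 68% reported above are computed as the percentage of applicable items scored "agree", with not-applicable items excluded from the denominator. A minimal sketch (the example ratings in the comment are illustrative, not the study's data):

```python
def pemat_score(item_ratings):
    """PEMAT domain score as a percentage.

    item_ratings: list with 1 = agree, 0 = disagree, None = not
    applicable. N/A items are dropped from the denominator, so the
    score is agree-count / applicable-count * 100.
    """
    applicable = [r for r in item_ratings if r is not None]
    return 100 * sum(applicable) / len(applicable)

# Illustrative: 7 of 9 applicable items agreed, 1 item N/A,
# gives about 77.8%, above the common 70% adequacy threshold.
```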


2021 ◽  
Vol 10 (4) ◽  
pp. 181-186
Author(s):  
Cem Yener ◽  
Sinan Ates

Aim: Non-invasive prenatal testing is a method that determines the risk of a fetus being born with certain genetic abnormalities. In this study, we aimed to examine the quality of information on YouTube about non-invasive prenatal testing. Methods: The term "Non-invasive prenatal testing" was entered in the YouTube search bar on May 1, 2021, and the 50 YouTube videos on non-invasive prenatal testing with the highest numbers of views were recorded, after excluding non-English videos, duplicate videos and irrelevant videos. The length of the videos, likes, and dislikes were recorded. Videos were evaluated by two obstetricians. A questionnaire consisting of 9 dichotomous questions was used to assess whether a video provided adequate information about non-invasive prenatal testing. In addition, video quality was evaluated with the Global Quality Scale, the Patient Education Materials Assessment Tool and the Journal of the American Medical Association benchmark criteria. Results: The mean Global Quality Scale score was 2.96±0.62. Most videos answered the questions ‘What is non-invasive prenatal testing?’ (94%) and ‘How is non-invasive prenatal testing done?’ (82%). However, there was a lack of information about the limitations of non-invasive prenatal testing in certain situations (only 16% of videos addressed its limitations). Three (6%) of the videos contained misinformation. The mean Patient Education Materials Assessment Tool value was 72% for understandability and 58% for actionability. The mean Journal of the American Medical Association benchmark criteria score was 1.4±0.8. Conclusion: The videos posted about non-invasive prenatal testing on YouTube were of poor to moderate quality. If the quality of the videos improves, patients can obtain sufficient and accurate information about non-invasive prenatal testing, especially during the pandemic.
Keywords: health information, prenatal diagnosis, online systems
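The JAMA benchmark score used above is a simple 0-4 count of criteria met. A sketch of that checklist (the criterion flags passed in are illustrative inputs, not study data):

```python
def jama_score(authorship, attribution, disclosure, currency):
    """JAMA benchmark score: one point per criterion met (0 to 4).

    The four benchmarks are authorship (authors and credentials),
    attribution (sources and references listed), disclosure
    (ownership, sponsorship, conflicts of interest) and currency
    (dates of posting and updating).
    """
    return sum(map(bool, (authorship, attribution, disclosure, currency)))

# A video naming its authors and showing an upload date, but citing no
# sources and disclosing nothing, scores 2 of 4.
```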


2021 ◽  
Vol 2021 ◽  
pp. 1-5
Author(s):  
Joseph N. Hewitt ◽  
Joshua G. Kovoor ◽  
Christopher D. Ovenden ◽  
Gayatri P. Asokan

Background. Surgical patients frequently seek information from digital sources, particularly before common operations such as laparoscopic cholecystectomy (LC). YouTube provides a large amount of free educational content; however, it lacks regulation and peer review. To inform patient education, we evaluated the quality of YouTube videos on LC. Methods. We searched YouTube with the phrase “laparoscopic cholecystectomy.” Two authors independently rated the quality of the first 50 videos retrieved using the JAMA, Health on the Net (HON), and DISCERN scoring systems. Data collected for each video included total views, time since upload, video length, total comments, and percentage positivity (the proportion of likes relative to total likes plus dislikes). Interobserver reliability was assessed using an intraclass correlation coefficient (ICC). The association between quality and video characteristics was tested. Results. Mean video quality scores were poor: 1.9/4 for JAMA, 2.0/5.0 for DISCERN, and 4.9/8.0 for HON. There was good interobserver reliability, with ICCs of 0.78, 0.81, and 0.74, respectively. The median number of views was 21,789 (IQR 3,000–61,690). Videos were mostly published by private corporations. No video characteristic demonstrated a significant association with video quality. Conclusion. YouTube videos on LC are of low quality and insufficient for patient education. Treating surgeons should advise patients of the website’s limitations and direct them to trusted sources of information.
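The percentage-positivity metric, as defined in the Methods, can be sketched directly from its definition (the like/dislike counts in the comment are illustrative):

```python
def percentage_positivity(likes, dislikes):
    """Proportion of likes relative to total likes plus dislikes, as a %."""
    return 100 * likes / (likes + dislikes)

# Illustrative: a video with 90 likes and 10 dislikes is 90% positive.
```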

