Quality of YouTube Videos on Laparoscopic Cholecystectomy for Patient Education

2021, Vol 2021, pp. 1-5
Author(s): Joseph N. Hewitt, Joshua G. Kovoor, Christopher D. Ovenden, Gayatri P. Asokan

Background. Surgical patients frequently seek information from digital sources, particularly before common operations such as laparoscopic cholecystectomy (LC). YouTube provides a large amount of free educational content; however, it lacks regulation and peer review. To inform patient education, we evaluated the quality of YouTube videos on LC. Methods. We searched YouTube with the phrase “laparoscopic cholecystectomy.” Two authors independently rated the quality of the first 50 videos retrieved using the JAMA, Health on the Net (HON), and DISCERN scoring systems. Data collected for each video included total views, time since upload, video length, total comments, and percentage positivity (proportion of likes relative to total likes plus dislikes). Interobserver reliability was assessed using an intraclass correlation coefficient (ICC). Associations between quality and video characteristics were tested. Results. Mean video quality scores were poor: 1.9/4 for JAMA, 2.0/5.0 for DISCERN, and 4.9/8.0 for HON. There was good interobserver reliability, with ICCs of 0.78, 0.81, and 0.74, respectively. The median number of views was 21,789 (IQR 3,000–61,690). Videos were mostly published by private corporations. No video characteristic demonstrated a significant association with video quality. Conclusion. YouTube videos on LC are of low quality and insufficient for patient education. Treating surgeons should advise patients of the website’s limitations and direct them to trusted sources of information.
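The “percentage positivity” metric defined above is a simple ratio of likes to total ratings; a minimal sketch (the function name is our own):

```python
def percentage_positivity(likes: int, dislikes: int) -> float:
    """Proportion of likes relative to total likes plus dislikes,
    expressed as a percentage. Returns 0.0 when a video has no ratings."""
    total = likes + dislikes
    return 100.0 * likes / total if total else 0.0
```

For example, a video with 90 likes and 10 dislikes scores 90.0%.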

Author(s): Rithvik Reddy, Horace Cheng, Nicholas Jufas, Nirmal Patel

Objectives: The objective of this study was to assess the quality of the most popular cholesteatoma videos on YouTube using recognized scoring systems and to determine whether video quality metrics correlated with video popularity based on likes and views. Design: Cross-sectional survey of available data. Setting: Metadata acquisition via YouTube searches using Australian IP addresses. Participants: Three independent neuro-otologists scoring videos. Main outcome measures: Each video was viewed and scored by three independent assessors using both a novel tool to score the usefulness of the video and the validated DISCERN scoring tool. Popularity metrics were analyzed and compared to video quality. Results: A total of 90 YouTube videos were analyzed, with an average of 55,292 views, 271 likes, and 22 dislikes per video. Inter-rater agreement was moderate, with a Fleiss kappa of 0.42 (P < 0.01) for the novel cholesteatoma scoring tool, while the intraclass correlation coefficient for DISCERN scores was 0.78 (95% CI 0.58–0.90), indicating good reliability. Overall video quality was poor, with higher DISCERN scores found in videos uploaded by academic institutions. Conclusions: Informative YouTube videos on cholesteatoma are of overall poor quality. Unclassified sources and more dislikes correlated with poorer video quality. Given the increasing number of patients turning to the internet for information about their health conditions, otology and otolaryngology societies should be encouraged to publish high-quality YouTube videos on cholesteatoma and other ear conditions.
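The Fleiss kappa used above for multi-rater agreement can be computed directly from a table of per-item category counts; a self-contained sketch (not the authors' code):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement among a fixed number of raters.

    `ratings` is a list of per-item count vectors: ratings[i][j] is the
    number of raters who assigned item i to category j. Every item must
    be rated by the same number of raters.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])

    # Per-item observed agreement among rater pairs.
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_items) / n_items

    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in ratings) for j in range(n_cats)]
    p_cat = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_cat)

    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields kappa = 1; values around 0.4–0.6 are conventionally read as moderate agreement, consistent with the 0.42 reported above.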


2020
Author(s): Muhammet Arif Özbek, Oguz Baran, Şevket Evran, Ahmet Kayhan, Tahsin Saygı, ...

Abstract Background: Most people experience low back pain at least once in their lifetime. With advancing technology, people have increasingly consulted the internet about their diagnoses over the last 20 years. This study aims to evaluate the accuracy and reliability of YouTube videos on low back pain. Methods: The keyword “Low Back Pain” was used in our YouTube search. The first 50 videos returned were evaluated using the JAMA, DISCERN, and GQS scoring systems. The scores of each video and the correlations between the scoring systems were statistically analyzed. Results: The average length of the 50 analyzed videos was 7.57 minutes (range 0.34–48.23 minutes), and the average daily view count was 331.14. Overall, video quality was found to be poor: on average, the JAMA score was 1.64, the DISCERN score 1.63, and the GQS score 1.93. The most common videos on the subject were those produced by TV programs. Videos by health information websites and by hospitals, doctors, and educational institutions, while still below the threshold value, gave higher-quality information than videos from other sources. Conclusion: YouTube videos on low back pain are of low quality, and most are created by unreliable sources. Therefore, such videos should not be recommended as patient education tools for low back pain. An important step in disseminating correct medical information to the public would be a platform where the accuracy and quality of medical information are evaluated by medical experts.


Breast Care, 2021, pp. 1-11
Author(s): Alvaro Manuel Rodriguez Rodriguez, María Blanco-Diaz, Pedro Lopez Diaz, Marta de la Fuente Costa, Lirios Dueñas, ...

Background: The prolonged immobilization suggested after breast cancer (BC) surgery causes morbidity. Patients search the Internet, especially social networks, for recommended exercises. Objective: The aim of this observational study was to assess the quality of YouTube videos, accessible to any patient, about exercises after BC surgery. Methods: A systematic search was performed on YouTube. One hundred and fifty videos were selected and analyzed. Two statistical analyses were conducted based on machine-learning techniques. Videos were classified as “Relevant” and “Non-Relevant” using principal component analysis models. Popularity was evaluated by the Video Power Index (VPI); informational quality and accuracy were measured using the DISCERN scale, the HONcode, and the Global Quality Scale (GQS). Scoring criteria for exercises were established according to the exercises recommended by the Oncology Section of the American Physical Therapy Association (APTA). Interobserver agreement and individual correlations were statistically examined. Results: DISCERN scored a mean of 50.97 (standard deviation [SD] 19.19), HONcode 78.30 (11.02), and GQS 3.49 (0.74). The average number of views was 53,963 (SD 67,376), mean duration was 9:42 min (9:15), mean days online was 2,158 (922), mean view ratio was 27.14 (30.24), mean likes was 245 (320.5), mean dislikes was 13.4 (14.2), and mean VPI was 93.48 (5.42). Conclusion: The quality of YouTube videos of recommended exercises after BC surgery is high and can be a translational activity to improve patients’ behavior. Health institutions and NGOs, which have higher popularity levels than academic institutions, should consider this information when implementing new policies focused on video quality, which can contribute to adaptive behavior in patients.
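The Video Power Index used here (and in several of the abstracts below) to quantify popularity is not spelled out in the abstract. A sketch of the definition commonly used in this literature, combining the like ratio with views per day; this formula is an assumption on our part, not the authors' stated method:

```python
def video_power_index(likes: int, dislikes: int, views: int, days_online: int) -> float:
    """Video Power Index as commonly defined in this literature:
    like ratio (likes as a percentage of likes + dislikes) multiplied
    by view ratio (views per day), divided by 100. The exact formula
    used by these authors is not given in the abstract."""
    like_ratio = 100.0 * likes / (likes + dislikes)
    view_ratio = views / days_online
    return like_ratio * view_ratio / 100.0
```

Under this definition, a video with 90 likes, 10 dislikes, and 1,000 views over 10 days would have a VPI of 90.0.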


2021, pp. 152483992098479
Author(s): Joseph G. L. Lee, Mahdi Sesay, Paula A. Acevedo, Zachary A. Chichester, Beth H. Chaney

The quality of patient education materials is an important issue for health educators, clinicians, and community health workers. We describe a challenge in achieving reliable scores among coders when using the Patient Education Materials Assessment Tool (PEMAT) to evaluate farmworker health materials in spring 2020. Four coders were unable to achieve reliability after three attempts at coding calibration. Further investigation identified improvements to the PEMAT codebook and evidence of the difficulty of achieving traditional interrater reliability in the form of Krippendorff’s alpha. Our solution was to use multiple raters and average their ratings to achieve an acceptable score with an intraclass correlation coefficient. Practitioners using the PEMAT to evaluate materials should consider averaging the scores of multiple raters, as PEMAT results otherwise may be highly sensitive to who is doing the rating. Not doing so may inadvertently result in the use of suboptimal patient education materials.
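The proposed fix, averaging the scores of multiple raters, amounts to a per-material mean; a minimal sketch with hypothetical rater labels:

```python
from statistics import mean

def averaged_scores(scores_by_rater):
    """Average each material's PEMAT score across raters.

    `scores_by_rater` maps a rater label to a list of scores (one per
    material, in the same order for every rater). Returns the
    per-material mean scores, which are less sensitive to any single
    rater's idiosyncrasies than individual ratings.
    """
    per_material = zip(*scores_by_rater.values())
    return [mean(material) for material in per_material]
```

For example, three raters scoring two materials as 80/60, 70/70, and 90/50 would yield averaged scores of 80 and 60.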


2021, Vol 108 (Supplement_9)
Author(s): Aya Musbahi, Arul Immanuel

Abstract Background Studies of patient literature, particularly online video literature, are few in all fields. Scoring systems for video materials, such as the validated PEMAT (Patient Education Materials Assessment Tool), have been used previously to assess video patient literature. The aim of this study was to use the PEMAT to evaluate the quality of YouTube patient literature on oesophageal cancer and to examine the inter-rater reliability between lay and medical scorers. Methods A YouTube search was performed in April 2021 using the search terms “oesophageal cancer”, “esophageal cancer”, and “gullet cancer”. Characteristic data collected included language, ratings (thumbs up), type of video, country of origin, presence of advertising, and intended audience. The PEMAT, a validated instrument for rating patient video material, was used. A score of 70% is considered acceptable in the actionability and understandability domains. Cohen’s kappa coefficient was used to test inter-rater reliability between two lay raters and between two medical raters. Results Seven videos were rated as understandable by the medical raters on average, and 13 were rated understandable by the lay raters on average. Only two videos achieved the best-case scenario in which both medical raters, rather than their average, rated the video as understandable. Twelve videos were rated understandable by both lay raters. Actionability was rated more poorly, with only two videos rated actionable on average by the medical raters and seven by the lay raters. Conclusions YouTube videos on oesophageal cancer score poorly in terms of actionability and understandability.


2020, Vol 08 (05), pp. E598-E606
Author(s): Dhruvil Radadiya, Alexei Gonzalez-Estrada, Jorge Emilio Lira-Vera, Katia Lizarrga-Torres, Shayan Sinha Mahapatra, ...

Abstract Background and study aims Colonoscopy is an effective tool to prevent colorectal cancer. Social media has emerged as a source of medical information for patients, and YouTube (a video-sharing website) is the most popular source of informative videos. We therefore aimed to assess the educational quality of colonoscopy videos available on YouTube. Methods A YouTube search using the keyword “colonoscopy” yielded 429 videos, of which 255 met the inclusion criteria. The Colonoscopy Data Quality Score (C-DQS) was created to rate the quality of the videos (–10 to +40 points) based on a colonoscopy education video available on the American Society for Gastrointestinal Endoscopy (ASGE) website. Each video was scored independently by six blinded reviewers using the C-DQS. The Global Quality Score (GQS) was used for score validation. The intraclass correlation coefficient (ICC) was used to assess the similarity of scores among reviewers. Results Professional societies had the highest number of videos (44.3 %). Videos from professional societies (6.94) and media (6.87) had significantly higher mean C-DQS than those from alternative medicine providers (1.19), companies (1.16), and patients (2.60) (P < 0.05). The mean C-DQS of videos from healthcare providers (4.40) was not statistically different from other sources. There was a high degree of agreement among reviewers for videos from all sources (ICC = 0.934; P < 0.001). Discussion YouTube videos are a poor source of information on colonoscopy. Professional societies and media are better sources of quality information for patient education on colonoscopy. The medical community may need to engage actively in enriching the quality of educational material available on YouTube.


2021

Purpose: YouTube™ is one of the most popular social media platforms on the internet, and patients with chronic disease frequently use it to seek treatment options. In this study, we aimed to evaluate the quality of YouTube videos about erectile dysfunction. Materials & methods: The terms “erectile dysfunction treatment”, “erectile dysfunction surgery”, and “cure erectile dysfunction” were entered into the YouTube search bar. A total of 56 videos were included in the study. Each video’s view count; upload date; like, dislike, and comment counts; uploader qualifications; length; and content were recorded. Video Power Index (VPI), Quality Criteria for Consumer Health Information (DISCERN), and Journal of the American Medical Association (JAMA) scores were determined. Results: Thirty-two (57.1%) videos consisted of real images, and 24 (42.9%) contained animated images. Twenty-four (42.9%) videos were uploaded by physicians, and 32 (57.1%) by non-physicians. The mean like count of the videos was 5,307 ± 17,618, the mean dislike count was 560.07 ± 1,548.07, and the mean comment count was 235 ± 373. The mean VPI of the videos was 81.19 ± 21.19, the mean DISCERN score was 30.5 ± 8.1, and the mean JAMA score was 1.23 ± 0.55. Overall quality was very poor in 24 (42.9%) of the examined videos, poor in 21 (37.5%), average in 10 (17.9%), and good in one (1.8%). Conclusion: The overall quality of YouTube content on erectile dysfunction is not sufficient to provide reliable information for patients. Physicians should warn patients about the limitations of YouTube and direct them toward more appropriate sources of information.


2021
Author(s): Jessica Westwood, Joshua Li Saw Hee, Gibran Farook Butt

Abstract Purpose: Healthcare information is easily accessible on YouTube; however, it is unregulated and its quality may vary considerably. This study characterised and evaluated YouTube content on corneal transplantation surgery. Methods: YouTube was searched using ‘corneal transplant’ and variations for penetrating and lamellar transplants. The results were deduplicated and screened for inclusion by two independent reviewers. A modified DISCERN tool was used by two observers independently to evaluate the quality of each video. Discrepancies were resolved through discussion and, where necessary, a third adjudicator. Results: 53 videos were included in this study, and the mean overall DISCERN score was 1.91 out of 5 (SD = 0.90). Videos scored highest on relevance to corneal transplantation (mean 3.89) and lowest on explaining which patients are unsuitable (mean 1.00) and offering sources of information (mean 1.11). The video with the highest viewer engagement (VPI) was a patient vlogging their experience of the procedure. Conclusion: The quality of YouTube content is variable, and the lack of clarity over subtypes of corneal transplant can be confusing for patients. There is considerable scope to improve the use of visual aids, animations, and diagrams within videos to supplement verbal information given in clinic. Essential components needed to make an informed decision about corneal transplantation are lacking in many videos; videos may be a useful supplement but should not be relied on for comprehensive material.


2019, Vol 2019, pp. 1-6
Author(s): Qasim Salimi, Thayer Nasereddin, Neel Patel, Reza Hashemipour, Augustine Tawadros, ...

Goals. The goal of this study was to develop an objective and detailed scoring system to assess the quality of bowel preparation. Background. The quality of bowel preparation impacts the success of colonoscopy. We developed a new bowel preparation scoring system, the New Jersey Bowel Preparation Scale (NJBPS), and compared it with existing systems, the Boston Bowel Preparation Scale (BBPS) and the Ottawa Bowel Preparation Scale (OBPS), which are limited by a lack of detail and objectivity. Methods. This was a single-center, prospective, dual-observer study performed at Rutgers New Jersey Medical School University Hospital. Patients at medium risk for colorectal cancer undergoing outpatient screening colonoscopy were enrolled, and their bowel preparation was assessed separately by an attending and a fellow using each of the bowel preparation scoring systems. Results. 98 patients were analyzed, of whom 59% were female. Most of the patient population was African American (65%) or Hispanic (25%). The average patient age was 60 years. Analysis using SPSS software yielded intraclass correlation coefficient values between attending and fellow scores for each scale: the NJBPS had the highest value at 0.988, while the BBPS and OBPS had values of 0.883 and 0.894, respectively. Limitations. Single-center study. Conclusions. The NJBPS and BBPS scores demonstrated statistically significant agreement with each other. Overall, there was good interobserver agreement across all three scoring systems when comparing attendings with fellows. However, the NJBPS showed the strongest correlation.


2020
Author(s): Connie Dodds, Andrew Blaikie, Sirjhun Patel

BACKGROUND The importance of red reflex testing as part of neonatal screening is recognised worldwide. The technique is ideally suited to online video-based instruction; however, the quality of online teaching material is unknown. OBJECTIVE We aimed to objectively score the quality of red reflex demonstration videos in order to determine their pedagogical effectiveness and to assess the relationship between search engine ranking and quality. METHODS An internet search was performed on 12 February 2020 using keywords related to red reflex examination on the search platforms YouTube, Google, and DuckDuckGo. Video characteristics were recorded, and popularity was determined by calculating a Video Power Index (VPI) score. The videos were assessed by two medical students and two ophthalmologists using four scoring systems: Red Reflex Specific (RRS), Understandability & Attractiveness (U&A), Reliability (JAMA), and Global Quality Score (GQS). A Total Quality Score (TQS) was created from the RRS, U&A, and JAMA scores as a measure of overall quality. Videos were categorised by source and by usefulness. Correlations between audience interaction parameters, video quality, and ranking position on the three search engines were investigated using Spearman’s rho. RESULTS Of the 625 videos screened, 14 were eligible for inclusion. Overall, videos had a mean TQS of 24.3/50 (range: 9–41), with six videos considered “educationally useful” based on the GQS. The main video source was physicians (43%), with videos uploaded by academics being of the greatest overall quality (P = .023). There was a positive correlation between TQS and ranking position of videos on Google (rs = 0.569, P = .034) but not on the other platforms. CONCLUSIONS The limited red reflex training videos currently available on the internet are generally poor and of variable quality. We recommend Google as a suitable platform for finding better-quality red reflex videos. These results may help ensure that video teaching is performed optimally and highlight the need for improved worldwide access to videos of greater accuracy and reliability.

