The Readability of Patient Education Materials Pertaining to Gastrointestinal Procedures

2021 · Vol 2021 · pp. 1-6
Author(s): Mohammad S. Nawaz, Laura E. McDermott, Savanna Thor

Introduction. Due to the ubiquity and ease of access of the Internet, patients are able to access online health information more easily than ever. The American Medical Association recommends that patient education materials be targeted at or below the 6th grade level in order to accommodate a wider audience. In this study, we evaluate the difficulty of educational materials pertaining to common GI procedures by analyzing the readability of online education materials for colonoscopy, flexible sigmoidoscopy, and esophagogastroduodenoscopy (EGD). Methods. A Google search was performed using the keywords “colonoscopy,” “sigmoidoscopy,” and “EGD,” with “patient information” appended to each search term. The texts from a total of 18 websites, 6 for each procedure, were then saved. Each text was also subdivided into “Introduction,” “Preparation,” “Complications,” and, if available, “Alternatives.” Furthermore, medical terminology that was properly explained, proper nouns, medication names, and copyright text were removed in order to prevent inflation of the difficulty scores. Five validated readability tests were used to analyze each text and its subsections: Coleman-Liau, New Dale-Chall, Flesch-Kincaid, Gunning Fog, and SMOG. Results. Materials on colonoscopy, flexible sigmoidoscopy, and EGD had median readability grades of 9.7, 10.2, and 11.0, respectively. Analysis of the subsections revealed that the “Alternatives” subsection was the most difficult to comprehend, with a readability score of 11.4, whereas the “Introduction” subsection was the easiest to comprehend, with a readability score of 9.5. Conclusion. Despite modifications to the texts that improved the readability scores, patient education materials were still significantly above the recommended 6th grade level across all websites. This study emphasizes that clear and simple language is warranted in order to create information that is suitable for most patients.
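The readability indices named above are simple closed-form functions of word, sentence, and syllable counts. As a minimal sketch, two of them using their standard published formulas (the counts in the example are hypothetical; a real tool would also need a syllable counter, typically dictionary-based):

```python
import math

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Standard published Flesch-Kincaid Grade Level formula
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables: int, sentences: int) -> float:
    # Standard published SMOG formula; polysyllables = words with 3+ syllables
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# A hypothetical 100-word passage with 5 sentences and 150 syllables
print(round(flesch_kincaid_grade(100, 5, 150), 2))  # 9.91 -> roughly 10th grade
```

Longer sentences and more syllables per word push both grades upward, which is why removing unexplained medical terminology, as the authors did, lowers the scores.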

2020 · Vol 40 (11) · pp. NP636-NP642
Author(s): Eric Barbarite, David Shaye, Samuel Oyer, Linda N Lee

Abstract Background In an era of widespread Internet access, patients increasingly look online for health information. Given the frequency with which cosmetic botulinum toxin injection is performed, there is a need to provide patients with high-quality information about this procedure. Objectives The aim of this study was to examine the quality of printed online education materials (POEMs) about cosmetic botulinum toxin. Methods An Internet search was performed to identify 32 websites of various authorship types. Materials were evaluated for accuracy and inclusion of key content points. Readability was measured by Flesch Reading Ease and Flesch-Kincaid Grade Level. Understandability and actionability were assessed with the Patient Education Materials Assessment Tool for Printed Materials. The effect of authorship was measured by analysis of variance between groups. Results The mean [standard deviation] accuracy score among all POEMs was 4.2 [0.7], which represents an accuracy of 76% to 99%. Mean comprehensiveness was 47.0% [16.4%]. Mean Flesch-Kincaid Grade Level and Flesch Reading Ease scores were 10.7 [2.1] and 47.9 [10.0], respectively. Mean understandability and actionability were 62.8% [18.8%] and 36.2% [26.5%], respectively. There were no significant differences by authorship in accuracy (P > 0.2), comprehensiveness (P > 0.5), readability (P > 0.1), understandability (P > 0.3), or actionability (P > 0.2). Conclusions There is wide variability in the quality of cosmetic botulinum toxin POEMs regardless of authorship type. The majority of materials are written above the recommended reading level and fail to include important content points. It is critical that providers take an active role in the evaluation and endorsement of online patient education materials.


2020 · Vol 5 (4) · pp. 2473011420S0008
Author(s): Alan G. Shamrock, Burke Gao, Trevor Gulbrandsen, John E. Femino, Cesar de Cesar Netto, ...

Category: Ankle Arthritis; Ankle
Introduction/Purpose: Patients often access online resources to learn about orthopedic procedures prior to undergoing elective surgery. In order to be fully understood by the average English-speaking adult, online health information must be written at an elementary school reading level. To be helpful to patients, educational resources should also be generally understandable and have actionable direction that positively affects their healthcare interactions. There are several previously validated indices for assessing the reading level of written materials. The Patient Education Materials Assessment Tool (PEMAT) provides a reliable and validated method to measure the understandability and actionability of education materials. The purpose of this study was to utilize PEMAT and readability algorithms to quantify the readability, understandability, and actionability of online patient education materials related to total ankle arthroplasty (TAA). Methods: Online patient education materials were identified using two independently conducted Google searches with the term ‘ankle replacement’. Using the top 50 search results, articles were included if they specifically served to educate patients regarding TAA. Exclusion criteria included news articles, non-text materials (video), research manuscripts, industry websites, and articles not related to TAA. The readability of included articles was quantified using the validated Flesch-Kincaid Grade Level index. The PEMAT form (Figure 1) for printed materials was used to assess understandability and actionability on a 0-100 scale for both measures of interest. Spearman’s correlation coefficient was utilized to examine the relationship between a website’s average rank on Google (from first to last) and its readability, understandability, and actionability. P-values of less than 0.05 were considered significant. Results: Forty-one websites met inclusion criteria.
The mean Flesch-Kincaid reading grade level was 13.7 ± 15.3 (range: 6.3-16.8), with no website written at an elementary school level. Article readability scores were not associated with Google search rank (p>0.301). Mean understandability and actionability scores were 70.4 ± 15.3 and 24.4 ± 24.3, respectively. Among understandability categories, only 9.8% (n=4) of websites included summaries and only 46.3% (n=19) included visual aids. Among actionability categories, 58.5% (n=24) of websites identified at least one action for readers, but only 16.7% (n=4) of these broke down actions into explicit, easy-to-understand steps. Higher actionability scores were significantly associated with earlier Google search rank (rho: -0.44, p=0.004), while higher understandability scores were associated with later Google search rank (rho: 0.53, p<0.001). Conclusion: No website describing TAA was written at or below the nationally recommended 6th grade reading level. Overall, TAA online educational materials scored poorly with respect to readability, understandability, and actionability. Actionability, but not understandability, correlated with an earlier Google search rank. In the era of shared decision making, it is vital that patients understand procedures, as well as their risks and benefits, prior to undergoing elective surgery. These results suggest that current publicly available resources for TAA remain inadequate for patient education.
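Spearman’s correlation coefficient, used above to relate Google rank to each score, is a rank correlation; with no tied values it reduces to the classic rank-difference formula. A minimal sketch under that no-ties assumption (published analyses would typically use a statistics package such as scipy.stats.spearmanr, which also handles ties and reports p-values):

```python
def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (assumes no tied values)."""
    def ranks(values):
        order = {v: i + 1 for i, v in enumerate(sorted(values))}  # 1 = smallest
        return [order[v] for v in values]
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical example: search ranks 1..4 vs. scores that fall as rank worsens
print(spearman_rho([1, 2, 3, 4], [90, 70, 50, 30]))  # -1.0 (perfect negative)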


2020 · Vol 5 (4) · pp. 2473011420S0022
Author(s): Burke Gao, Alan G. Shamrock, Trevor Gulbrandsen, John E. Femino, Cesar de Cesar Netto, ...

Category: Sports; Trauma
Introduction/Purpose: Patients often access online resources to learn about orthopedic procedures prior to undergoing elective surgery. In order to be fully understood by the average English-speaking adult, online health information must be written at an elementary school reading level. To be helpful to patients, educational resources should also be generally understandable and have actionable direction that positively affects healthcare interactions. There are several previously validated indices for assessing the reading level of written materials. The Patient Education Materials Assessment Tool (PEMAT) provides a reliable and validated method to measure the understandability and actionability of education materials. The purpose of this study was to utilize PEMAT and readability algorithms to quantify the readability, understandability, and actionability of online patient education materials related to Achilles tendon repair. Methods: Online patient education materials were identified using two independently conducted Google searches with the term ‘Achilles tendon repair’. Using the top 50 search results, articles were included if they specifically served to educate patients regarding Achilles tendon repair. Exclusion criteria included news articles, non-text materials (video), research manuscripts, industry websites, and articles not related to Achilles tendon repair. The readability of included articles was quantified using the validated Flesch-Kincaid Grade Level index. The PEMAT form for printed materials was used to assess understandability and actionability on a 0-100 scale for both measures of interest. Spearman’s correlation coefficient was utilized to examine the relationship between a website’s average rank on Google (from first to last) and its readability, understandability, and actionability. P-values of less than 0.05 were considered significant. Results: Thirty-one websites met inclusion criteria.
The mean Flesch-Kincaid reading grade level was 10.8 ± 2.9, with only one website written below the 6th grade reading level. A higher Flesch-Kincaid grade was associated with later Google search rank (rho: 0.488, p=0.010). Mean understandability and actionability scores were 67.1 ± 16.4% and 38.3 ± 28.4%, respectively. Among understandability criteria, only 12.9% (n=4) of articles included summaries and just 38.7% (n=12) included visual aids. Among actionability categories, 74% (n=23) of websites identified at least one action for readers, while only 60.8% (n=14) of these broke down actions into explicit, easy-to-understand steps. Actionability scores were not correlated with Google search rank (rho: -0.02, p=0.888), while higher understandability scores were associated with later Google search rank (rho: 0.45, p=0.017). Conclusion: Only one website describing Achilles tendon repair was written at or below the nationally recommended 6th grade reading level. Overall, Achilles tendon repair online educational materials scored poorly with respect to readability, understandability, and actionability. Articles that appeared earlier in the Google search had lower readability and understandability scores. In the era of shared decision making, it is vital that patients understand procedures, as well as their risks and benefits, prior to undergoing elective surgery. These results suggest that current publicly available resources for Achilles tendon repair remain inadequate for patient education.


2022 · pp. 000348942110666
Author(s): Elysia Miriam Grose, Emily YiQin Cheng, Marc Levin, Justine Philteos, Jong Wook Lee, ...

Purpose: Complications related to parotidectomy can cause significant morbidity, and thus, the decision to pursue this surgery needs to be well-informed. Given that information available online plays a critical role in patient education, this study aimed to evaluate the readability and quality of online patient education materials (PEMs) regarding parotidectomy. Methods: A Google search was performed using the term “parotidectomy” and the first 10 pages of the search were analyzed. Quality and reliability of the online information were assessed using the DISCERN instrument. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) scores were used to evaluate readability. Results: Thirty-five PEMs met the inclusion criteria. The average FRE score was 59.3, and 16 (46%) of the online PEMs had FRE scores below 60, indicating that they were fairly difficult to very difficult to read. The average grade level of the PEMs was above the eighth grade when evaluated with the FKGL. The average DISCERN score was 41.7, which is indicative of fair quality. There were no significant differences between PEMs originating from medical institutions and PEMs originating from other sources in terms of quality or readability. Conclusion: Online PEMs on parotidectomy may not be comprehensible to the average individual. This study highlights the need for the development of more appropriate PEMs to inform patients about parotidectomy.
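The FRE cutoff of 60 used above comes from a fixed formula and conventional difficulty bands. A sketch with the standard Flesch Reading Ease formula and an illustrative band mapping (the example counts are hypothetical):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Standard Flesch Reading Ease formula; higher scores are easier to read
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fre_band(score: float) -> str:
    # Conventional Flesch difficulty bands (illustrative mapping)
    for cutoff, label in [(90, "very easy"), (70, "easy"), (60, "plain English"),
                          (50, "fairly difficult"), (30, "difficult")]:
        if score >= cutoff:
            return label
    return "very difficult"

# A hypothetical passage: 100 words, 5 sentences, 150 syllables
score = flesch_reading_ease(100, 5, 150)  # about 59.6
print(fre_band(score))  # scores below 60 read as "fairly difficult" or worse
```

Note that FRE runs opposite to grade-level indices: a score of 59.3, as reported here, sits just under the "plain English" threshold.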


Author(s): Elysia M. Grose, Connor P. Holmes, Kaishan A. Aravinthan, Vincent Wu, John M. Lee

Abstract Background Given that nasal septoplasty is a common procedure in otolaryngology – head and neck surgery, the objective of this study was to evaluate the quality and readability of online patient education materials on septoplasty. Methods A Google search was performed using eight different search terms related to septoplasty. Six different tools were used to assess the readability of included patient education materials: the Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning-Fog Index, Simple Measure of Gobbledygook Index, Coleman-Liau Index, and Automated Readability Index. The DISCERN tool was used to assess quality and reliability. Results Eighty-five online patient education materials were included. The average Flesch Reading Ease score for all patient education materials was 54.9 ± 11.5, indicating they were fairly difficult to read. The average reading grade level was 10.5 ± 2.0, which is higher than the recommended reading level for patient education materials. The mean DISCERN score was 42.9 ± 10.5, and 42% (36/85) of articles had DISCERN scores less than 39, corresponding to poor or very poor quality. Conclusion The majority of online patient education materials on septoplasty are written above the recommended reading levels and have significant deficiencies in terms of their quality and reliability. Clinicians and patients should be aware of the shortcomings of these resources and consider the impact they may have on patients’ decision making.
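Two of the six indices named above, Coleman-Liau and the Automated Readability Index, differ from the others in needing only character counts rather than syllable counts, which makes them straightforward to compute exactly. A minimal sketch using the standard published formulas (the example counts are hypothetical):

```python
def coleman_liau(letters: int, words: int, sentences: int) -> float:
    # Standard Coleman-Liau Index: L = letters per 100 words,
    # S = sentences per 100 words
    L = letters / words * 100
    S = sentences / words * 100
    return 0.0588 * L - 0.296 * S - 15.8

def automated_readability(chars: int, words: int, sentences: int) -> float:
    # Standard Automated Readability Index; chars counts letters and digits
    return 4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43

# Hypothetical passage: 450 letters, 100 words, 5 sentences
print(round(coleman_liau(450, 100, 5), 2))  # 9.18 -> about a 9th grade level
```

Because each index weights sentence and word length differently, studies such as this one often report an average across several of them rather than a single score.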


Author(s): Adam J. Beer, Michael Eggerstedt, Matthew J. Urban, Ryan M. Smith, Peter C. Revenaugh

Abstract Injectable facial fillers have become tremendously popular in recent years, and the Internet offers a proportional amount of consumer-facing educational material. This study sought to explore the quality of these online materials. The top 20 Web sites offering educational materials about facial fillers were identified via Google search and sorted by source: Medical Professional Boards, Hospitals and Providers, Medical News and Reference, and Fashion. The materials were assessed for overall quality with the validated DISCERN instrument. The authors also assessed understandability and actionability (Patient Education Materials Assessment Tool - PEMAT), accuracy, comprehensiveness, and readability (Flesch-Kincaid Grade Level and Flesch Reading Ease). The mean DISCERN score was 46.9 ± 7.6, which is considered “fair” quality educational material; above “poor,” but below “good” and “excellent.” Understandability and actionability scores were low, particularly with respect to visual aids. The materials were generally accurate (76–99%), but scored poorly in comprehensiveness, as 15% failed to mention any risks/adverse effects and only 35% mentioned cost. On average, readability was at an 11th grade level, far more complex than the ideal (<6th grade level). Information disseminated from seemingly reputable sources such as professional boards and hospitals/providers was not of higher quality or superior in any of the studied domains. In conclusion, online educational materials related to injectable facial fillers are of subpar quality, including those from academic and professional organizations. Visual aids were particularly weak. The facial rejuvenation community should make a concerted effort to set a higher standard for disseminating such information.


Cartilage · 2016 · Vol 8 (2) · pp. 112-118
Author(s): Dean Wang, Rohit G. Jayakar, Natalie L. Leong, Michael P. Leathers, Riley J. Williams, ...

Objective Patients commonly use the Internet to obtain their health-related information. The purpose of this study was to investigate the quality, accuracy, and readability of online patient resources for the management of articular cartilage defects. Design Three search terms (“cartilage defect,” “cartilage damage,” “cartilage injury”) were entered into 3 Internet search engines (Google, Bing, Yahoo). The first 25 websites from each search were collected and reviewed. The quality and accuracy of online information were independently evaluated by 3 reviewers using predetermined scoring criteria. The readability was evaluated using the Flesch-Kincaid (FK) grade score. Results Fifty-three unique websites were evaluated. Quality ratings were significantly higher in websites with a FK score >11 compared to those with a score of ≤11 (P = 0.021). Only 10 websites (19%) differentiated between focal cartilage defects and diffuse osteoarthritis. Of these, 7 (70%) were elicited using the search term “cartilage defect” (P = 0.038). The average accuracy of the websites was high (11.7 out of a maximum of 12), and the average FK grade level (13.4) was several grades higher than the recommended level for readable patient education material (eighth grade level). Conclusions The quality and readability of online patient resources for articular cartilage defects favor those with a higher level of education. Additionally, the majority of these websites do not distinguish between focal chondral defects and diffuse osteoarthritis, which can fail to provide appropriate patient education and guidance for available treatment. Clinicians should help guide patients toward high-quality, accurate, and readable online patient education material.


2021
Author(s): Keon Pearson, Summer Ngo, Eson Ekpo, Ashish Sarraju, Grayson Baird, ...

BACKGROUND Lipoprotein (a) (Lp(a)) is a highly proatherogenic lipid fraction that is a clinically significant risk modifier. Patients wanting to learn more about Lp(a) are likely to use online patient education materials (OPEM). However, the readability of OPEM may exceed the health literacy of the general public. OBJECTIVE This study aims to assess the readability of online patient education materials related to Lp(a). We hypothesized that the readability of these online materials would exceed the 6th grade level recommended by the American Medical Association (AMA). METHODS Using an online search engine, we queried the top 20 search results for 10 commonly used Lp(a)-related search terms to identify a total of 200 websites. We excluded duplicate websites, advertised results, research journal articles, and non-patient-directed materials, such as those intended only for health professionals or researchers. Grade-level readability was calculated using 5 standard readability metrics (Automated Readability Index, SMOG Index, Coleman-Liau Index, Gunning Fog Score, Flesch-Kincaid score) to produce robust point (mean) and interval (confidence interval) estimates of readability. Generalized estimating equations were used to model grade-level readability by each search term, with the 5 readability scores nested within each OPEM. RESULTS A total of 27 unique websites were identified for analysis. The average readability for the aggregated results was grade level 12.2 (95% CI 11.0-13.4). OPEM were grouped into 6 categories by primary source: industry, lay press, research foundation and non-profit organizations, university or government, clinic, and other. The most readable were OPEM published by universities or government agencies (9.0, 95% CI 6.8-11.3). The least readable OPEM on average were those published by the lay press (13.0, 95% CI 11.2-14.8). All categories exceeded the 6th grade reading level recommended by the AMA.
CONCLUSIONS Lack of access to readable OPEM may disproportionately affect patients with low health literacy. Ensuring that online content is understandable by broad audiences is a necessary component of increasing the impact of novel therapeutics and recommendations regarding Lp(a).


OTO Open · 2021 · Vol 5 (3) · pp. 2473974X2110326
Author(s): Matthew Shneyderman, Grace E. Snow, Ruth Davis, Simon Best, Lee M. Akst

Objectives To assess readability and understandability of online materials for vocal cord leukoplakia. Study Design Review of online materials. Setting Academic medical center. Methods A Google search of “vocal cord leukoplakia” was performed, and the first 50 websites were considered for analysis. Readability was measured by the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and Simple Measure of Gobbledygook (SMOG). Understandability and actionability were assessed by 2 independent reviewers with the PEMAT-P (Patient Education Materials Assessment Tool for Printable Materials). Unpaired t tests compared scores between sites aimed at physicians and those aimed at patients, and Cohen’s kappa was calculated to measure interrater reliability. Results Twenty-two websites (17 patient oriented, 5 physician oriented) met inclusion criteria. For the entire cohort, FRES, FKGL, and SMOG scores (mean ± SD) were 36.90 ± 20.65, 12.96 ± 3.28, and 15.65 ± 3.57, respectively, indicating that materials were difficult to read at a >12th-grade level. PEMAT-P understandability and actionability scores were 73.65% ± 7.05% and 13.63% ± 22.47%. Statistically, patient-oriented sites were more easily read than physician-oriented sites (P < .02 for each of the FRES, FKGL, and SMOG comparisons); there were no differences in understandability or actionability scores between these categories of sites. Conclusion Online materials for vocal cord leukoplakia are written at a level more advanced than what is recommended for patient education materials. Awareness of the current ways that these online materials are failing our patients may lead to improved education materials in the future.


2020
Author(s): Trevor Gulbrandsen, Mary Kate Skalitzky, Alan Gregory Shamrock, Burke Gao, Obada Hasan, ...

BACKGROUND Patients often turn to online resources following the diagnosis of osteosarcoma. To be fully understood by the average American adult, the American Medical Association (AMA) and National Institutes of Health (NIH) recommend that online health information be written at a 6th grade level or lower. Previous analyses of osteosarcoma resources have not measured whether text is written such that readers can process key information (understandability) or identify available actions to take (actionability). The Patient Education Materials Assessment Tool (PEMAT) is a validated measurement of understandability and actionability. OBJECTIVE The purpose of this study was to evaluate osteosarcoma online resources utilizing measures of readability, understandability, and actionability. METHODS Using the search term “osteosarcoma”, two independent searches (Google.com) were performed and the top 50 results were collected. Websites were included if directed at providing patient education on osteosarcoma. Readability was quantified using validated algorithms: Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL). A higher FRE score indicates that the material is easier to read; FKGL scores represent the US school grade level. Two independent PEMAT assessments were performed, with independent scores assigned for both understandability and actionability. A PEMAT score of 70% or below is considered poorly understandable and/or poorly actionable. Statistical significance was defined as p≤0.05. RESULTS Of 53 unique websites, 37 (69.8%) met inclusion criteria. The mean FRE was 40.8 ± 13.6. The mean FKGL was 12.0 ± 2.4. No (0%) websites scored within the acceptable NIH/AMA recommended reading level. Overall, only 10.8% (n=4) and 2.7% (n=1) met the acceptable understandability and actionability thresholds.
CONCLUSIONS Overall, osteosarcoma online patient educational materials scored poorly with respect to readability, understandability, and actionability. None of the online resources scored at the recommended reading level. Only four met the score required to be considered understandable by the general public. Future efforts should be made to improve online resources in order to support patient understanding.

