Evaluation of Quality and Readability of Health Information Websites Identified through India’s Major Search Engines

2016 ◽  
Vol 2016 ◽  
pp. 1-6 ◽  
Author(s):  
S. Raj ◽  
V. L. Sharma ◽  
A. J. Singh ◽  
S. Goel

Background. Health information available on websites should be reliable and accurate so that the community can make informed decisions. This study was done to assess the quality and readability of health information websites on the World Wide Web in India. Methods. This cross-sectional study was carried out in June 2014. The key words “Health” and “Information” were used on the search engines Google and Yahoo. Of 50 websites (25 from each search engine), 32 remained after exclusion and were evaluated. The LIDA tool was used to assess quality, whereas readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and SMOG. Results. Forty percent of websites (n=13) were sponsored by government. Health On the Net Code of Conduct (HONcode) certification was present on 50% (n=16) of websites. The mean LIDA score (74.31) was average; only 3 websites scored high on the LIDA score. Only five had readability scores at the recommended sixth-grade level. Conclusion. Most health information websites were of average quality, especially in terms of usability and reliability, and were written at high readability levels. Efforts are needed to develop health information websites that can help the general population make informed decisions.
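For context, the two Flesch measures named above are simple linear functions of word, sentence, and syllable counts. A minimal sketch using the standard published coefficients (syllable and word counting is assumed to be done elsewhere; the example counts are illustrative, not from the study):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease Score: higher = easier; 60-70 is 'plain English'."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: maps the same counts to a US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a 100-word passage in 8 sentences with 140 syllables scores
# FRES ~75.7 ("fairly easy") and FKGL ~5.8 (about the sixth-grade target).
print(round(flesch_reading_ease(100, 8, 140), 1))  # 75.7
print(round(flesch_kincaid_grade(100, 8, 140), 1))  # 5.8
```

Note how both formulas reward short sentences and short words; the "recommended sixth-grade level" cited in these studies corresponds to an FKGL near 6 or an FRES of roughly 70 or above.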

Author(s):  
N. E. Wrigley Kelly ◽  
K. E. Murray ◽  
C. McCarthy ◽  
D. B. O’Shea

Abstract High-quality, readable health information is vital to mitigating the impact of the COVID-19 pandemic. The aim of this study was to assess the quality and readability of online COVID-19 information using 6 validated tools. This is a cross-sectional study. “COVID-19” was searched across the three most popular English-language search engines. Quality was evaluated using the DISCERN score, the Journal of the American Medical Association (JAMA) benchmark criteria, and the Health On the Net Foundation Code of Conduct (HONcode). Readability was assessed using the Flesch Reading Ease Score, Flesch-Kincaid Grade Level, and Gunning-Fog Index. 41 websites were suitable for analysis. 9.8% fulfilled all JAMA criteria. Only one website was HONcode certified. The mean DISCERN score was 47.8/80 (“fair”), and was highest for websites published by a professional society, medical journal, or healthcare provider. Readability varied from an 8th- to 12th-grade level. The overall quality of online COVID-19 information was “fair”. Much of this information was above the recommended 5th- to 6th-grade reading level, impeding access for many.


2020 ◽  
Author(s):  
Esam Halboub ◽  
Mohammed Sultan Al-Akhali ◽  
Hesham M Al-Mekhlafi ◽  
Mohammed Nasser Alhajj

Abstract Objective: The study sought to assess the quality and readability of web-based Arabic health information on COVID-19. Methods: Selected search engines were searched on 13 April 2020 for specific Arabic terms on COVID-19. The first 100 consecutive websites from each engine were obtained. The quality of the websites was analyzed using the Health on the Net Foundation Code of Conduct (HONcode), the Journal of the American Medical Association (JAMA) benchmarks, and the DISCERN instrument. Readability was assessed using an online readability calculator tool. Results: Overall, 36 websites were found eligible for quality and readability analyses. Only one website (2.7%) was HONcode certified. No single website attained a high score on the DISCERN tool; the mean score of all websites was 31.5±12.55. On the JAMA benchmarks, the websites achieved a mean score of 2.08±1.05; only 4 (11.1%) websites met all JAMA criteria. The mean readability scores were 7.2±7.5 on the Flesch-Kincaid Grade Level, 3.3±0.6 on SMOG, and 93.5±19.4 on the Flesch Reading Ease scale. Conclusion: Most of the available web-based Arabic health information on COVID-19 does not meet the required level of quality, even though it is easy for most of the general public to read and understand.


2022 ◽  
pp. 000348942110666
Author(s):  
Elysia Miriam Grose ◽  
Emily YiQin Cheng ◽  
Marc Levin ◽  
Justine Philteos ◽  
Jong Wook Lee ◽  
...  

Purpose: Complications related to parotidectomy can cause significant morbidity, and thus the decision to pursue this surgery needs to be well-informed. Given that information available online plays a critical role in patient education, this study aimed to evaluate the readability and quality of online patient education materials (PEMs) regarding parotidectomy. Methods: A Google search was performed using the term “parotidectomy,” and the first 10 pages of the search were analyzed. The quality and reliability of the online information were assessed using the DISCERN instrument. The Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRE) were used to evaluate readability. Results: Thirty-five PEMs met the inclusion criteria. The average FRE score was 59.3, and 16 (46%) of the online PEMs had FRE scores below 60, indicating that they were fairly difficult to very difficult to read. The average grade level of the PEMs was above the eighth grade when evaluated with the FKGL. The average DISCERN score was 41.7, which is indicative of fair quality. There were no significant differences between PEMs originating from medical institutions and PEMs originating from other sources in terms of quality or readability. Conclusion: Online PEMs on parotidectomy may not be comprehensible to the average individual. This study highlights the need for the development of more appropriate PEMs to inform patients about parotidectomy.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Esam Halboub ◽  
Mohammed Sultan Al-Ak’hali ◽  
Hesham M. Al-Mekhlafi ◽  
Mohammed Nasser Alhajj

Abstract Background This study sought to assess the quality and readability of web-based Arabic health information on COVID-19. Methods Three search engines were searched on 13 April 2020 for specific Arabic terms on COVID-19. The first 100 consecutive websites from each engine were analyzed for eligibility, which resulted in a sample of 36 websites. These websites were subjected to quality assessments using the Journal of the American Medical Association (JAMA) benchmarks tool, the DISCERN tool, and Health on the Net Foundation Code of Conduct (HONcode) certification. The readability of the websites was assessed using an online readability calculator. Results Among the 36 eligible websites, only one (2.7%) was HONcode certified. No website attained a high score based on the criteria of the DISCERN tool; the mean score of all websites was 31.5 ± 12.55. On the JAMA benchmarks, the websites achieved a mean score of 2.08 ± 1.05; however, only four (11.1%) met all the JAMA criteria. The mean readability scores were 7.2 ± 7.5 on the Flesch-Kincaid Grade Level, 3.3 ± 0.6 on the Simple Measure of Gobbledygook, and 93.5 ± 19.4 on the Flesch Reading Ease scale. Conclusion Almost all of the most easily accessible web-based Arabic health information on COVID-19 does not meet recognized quality standards, regardless of its readability and how easily the general population of Arabic speakers can understand it.


10.2196/14826 ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. e14826 ◽  
Author(s):  
Fuzhi Wang ◽  
Zhuoxin Wang ◽  
Weiwei Sun ◽  
Xiumu Yang ◽  
Zhiwei Bian ◽  
...  

Background As representatives of health information communication platforms accessed through mobile phones and mobile terminals, health-related WeChat public accounts (HWPAs) have a large consumer base in the Chinese-speaking world. However, there is still a lack of general understanding of the status quo of HWPAs and the quality of the articles they release. Objective The aims of this study were to assess the conformity of HWPAs to the Health on the Net Foundation Code of Conduct (HONcode) and to evaluate the suitability of articles disseminated by HWPAs. Methods The survey was conducted from April 23 to May 5, 2019. Based on the monthly (March 1-31, 2019) WeChat Index provided by Qingbo Big Data, the top 100 HWPAs were examined to evaluate their HONcode compliance. The first four articles published by each HWPA on the survey dates were selected as samples to evaluate their suitability. All materials were assessed by three raters using the HONcode checklist and the Suitability Assessment of Materials (SAM) score sheet. Data analysis was performed with SPSS version 17.0 (SPSS Inc, Chicago, IL, USA) and Excel 2013 (Microsoft Corp, Redmond, WA, USA). Results A total of 93 HWPAs and 210 of their released articles were included in this study. For six of the eight principles, the 93 HWPAs almost uniformly failed to meet the requirements of the HONcode. The HWPAs certified by Tencent Corporation (66/93, 71%) were generally slightly superior to those without such certification (27/93, 29%) in terms of compliance with HONcode principles. The mean SAM score for the 210 articles was 67.72 (SD 10.930), which indicated “adequate” suitability. There was no significant difference between the SAM scores of articles published by certified and uncertified HWPAs (P=.07), except in the literacy requirements dimension (t97=−2.418, P=.02). Conclusions The HWPAs had low HONcode conformity. Although the suitability of health information released by HWPAs was at a moderate level, problems remained, such as difficulty in tracing information sources, excessive implicit advertisements, and irregular usage of charts. In addition, the low approval requirements for HWPAs were not conducive to improving their service quality.


2018 ◽  
Vol 33 (5) ◽  
pp. 487-492 ◽  
Author(s):  
Lubna Daraz ◽  
Allison S. Morrow ◽  
Oscar J. Ponce ◽  
Wigdan Farah ◽  
Abdulrahman Katabi ◽  
...  

Online health information should meet the reading level for the general public (set at sixth-grade level). Readability is a key requirement for information to be helpful and improve quality of care. The authors conducted a systematic review to evaluate the readability of online health information in the United States and Canada. Out of 3743 references, the authors included 157 cross-sectional studies evaluating 7891 websites using 13 readability scales. The mean readability grade level across websites ranged from grade 10 to 15 based on the different scales. Stratification by specialty, health condition, and type of organization producing information revealed the same findings. In conclusion, online health information in the United States and Canada has a readability level that is inappropriate for general public use. Poor readability can lead to misinformation and may have a detrimental effect on health. Efforts are needed to improve readability and the content of online health information.


2002 ◽  
Vol 36 (12) ◽  
pp. 1856-1861 ◽  
Author(s):  
David R Foster ◽  
Denise H Rhoney

BACKGROUND: Written information can be a valuable tool in patient education. Studies evaluating written information for various disease states have frequently demonstrated that the majority of the literature is written at a readability level that exceeds that of the average patient, and it has been recommended that written communications for adult patients be provided at a fifth-grade level or lower. OBJECTIVE: To assess the readability of printed patient information available to patients with epilepsy. METHODS: Samples of written patient information (n = 101) were obtained from various sources. The information was classified based on source, content, and intended audience, and readability was assessed using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) score. RESULTS: The mean FRES and FKGL scores for all samples were 50.2 and 9.4, respectively. Significant differences were observed in both the FRES and FKGL scores of material obtained from different sources; however, no differences were observed when material was analyzed according to content. The mean FRES and FKGL scores for materials intended for adults were 49.6 and 9.5, respectively. In comparison, the mean FRES and FKGL scores for materials intended for children/adolescents were 78.9 and 5.3, respectively. CONCLUSIONS: The majority of the information tested was written at a level that exceeds the reading ability of many patients; the information intended for children is actually written at a level appropriate for adults. Efforts should be taken to develop written teaching tools that target low-level readers, especially for a disease state that affects many children.


Author(s):  
Naudia Falconer ◽  
E. Reicherter ◽  
Barbara Billek-Sawhney ◽  
Steven Chesbro

The readability level of many patient education materials is too high for patients to comprehend, placing patients’ health at risk. Since health professionals often recommend Internet-based patient education resources, they must ensure that the readability of information provided to consumers is at an appropriate level. Purpose: The purpose of this study was to determine the readability of educational brochures found on the American Physical Therapy Association (APTA) consumer website. Methods: Fourteen educational brochures on the APTA website in March 2008 were analyzed using the following assessments: Flesch-Kincaid Grade Level, Flesch Reading Ease, Fry Readability Formula, Simple Measure of Gobbledygook (SMOG), Checklist for Patient Education Materials, and Consumer Health Web Site Evaluation Checklist. Results: According to the Flesch-Kincaid and Flesch Reading Ease scores, over 90% of the brochures were written above a sixth-grade level. The mean reading level was grade 10.2 (range = 3.1 to 12), with Reading Ease scores ranging from 31.5 to 79.9. Using the SMOG formula, the brochures had a mean reading level of grade 11.5 (range = 9 to 13). The Fry Readability Formula showed that 85% of the brochures were written above a sixth-grade level, with a mean reading level of grade 9.5 (range = 6 to 14). Conclusion: Findings suggest that most of the consumer education information available on the website of this health professional organization had readability scores too high for the average consumer to read.


2021 ◽  
Vol 109 (1) ◽  
Author(s):  
Saeideh Valizadeh-Haghi ◽  
Yasser Khazaal ◽  
Shahabedin Rahmatizadeh

Objective: There are concerns about nonscientific and/or unclear information on coronavirus disease 2019 (COVID-19) that is available on the Internet. Furthermore, people’s ability to understand health information varies and depends on their skills in reading and interpreting information. This study aims to evaluate the readability and credibility of websites with COVID-19-related information. Methods: The search terms “coronavirus,” “COVID,” and “COVID-19” were input into Google. The websites of the first thirty results for each search term were evaluated in terms of their credibility and readability using the Health On the Net Foundation code of conduct (HONcode) and the Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), Gunning Fog, and Flesch Reading Ease Score (FRE) scales, respectively. Results: The readability of COVID-19-related health information on websites was suitable for high school graduates or college students and, thus, was far above the recommended readability level. Most websites examined (87.2%) had not been officially certified by HONcode. There was no significant difference in the readability scores of websites with and without HONcode certification. Conclusion: These results suggest that organizations should improve the readability of their websites and provide information that more people can understand. This could lead to greater health literacy, less health anxiety, and the provision of better preventive information about the disease.
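The two remaining grade-level indices used across these studies, SMOG and the Gunning Fog Index, are equally mechanical. A minimal sketch using the standard published formulas (counts of polysyllabic/complex words, i.e. words of three or more syllables, are assumed as precomputed inputs; the example counts are illustrative):

```python
import math

def smog_grade(polysyllables, sentences):
    """SMOG grade: driven by words of 3+ syllables, normalized to 30 sentences."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

def gunning_fog(words, sentences, complex_words):
    """Gunning Fog Index: average sentence length plus percent complex words."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# Example: 15 complex words in a 100-word, 8-sentence passage gives a
# Fog index of 0.4 * (12.5 + 15) = 11.0, i.e. roughly 11th-grade reading.
print(round(gunning_fog(100, 8, 15), 1))  # 11.0
```

Both indices return a US school-grade estimate directly, which is why results from FKGL, SMOG, and Gunning Fog can be compared against the same sixth-grade recommendation, while Flesch Reading Ease runs on its own 0-100 scale.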


1994 ◽  
Vol 12 (10) ◽  
pp. 2211-2215 ◽  
Author(s):  
S A Grossman ◽  
S Piantadosi ◽  
C Covahey

PURPOSE This study was conducted to assess the readability of informed consent forms that describe clinical oncology protocols. METHODS One hundred thirty-seven consent forms from 88 protocols that accrued patients at The Johns Hopkins Oncology Center were quantitatively analyzed. These included 58 of 99 (59%) institutional protocols approved by The Johns Hopkins Oncology Center's Clinical Research Committee and the Institutional Review Board (IRB) over a 2-year period, and 30 active Eastern Cooperative Oncology Group (ECOG), Radiation Therapy Oncology Group (RTOG), and Pediatric Oncology Group (POG) trials. The consent forms described phase I (17%), phase I/II (36%), phase III (29%), and nontherapeutic (18%) studies. Each was optically scanned, checked for accuracy, and analyzed using readability software. The following three readability indices were obtained for each consent form: the Flesch Reading Ease Score, and grade level readability as determined by the Flesch-Kincaid Formula and the Gunning Fog Index. RESULTS The mean ± SD Flesch Reading Ease Score for the consent forms was 52.6 ± 8.7 (range, 33 to 78). The mean grade level was 11.1 ± 1.67 (range, 6 to 14) using the Flesch-Kincaid Formula and 14.1 ± 1.8 (range, 8 to 17) using the Gunning Fog Index. Readability at or below an eighth-grade level was found in 6% of the consent forms using the Flesch-Kincaid Formula and in 1% using the Gunning Fog Index. Readability was similar for consent forms that described institutional, cooperative group, and phase I, II, and III protocols. CONCLUSION Consent forms from clinical oncology protocols are written at a level that is difficult for most patients to read, despite national, cooperative group, institutional, and departmental review. The consent process, which is crucial to clinical research, should be strengthened by improving the readability of the consent forms.

