Health websites on COVID-19: are they readable and credible enough to help public self-care?

2021 ◽  
Vol 109 (1) ◽  
Author(s):  
Saeideh Valizadeh-Haghi ◽  
Yasser Khazaal ◽  
Shahabedin Rahmatizadeh

Objective: There are concerns about nonscientific and/or unclear information on coronavirus disease 2019 (COVID-19) that is available on the Internet. Furthermore, people’s ability to understand health information varies and depends on their skills in reading and interpreting information. This study aims to evaluate the readability and credibility of websites with COVID-19-related information. Methods: The search terms “coronavirus,” “COVID,” and “COVID-19” were entered into Google. The websites of the first thirty results for each search term were evaluated for credibility and readability using the Health On the Net Foundation Code of Conduct (HONcode) and the Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), Gunning Fog, and Flesch Reading Ease (FRE) scales, respectively. Results: The readability of COVID-19-related health information on websites was suitable for high school graduates or college students and was thus far above the recommended readability level. Most websites examined (87.2%) had not been officially certified by HONcode. There was no significant difference in the readability scores of websites with and without HONcode certification. Conclusion: These results suggest that organizations should improve the readability of their websites and provide information that more people can understand. This could lead to greater health literacy, less health anxiety, and the provision of better preventive information about the disease.
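The readability scales named above (FKGL, SMOG, Gunning Fog, and FRE) are all closed-form formulas over sentence, word, and syllable counts. As a rough, illustrative sketch only, the Python snippet below computes all four; the vowel-group syllable counter and the tokenization are crude assumptions, so its numbers will differ somewhat from the calibrated calculators the studies actually used.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough heuristic: count contiguous vowel groups (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    """Approximate FRE, FKGL, SMOG, and Gunning Fog for a plain-text passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    n_sent, n_words, n_syll = len(sentences), len(words), sum(syllables)
    n_poly = sum(1 for s in syllables if s >= 3)  # "complex" words: 3+ syllables

    asl = n_words / n_sent  # average sentence length (words per sentence)
    asw = n_syll / n_words  # average syllables per word
    return {
        "FRE":  206.835 - 1.015 * asl - 84.6 * asw,
        "FKGL": 0.39 * asl + 11.8 * asw - 15.59,
        "SMOG": 1.043 * (n_poly * 30 / n_sent) ** 0.5 + 3.1291,
        "Fog":  0.4 * (asl + 100 * n_poly / n_words),
    }

if __name__ == "__main__":
    sample = ("Wash your hands often with soap and water for at least twenty "
              "seconds, especially after you have been in a public place.")
    print(readability_scores(sample))
```

Lower FKGL, SMOG, and Fog values and higher FRE values indicate easier text; the roughly sixth-grade target cited in several of these studies corresponds to an FRE of about 80 or above.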

2016 ◽  
Vol 2016 ◽  
pp. 1-6 ◽  
Author(s):  
S. Raj ◽  
V. L. Sharma ◽  
A. J. Singh ◽  
S. Goel

Background. The health information available on websites should be reliable and accurate so that the community can make informed decisions. This study was done to assess the quality and readability of health information websites on the World Wide Web in India. Methods. This cross-sectional study was carried out in June 2014. The key words “Health” and “Information” were used in the search engines Google and Yahoo. Out of 50 websites (25 from each search engine), after exclusion, 32 websites were evaluated. The LIDA tool was used to assess quality, whereas readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and SMOG. Results. Forty percent of the websites (n=13) were sponsored by the government. Health On the Net Code of Conduct (HONcode) certification was present on 50% (n=16) of the websites. The mean LIDA score (74.31) was average. Only 3 websites scored high on the LIDA score. Only five had readability scores at the recommended sixth-grade level. Conclusion. Most health information websites had average quality, especially in terms of usability and reliability, and were written at high readability levels. Efforts are needed to develop health information websites that can help the general population make informed decisions.


2020 ◽  
Author(s):  
Amy P Worrall ◽  
Mary J Connolly ◽  
Aine O'Neill ◽  
Murray O'Doherty ◽  
Kenneth P Thornton ◽  
...  

Abstract Introduction: The internet is now the first-line source of health information for many people worldwide. In the current Coronavirus Disease 2019 (COVID-19) global pandemic, health information is being produced, revised, updated, and disseminated at an increasingly rapid rate. The general public are faced with a plethora of misinformation regarding COVID-19, and the readability of online information has an impact on their understanding of the disease. The accessibility of online healthcare information relating to COVID-19 is unknown. Methods: The Google® search engine was used to collate the first twenty webpage URLs for three individual searches for ‘COVID’, ‘COVID-19’, and ‘coronavirus’ from Ireland, the United Kingdom, Canada, and the United States. The Gunning Fog Index (GFI), Flesch-Kincaid Grade (FKG) score, Flesch Reading Ease Score (FRES), and Simple Measure of Gobbledygook (SMOG) score were calculated to assess readability. Results: There were poor levels of readability among the webpages reviewed, with only 17.2% of webpages at a universally readable level. There was a significant difference in readability between webpages based on their information source (p < 0.01). Public health organisations and government organisations provided the most readable COVID-19 material, while digital media sources were significantly less readable. There were no significant differences in readability between regions. Conclusion: Much of the general public have relied on online information during the pandemic. Information on COVID-19 should be made more readable, and those writing webpages and information tools should ensure universal accessibility is considered in their production. Governments and healthcare practitioners should be aware of the online sources of information available and ensure that the readability of their own material is at a universally readable level, which will increase understanding of and adherence to health guidelines.
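Several of these studies begin by collating the first 20–30 Google results for each search term. The authors presumably collected these by hand; purely as an assumed alternative, the sketch below shows how a comparable URL list could be gathered programmatically with Google’s Custom Search JSON API. The API key and search engine ID are hypothetical placeholders, and the API caps each request at 10 results, so pagination is needed.

```python
# Hypothetical sketch: gather the top result URLs for each search term with
# Google's Custom Search JSON API (requires the third-party 'requests' package,
# an API key, and a Programmable Search Engine ID; 10 results max per request).
import requests

API_KEY = "YOUR_API_KEY"      # placeholder, not a real credential
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder, not a real engine ID

def top_urls(query: str, total: int = 20) -> list:
    """Return up to `total` result URLs for a query, paging 10 at a time."""
    urls = []
    for start in range(1, total + 1, 10):
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": ENGINE_ID, "q": query,
                    "num": min(10, total - len(urls)), "start": start},
            timeout=30,
        )
        resp.raise_for_status()
        urls.extend(item["link"] for item in resp.json().get("items", []))
    return urls[:total]

if __name__ == "__main__":
    for term in ("COVID", "COVID-19", "coronavirus"):
        print(term, top_urls(term))
```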


10.2196/14826 ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. e14826 ◽  
Author(s):  
Fuzhi Wang ◽  
Zhuoxin Wang ◽  
Weiwei Sun ◽  
Xiumu Yang ◽  
Zhiwei Bian ◽  
...  

Background As representatives of health information communication platforms accessed through mobile phones and mobile terminals, health-related WeChat public accounts (HWPAs) have a large consumer base in the Chinese-speaking world. However, there is still a lack of general understanding of the status quo of HWPAs and the quality of the articles they release. Objective The aims of this study were to assess the conformity of HWPAs to the Health on the Net Foundation Code of Conduct (HONcode) and to evaluate the suitability of articles disseminated by HWPAs. Methods The survey was conducted from April 23 to May 5, 2019. Based on the monthly (March 1-31, 2019) WeChat Index provided by Qingbo Big Data, the top 100 HWPAs were examined to evaluate their HONcode compliance. The first four articles published by each HWPA on the survey dates were selected as samples to evaluate their suitability. All materials were assessed by three raters using the HONcode checklist and the Suitability Assessment of Materials (SAM) score sheet. Data analysis was performed with SPSS version 17.0 (SPSS Inc, Chicago, IL, USA) and Excel version 2013 (Microsoft Inc, Washington DC, USA). Results A total of 93 HWPAs and 210 of their released articles were included in this study. For six of the eight HONcode principles, the 93 HWPAs almost uniformly failed to meet the requirements. The HWPAs certified by Tencent Corporation (66/93, 71%) were generally slightly superior to those without such certification (27/93, 29%) in terms of compliance with HONcode principles. The mean SAM score for the 210 articles was 67.72 (SD 10.930), which indicated “adequate” suitability. There was no significant difference between the SAM scores of articles published by certified and uncertified HWPAs (P=.07), except in the literacy requirements dimension (t(97)=−2.418, P=.02). Conclusions The HWPAs had low HONcode conformity. Although the suitability of health information released by HWPAs was at a moderate level, problems remained, such as difficulty in tracing information sources, excessive implicit advertisements, and irregular use of charts. In addition, the low approval requirements for HWPAs were not conducive to improvement of their service quality.


2019 ◽  
Vol 32 (Supplement_1) ◽  
Author(s):  
B R O’Connor ◽  
E Doherty ◽  
F Friedmacher ◽  
L Vernon ◽  
T S Paran

Abstract Introduction Increasingly in pediatric surgical practice, patients, their parents, and surgeons alike use the Internet as an easily and quickly accessible source of information about conditions and their treatment. The quality and reliability of this information may often be unregulated. We aim to objectively assess the online information available relating to esophageal atresia and its management. Methods We performed searches for ‘oesophageal atresia’ and ‘esophageal atresia’ using the Google, Yahoo, and Bing engines to encompass both European and American spellings. We assessed the first 20 results of each search and excluded duplicates or unrelated pages. The DISCERN score and the Health on the Net Foundation Code (HONcode) toolbar were utilized to assess the quality of information on each website. We evaluated readability with the Flesch Reading Ease (FRE) score and the Flesch–Kincaid Grade (FKG) level. Results Of the original 120 hits, 61 were excluded (51 duplicates, 10 unrelated). Of the 59 individual sites reviewed, only 13 were HONcode approved. The mean overall DISCERN score was 52.55 (range: 22–78). The mean DISCERN score for the search term ‘oesophageal atresia’ was 57 (range: 22–78), compared with 59.03 for ‘esophageal atresia’ (range: 27–78). Google had the lowest overall mean DISCERN score at 54.83 (range: 35–78), followed by Yahoo at 58.03 (range: 22–78), with Bing having the highest overall mean score of 61.2 (range: 27–78). The majority of websites were graded excellent (≥63) or good (51–62), at 43% and 27%, respectively; 20% were scored as fair (39–50), with 10% being either poor (27–38) or very poor (≤26). In terms of readability, the overall Flesch Reading Ease score was 33.02, and the overall Flesch–Kincaid grade level was 10.3. Conclusions The quality of freely available online information relating to esophageal atresia is generally good, but the material may not be accessible to everyone because it is relatively difficult to read. We should direct parents towards comprehensive, high-quality, and easily readable information sources should they wish to supplement their knowledge about esophageal atresia and its management.


2020 ◽  
Author(s):  
Esam Halboub ◽  
Mohammed Sultan Al-Akhali ◽  
Hesham M Al-Mekhlafi ◽  
Mohammed Nasser Alhajj

Abstract Objective: The study sought to assess the quality and readability of web-based Arabic health information on COVID-19. Methods: Selected search engines were searched on 13 April 2020 for specific Arabic terms related to COVID-19. The first 100 consecutive websites from each engine were obtained. The quality of the websites was analyzed using the Health on the Net Foundation Code of Conduct (HONcode), the Journal of the American Medical Association (JAMA) benchmarks, and the DISCERN instrument. Readability was assessed using an online readability calculator. Results: Overall, 36 websites were found eligible for quality and readability analyses. Only one website (2.7%) was HONcode certified. No single website attained a high score based on the DISCERN tool; the mean score of all websites was 31.5±12.55. Regarding the JAMA benchmarks, the websites achieved a mean score of 2.08±1.05; however, only 4 (11.1%) websites met all JAMA criteria. The average readability scores were 7.2±7.5 for the Flesch-Kincaid Grade Level, 3.3±0.6 for SMOG, and 93.5±19.4 for Flesch Reading Ease. Conclusion: Most of the available web-based Arabic health information on COVID-19 does not reach the required level of quality, despite being easy for most of the general public to read and understand.


2018 ◽  
Vol 127 (7) ◽  
pp. 439-444 ◽  
Author(s):  
Nicole Leigh Aaronson ◽  
Johnathan Edward Castaño ◽  
Jeffrey P. Simons ◽  
Noel Jabbour

Objective: This study evaluates the quality and readability of websites on ankyloglossia, tongue tie, and frenulectomy. Methods: Google was queried with six search terms: tongue tie, tongue tie and breastfeeding, tongue tie and frenulectomy, ankyloglossia, ankyloglossia and breastfeeding, and ankyloglossia and frenulectomy. Website quality was assessed using the DISCERN instrument. Readability was evaluated using the Flesch-Kincaid Reading Grade Level, Flesch Reading Ease Score, and Fry readability formula. Correlations were calculated. Search terms were analyzed for frequency using Google Trends and the NCBI database. Results: Of a maximum of 80, the average DISCERN score for the websites was 65.7 (SD = 9.1, median = 65). The mean score for the Flesch-Kincaid Reading Grade Level was 11.6 (SD = 3.0, median = 10.7). Two websites (10%) were in the optimal range of 6 to 8. Google Trends showed tongue tie searches increasing in frequency, although the NCBI database showed a decrease in tongue tie articles. Conclusions: Most of the websites on ankyloglossia were of good quality; however, a majority were above the recommended reading level for public health information. Parents increasingly seek information on ankyloglossia online, while fewer investigators are publishing articles on this topic.


Information ◽  
2020 ◽  
Vol 11 (9) ◽  
pp. 430
Author(s):  
Derar Eleyan ◽  
Abed Othman ◽  
Amna Eleyan

Comments are used to explain the meaning of code and to ease communication between programmers themselves, quality assurance auditors, and code reviewers. A tool has been developed to help programmers write readable comments and to measure their readability level. It is used to enhance software readability by providing alternatives to both keywords and comment statements from a local database and an online dictionary. It also serves as a word-finding query engine for developers. Readability level is measured using three different formulas: the Gunning Fog Index, the Flesch Reading Ease score, and the Flesch–Kincaid Grade Level. A questionnaire was distributed to 42 programmers and 35 students to compare the readability of new comments written with the tool against the original comments written by previous programmers and developers. Programmers stated that the comments from the proposed tool had fewer complex words and took less time to read and understand. Nevertheless, this did not significantly affect the understandability of the text, as programmers normally have quite a high level of English. The results from students, however, show that the tool affects the understandability of the text and the time taken to read it, while the text-complexity results show that the tool produces new comment text that is more readable according to the three formulas studied.
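The workflow the abstract describes (take comment text and score it with standard readability formulas) can be illustrated with a small sketch. The snippet below is not the authors’ tool; it simply pulls ‘//’ and ‘#’ line comments out of a source string with a naive regex and scores them with the Gunning Fog Index, one of the three formulas the study uses.

```python
import re

def extract_comments(source: str) -> str:
    """Naively collect the text of '//' and '#' line comments."""
    return " ".join(re.findall(r"(?://|#)\s*(.+)", source))

def gunning_fog(text: str) -> float:
    """Gunning Fog Index: 0.4 * (words per sentence + % of 3+-syllable words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    complex_words = [w for w in words
                     if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

if __name__ == "__main__":
    snippet = """
    // Accumulate the running total before normalisation occurs.
    total = sum(values)  # Divide by the cardinality to obtain the mean.
    """
    print(round(gunning_fog(extract_comments(snippet)), 1))
```

Rewriting wordy comments with shorter synonyms, as the tool does, lowers the share of three-plus-syllable words and therefore the Fog score this sketch reports.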


2012 ◽  
Vol 83 (3) ◽  
pp. 500-506 ◽  
Author(s):  
Christos Livas ◽  
Konstantina Delli ◽  
Yijin Ren

ABSTRACT Objective: To investigate the quality of the data disseminated via the Internet regarding pain experienced by orthodontic patients. Materials and Methods: A systematic online search was performed for ‘orthodontic pain’ and ‘braces pain’ separately using five search engines. The first 25 results from each search term–engine combination were pooled for analysis. After excluding advertising sites, discussion groups, video feeds, and links to scientific articles, 25 Web pages were evaluated in terms of accuracy, readability, accessibility, usability, and reliability using recommended research methodology, reference textbook material, the Flesch Reading Ease Score, and the LIDA instrument. Author and information details were also recorded. Results: Overall, the results indicated variable quality of the available informational material. Although the readability of the Web sites was generally acceptable, the individual LIDA categories were rated of medium or low quality, with average scores ranging from 16.9% to 86.2%. The orthodontic relevance of the Web sites was not accompanied by the highest assessment results, and vice versa. Conclusions: The quality of the orthodontic pain information cited by Web sources appears to be highly variable. Further structural development of health information technology, along with public referral to reliable sources by specialists, is recommended.


2017 ◽  
Vol 33 (04) ◽  
pp. 428-433 ◽  
Author(s):  
Amar Gupta ◽  
Dennis Bojrab ◽  
Adam Folbe ◽  
Michael Carron ◽  
Michael Nissan

Abstract Health care providers should be aware of information available on the Internet to ensure proper patient care. The current analysis assesses the reliability, quality, and readability of internet information describing rhytidectomy. Previously validated survey instruments to assess the reliability, quality, and readability of online websites describing rhytidectomy were used. An internet search using Google with the search term “facelift” was conducted. The first 50 search results were reviewed, and 36 were deemed appropriate to be included in this analysis. Websites were divided based on type of authorship into professional organization, academic, physician based, and unidentified. The validated DISCERN instrument was used to determine reliability, quality, and overall rating of each site. The Flesch Reading Ease Score (FRES) and Flesch–Kincaid Grade Level (FKGL) were used to measure readability. A 1 to 3 point scale was used to rate websites, with a higher number indicating a website that possessed either greater reliability or greater quality. Mean scores for reliability ranged from 1.7 (±0.99) in the academic group to 2.0 (±0.12) in the unidentified group. Mean scores for quality ranged from 1.5 (±0.13) in the unidentified group to 1.7 (±0.38) in the physician-based group. The highest overall rating was 1.4 (±0.22 and ±0.31, respectively) in the unidentified and physician-based groups. The lowest overall rating was 1 (±0.58) in the academic group. FRESs ranged from 21.6 to 74.6. FKGLs ranged from 6.9 to 13.9. Information available online regarding rhytidectomy may be significantly deficient in reliability, quality, and readability. These deficiencies are present in articles with all types of author affiliations. This underscores the clinicians' duty to provide patients with high-quality information at an adequate level of comprehension.

