Exploring the Vast Choice of Question Prompt Lists Available to Health Consumers via Google: Environmental Scan

10.2196/17002 ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. e17002
Author(s):  
Marguerite Clare Tracy ◽  
Heather L Shepherd ◽  
Pinika Patel ◽  
Lyndal Jane Trevena

Background There is increasing interest in shared decision making (SDM) in Australia. Question prompt lists (QPLs) support question asking by patients, a key part of SDM. QPLs have been studied in a variety of settings, and increasingly the internet provides a source of suggested questions for patients. Environmental scans have been shown to be useful in assessing the availability and quality of online SDM tools. Objective This study aimed to assess the number and readability of QPLs available to users via Google.com.au. Methods Our environmental scan used search terms derived from literature and reputable websites to search for QPLs available via Google.com.au. Following removal of duplicates from the 4000 URLs and 22 reputable sites, inclusion and exclusion criteria were applied to create a list of unique QPLs. A sample of 20 QPLs was further assessed for list length, proxy measures of quality such as a date of review, and evidence of doctor endorsement. Readability of the sample QPL instructions and QPLs themselves was assessed using Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Results Our environmental scan identified 173 unique QPLs available to users. Lists ranged in length from 1 question to >200 questions. Of our sample, 50% (10/20) had a listed date of creation or update, and 60% (12/20) had evidence of authorship or source. Flesch-Kincaid Grade Level scores for instructions were higher than for the QPLs (grades 10.3 and 7.7, respectively). There was over a 1 grade difference between QPLs from reputable sites compared with other sites (grades 4.2 and 5.4, respectively). Conclusions People seeking questions to ask their doctor using Google.com.au encounter a vast number of question lists that they can use to prepare for consultations with their doctors. 
Markers of the quality or usefulness of various types of online QPLs, either surrogate or direct, have not yet been established, which makes it difficult to assess the value of the abundance of lists. Doctor endorsement of question asking has previously been shown to be an important factor in the effectiveness of QPLs, but information regarding this is not readily available online. Whether these diverse QPLs are endorsed by medical practitioners warrants further investigation.
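Nearly every study collected here reports Flesch Reading Ease and Flesch-Kincaid Grade Level scores. For readers unfamiliar with these metrics, the standard formulas can be sketched as follows; the word, sentence, and syllable counts below are illustrative inputs, not data from any of the studies:

```python
def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: roughly 0-100, higher means easier to read.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level: maps the same counts to a US school grade.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Illustrative passage: 100 words, 8 sentences, 140 syllables
print(round(flesch_reading_ease(100, 8, 140), 1))  # 75.7 ("fairly easy")
print(round(flesch_kincaid_grade(100, 8, 140), 1))  # 5.8, about 6th grade
```

Both formulas use only average sentence length and average syllables per word, which is why the two scores tend to move together, as in the results above.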


2020 ◽  
Vol 40 (11) ◽  
pp. NP636-NP642 ◽  
Author(s):  
Eric Barbarite ◽  
David Shaye ◽  
Samuel Oyer ◽  
Linda N Lee

Abstract Background In an era of widespread Internet access, patients increasingly look online for health information. Given the frequency with which cosmetic botulinum toxin injection is performed, there is a need to provide patients with high-quality information about this procedure. Objectives The aim of this study was to examine the quality of printed online education materials (POEMs) about cosmetic botulinum toxin. Methods An Internet search was performed to identify 32 websites of various authorship types. Materials were evaluated for accuracy and inclusion of key content points. Readability was measured by Flesch Reading Ease and Flesch-Kincaid Grade Level. Understandability and actionability were assessed with the Patient Education Materials Assessment Tool for Printed Materials. The effect of authorship was assessed by analysis of variance between groups. Results The mean [standard deviation] accuracy score among all POEMs was 4.2 [0.7], which represents an accuracy of 76% to 99%. Mean comprehensiveness was 47.0% [16.4%]. Mean Flesch-Kincaid Grade Level and Flesch Reading Ease scores were 10.7 [2.1] and 47.9 [10.0], respectively. Mean understandability and actionability were 62.8% [18.8%] and 36.2% [26.5%], respectively. There were no significant differences in accuracy (P > 0.2), comprehensiveness (P > 0.5), readability (P > 0.1), understandability (P > 0.3), or actionability (P > 0.2) by authorship. Conclusions There is wide variability in the quality of cosmetic botulinum toxin POEMs regardless of authorship type. The majority of materials are written above the recommended reading level and fail to include important content points. It is critical that providers take an active role in the evaluation and endorsement of online patient education materials.



Rheumatology ◽  
2020 ◽  
Vol 59 (Supplement_2) ◽  
Author(s):  
Kieran Murray ◽  
Timothy Murray ◽  
Candice Low ◽  
Anna O'Rourke ◽  
Douglas J Veale

Abstract Background Osteoarthritis is the most common cause of disability in people over 65 years old. The readability of online osteoarthritis information has never been assessed. A 2003 study found the quality of online osteoarthritis information to be poor. This study reviews the quality of online information regarding osteoarthritis in 2018 using three validated scoring systems. Readability is reviewed for the first time, again using three validated tools. Methods The term osteoarthritis was searched across the three most popular English language search engines. The first 25 pages from each search engine were analysed. Duplicate pages, websites featuring paid advertisements, inaccessible pages (behind a pay wall, not available for geographical reasons) and non-text pages were excluded. Readability was measured using Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL) and Gunning-Fog Index (GFI). Website quality was scored using the Journal of the American Medical Association (JAMA) benchmark criteria and DISCERN criteria. Presence or absence of HONcode certification, age of content, content producer and author characteristics were noted. Results A total of 37 unique websites were suitable for analysis. Readability varied by assessment tool from 8th to 12th grade level. This compares with the recommended 7th to 8th grade level. One (2.7%) website met all four JAMA criteria. Mean DISCERN quality of information for OA websites was “fair”, comparing favourably with the “poor” grading of a 2003 study. HONcode-endorsed websites (43.2%) were of statistically significantly higher quality. Conclusion Quality of online health information for OA is “fair”. Only 2.7% of websites met all four JAMA benchmark criteria. Readability was equal to or more difficult than the recommended level. HONcode certification was indicative of higher quality, but not readability. Disclosures K. Murray None. T. Murray None. C. Low None. A. O'Rourke None. D.J. Veale None.
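The Gunning-Fog Index used in this study alongside FRES and FKGL has a similarly simple closed form. A minimal sketch, with made-up counts for illustration:

```python
def gunning_fog(words, sentences, complex_words):
    # complex_words: words of three or more syllables.
    # The result approximates the years of schooling needed to follow the text.
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# Illustrative passage: 100 words, 8 sentences, 10 complex words
print(round(gunning_fog(100, 8, 10), 1))  # 9.0, roughly 9th grade
```

Because GFI penalizes polysyllabic words more steeply than FKGL, the same page can land a grade or two apart across tools, consistent with the 8th-to-12th-grade spread reported here.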



2020 ◽  
Author(s):  
Esam Halboub ◽  
Mohammed Sultan Al-Akhali ◽  
Hesham M Al-Mekhlafi ◽  
Mohammed Nasser Alhajj

Abstract Objective: The study sought to assess the quality and readability of web-based Arabic health information on COVID-19. Methods: Selected search engines were searched on 13 April 2020 for specific Arabic terms on COVID-19. The first 100 consecutive websites from each engine were obtained. The quality of the websites was analyzed using the Health on the Net Foundation Code of Conduct (HONcode), the Journal of the American Medical Association (JAMA) benchmarks, and the DISCERN instrument. Readability was assessed using an online readability calculator tool. Results: Overall, 36 websites were found eligible for quality and readability analyses. Only one website (2.7%) was HONcode certified. No single website attained a high score on the DISCERN tool; the mean score of all websites was 31.5±12.55. On the JAMA benchmarks, the websites achieved a mean score of 2.08±1.05, and only 4 (11.1%) websites met all JAMA criteria. The mean readability scores were 7.2±7.5 for the Flesch-Kincaid Grade Level, 3.3±0.6 for SMOG, and 93.5±19.4 for Flesch Reading Ease. Conclusion: Most of the available web-based Arabic health information on COVID-19 does not meet the required level of quality, despite being easy to read and understand for most of the general public.
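The SMOG grade reported in this study also has a standard closed form. It was developed for English text, so the online calculator the authors used may adapt the syllable counting for Arabic; the sketch below uses the English formula with made-up counts:

```python
import math

def smog_grade(polysyllables, sentences):
    # polysyllables: count of words with three or more syllables,
    # ideally taken from a sample of 30 sentences.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Illustrative sample: 30 polysyllabic words across 30 sentences
print(round(smog_grade(30, 30), 1))  # about grade 8.8
```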



2018 ◽  
Vol 127 (7) ◽  
pp. 439-444 ◽  
Author(s):  
Nicole Leigh Aaronson ◽  
Johnathan Edward Castaño ◽  
Jeffrey P. Simons ◽  
Noel Jabbour

Objective: This study evaluates the quality and readability of websites on ankyloglossia, tongue tie, and frenulectomy. Methods: Google was queried with six search terms: tongue tie, tongue tie and breastfeeding, tongue tie and frenulectomy, ankyloglossia, ankyloglossia and breastfeeding, and ankyloglossia and frenulectomy. Website quality was assessed using the DISCERN instrument. Readability was evaluated using the Flesch-Kincaid Reading Grade Level, Flesch Reading Ease Score, and Fry readability formula. Correlations were calculated. Search terms were analyzed for frequency using Google Trends and the NCBI database. Results: Of a maximum of 80, the average DISCERN score for the websites was 65.7 (SD = 9.1, median = 65). Mean score for the Flesch-Kincaid Reading Grade Level was 11.6 (SD = 3.0, median = 10.7). Two websites (10%) were in the optimal range of 6 to 8. Google Trends shows tongue tie searches increasing in frequency, although the NCBI database showed a decrease in tongue tie articles. Conclusions: Most of the websites on ankyloglossia were of good quality; however, a majority were above the recommended reading level for public health information. Parents increasingly seek information on ankyloglossia online, while fewer investigators are publishing articles on this topic.



2020 ◽  
pp. 019459982096915
Author(s):  
Lena W. Chen ◽  
Vandra Chatrice Harris ◽  
Justin Lee Jia ◽  
Deborah Xingchun Xie ◽  
Ralph Patrick Tufano ◽  
...  

Objective Thyroidectomy is one of the most common procedures performed in head and neck surgery. The quality of online resources for thyroidectomy is unknown. We aim to evaluate search trends and online resource quality regarding thyroidectomy. Study Design Cross-sectional analysis. Setting Websites appearing on Google search. Methods The first 30 Google websites for thyroidectomy were reviewed, excluding research, video, and restricted sites. Search patterns were obtained with Google Trends. Quality was measured by readability (Flesch Reading Ease and Flesch-Kincaid Grade Level), understandability and actionability (Patient Education Materials Assessment Tool), and clinical practice guideline (CPG) compatibility. Fleiss kappa interrater reliability analysis was performed for 2 raters. Results Twenty-one sites were evaluated. Search popularity for thyroidectomy has increased since 2004. Median reading ease was 42.2 (range, 15.4-62.7) on a scale from 1 to 100, with 100 indicating maximum readability. Median reading grade level was 12 (range, 7-16). Thyroidectomy resources were poorly understandable (median, 66%; range, 21%-88%) and actionable (median, 10%; range, 0%-60%). Median CPG compatibility was 4 out of 5 (range, 0-5). Interrater reliability ranged from substantial to moderate for understandability (0.78), actionability (0.57), and CPG compatibility (0.58), with P < .05 for all results. Conclusion Online resources about thyroidectomy vary in quality and reliability and are written at grade levels above the average reading level of the public. Providers should be aware of existing resources and work to create education resources that meet universal health literacy guidelines. The framework provided in this article may also serve as a guide and provide tangible steps that providers can take to help patients access care.



10.2196/12855 ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. e12855 ◽  
Author(s):  
Kieran Edward Murray ◽  
Timothy Eanna Murray ◽  
Anna Caroline O'Rourke ◽  
Candice Low ◽  
Douglas James Veale

Background Osteoarthritis (OA) is the most common cause of disability in people older than 65 years. Readability of online OA information has never been assessed. A 2003 study found the quality of online OA information to be poor. Objective The aim of this study was to review the readability and quality of current online information regarding OA. Methods The term osteoarthritis was searched across the three most popular English language search engines. The first 25 pages from each search engine were analyzed. Duplicate pages, websites featuring paid advertisements, inaccessible pages (behind a pay wall, not available for geographical reasons), and nontext pages were excluded. Readability was measured using Flesch Reading Ease Score, Flesch-Kincaid Grade Level, and Gunning-Fog Index. Website quality was scored using the Journal of the American Medical Association (JAMA) benchmark criteria and the DISCERN criteria. Presence or absence of the Health On the Net Foundation Code of Conduct (HONcode) certification, age of content, content producer, and author characteristics were noted. Results A total of 37 unique websites were found suitable for analysis. Readability varied by assessment tool from 8th to 12th grade level. This compares with the recommended 7th to 8th grade level. Of the 37, 1 (2.7%) website met all 4 JAMA criteria. Mean DISCERN quality of information for OA websites was “fair,” compared with the “poor” grading of a 2003 study. HONcode-endorsed websites (43%, 16/37) were of statistically significantly higher quality. Conclusions Readability of online health information for OA was either equal to or more difficult than the recommended level.



10.2196/18076 ◽  
2020 ◽  
Vol 6 (1) ◽  
pp. e18076
Author(s):  
Michael Yacob ◽  
Shamim Lotfi ◽  
Shannon Tang ◽  
Prasad Jetty

Background Medical students commonly refer to Wikipedia as their preferred online resource for medical information. The quality and readability of articles about common vascular disorders on Wikipedia has not been evaluated or compared against a standard textbook of surgery. Objective The aims of this study were to (1) compare the quality of Wikipedia articles to that of equivalent chapters in a standard undergraduate medical textbook of surgery, (2) identify any errors of omission in either resource, and (3) compare the readability of both resources using validated ease-of-reading and grade-level tools. Methods Using the Medical Council of Canada Objectives for the Qualifying Examination, 8 fundamental topics of vascular surgery were chosen. The articles were found on Wikipedia using Wikipedia’s native search engine. The equivalent chapters were identified in Schwartz Principles of Surgery (ninth edition). Medical learners (n=2) assessed each of the texts on their original platforms to independently evaluate readability, quality, and errors of omission. Readability was evaluated with Flesch Reading Ease scores and 5 grade-level scores (Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Simple Measure of Gobbledygook Index, and Automated Readability Index), quality was evaluated using the DISCERN instrument, and errors of omission were evaluated using a standardized scoring system that was designed by the authors. Results Flesch Reading Ease scores suggested that Wikipedia (mean 30.5; SD 8.4) was significantly easier to read (P=.03) than Schwartz (mean 20.2; SD 9.0). The mean grade level (calculated using all grade-level indices) of the Wikipedia articles (mean 14.2; SD 1.3) was significantly lower (P=.02) than that of Schwartz (mean 15.9; SD 1.4).
The quality of the text was also assessed using the DISCERN instrument and suggested that Schwartz (mean 71.4; SD 3.1) had a significantly higher quality (P=.002) compared to that of Wikipedia (mean 52.9; SD 11.4). Finally, the Wikipedia errors of omission score (mean 12.5; SD 6.8) was lower than that of Schwartz (mean 21.3; SD 1.9), indicating that there were significantly fewer errors of omission in the surgical textbook (P=.008). Conclusions Online resources are increasingly easier to access but can vary in quality. Based on this comparison, the authors of this study recommend the use of vascular surgery textbooks as a primary source of learning material because the information within is more consistent in quality and has fewer errors of omission. Wikipedia can be a useful resource for quick reference, particularly because of its ease of reading, but its vascular surgery articles require further development.
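Two of the five grade-level indices used in this comparison, the Coleman-Liau Index and the Automated Readability Index, rely on character counts rather than syllable counts, which makes them easy to compute automatically. A sketch with illustrative counts, not data from the study:

```python
def coleman_liau(letters, words, sentences):
    # L = mean letters per 100 words, S = mean sentences per 100 words.
    L = letters / words * 100
    S = sentences / words * 100
    return 0.0588 * L - 0.296 * S - 15.8

def automated_readability(characters, words, sentences):
    # Characters per word and words per sentence, mapped to a US grade.
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

# Illustrative passage: 450 letters, 100 words, 8 sentences
print(round(coleman_liau(450, 100, 8), 1))           # about grade 8.3
print(round(automated_readability(450, 100, 8), 1))  # about grade 6.0
```

Averaging several such indices, as this study does, smooths over the disagreements that individual formulas show on the same text.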



2011 ◽  
Vol 15 (5) ◽  
pp. 885-893 ◽  
Author(s):  
Reiko Hirasawa ◽  
Kazumi Saito ◽  
Yoko Yachi ◽  
Yoko Ibe ◽  
Satoru Kodama ◽  
...  

Abstract Objective The present study aimed to evaluate the quality of Internet information on the Mediterranean diet and to determine the relationship between the quality of information and the website source. Design Website sources were categorized as institutional, pharmaceutical, non-pharmaceutical commercial, charitable, support and alternative medicine. Content quality was evaluated using the DISCERN rating instrument, the Health On the Net Foundation's (HON) code principles, and Journal of the American Medical Association (JAMA) benchmarks. Readability was graded by the Flesch Reading Ease score and Flesch-Kincaid Grade Level score. Setting The phrase ‘Mediterranean diet’ was entered as a search term into the six most commonly used English-language search engines. Subjects The first thirty websites returned by each engine were examined. Results Of the 180 websites identified, thirty-two met our inclusion criteria. Distribution of the website sources was: institutional, n=8 (25%); non-pharmaceutical commercial, n=12 (38%); and support, n=12 (38%). As evaluated by DISCERN, thirty-one of the thirty-two websites were rated as fair to very poor. Non-pharmaceutical commercial sites scored significantly lower than institutional and support sites (P=0.002). The mean Flesch Reading Ease score and mean Flesch-Kincaid Grade Level were 55.9 (fairly difficult) and 7.2, respectively. The Flesch-Kincaid Grade Level score determines the difficulty of material by measuring the length of words and sentences and converting the results into a grade level ranging from 0 to 12 (US grade level). Conclusions Due to the poor quality of website information on the Mediterranean diet, patients or consumers who are interested in the Mediterranean diet should seek advice from physicians or dietitians.


