Spelling Errors and Shouting Capitalization Lead to Additive Penalties to Trustworthiness of Online Health Information: Randomized Experiment With Laypersons (Preprint)

2019 ◽  
Author(s):  
Harry J Witchel ◽  
Georgina A Thompson ◽  
Christopher I Jones ◽  
Carina E I Westling ◽  
Juan Romero ◽  
...  

BACKGROUND The written format and literacy competence of screen-based texts can interfere with the perceived trustworthiness of health information in online forums, independent of the semantic content. Unlike in professional content, the format in unmoderated forums can regularly hint at incivility, perceived as deliberate rudeness or casual disregard toward the reader, for example, through spelling errors and unnecessary emphatic capitalization of whole words (online shouting). OBJECTIVE This study aimed to quantify the comparative effects of spelling errors and inappropriate capitalization on ratings of trustworthiness independently of lay insight and to determine whether these changes act synergistically or additively on the ratings. METHODS In web-based experiments, 301 UK-recruited participants rated 36 randomized short stimulus excerpts (in the format of information from an unmoderated health forum about multiple sclerosis) for trustworthiness using a semantic differential slider. A total of 9 control excerpts were compared with matching error-containing excerpts. Each matching error-containing excerpt included 5 instances of misspelling, or 5 instances of inappropriate capitalization (shouting), or a combination of 5 misspelling plus 5 inappropriate capitalization errors. Data were analyzed in a linear mixed effects model. RESULTS The mean trustworthiness ratings of the control excerpts ranged from 32.59 to 62.31 (rating scale 0-100). Compared with the control excerpts, excerpts containing only misspellings were rated as being 8.86 points less trustworthy, those containing inappropriate capitalization were rated as 6.41 points less trustworthy, and those containing the combination of misspelling and capitalization were rated as 14.33 points less trustworthy (P<.001 for all). Misspelling and inappropriate capitalization show an additive effect.
CONCLUSIONS Distinct indicators of incivility independently and additively penalize the perceived trustworthiness of online text independently of lay insight, eliciting a medium effect size.
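The additivity claim can be checked directly from the abstract's point estimates: a minimal sketch (in Python) comparing the sum of the two single-error penalties against the observed combined penalty. The original analysis was a linear mixed effects model; this only illustrates the arithmetic behind "additive, not synergistic."

```python
# Trustworthiness penalties reported in the abstract (0-100 rating scale)
misspelling_penalty = 8.86
capitalization_penalty = 6.41
combined_penalty = 14.33

# Under a purely additive model, the combined penalty should equal the sum
predicted_combined = misspelling_penalty + capitalization_penalty

# Deviation from strict additivity is under 1 point, small relative to either
# main effect, consistent with the reported additive interpretation
deviation = predicted_combined - combined_penalty
print(round(predicted_combined, 2), round(deviation, 2))
```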

10.2196/15171 ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. e15171


Author(s):  
Andrew Pilny ◽  
C. Joseph Huber

Contact tracing is one of the oldest social network health interventions used to reduce the diffusion of various infectious diseases. However, some infectious diseases, like COVID-19, spread at such a scale that traditional methods of conducting contact tracing (e.g., face-to-face interviews) are difficult to implement, pointing to the need for reliable and valid survey approaches. The purpose of this research is to test the effectiveness of three different egocentric survey methods for extracting contact tracing data: (1) a baseline approach, (2) a retrieval cue approach, and (3) a context-based approach. A sample of 397 college students was randomized into one condition each. They were prompted to anonymously provide contacts and populated places visited during the past four days, depending on the condition they were given. After controlling for various demographic, social identity, psychological, and physiological variables, participants in the context-based condition recalled significantly more contacts (medium effect size) and places (large effect size) than those in the other two conditions. Theoretically, the research supports suggestions by field theory that network recall can be significantly improved by activating relevant activity foci. Practically, the research contributes to the development of innovative social network data collection methods for contact tracing survey instruments.


2016 ◽  
Vol 56 (4) ◽  
pp. 482-495
Author(s):  
Ilona Pezenka

Destination image is among the most studied constructs in tourism research. Many researchers are still convinced that the rating scale method is the most accurate for assessing destination image. This study presents alternative methods of data collection, namely, free-sorting and reduced paired comparisons, and investigates their applicability in a Web-based environment. The study then subjects these data collection methods to empirical analysis and compares the judgment task’s effects on perceived difficulty, fatigue, and boredom, on data quality, and on perceptual maps derived with MDS. The findings demonstrate that these methods are more accurate whenever a large number of objects have to be judged, which is particularly the case for positioning and competitiveness studies.
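Part of the case for free-sorting and reduced paired comparisons is combinatorial: a complete paired-comparison design requires judging every pair of objects, which grows quadratically with the number of objects. A minimal sketch (illustrative object counts, not the study's stimuli):

```python
from math import comb

def full_pairs(n: int) -> int:
    """Number of judgments in a complete paired-comparison design: n*(n-1)/2."""
    return comb(n, 2)

# A handful of destinations is manageable; a large competitive set quickly is not,
# which is why reduced designs matter for positioning studies
for n in (5, 10, 20, 40):
    print(n, full_pairs(n))
```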


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Laura Tucker ◽  
Alan Cuevas Villagomez ◽  
Tamar Krishnamurti

Abstract Background The United States is currently facing a maternal morbidity and mortality crisis, with the highest rates of any resource-rich nation. In efforts to address this, new guidelines for postpartum care suggest that mobile health (mHealth) apps can help provide complementary clinical support for new mothers during the postpartum period. However, to date no study has evaluated the quality of existing mHealth tools targeted to this time period in terms of sufficiency of maternal health information, inclusivity of people of color, and app usability. Methods Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards were used to review the peripartum apps from the Apple and Google Play stores in either the Health/Fitness, Medical, or Education categories. Apps were evaluated for extent and quality of maternal health information and inclusivity of people of color using an a priori coding scheme. App usability was evaluated using the Mobile Application Rating Scale (MARS) score. Results Of the 301 apps from the Apple and Google Play stores, 25 met criteria for final evaluation. Of the 30 maternal health topics coded for, the median number addressed by apps was 19.5 (65%). Peripartum behaviors were more frequently addressed than peripartum outpatient care topics and peripartum acute health risks. The coverage of maternal health information and inclusivity of people of color in app imagery both correlated positively with the MARS usability score of the app. Only 8 apps (32%) portrayed more than 24% of images showing people of color (24% being the percentage of non-white Americans according to 2019 census estimates). There was no correlation between MARS usability score and number of app users, as estimated by the number of ratings for the app available on the app store. In addition, apps with evidence-based maternal health information had greater MARS engagement, information, and aesthetics scores. However, presence of evidence-based information did not correlate with greater numbers of app users. Conclusions Current commercially available peripartum apps range widely in quality. Overall, current app offerings generally do not provide adequate maternal health information and are not optimally accessible to the target users in terms of inclusivity of women of color or app usability. Apps that deliver evidence-based information with a more usable design are more likely to meet these standards but are no more likely to be downloaded by users.


10.2196/16148 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e16148
Author(s):  
Antonia Barke ◽  
Bettina K Doering

Background People often search the internet to obtain health-related information not only for themselves but also for family members and, in particular, their children. However, for a minority of parents, such searches may become excessive and distressing. Little is known about excessive web-based searching by parents for information regarding their children’s health. Objective This study aimed to develop and validate an instrument designed to assess parents' web-based health information searching behavior, the Children’s Health Internet Research, Parental Inventory (CHIRPI). Methods A pilot survey was used to establish the instrument (21 items). CHIRPI was validated online in a second sample (372/384, 96.9% mothers; mean age 32.7 years, SD 5.8). Item analyses, an exploratory factor analysis (EFA), and correlations with parents’ perception of their children’s health-related vulnerability (Child Vulnerability Scale, CVS), parental health anxiety (modified short Health Anxiety Inventory, mSHAI), and parental cyberchondria (Cyberchondria Severity Scale, CSS-15) were calculated. A subset of participants (n=73) provided retest data after 4 weeks. CHIRPI scores (total scores and subscale scores) of parents with a chronically ill child and parents who perceived their child to be vulnerable (CVS+; CVS>10) were compared with 2×2 analyses of variances (ANOVAs) with the factors Child’s Health Status (chronically ill vs healthy) and perceived vulnerability (CVS+ vs CVS−). Results CHIRPI’s internal consistency was standardized alpha=.89. The EFA identified three subscales: Symptom Focus (standardized alpha=.87), Implementing Advice (standardized alpha=.74) and Distress (standardized alpha=.89). The retest reliability of CHIRPI was measured as rtt=0.78. CHIRPI correlated strongly with CSS-15 (r=0.66) and mSHAI (r=0.39). 
The ANOVAs comparing the CHIRPI total score and the subscale scores for parents having a chronically ill child and parents perceiving their child as vulnerable revealed main effects of perceiving one’s child as vulnerable but not of having a chronically ill child. No interactions were found. This pattern was observed for the CHIRPI total score (η²=0.053) and each subscale (Symptom Focus η²=0.012; Distress η²=0.113; and Implementing Advice η²=0.018). Conclusions The psychometric properties of CHIRPI are excellent. Correlations with mSHAI and CSS-15 indicate its validity. CHIRPI appears to be differentially sensitive to excessive searches owing to parents perceiving their child’s health to be vulnerable rather than to higher informational needs of parents with chronically ill children. Therefore, it may help to identify parents who search excessively for web-based health information. CHIRPI (and, in particular, the Distress subscale) seems to capture a pattern of anxious health-related cognitions, emotions, and behaviors that parents also apply to their children.


2018 ◽  
Author(s):  
Kathleen Wade Reardon ◽  
Avante J Smack ◽  
Kathrin Herzhoff ◽  
Jennifer L Tackett

Although an emphasis on adequate sample size and statistical power has a long history in clinical psychological science (Cohen, 1992), increased attention to the replicability of scientific findings has renewed attention to the importance of statistical power (Bakker, van Dijk, & Wicherts, 2012). These recent efforts have not yet circled back to modern clinical psychological research, despite the continued importance of sample size and power in producing a credible body of evidence. As one step in this process of scientific self-examination, the present study estimated an N-pact Factor (the statistical power of published empirical studies to detect typical effect sizes; Fraley & Vazire, 2014) in two leading clinical journals (the Journal of Abnormal Psychology [JAP] and the Journal of Consulting and Clinical Psychology [JCCP]) for the years 2000, 2005, 2010, and 2015. Study sample size, as one proxy for statistical power, is a useful focus because it allows direct comparisons with other subfields and may highlight some of the core methodological differences between clinical and other areas (e.g., hard-to-reach populations, greater emphasis on correlational designs). We found that, across all years examined, the average median sample size in clinical research is 179 participants (175 for JAP and 182 for JCCP). The power to detect a small-medium effect size of .20 is just below 80% for both journals. Although the clinical N-pact Factor was higher than that estimated for social psychology, the statistical power in clinical journals is still limited to detect many effects of interest to clinical psychologists, with little evidence of improvement in sample sizes over time.
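The "just below 80%" figure can be approximated from the reported median sample size alone. A sketch using the Fisher z normal approximation for the power to detect a correlation of r = .20 at two-tailed α = .05 with N = 179 (an approximation; the authors' exact power calculation may differ):

```python
from math import atanh, erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_for_r(r: float, n: int, z_crit: float = 1.959964) -> float:
    """Approximate two-tailed power to detect correlation r with n observations,
    via the Fisher z transform, whose standard error is 1/sqrt(n - 3)."""
    z_r = atanh(r)
    return phi(abs(z_r) * sqrt(n - 3) - z_crit)

# Median clinical sample size reported in the abstract
print(round(power_for_r(0.20, 179), 3))  # lands just below 0.80
```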


10.2196/17349 ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. e17349
Author(s):  
Aijing Luo ◽  
Zirui Xin ◽  
Yifeng Yuan ◽  
Tingxiao Wen ◽  
Wenzhao Xie ◽  
...  

Background With the rapid development of online health communities, increasing numbers of patients and families are seeking health information on the internet. Objective This study aimed to discuss how to fully reveal the health information needs expressed by patients with hypertension in their questions in a web-based environment and how to use the internet to help patients with hypertension receive personalized health education. Methods This study randomly selected 1000 text records from the question data of patients with hypertension from 2008 to 2018 collected from Good Doctor Online and constructed a classification system through literature research and content analysis. This paper identified the background characteristics and questioning intention of each patient with hypertension based on the patient’s question and used co-occurrence network analysis and the k-means clustering method to explore the features of the health information needs of patients with hypertension. Results The classification system for the health information needs of patients with hypertension included the following nine dimensions: drugs (355 names), symptoms and signs (395 names), tests and examinations (545 names), demographic data (526 kinds), diseases (80 names), risk factors (37 names), emotions (43 kinds), lifestyles (6 kinds), and questions (49 kinds). There were several characteristics of the explored web-based health information needs of patients with hypertension. First, more than 49% of patients described features, such as drugs, symptoms and signs, tests and examinations, demographic data, and diseases. Second, patients with hypertension were most concerned about treatment (778/1000, 77.80%), followed by diagnosis (323/1000, 32.30%). Third, 65.80% (658/1000) of patients asked physicians several questions at the same time. 
Moreover, 28.30% (283/1000) of patients were very concerned about how to adjust the medication, and they asked other treatment-related questions at the same time, including drug side effects, whether to take the drugs, how to treat the disease, etc. Furthermore, 17.60% (176/1000) of patients consulted physicians about the causes of clinical findings, including the relationship between the clinical findings and a disease, the treatment of a disease, and medications and examinations. Fourth, by k-means clustering, the questioning intentions of patients with hypertension were classified into the following seven categories: “how to adjust medication,” “what to do,” “how to treat,” “phenomenon explanation,” “test and examination,” “disease diagnosis,” and “disease prognosis.” Conclusions In a web-based environment, the health information needs expressed by Chinese patients with hypertension to physicians are common and distinct, that is, patients with different background features ask relatively common questions to physicians. The classification system constructed in this study can provide guidance to health information service providers for the construction of web-based health resources, as well as guidance for patient education, which could help solve the problem of information asymmetry in communication between physicians and patients.
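The co-occurrence step in the methods above can be illustrated with a toy example (hypothetical questions and category labels; the study used its own nine-dimension coding scheme and a much larger corpus). Each patient question is tagged with categories, and pair counts become edge weights of a co-occurrence network:

```python
from collections import Counter
from itertools import combinations

# Hypothetical annotated questions: category tags coded for each patient question
annotated_questions = [
    {"drugs", "symptoms", "treatment"},
    {"drugs", "treatment"},
    {"tests", "diagnosis"},
    {"drugs", "symptoms"},
]

# Count how often each pair of categories appears in the same question;
# these counts weight the edges of a co-occurrence network
cooccurrence = Counter()
for tags in annotated_questions:
    for pair in combinations(sorted(tags), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("drugs", "treatment")])  # co-occur in 2 of the 4 toy questions
```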


1969 ◽  
Vol 40 (3) ◽  
pp. 259-270
Author(s):  
Carlos Ruiz ◽  
Claudia Gaviria ◽  
Miguel Gaitán ◽  
Rubén Manrique ◽  
Ángela Zuluaga ◽  
...  

Introduction: Implementation of teledermatology in primary care offers the possibility of providing specialist dermatologic care to patients in remote locations where these services are seldom available. It is a priority to implement teledermatology services that demonstrate diagnostic reliability and satisfaction among users. Objectives and methods: To measure the diagnostic reliability of an asynchronous web-based teledermatology application by means of intraobserver and interobserver concordance between teleconsultation and traditional in-person («face to face») consultation, and to evaluate user satisfaction with the teleconsultation and the web application. Results: A sample of 82 patients with 172 dermatologic diagnoses was obtained, in which an intraobserver concordance between 80.8% and 86.6% and an interobserver concordance between 77.3% and 79.6% were found. Satisfaction was evaluated at an average of 92.5%. Conclusions: The reliability of teleconsultation in teledermatology is shown to be high and can be improved through the implementation of health information standards and digital dermatologic photography protocols.
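Concordance figures of this kind reduce to percent agreement between paired diagnoses. A minimal sketch (hypothetical diagnosis lists; the study's exact statistic may include refinements such as chance correction):

```python
def percent_agreement(dx_a, dx_b):
    """Share of cases in which two raters (or two visits by the same rater,
    for intraobserver concordance) assign the same diagnosis."""
    if len(dx_a) != len(dx_b):
        raise ValueError("diagnosis lists must be paired case-by-case")
    matches = sum(a == b for a, b in zip(dx_a, dx_b))
    return 100.0 * matches / len(dx_a)

# Hypothetical: teleconsultation vs face-to-face diagnoses for 5 cases
tele = ["psoriasis", "eczema", "acne", "melanoma", "eczema"]
face = ["psoriasis", "eczema", "acne", "nevus", "eczema"]
print(percent_agreement(tele, face))  # 4 of 5 cases agree: 80.0
```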


2021 ◽  
Author(s):  
Xiaochun Han ◽  
Yoni K. Ashar ◽  
Philip Kragel ◽  
Bogdan Petre ◽  
Victoria Schelkun ◽  
...  

Identifying biomarkers that predict mental states with large effect sizes and high test-retest reliability is a growing priority for fMRI research. We examined a well-established multivariate brain measure that tracks pain induced by nociceptive input, the Neurologic Pain Signature (NPS). In N = 295 participants across eight studies, NPS responses showed a very large effect size in predicting within-person single-trial pain reports (d = 1.45) and medium effect size in predicting individual differences in pain reports (d = 0.49, average r = 0.20). The NPS showed excellent short-term (within-day) test-retest reliability (ICC = 0.84, with average 69.5 trials/person). Reliability scaled with the number of trials within-person, with ≥60 trials required for excellent test-retest reliability. Reliability was comparable in two additional studies across 5-day (N = 29, ICC = 0.74, 30 trials/person) and 1-month (N = 40, ICC = 0.46, 5 trials/person) test-retest intervals. The combination of strong within-person correlations and only modest between-person correlations between the NPS and pain reports indicates that the two measures have different sources of between-person variance. The NPS is not a surrogate for individual differences in pain reports, but can serve as a reliable measure of pain-related physiology and mechanistic target for interventions.
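The reported scaling of reliability with trial count is what averaging over repeated trials predicts. A sketch assuming Spearman-Brown scaling (an assumption for illustration; the paper estimates reliability empirically): inverting the formula at ICC = 0.84 with ~69.5 trials implies a single-trial reliability of roughly 0.07.

```python
def single_trial_reliability(icc_k: float, k: float) -> float:
    """Invert the Spearman-Brown formula ICC_k = k*r1 / (1 + (k-1)*r1) for r1."""
    return icc_k / (k - (k - 1) * icc_k)

def average_reliability(r1: float, k: float) -> float:
    """Spearman-Brown: reliability of the mean of k trials with single-trial r1."""
    return k * r1 / (1 + (k - 1) * r1)

# Reported short-term reliability: ICC = 0.84 from an average of 69.5 trials/person
r1 = single_trial_reliability(0.84, 69.5)
print(round(r1, 3))                             # implied single-trial reliability
print(round(average_reliability(r1, 69.5), 2))  # recovers the reported ICC
```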

