Consumer Choices Based on Online Ratings: The Diagnosticity of a Discrepant Rating

2016 ◽  
Author(s):  
Chan Jean Lee
Author(s):  
Lauren Rhue ◽  
Arun Sundararajan

2021 ◽  
pp. 000348942110059
Author(s):  
Krystyne Basa ◽  
Nicolette Jabbour ◽  
Matthew Rohlfing ◽  
Sarah Schmoker ◽  
Claire M. Lawlor ◽  
...  

Objectives: This study compares hospital-generated online ratings to patient-generated online ratings in academic otolaryngology and evaluates physician factors influencing these results. Methods: Websites of academic otolaryngologists were assessed for inclusion of hospital-generated Press Ganey surveys. Corresponding scores on Healthgrades and Vitals.com were identified via internet search. Hospital-generated ratings were compared with patient-generated ratings with respect to score, physician demographics, and number of ratings. All data were collected between July 15, 2019 and August 22, 2019. Results: 742 academic otolaryngologists with hospital-generated ratings were identified. The mean hospital-generated rating (4.70, 95% CI 4.69-4.72) was significantly higher than the patient-generated ratings (Vitals: 4.26, 95% CI 4.18-4.34; Healthgrades: 4.02, 95% CI 3.87-4.18; P < .001). Among patient-generated ratings, a higher number of ratings (>20) was associated with male gender, professor rank, and >30 years in practice (P < .005). Physician demographics did not affect the number of ratings in the hospital-generated setting. Among patient-generated ratings, a lower aggregate score was associated with professor rank (P = .001); among hospital-generated ratings, a lower score was associated with >30 years in practice (P = .023). Across all platforms, comprehensive otolaryngologists and neurotologists/otologists were rated lower than other subspecialties (PGS: P < .001, Vitals: P = .027, Healthgrades: P = .016). Conclusion: Hospital-generated ratings yield higher mean scores than patient-generated platforms. Between patient-generated sources, Healthgrades.com scores were lower than those on Vitals.com. Professors with >30 years in practice generated more reviews on patient-generated platforms, and these physicians were generally rated lower. Access to patient-generated ratings is universal, and physicians should be aware of variability between online rating platforms, as scores may affect referrals and practice patterns.
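As a rough illustration of the platform comparison reported in this abstract (not the authors' analysis), the sketch below computes a mean rating and a t-based 95% confidence interval per platform from hypothetical score arrays; the platform names, sample sizes, and rating distributions are assumptions.

```python
# Hypothetical illustration: mean ratings and 95% CIs per platform,
# loosely mirroring the comparison reported in the abstract above.
import numpy as np
from scipy import stats

def mean_with_ci(ratings, confidence=0.95):
    """Return the mean and a t-based confidence interval for a rating sample."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    sem = stats.sem(ratings)  # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2, df=len(ratings) - 1)
    return mean, (mean - half_width, mean + half_width)

# Assumed example data (not the study's data): 1-5 star ratings per source.
rng = np.random.default_rng(0)
platforms = {
    "hospital_press_ganey": rng.normal(4.7, 0.3, 742).clip(1, 5),
    "vitals":               rng.normal(4.3, 1.0, 742).clip(1, 5),
    "healthgrades":         rng.normal(4.0, 1.2, 742).clip(1, 5),
}

for name, scores in platforms.items():
    mean, (lo, hi) = mean_with_ci(scores)
    print(f"{name}: mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")

# A simple two-sample comparison, e.g., hospital-generated vs. Vitals scores.
t, p = stats.ttest_ind(platforms["hospital_press_ganey"], platforms["vitals"])
print(f"hospital vs. vitals: t={t:.2f}, p={p:.4f}")
```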


2021 ◽  
Vol 8 ◽  
pp. 237437352110077
Author(s):  
Daliah Wachs ◽  
Victoria Lorah ◽  
Allison Boynton ◽  
Amanda Hertzler ◽  
Brandon Nichols ◽  
...  

The purpose of this study was to explore patient perceptions of primary care providers and their offices relative to their physician’s philosophy (medical degree [MD] vs doctorate in osteopathic medicine [DO]), specialty (internal medicine vs family medicine), US region, and gender (male vs female). Using the Healthgrades website, the average satisfaction rating for the physician, office parameters, and wait time were collected and analyzed for 1267 physicians. We found that female doctors tended to have lower ratings in the Midwest, and that staff friendliness for female physicians was rated lower in the Northwest. In the Northeast, male and female MDs were rated more highly than DOs. Wait times varied regionally, with the Northeast and Northwest regions having the shortest wait times. Overall satisfaction was generally high for most physicians. Regional differences in the perception of a physician based on gender or degree may have roots in local culture, including proximity to a DO school, comfort with female physicians, and expectations for waiting times.
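A minimal sketch of the kind of region-by-gender comparison described above, using assumed column names and simulated ratings rather than the study's Healthgrades data.

```python
# Hypothetical region-by-gender comparison of physician satisfaction ratings.
# The schema, regions, and values are assumptions for illustration only.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
rows = []
for region in ["Midwest", "Northeast", "Northwest", "Southeast"]:
    for gender in ["F", "M"]:
        for rating in rng.normal(4.3, 0.5, 30).clip(1, 5):
            rows.append({"region": region, "gender": gender, "rating": rating})
df = pd.DataFrame(rows)

# Mean satisfaction per region and physician gender.
print(df.groupby(["region", "gender"])["rating"].mean().round(2))

# Within one region, compare ratings of female vs. male physicians.
midwest = df[df["region"] == "Midwest"]
t, p = stats.ttest_ind(
    midwest.loc[midwest["gender"] == "F", "rating"],
    midwest.loc[midwest["gender"] == "M", "rating"],
)
print(f"Midwest F vs. M physicians: t={t:.2f}, p={p:.3f}")
```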


Author(s):  
Charles L. Nagle ◽  
Ivana Rehman

Listener-based ratings have become a prominent means of defining second language (L2) users’ global speaking ability. In most cases, local listeners are recruited to evaluate speech samples in person. However, in many teaching and research contexts, recruiting local listeners may not be possible or advisable. The goal of this study was to hone a reliable method of recruiting listeners to evaluate L2 speech samples online through Amazon Mechanical Turk (AMT) using a blocked rating design. Three groups of listeners were recruited: local laboratory raters and two AMT groups, one inclusive of the dialects to which L2 speakers had been exposed and another inclusive of a variety of dialects. Reliability was assessed using intraclass correlation coefficients, Rasch models, and mixed-effects models. Results indicate that online ratings can be highly reliable as long as appropriate quality control measures are adopted. The method and results can guide future work with online samples.
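The abstract names intraclass correlation coefficients among its reliability measures. The sketch below shows one common variant, ICC(2,1) from Shrout and Fleiss (1979), applied to an assumed speakers-by-listeners matrix; it is illustrative only and not the authors' code or rating design.

```python
# Minimal ICC(2,1) computation for a (n_targets, n_raters) rating matrix,
# following the two-way random-effects formulation of Shrout & Fleiss (1979).
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for a (n_targets, n_raters) matrix of ratings."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per speech sample (target)
    col_means = x.mean(axis=0)   # per listener (rater)
    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Assumed toy data: 6 L2 speech samples rated by 4 online listeners on a 9-point scale.
ratings = np.array([
    [6, 7, 6, 7],
    [3, 4, 3, 3],
    [8, 8, 7, 9],
    [5, 5, 6, 5],
    [2, 3, 2, 2],
    [7, 6, 7, 8],
])
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```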


2011 ◽  
Vol 6 (2) ◽  
pp. 325-350
Author(s):  
Lee H. Wurm ◽  
Annmarie Cano ◽  
Diana A. Barenboym

Barenboym, Wurm, and Cano (2010) recently showed that significant differences emerged between ratings gathered online and in person. They also showed that researchers could reach different statistical conclusions in a regression analysis depending on whether the norms were gathered online or in person. The current study extends that research. For a set of 300 potential stimuli, familiarity ratings gathered online were significantly higher than those gathered in the lab, and the in-person ratings correlated significantly better with an existing database of familiarity values. It is also shown that, under three different grouping methods, online and in-person familiarity ratings produce different sets of stimuli. Finally, it is demonstrated that in each case different conclusions are reached about which variables have a significant relationship with familiarity. Simulations show that the effects are driven disproportionately by higher intra-item variability in the online ratings. Studies in which stimuli are grouped on the basis of ratings can therefore be affected by the choice of rating methodology.
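A hedged sketch of the mechanism the abstract attributes to its simulations: when per-item rating noise is higher online than in the lab, a median split on the two sets of norms places some items in different groups. The sample sizes, noise levels, and rating scale below are assumptions, not the original simulation settings.

```python
# Illustrative simulation: noisier online norms vs. lab norms for the same
# items lead a median split to assign some items to different groups.
import numpy as np

rng = np.random.default_rng(42)
n_items, n_raters = 300, 25
true_familiarity = rng.uniform(1, 7, n_items)   # latent item values

def observed_means(noise_sd):
    """Mean rating per item when each rater adds Gaussian noise of the given SD."""
    ratings = true_familiarity[:, None] + rng.normal(0, noise_sd, (n_items, n_raters))
    return ratings.clip(1, 7).mean(axis=1)

lab_means = observed_means(noise_sd=0.8)      # assumed lower intra-item variability
online_means = observed_means(noise_sd=1.6)   # assumed higher intra-item variability

# Median split into "low" vs. "high" familiarity under each set of norms.
lab_high = lab_means > np.median(lab_means)
online_high = online_means > np.median(online_means)

disagreement = np.mean(lab_high != online_high)
print(f"Items assigned to different familiarity groups: {disagreement:.1%}")
```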

