Online Ratings for Vascular Interventional Proceduralists Vary by Physician Specialty

2021 ◽  
Vol 70 ◽  
pp. 27-35
Author(s):  
Zachary J. Wanken ◽  
John B. Rode ◽  
Sarah Y. Bessen ◽  
Peter B. Anderson ◽  
J. Aaron Barnes ◽  
...  
2020 ◽  
Vol 63 ◽  
pp. 24
Author(s):  
Zachary J. Wanken ◽  
John B. Rode ◽  
Sarah Y. Bessen ◽  
Peter B. Anderson ◽  
J. Aaron Barnes ◽  
...  

2021 ◽  
Author(s):  
Danish Saifee ◽  
Matthew Hudnall ◽  
Uzma Raja

BACKGROUND Online reviews of physicians have become increasingly popular among healthcare consumers since the early 2010s. One factor that can potentially influence these reviews is physician gender, which has been found to influence patient-physician communication. When studying the direct relationship between physician gender and online reviews, it is important to account for clinical characteristics, such as patient risk, associated with a physician in order to isolate the direct effect of gender on reviews. It is also important to account for temporal factors that can influence physicians and their online reviews. Our study is among the first to conduct a rigorous longitudinal analysis of the effect of physician gender on online reviews after accounting for several important clinical factors, including patient risk and physician specialty, as well as temporal factors via time fixed effects. It is also among the first to study possible gender bias in online reviews using statewide data from Alabama.

OBJECTIVE This study conducts a longitudinal empirical investigation of the relationship between physician gender and online reviews using data from across the state of Alabama, after accounting for patient risk and temporal effects.

METHODS We created a unique dataset by combining online physician reviews from RateMDs, a popular physician review website, with clinical data from the Centers for Medicare & Medicaid Services (CMS) for the state of Alabama. We used longitudinal econometric specifications and controlled for several important clinical and review characteristics, including patient risk, physician specialty, and latent topics embedded in the textual comments of the online reviews. The four rating dimensions from RateMDs (helpfulness, knowledge, staff, and punctuality) and the overall rating served as the dependent variables, with physician gender as the key explanatory variable in our panel regression models.

RESULTS The panel used for most of the analysis included approximately 1093 physicians. After controlling for clinical factors (Medicare patient risk, number of Medicare beneficiaries, number of services provided, and physician specialty), review factors (latent topics embedded in the review comments and number of words in the comments), and year fixed effects, the physician random-effects specifications showed that male physicians receive better online ratings than female physicians. The coefficients, standard errors, and P values of the binary variable GenderFemale (1 for female physicians, 0 otherwise) for each rating outcome are as follows: OverallRating (Coefficient: -0.194, Std. Error: 0.060, P=.001), HelpfulnessRating (Coefficient: -0.221, Std. Error: 0.069, P=.001), KnowledgeRating (Coefficient: -0.230, Std. Error: 0.065, P<.001), StaffRating (Coefficient: -0.123, Std. Error: 0.062, P=.049), and PunctualityRating (Coefficient: -0.200, Std. Error: 0.067, P=.003).

CONCLUSIONS This study finds that female physicians do indeed receive lower online ratings than male physicians, and this finding holds even after accounting for several clinical characteristics associated with the physicians and for temporal effects. Although the magnitudes of the GenderFemale coefficients are relatively small, they are statistically significant. These findings support prior evidence of gender bias in the healthcare literature. We contribute to the existing literature by using data from across the state of Alabama and by employing a longitudinal econometric analysis that incorporates important clinical and review controls associated with the physicians.
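
The panel specification described in this abstract (a physician random effect, year fixed effects, and GenderFemale as the key regressor) can be illustrated with a minimal random-intercept model. The sketch below is not the authors' code; the file name and column names (physician_id, overall_rating, gender_female, patient_risk, and so on) are hypothetical stand-ins for the variables named above, and the study's actual estimator and controls may differ.

```python
# Minimal sketch of a physician random-effects panel regression
# (hypothetical file and column names; not the authors' actual code or data).
import pandas as pd
import statsmodels.formula.api as smf

# One row per physician-review (or physician-year) observation.
reviews = pd.read_csv("alabama_ratemds_cms_panel.csv")  # hypothetical file

# A random intercept per physician stands in for the physician random effect;
# C(year) supplies year fixed effects; gender_female is the key regressor.
model = smf.mixedlm(
    "overall_rating ~ gender_female + patient_risk + n_beneficiaries"
    " + n_services + C(specialty) + word_count + C(year)",
    data=reviews,
    groups=reviews["physician_id"],
)
result = model.fit()
# The coefficient on gender_female corresponds to the GenderFemale effect
# reported in the abstract (negative if female physicians are rated lower).
print(result.summary())
```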


2018 ◽  
Author(s):  
Hiroaki Saito ◽  
Tetsuya Tanimoto ◽  
Masahiro Kami ◽  
Yosuke Suzuki ◽  
Tomohiro Morita ◽  
...  

2021 ◽  
pp. 000348942110059
Author(s):  
Krystyne Basa ◽  
Nicolette Jabbour ◽  
Matthew Rohlfing ◽  
Sarah Schmoker ◽  
Claire M. Lawlor ◽  
...  

Objectives: This study compares hospital-generated online ratings to patient-generated online ratings in academic otolaryngology and evaluates physician factors influencing these results.

Methods: Websites of academic otolaryngologists were assessed for inclusion of hospital-generated Press Ganey surveys. Corresponding scores on Healthgrades and Vitals.com were identified via internet search. Hospital-generated ratings were compared with patient-generated ratings, including score, demographics, and number of ratings. All data were collected between July 15, 2019 and August 22, 2019.

Results: 742 academic otolaryngologists with hospital-generated ratings were identified. The mean hospital-generated rating (4.70, 95% CI 4.69-4.72) was significantly higher than the patient-generated ratings (Vitals: 4.26, 95% CI 4.18-4.34; Healthgrades: 4.02, 95% CI 3.87-4.18; P < .001). Among patient-generated ratings, a higher number of rating scores (>20) was associated with male gender, professor ranking, and >30 years in practice (P < .005). Physician demographics did not affect the number of ratings in the hospital-generated setting. In patient-generated ratings, a lower aggregate score was associated with professor ranking (P = .001); in hospital-generated ratings, a lower score was associated with >30 years in practice (P = .023). Across all platforms, comprehensive otolaryngologists and neurotologists/otologists were rated lower than other subspecialties (PGS: P < .001; Vitals: P = .027; Healthgrades: P = .016).

Conclusion: Hospital-generated ratings yield higher mean scores than patient-generated platforms. Between patient-generated sources, Healthgrades.com scores were lower than those of Vitals.com. Professors with >30 years of practice generated more reviews in patient-generated ratings, and these physicians were generally rated lower. Access to patient-generated ratings is universal, and physicians should be aware of variability between online rating platforms, as scores may affect referrals and practice patterns.
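
The core comparison reported above, a mean score with a 95% confidence interval per rating source plus a test of the difference between sources, can be sketched as follows. The input file and column names are hypothetical; this illustrates the type of analysis, not the authors' code.

```python
# Sketch: mean rating with a 95% CI per rating source, plus a two-group test
# (hypothetical file and column layout; not the study's actual data).
import pandas as pd
from scipy import stats

ratings = pd.read_csv("otolaryngology_ratings.csv")  # hypothetical file
# Assumed columns: 'source' in {'press_ganey', 'vitals', 'healthgrades'}, 'score'

for source, grp in ratings.groupby("source"):
    scores = grp["score"].dropna()
    mean, sem = scores.mean(), stats.sem(scores)
    lo, hi = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)
    print(f"{source}: mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")

# Hospital-generated vs patient-generated comparison (Welch's t-test).
pg = ratings.loc[ratings["source"] == "press_ganey", "score"].dropna()
hg = ratings.loc[ratings["source"] == "healthgrades", "score"].dropna()
t_stat, p_value = stats.ttest_ind(pg, hg, equal_var=False)
print(f"Press Ganey vs Healthgrades: t={t_stat:.2f}, p={p_value:.3g}")
```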


2021 ◽  
Vol 8 ◽  
pp. 237437352110077
Author(s):  
Daliah Wachs ◽  
Victoria Lorah ◽  
Allison Boynton ◽  
Amanda Hertzler ◽  
Brandon Nichols ◽  
...  

The purpose of this study was to explore patient perceptions of primary care providers and their offices relative to their physician's philosophy (medical degree [MD] vs doctorate in osteopathic medicine [DO]), specialty (internal medicine vs family medicine), US region, and gender (male vs female). Using the Healthgrades website, the average satisfaction rating for the physician, office parameters, and wait time were collected and analyzed for 1267 physicians. We found that female doctors tended to have lower ratings in the Midwest, and staff friendliness of female physicians was rated lower in the Northwest. In the Northeast, male and female MDs were rated more highly than DOs. Wait times varied regionally, with the Northeast and Northwest regions having the shortest wait times. Overall satisfaction was generally high for most physicians. Regional differences in the perception of a physician based on gender or degree may have roots in local culture, including proximity to a DO school, comfort with female physicians, and expectations for waiting times.
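
The regional, gender, and degree comparisons described here reduce to stratified group means and simple two-group tests. A minimal sketch, assuming a hypothetical table of Healthgrades ratings with region, gender, degree, and satisfaction columns:

```python
# Sketch: stratified mean satisfaction by region, gender, and degree
# (hypothetical file and column names; not the study's actual dataset).
import pandas as pd
from scipy import stats

hg = pd.read_csv("healthgrades_primary_care.csv")  # hypothetical file

# Mean satisfaction and counts by region and physician gender.
print(hg.groupby(["region", "gender"])["satisfaction"].agg(["mean", "count"]))

# Example within-region comparison: MD vs DO ratings in the Northeast.
ne = hg[hg["region"] == "northeast"]
md = ne.loc[ne["degree"] == "MD", "satisfaction"].dropna()
do = ne.loc[ne["degree"] == "DO", "satisfaction"].dropna()
t_stat, p_value = stats.ttest_ind(md, do, equal_var=False)  # Welch's t-test
print(f"Northeast MD vs DO: t={t_stat:.2f}, p={p_value:.3g}")
```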

