Can You Trust Online Ratings? A Mutual Reinforcement Model for Trustworthy Online Rating Systems

2015 ◽  
Vol 45 (12) ◽  
pp. 1564-1576 ◽  
Author(s):  
Hyun-Kyo Oh ◽  
Sang-Wook Kim ◽  
Sunju Park ◽  
Ming Zhou
2021 ◽  
pp. 000348942110059
Author(s):  
Krystyne Basa ◽  
Nicolette Jabbour ◽  
Matthew Rohlfing ◽  
Sarah Schmoker ◽  
Claire M. Lawlor ◽  
...  

Objectives: This study compares hospital-generated online ratings with patient-generated online ratings in academic otolaryngology and evaluates physician factors influencing these results. Methods: Websites of academic otolaryngologists were assessed for inclusion of hospital-generated Press Ganey surveys. Corresponding scores on Healthgrades and Vitals.com were identified via internet search. Hospital ratings were compared with patient-generated ratings on score, demographics, and number of ratings. All data were collected between July 15, 2019 and August 22, 2019. Results: 742 academic otolaryngologists with hospital-generated ratings were identified. The mean hospital-generated rating (4.70, 95% CI 4.69-4.72) was significantly higher than the patient-generated ratings (Vitals: 4.26, 95% CI 4.18-4.34; Healthgrades: 4.02, 95% CI 3.87-4.18; P < .001). In patient-generated ratings, an increased number of rating scores (>20) was associated with male gender, professor ranking, and >30 years in practice (P < .005). Physician demographics did not affect the number of ratings in the hospital-generated setting. In patient-generated ratings, a lower aggregate score was associated with professor ranking (P = .001); in hospital-generated ratings, a lower score was associated with >30 years in practice (P = .023). Across all platforms, comprehensive otolaryngologists and neurotologists/otologists were rated lower in comparison to other specialties (PGS: P < .001; Vitals: P = .027; Healthgrades: P = .016). Conclusion: Hospital-generated ratings yield higher mean scores than patient-generated platforms. Between patient-generated sources, Healthgrades.com scores were lower than those of Vitals.com. Professors with >30 years of practice generated more reviews in patient-generated ratings, and these physicians were generally rated lower. Access to patient-generated ratings is universal, and physicians should be aware of variability between online rating platforms, as scores may affect referrals and practice patterns.
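A minimal sketch of the kind of comparison reported above: sample means with t-based 95% confidence intervals and a significance test between two rating sources. The sample arrays, their distributions, and the seed are hypothetical stand-ins; the study's raw data are not public.

```python
# Hedged sketch: compare two rating samples by mean, 95% CI, and Welch's t-test.
import numpy as np
from scipy import stats

def mean_ci(ratings, confidence=0.95):
    """Return the mean and a t-based confidence interval for a rating sample."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    sem = stats.sem(ratings)  # standard error of the mean
    half = sem * stats.t.ppf((1 + confidence) / 2, len(ratings) - 1)
    return mean, (mean - half, mean + half)

# Hypothetical samples standing in for hospital- and patient-generated scores.
rng = np.random.default_rng(0)
hospital = np.clip(rng.normal(4.7, 0.3, 742), 1, 5)
patient = np.clip(rng.normal(4.3, 0.8, 742), 1, 5)

for name, sample in [("hospital", hospital), ("patient", patient)]:
    m, (lo, hi) = mean_ci(sample)
    print(f"{name}: mean={m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

# Welch's t-test does not assume equal variances between the two sources.
t, p = stats.ttest_ind(hospital, patient, equal_var=False)
print(f"t={t:.2f}, p={p:.4g}")
```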


2021 ◽  
pp. 106895
Author(s):  
Hong-Liang Sun ◽  
Kai-Ping Liang ◽  
Hao Liao ◽  
Duan-Bing Chen

2016 ◽  
Vol 8 (2) ◽  
pp. 16-26 ◽  
Author(s):  
Zhihai Yang ◽  
Zhongmin Cai

Online rating data are ubiquitous on popular e-commerce websites such as Amazon and Yelp, and they strongly influence subsequent customer purchase decisions. Collaborative filtering recommender systems (CFRSs) play a crucial role in such rating systems. Because CFRSs are highly vulnerable to "shilling" attacks, attackers commonly contaminate rating systems with malicious ratings to achieve their goals. Although detection methods for such attacks have received much attention, detection accuracy remains a largely unsolved problem, and few methods scale to large networks. This paper proposes a fast and effective detection method that combines two stages to find abnormal users. First, a graph-mining method automatically spots suspicious nodes in a constructed graph with millions of nodes. Then, abnormal users are identified by exploiting suspected target items, building on the results of the first stage. Experiments evaluate the effectiveness of the method.
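A minimal two-stage sketch in the spirit of the pipeline described above: stage 1 scores users for suspiciousness over the user-item rating data, and stage 2 keeps only suspects concentrated on likely target items. The scoring heuristics here (rating deviation from per-item means, and items with many extreme ratings) are illustrative assumptions, not the paper's actual graph-mining algorithm, and the thresholds are arbitrary.

```python
# Hedged sketch of a two-stage abnormal-user filter over (user, item, score) triples.
from collections import defaultdict

ratings = [  # toy data: u1-u3 push item i1 with uniform 5-star ratings
    ("u1", "i1", 5), ("u2", "i1", 5), ("u3", "i1", 5),
    ("u4", "i1", 2), ("u4", "i2", 3), ("u5", "i2", 4),
]

# Stage 1: flag users whose ratings deviate strongly from per-item means.
by_item = defaultdict(list)
for user, item, score in ratings:
    by_item[item].append(score)
item_mean = {i: sum(s) / len(s) for i, s in by_item.items()}

deviation = defaultdict(float)
for user, item, score in ratings:
    deviation[user] += abs(score - item_mean[item])
suspects = {u for u, d in deviation.items() if d > 0.7}  # threshold is arbitrary

# Stage 2: among suspects, keep those who rated suspected target items,
# here approximated as items where a majority of ratings are 5 stars.
target_items = {i for i, s in by_item.items()
                if sum(x == 5 for x in s) / len(s) > 0.5}
abnormal = {u for u, i, _ in ratings if u in suspects and i in target_items}
print(sorted(abnormal))
```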


Author(s):  
Mohammad Allahbakhsh ◽  
Aleksandar Ignjatovic ◽  
Boualem Benatallah ◽  
Seyed-Mehdi-Reza Beheshti ◽  
Elisa Bertino ◽  
...  

2017 ◽  
Vol 40 (3) ◽  
pp. 188-195 ◽  
Author(s):  
David Ackerman ◽  
Christina Chung

This article examines how marketing students' ratings of instructors and classes on online rating sites such as RateMyProfessor.com can be biased by prior student ratings of the same class. Research has identified potential sources of bias in online student reviews administered by universities; less has been done on the bias inherent in rating sites where those doing the rating can see prior ratings. To measure how students' online ratings of a course can be influenced by existing online ratings, the study used five prior-rating experimental conditions: mildly negative, strongly negative, mildly positive, and strongly positive prior ratings, plus a control condition with no prior ratings. The results suggest that prior online ratings, both positive and negative, do affect and bias subsequent online ratings. There are several implications. First, both negative and positive ratings can bias subsequent ratings. Second, negative prior ratings must sometimes be strong in valence to bias subsequent ratings, whereas even mildly positive ratings can have an impact. Finally, this bias can potentially influence student course selection.
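A hedged simulation of the five-condition design described above, assuming a simple anchoring model in which a student's posted rating is pulled toward the mean of the prior ratings shown. The anchor values, anchor weight, baseline opinion, and sample size are invented for illustration; the study's materials and effect sizes are not reproduced.

```python
# Hedged sketch: simulate posted ratings under five prior-rating conditions.
import numpy as np

rng = np.random.default_rng(1)
conditions = {
    "strong_negative": 1.5,
    "mild_negative": 2.5,
    "control": None,  # no prior ratings shown
    "mild_positive": 3.5,
    "strong_positive": 4.5,
}

BASELINE, ANCHOR_WEIGHT, N = 3.0, 0.4, 200  # assumed parameters

for name, anchor in conditions.items():
    true_opinion = rng.normal(BASELINE, 0.7, N)
    if anchor is None:
        posted = true_opinion
    else:
        # Posted rating is a weighted blend of true opinion and the anchor.
        posted = (1 - ANCHOR_WEIGHT) * true_opinion + ANCHOR_WEIGHT * anchor
    posted = np.clip(posted, 1, 5)
    print(f"{name:16s} mean posted rating: {posted.mean():.2f}")
```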


2019 ◽  
Vol 38 ◽  
pp. 1-12
Author(s):  
Lun Zhang ◽  
Sheng-Feng Wang ◽  
Zi-Zhan Lin ◽  
Ye Wu