Creating and detecting fake reviews of online products

2022 ◽  
Vol 64 ◽  
pp. 102771
Author(s):  
Joni Salminen ◽  
Chandrashekhar Kandpal ◽  
Ahmed Mohamed Kamel ◽  
Soon-gyo Jung ◽  
Bernard J. Jansen
2021 ◽  
Vol 1916 (1) ◽  
pp. 012153
Author(s):  
S Kiruthika ◽  
V Vishnu Priyan

Author(s):  
Muhammad Saad Javed ◽  
Hammad Majeed ◽  
Hasan Mujtaba ◽  
Mirza Omer Beg

Author(s):  
Sherry He ◽  
Brett Hollenbeck ◽  
Davide Proserpio

2021 ◽  
Vol 13 (1) ◽  
pp. 1-16
Author(s):  
Michela Fazzolari ◽  
Francesco Buccafurri ◽  
Gianluca Lax ◽  
Marinella Petrocchi

Over the past few years, online reviews have become very important, since they can influence consumers' purchase decisions and the reputation of businesses. The practice of writing fake reviews can therefore have severe consequences for customers and service providers. Various approaches have been proposed for detecting opinion spam in online reviews, most based on supervised classifiers. In this contribution, we start from a set of features shown to be effective for classifying opinion spam and re-engineer them by considering the Cumulative Relative Frequency Distribution of each feature. Through an experimental evaluation carried out on real data from Yelp.com, we show that using the distributional features improves the performance of the classifiers.
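The re-engineering step described in the abstract can be illustrated with a short sketch: each raw feature value is replaced by its cumulative relative frequency, i.e. the empirical CDF evaluated at that value. This is a minimal illustration assuming pandas-style per-reviewer features; the column names are hypothetical and not taken from the paper.

```python
import numpy as np
import pandas as pd

def crfd_transform(values: pd.Series) -> pd.Series:
    """Map each raw feature value to its cumulative relative frequency,
    i.e. the fraction of observations less than or equal to it."""
    sorted_vals = np.sort(values.to_numpy())
    # side="right" counts the observations <= v for each value v
    counts = np.searchsorted(sorted_vals, values.to_numpy(), side="right")
    return pd.Series(counts / len(values), index=values.index)

# Hypothetical per-reviewer features, similar in spirit to those used
# in opinion-spam classification (names are illustrative).
df = pd.DataFrame({
    "review_count": [1, 3, 3, 40, 7],
    "rating_deviation": [0.1, 2.5, 0.3, 1.9, 0.2],
})

distributional = df.apply(crfd_transform)  # every column now lies in (0, 1]
print(distributional)
```

One appeal of this transformation is that every feature ends up on the same (0, 1] scale regardless of its original units, which tends to help classifiers that are sensitive to feature magnitudes.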


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 3765-3773
Author(s):  
Meiling Liu ◽  
Yue Shang ◽  
Qi Yue ◽  
Jiyun Zhou

2021 ◽  
Vol 27 (1) ◽  
pp. 25-42
Author(s):  
Breno de Paula Andrade Cruz ◽  
Susana C. Silva ◽  
Steven Dutt Ross

Purpose – The social TV phenomenon has raised the interest of researchers in studying the production of online reviews. However, little is known about the characteristics of reviewers who, without having had a real consumption experience, still dare to assess the service. The purpose of this research is to understand these reviewers better, using an experiment conducted in Brazil. Design/methodology/approach – Through a cluster analysis of 2,547 reviewers of 7 restaurants that participated in a reality show in Brazil, we identified four groups. Using Spearman correlation and the Kruskal-Wallis test, differences among the groups were analysed in search of behavioural changes among different types of reviewers. Findings – We conclude that social TV influences fake online reviews of restaurants involved in a TV show. Furthermore, we verified that some reviewers indeed assess the service without having tried it, which strongly biases the influence they exert on potential consumers. Four types of reviewers were identified: the real expert, the amateur reviewer, the speculator and the pseudo expert. The latter two types are analysed through the anthropological lens of popular Brazilian culture and the influence of TV in that country. Research limitations/implications – We were able to understand how TV can influence the construction of fake online reviews for restaurants. Practical implications – It is important for the restaurant and hospitality industry in general to be attentive to the phenomenon of fake reviews, which can completely bias the advantages of an assessment system created to produce trust among consumers but that can act exactly the other way around. Originality/value – This study highlights the relevance of taking into account the cultural background of the country where the restaurant is located, as well as the relevance of carefully analysing the decision to take part in a reality show, which has a high chance of biasing consumers' decisions.
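The pipeline in this abstract (cluster reviewers, then test for group differences) can be sketched as follows. This is a minimal illustration only: k-means is an assumption, since the abstract says only "cluster analysis", and the reviewer-level features are invented.

```python
import pandas as pd
from scipy.stats import kruskal, spearmanr
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented reviewer-level features; the abstract does not list the
# variables actually used in the cluster analysis.
reviewers = pd.DataFrame({
    "n_reviews":       [1, 2, 50, 3, 1, 120, 4, 2, 60, 5],
    "rating_given":    [1.0, 5.0, 4.0, 1.0, 5.0, 3.5, 2.0, 1.0, 4.5, 5.0],
    "days_since_show": [0, 1, 300, 0, 2, 500, 1, 0, 250, 3],
})

# Standardise, then split reviewers into four groups (k-means is an
# assumption; the paper only reports a cluster analysis with 4 groups).
X = StandardScaler().fit_transform(reviewers)
reviewers["group"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Kruskal-Wallis: do rating distributions differ across the groups?
samples = [g["rating_given"].to_numpy() for _, g in reviewers.groupby("group")]
h_stat, p_value = kruskal(*samples)

# Spearman correlation between review volume and the rating given.
rho, p_rho = spearmanr(reviewers["n_reviews"], reviewers["rating_given"])
print(f"Kruskal-Wallis H={h_stat:.2f} (p={p_value:.3f}), Spearman rho={rho:.2f}")
```

The non-parametric tests fit the setting: review counts and star ratings are ordinal and far from normally distributed, so rank-based methods such as Spearman and Kruskal-Wallis are the natural choice.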


Author(s):  
Neha Thomas ◽  
Susan Elias

Abstract— Detection of fake reviews and reviewers is currently a challenging problem in cyberspace, primarily due to the dynamic nature of the methods used to fake a review. Several aspects must be considered when analysing reviews to classify them effectively as genuine or fake. Sentiment analysis, opinion mining and intent mining are fields of research that try to accomplish this goal through Natural Language Processing of the text content of the review. In this paper, an approach that evaluates review ratings along a timeline is presented. An Amazon dataset comprising ratings for a wide range of products was used for the analysis presented here; the ratings of an electronic product were analysed over a period of six years. The computed average rating helps to identify linear classifiers that define solution boundaries within the data space. This enables a product-specific classification of review ratings, from which suitable recommendations can also be generated automatically. The paper explains a methodology to evaluate average product ratings over time and presents the research outcomes using a novel classification tool. The proposed approach helps to determine the optimal point to distinguish between fake and genuine ratings for each product. Index Terms: Fake reviews, Fake ratings, Product ratings, Online shopping, Amazon dataset.
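A minimal sketch of the timeline idea: compute the running average rating of a product and flag ratings that fall far from it. The margin-based rule below is an illustrative stand-in for the per-product "optimal point" the paper derives; the data and the threshold are invented.

```python
import pandas as pd

# Invented ratings for one product over time; the actual dataset in the
# paper spans six years of Amazon ratings for an electronic product.
ratings = pd.DataFrame({
    "date": pd.to_datetime([
        "2015-03-01", "2015-07-12", "2016-01-05", "2016-09-30",
        "2017-04-18", "2018-02-02", "2019-06-21", "2020-11-09",
    ]),
    "stars": [5, 4, 5, 1, 4, 5, 2, 4],
}).sort_values("date")

# Running (cumulative) average rating along the timeline.
ratings["avg_so_far"] = ratings["stars"].expanding().mean()

# A linear decision boundary in (rating, running-average) space: flag
# ratings that deviate from the running average by more than a margin.
# MARGIN is an illustrative stand-in for the per-product optimal point.
MARGIN = 2.0
ratings["suspicious"] = (ratings["stars"] - ratings["avg_so_far"]).abs() > MARGIN
print(ratings[["date", "stars", "avg_so_far", "suspicious"]])
```

On this toy data the 1-star rating in late 2016 is flagged, since it sits well below the running average of roughly 4.7 accumulated up to that point.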


Author(s):  
Paolo Figini ◽  
Laura Vici ◽  
Giampaolo Viglia

Purpose – This study aims to compare the rating dynamics of the same hotels on two online review platforms (Booking.com and TripAdvisor), which differ mainly in whether they require proof of a prior reservation before a review can be posted (respectively, a verified vs a non-verified platform). Design/methodology/approach – A verified system, by definition, cannot host fake reviews. If the non-verified system were also free from "ambiguous" reviews, the structure of ratings (valence, variability, dynamics) for the same items should be similar. Any detected structural difference, on the contrary, might be linked to a possible review bias. Findings – Travellers' scores on the non-verified platform are higher and much more volatile than ratings on the verified platform. Additionally, the verified review system shows a faster convergence of ratings towards the long-term scores of individual hotels, whereas the non-verified system shows much more discordance in the early phases of the review window. Research limitations/implications – The paper offers insights into how to detect suspicious reviews. Non-verified platforms should add indices of score dispersion to the information already available on websites and mobile apps. Moreover, they could use time windows to delete older (and more likely biased) reviews. The findings also ring a warning bell for tourists about the reliability of ratings, particularly when only a few reviews have been posted online. Originality/value – The across-platform comparison of single items (in terms of rating dynamics and speed of convergence) is a novel contribution that calls for extending the analysis to different destinations and types of platform.
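The structural comparison described above (valence, variability and speed of convergence of ratings) can be sketched as follows; the two review streams and the tolerance parameter are invented for illustration and do not come from the paper.

```python
import numpy as np

def convergence_index(scores, tolerance=0.25):
    """Return the first review index after which the running mean stays
    within `tolerance` of the long-term (final) mean -- a crude proxy
    for the speed of convergence discussed in the abstract."""
    scores = np.asarray(scores, dtype=float)
    running = np.cumsum(scores) / np.arange(1, len(scores) + 1)
    within = np.abs(running - running[-1]) <= tolerance
    for i in range(len(scores)):
        if within[i:].all():
            return i
    return len(scores)

# Invented review streams for the same hotel on two platforms.
verified = [8, 9, 8, 8, 9, 8, 8, 9, 8, 8]          # verified-platform style
non_verified = [10, 5, 10, 9, 4, 10, 9, 8, 9, 9]   # non-verified style

for name, s in [("verified", verified), ("non-verified", non_verified)]:
    print(f"{name}: mean={np.mean(s):.2f}, std={np.std(s):.2f}, "
          f"running mean settles at review #{convergence_index(s)}")
```

On these toy streams both platforms converge to a similar long-term mean, but the non-verified stream has a much larger standard deviation and its running mean settles several reviews later, which is exactly the kind of structural difference the study uses as a bias signal.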

