Web Spam
Recently Published Documents


TOTAL DOCUMENTS: 149 (five years: 16)
H-INDEX: 18 (five years: 2)

Complexity, 2021, Vol. 2021, pp. 1-18
Author(s): Asim Shahzad, Nazri Mohd Nawi, Muhammad Zubair Rehman, Abdullah Khan

In this modern era, people utilise the web to share information and to deliver services and products. Information seekers use search engines (SEs) such as Google, Bing, and Yahoo to find products, services, and information. However, web spamming is one of the most significant problems SEs face because it severely degrades the quality of search results. Its economic impact is also substantial: web spammers index massive amounts of free advertising data on SEs to drive traffic to targeted websites. Using various web-spamming techniques, spammers trick an SE into ranking irrelevant web pages above relevant ones in the search engine results pages (SERPs); as a result, these highly ranked but unrelated pages offer users insufficient or inappropriate information. Researchers in both industry and academia are working on spam-page detection, but no technique capable of catching all spam pages on the World Wide Web (WWW) has yet been presented. This research proposes an improved framework for content- and link-based web-spam identification. The framework uses stopwords, keyword frequency, part-of-speech (POS) ratio, a spam-keywords database, and copied-content algorithms for content-based web-spam detection. For link-based detection, we first expose the relationship network behind link-based web spamming and then apply a paid-link database, neighbour pages, spam signals, and link-farm algorithms. Finally, we combine all of the content- and link-based algorithms to identify both types of spam. The WEBSPAM-UK2006 and WEBSPAM-UK2007 datasets were used to conduct experiments and to derive threshold values. A promising F-measure of 79.6% with 81.2% precision demonstrates the applicability and effectiveness of the proposed approach.
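As an illustration of the content-based signals this framework relies on (stopword ratio, keyword frequency, and spam-keyword hits), here is a minimal Python sketch. The stopword list, the spam-keyword "database", and the exact feature definitions below are assumptions for illustration only, not the paper's actual implementation or thresholds.

```python
from collections import Counter

# Illustrative stand-ins; the paper's real stopword list and spam-keyword
# database are not reproduced here.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}
SPAM_KEYWORDS = {"casino", "free", "winner", "cheap", "pills"}

def content_features(text: str) -> dict:
    """Compute simple content-based spam signals for one page's text."""
    tokens = text.lower().split()
    if not tokens:
        return {"stopword_ratio": 0.0, "top_keyword_freq": 0.0, "spam_keyword_hits": 0}
    counts = Counter(tokens)
    return {
        # Spam pages often show an unusually low (or high) share of stopwords.
        "stopword_ratio": sum(counts[w] for w in STOPWORDS) / len(tokens),
        # Relative frequency of the most repeated token: a keyword-stuffing signal.
        "top_keyword_freq": counts.most_common(1)[0][1] / len(tokens),
        # Raw hit count against the spam-keyword database.
        "spam_keyword_hits": sum(counts[w] for w in SPAM_KEYWORDS),
    }

print(content_features("free free free casino winner cheap deals for the lucky winner"))
```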


Author(s): Abdulrahman A. Alshdadi, Ahmed S. Alghamdi, Ali Daud, Saqib Hussain

Web spam consists of unwanted requests to websites, low-quality backlinks, emails, and reviews generated by automated programs. It is a major threat to website owners: it can cost them their top keyword rankings in search engines, resulting in substantial financial loss to the business. Over the years, researchers have tried to identify malicious domains from specific features; however, features from the Lighthouse plugin, the Ahrefs tool, and social media platforms have been ignored. In this paper, the authors focus on detecting spam domain names in a mixed dataset of legitimate and spam domains taken from Google webmaster tools. Machine learning models are applied to individual, distributed, and hybrid features, which significantly improves on existing malicious-domain detection techniques. The support vector machine (SVM) classifier achieves better accuracy than Naïve Bayes, C4.5, AdaBoost, and LogitBoost.
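A minimal sketch of the classifier setup this abstract compares, assuming scikit-learn. The synthetic feature matrix and the feature columns named in the comment are hypothetical stand-ins for the Lighthouse, Ahrefs, and social-media features, not the authors' dataset; the kernel and C values are illustrative defaults.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row stands in for one domain's features, e.g.
# [domain_length, num_backlinks, lighthouse_score, social_mentions].
rng = np.random.default_rng(42)
X = rng.random((200, 4))               # placeholder feature matrix
y = rng.integers(0, 2, 200)            # 1 = spam domain, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
clf = SVC(kernel="rbf", C=1.0)         # illustrative hyperparameters
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```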


Author(s): Xu Zhuang, Yan Zhu, Qiang Peng, Faisal Khurshid

2020, Vol. 2020, pp. 1-14
Author(s): Jiayong Liu, Yu Su, Shun Lv, Cheng Huang

Search engines are critical in people's daily lives because they determine the quality of the information people obtain through searching. Fierce competition for search engine rankings benefits neither users nor search engines. Existing research mainly studies the content and links of websites, but none of these techniques performs semantic analysis of links and anchor text for detection. In this paper, we propose a web-spam detection method that extracts novel feature sets from the homepage source code and uses a random forest (RF) classifier. The novel feature sets are extracted from the homepage's links, hypertext markup language (HTML) structure, and the semantic similarity of its content. We conduct experiments on the WEBSPAM-UK2007 and UK-2011 datasets using five-fold cross-validation, and we design three sets of experiments to evaluate the performance of the proposed method. Compared against different indicators, the proposed method with novel feature sets outperforms other methods, achieving a precision of 0.929 and a recall of 0.930. The experimental results show that the proposed model can effectively detect web spam.
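A minimal sketch of the evaluation setup described above, a random forest scored with five-fold cross-validation in scikit-learn. The synthetic matrix is a stand-in for the paper's link, HTML-structure, and semantic-similarity features; the number of trees is an illustrative default.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Stand-in for feature vectors extracted from each homepage's links,
# HTML structure, and content semantic similarity.
rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = rng.integers(0, 2, 300)            # 1 = spam homepage, 0 = normal

rf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_validate(rf, X, y, cv=5, scoring=("precision", "recall"))
print("precision:", scores["test_precision"].mean())
print("recall:", scores["test_recall"].mean())
```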


Author(s): Jingjing Wang, Lansheng Han, Man Zhou, Wenkui Qian, Dezhi An

Author(s): Joyce Jiyoung Whang, Yeonsung Jung, Seonggoo Kang, Dongho Yoo, Inderjit S. Dhillon
