Machine Learning Algorithms for Spam Detection in Social Networks

2019 · Vol 8 (S3) · pp. 41-44
Author(s): K. Nagaramani, K. Vandanarao, B. Mamatha

Most web-based social systems, such as Facebook, Twitter, mailing systems, and other social networks, are developed for users to share their information and to interact and engage with the community. These social networks often trouble their users with spam messages, threatening messages, hackers, and so on. Many researchers have worked on this problem and proposed several approaches to detect spam, hackers, and other threats. In this paper we discuss tools to detect spam messages in social networks. We apply the RF, SVM, KNN, and MLP machine learning algorithms in RapidMiner and WEKA, and this approach gives better results when compared with other tools.
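The abstract names four classifiers (RF, SVM, KNN, MLP) run in RapidMiner and WEKA. As a rough illustration of the same comparison outside those tools, here is a minimal scikit-learn sketch; the dataset file, column names, and hyperparameters are assumptions, not details from the paper.

```python
# Hypothetical sketch: comparing the four classifiers named in the abstract
# (RF, SVM, KNN, MLP) on a labeled spam corpus. The paper itself used
# RapidMiner and WEKA; the CSV path and column names here are assumptions.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Assumed CSV with a 'text' column (message body) and a 'label' column
# (1 = spam, 0 = legitimate).
data = pd.read_csv("messages.csv")
X, y = data["text"], data["label"]

classifiers = {
    "RF": RandomForestClassifier(n_estimators=100),
    "SVM": SVC(kernel="linear"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(100,), max_iter=300),
}

for name, clf in classifiers.items():
    # TF-IDF features feed each classifier; 5-fold CV reports mean accuracy.
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} mean accuracy")
```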

2017 · Vol 7 (1.3) · pp. 61
Author(s): M. Sangeetha, S. Nithyanantham, M. Jayanthi

Online Social Networks (OSNs) share common themes such as information sharing, person-to-person interaction, and the creation of shared and collaborative content. Many microblogging websites are available, such as Twitter, Instagram, and Tumblr. One of the most prominent social media platforms is Twitter, which has 313 million monthly active users who post 500 million tweets per day. Twitter allows users to send short text-based messages of up to 140 characters called "tweets". Registered users can read and post tweets, while unregistered users can only read them. This popularity attracts spammers for malicious purposes, such as phishing legitimate users, spreading malware and advertisements through URLs shared within tweets, aggressively following/unfollowing legitimate users, hijacking trending topics to attract attention, and propagating pornography. Twitter spam has become a critical problem. By examining the performance of a wide range of standard machine learning algorithms, we aim to identify satisfactory detection performance on a large dataset using account-based and tweet content-based features.
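The two feature families the abstract names, account-based and tweet content-based, can be illustrated with a small extraction sketch. The tweet layout below mirrors Twitter's classic v1.1 JSON fields, but the particular features chosen are assumptions, since the abstract does not enumerate the exact feature set.

```python
# Hedged sketch of account-based vs. tweet content-based features for
# Twitter spam detection. Field names follow Twitter's v1.1 JSON; the
# specific feature selection is an illustrative assumption.
import re

def extract_features(tweet: dict) -> dict:
    user = tweet["user"]
    text = tweet["text"]
    return {
        # Account-based features: profile popularity and activity signals.
        "followers": user["followers_count"],
        "friends": user["friends_count"],
        "statuses": user["statuses_count"],
        "verified": int(user["verified"]),
        # Content-based features: signals from the tweet text itself.
        "length": len(text),
        "num_urls": len(re.findall(r"https?://\S+", text)),
        "num_hashtags": text.count("#"),
        "num_mentions": text.count("@"),
        "digit_ratio": sum(c.isdigit() for c in text) / max(len(text), 1),
    }

# Example with a fabricated spam-like tweet.
sample = {
    "text": "Win a FREE iPhone!! http://spam.example #giveaway @everyone",
    "user": {"followers_count": 3, "friends_count": 5000,
             "statuses_count": 12, "verified": False},
}
print(extract_features(sample))
```

Feature vectors of this shape would then be fed to the standard classifiers the abstract evaluates.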


Author(s): Joshua M. Nicholson, Ashish Uppala, Matthias Sieber, Peter Grabitz, Milo Mordaunt, ...

Wikipedia is a widely used online reference work which cites hundreds of thousands of scientific articles across its entries. The quality of these citations has not previously been measured, and such measurements have a bearing on the reliability and quality of the scientific portions of this reference work. Using a novel technique, a massive database of qualitatively described citations, and machine learning algorithms, we analyzed 1,923,575 Wikipedia articles which cited a total of 824,298 scientific articles, and found that most scientific articles (57%) are uncited or untested by subsequent studies, while the remainder show wide variability in contradicting or supporting evidence (2-41%). Additionally, we analyzed 51,804,643 scientific articles from journals indexed in the Web of Science and found that most (85%) were uncited or untested by subsequent studies, while the remainder show wide variability in contradicting or supporting evidence (1-14%).
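The abstract mentions machine learning over a database of qualitatively described citations. A sketch of that kind of citation-context classification might look like the following; the labels, toy training set, and model choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: classifying the sentence surrounding a citation as
# supporting, contradicting, or merely mentioning the cited work. A real
# system would train on a large corpus of annotated citation contexts;
# this toy set only shows the shape of the task.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

contexts = [
    "Our results confirm the findings of [CITATION].",
    "In contrast to [CITATION], we observed no significant effect.",
    "Methods were adapted from [CITATION].",
]
labels = ["supporting", "contradicting", "mentioning"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(contexts, labels)

print(model.predict(["These data contradict the model proposed in [CITATION]."]))
```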

