web logs
Recently Published Documents

TOTAL DOCUMENTS: 199 (FIVE YEARS: 12)
H-INDEX: 16 (FIVE YEARS: 1)

Author(s):  
Christos Iliou ◽  
Theodoros Kostoulas ◽  
Theodora Tsikrika ◽  
Vasilios Katos ◽  
Stefanos Vrochidis ◽  
...  

Web bots vary in sophistication based on their purpose, ranging from simple automated scripts to advanced web bots that have a browser fingerprint, support the main browser functionalities, and exhibit humanlike behaviour. Advanced web bots are especially appealing to malicious web bot creators, because their browser-like fingerprint and humanlike behaviour reduce their detectability. This work proposes a web bot detection framework that comprises two detection modules: (i) a module that utilises web logs, and (ii) a module that leverages mouse movements. The framework combines the results of each module in a novel way to capture the different temporal characteristics of the web logs and the mouse movements, as well as the spatial characteristics of the mouse movements. We assess its effectiveness on web bots of two levels of evasiveness: (a) moderate web bots that have a browser fingerprint, and (b) advanced web bots that have a browser fingerprint and also exhibit humanlike behaviour. We show that combining web logs with visitors' mouse movements is more effective and robust for detecting advanced web bots that try to evade detection than using either approach alone.
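The fusion step can be illustrated with a minimal sketch. The score-combination rule below (a simple weighted average over two hypothetical per-session module outputs, log_score and mouse_score) is an assumption for illustration only; the framework in the abstract fuses temporal and spatial characteristics in a more elaborate way.

```python
# Minimal sketch of a two-module bot-detection ensemble.
# The fusion rule and field names are illustrative assumptions,
# not the paper's actual combination method.
from dataclasses import dataclass


@dataclass
class SessionScores:
    log_score: float    # bot probability from the web-log module
    mouse_score: float  # bot probability from the mouse-movement module


def fuse_scores(s: SessionScores, w_log: float = 0.5) -> float:
    """Combine the two module outputs into a single bot probability."""
    return w_log * s.log_score + (1.0 - w_log) * s.mouse_score


if __name__ == "__main__":
    session = SessionScores(log_score=0.82, mouse_score=0.35)
    print(f"fused bot probability: {fuse_scores(session):.2f}")
```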


2020 ◽  
Vol 17 (8) ◽  
pp. 3444-3448
Author(s):  
S. L. Jany Shabu ◽  
V. Netaji Subhash Chandra Bose ◽  
Venkatesh Bandaru ◽  
Sardar Maran ◽  
J. Refonaa

Online reviews of purchased products and services have become the main source of customer opinions. To gain profit or reputation, spam reviews are often written to promote or demote a few target products or services, a practice known as review spamming. In the past few years, a variety of techniques have been proposed to address the problem of spam reviews. Social media is a popular communication and data-exchange medium, where the data may be text, numbers, figures, or statistics accessed by a computer. Nowadays, many people rely on content available on social media when making decisions. Sharing information with other people has also attracted social spammers, who attempt to spread spam messages to promote personal blogs, advertisements, promotions, phishing, scams, frauds, and so on. The fact that anyone can leave a review gives spammers an excellent opportunity to post fake reviews about products and services for various interests and gains. We therefore propose a fake message detection system that uses machine learning to recognize spam and fake messages on social media platforms.
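As an illustration of the kind of ML pipeline such a system could use, the sketch below trains a TF-IDF plus logistic regression baseline on a few made-up review strings; the model choice, features, and data are assumptions, not the classifier proposed in the paper.

```python
# Illustrative spam-review classifier: TF-IDF features + logistic regression.
# Training examples are invented placeholders purely to make the sketch runnable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great product, works exactly as described.",
    "Terrible quality, broke after one day.",
    "Buy now!!! Visit my blog for amazing deals http://spam.example",
    "Best best best product ever, click this link to win a prize",
]
labels = [0, 0, 1, 1]  # 0 = genuine review, 1 = spam/fake

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["Click here for free gift cards on my site"]))
```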


2020 ◽  
Vol 17 (9) ◽  
pp. 4432-4437
Author(s):  
Ramakrishnan M. Ramanathaiah ◽  
Bhawna Nigam ◽  
M. Niranjanamurthy

Web usage mining applies data mining techniques to web log data to extract the behaviour of users. The knowledge mined from the web log can be utilised in web personalisation, prediction, prefetching, restructuring of web sites, etc. It consists of three steps: preprocessing, pattern discovery, and analysis. Web log data is typically noisy and ambiguous, so preprocessing is an essential step before mining. The patterns discovered by the mining techniques depend on the accuracy of the web log, which in turn depends on the preprocessing phase. The output of preprocessing should be the user's navigation session file. In this paper, the preprocessing techniques and a method for constructing the user's navigation session file are proposed.
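A minimal sketch of the session-construction step is shown below. It assumes log entries already cleaned to (user, timestamp, URL) triples and uses the common 30-minute inactivity threshold; the actual heuristic proposed in the paper may differ.

```python
# Sketch: group cleaned web-log entries into per-user navigation sessions,
# splitting a session whenever the gap between requests exceeds a timeout.
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # assumed inactivity threshold


def build_sessions(entries):
    """entries: iterable of (user_id, timestamp, url) with datetime timestamps.
    Returns {user_id: [[urls of session 1], [urls of session 2], ...]}."""
    by_user = defaultdict(list)
    for user, ts, url in sorted(entries, key=lambda e: (e[0], e[1])):
        by_user[user].append((ts, url))

    sessions = defaultdict(list)
    for user, visits in by_user.items():
        current, last_ts = [], None
        for ts, url in visits:
            if last_ts is not None and ts - last_ts > SESSION_TIMEOUT:
                sessions[user].append(current)  # close the previous session
                current = []
            current.append(url)
            last_ts = ts
        if current:
            sessions[user].append(current)
    return dict(sessions)


example = [
    ("10.0.0.1", datetime(2020, 1, 1, 10, 0), "/index.html"),
    ("10.0.0.1", datetime(2020, 1, 1, 10, 5), "/products.html"),
    ("10.0.0.1", datetime(2020, 1, 1, 11, 0), "/index.html"),  # starts a new session
]
print(build_sessions(example))
```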


PLoS ONE ◽  
2020 ◽  
Vol 15 (6) ◽  
pp. e0234663
Author(s):  
Ruben L. Bach ◽  
Alexander Wenz

2020 ◽  
Vol 28 (4) ◽  
pp. 546-557
Author(s):  
Gonzalo de la Torre-Abaitua ◽  
Luis F Lago-Fernández ◽  
David Arroyo

In cybersecurity, there is a call for adaptive, accurate, and efficient procedures for identifying performance shortcomings and security breaches. The increasing complexity of both Internet services and traffic creates a scenario that in many cases impedes the proper deployment of intrusion detection and prevention systems. Although it is common practice to monitor network and application activity, there is no general methodology to codify and interpret the recorded events. Moreover, this lack of methodology erodes the possibility of diagnosing whether event detection and recording are adequately performed. As a result, there is an urgent need for general codification and classification procedures that can be applied to any type of security event in any activity log. This work focuses on defining such a method using the so-called normalized compression distance (NCD). The NCD is parameter-free and can be applied to determine the distance between events expressed as strings. As a first step towards a methodology for the integral interpretation of security events, this work is devoted to the characterization of web logs. On the grounds of the NCD, we propose an anomaly-based procedure for identifying web attacks from web logs. Given a web query as stored in a security log, an NCD-based feature vector is created and classified using a support vector machine. The method is tested on the CSIC-2010 data set, and the results are analyzed with respect to similar proposals.
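The core quantity can be computed with any general-purpose compressor. The sketch below uses zlib and the standard formula NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)); the compressor choice and example queries are assumptions, not the paper's exact setup.

```python
# Sketch: normalized compression distance between two log strings using zlib.
import zlib


def c(data: bytes) -> int:
    """Length of the compressed representation of data."""
    return len(zlib.compress(data, 9))


def ncd(x: str, y: str) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    bx, by = x.encode(), y.encode()
    cx, cy, cxy = c(bx), c(by), c(bx + by)
    return (cxy - min(cx, cy)) / max(cx, cy)


normal = "GET /index.php?id=42 HTTP/1.1"
attack = "GET /index.php?id=42 UNION SELECT password FROM users-- HTTP/1.1"
print(f"NCD(normal, normal): {ncd(normal, normal):.3f}")  # close to 0
print(f"NCD(attack, normal): {ncd(attack, normal):.3f}")  # larger distance
```

In a pipeline like the one the abstract describes, distances of this kind against a set of reference strings would populate the feature vector that is then classified with a support vector machine.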


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Yixin Wu ◽  
Yuqiang Sun ◽  
Cheng Huang ◽  
Peng Jia ◽  
Luping Liu

Attackers upload webshells to a web server to steal data, launch DDoS attacks, modify files with malicious intent, and so on. Once these objectives are accomplished, they bring huge losses to website administrators. With the gradual development of encryption and obfuscation technology, the most common detection approaches based on taint analysis and feature matching may become less useful. Instead of relying on source file code, POST contents, or all received traffic, this paper demonstrates an intelligent and efficient framework that employs precise sessions derived from web logs to detect webshell communication. Features are extracted from the raw sequence data in the web logs, and a statistical method based on time intervals is proposed to identify sessions. The framework is built, respectively, on a long short-term memory (LSTM) network and on a hidden Markov model (HMM), and is evaluated with real data. The experiments show that the LSTM-based model achieves a higher accuracy of 95.97% with a recall of 96.15%, performing much better than the HMM-based model. Moreover, the experiments demonstrate the efficiency of the proposed approach: it detects webshells quickly without requiring source code, and when detection is restricted to a given period of time it takes 98.5% less time than the cited related approach to obtain a result. Once webshell behavior is detected, the anomalous session can be pinpointed and the statistical method used to locate the webshell file accurately.
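For concreteness, the sketch below outlines an LSTM sequence classifier over per-session request sequences, with dummy data only to show the expected tensor shapes; the tokenisation scheme, layer sizes, and framework (Keras) are assumptions rather than the paper's configuration.

```python
# Sketch: classify a session (sequence of hashed request tokens) as
# webshell-related or benign with a small LSTM network.
import numpy as np
import tensorflow as tf

MAX_LEN = 50   # requests kept per session (padded/truncated)
VOCAB = 1000   # size of a hashed vocabulary over request paths/parameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 32),            # token id -> dense vector
    tf.keras.layers.LSTM(64),                         # summarise the session
    tf.keras.layers.Dense(1, activation="sigmoid"),   # webshell vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data purely to exercise the shapes: 8 sessions of MAX_LEN token ids.
x = np.random.randint(0, VOCAB, size=(8, MAX_LEN))
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]))
```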

