Information Theoretic XSS Attack Detection in Web Applications

2014 ◽  
Vol 5 (3) ◽  
pp. 1-15 ◽  
Author(s):  
Hossain Shahriar ◽  
Sarah North ◽  
Wei-Chuen Chen ◽  
Edward Mawangi

Cross-Site Scripting (XSS) has been ranked among the top three vulnerabilities over the last few years. An XSS vulnerability allows an attacker to inject arbitrary JavaScript code that is executed in the victim's browser, causing unwanted behaviors and security breaches. Despite the presence of many mitigation approaches, XSS vulnerabilities are still widely discovered in today's web applications. As a result, there is a need to improve existing solutions and to develop novel attack detection techniques. This paper proposes a proxy-level XSS attack detection approach based on a popular information-theoretic measure known as Kullback-Leibler Divergence (KLD). The JavaScript code present in a rendered web page should remain similar or very close to the legitimate JavaScript code present in the application; a deviation between the two can indicate an XSS attack. This paper applies a back-off smoothing technique to effectively detect the presence of malicious JavaScript code in response pages. The proposed approach has been applied to a number of open-source PHP web applications containing XSS vulnerabilities. Initial results show that the approach can effectively detect XSS attacks with a low false positive rate, given a proper choice of KLD threshold values. Further, the performance overhead has been found to be negligible.
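
For illustration, here is a minimal sketch of the core comparison, assuming character-level unigram distributions over the expected and rendered JavaScript; the back-off constant, example strings, and KLD threshold are hypothetical and do not reproduce the paper's exact scheme.

```python
# Sketch: compare expected vs. rendered JavaScript with Kullback-Leibler
# Divergence. Back-off smoothing assigns a small probability mass to symbols
# unseen on one side so the divergence stays finite.
import math
from collections import Counter

def distribution(text, vocab, backoff=1e-6):
    """Smoothed unigram probabilities over `vocab` (back-off mass is hypothetical)."""
    counts = Counter(text)
    total = sum(counts.values())
    probs = {s: (counts.get(s, 0) / total or backoff) for s in vocab}
    norm = sum(probs.values())                 # renormalise after smoothing
    return {s: p / norm for s, p in probs.items()}

def kld(p, q):
    """Kullback-Leibler divergence D(P || Q)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p)

legitimate_js = "document.getElementById('total').value = subtotal + tax;"
rendered_js = legitimate_js + "<script>alert(document.cookie)</script>"

vocab = set(legitimate_js) | set(rendered_js)
divergence = kld(distribution(rendered_js, vocab), distribution(legitimate_js, vocab))

THRESHOLD = 0.1                                # hypothetical; tuned empirically
print(f"KLD = {divergence:.4f}, attack suspected: {divergence > THRESHOLD}")
```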


2021 ◽  
Vol 13 (1) ◽  
pp. 1-6
Author(s):  
Dimaz Arno Prasetio ◽  
Kusrini Kusrini ◽  
M. Rudyanto Arief

This study measures the classification accuracy of XSS attack detection using a combination of two methods of determining feature characteristics, namely linguistic computation and feature selection. XSS attacks follow certain patterns in their character arrangement, which a learner can capture using n-gram modeling; in other cases, XSS payloads carry meta and synthetic characteristics that can be learned using feature selection modeling. The resulting hybrid feature modeling gives good accuracy, with an accuracy value of 99.87%, better than previous studies whose average is still below 99%. This study also analyzes the false positive rate, since the false positive rate in attack detection strongly affects the workload of the information security team; with the proposed modeling, the false positive rate is very small, namely 0.039%.
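
A minimal sketch of the hybrid idea, assuming character n-grams as the linguistic features and chi-square feature selection; the sample payloads, n-gram range, selector, and classifier are illustrative, not the study's actual setup.

```python
# Sketch: character n-gram features plus chi-square feature selection,
# feeding a simple classifier. All data and parameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    "<script>alert(1)</script>",        # XSS
    "<img src=x onerror=alert(1)>",     # XSS
    "search?q=shoes&page=2",            # benign
    "profile?name=alice",               # benign
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # n-gram modeling
    SelectKBest(chi2, k=20),                               # feature selection
    LogisticRegression(),
)
model.fit(samples, labels)
print(model.predict(["<svg onload=alert(1)>"]))            # expect [1]
```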


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Kang Leng Chiew ◽  
Jeffrey Soon-Fatt Choo ◽  
San Nah Sze ◽  
Kelvin S. C. Yong

Phishing is a cybercrime that can lead to severe financial losses for Internet users and entrepreneurs. Typically, phishers are fond of using fuzzy techniques during the creation of a website, confusing the victim by imitating the appearance and content of a legitimate website. In addition, many websites are vulnerable to phishing attacks, including those of financial institutions, social networks, e-commerce, and airlines. This paper is an extension of our previous work that leverages the favicon with Google image search to reveal the identity of a website. Our identity retrieval technique involves an effective mathematical model that assists in retrieving the right identity from the many entries of the search results. In this paper, we introduce an enhanced version of the favicon-based phishing attack detection with a Domain Name Amplification feature and the incorporation of additional features, which are very useful when the website being examined does not have a favicon. We collected a total of 5,000 phishing websites from PhishTank and 5,000 legitimate websites from Alexa to verify the effectiveness of the proposed method. From the experimental results, we achieved a 96.93% true positive rate with only a 4.13% false positive rate.
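
As a hedged illustration of the identity-matching step only: given a hypothetical list of result URLs returned by an image search for a favicon, pick the dominant hostname and flag a mismatch with the visited domain. The favicon download, the search itself, and the paper's scoring model are omitted.

```python
# Sketch: flag a page whose favicon-revealed identity does not match the
# domain the user is visiting. Result URLs here are hypothetical.
from collections import Counter
from urllib.parse import urlparse

def favicon_identity(result_urls):
    """Most frequent hostname among the image-search results."""
    hosts = [urlparse(u).hostname for u in result_urls]
    return Counter(hosts).most_common(1)[0][0]

def looks_like_phishing(visited_host, result_urls):
    identity = favicon_identity(result_urls)
    # A legitimate site's hostname should match the identity its favicon reveals.
    return identity is not None and identity != visited_host

results = ["https://www.paypal.com/home",
           "https://www.paypal.com/login",
           "https://news.example.org/paypal-story"]
print(looks_like_phishing("secure-paypa1-login.example.net", results))  # True
```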


Computers ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 35 ◽  
Author(s):  
Xuan Dau Hoang ◽  
Ngoc Tuong Nguyen

Defacement attacks have long been considered one of the prime threats to the websites and web applications of companies, enterprises, and government organizations. Defacement attacks can bring serious consequences to the owners of websites, including immediate interruption of website operations and damage to the owner's reputation, which may result in huge financial losses. Many solutions have been researched and deployed for monitoring and detecting website defacement attacks, such as those based on checksum comparison, diff comparison, DOM tree analysis, and complicated algorithms. However, some solutions only work on static websites and others demand extensive computing resources. This paper proposes a hybrid defacement detection model based on the combination of machine learning-based detection and signature-based detection. The machine learning-based detection first constructs a detection profile using training data of both normal and defaced web pages. It then uses the profile to classify monitored web pages as either normal or attacked. The machine learning-based component can effectively detect defacements on both static and dynamic pages, while the signature-based detection boosts the model's processing performance for common types of defacements. Extensive experiments show that our model produces an overall accuracy of more than 99.26% and a false positive rate of about 0.27%. Moreover, our model is suitable for implementing a real-time website defacement monitoring system because it does not demand extensive computing resources.
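
A minimal sketch of the hybrid flow under stated assumptions: a cheap signature pass for well-known defacement markers, with a trained classifier as the fallback. The signatures, the toy features, and the stand-in classifier are placeholders, not the paper's actual detection profile.

```python
# Sketch: signature pass first (fast path), machine-learning pass second.
import re

DEFACEMENT_SIGNATURES = [          # hypothetical common-defacement markers
    re.compile(r"hacked\s+by", re.I),
    re.compile(r"owned\s+by", re.I),
]

def extract_features(page_html):
    """Toy features: page length and script-tag count (illustrative only)."""
    return [len(page_html), page_html.lower().count("<script")]

def detect(page_html, classifier):
    if any(sig.search(page_html) for sig in DEFACEMENT_SIGNATURES):
        return "defaced"                       # signature-based fast path
    features = extract_features(page_html)
    return "defaced" if classifier.predict([features])[0] == 1 else "normal"

class AlwaysNormal:                            # stand-in for the trained model
    def predict(self, X):
        return [0 for _ in X]

print(detect("<html><body>HACKED BY crew</body></html>", AlwaysNormal()))
```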


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1556
Author(s):  
Haithem Ben Chikha ◽  
Ahmad Almadhor ◽  
Waqas Khalid

Modulation detection techniques have received much attention in recent years due to their importance in military and commercial applications, such as software-defined radio and cognitive radios. Most existing modulation detection algorithms address detection in non-cooperative systems only. In this work, we propose the detection of modulations in multi-relay cooperative multiple-input multiple-output (MIMO) systems for 5G communications in the presence of spatially correlated channels and imperfect channel state information (CSI). At the destination node, we extract the higher-order statistics of the received signals as the discriminating features. After applying the principal component analysis technique, we carry out a comparative study between the random committee and AdaBoost machine learning techniques (MLTs) at low signal-to-noise ratios. The efficiency metrics, including the true positive rate, false positive rate, precision, recall, F-measure, and the time taken to build the model, are used for the performance comparison. The simulation results show that the random committee MLT, compared to the AdaBoost MLT, provides gains in terms of both modulation detection and complexity.
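
A minimal sketch of the feature pipeline only, assuming single-antenna AWGN baseband samples rather than the paper's relay-assisted MIMO setup: higher-order cumulants as features, PCA for dimensionality reduction, and AdaBoost as the classifier (scikit-learn has no random committee learner).

```python
# Sketch: higher-order cumulant features -> PCA -> ensemble classifier.
# Signal model, SNR, and feature set are simplified for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

def cumulant_features(x):
    """Magnitudes of fourth-order cumulants C40 and C42 of a complex signal."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))   # normalise to unit power
    m20, m21 = np.mean(x ** 2), np.mean(np.abs(x) ** 2)
    c40 = np.mean(x ** 4) - 3 * m20 ** 2
    c42 = np.mean(np.abs(x) ** 4) - np.abs(m20) ** 2 - 2 * m21 ** 2
    return [np.abs(c40), np.abs(c42)]

rng = np.random.default_rng(0)

def received(symbols, snr_db=5):               # AWGN-only toy channel
    noise = (rng.normal(size=symbols.size) + 1j * rng.normal(size=symbols.size)) / np.sqrt(2)
    return symbols + noise * 10 ** (-snr_db / 20)

bpsk = rng.choice([-1.0, 1.0], 2048).astype(complex)
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), 2048)

X = np.array([cumulant_features(received(s)) for s in [bpsk, qpsk] * 50])
y = np.array([0, 1] * 50)                      # 0 = BPSK, 1 = QPSK

X_pca = PCA(n_components=2).fit_transform(X)
clf = AdaBoostClassifier().fit(X_pca, y)
print("training accuracy:", clf.score(X_pca, y))
```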


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Jiazhong Lu ◽  
Fengmao Lv ◽  
Zhongliu Zhuo ◽  
Xiaosong Zhang ◽  
Xiaolei Liu ◽  
...  

Advanced cyberattacks often feature multiple types, layers, and stages, with the goal of deceiving monitors. Existing anomaly detection systems usually search logs or traffic alone for evidence of attacks but ignore further analysis of attack processes. For instance, traffic detection methods can only detect attack flows roughly but fail to reconstruct the attack event process and reveal the current network node status. As a result, they cannot fully model a complex multistage attack. To address these problems, we present Traffic-Log Combined Detection (TLCD), a multistage intrusion analysis system. Inspired by multiplatform intrusion detection techniques, we integrate traffic with network device logs through association rules. TLCD correlates log data with traffic characteristics to reflect the attack process and constructs a federated detection platform. Specifically, TLCD can discover the process steps of a cyberattack, reflect the current network status, and reveal the behaviors of normal users. Our experimental results over different cyberattacks demonstrate that TLCD works well with high accuracy and a low false positive rate.
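
A minimal sketch of the correlation step under stated assumptions: flow records and device log entries are joined when they share a host within a time window, so one attack stage can be described from both sources. The record layouts and window are hypothetical; the paper's association rules are not reproduced.

```python
# Sketch: join traffic flows with device logs by shared host and time window.
from datetime import datetime, timedelta

flows = [{"time": datetime(2019, 3, 1, 10, 0, 5),
          "src": "10.0.0.7", "dst": "10.0.0.2", "bytes": 920000}]
logs = [{"time": datetime(2019, 3, 1, 10, 0, 8),
         "host": "10.0.0.2", "event": "admin login failed"}]

def correlate(flows, logs, window=timedelta(seconds=30)):
    for flow in flows:
        for log in logs:
            same_host = log["host"] in (flow["src"], flow["dst"])
            close_in_time = abs(log["time"] - flow["time"]) <= window
            if same_host and close_in_time:
                yield flow, log          # one correlated step of the attack

for flow, log in correlate(flows, logs):
    print(flow["src"], "->", flow["dst"], "|", log["event"])
```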


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Bakare K. Ayeni ◽  
Junaidu B. Sahalu ◽  
Kolawole R. Adeyanju

With improvements in computing and technological advancements, web-based applications are now ubiquitous on the Internet. However, these web applications are becoming prone to vulnerabilities, which have led to theft of confidential information, data loss, and denial of data access in the course of information transmission. Cross-site scripting (XSS) is a form of web security attack that involves the injection of malicious code into web applications from untrusted sources. Interestingly, recent research studies on web application security focus on attack prevention and mechanisms for secure coding; existing methods for detecting those attacks not only generate high false positives but also give little consideration to the users, who are often the victims of malicious attacks. Motivated by this problem, this paper describes an “intelligent” tool for detecting cross-site scripting flaws in web applications. It describes a method, implemented based on fuzzy logic, for detecting classic XSS weaknesses and provides some experimental results. Our detection framework recorded a 15% improvement in accuracy and a 0.01% reduction in the false-positive rate, which is considerably lower than that found in the existing work by Koli et al. Our approach also serves as a decision-making tool for the users.
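
A minimal sketch of a fuzzy-logic scoring step, assuming two illustrative inputs (script-tag count and encoded-character count) with triangular membership functions and one OR rule; the paper's actual features and rule base are not shown.

```python
# Sketch: fuzzy membership functions combined by a simple OR rule.
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def xss_risk(script_tags, encoded_chars):
    many_tags = triangular(script_tags, 0, 3, 6)
    many_encoded = triangular(encoded_chars, 0, 10, 20)
    # Rule: IF many script tags OR many encoded characters THEN risk is high
    return max(many_tags, many_encoded)

print(xss_risk(script_tags=2, encoded_chars=12))   # fuzzy risk in [0, 1]
```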


2013 ◽  
Vol 5 (2) ◽  
pp. 94-97
Author(s):  
Dr. Vinod Kumar ◽  
Mr Sandeep Agarwal ◽  
Mr Avtar Singh

In this paper, we propose a cross-layer based intrusion detection technique for wireless networks. In this technique, a combined weight value is computed from the Received Signal Strength (RSS) and the Time Taken for the RTS-CTS handshake between sender and receiver (TT). Since it is not possible for an attacker to guess exactly the RSS a receiver observes for a sender, it is a useful measure for intrusion detection. We develop a dynamic profile for the communicating nodes based on their RSS values by monitoring the RSS values periodically, for a specific Mobile Station (MS) or Base Station (BS), from a server. Monitoring the observed TT values at the server provides a reliable passive detection mechanism for session hijacking attacks, since TT is an unspoofable parameter related to its measuring entity. If the weight value is greater than a threshold value, then the corresponding node is considered an attacker. By suitably adjusting the threshold value and the weight constants, we can significantly reduce the false positive rate. Simulation results show that our proposed technique attains a low misdetection ratio and false positive rate while increasing the packet delivery ratio.
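
A minimal sketch of the combined-weight test, assuming hypothetical weight constants, a per-node profile of expected RSS and handshake time, and an arbitrary threshold.

```python
# Sketch: weighted deviation of observed RSS and RTS-CTS time from a profile.
def combined_weight(rss_obs, rss_profile, tt_obs, tt_profile, w1=0.6, w2=0.4):
    """w1, w2 are hypothetical weight constants; RSS in dBm, TT in ms."""
    return w1 * abs(rss_obs - rss_profile) + w2 * abs(tt_obs - tt_profile)

THRESHOLD = 5.0    # hypothetical; tuned to trade off false positives

w = combined_weight(rss_obs=-62.0, rss_profile=-70.0, tt_obs=1.9, tt_profile=1.2)
print("attacker" if w > THRESHOLD else "normal", round(w, 2))
```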


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Futai Zou ◽  
Siyu Zhang ◽  
Weixiong Rao ◽  
Ping Yi

Malware remains a major threat to today's Internet. In this paper, we propose a DNS graph mining-based malware detection approach. A DNS graph is composed of DNS nodes, which represent server IPs, client IPs, and queried domain names in the process of DNS resolution. After the graph construction, we transform the problem of malware detection into the graph mining task of inferring graph nodes' reputation scores using the belief propagation algorithm. Nodes with lower reputation scores are inferred to be infected by malware with higher probability. For demonstration, we evaluate the proposed malware detection approach on a real-world dataset collected from campus DNS servers over three months, from which we built a DNS graph consisting of 19,340,820 vertices and 24,277,564 edges. On this graph, we achieve a true positive rate of 80.63% with a false positive rate of 0.023%. With a false positive rate of 1.20%, the true positive rate improves to 95.66%. We detected 88,592 hosts infected by malware or acting as C&C servers, accounting for 5.47% of all hosts. Meanwhile, 117,971 domains are considered to be related to malicious activities, accounting for 1.5% of all domains. The results indicate that our method is efficient and effective in detecting malware.
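
As a hedged illustration of how scores flow through such a graph: real belief propagation passes messages over edge potentials, but the simplified iteration below just averages neighbour scores from a few labelled seeds on a tiny hypothetical client-domain graph.

```python
# Simplified reputation propagation on a toy DNS graph (not full belief
# propagation): labelled seeds stay fixed, unlabelled nodes take the mean
# of their neighbours' scores until the values settle.
edges = {
    ("client1", "evil-domain.example"), ("client1", "good.example"),
    ("client2", "evil-domain.example"), ("client3", "good.example"),
}
seeds = {"evil-domain.example": 0.0, "good.example": 1.0}  # 0 = bad, 1 = good

neighbours = {}
for a, b in edges:
    neighbours.setdefault(a, set()).add(b)
    neighbours.setdefault(b, set()).add(a)

scores = {node: seeds.get(node, 0.5) for node in neighbours}
for _ in range(10):                      # iterate until roughly stable
    for node in neighbours:
        if node not in seeds:
            scores[node] = sum(scores[m] for m in neighbours[node]) / len(neighbours[node])

print({node: round(s, 2) for node, s in scores.items()})
# clients that mostly contact bad domains end up with low reputation scores
```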

