Analysis of Vulnerability Detection Tool for Web Services

2018 ◽  
Vol 7 (3.12) ◽  
pp. 773
Author(s):  
Senthamil Preethi K ◽  
Murugan A

The demand for web services is increasing day by day, and as a result their security is at risk. To protect against distinct types of attacks, developers need to select vulnerability detection tools; since many tools are available on the market, a major challenge for a developer is finding the tool best suited to the application's requirements. Recent studies show that many vulnerability detection tools provide low coverage of vulnerabilities and a high false positive rate. In this paper, we propose a benchmarking method for assessing and comparing the effectiveness of vulnerability detection tools in a web service environment. The method is illustrated with two benchmarks, for SQL injection and cross-site scripting. The first relies on a predefined set of web services, and the second lets the user define the workload (user-defined web services). The proposed system uses open-source and commercial tools to test applications against the benchmarking standards. Results show that the benchmarks accurately depict the effectiveness of vulnerability detection tools.
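The benchmark's core comparison reduces to two numbers per tool: detection coverage (recall over the vulnerabilities seeded in the benchmark services) and false positive rate. A minimal sketch of that scoring step, with all names and data invented (the abstract does not specify an implementation):

```python
def score_tool(reported, known_vulns, total_checks):
    """Score one detection tool against a benchmark's ground truth.

    reported     -- set of findings the tool flagged
    known_vulns  -- set of vulnerabilities seeded in the benchmark services
    total_checks -- number of locations the tool examined in total
    """
    true_pos = reported & known_vulns
    false_pos = reported - known_vulns
    benign = total_checks - len(known_vulns)
    coverage = len(true_pos) / len(known_vulns) if known_vulns else 0.0
    fp_rate = len(false_pos) / benign if benign else 0.0
    return coverage, fp_rate

# Example: a tool finds 3 of 4 seeded SQLi flaws plus 2 spurious reports
cov, fpr = score_tool({"v1", "v2", "v3", "x1", "x2"},
                      {"v1", "v2", "v3", "v4"}, total_checks=104)
```

Ranking tools by these two axes is what lets the benchmark compare open-source and commercial scanners on equal footing.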

Author(s):  
Jayashree K ◽  
Chithambaramani Ramalingam

The Enterprise Service Bus is an infrastructure that facilitates Service Oriented Architecture (SOA). SOA has gained a lot of attention in recent years and has become the de-facto standard for web application and software component integration. Web services are the prominent model for interoperable applications across heterogeneous systems and electronic business that use SOA, and they have been used in various applications. The number of web services available on the web is increasing day by day; hence web service discovery is becoming a difficult and time-consuming task. Clustering web services is an efficient approach to discovering them. It is also often necessary to compose several web services in order to achieve the user's goal. The chapter presents the background of web services and the various data mining techniques used for clustering them, then surveys the various web service clustering methods and the related work on techniques for clustering web services.
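As a toy illustration of the clustering idea the chapter surveys (real approaches use data mining techniques such as k-means over TF-IDF vectors; this sketch merely groups services whose description terms overlap enough, with all service names invented):

```python
def jaccard(a, b):
    """Term-set similarity between two service descriptions."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def cluster_services(services, threshold=0.3):
    """Greedy single-link grouping of services by description similarity."""
    clusters = []  # each cluster is a list of (name, description) pairs
    for name, desc in services.items():
        for cluster in clusters:
            if any(jaccard(desc, d) >= threshold for _, d in cluster):
                cluster.append((name, desc))
                break
        else:
            clusters.append([(name, desc)])
    return clusters

services = {
    "WeatherSvc": "current weather forecast by city",
    "ClimateSvc": "weather forecast and climate data by city",
    "PaySvc":     "credit card payment processing gateway",
}
groups = cluster_services(services)
```

A discovery engine can then match a query against cluster representatives instead of every service, which is what makes clustering pay off as the number of services grows.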


2013 ◽  
Vol 10 (2) ◽  
pp. 633-649 ◽  
Author(s):  
Byungha Choi ◽  
Kyungsan Cho

In this paper, we propose an improved detection scheme to protect a Web server from detoured attacks, which disclose confidential/private information or disseminate malware code through outbound traffic. Our scheme has a two-step hierarchy whose detection methods are complementary to each other. The first step is a signature-based detector that uses Snort and detects the marks of malware dissemination, XSS, URL spoofing and information leakage from the Web server. The second step is an anomaly-based detector which detects attacks by probability evaluation in an HMM, driven by both the payload and the traffic characteristics of outbound packets. Through verification analysis under an attacked Web server environment, we show that our proposed scheme improves the false positive rate and detection efficiency for detecting detoured attacks on a Web server.
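The anomaly step scores outbound traffic by its likelihood under a trained HMM; sequences far below a threshold are flagged. A minimal forward-algorithm sketch with an invented two-state toy model (not the paper's trained parameters or feature encoding):

```python
import math

def forward_log_prob(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm. Low values relative to a
    trained baseline would mark a packet sequence as anomalous."""
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n_states)) * emit[s][o]
                 for s in range(n_states)]
    return math.log(sum(alpha))

# Toy model: state 0 mostly emits symbol 0 ("text-like"), state 1 symbol 1
start = [0.8, 0.2]
trans = [[0.9, 0.1], [0.2, 0.8]]
emit  = [[0.9, 0.1], [0.1, 0.9]]
normal     = forward_log_prob([0, 0, 0, 0], start, trans, emit)
suspicious = forward_log_prob([1, 1, 1, 1], start, trans, emit)
```

Here the all-"binary" sequence scores lower than the all-"text" one, which is the signal the second-step detector thresholds on.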


2021 ◽  
Author(s):  
Junjun Guo ◽  
Zhengyuan Wang ◽  
Haonan Li ◽  
Yang Xue

Abstract: Vulnerabilities can have very serious consequences for information security, with huge implications for economic, social, and even national security. Automated vulnerability detection has always been a topic of keen interest for researchers. Approaches from traditional manual vulnerability mining to static and dynamic detection all rely on human experts to define features. The rapid development of machine learning and deep learning has alleviated the tedious task of manually defining features while reducing the lack of objectivity caused by human subjective awareness. However, we still need an objective characterization method to define the features of vulnerabilities. Therefore, we use code metrics for code characterization: sequences of metrics that represent code. To use code metrics for vulnerability detection, we propose VulnExplore, a deep learning-based vulnerability detection model that uses a composite CNN + LSTM neural network for feature extraction and learning over code metrics. Experimental results show that VulnExplore has a lower false positive rate, a lower miss rate, and better accuracy compared to other deep learning-based vulnerability detection models.
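The representation VulnExplore learns from is a sequence of code metrics per code unit. The CNN + LSTM model itself needs a deep-learning framework, but the metric-extraction step can be sketched in plain Python; the four metrics below are illustrative stand-ins, not the paper's exact metric set:

```python
def code_metrics(source: str) -> list:
    """Crude per-function metrics: lines of code, max nesting depth
    (assuming 4-space indents), branch count, and call count — a toy
    stand-in for the metric sequences fed to a CNN+LSTM."""
    lines = [l for l in source.splitlines() if l.strip()]
    loc = len(lines)
    depth = max((len(l) - len(l.lstrip())) // 4 for l in lines)
    branches = sum(l.strip().startswith(("if", "elif", "while", "for"))
                   for l in lines)
    calls = sum(l.count("(") for l in lines)
    return [loc, depth, branches, calls]

sample = """def f(x):
    if x > 0:
        for i in range(x):
            g(i)
    return x"""
metrics = code_metrics(sample)
```

Vectors like these, computed per function and stacked into sequences, are what the composite network then learns vulnerability patterns from.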


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xiang Yu ◽  
Wenchao Yu ◽  
Shudong Li ◽  
Xianfei Yang ◽  
Ying Chen ◽  
...  

Since services on the Internet are becoming increasingly abundant, all walks of life are inextricably linked with the Internet. At the same time, WEB attacks on the Internet have never stopped. Relative to other common WEB attacks, a WEB DDoS (distributed denial of service) attack can cause serious damage to the availability of the target network or system resources in a short period of time. At present, most research centers on machine learning-based DDoS attack detection algorithms. According to previous studies, unsupervised methods generally have a high false positive rate, while supervised methods cannot handle large amounts of network traffic data and their performance is often limited by noise and irrelevant data. Therefore, this paper proposes a semisupervised learning detection model combining spectral clustering and random forest to detect DDoS attacks on the WEB application layer, and compares it with other existing detection schemes to verify the proposed model. While ensuring a low false positive rate, it achieves a clear improvement in detection rate, making it better suited to WEB application layer DDoS attack detection.
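The shape of such a semisupervised pipeline — cluster the mostly unlabeled traffic, propagate the few known labels to whole clusters, then a supervised model learns from the pseudo-labels — can be sketched with toy stand-ins (a 1-D two-means in place of spectral clustering, majority voting in place of the random forest; data and thresholds invented):

```python
def two_means(xs, iters=20):
    """Tiny 1-D 2-means, a toy stand-in for spectral clustering."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        groups = [[x for x in xs if abs(x - lo) <= abs(x - hi)],
                  [x for x in xs if abs(x - lo) > abs(x - hi)]]
        lo, hi = (sum(g) / len(g) for g in groups)
    return [0 if abs(x - lo) <= abs(x - hi) else 1 for x in xs]

def pseudo_label(xs, labeled):
    """Give each cluster the majority label of its few labeled members,
    then label every point by its cluster."""
    assign = two_means(xs)
    votes = {0: [], 1: []}
    for i, lab in labeled.items():
        votes[assign[i]].append(lab)
    cluster_lab = {c: max(set(v), key=v.count) for c, v in votes.items() if v}
    return [cluster_lab.get(c) for c in assign]

# Requests/sec per source; only two sources have known labels
rates = [3, 4, 5, 90, 95, 100]
labels = pseudo_label(rates, {0: "benign", 5: "ddos"})
```

In the paper's full pipeline the pseudo-labeled set would then train the random forest, which is what keeps the false positive rate low despite scarce labels.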


2012 ◽  
Vol 198-199 ◽  
pp. 1457-1461 ◽  
Author(s):  
You Chan Zhu ◽  
Hui Li Liang

SQL injection is one of the most common security vulnerabilities of Web applications. This paper studies how to find possible SQL injection vulnerabilities. The proposed scheme is a black-box testing technique. The main steps are: first, construct specific user input during the test period of the Web application system and inject it into the application; then produce the vulnerability detection report by analyzing the test logs.
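The black-box loop — submit crafted inputs, then scan responses or logs for database error signatures — can be sketched as below. The payloads and error patterns are a small illustrative subset, and `submit`/`fake_app` stand in for whatever drives the application under test:

```python
SQLI_PAYLOADS = ["'", "' OR '1'='1", "1; DROP TABLE users--"]
ERROR_SIGNS = ["sql syntax", "unclosed quotation", "sqlexception", "odbc"]

def probe(submit):
    """Run each payload through the application and report the payloads
    whose response carries a database error signature."""
    findings = []
    for payload in SQLI_PAYLOADS:
        response = submit(payload).lower()
        if any(sign in response for sign in ERROR_SIGNS):
            findings.append(payload)
    return findings

# Toy application under test: leaks an error on an unbalanced quote
def fake_app(user_input):
    if user_input.count("'") % 2 == 1:
        return "500: You have an error in your SQL syntax"
    return "200: OK"

report = probe(fake_app)
```

The final detection report in the paper is essentially this `findings` list, aggregated over the test logs of a full scan.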


2021 ◽  
Author(s):  
Jonas Meisner ◽  
Anders Albrechtsen ◽  
Kristian Hanghøj

Abstract: Identification of selection signatures between populations is often an important part of a population genetic study. With high-throughput DNA sequencing, larger sample sizes of populations with similar ancestries have become increasingly common. This has led to the need for methods capable of identifying signals of selection in populations with a continuous cline of genetic differentiation. Individuals from continuous populations are inherently challenging to group into meaningful units, which is why existing methods rely on principal component analysis for inference of selection signals. These existing methods require called genotypes as input, which is problematic for studies based on low-coverage sequencing data. Here, we present two selection statistics which we have implemented in the PCAngsd framework. These methods account for genotype uncertainty, opening the opportunity to conduct selection scans in continuous populations from low- and/or variable-coverage sequencing data. To illustrate their use, we applied the methods to low-coverage sequencing data from human populations of East Asian and European ancestries and show that the implemented selection statistics can control the false positive rate and that they identify the same signatures of selection from low-coverage sequencing data as state-of-the-art software using high-quality called genotypes. Moreover, we show that PCAngsd outperforms selection statistics obtained from called genotypes from low-coverage sequencing data.
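PCAngsd works on genotype likelihoods rather than called genotypes, but the flavour of a PC-based selection scan can be shown with the classic FastPCA-style per-SNP statistic — n·r² between a SNP and a principal component, approximately chi-square(1) under neutrality. This is an illustrative simplification, not the paper's exact likelihood-aware formulation:

```python
def snp_selection_stat(genotypes, pc_scores):
    """n * r^2 between a SNP's genotype vector and a PC score vector;
    large values suggest the SNP differentiates along that axis of
    population structure."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    mp = sum(pc_scores) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(genotypes, pc_scores))
    vg = sum((g - mg) ** 2 for g in genotypes)
    vp = sum((p - mp) ** 2 for p in pc_scores)
    if vg == 0 or vp == 0:
        return 0.0
    r2 = cov * cov / (vg * vp)
    return n * r2

# A SNP almost perfectly aligned with the PC: statistic near n (= 6)
stat = snp_selection_stat([0, 0, 1, 1, 2, 2],
                          [-1.0, -1.1, 0.1, -0.1, 1.0, 1.1])
```

What the paper adds over this sketch is precisely the handling of genotype uncertainty: the sums above are replaced by expectations under per-site genotype likelihoods.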


2021 ◽  
Vol 15 (1) ◽  
pp. 112-120
Author(s):  
Umar Farooq

In the current era, the SQL injection attack is a serious threat to the security of the ongoing cyber world, particularly for the many web applications that reside on the internet. Many webpages accept sensitive information (e.g. usernames, passwords, bank details) from users and store it in databases that also reside on the internet. Although these online databases are important for remotely accessing information for various business purposes, attackers can gain unrestricted access to them or bypass authentication procedures with the help of a SQL injection attack. This attack results in great damage to and alteration of the database and has been ranked as the topmost security risk in the OWASP TOP 10. Considering the difficulty of detecting unknown attacks with current pattern-matching techniques, a strategy for SQL injection detection based on machine learning is proposed. Our motive is to detect this attack by splitting queries into their corresponding tokens with the help of tokenization and then applying our algorithms over the tokenized dataset. We used four ensemble machine learning algorithms: Gradient Boosting Machine (GBM), Adaptive Boosting (AdaBoost), Extreme Gradient Boosting Machine (XGBM), and Light Gradient Boosting Machine (LGBM). The results yielded by our models are near to perfection, with the error rate being almost negligible. The best results are yielded by LGBM, with an accuracy of 0.993371 and precision, recall, and F1 of 0.993373, 0.993371, and 0.993370, respectively; its False Positive Rate (FPR) and Root Mean Squared Error (RMSE) are 0.120761 and 0.007, respectively. The worst results are yielded by AdaBoost, with an accuracy of 0.991098, precision, recall, and F1 of 0.990733, 0.989175, and 0.989942, respectively, and an FPR of 0.009.
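The tokenization step — splitting each raw query into a token stream before it reaches the ensemble models — can be sketched as below. This is a toy tokenizer with invented normalisation rules; the paper does not give its exact scheme:

```python
import re

# String literals first, then numbers, words, and SQL punctuation
TOKEN_RE = re.compile(r"'[^']*'|\d+|\w+|--|[=<>;(),*]|\S")

def tokenize(query):
    """Split a raw SQL query into tokens, normalising literals so the
    classifier sees query structure rather than specific values."""
    tokens = []
    for tok in TOKEN_RE.findall(query):
        if tok.startswith("'"):
            tokens.append("STR")
        elif tok.isdigit():
            tokens.append("NUM")
        else:
            tokens.append(tok.upper())
    return tokens

toks = tokenize("SELECT * FROM users WHERE id = 1 OR '1'='1'")
```

Token sequences like this one (note the telltale `OR STR = STR` tail of a tautology injection) are what the GBM/AdaBoost/XGBM/LGBM models are trained on.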


2002 ◽  
Vol 41 (01) ◽  
pp. 37-41 ◽  
Author(s):  
S. Shung-Shung ◽  
S. Yu-Chien ◽  
Y. Mei-Due ◽  
W. Hwei-Chung ◽  
A. Kao

Summary Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. We therefore assessed the clinical efficacy of the Tc-99m HMPAO labeled leukocyte (TC-WBC) scan for the diagnosis of acute appendicitis in patients presenting with atypical clinical findings. Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images at 30, 60, 120 and 240 min with 800k counts were obtained with a gamma camera. Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan. Results: 36 of the 49 patients with positive TC-WBC scans received appendectomy, and all proved to have positive pathological findings. Five positive TC-WBC scans were not related to acute appendicitis but to other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of the 31 patients with negative TC-WBC scans received appendectomy; they also presented positive pathological findings. The remaining 28 patients did not receive operations and showed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, and positive and negative predictive values of the TC-WBC scan for diagnosing acute appendicitis were 92, 78, 86, 82, and 90%, respectively. Conclusion: The TC-WBC scan provides a rapid and highly accurate method for the diagnosis of acute appendicitis in patients with equivocal clinical examination. It proved useful in reducing the false-positive rate of laparotomy and shortens the time necessary for clinical observation.
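The reported figures are roughly consistent with a 2×2 table that sets aside the five scans positive for other pathology: TP = 36, FN = 3, FP = 8, TN = 28 (this reconstruction of the table is an inference from the counts above, not stated in the abstract). The standard definitions can be checked mechanically:

```python
def diagnostic_stats(tp, fn, fp, tn):
    """Standard 2x2 diagnostic test measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

stats = diagnostic_stats(tp=36, fn=3, fp=8, tn=28)
```

This reproduces sensitivity ≈ 92%, specificity ≈ 78%, PPV ≈ 82% and NPV ≈ 90%; the computed accuracy of 85.3% sits just below the reported 86%.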


1993 ◽  
Vol 32 (02) ◽  
pp. 175-179 ◽  
Author(s):  
B. Brambati ◽  
T. Chard ◽  
J. G. Grudzinskas ◽  
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. A comparison was made of two different approaches to the statistical analysis: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods computed that, for a 5% false-positive rate, approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. Logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary, and its detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population; it depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormal cases (30). Varying the means and standard deviations of the fitted log Gaussian distributions (to the limits of their 95% confidence intervals) resulted in a detection rate varying between 42% and 79% for a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method for determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log Gaussian distributions.
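The "detection rate at a 5% false-positive rate" of the Gaussian approach can be reproduced mechanically for any pair of fitted distributions: place the cut-off at the unaffected distribution's 95th percentile and ask how much of the affected distribution lies beyond it. A sketch with invented parameters (not the paper's fitted log-Gaussian values):

```python
from statistics import NormalDist

def detection_rate(unaffected, affected, fpr=0.05):
    """Detection rate at a fixed false-positive rate for two Gaussian
    marker distributions, assuming higher values indicate abnormality."""
    cutoff = unaffected.inv_cdf(1 - fpr)   # 95th percentile of normals
    return 1 - affected.cdf(cutoff)        # affected mass past the cutoff

# Invented example: affected marker distribution shifted by ~2 SD
rate = detection_rate(NormalDist(0.0, 1.0), NormalDist(2.0, 1.0))
```

The wide 42-79% range the authors report comes from re-running exactly this calculation while sliding the fitted means and standard deviations across their confidence intervals.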

