TokenCheck: Towards Deep Learning Based Security Vulnerability Detection In ERC-20 Tokens

Author(s):  
Subhasish Goswami ◽  
Rabijit Singh ◽  
Nayanjeet Saikia ◽  
Kaushik Kumar Bora ◽  
Utpal Sharma

2021 ◽  
Vol 3 (2(59)) ◽  
pp. 19-23
Author(s):  
Yevhenii Kubiuk ◽  
Gennadiy Kyselov

The object of research of this work is deep learning methods for source code vulnerability detection. One of the most problematic areas is the use of only a single approach in the code analysis process: either an approach based on the abstract syntax tree (AST) or one based on the program dependence graph (PDG). In this paper, a comparative analysis of these two approaches to source code vulnerability detection was conducted, together with an analysis of the various neural network topologies used in AST-based and PDG-based approaches. As a result of the comparison, the advantages and disadvantages of each approach were determined and summarized in the corresponding comparison tables. The analysis showed that BLSTM (Bidirectional Long Short-Term Memory) and BGRU (Bidirectional Gated Recurrent Unit) networks give the best results for source code vulnerability detection, and that the most effective approach for vulnerability detection systems is one that operates on an intermediate representation of the code, which yields a language-independent tool. This work also proposes its own algorithm for a source code analysis system that can predict source code vulnerabilities, classify them, and generate a corresponding patch for each vulnerability found. A detailed analysis of the proposed system's unresolved issues, which are planned for future research, is provided. The proposed system could help speed up the software development process as well as reduce the number of vulnerabilities in software code. Software developers, as well as specialists in the field of cybersecurity, can be stakeholders of the proposed system.
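As a concrete illustration of the kind of model this abstract compares, the following is a minimal sketch of a BGRU-based classifier over token sequences drawn from an intermediate code representation (for example, PDG slices). It assumes PyTorch; the class name, dimensions, and vocabulary size are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a bidirectional GRU (BGRU) classifier over
# integer-encoded token sequences from an intermediate code representation.
# All names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn

class BGRUVulnerabilityClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bgru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer-encoded code tokens
        embedded = self.embedding(token_ids)
        _, hidden = self.bgru(embedded)          # hidden: (2, batch, hidden_dim)
        features = torch.cat([hidden[0], hidden[1]], dim=1)
        return self.classifier(features)         # logits: vulnerable vs. safe

# Example usage with dummy data:
model = BGRUVulnerabilityClassifier(vocab_size=5000)
dummy_batch = torch.randint(1, 5000, (8, 200))   # 8 code slices, 200 tokens each
logits = model(dummy_batch)                      # shape: (8, 2)
```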


2020 ◽  
Vol 10 (22) ◽  
pp. 7954
Author(s):  
Lu Wang ◽  
Xin Li ◽  
Ruiheng Wang ◽  
Yang Xin ◽  
Mingcheng Gao ◽  
...  

Automated vulnerability detection is one of the critical issues in software security. Existing solutions to this problem are mostly based on features defined by human experts, which directly leads to missed potential vulnerabilities. Deep learning is an effective method for automating the extraction of vulnerability characteristics. Our paper proposes intelligent, automated vulnerability detection using deep representation learning and heterogeneous ensemble learning. Firstly, we transform sample data from source code by removing segments unrelated to the vulnerability, reducing the amount of code to analyze and improving detection efficiency in our experiments. Secondly, we represent the sample data as real-valued vectors by pre-training on the corpus while maintaining its semantic information. Thirdly, the vectors are fed to a deep learning model to obtain vulnerability features. Lastly, we train a heterogeneous ensemble classifier. To evaluate the detection method, we separately analyze the effectiveness and resource consumption of different network models, pre-training methods, classifiers, and vulnerabilities. We also compare our approach with well-known commercial vulnerability detection tools and academic methods. The experimental results show that our proposed method improves false positive rate, false negative rate, precision, recall, and F1 score.
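To make the heterogeneous-ensemble step described above concrete, the following is a minimal sketch, assuming scikit-learn and feature vectors already produced by a deep model. The classifier mix, feature dimensions, and random stand-in data are assumptions for illustration and do not reproduce the paper's actual pipeline.

```python
# Illustrative sketch only: a heterogeneous ensemble of different classifier
# families voting over deep-learned vulnerability features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Stand-in for deep-learned feature vectors: 1,000 samples, 256 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
y = rng.integers(0, 2, size=1000)   # 1 = vulnerable, 0 = not vulnerable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Heterogeneous ensemble: different model families vote on each sample.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",   # average predicted class probabilities across classifiers
)
ensemble.fit(X_train, y_train)
print(classification_report(y_test, ensemble.predict(X_test)))
```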


2019 ◽  
Vol 79 (23-24) ◽  
pp. 16077-16091 ◽  
Author(s):  
JaeHan Jeong ◽  
Sungmoon Kwon ◽  
Man-Pyo Hong ◽  
Jin Kwak ◽  
Taeshik Shon

PLoS ONE ◽  
2019 ◽  
Vol 14 (8) ◽  
pp. e0221530
Author(s):  
Yuancheng Li ◽  
Longqiang Ma ◽  
Liang Shen ◽  
Junfeng Lv ◽  
Pan Zhang

Significance: This presents a safety concern in some areas and creates difficulties in guaranteeing that AI decisions are unbiased, for example in their treatment of different demographic groups.

Impacts: Adoption of deep learning AI in areas such as law, finance and employment will raise concerns over bias and discrimination. Lack of explainability represents a potential safety concern in areas such as medicine and autonomous vehicles. Inexplicable deep learning AI could be manipulated by malicious actors, creating a security vulnerability in defence applications.

