A Convolutional Neural Network for Language-Agnostic Source Code Summarization

Author(s):  
Jessica Moore ◽  
Ben Gelman ◽  
David Slater
2021 ◽  
Vol 97 ◽  
pp. 104075
Author(s):  
Francesco Barchi ◽  
Emanuele Parisi ◽  
Gianvito Urgese ◽  
Elisa Ficarra ◽  
Andrea Acquaviva

2021 ◽  
pp. 1-13
Author(s):  
Jingfei Chang ◽  
Yang Lu ◽  
Ping Xue ◽  
Xing Wei ◽  
Zhen Wei

Deep convolutional neural networks (CNNs) are difficult to deploy on mobile and portable devices due to their large number of parameters and floating-point operations (FLOPs). To tackle this problem, we propose a novel channel pruning method. We use modified squeeze-and-excitation blocks (MSEB) to measure the importance of the channels in the convolutional layers. The unimportant channels, along with the convolutional kernels related to them, are pruned directly, which greatly reduces storage cost and the number of calculations. For ResNet with basic blocks, we propose an approach that consistently prunes all residual blocks in the same stage so that the compact network structure remains dimensionally correct. After pruning, we retrain the compact network from scratch to restore its accuracy. Finally, we verify our method on CIFAR-10, CIFAR-100 and ILSVRC-2012. The results indicate that the compact network outperforms the original when the pruning rate is small; even when the pruning rate is large, accuracy is maintained or drops only slightly. On CIFAR-100, when reducing the parameters and FLOPs by up to 82% and 62% respectively, the accuracy of VGG-19 even improves by 0.54% after retraining. The source code is available at https://github.com/JingfeiChang/UCP.
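
To make the channel-selection idea concrete, here is a minimal PyTorch sketch, not the authors' code (which is linked above): channel importance is approximated with a plain squeeze-and-excitation gate standing in for the MSEB, and the helper names `SEScore` and `prune_conv` are ours.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: score channels with a squeeze-and-excitation style gate,
# then drop the lowest-scoring output channels of a conv layer. The paper's
# modified SE block (MSEB) is approximated here by a standard SE gate.

class SEScore(nn.Module):
    """Standard squeeze-and-excitation gate used as a channel-importance proxy."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        squeezed = x.mean(dim=(2, 3))          # global average pool -> (N, C)
        return self.fc(squeezed)               # per-channel scores in (0, 1)

def prune_conv(conv: nn.Conv2d, scores: torch.Tensor, keep_ratio: float) -> nn.Conv2d:
    """Keep the top `keep_ratio` fraction of output channels by mean score."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = scores.mean(dim=0).topk(n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()   # copy surviving kernels
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Usage: score channels on a batch, then prune half of them.
conv = nn.Conv2d(64, 128, 3, padding=1)
gate = SEScore(128)
x = torch.randn(8, 64, 32, 32)
scores = gate(conv(x))                         # (8, 128) importance scores
conv_small = prune_conv(conv, scores, keep_ratio=0.5)
print(conv_small)                              # Conv2d(64, 64, ...)
```

After such a pruning pass, the compact network would be retrained from scratch, as the abstract describes.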


Author(s):  
Loris Nanni ◽  
Sheryl Brahnam ◽  
Michelangelo Paci ◽  
Gianluca Maguolo

Recently, much attention has been devoted to finding highly efficient and powerful activation functions for CNN layers. Because activation functions inject different nonlinearities between layers that affect performance, varying them is one method for building robust ensembles of CNNs. The objective of this study is to examine the performance of CNN ensembles made with different activation functions, including six new ones presented here: 2D Mexican ReLU, TanELU, MeLU+GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The highest-performing ensemble was built from CNNs in which the standard ReLU layers were randomly replaced with different activation functions. A comprehensive evaluation of the proposed approach was conducted across fifteen biomedical data sets representing various classification tasks. The proposed method was tested on two basic CNN architectures: Vgg16 and ResNet50. Results demonstrate the superior performance of this approach. The MATLAB source code for this study will be available at https://github.com/LorisNanni.
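
The study's code is in MATLAB (linked above); purely as an illustration of the random-replacement idea, here is a hypothetical PyTorch sketch in which stock activations stand in for the paper's MeLU/GaLU variants and the helper `randomize_activations` is ours.

```python
import random
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Hypothetical sketch: build an ensemble by cloning a base CNN and randomly
# swapping each ReLU for an activation drawn from a pool. The paper uses
# custom activations (MeLU, GaLU, ...); stock PyTorch ones stand in here.
POOL = [nn.ReLU, nn.LeakyReLU, nn.ELU, nn.GELU, nn.SiLU]

def randomize_activations(model: nn.Module) -> nn.Module:
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, random.choice(POOL)())  # swap in a random activation
        else:
            randomize_activations(child)                 # recurse into submodules
    return model

# Ensemble prediction: average the softmax outputs of the variant networks.
ensemble = [randomize_activations(vgg16(num_classes=10)) for _ in range(3)]
for m in ensemble:
    m.eval()
x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble]).mean(dim=0)
print(probs.shape)  # (2, 10): averaged class probabilities
```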


2021 ◽  
Vol 7 ◽  
pp. e739
Author(s):  
Ahmed Bahaa Farid ◽  
Enas Mohamed Fathy ◽  
Ahmed Sharaf Eldin ◽  
Laila A. Abd-Elmegid

In recent years, the software industry has invested substantial effort in improving software quality. Applying proactive software defect prediction helps developers and white-box testers find defects earlier, reducing time and effort. Traditional software defect prediction models concentrate on conventional features of source code, such as code complexity and lines of code; however, these features fail to capture the semantics of the code. In this research, we propose a hybrid model called CBIL that predicts the defective areas of source code. It extracts Abstract Syntax Tree (AST) tokens as integer vectors from source code, and mapping and word embedding turn these into dense vectors. A Convolutional Neural Network (CNN) then extracts the semantics of the AST tokens, after which a Bidirectional Long Short-Term Memory (Bi-LSTM) retains the key features and discards the rest to enhance the accuracy of defect prediction. CBIL is evaluated on a sample of seven open-source Java projects from the PROMISE dataset, using F-measure and area under the curve (AUC) as evaluation metrics. The results show that CBIL improves average F-measure by 25% over CNN, the strongest of the selected baseline models on that metric, and improves average AUC by 18% over the Recurrent Neural Network (RNN), the strongest baseline on that metric in the experiments.
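
As a schematic illustration of the pipeline the abstract describes (AST token embedding → CNN → Bi-LSTM → classifier), here is a hypothetical PyTorch sketch; every layer size and the name `CBILSketch` are our assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the CBIL-style pipeline described above:
# AST token ids -> embedding -> 1-D CNN -> Bi-LSTM -> defect probability.
# All dimensions below are assumptions, not the paper's configuration.
class CBILSketch(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, conv_ch=64, lstm_h=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Sequential(                     # extracts local semantics
            nn.Conv1d(embed_dim, conv_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(conv_ch, lstm_h, batch_first=True,
                              bidirectional=True)      # keeps long-range key features
        self.head = nn.Linear(2 * lstm_h, 1)           # defective vs. clean

    def forward(self, tokens):                         # tokens: (N, L) integer ids
        x = self.embed(tokens).transpose(1, 2)         # (N, E, L) for Conv1d
        x = self.conv(x).transpose(1, 2)               # (N, L/2, C)
        _, (h, _) = self.bilstm(x)                     # h: (2, N, H), final states
        h = torch.cat([h[0], h[1]], dim=1)             # concat both directions
        return torch.sigmoid(self.head(h)).squeeze(1)  # P(defective)

model = CBILSketch()
tokens = torch.randint(1, 5000, (4, 200))              # 4 files, 200 AST tokens each
print(model(tokens))                                   # 4 defect probabilities
```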


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, which assumes that the point spread function (PSF) is known and spatially invariant. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs that occur in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture is able to preserve both large and small features in the image. The second is that the training dataset is created to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments with three kinds of large PSFs, our method produced high-precision results both quantitatively and qualitatively.
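
The border-extension idea (the third key point) can be illustrated independently of the CNN. Below is a hypothetical NumPy sketch in which classical Wiener deconvolution stands in for the paper's network: the blurred image is reflect-padded so that ringing falls in the margin, then cropped back; all names and the SNR constant are ours.

```python
import numpy as np

# Hypothetical sketch of the border-extension idea: pad the blurred image by
# reflection before deconvolving, so ringing lands in the padded margin,
# then crop back. Wiener deconvolution stands in for the paper's CNN.
def wiener_deconv(blurred, psf, pad, snr=0.01):
    ext = np.pad(blurred, pad, mode="reflect")        # extend the borders
    H = np.fft.fft2(psf, s=ext.shape)                 # PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + snr)           # Wiener filter
    restored = np.real(np.fft.ifft2(np.fft.fft2(ext) * G))
    return restored[pad:-pad, pad:-pad]               # crop the margin off

# Usage: blur a test image with a large box PSF, then restore it.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
psf = np.ones((15, 15)) / 225.0                       # large 15x15 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
print(wiener_deconv(blurred, psf, pad=32).shape)      # (128, 128)
```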

