Image Recognition-based Deep Neural Network for Packed Malware Detection

Author(s): Xuchenming Sun, Yunchun Zhang, Chengjie Li, Xin Zhang, Yuting Zhong
Author(s): Syed Khurram Jah Rizvi, Warda Aslam, Muhammad Shahzad, Shahzad Saleem, Muhammad Moazam Fraz

Abstract Enterprises are striving to remain protected against malware-based cyber-attacks on their infrastructure, facilities, networks and systems. Static analysis is an effective approach to detect malware, i.e., malicious Portable Executables (PE). It performs an in-depth analysis of PE files without executing them, which is highly useful for minimizing the risk of a malicious PE contaminating the system. Yet, instant detection using static analysis has become very difficult due to the exponential rise in the volume and variety of malware. The compelling need for early-stage detection of malware-based attacks significantly motivates research towards automated malware detection. Recent machine learning-aided malware detection approaches using static analysis are mostly supervised. Supervised malware detection using static analysis requires manual labelling and human feedback; it is therefore less effective in a rapidly evolving and dynamic threat space. To this end, we propose a progressive deep unsupervised framework with a feature attention block for static analysis-based malware detection (PROUD-MAL). The framework is based on cascading blocks of unsupervised clustering and a feature attention-based deep neural network. The proposed deep neural network embedded with the feature attention block is trained on the pseudo labels. To evaluate the proposed unsupervised framework, we collected a real-time malware dataset by deploying low- and high-interaction honeypots on an enterprise organizational network. Moreover, an endpoint security solution was also deployed on the enterprise organizational network to collect malware samples. After post-processing and cleaning, the novel dataset consists of 15,457 PE samples comprising 8775 malicious and 6681 benign ones. The proposed PROUD-MAL framework achieved an accuracy of more than 98.09% with better quantitative performance in standard evaluation parameters on the collected dataset and outperformed conventional machine learning algorithms. The implementation and dataset are available at https://bit.ly/35Sne3a.
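As an illustration of the attention-over-features idea described in this abstract, the sketch below re-weights a static PE feature vector with learned attention scores and trains a small classifier on pseudo labels from a clustering stage. It is a minimal sketch assuming PyTorch and tabular static features; the layer sizes, the `FeatureAttentionClassifier` name and the random pseudo labels are illustrative assumptions, not the authors' PROUD-MAL implementation.

```python
# Hypothetical sketch of a feature-attention block over static PE features:
# attention weights re-scale the input features before a dense classifier
# that is trained on pseudo labels from an unsupervised clustering stage.
# Layer sizes and names are assumptions, not the authors' code.
import torch
import torch.nn as nn

class FeatureAttentionClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Attention branch: one weight per input feature, normalized over features.
        self.attention = nn.Sequential(
            nn.Linear(n_features, n_features),
            nn.Softmax(dim=-1),
        )
        # Classifier head producing a malicious/benign score.
        self.classifier = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.attention(x)      # per-feature attention weights
        attended = x * weights           # re-weighted feature vector
        return torch.sigmoid(self.classifier(attended)).squeeze(-1)

if __name__ == "__main__":
    x = torch.randn(32, 100)                          # 32 samples, 100 static PE features
    pseudo_labels = torch.randint(0, 2, (32,)).float()  # placeholder for clustering output
    model = FeatureAttentionClassifier(n_features=100)
    loss = nn.BCELoss()(model(x), pseudo_labels)
    loss.backward()
    print(float(loss))
```

In the described pipeline, the pseudo labels would come from a two-cluster unsupervised stage over the same static features rather than from random placeholders.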


Author(s): Anna Ilina, Vladimir Korenkov

The task of counting the number of people is relevant for various types of events, including seminars, lectures, conferences, meetings, etc. Instead of monotonous manual counting of participants, it is far more effective to use facial recognition technology, which makes it possible not only to quickly count those present but also to recognize each of them, enabling further analysis of these data, identification of patterns in them, and prediction. The research conducted in this paper assesses the quality of deep neural network-based facial recognition in images and video streams for solving the problem of automating attendance tracking.
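A minimal sketch of the kind of attendance-tracking pipeline this abstract describes is shown below, assuming the open-source `face_recognition` package (a deep embedding-based library) rather than the authors' own system; `known_people`, a hypothetical dictionary of pre-computed embeddings for registered attendees, is an assumption for illustration.

```python
# Illustrative attendance-counting sketch (not the authors' code): detect faces
# in a frame, embed them with a deep network, and match against known attendees.
import face_recognition

def count_attendance(frame_path: str, known_people: dict) -> set:
    image = face_recognition.load_image_file(frame_path)
    locations = face_recognition.face_locations(image)            # detect faces
    encodings = face_recognition.face_encodings(image, locations)  # deep embeddings
    names = list(known_people.keys())
    embeddings = [known_people[n] for n in names]                  # one embedding per person
    present = set()
    for enc in encodings:
        matches = face_recognition.compare_faces(embeddings, enc, tolerance=0.6)
        for name, matched in zip(names, matches):
            if matched:
                present.add(name)                                  # recognized attendee
    return present
```

Applying the same routine to frames sampled periodically from a video stream yields both a head count (`len(present)`) and a per-person attendance record that can be analyzed for patterns over time.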


2021, Vol 2074 (1), pp. 012083
Author(s): Xiangli Lin

Abstract With the vigorous development of electronic and computer technology, as well as the continuous advancement of research in the fields of neurophysiology, bionics and medicine, artificial visual prostheses have brought the blind hope of restoring their vision. Research on artificial visual prostheses has confirmed that prosthetic vision can restore part of the visual function of patients with non-congenital blindness, but the mechanism of early prosthetic image processing still needs to be clarified through neurophysiological research. The purpose of this article is to study neurophysiology based on deep neural networks under simulated prosthetic vision. This article uses neurophysiological experiments and mathematical statistical methods to study simulated prosthetic vision, and to test and improve the image processing strategies used in the visual design of simulated prostheses. Based on low-pixel image recognition with a simulated irregular phosphene point array, a deep neural network is used in the image processing strategy for prosthetic vision, and the effect of each image processing method on object image recognition is evaluated by the recognition rate. The experimental results show that the recognition rate of the two proposed methods, low-pixel segmentation and low-pixel background reduction, under simulated prosthetic vision is about 70%, which can significantly improve object recognition and thereby the overall recognition ability of visual guidance.
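The sketch below illustrates the general idea behind the two image processing strategies named in the abstract (suppressing the background and rendering the object on a coarse low-pixel grid), together with the recognition-rate metric used for evaluation. The grid size, Otsu thresholding and OpenCV usage are assumptions for illustration, not the paper's exact pipeline.

```python
# Rough sketch of low-pixel background reduction for simulated prosthetic vision.
# Grid size and thresholding choices are illustrative assumptions.
import cv2
import numpy as np

def low_pixel_background_reduction(gray: np.ndarray, grid=(32, 32)) -> np.ndarray:
    # `gray` is an 8-bit single-channel image. Suppress the background with
    # Otsu thresholding, keeping only the object region.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    object_only = cv2.bitwise_and(gray, gray, mask=mask)
    # Downsample to the simulated electrode/phosphene resolution.
    return cv2.resize(object_only, grid, interpolation=cv2.INTER_AREA)

def recognition_rate(predictions, labels) -> float:
    # Evaluation metric used in the abstract: fraction of low-pixel images
    # whose object class is recognized correctly by the classifier.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / max(len(labels), 1)
```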


A convolutional neural network (CNN) is a deep neural network that plays an important role in image recognition. A CNN recognizes images in a manner similar to the visual cortex. In this proposed work, an accelerator is used for highly efficient convolutional computations. The main aim of using the accelerator is to avoid ineffectual computations and to improve performance and energy efficiency during image recognition without any loss in accuracy. Moreover, the throughput of the accelerator is improved simply by adding a max-pooling function. Since the CNN involves multiple inputs and intermediate weights in its convolutional computation, its computational complexity grows enormously. Hence, to reduce the computational complexity of the CNN, a CNN accelerator is proposed in this paper. The accelerator design is simulated and synthesized in the Cadence RTL Compiler tool with a 90 nm technology library.
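To make the two ideas above concrete, the NumPy sketch below shows what skipping ineffectual multiply-accumulates (those with a zero operand) and appending a max-pooling stage look like in software. This is only an arithmetic illustration of the technique under those assumptions, not the synthesized RTL design.

```python
# Software illustration of the accelerator ideas: skip ineffectual MACs whose
# operands are zero, then max-pool to cut the data forwarded downstream.
import numpy as np

def conv2d_zero_skip(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    xv = x[i + a, j + b]
                    if xv != 0 and w[a, b] != 0:   # skip ineffectual multiplications
                        acc += xv * w[a, b]
            out[i, j] = acc
    return out

def max_pool2x2(x: np.ndarray) -> np.ndarray:
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    # 2x2 max-pooling: four times fewer outputs leave the convolution stage.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

In hardware, the zero check corresponds to gating the multiplier and accumulator, and fusing max-pooling after convolution reduces the volume of intermediate results written back to memory.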


2021, Vol 22 (1)
Author(s): Anand Ramachandran, Steven S. Lumetta, Eric W. Klee, Deming Chen

Abstract Background Modern next-generation and third-generation sequencing methods, such as the Illumina and PacBio Circular Consensus Sequencing platforms, provide accurate sequencing data. Parallel developments in Deep Learning have enabled the application of Deep Neural Networks to variant calling, surpassing the accuracy of classical approaches in many settings. DeepVariant, arguably the most popular among such methods, transforms the problem of variant calling into one of image recognition where a Deep Neural Network analyzes sequencing data that is formatted as images, achieving high accuracy. In this paper, we explore an alternative approach to designing Deep Neural Networks for variant calling, where we use meticulously designed Deep Neural Network architectures and customized variant inference functions that account for the underlying nature of sequencing data instead of converting the problem to one of image recognition. Results Results from 27 whole-genome variant calling experiments spanning Illumina, PacBio and hybrid Illumina-PacBio settings suggest that our method allows vastly smaller Deep Neural Networks to outperform the Inception-v3 architecture used in DeepVariant for indel and substitution-type variant calls. For example, our method reduces the number of indel call errors by up to 18%, 55% and 65% for Illumina, PacBio and hybrid Illumina-PacBio variant calling respectively, compared to a similarly trained DeepVariant pipeline. In these cases, our models are between 7 and 14 times smaller. Conclusions We believe that the improved accuracy and problem-specific customization of our models will enable more accurate pipelines and further method development in the field. HELLO is available at https://github.com/anands-repo/hello.
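The design contrast described in this abstract (feeding pileup features to a compact, purpose-built network instead of rendering them as images for a large image-recognition model) can be illustrated with the hypothetical PyTorch sketch below. The channel layout, window width, layer sizes and the `SmallPileupNet` name are assumptions for illustration only; this is not the HELLO architecture or its inference functions.

```python
# Hypothetical sketch: a compact network over per-column pileup feature channels
# around a candidate variant site, instead of an image fed to Inception-v3.
import torch
import torch.nn as nn

class SmallPileupNet(nn.Module):
    def __init__(self, n_channels: int = 8, n_classes: int = 4, width: int = 33):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)   # per-candidate genotype/allele scores

    def forward(self, pileup: torch.Tensor) -> torch.Tensor:
        # pileup: (batch, channels, window) summary features around a candidate site
        return self.head(self.encoder(pileup).squeeze(-1))

# A 33-position window with 8 summary channels (e.g. base counts, strand,
# qualities) is far smaller than a multi-channel image consumed by Inception-v3.
model = SmallPileupNet()
scores = model(torch.randn(16, 8, 33))
```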

