An Efficient DenseNet-Based Deep Learning Model for Malware Detection

Entropy, 2021, Vol 23 (3), pp. 344
Author(s): Jeyaprakash Hemalatha, S. Abijah Roseline, Subbiah Geetha, Seifedine Kadry, Robertas Damaševičius

Recently, there has been a huge rise in malware, which poses a significant security threat to organizations and individuals. Despite the incessant efforts of cybersecurity research to defend against malware threats, malware developers keep finding new ways to evade these defense techniques. Traditional static and dynamic analysis methods are ineffective in identifying new malware and impose high overhead in terms of memory and time. Typical machine learning approaches that train a classifier on handcrafted features are also not sufficiently robust against these evasion techniques and require considerable feature-engineering effort. Recent malware detectors also suffer performance degradation due to class imbalance in malware datasets. To resolve these challenges, this work adopts a visualization-based method in which malware binaries are depicted as two-dimensional images and classified by a deep learning model. We propose an efficient malware detection system based on deep learning. The system uses a reweighted class-balanced loss function in the final classification layer of the DenseNet model to achieve significant performance improvements in classifying malware by handling the imbalanced data issue. Comprehensive experiments performed on four benchmark malware datasets show that the proposed approach can detect new malware samples with higher accuracy (98.23% for the Malimg dataset, 98.46% for the BIG 2015 dataset, 98.21% for the MaleVis dataset, and 89.48% for the unseen Malicia dataset) and reduced false-positive rates compared with conventional malware mitigation techniques, while maintaining low computational time. The proposed malware detection solution is also reliable and effective against obfuscation attacks.
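As an illustration of the class-rebalancing idea in the final classification layer, the sketch below attaches a weighted loss to a DenseNet head. It assumes the effective-number-of-samples weighting; the class counts, beta value, image size, and DenseNet variant are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

# Hypothetical per-class sample counts from an imbalanced malware dataset.
class_counts = torch.tensor([2949.0, 123.0, 408.0, 1591.0])   # illustrative only
beta = 0.999                                                   # assumed smoothing factor

# Class-balanced weights via the "effective number of samples":
# w_c = (1 - beta) / (1 - beta ** n_c), normalised to sum to the number of classes.
effective_num = 1.0 - torch.pow(beta, class_counts)
weights = (1.0 - beta) / effective_num
weights = weights / weights.sum() * len(class_counts)

# DenseNet backbone with a classification head sized to the malware families.
model = densenet121()
model.classifier = nn.Linear(model.classifier.in_features, len(class_counts))

# The reweighting is applied in the final classification (loss) layer.
criterion = nn.CrossEntropyLoss(weight=weights)

# One illustrative step on a batch of malware images rendered as 64x64 RGB.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(class_counts), (8,))
loss = criterion(model(images), labels)
loss.backward()
```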

Sensors, 2020, Vol 20 (20), pp. 5731
Author(s): Xiu-Zhi Chen, Chieh-Min Chang, Chao-Wei Yu, Yen-Lin Chen

Numerous vehicle detection methods have been proposed to obtain trustworthy traffic data for the development of intelligent traffic systems. Most of these methods perform sufficiently well under common scenarios, such as sunny or cloudy days; however, detection accuracy drops drastically under various bad weather conditions, such as rainy days or days with glare, which typically occurs around sunset. This study proposes a vehicle detection system with a visibility complementation module that improves detection accuracy under various bad weather conditions. Furthermore, the proposed system can be implemented without retraining the deep learning models for object detection under different weather conditions. The complementation of visibility is obtained through a dark channel prior and a convolutional encoder-decoder deep learning network with dual residual blocks that resolves the different effects of the different bad weather conditions. We validated our system on multiple surveillance videos by detecting vehicles with the You Only Look Once (YOLOv3) deep learning model and demonstrated that the processing speed of our system reaches 30 fps on average; moreover, the accuracy increases by nearly 5% under low-contrast scenes and by 50% under rainy scenes. These results indicate that our approach can detect vehicles under various bad weather conditions without the need to retrain a new model.
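For reference, here is a rough sketch of the dark channel prior step used in the visibility-complementation stage, following the standard formulation; the patch size, omega, and the atmospheric-light estimate are assumptions, and the paper's encoder-decoder network with dual residual blocks is not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a local minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Coarse dark-channel-prior restoration; img is float RGB in [0, 1]."""
    dark = dark_channel(img, patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate and scene-radiance recovery.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Illustrative call on a random "hazy" frame; a real pipeline would pass the
# restored frame (or the encoder-decoder output) on to the YOLOv3 detector.
restored = dehaze(np.random.rand(360, 640, 3))
```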


PLoS ONE, 2022, Vol 17 (1), pp. e0262349
Author(s): Esraa A. Mohamed, Essam A. Rashed, Tarek Gaber, Omar Karam

Breast cancer is one of the most common diseases among women worldwide and one of the leading causes of death among women; early detection is therefore necessary to save lives. Thermography is an effective diagnostic technique that uses infrared imaging for breast cancer detection. In this paper, we propose a fully automatic breast cancer detection system. First, a U-Net network automatically extracts and isolates the breast area from the rest of the body, which would otherwise act as noise in the detection model. Second, we propose a two-class deep learning model, trained from scratch, for classifying normal and abnormal breast tissue from thermal images; it also extracts further characteristics from the dataset that help train the network and improve the efficiency of classification. The proposed system is evaluated on real data (the benchmark DMR-IR database) and achieves accuracy = 99.33%, sensitivity = 100% and specificity = 98.67%. The proposed system is expected to be a helpful tool for physicians in clinical use.
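A minimal sketch of the two-stage pipeline described above: segment the breast area, mask out the rest of the body, then classify the masked thermogram. The `unet` and `classifier` objects below are toy stand-ins, not the authors' trained networks.

```python
import torch
import torch.nn as nn

def classify_thermogram(image, unet, classifier, threshold=0.5):
    """Stage 1: segment the breast area; Stage 2: classify the masked image.

    `image` is a (1, C, H, W) thermal tensor; `unet` returns a single-channel
    probability map; `classifier` returns logits for [normal, abnormal].
    Both models are hypothetical placeholders for the trained networks.
    """
    with torch.no_grad():
        mask = (torch.sigmoid(unet(image)) > threshold).float()   # breast-area mask
        masked = image * mask                                      # suppress body/background "noise"
        logits = classifier(masked)
        return torch.softmax(logits, dim=1)                        # P(normal), P(abnormal)

# Toy stand-ins just to make the sketch runnable end to end.
unet = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # pretend U-Net
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, 2))
probs = classify_thermogram(torch.randn(1, 1, 224, 224), unet, classifier)
```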


Water, 2021, Vol 13 (19), pp. 2664
Author(s): Sunil Saha, Jagabandhu Roy, Tusar Kanti Hembram, Biswajeet Pradhan, Abhirup Dikshit, ...

Deep learning and tree-based machine learning approaches have gained immense popularity in various fields. One deep learning model, the convolutional neural network (CNN), an artificial neural network (ANN), and four tree-based machine learning models, namely the alternating decision tree (ADTree), classification and regression tree (CART), functional tree (FTree) and logistic model tree (LMT), were used for landslide susceptibility mapping in the East Sikkim Himalaya region of India, and the results were compared. Landslide areas were delimited and mapped as a landslide inventory map (LIM) after gathering information from historical records and periodic field investigations. The LIM contains 91 landslides, which were randomly split into training (64 landslides) and testing (27 landslides) subsets to train and validate the models. A total of 21 landslide conditioning factors (LCFs) were used as model inputs, and the results of each model were categorised into five susceptibility classes. The receiver operating characteristic (ROC) curve and 21 statistical measures were used to evaluate and prioritise the models. The CNN deep learning model achieved priority rank 1, with areas under the curve of 0.918 and 0.933 on the training and testing data, quantifying 23.02% and 14.40% of the area as very highly and highly susceptible, followed by the ANN, ADTree, CART, FTree and LMT models. This research might be useful in landslide studies, especially in locations with comparable geophysical and climatological characteristics, to aid decision making for land use planning.
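As a small illustration of the ROC-based model prioritisation, the sketch below ranks candidate models by AUC on a testing subset; the labels and susceptibility scores are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic testing-subset labels (1 = landslide, 0 = non-landslide) and
# illustrative susceptibility scores for three of the compared models.
rng = np.random.default_rng(0)
y_test = np.array([1] * 14 + [0] * 13)
scores = {
    "CNN": rng.random(27),
    "ANN": rng.random(27),
    "ADTree": rng.random(27),
}

# Rank the models by area under the ROC curve (higher AUC = better priority rank).
ranking = sorted(scores, key=lambda m: roc_auc_score(y_test, scores[m]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(roc_auc_score(y_test, scores[name]), 3))
```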


BMJ Open, 2020, Vol 10 (9), pp. e036423
Author(s): Zhigang Song, Chunkai Yu, Shuangmei Zou, Wenmiao Wang, Yong Huang, ...

Objectives: The microscopic evaluation of slides has been gradually moving towards all-digital in recent years, opening the possibility of computer-aided diagnosis. It is worthwhile to know the similarities between deep learning models and pathologists before putting them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design: The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed with a self-developed iPad-based annotation system. We built the model on DeepLab v2 with ResNet-34. Model performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an extra 168 slides (111 with adenoma) collected from two other hospitals. Results: The deep learning model achieved an area under the curve of 0.92 and obtained a slide-level accuracy of over 90% on slides from the two other hospitals. The performance was on par with that of experienced pathologists, exceeding the average pathologist. By investigating the feature maps and the cases misdiagnosed by the model, we found concordance in the diagnostic reasoning of the deep learning model and the pathologists. Conclusions: The deep learning model for colorectal adenoma diagnosis behaves much like pathologists: it is on par with their performance, makes similar mistakes and learns rational reasoning logic. Meanwhile, it obtains high accuracy on slides collected from different hospitals with significant variations in staining configuration.
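One plausible way to derive the slide-level call that the study evaluates is to threshold the predicted adenoma area over all tiles of a slide; the aggregation rule and threshold below are assumptions for illustration, not the authors' stated procedure.

```python
import numpy as np

def slide_level_call(patch_masks, area_threshold=0.001):
    """Aggregate per-patch adenoma masks into a slide-level diagnosis.

    `patch_masks` is an iterable of 2-D binary arrays (1 = adenoma pixel)
    produced by the segmentation model for every tile of a whole slide.
    The slide is called adenoma-positive if the overall predicted adenoma
    area fraction exceeds a (hypothetical) threshold.
    """
    total = 0
    positive = 0
    for mask in patch_masks:
        total += mask.size
        positive += int(mask.sum())
    return (positive / max(total, 1)) > area_threshold

# Illustrative use with random "predictions" for a 10-tile slide.
tiles = [np.random.rand(512, 512) > 0.999 for _ in range(10)]
print("adenoma" if slide_level_call(tiles) else "normal")
```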


2021, Vol 7, pp. e551
Author(s): Nihad Karim Chowdhury, Muhammad Ashad Kabir, Md. Muhtadir Rahman, Noortaz Rezoana

The goal of this research is to develop and implement a highly effective deep learning model for detecting COVID-19. To achieve this goal, we propose an ensemble of Convolutional Neural Networks (CNNs) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we use one of the largest open-access chest X-ray datasets, COVIDx, containing three classes: COVID-19, normal, and pneumonia. For feature extraction, we apply an EfficientNet structure with ImageNet pre-trained weights. The generated features are passed into custom fine-tuned top layers, followed by a set of model snapshots. The predictions of the model snapshots (created during a single training run) are consolidated through two ensemble strategies, hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight the areas that distinguish classes, thereby improving the understanding of the image regions most relevant to COVID-19. Our empirical evaluations show that the proposed ECOVNet model outperforms state-of-the-art approaches and significantly improves detection performance, with 100% recall for COVID-19 and an overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 and thus underpin a fully automated and efficacious COVID-19 detection system.
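A compact sketch of the two snapshot-ensembling strategies: soft ensembling averages the snapshots' probability vectors and then takes the argmax, while hard ensembling takes a majority vote over the snapshots' individual predictions. The probability values below are illustrative.

```python
import numpy as np

# Probabilities from e.g. 5 model snapshots for one chest X-ray over 3 classes
# (COVID-19, normal, pneumonia); the values are illustrative only.
snapshot_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.30, 0.15],
    [0.60, 0.25, 0.15],
    [0.40, 0.45, 0.15],
    [0.65, 0.20, 0.15],
])

# Soft ensemble: average the probability vectors, then take the argmax.
soft_pred = snapshot_probs.mean(axis=0).argmax()

# Hard ensemble: each snapshot votes with its own argmax; the majority wins.
votes = snapshot_probs.argmax(axis=1)
hard_pred = np.bincount(votes, minlength=3).argmax()

print(soft_pred, hard_pred)   # both 0 -> COVID-19 for this example
```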


2018, Vol 45 (5), pp. E12
Author(s): Victor E. Staartjes, Carlo Serra, Giovanni Muscas, Nicolai Maldaner, Kevin Akeret, ...

OBJECTIVE: Gross-total resection (GTR) is often the primary surgical goal in transsphenoidal surgery for pituitary adenoma. Existing classifications are effective at predicting GTR but are often hampered by limited discriminatory ability in moderate cases and by poor interrater agreement. Deep learning, a subset of machine learning, has recently established itself as highly effective in forecasting medical outcomes. In this pilot study, the authors aimed to evaluate the utility of using deep learning to predict GTR after transsphenoidal surgery for pituitary adenoma. METHODS: Data from a prospective registry were used. The authors trained a deep neural network to predict GTR from 16 preoperatively available radiological and procedural variables. Class imbalance adjustment, cross-validation, and random dropout were applied to prevent overfitting and ensure robustness of the predictive model. The authors subsequently compared the deep learning model to a conventional logistic regression model and to the Knosp classification as a gold standard. RESULTS: Overall, 140 patients who underwent endoscopic transsphenoidal surgery were included. GTR was achieved in 95 patients (68%), with a mean extent of resection of 96.8% ± 10.6%. Intraoperative high-field MRI was used in 116 (83%) procedures. The deep learning model achieved excellent area under the curve (AUC; 0.96), accuracy (91%), sensitivity (94%), and specificity (89%). This represents an improvement in comparison with the Knosp classification (AUC: 0.87, accuracy: 81%, sensitivity: 92%, specificity: 70%) and a statistically significant improvement in comparison with logistic regression (AUC: 0.86, accuracy: 82%, sensitivity: 81%, specificity: 83%) (all p < 0.001). CONCLUSIONS: In this pilot study, the authors demonstrated the utility of applying deep learning to preoperatively predict the likelihood of GTR with excellent performance. Further training and validation in a prospective multicentric cohort will enable the development of an easy-to-use interface for use in clinical practice.
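For illustration, a minimal sketch of the kind of network described: a small feed-forward model over 16 preoperative variables with random dropout and a class-imbalance adjustment in the loss. The layer sizes, dropout rate, and weighting are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

# 16 preoperative radiological/procedural variables -> probability of GTR.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.3),   # dropout guards against overfitting
    nn.Linear(32, 16), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(16, 1),
)

# Class-imbalance adjustment: with ~68% of patients achieving GTR (95 of 140),
# the majority positive class is relatively down-weighted via pos_weight.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([45.0 / 95.0]))

# One illustrative training step on a fake batch of 32 patients.
x = torch.randn(32, 16)
y = torch.randint(0, 2, (32, 1)).float()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```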


2020, Vol 105, pp. 102154
Author(s): Hamad Naeem, Farhan Ullah, Muhammad Rashid Naeem, Shehzad Khalid, Danish Vasan, ...

2020, Vol 2020, pp. 1-11
Author(s): Tianliang Lu, Yanhui Du, Li Ouyang, Qiuyu Chen, Xirui Wang

In recent years, the number of malware samples on the Android platform has been increasing, and with the widespread use of code obfuscation technology, the accuracy of antivirus software and traditional detection algorithms is low. Current state-of-the-art research shows that researchers have started applying deep learning methods to malware detection. We propose an Android malware detection algorithm based on a hybrid deep learning model that combines a deep belief network (DBN) and a gated recurrent unit (GRU). First, the Android malware is analyzed: in addition to static features, dynamic behavioral features with strong anti-obfuscation ability are extracted. Then, a hybrid deep learning model is built for Android malware detection. Because the static features are relatively independent, the DBN is used to process them; because the dynamic features have temporal correlation, the GRU is used to process the dynamic feature sequence. Finally, the outputs of the DBN and GRU are fed into a BP neural network, which produces the final classification results. Experimental results show that, compared with traditional machine learning algorithms, the Android malware detection model based on the hybrid deep learning algorithm achieves higher detection accuracy and also performs better on obfuscated malware.
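A rough structural sketch of the described fusion, with one branch over the static feature vector (a plain feed-forward block standing in for the DBN), a GRU over the dynamic behaviour sequence, and a small BP network over the concatenated branch outputs; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class HybridMalwareNet(nn.Module):
    """Static branch (DBN stand-in) + GRU over the dynamic behaviour sequence,
    fused by a small BP (fully connected) classifier. Sizes are illustrative."""
    def __init__(self, static_dim=200, dyn_dim=32, hidden=64, n_classes=2):
        super().__init__()
        # Stand-in for the DBN over the relatively independent static features.
        self.static_branch = nn.Sequential(
            nn.Linear(static_dim, 128), nn.ReLU(),
            nn.Linear(128, hidden), nn.ReLU(),
        )
        # GRU over the temporally correlated dynamic feature sequence.
        self.gru = nn.GRU(dyn_dim, hidden, batch_first=True)
        # BP network fusing both branch outputs into the final classification.
        self.bp = nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, static_feats, dyn_seq):
        s = self.static_branch(static_feats)           # (B, hidden)
        _, h = self.gru(dyn_seq)                       # h: (1, B, hidden)
        return self.bp(torch.cat([s, h[-1]], dim=1))   # (B, n_classes)

# Illustrative forward pass: 4 apps, 50-step behaviour sequences.
model = HybridMalwareNet()
logits = model(torch.randn(4, 200), torch.randn(4, 50, 32))
```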

