Identifying Compiler and Optimization Options from Binary Code using Deep Learning Approaches

Author(s):  
Davide Pizzolotto ◽  
Katsuro Inoue


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods cannot recover severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors propose a deep-learning-based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement. Evaluations of the trained models demonstrate superior performance over conventional methods.
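As an illustration of the kind of model such work employs, here is a minimal PyTorch sketch of a convolutional encoder-decoder that maps a degraded whiteboard photo to a cleaned image. The layer sizes and training setup are assumptions for illustration, not the architectures used in the paper.

```python
# Minimal image-to-image enhancement sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class WhiteboardEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample while extracting pen-stroke features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to a cleaned RGB image in [0, 1].
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = WhiteboardEnhancer()
degraded = torch.rand(1, 3, 256, 256)   # dummy degraded whiteboard photo
enhanced = model(degraded)              # same-size enhanced output
```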


2019 ◽  
Author(s):  
Qian Wu ◽  
Weiling Zhao ◽  
Xiaobo Yang ◽  
Hua Tan ◽  
Lei You ◽  
...  

2020 ◽  
Author(s):  
Priyanka Meel ◽  
Farhin Bano ◽  
Dinesh K. Vishwakarma

2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular ultrasonic images. First, a Fully Convolutional Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background in the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale configurations. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall, precision, IoU (Intersection-over-Union) and total score are 0.94, 0.885, 0.913 and 0.913 respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than that of conventional approaches using one-scale or three-scale anchors. These results demonstrate the superior performance of the proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
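The multi-scale anchor configuration described above can be reproduced with torchvision, whose Faster R-CNN accepts a custom AnchorGenerator. In this sketch only the six anchor scales follow the abstract; the MobileNetV2 backbone, class count, and input size are placeholder assumptions.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Any feature extractor exposing out_channels works as a backbone.
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280

# Six anchor scales (12, 16, 32, 64, 128, 256) instead of the usual three.
anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2,  # plaque vs. background (placeholder)
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])  # dummy IVOCT frame
```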


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shan Guleria ◽  
Tilak U. Shah ◽  
J. Vincent Pulido ◽  
Matthew Fasullo ◽  
Lubaina Ehsan ◽  
...  

Probe-based confocal laser endomicroscopy (pCLE) allows for real-time diagnosis of dysplasia and cancer in Barrett’s esophagus (BE) but is limited by low sensitivity. Even the gold standard of histopathology is hindered by poor agreement between pathologists. We deployed deep-learning-based image and video analysis in order to improve diagnostic accuracy of pCLE videos and biopsy images. Blinded experts categorized biopsies and pCLE videos as squamous, non-dysplastic BE, or dysplasia/cancer, and deep learning models were trained to classify the data into these three categories. Biopsy classification was conducted using two distinct approaches—a patch-level model and a whole-slide-image-level model. Gradient-weighted class activation maps (Grad-CAMs) were extracted from pCLE and biopsy models in order to determine tissue structures deemed relevant by the models. 1970 pCLE videos, 897,931 biopsy patches, and 387 whole-slide images were used to train, test, and validate the models. In pCLE analysis, models achieved a high sensitivity for dysplasia (71%) and an overall accuracy of 90% for all classes. For biopsies at the patch level, the model achieved a sensitivity of 72% for dysplasia and an overall accuracy of 90%. The whole-slide-image-level model achieved a sensitivity of 90% for dysplasia and 94% overall accuracy. Grad-CAMs for all models showed activation in medically relevant tissue regions. Our deep learning models achieved high diagnostic accuracy for both pCLE-based and histopathologic diagnosis of esophageal dysplasia and its precursors, similar to human accuracy in prior studies. These machine learning approaches may improve accuracy and efficiency of current screening protocols.
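For readers unfamiliar with Grad-CAM, the sketch below shows its core computation on a stand-in ResNet-18 classifier (the paper's models and layer choices are not reproduced here): gradients of the class score are average-pooled into per-channel weights, and the weighted sum of feature maps yields the activation map.

```python
# Minimal Grad-CAM sketch on a placeholder classifier (assumptions noted).
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights="DEFAULT").eval()
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["a"] = output                      # feature maps of target layer

def bwd_hook(module, grad_input, grad_output):
    grads["a"] = grad_output[0]              # gradients w.r.t. those maps

layer = model.layer4                          # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)                # dummy biopsy patch
score = model(x)[0].max()                     # score of predicted class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted feature sum
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
```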


2021 ◽  
Author(s):  
Isidro Lloret ◽  
José A. Troyano ◽  
Fernando Enríquez ◽  
Juan-José González-de-la-Rosa

2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence suggests that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being applied in recent studies to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we survey investigations that use deep learning algorithms for AD prediction from genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD by incorporating both neuroimaging and genomics data. Moreover, we outline the limitations of recent AD investigations involving deep learning with neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2181
Author(s):  
Rafik Nafkha ◽  
Tomasz Ząbkowski ◽  
Krzysztof Gajowniczek

The electricity tariffs available to customers in Poland depend on the connection voltage level and contracted capacity, which reflect the customer demand profile. Therefore, before connecting to the power grid, each consumer declares a maximum power demand. This amount, referred to as the contracted capacity, is used by the electricity provider to assign the proper connection type to the power grid, including the size of the security breaker. Maximum power is also the basis for calculating fixed charges for electricity consumption, which is controlled and metered through peak meters. If the peak demand exceeds the contracted capacity, a penalty charge of up to ten times the basic rate is applied to the exceeded amount. In this article, we present several solutions for entrepreneurs based on two-stage and deep learning approaches to predict maximum load values and the moments of exceeding the contracted capacity in the short term, i.e., up to one month ahead. The forecast is then used to optimize the capacity volume to be contracted in the following month so as to minimize the network charges for exceeding the contracted level. As confirmed experimentally with two datasets, the application of a multiple-output forecasting artificial neural network model and a genetic algorithm (the two-stage approach) for load optimization delivers significant benefits to customers. As an alternative, the same benefit is delivered by a deep learning architecture (the hybrid approach) that predicts the maximum capacity demands and, simultaneously, determines the optimal capacity contract.
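To make the optimization step concrete, here is a toy sketch of the cost trade-off that the contracted capacity must balance. The tariff rate, the ten-fold penalty multiplier, and the forecast values are illustrative assumptions; a genetic algorithm, as in the two-stage approach, would replace the grid search for realistic tariff structures.

```python
# Toy contracted-capacity choice under a peak-exceedance penalty.
import numpy as np

rate = 10.0          # assumed fixed charge per kW of contracted capacity
penalty = 10 * rate  # exceedance billed at up to ten times the basic rate

forecast_peaks = np.array([95.0, 102.0, 88.0, 110.0, 97.0])  # kW, assumed

def total_cost(capacity, peaks):
    # Fixed charge each month plus penalty on any exceedance.
    exceed = np.clip(peaks - capacity, 0, None)
    return capacity * rate * len(peaks) + penalty * exceed.sum()

# Brute-force search over candidate capacities (GA stand-in).
candidates = np.arange(80, 121)
costs = [total_cost(c, forecast_peaks) for c in candidates]
best = candidates[int(np.argmin(costs))]
print(f"optimal contracted capacity: {best} kW")
```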


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1372
Author(s):  
Sanjanasri JP ◽  
Vijay Krishna Menon ◽  
Soman KP ◽  
Rajendran S ◽  
Agnieszka Wolk

Linguists have long focused on qualitative comparison of the semantics of different languages. Evaluating semantic interpretation between disparate language pairs such as English and Tamil is an even more formidable task than for Slavic languages. The concept of word embedding in Natural Language Processing (NLP) provides an opportunity to quantify linguistic semantics. Multilingual tasks can be performed by projecting the word embeddings of one language onto the semantic space of another. This research presents a suite of data-efficient deep learning approaches to deduce the transfer function from the embedding space of English to that of Tamil, deploying three popular embedding algorithms: Word2Vec, GloVe and FastText. A novel evaluation paradigm was devised to assess the effectiveness of the generated embeddings, using the original embeddings as ground truths. The transferability of the proposed model to other target languages was assessed using pre-trained Word2Vec embeddings for Hindi and Chinese. We empirically show that, with a bilingual dictionary of a thousand words and a corresponding small monolingual target (Tamil) corpus, useful embeddings can be generated by transfer learning from a well-trained source (English) embedding. Furthermore, we demonstrate the usability of the generated target embeddings in a few NLP use-case tasks, such as text summarization, part-of-speech (POS) tagging, and bilingual dictionary induction (BDI), bearing in mind that these are not the only possible applications.
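A minimal baseline for such a transfer function is a Mikolov-style linear map fitted by least squares on the bilingual dictionary. The dimensions and random vectors below are placeholders for real Word2Vec/GloVe/FastText embeddings; the paper's data-efficient deep models are more elaborate than this sketch.

```python
# Linear embedding-space transfer fitted on a bilingual dictionary.
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, n_pairs = 300, 100, 1000   # dims and dictionary size (assumed)

X = rng.normal(size=(n_pairs, d_src))    # English vectors of dictionary words
Y = rng.normal(size=(n_pairs, d_tgt))    # Tamil vectors of their translations

# Solve min_W ||XW - Y||^2 for the projection matrix W.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def project(src_vec):
    """Map an English embedding into the Tamil embedding space."""
    return src_vec @ W

tamil_like = project(X[0])               # shape (d_tgt,)
```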


Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 344
Author(s):  
Jeyaprakash Hemalatha ◽  
S. Abijah Roseline ◽  
Subbiah Geetha ◽  
Seifedine Kadry ◽  
Robertas Damaševičius

Recently, malware has grown enormously, creating a significant security threat to organizations and individuals. Despite the incessant efforts of cybersecurity research to defend against malware threats, malware developers keep finding new ways to evade these defense techniques. Traditional static and dynamic analysis methods are ineffective in identifying new malware and incur high memory and time overhead. Typical machine learning approaches that train a classifier on handcrafted features are also not sufficiently robust against these evasion techniques and require substantial feature-engineering effort. Recent malware detectors also suffer performance degradation due to class imbalance in malware datasets. To resolve these challenges, this work adopts a visualization-based method in which malware binaries are depicted as two-dimensional images and classified by a deep learning model. We propose an efficient malware detection system based on deep learning. The system uses a reweighted class-balanced loss function in the final classification layer of a DenseNet model to achieve significant performance improvements in classifying malware under imbalanced data. Comprehensive experiments on four benchmark malware datasets show that the proposed approach detects new malware samples with higher accuracy (98.23% for the Malimg dataset, 98.46% for the BIG 2015 dataset, 98.21% for the MaleVis dataset, and 89.48% for the unseen Malicia dataset) and reduced false-positive rates compared with conventional malware mitigation techniques, while maintaining low computational time. The proposed malware detection solution is also reliable and effective against obfuscation attacks.
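One common way to implement a reweighted class-balanced loss is the "effective number of samples" scheme of Cui et al. (2019). The sketch below assumes this family of weighting, with placeholder class counts, and is not necessarily the paper's exact loss.

```python
# Class-balanced cross-entropy via effective number of samples (assumed).
import torch
import torch.nn as nn

def class_balanced_weights(samples_per_class, beta=0.9999):
    # Effective number of samples per class: (1 - beta^n) / (1 - beta).
    n = torch.tensor(samples_per_class, dtype=torch.float)
    effective_num = (1.0 - beta ** n) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so weights sum to the number of classes.
    return weights / weights.sum() * len(samples_per_class)

counts = [2949, 1591, 408, 97]               # placeholder per-family counts
criterion = nn.CrossEntropyLoss(weight=class_balanced_weights(counts))

logits = torch.randn(8, 4)                   # classifier outputs for a batch
labels = torch.randint(0, 4, (8,))
loss = criterion(logits, labels)             # rare classes weigh more
```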

