Guidelines and Benchmarks for Deployment of Deep Learning Models on Smartphones as Real-Time Apps

2019 ◽  
Vol 1 (1) ◽  
pp. 450-465 ◽  
Author(s):  
Abhishek Sehgal ◽  
Nasser Kehtarnavaz

Deep learning solutions are being used increasingly in mobile applications. Although there are many open-source software tools for developing deep learning solutions, no unified guidelines exist in one place for using these tools to deploy such solutions on smartphones in real time. From the variety of available deep learning tools, the most suitable ones are used in this paper to enable real-time deployment of deep learning inference networks on smartphones. A uniform implementation flow is devised for both Android and iOS smartphones. The advantage of using multi-threading to achieve or improve real-time throughput is also showcased. A benchmarking framework consisting of accuracy, CPU/GPU consumption, and real-time throughput is considered for validation purposes. The developed deployment approach allows deep learning models to be turned into real-time smartphone apps with ease, based on publicly available deep learning and smartphone software tools. This approach is applied to six popular or representative convolutional neural network models, and the validation results based on the benchmarking metrics are reported.
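The multi-threading idea highlighted above, decoupling input capture from model inference so the app can sustain real-time throughput, can be sketched as a small producer/consumer pipeline. This is an illustrative sketch, not the paper's actual implementation; `infer` stands in for any model's inference call.

```python
import threading
import queue

def run_pipeline(frames, infer, num_workers=2):
    """Minimal multi-threaded inference pipeline: the main thread feeds
    frames into a queue while worker threads run inference concurrently."""
    in_q = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            item = in_q.get()
            if item is None:            # sentinel: no more frames
                break
            idx, frame = item
            out = infer(frame)          # placeholder for model inference
            with lock:
                results[idx] = out

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for idx, frame in enumerate(frames):
        in_q.put((idx, frame))
    for _ in workers:
        in_q.put(None)                  # one sentinel per worker
    for w in workers:
        w.join()
    return [results[i] for i in range(len(frames))]
```

With two workers, inference on frame N+1 can overlap inference on frame N, which is how multi-threading raises effective throughput on multi-core smartphone CPUs.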

Author(s):  
E.Yu. Silantieva ◽  
V.A. Zabelina ◽  
G.A. Savchenko ◽  
I.M. Chernenky

This study presents an analysis of autoencoder models for detecting anomalies in network traffic. Training results were assessed using open-source software on the UNB ICS IDS 2017 dataset. As deep learning models, we considered the standard and variational autoencoders, as well as Deep SSAD approaches for a standard autoencoder (AE-SAD) and a variational autoencoder (VAE-SAD). The constructed deep learning models demonstrated different levels of anomaly detection accuracy; the best result, an AUC of 98%, was achieved with the VAE-SAD model. In the future, we plan to continue analyzing the characteristics of neural network models in cybersecurity problems. One direction is to study the influence of the structure of network traffic on the performance of deep learning models. Based on the results, we plan to develop an approach for robust identification of security events based on deep learning methods.
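The detection rule shared by autoencoder-based detectors is thresholding the per-sample reconstruction error: traffic the model reconstructs poorly is flagged as anomalous. A minimal, model-agnostic sketch, where `reconstruct` stands in for any trained autoencoder's encode-decode pass:

```python
import numpy as np

def reconstruction_errors(X, reconstruct):
    """Per-sample mean squared reconstruction error for a batch X
    of feature vectors (rows)."""
    R = reconstruct(X)
    return ((X - R) ** 2).mean(axis=1)

def flag_anomalies(X, reconstruct, threshold):
    """Flag samples whose reconstruction error exceeds the threshold."""
    return reconstruction_errors(X, reconstruct) > threshold
```

In practice the threshold is chosen on held-out normal traffic (e.g., a high percentile of its error distribution); sweeping it over all values is what produces the AUC curve reported above.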


2021 ◽  
Vol 11 (15) ◽  
pp. 7147
Author(s):  
Jinmo Gu ◽  
Jinhyuk Na ◽  
Jeongeun Park ◽  
Hayoung Kim

Outbound telemarketing is an efficient direct marketing method wherein telemarketers solicit potential customers by phone to purchase or subscribe to products or services. However, those who are not interested in the information or offers provided by outbound telemarketing generally experience such interactions negatively because they perceive telemarketing as spam. In this study, therefore, we investigate the use of deep learning models to predict the success of outbound telemarketing for insurance policy loans. We propose an explainable multiple-filter convolutional neural network model called XmCNN that can alleviate overfitting and extract various high-level features using hundreds of input variables. To enable the practical application of the proposed method, we also examine ensemble models to further improve its performance. We experimentally demonstrate that the proposed XmCNN significantly outperformed conventional deep neural network models and machine learning models. Furthermore, a deep learning ensemble model constructed using the XmCNN architecture achieved the lowest false positive rate (4.92%) and the highest F1-score (87.47%). We identified important variables influencing insurance policy loan prediction through the proposed model, suggesting that these factors should be considered in practice. The proposed method may increase the efficiency of outbound telemarketing and reduce the spam problems caused by calling non-potential customers.
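The "multiple-filter" idea behind XmCNN, convolving the input with filters of several widths and pooling each response so the model captures patterns at different scales, can be illustrated in 1-D with NumPy. This is a schematic of the general technique, not the paper's architecture:

```python
import numpy as np

def multi_filter_features(x, kernels):
    """Apply 1-D filters of several widths to signal x and global
    max-pool each response, concatenating the pooled values into one
    feature vector (the multiple-filter CNN idea, sketched)."""
    feats = []
    for k in kernels:
        # np.convolve reverses the kernel; irrelevant for the
        # symmetric illustrative kernels used here
        resp = np.convolve(x, k, mode="valid")
        feats.append(resp.max())    # global max-pooling per filter
    return np.array(feats)
```

In a trained network the kernels are learned and each width contributes many filters; the pooled outputs are then concatenated and passed to dense layers, which is what lets the model extract varied high-level features from hundreds of input variables.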


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245177
Author(s):  
Xing Han Lu ◽  
Aihua Liu ◽  
Shih-Chieh Fuh ◽  
Yi Lian ◽  
Liming Guo ◽  
...  

Motivation: Recurrent neural networks (RNNs) are powerful frameworks for modelling medical time series records. Recent studies showed improved accuracy in predicting future medical events (e.g., readmission, mortality) by leveraging large amounts of high-dimensional data. However, very few studies have explored the ability of RNNs to predict long-term trajectories of recurrent events, which is more informative than predicting a single event when directing medical intervention.
Methods: In this study, we focus on heart failure (HF), the leading cause of death among cardiovascular diseases. We present a novel RNN framework named Deep Heart-failure Trajectory Model (DHTM) for modelling the long-term trajectories of recurrent HF. DHTM auto-regressively predicts the future HF onsets of each patient, using the predicted HF as input to predict the HF event at the next time point. Furthermore, we propose an augmented DHTM named DHTM+C (where “C” stands for co-morbidities), which jointly predicts both HF and a set of acute co-morbidity diagnoses. To efficiently train the DHTM+C model, we devised a novel RNN architecture to model the disease progression implicated in the co-morbidities.
Results: Our deep learning models confer higher prediction accuracy for both next-step HF prediction and HF trajectory prediction compared to the baseline non-neural-network models and the baseline RNN model. Compared to DHTM, DHTM+C is able to output a higher probability of HF for high-risk patients, even when given less than 2 years of data to predict over 5 years of trajectory. We illustrate multiple non-trivial real patient examples of complex HF trajectories, indicating a promising path for creating highly accurate and scalable longitudinal deep learning models for modelling chronic disease.
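The auto-regressive prediction described for DHTM, where each predicted HF event is fed back as input for the next time point, reduces to a simple rollout loop. A sketch with a placeholder `step` function standing in for the trained RNN:

```python
def rollout(history, step, horizon):
    """Auto-regressive trajectory rollout: predict the next event from
    the sequence so far, append the prediction, and repeat.  `step` is a
    placeholder for a trained model's one-step prediction."""
    seq = list(history)
    preds = []
    for _ in range(horizon):
        y = step(seq)       # model predicts the next event
        preds.append(y)
        seq.append(y)       # feed the prediction back as input
    return preds
```

This feedback loop is what lets the model extend a short observed history (e.g., under 2 years) into a multi-year predicted trajectory, at the cost of compounding any one-step error.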


Automatic fake news detection is a challenging problem in deception detection. When evaluating deep learning-based models, if all models achieve high accuracy on a test dataset, it becomes hard to distinguish their performance; a sufficiently difficult problem is therefore needed to validate a deep learning model. LIAR is one such complex, recent, labeled benchmark dataset, publicly available for research on fake news detection and for modelling statistical and machine learning approaches to combating fake news. In this work, a novel fake news detection system is implemented using deep neural network models such as CNN, LSTM, and BiLSTM, and the contribution of their attention mechanisms is evaluated by analyzing performance in terms of accuracy, precision, recall, and F1-score on the training, validation, and test splits of LIAR.
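The metrics named above follow directly from confusion-matrix counts; precision and recall trade off how many flagged items are truly fake against how many fake items are caught, and F1 is their harmonic mean. A small reference helper:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from confusion counts
    (tp = true positives, fp = false positives, fn = false negatives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```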


2018 ◽  
Vol 246 ◽  
pp. 03004
Author(s):  
Yaqiong Qin ◽  
Zhaohui Ye ◽  
Conghui Zhang

Traditional methods of dividing petroleum reservoirs are inefficient, and the accuracy of a one-hidden-layer BP neural network is not ideal when applied to dividing reservoirs. This paper proposes using deep learning models to solve the reservoir division problem. We apply multiple-hidden-layer BP neural network and convolutional neural network models, and adjust the network structures according to the characteristics of the reservoir problem. The results show that the deep learning models outperform the one-hidden-layer BP neural network, and that the performance of the convolutional neural network is very close to that of manual work.
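The multiple-hidden-layer BP network compared here is a standard feed-forward pass: each hidden layer applies an affine map followed by a nonlinearity. A minimal NumPy sketch (the layer shapes and the ReLU choice are illustrative, not taken from the paper):

```python
import numpy as np

def mlp_forward(x, layers):
    """Forward pass of a multiple-hidden-layer network.  `layers` is a
    list of (W, b) pairs; hidden layers use ReLU, the output is linear."""
    h = x
    for W, b in layers[:-1]:
        h = np.maximum(0.0, W @ h + b)   # hidden layer + ReLU
    W, b = layers[-1]
    return W @ h + b                     # linear output layer
```

Adding hidden layers to this loop is exactly what distinguishes the deep models from the one-hidden-layer baseline; training adjusts each (W, b) by back-propagation.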


2020 ◽  
Vol 92 (1) ◽  
pp. 469-480
Author(s):  
William Luther Yeck ◽  
John M. Patton ◽  
Zachary E. Ross ◽  
Gavin P. Hayes ◽  
Michelle R. Guy ◽  
...  

Abstract Machine-learning algorithms continue to show promise in their application to seismic processing. The U.S. Geological Survey National Earthquake Information Center (NEIC) is exploring the adoption of these tools to aid in simultaneous local, regional, and global real-time earthquake monitoring. As a first step, we describe a simple framework to incorporate deep-learning tools into NEIC operations. Automatic seismic arrival detections made from standard picking methods (e.g., short-term average/long-term average [STA/LTA]) are fed to trained neural network models to improve automatic seismic-arrival (pick) timing and estimate seismic-arrival phase type and source-station distances. These additional data are used to improve the capabilities of the NEIC associator. We compile a dataset of 1.3 million seismic-phase arrivals that represent a globally distributed set of source-station paths covering a range of phase types, magnitudes, and source distances. We train three separate convolutional neural network models to predict arrival time onset, phase type, and distance. We validate the performance of the trained networks on a subset of our existing dataset and further extend validation by exploring the model performance when applied to NEIC automatic pick data feeds. We show that the information provided by these models can be useful in downstream event processing, specifically in seismic-phase association, resulting in reduced false associations and improved location estimates.
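The STA/LTA picker mentioned above computes the ratio of a short-term to a long-term moving average of signal energy; a detection is declared where the ratio exceeds a threshold, and those detections are what the trained networks then refine. A compact NumPy version of the characteristic function, for reference:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA characteristic function over signal x: ratio of a
    short-term (nsta samples) to a long-term (nlta samples) average of
    the squared signal, computed with a cumulative sum."""
    energy = x.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(x))
    for i in range(nlta, len(x) + 1):
        sta = (csum[i] - csum[i - nsta]) / nsta
        lta = (csum[i] - csum[i - nlta]) / nlta
        ratio[i - 1] = sta / lta if lta > 0 else 0.0
    return ratio
```

On quiet background the ratio sits near 1; an arrival drives the short-term average up faster than the long-term one, producing the peak that triggers a pick.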


2021 ◽  
Author(s):  
Kanimozhi V ◽  
T. Prem Jacob

Abstract Although various strategies exist for IoT intrusion detection, this article examines how the top 10 artificial intelligence deep learning models can be applied to both supervised and unsupervised learning on IoT network traffic data. It presents a detailed comparative analysis of IoT anomaly detection on smart IoT devices using the recent IoT-23 dataset. Many strategies are being developed for securing IoT networks, but further development is still needed, and IoT security can be improved through deep learning methods. This work examined the top 10 deep learning techniques on the realistic IoT-23 dataset to improve the security of IoT network traffic. We built neural network models to identify five classes of IoT traffic: Mirai, Denial of Service (DoS), Scan, and Man-in-the-Middle (MITM-ARP) attacks, plus Normal records. These attacks are detected using a "softmax" multiclass classification layer in the deep learning models. The research was implemented in an Anaconda3 environment with packages including Pandas, NumPy, SciPy, scikit-learn, TensorFlow 2.2, Matplotlib, and Seaborn. AI deep learning models have been adopted across domains such as healthcare, banking and finance, and scientific and business applications, alongside Internet of Things concepts. We found that the top 10 deep learning models increase accuracy while minimizing the loss function and the time required to build each model. This contributes significantly to IoT anomaly detection using the emerging technologies of artificial intelligence and deep learning neural networks, and thereby to effective mitigation of attacks on IoT networks.
Among the ten neural networks, convolutional neural networks, the multilayer perceptron, and generative adversarial networks (GANs) produced the highest accuracy scores of 0.996317, 0.996157, and 0.995829, respectively, with minimized loss and short execution times. This analysis is intended to help readers better understand different neural network models and IoT anomaly detection.
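The softmax output layer used for the five-class attack classification is simply a normalized exponential over the class logits, turning raw network outputs into a probability distribution. A numerically stable reference version:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift logits by their max before
    exponentiating, then normalize to a probability distribution."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()
```

The predicted class (e.g., Mirai, DoS, Scan, MITM-ARP, or Normal) is the argmax of the resulting probabilities, and training minimizes the cross-entropy between this distribution and the true label.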


2021 ◽  
pp. 1063293X2110031
Author(s):  
Maolin Yang ◽  
Auwal H Abubakar ◽  
Pingyu Jiang

Social manufacturing is characterized by its capability of utilizing socialized manufacturing resources to create added value. Recently, a new type of social manufacturing pattern has emerged that shows potential for core factories to improve their limited manufacturing capabilities by utilizing resources from outside socialized manufacturing resource communities. However, core factories need to analyze the resource characteristics of these communities before making operation plans, which is challenging because the resource providers in socialized resource communities are unaffiliated and self-driven. In this paper, a deep learning and complex network based approach is established to address this challenge, using a socialized designer community for demonstration. First, convolutional neural network models are trained to identify the design resource characteristics of each socialized designer in the community according to the interaction texts the designer posts on internet platforms. During this process, an iterative dataset labelling method is established to reduce the time cost of training set labelling. Second, complex networks are used to model the design resource characteristics of the community according to the resource characteristics of all the socialized designers in it. Two real communities from the RepRap 3D printer project are used as a case study.


2021 ◽  
pp. 188-198

Innovations in advanced information technologies have led to the rapid delivery and sharing of multimedia data such as images and videos. Digital steganography offers the ability to secure communication and is essential for the internet. Image steganography, in which a secret message is embedded within the pixels of a cover image, is important for preserving confidential information in security applications. Here, the embedding of the secret message is performed with the S-UNIWARD and WOW steganography schemes, and hidden messages are revealed using steganalysis. Research interest spans both conventional and recent technological approaches to steganalysis. This paper devises convolutional neural network models for steganalysis. The convolutional neural network (CNN) is one of the most frequently used deep learning techniques; it is used here to extract spatio-temporal features and perform classification. We compare steganalysis outcomes with AlexNet and SRNet on the same dataset, and steganalytic error rates are compared across different payloads.
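S-UNIWARD and WOW are content-adaptive schemes that concentrate embedding changes in textured regions; as a much simpler illustration of the underlying idea of hiding message bits inside pixel values, here is plain least-significant-bit (LSB) embedding. This is a didactic stand-in, not how the adaptive schemes operate:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Embed message bits into the least-significant bits of the first
    len(bits) pixels (plain LSB, for illustration only; S-UNIWARD/WOW
    choose embedding locations adaptively)."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.asarray(bits, dtype=out.dtype)
    return out

def lsb_extract(pixels, n):
    """Recover the first n embedded bits from the pixel LSBs."""
    return (pixels[:n] & 1).tolist()
```

Each embedded bit changes a pixel value by at most 1, which is imperceptible to the eye; steganalysis models such as the CNNs studied here are trained to detect exactly these subtle statistical perturbations.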


2017 ◽  
Author(s):  
Charlie W. Zhao ◽  
Mark J. Daley ◽  
J. Andrew Pruszynski

Abstract First-order tactile neurons have spatially complex receptive fields. Here we use machine learning tools to show that such complexity arises for a wide range of training sets and network architectures, and benefits network performance, especially on more difficult tasks and in the presence of noise. Our work suggests that spatially complex receptive fields are normatively good given the biological constraints of the tactile periphery.

