Deep Convolution Neural Network Model for Credit-Card Fraud Detection and Alert

Author(s):  
Joy Iong-Zong Chen ◽  
Kong-Long Lai

With the exponential increase in internet usage, numerous organisations, including the financial industry, have moved their services online. Massive financial losses occur as a result of the global growth in financial fraud. Hence, advanced financial fraud detection systems are needed to actively detect risks such as illegal transactions and irregular attacks. In recent years, these issues have been tackled to a large extent by means of data mining and machine learning techniques. However, these techniques still require several improvements in terms of unknown attack pattern identification, big data analytics, and computation speed. This paper proposes a financial fraud detection scheme based on a Deep Convolution Neural Network (DCNN). When a large volume of data is involved, detection accuracy can be enhanced by using this technique. The proposed model is compared with existing machine learning models, an auto-encoder model, and other deep learning models on a real-time credit card fraud dataset to evaluate its performance. As observed in the experimental results, the proposed model attains a detection accuracy of 99% within a time duration of 45 seconds.
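The paper does not spell out its network layout here, so the following is only a minimal sketch of a deep 1D-CNN fraud classifier in Keras; the layer sizes and the 30-feature input (as in the public credit-card fraud dataset) are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of a deep 1D-CNN fraud classifier (assumed architecture).
import numpy as np
from tensorflow.keras import layers, models

def build_dcnn(n_features: int = 30) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),          # each transaction as a 1-D "signal"
        layers.Conv1D(32, kernel_size=3, activation="relu"),
        layers.BatchNormalization(),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),        # fraud vs. legitimate
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Toy usage with random data standing in for scaled transaction features.
X = np.random.rand(1024, 30, 1).astype("float32")
y = np.random.randint(0, 2, size=(1024,))
build_dcnn().fit(X, y, epochs=2, batch_size=64, verbose=0)
```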

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1514
Author(s):  
Ali Aljofey ◽  
Qingshan Jiang ◽  
Qiang Qu ◽  
Mingqing Huang ◽  
Jean-Pierre Niyigena

Phishing is one of the easiest forms of cybercrime, aiming to entice people into revealing sensitive information such as account IDs, bank details, and passwords. This type of cyberattack is usually triggered by emails, instant messages, or phone calls. Existing anti-phishing techniques are mainly based either on source-code features, which require scraping the content of web pages, or on third-party services, which slow down the classification of phishing URLs. Although machine learning techniques have lately been used to detect phishing, they require substantial manual feature engineering and are not adept at detecting emerging phishing offenses. Owing to the recent rapid development of deep learning, many deep learning-based methods have also been introduced to enhance classification performance. In this paper, a fast deep learning-based solution that uses a character-level convolutional neural network (CNN) to detect phishing from the URL of the website is proposed. The proposed model does not require the retrieval of the target website content or the use of any third-party services. It captures information and sequential patterns of URL strings without requiring prior knowledge about phishing, and then uses the sequential pattern features for fast classification of the actual URL. For evaluation, comparisons are provided between different traditional machine learning models and deep learning models using various feature sets such as hand-crafted, character embedding, character-level TF-IDF, and character-level count vector features. According to the experiments, the proposed model achieved an accuracy of 95.02% on our dataset and accuracies of 98.58%, 95.46%, and 95.22% on benchmark datasets, outperforming existing phishing URL models.
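As a rough illustration of a character-level CNN over raw URLs, the sketch below embeds characters and applies a single convolutional block; the vocabulary size, embedding width, and maximum URL length are assumed values, not the authors' exact configuration.

```python
# Minimal sketch of a character-level CNN for URL classification (assumed settings).
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN, VOCAB = 200, 128          # truncate/pad URLs to 200 chars, ASCII vocabulary

def encode_url(url: str) -> np.ndarray:
    """Map each character to its code point (0 = padding)."""
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    return np.array(ids + [0] * (MAX_LEN - len(ids)))

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB, 32),                       # learned character embeddings
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),             # phishing vs. legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

urls = ["http://example.com/login", "http://paypa1-secure.example.biz/verify"]
X = np.stack([encode_url(u) for u in urls])
print(model.predict(X, verbose=0))                     # untrained scores, illustration only
```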


Webology ◽  
2021 ◽  
Vol 18 (Special Issue 04) ◽  
pp. 944-962
Author(s):  
K. Niha ◽  
Dr. S. Amutha ◽  
Dr. Aisha Banu

Diseases in plants are a great challenge in the advancement of agriculture, affecting both farmers' yields and the plants themselves. In modern research, deep learning models have come into the spotlight for increasing plant disease detection accuracy and classification performance. The proposed CNN (Convolutional Neural Network) model detects seven plant diseases in addition to healthy leaves, where the dataset considered in this work contains 8685 leaf images from the PlantVillage dataset. The proposed model's performance is evaluated with respect to the performance metrics (F1 score, precision, and recall) and compared with SVM and ANN. The proposed CNN model outperforms the rest with an accuracy of 96.2% and an F1 score greater than 95%. The feasibility of the proposed model in plant disease detection and classification may provide a solution to the problem faced by farmers.
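A minimal sketch of an eight-class leaf classifier (seven diseases plus healthy) of the kind described above; the 128x128 input size and layer widths are illustrative assumptions rather than the reported architecture.

```python
# Minimal sketch of an 8-class leaf-image CNN (assumed input size and widths).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(8, activation="softmax"),   # 7 disease classes + healthy
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```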


2021 ◽  
Vol 11 (15) ◽  
pp. 7050
Author(s):  
Zeeshan Ahmad ◽  
Adnan Shahid Khan ◽  
Kashif Nisar ◽  
Iram Haider ◽  
Rosilah Hassan ◽  
...  

The revolutionary idea of the internet of things (IoT) architecture has gained enormous popularity over the last decade, resulting in an exponential growth in IoT networks, connected devices, and the data processed therein. Since IoT devices generate and exchange sensitive data over the traditional internet, security has become a prime concern due to the emergence of zero-day cyberattacks. A network-based intrusion detection system (NIDS) can provide the much-needed efficient security solution for the IoT network by protecting the network entry points through constant network traffic monitoring. Recent NIDSs have a high false alarm rate (FAR) in detecting anomalies, including novel and zero-day anomalies. This paper proposes an efficient anomaly detection mechanism using mutual information (MI) and a deep neural network (DNN) for an IoT network. A comparative analysis of different deep learning models, such as the DNN, Convolutional Neural Network, and Recurrent Neural Network and its variants, such as the Gated Recurrent Unit and Long Short-Term Memory, is performed on the IoT-Botnet 2020 dataset. Experimental results show an improvement of 0.57–2.6% in the model's accuracy, while at the same time reducing the FAR by 0.23–7.98%, demonstrating the effectiveness of the DNN-based NIDS model compared to the well-known deep learning models. It was also observed that using only the 16–35 best numerical features selected using MI, instead of all 80 features of the dataset, results in almost negligible degradation in the model's performance while decreasing the overall model complexity. In addition, the overall detection accuracy of the DL-based models is further improved by almost 0.99–3.45% when only the top five categorical and numerical features are considered.
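The following sketch illustrates the general idea of MI-based feature selection followed by a DNN classifier; the synthetic data and the choice of k = 16 are placeholders standing in for the IoT-Botnet 2020 dataset and the paper's 16–35 feature range.

```python
# Minimal sketch: mutual-information feature selection + DNN anomaly classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.random((2000, 80))                 # 80 numerical features, as in the dataset
y = rng.integers(0, 2, size=2000)          # 0 = normal, 1 = anomalous

# Keep only the k features with the highest mutual information with the label.
selector = SelectKBest(mutual_info_classif, k=16)
X_sel = selector.fit_transform(X, y)

dnn = models.Sequential([
    layers.Input(shape=(X_sel.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(X_sel, y, epochs=2, batch_size=128, verbose=0)
```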


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) based on motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from the "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
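As an illustration only, the sketch below shows a small LSTM decoder operating on raw EEG windows; the window length and the three-channel layout (as in BCI Competition IV-2b) are assumptions, not the authors' published architecture.

```python
# Minimal sketch of an LSTM decoder for two-class motor imagery on raw EEG.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(1000, 3)),           # (time steps, EEG channels), assumed window
    layers.LSTM(64, return_sequences=True),
    layers.Dropout(0.3),
    layers.LSTM(32),
    layers.Dense(2, activation="softmax"),   # e.g., left-hand vs. right-hand imagery
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```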


2021 ◽  
Vol 72 (1) ◽  
pp. 11-20
Author(s):  
Mingtao He ◽  
Wenying Li ◽  
Brian K. Via ◽  
Yaoqi Zhang

Firms engaged in producing, processing, marketing, or using lumber and lumber products invest in futures markets to reduce the risk of lumber price volatility. Accurate prediction of real-time prices can help companies and investors hedge risks and make sound market decisions. This paper explores whether Internet browsing habits can accurately nowcast the lumber futures price. The predictors are Google Trends index data related to lumber prices. This study offers a fresh perspective on nowcasting the lumber price accurately. Employing both machine learning and deep learning methods shows that, despite the high predictive power of both, deep learning models can on average capture trends better and provide more accurate predictions than machine learning models. The artificial neural network model is the most competitive, followed by the recurrent neural network model.
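A minimal sketch of the nowcasting idea: a feed-forward network maps lagged Google Trends indices to the next price value. The number of lags and the synthetic series are placeholders, not the authors' data or model specification.

```python
# Minimal sketch of a feed-forward nowcasting model on lagged trend indices.
import numpy as np
from tensorflow.keras import layers, models

def make_lagged(series: np.ndarray, n_lags: int = 4):
    """Turn a 1-D series into (lag window, next value) training pairs."""
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

trend = np.sin(np.linspace(0, 20, 500)) + np.random.normal(0, 0.1, 500)  # stand-in index
X, y = make_lagged(trend)

ann = models.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                        # nowcast of the futures price
])
ann.compile(optimizer="adam", loss="mse")
ann.fit(X, y, epochs=5, verbose=0)
```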


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4575 ◽  
Author(s):  
Jihyun Lee ◽  
Jiyoung Woo ◽  
Ah Reum Kang ◽  
Young-Seob Jeong ◽  
Woohyun Jung ◽  
...  

Hypotensive events in the initial stage of anesthesia can cause serious complications for patients after surgery, which could be fatal. In this study, we aimed to predict hypotension after tracheal intubation one minute in advance using machine learning and deep learning techniques. Ensemble machine learning models, such as random forest and extreme gradient boosting (XGBoost), and deep learning models, especially the convolutional neural network (CNN) and the deep neural network (DNN), were trained to predict hypotension occurring between tracheal intubation and incision, using data from four minutes to one minute before tracheal intubation. Vital records and electronic health records (EHR) for 282 of 319 patients who underwent laparoscopic cholecystectomy from October 2018 to July 2019 were collected. Among the 282 patients, 151 developed post-induction hypotension. Our experiments had two scenarios: using raw vital records, and using feature engineering on vital records. The experiments on raw data showed that the CNN had the best accuracy of 72.63%, followed by random forest (70.32%) and XGBoost (64.6%). The experiments with feature engineering showed that random forest combined with feature selection had the best accuracy of 74.89%, while the CNN had a lower accuracy of 68.95% than in the experiment on raw data. Our study extends previous studies by predicting hypotension one minute before intubation. To improve accuracy, we built models using state-of-the-art algorithms. We found that the CNN performed well, but that random forest performed better when combined with feature selection. In addition, we found that the examination period (data period) is also important.
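The sketch below illustrates the feature-engineering branch that performed best: summary statistics per vital sign fed to a random forest with embedded feature selection. The chosen statistics, window length, and synthetic data are assumptions rather than the study's exact pipeline.

```python
# Minimal sketch: hand-crafted vital-sign features + random forest with feature selection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
raw = rng.random((282, 180, 4))            # 282 patients, 3 min of vitals at 1 Hz, 4 signals

# Simple hand-crafted features: mean, std, min, max of each vital sign.
feats = np.concatenate([raw.mean(1), raw.std(1), raw.min(1), raw.max(1)], axis=1)
labels = rng.integers(0, 2, size=282)      # 1 = post-induction hypotension

clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
print(cross_val_score(clf, feats, labels, cv=5).mean())
```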


Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 374
Author(s):  
Babacar Gaye ◽  
Dezheng Zhang ◽  
Aziguli Wulamu

With the extensive availability of social media platforms, Twitter has become a significant tool for acquiring people's views, opinions, attitudes, and emotions towards certain entities. Within this frame of reference, sentiment analysis of tweets has become one of the most fascinating research areas in the field of natural language processing. A variety of techniques have been devised for sentiment analysis, but there is still room for improvement where the accuracy and efficacy of the system are concerned. This study proposes a novel approach that exploits the advantages of a lexical dictionary, machine learning, and deep learning classifiers. We classified the tweets based on the sentiments extracted by TextBlob, using a stacked ensemble of three long short-term memory (LSTM) networks as base classifiers and logistic regression (LR) as a meta-classifier. The proposed model proved to be effective and time-saving since it does not require manual feature extraction, as the LSTM extracts features without any human intervention. We compared our proposed approach with conventional machine learning models such as logistic regression, AdaBoost, and random forest, and also included state-of-the-art deep learning models in the comparison. Experiments were conducted on the sentiment140 dataset and evaluated in terms of accuracy, precision, recall, and F1 score. Empirical results showed that our proposed approach achieved state-of-the-art results with an accuracy score of 99%.
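A minimal sketch of the stacking idea: three small LSTM base classifiers produce sentiment scores that a logistic-regression meta-classifier combines. The vocabulary size, sequence length, and toy data are illustrative assumptions, and a full setup would use out-of-fold predictions for the meta-features.

```python
# Minimal sketch of a stacked ensemble: LSTM base classifiers + LR meta-classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from tensorflow.keras import layers, models

VOCAB, MAX_LEN = 5000, 40

def make_lstm() -> models.Model:
    """One base classifier; ensemble diversity comes from random initialization."""
    m = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB, 32),
        layers.LSTM(32),
        layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy")
    return m

# Toy token-id sequences standing in for preprocessed tweets.
X = np.random.randint(1, VOCAB, size=(512, MAX_LEN))
y = np.random.randint(0, 2, size=512)

base_models = [make_lstm() for _ in range(3)]
for m in base_models:
    m.fit(X, y, epochs=1, batch_size=64, verbose=0)

# Stack the base predictions as meta-features for logistic regression.
# (A proper stacking setup would use out-of-fold predictions to avoid leakage.)
meta_X = np.hstack([m.predict(X, verbose=0) for m in base_models])
meta_clf = LogisticRegression().fit(meta_X, y)
print(meta_clf.score(meta_X, y))
```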


Author(s):  
Diana Gaifilina ◽  
Igor Kotenko

Introduction: The article discusses the problem of choosing deep learning models for detecting anomalies in Internet of Things (IoT) network traffic. This problem is associated with the need to analyze a large number of security events in order to identify the abnormal behavior of smart devices. Machine learning, and in particular deep learning, is a powerful technology for analyzing such data. Purpose: To develop recommendations for the selection of deep learning models for anomaly detection in IoT network traffic. Results: The main results of the research are a comparative analysis of deep learning models and recommendations on their use for anomaly detection in IoT network traffic. The multilayer perceptron, convolutional neural network, recurrent neural network, long short-term memory, gated recurrent unit, and a combined convolutional-recurrent neural network were considered as the basic deep learning models. Additionally, the authors analyzed the following traditional machine learning models: naive Bayes classifier, support vector machine, logistic regression, k-nearest neighbors, boosting, and random forest. The following metrics were used as indicators of anomaly detection efficiency: accuracy, precision, recall, and F-measure, as well as the time spent on training the model. The constructed models demonstrated a higher accuracy rate for anomaly detection in the large heterogeneous traffic typical of IoT than conventional machine learning methods. The authors found that as the number of neural network layers increases, the recall of detecting anomalous connections rises. This has a positive effect on the recognition of unknown anomalies, but increases the number of false positives. In some cases, training traditional machine learning models takes less time, since the application of deep learning methods requires more resources and computing power. Practical relevance: The results obtained can be used to build systems for network anomaly detection in Internet of Things traffic.
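The comparison protocol described above can be illustrated with a short sketch that trains several classifiers on the same features and reports accuracy, F1, and training time; synthetic data stands in for real IoT traffic, and the deep learning models would be timed and scored the same way.

```python
# Minimal sketch of a model-comparison loop: accuracy, F1, and training time.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((5000, 20))
y = rng.integers(0, 2, size=5000)          # 0 = normal, 1 = anomalous connection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(n_estimators=200),
}
for name, clf in candidates.items():
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    pred = clf.predict(X_te)
    print(f"{name:>20}: acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} train={elapsed:.2f}s")
```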

