Data processing using deep learning with a generative adversarial network (GAN)

2021 ◽  
Author(s):  
V.Y. Ilichev ◽  
I.V. Chukhraev

The article is devoted to one of the applications of modern and promising computer technology – machine learning. This direction is based on the creation of models consisting of neural networks and on their deep learning. At present, there is a need to generate new, not yet existing, images of objects of various types; most often, text files or images act as such objects. To achieve high-quality results, a generation method based on the adversarial work of two neural networks (a generator and a discriminator) was developed. This class of neural network models is distinguished by the complexity of its topology, since the structure of the neural layers must be organized correctly in order to achieve maximum accuracy and minimum error. The described program is created using the Python language and special libraries that extend the set of commands for performing additional functions: Keras for working with neural networks (the main library), Os for integrating with the operating system, Matplotlib for plotting graphs, Numpy for working with data arrays, and others. A description is given of the type and features of each neural layer, as well as of the library import functions, the input of initial data, and the compilation and training of the resulting model. Next, the article considers the procedure for outputting the generator and discriminator errors and the accuracy achieved by the model as functions of the number of training cycles (epochs). Based on the results of the work, conclusions were drawn and recommendations were made for the use and development of the considered methodology for creating and training generative adversarial neural networks. The study demonstrates how comparatively simple, accessible, yet effective means, namely the general-purpose Python language with the Keras library, can be used to create and train a complex neural network model. In effect, it shows that this method achieves high-quality machine learning results that were previously attainable only with specialized software systems for working with neural networks.
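As a rough sketch of the generator-discriminator scheme described above, the following Keras code builds and trains a minimal GAN and plots both losses with Matplotlib, mirroring the libraries the article names. The layer sizes, the MNIST training data, the batch size, and the step count are all assumptions for illustration, not the article's actual architecture.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

# Generator: maps a random latent vector to a flattened 28x28 image.
generator = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(784, activation="tanh"),
])

# Discriminator: classifies images as real or generated.
discriminator = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(784,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])

# Combined model: trains the generator to fool the frozen discriminator.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = (x_train.reshape(-1, 784).astype("float32") - 127.5) / 127.5

d_losses, g_losses, batch = [], [], 64
for step in range(1000):
    # Train the discriminator on a half-real, half-generated batch.
    real = x_train[np.random.randint(0, len(x_train), batch)]
    noise = np.random.normal(0, 1, (batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    d_real = discriminator.train_on_batch(real, np.ones((batch, 1)))
    d_fake = discriminator.train_on_batch(fake, np.zeros((batch, 1)))
    # Train the generator through the combined model with flipped labels.
    g_losses.append(gan.train_on_batch(noise, np.ones((batch, 1))))
    d_losses.append(0.5 * (d_real[0] + d_fake[0]))

# Plot both errors against the training cycle, as the article describes.
plt.plot(d_losses, label="discriminator loss")
plt.plot(g_losses, label="generator loss")
plt.xlabel("training step")
plt.legend()
plt.show()
```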

Author(s):  
Makhamisa Senekane ◽  
Mhlambululi Mafu ◽  
Molibeli Benedict Taele

Weather variations play a significant role in people's short-term, medium-term and long-term planning, so understanding weather patterns has become very important in decision making. Short-term weather forecasting (nowcasting) involves the prediction of weather over a short period of time, typically a few hours. Different techniques have been proposed for short-term weather forecasting. Traditional nowcasting techniques are highly parametric and hence complex. Recently, there has been a shift towards artificial intelligence techniques for weather nowcasting, including machine learning techniques such as artificial neural networks. In this chapter, we report the use of deep learning techniques for weather nowcasting, tested on meteorological data. Three deep learning techniques, namely the multilayer perceptron, Elman recurrent neural networks and Jordan recurrent neural networks, were used in this work. Multilayer perceptron models achieved accuracies of 91% and 75% for sunshine forecasting and precipitation forecasting respectively, Elman recurrent neural network models achieved accuracies of 96% and 97% for sunshine and precipitation forecasting respectively, while Jordan recurrent neural network models achieved accuracies of 97% and 97% for sunshine and precipitation nowcasting respectively. The results obtained underline the utility of deep learning for weather nowcasting.
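For concreteness, a minimal Keras sketch of two of the three model families follows. The window length, the number of weather variables, the layer sizes, and the binary sunshine target are assumptions; the chapter's exact configurations are not reproduced, and Keras has no built-in Jordan (output-feedback) layer, so only the multilayer perceptron and the Elman-style SimpleRNN are shown.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_steps, n_features = 24, 5  # e.g. 24 hourly readings of 5 weather variables

# Multilayer perceptron on flattened input windows.
mlp = keras.Sequential([
    layers.Flatten(input_shape=(n_steps, n_features)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # e.g. sunshine / no sunshine
])

# Elman-style recurrent network: SimpleRNN feeds the hidden state back.
elman = keras.Sequential([
    layers.SimpleRNN(32, input_shape=(n_steps, n_features)),
    layers.Dense(1, activation="sigmoid"),
])

for model in (mlp, elman):
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
```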


2021 ◽  
Author(s):  
L Jakaite ◽  
M Ciemny ◽  
S Selitskiy ◽  
Vitaly Schetinin

Abstract The Efficient Market Hypothesis (EMH) was introduced by Fama to analyse financial markets, and the theory has been tested in real cases under different conditions, including financial crises and frauds. Testing the EMH involves examining the prediction accuracy of models designed on retrospective data. Such prediction models can be designed in different ways, which motivated us to explore Machine Learning (ML) methods known for building models that provide high prediction performance. In this study we propose a "deep" learning method for building high-performance prediction models. The proposed method is based on the Group Method of Data Handling (GMDH), a deep learning paradigm capable of building multilayer neural-network models of near-optimal complexity on given data. We show that the developed GMDH-type neural network has outperformed the models built by conventional ML methods on Warsaw Stock Exchange data. Importantly, the complexity of the designed GMDH-type neural networks is defined by the number of layers and connections between neurons. The performances of the models were compared in terms of prediction errors. We report a significantly smaller prediction error for the proposed method than for the conventional autoregressive and "shallow" neural-network models. This allows us to conclude that traders would be advantaged by the proposed method.
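A minimal sketch of the GMDH idea follows, assuming quadratic two-input neurons and a held-out validation set as the external selection criterion; the function names and the candidate count `keep` are illustrative, and this is not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def fit_neuron(x1, x2, y):
    # Least-squares fit of y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def neuron_output(x1, x2, coef):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=8):
    # Fit a candidate neuron for every feature pair, score each candidate on
    # held-out data (the external criterion), and keep only the best few.
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        coef = fit_neuron(X_train[:, i], X_train[:, j], y_train)
        val_pred = neuron_output(X_val[:, i], X_val[:, j], coef)
        err = np.mean((val_pred - y_val) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Layers are grown from the surviving neurons' outputs until the best
# external error stops improving, yielding a network of near-optimal depth.
```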


2021 ◽  
Vol 15 (3) ◽  
pp. 1-21
Author(s):  
Jie Jiang ◽  
Qiuqiang Kong ◽  
Mark D. Plumbley ◽  
Nigel Gilbert ◽  
Mark Hoogendoorn ◽  
...  

Energy disaggregation, a.k.a. Non-Intrusive Load Monitoring, aims to separate the energy consumption of individual appliances from the readings of a mains power meter measuring the total energy consumption of, e.g., a whole house. Energy consumption of individual appliances can be useful in many applications, e.g., providing appliance-level feedback to end users to help them understand their energy consumption and ultimately save energy. Recently, with the availability of large-scale energy consumption datasets, various neural network models such as convolutional neural networks and recurrent neural networks have been investigated to solve the energy disaggregation problem. Neural network models can learn complex patterns from large amounts of data and have been shown to outperform traditional machine learning methods such as variants of hidden Markov models. However, current neural network methods for energy disaggregation are either computationally expensive or incapable of handling long-term dependencies. In this article, we investigate the application of the recently developed WaveNet models to the task of energy disaggregation. Based on a real-world energy dataset collected from 20 households over 2 years, we show that WaveNet models outperform the state-of-the-art deep learning methods proposed in the literature for energy disaggregation in terms of both error measures and computational cost. Building on energy disaggregation, we then investigate the performance of two deep-learning-based frameworks for the task of on/off detection, which aims at estimating whether an appliance is in operation or not. The first framework obtains the on/off states of an appliance by binarising the predictions of a regression model trained for energy disaggregation, while the second framework obtains the on/off states by directly training a binary classifier with binarised energy readings of the appliance serving as the target values. Based on the same dataset, we show that for the task of on/off detection the second framework, i.e., directly training a binary classifier, achieves better performance in terms of F1 score.
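A minimal sketch of a WaveNet-style disaggregator follows: a stack of dilated causal convolutions maps a window of mains readings to one appliance's load. The window length, filter counts, and dilation schedule are assumptions, not the architecture evaluated in the article.

```python
from tensorflow import keras
from tensorflow.keras import layers

window = 512  # mains readings per input window (assumed)

inputs = keras.Input(shape=(window, 1))
x = inputs
for dilation in (1, 2, 4, 8, 16, 32):
    # Dilated causal convolutions grow the receptive field exponentially,
    # which is how WaveNet captures long-term dependencies cheaply.
    x = layers.Conv1D(32, kernel_size=2, dilation_rate=dilation,
                      padding="causal", activation="relu")(x)
outputs = layers.Conv1D(1, kernel_size=1)(x)  # per-step appliance estimate
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")

# For the second on/off framework, swap in a sigmoid head and train against
# binarised appliance readings instead of raw power values:
onoff_head = layers.Conv1D(1, kernel_size=1, activation="sigmoid")
```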


Author(s):  
Hyun-il Lim

A neural network is a machine learning approach in which the connected nodes of a model are trained to predict the results of specific problems. The prediction model is trained using previously collected training data. In training neural network models, overfitting can arise from excessive dependence on the training data and from structural problems of the models. In this paper, we analyze the effect of DropConnect for controlling overfitting in neural networks. The effect is analyzed with respect to the DropConnect rate and the number of nodes in the neural network design. The analysis results of this study help in understanding the effect of DropConnect in neural networks. To design an effective neural network model, DropConnect can be applied with appropriate parameters based on an understanding of its effect in neural network models.
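Keras has no built-in DropConnect layer, so the following sketch assumes a custom layer: unlike Dropout, which zeroes activations, DropConnect zeroes individual weights during training. The rate and layer sizes here are illustrative only, not the paper's experimental settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

class DropConnectDense(layers.Dense):
    """Dense layer whose individual weights are randomly dropped in training."""

    def __init__(self, units, rate=0.5, **kwargs):
        super().__init__(units, **kwargs)
        self.rate = rate  # probability of dropping each connection

    def call(self, inputs, training=None):
        if training:
            # Sample a fresh random mask over the weight matrix and rescale,
            # so the expected pre-activation matches inference time.
            keep = tf.cast(
                tf.random.uniform(tf.shape(self.kernel)) >= self.rate,
                self.kernel.dtype)
            out = tf.matmul(inputs, self.kernel * keep / (1.0 - self.rate))
            if self.use_bias:
                out = out + self.bias
            return self.activation(out) if self.activation else out
        return super().call(inputs)

# Varying `rate` and `units` reproduces the two axes the paper analyzes.
model = tf.keras.Sequential([
    DropConnectDense(128, rate=0.3, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])
```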


2021 ◽  
Author(s):  
Kanimozhi V ◽  
T. Prem Jacob

Abstract Although various strategies exist for IoT intrusion detection, this research article sheds light on how the top 10 Artificial Intelligence deep learning models can be applied to both supervised and unsupervised learning on IoT network traffic data. It presents a detailed comparative analysis of IoT anomaly detection on smart IoT gadgets, which are instrumental in detecting IoT anomalies, using the recent IoT-23 dataset. Many strategies are being developed for securing IoT networks, but further development is still needed, and IoT security can be improved by using various deep learning methods. This exploration examines the top 10 deep learning techniques on the realistic IoT-23 dataset to improve the security of IoT network traffic. We built various neural network models for identifying five classes of IoT traffic records: Mirai, Denial of Service (DoS), Scan, Man-in-the-Middle attack (MITM-ARP), and Normal. These attacks can be detected by using a "softmax" function for multiclass classification in deep learning neural network models. The research was implemented in the Anaconda3 environment with packages such as Pandas, NumPy, Scipy, Scikit-learn, TensorFlow 2.2, Matplotlib, and Seaborn. AI deep learning models have been embraced in domains such as healthcare, banking and finance, and scientific research, as well as in business organizations, alongside concepts like the Internet of Things. We found that the top 10 deep learning models are capable of increasing accuracy while minimizing the loss functions and the execution time for building each model, which contributes significantly to IoT anomaly detection using the emerging technologies of Artificial Intelligence and deep learning neural networks; hence, the mitigation of attacks on an IoT network will be effective. Among the top 10 neural networks, convolutional neural networks, the multilayer perceptron, and Generative Adversarial Networks (GANs) produced the highest accuracy scores of 0.996317, 0.996157, and 0.995829 respectively, with minimized loss functions and shorter execution times. This article helps the reader to fully grasp the particulars of IoT anomaly detection; the analysis thus depicts implementations of the top 10 AI deep learning models that help in understanding different neural network models and IoT anomaly detection better.
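A minimal sketch of the softmax-based multiclass classifier described above follows, assuming preprocessed IoT-23 feature vectors; the feature count and layer sizes are illustrative and do not correspond to any of the article's ten models.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20  # assumed number of preprocessed flow features
classes = ["Mirai", "DoS", "Scan", "MITM-ARP", "Normal"]

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    # The softmax output assigns each record to one of the five classes.
    layers.Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```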


Author(s):  
Amey Thakur

The purpose of this study is to familiarise the reader with the foundations of neural networks. Artificial Neural Networks (ANNs) are algorithm-based systems modelled after Biological Neural Networks (BNNs). Neural networks are an effort to use the human brain's information-processing skills to address challenging real-world AI problems. The evolution of neural networks and their significance are briefly explored. ANNs and BNNs are contrasted, and their qualities, benefits, and disadvantages are discussed. The drawbacks of the perceptron model and their mitigation by the sigmoid neuron and the ReLU neuron are briefly discussed. In addition, we give a bird's-eye view of the different neural network models. We study neural networks (NNs) and highlight the different learning approaches and algorithms used in Machine Learning and Deep Learning. We also discuss different types of NNs and their applications. A brief introduction to Neuro-Fuzzy systems and their applications is provided, along with a comprehensive review of NN technological advances.
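To make the perceptron-to-sigmoid-to-ReLU progression mentioned above concrete, here is a small illustrative comparison of the three activations in NumPy; it is an aside on the survey's terminology, not part of the study itself.

```python
import numpy as np

def perceptron_step(z):
    return np.where(z >= 0, 1.0, 0.0)  # hard threshold: no useful gradient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # smooth and differentiable everywhere

def relu(z):
    return np.maximum(0.0, z)          # cheap; does not saturate for z > 0

z = np.linspace(-3.0, 3.0, 7)
for f in (perceptron_step, sigmoid, relu):
    print(f.__name__, f(z).round(2))
```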


2018 ◽  
Vol 6 (11) ◽  
pp. 216-216 ◽  
Author(s):  
Zhongheng Zhang ◽  
Marcus W. Beck ◽  
David A. Winkler ◽  
Bin Huang ◽  
...  

2021 ◽  
pp. 188-198

Innovations in advanced information technologies have led to the rapid delivery and sharing of multimedia data such as images and videos. Digital steganography offers the ability to secure communication and is imperative for the internet, and image steganography is essential for preserving the confidential information of security applications. The secret message is embedded within the pixels of a cover image; here, embedding is performed with the S-UNIWARD and WOW steganography algorithms. Hidden messages are revealed using steganalysis. Research interest spans both the conventional and the recent technological fields of steganalysis. This paper devises convolutional neural network models for steganalysis. The convolutional neural network (CNN) is one of the most frequently used deep learning techniques; it is used here to extract spatio-temporal information, or features, and to perform classification. We compared the steganalysis outcome with AlexNet and SRNeT on the same dataset, and the steganalytic error rates are compared for different payloads.
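A minimal sketch of a CNN steganalyser follows, assuming grayscale 256x256 covers and a fixed KV high-pass residual filter front end, a common choice in steganalysis; it is not the paper's architecture or the AlexNet/SRNeT baselines.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# KV high-pass kernel: suppresses image content so the network focuses on
# the faint stego noise rather than scene structure.
kv = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype="float32") / 12.0

inputs = keras.Input(shape=(256, 256, 1))
hp = layers.Conv2D(1, 5, padding="same", use_bias=False, trainable=False)
x = hp(inputs)
hp.set_weights([kv.reshape(5, 5, 1, 1)])  # fix the residual filter

for filters in (16, 32, 64):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.AveragePooling2D()(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # cover vs. stego

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])  # error rate = 1 - accuracy
```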


2021 ◽  
Vol 1 (1) ◽  
pp. 19-29
Author(s):  
Zhe Chu ◽  
Mengkai Hu ◽  
Xiangyu Chen

Recently, deep learning has been successfully applied to robotic grasp detection, and many end-to-end detection approaches based on convolutional neural networks (CNNs) have been proposed. However, end-to-end approaches have strict requirements for the dataset used to train the neural network models, which are hard to meet in practical use. Therefore, we propose a two-stage approach that uses a particle swarm optimizer (PSO) as a candidate estimator together with a CNN to detect the most likely grasp. Our approach achieved an accuracy of 92.8% on the Cornell Grasp Dataset, which places it among the leading existing approaches while running at real-time speeds. With a small change, the approach can also predict multiple grasps per object at the same time, so that an object can be grasped in a variety of ways.
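A minimal sketch of the two-stage idea follows: a particle swarm proposes grasp candidates (x, y, angle) and a scorer evaluates each one. The CNN scorer is stubbed out with a dummy objective, and the swarm size and coefficients are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cnn_score(candidates):
    # Placeholder for the trained CNN's grasp-quality score per candidate;
    # a dummy quadratic objective stands in for the network here.
    return -np.sum((candidates - 0.5) ** 2, axis=1)

rng = np.random.default_rng(0)
n, dims = 30, 3                          # 30 particles over (x, y, angle)
pos = rng.uniform(0.0, 1.0, (n, dims))   # normalised grasp parameters
vel = np.zeros((n, dims))
pbest, pbest_val = pos.copy(), cnn_score(pos)

for _ in range(50):
    gbest = pbest[np.argmax(pbest_val)]
    # Standard PSO update: inertia plus cognitive and social attraction.
    vel = (0.7 * vel
           + 1.5 * rng.uniform(size=(n, dims)) * (pbest - pos)
           + 1.5 * rng.uniform(size=(n, dims)) * (gbest - pos))
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = cnn_score(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]

print("most likely grasp:", pbest[np.argmax(pbest_val)])
```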


2021 ◽  
Author(s):  
Wael Alnahari

Abstract In this paper, I propose an iris recognition system that uses deep learning via convolutional neural networks (CNN). Although CNNs are normally trained for machine learning, recognition here is achieved by building a non-trained CNN with multiple layers. The main objective of the code is to identify each test picture's category (i.e., the person's name) with a high accuracy rate after having extracted enough features from training pictures of the same category, which are obtained from a dataset that I added to the code. I used the IITD iris dataset, which includes 10 iris pictures for each of 223 people.
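A minimal sketch of the untrained-CNN approach described above follows: randomly initialised convolutional layers map iris images to feature vectors, and a nearest-neighbour match over the training features assigns the identity. The image size, layer sizes, and matching rule are assumptions, not the paper's code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Randomly initialised (non-trained) CNN used purely as a feature extractor.
extractor = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

def identify(test_img, train_imgs, train_labels):
    # Match the test iris to the nearest training iris in feature space.
    feats = extractor.predict(train_imgs, verbose=0)
    query = extractor.predict(test_img[None], verbose=0)
    return train_labels[np.argmin(np.linalg.norm(feats - query, axis=1))]
```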

