Scalable Polyhedral Verification of Recurrent Neural Networks

Author(s):  
Wonryong Ryou ◽  
Jiayu Chen ◽  
Mislav Balunovic ◽  
Gagandeep Singh ◽  
Andrei Dan ◽  
...  

We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron. Using Prover, we present the first study of certifying a non-trivial use case of recurrent neural networks, namely speech classification. To achieve this, we additionally develop custom abstractions for the non-linear speech preprocessing pipeline. Our evaluation shows that Prover successfully verifies several challenging recurrent models in computer vision, speech, and motion sensor data classification that are beyond the reach of prior work.
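The core idea of a polyhedral abstraction is to bound a non-linear activation by sound linear constraints over an input interval. The sketch below is a simplified stand-in for Prover's method (which uses optimization and Fermat's theorem): it fits a chord to tanh by sampling, then shifts the offset so the resulting planes are sound at every sample point. The function name and sampling strategy are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def linear_bounds_tanh(l, u, n_samples=200):
    """Compute linear bounds a*x + b_lo <= tanh(x) <= a*x + b_up on [l, u].

    Simplified sketch: use the endpoint chord's slope and shift the
    offsets until the planes are sound at all sampled points.
    """
    xs = np.linspace(l, u, n_samples)
    ys = np.tanh(xs)
    # Slope of the chord between the interval endpoints.
    a = (np.tanh(u) - np.tanh(l)) / (u - l)
    # Shift offsets so the planes bound tanh at every sample.
    offsets = ys - a * xs
    return a, offsets.min(), offsets.max()

a, b_lo, b_up = linear_bounds_tanh(-1.0, 2.0)
```

A verifier would propagate such bounds through each recurrent step; the gap `b_up - b_lo` measures the abstraction's looseness, which refinement tries to shrink.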

Processes ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1155
Author(s):  
Yi-Wei Lu ◽  
Chia-Yu Hsu ◽  
Kuang-Chieh Huang

With the development of smart manufacturing, large numbers of sensors have been deployed to record the variables associated with production equipment and to detect abnormal conditions. This study focuses on the prediction of Remaining Useful Life (RUL). RUL prediction is part of predictive maintenance, which uses a machine's degradation trend to predict when it will fail. Accurate RUL prediction reduces the consumption of manpower and materials as well as the need for future maintenance. This study aims to detect faults as early as possible, before the machine needs to be replaced or repaired, to ensure the reliability of the system. Because it is difficult to extract meaningful features directly from sensor data, this study proposes a model based on an Autoencoder Gated Recurrent Unit (AE-GRU), in which the Autoencoder (AE) extracts important features from the raw data and the Gated Recurrent Unit (GRU) selects information from the sequences to forecast RUL. To evaluate the performance of the proposed AE-GRU model, an aircraft turbofan engine degradation simulation dataset provided by NASA was used and the model was compared against different recurrent neural networks. The results demonstrate that the AE-GRU outperforms other recurrent neural networks such as Long Short-Term Memory (LSTM) and the plain GRU.
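The GRU's gating mechanism is what lets it select which information from a sensor sequence to carry forward. The following is a minimal numpy sketch of a single GRU cell rolled over a sequence, not the paper's AE-GRU; the dimensions, initialization, and the idea of regressing RUL from the final hidden state are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step (biases omitted for brevity)."""
    z = sigmoid(x @ Wz + h @ Uz)                # update gate: keep vs. overwrite
    r = sigmoid(x @ Wr + h @ Ur)                # reset gate: forget old state
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)    # candidate new state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
params = [rng.normal(scale=0.1, size=s) for s in [(d_in, d_h), (d_h, d_h)] * 3]

h = np.zeros(d_h)
for t in range(20):                             # e.g. 20 sensor readings
    x_t = rng.normal(size=d_in)
    h = gru_step(x_t, h, *params)
# h now summarizes the sequence; a linear head could regress RUL from it.
```

In the AE-GRU pipeline, the autoencoder's compressed features would play the role of `x_t` here, replacing the raw sensor vector.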


Artificial Intelligence has shown monumental growth in closing the gap between the capabilities of humans and machines. Researchers and scientists work on many aspects to make new things happen, and Computer Vision is one of them. To give systems the ability to see, neural networks are used. Well-known neural network families include Convolutional Neural Networks (CNN), Feedforward Neural Networks (FNN), and Recurrent Neural Networks (RNN). Among them, the CNN is the natural choice for computer vision because it learns relevant features from an image or video in a manner similar to the human brain. In this paper, the dataset used is CIFAR-10 (Canadian Institute for Advanced Research), which contains 60,000 images of size 32x32 divided into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The dataset is split into 50,000 training images and 10,000 testing images. This paper mainly concentrates on improving performance using normalization layers and comparing the accuracy achieved with different activation functions such as ReLU and Tanh.
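The interaction between normalization layers and the activation function can be sketched directly: normalizing pre-activations keeps them centered, so ReLU passes roughly half of them and Tanh stays out of its saturated region. The snippet below is a minimal numpy illustration of batch normalization (with the learned scale/shift parameters omitted), not the paper's network.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean and unit variance over the batch.

    Learned scale (gamma) and shift (beta) parameters are omitted here.
    """
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

relu = lambda x: np.maximum(x, 0.0)
tanh = np.tanh

rng = np.random.default_rng(1)
# Hypothetical pre-activations with a large offset and spread.
batch = rng.normal(loc=5.0, scale=3.0, size=(64, 10))
normed = batch_norm(batch)
# Without normalization, tanh(batch) saturates near 1 almost everywhere;
# after normalization, both ReLU and Tanh see well-scaled inputs.
```

This is one intuition for why the paper pairs normalization layers with its ReLU-vs-Tanh accuracy comparison: normalization changes how much of the activation's useful operating range each function actually sees.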


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 148031-148046 ◽  
Author(s):  
Jun Zhang ◽  
Zhongcheng Wu ◽  
Fang Li ◽  
Jianfei Luo ◽  
Tingting Ren ◽  
...  
