ERDNN: Error-Resilient Deep Neural Networks With a New Error Correction Layer and Piece-Wise Rectified Linear Unit

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 158702-158711
Author(s):  
Muhammad Salman Ali ◽  
Tauhid Bin Iqbal ◽  
Kang-Ho Lee ◽  
Abdul Muqeet ◽  
Seunghyun Lee ◽  
...  
2019 ◽  
Vol 220 (1) ◽  
pp. 323-334
Author(s):  
Jing Zheng ◽  
Shuaishuai Shen ◽  
Tianqi Jiang ◽  
Weiqiang Zhu

SUMMARY It is essential for microseismic monitoring systems to pick P-wave and S-wave arrival times rapidly and accurately, yet traditional picking methods struggle to identify arrivals at the true phase automatically. This is one reason many researchers have introduced deep neural networks to these problems. Convolutional neural networks (CNNs) are attractive for designing automatic phase pickers, especially after the adoption of the fundamental network structure from the semantic segmentation field, which outputs a probability for every labelled phase at every sample in the recordings. The typical segmentation architecture consists of two main parts: (1) an encoder trained to extract coarse semantic features; (2) a decoder responsible not only for recovering the input resolution at the output but also for obtaining a sparse representation of the objects. This fundamental segmentation structure performs well; however, the influence of its parameters on the pickers has not been investigated, so structure design has relied on experience and tests. In this paper, we address two main questions to give some guidance on network design. First, we show what sparse features a CNN learns from three-component microseismic recordings. Second, we analyse the influence of two key parameters, the depth of the decoder and the activation function, on the pickers. Increasing the number of levels at a given layer of the decoder increases the number of trainable parameters required, but benefits model accuracy. A reasonable decoder depth can balance prediction accuracy against the demand for labelled data, which matters for microseismic systems because manual labelling degrades real-time performance in monitoring tasks. We compare the standard rectified linear unit (ReLU) with the leaky rectified linear unit (Leaky ReLU) at different negative slopes. Leaky ReLU with a small negative slope improves the performance of a given model over ReLU by retaining some information from the negative part of the input.
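
The following is a minimal PyTorch sketch (not the authors' code) of the kind of 1-D encoder-decoder picker the abstract describes, exposing the two parameters it analyses: the decoder depth and the Leaky ReLU negative slope. All layer sizes, kernel widths and channel counts are illustrative assumptions.

```python
# Hypothetical 1-D encoder-decoder phase picker; layer sizes are assumptions.
import torch
import torch.nn as nn

class PhasePicker(nn.Module):
    def __init__(self, depth=3, negative_slope=0.01, base_channels=8):
        super().__init__()
        act = lambda: nn.LeakyReLU(negative_slope)  # slope 0.0 recovers ReLU
        enc, dec = [], []
        ch = 3  # three-component recordings
        for i in range(depth):
            out = base_channels * 2 ** i
            enc += [nn.Conv1d(ch, out, 7, stride=2, padding=3), act()]
            ch = out
        for i in reversed(range(depth)):
            out = base_channels * 2 ** max(i - 1, 0)
            dec += [nn.ConvTranspose1d(ch, out, 7, stride=2, padding=3,
                                       output_padding=1), act()]
            ch = out
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)
        self.head = nn.Conv1d(ch, 3, 1)  # per-sample P, S, noise scores

    def forward(self, x):
        z = self.encoder(x)   # coarse semantic features
        y = self.decoder(z)   # recover the input resolution
        return torch.softmax(self.head(y), dim=1)

x = torch.randn(1, 3, 1024)  # a batch of three-component waveforms
probs = PhasePicker(depth=3, negative_slope=0.01)(x)
print(probs.shape)           # torch.Size([1, 3, 1024])
```

Setting negative_slope to 0.0 gives a plain ReLU baseline, so the activation comparison in the abstract reduces to a single hyperparameter sweep.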


2020 ◽  
Vol 35 (12) ◽  
pp. 1987-2008 ◽  
Author(s):  
Han Wang ◽  
Haixian Zhang ◽  
Junjie Hu ◽  
Ying Song ◽  
Sen Bai ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2658
Author(s):  
Myunghoon Lee ◽  
Hyeonho Shin ◽  
Dabin Lee ◽  
Sung-Pil Choi

Grammatical Error Correction (GEC) is the task of detecting and correcting various grammatical errors in texts. Many previous approaches to GEC have used various mechanisms, including rules, statistics, and their combinations. Recently, the performance of English GEC has been drastically enhanced by the vigorous application of deep neural networks and pretrained language models. Following the promising results on English GEC tasks, we apply the Transformer with Copying Mechanism to the Korean GEC task, introducing novel and effective noising methods for constructing Korean GEC datasets. Our comparative experiments show that the proposed system outperforms two commercial grammar checkers and other NMT-based models.
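
The sketch below illustrates the general noising idea behind constructing GEC training pairs: corrupt clean sentences with random edits so that (noisy, clean) pairs can train a sequence-to-sequence corrector. The edit types and rates here are illustrative assumptions, not the paper's Korean-specific rules.

```python
# Generic GEC noising sketch; edit operations and rates are assumptions.
import random

def noise(sentence, p=0.1, seed=None):
    """Corrupt a clean sentence with random deletions, duplications and
    adjacent character swaps, each applied with total probability p."""
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        r = rng.random()
        if r < p / 3:
            continue                          # deletion
        elif r < 2 * p / 3:
            out += [tok, tok]                 # duplication
        elif r < p and len(tok) > 1:
            i = rng.randrange(len(tok) - 1)   # adjacent character swap
            out.append(tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:])
        else:
            out.append(tok)
    return " ".join(out)

clean = "deep neural networks improved grammatical error correction"
print(noise(clean, p=0.3, seed=7))  # corrupted source; `clean` is the target
```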


2020 ◽  
Vol 32 (24) ◽  
pp. 18327-18345
Author(s):  
Christoph Schorn ◽  
Thomas Elsken ◽  
Sebastian Vogel ◽  
Armin Runge ◽  
Andre Guntoro ◽  
...  

Author(s):  
Hengjie Chen ◽  
Zhong Li

By applying fundamental mathematical knowledge, this paper proves that the function [Formula: see text] (where [Formula: see text] is an integer no less than [Formula: see text]) has the following property: the difference between its value at the midpoint of any two adjacent equidistant nodes on [Formula: see text] and the mean of its values at these two nodes is a constant depending only on the number of nodes, if and only if [Formula: see text]. From this, we establish an important result on deep neural networks: the function [Formula: see text] can be interpolated by a deep rectified linear unit (ReLU) network of depth [Formula: see text] on the equidistant nodes in the interval [Formula: see text], with approximation error [Formula: see text]. Then, based on this main result and the Chebyshev orthogonal polynomials, we construct a deep network and give error estimates for the approximation of polynomials and of continuous functions, respectively. In addition, we construct a deep network with local sparse connections, shared weights and activation function [Formula: see text], and discuss its density and complexity.
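
As background to this result, the sketch below demonstrates the elementary fact such constructions build on: the piecewise-linear interpolant of a function on equidistant nodes is exactly a one-hidden-layer ReLU network. The deep constructions in the paper refine this; the node count and target function below are assumptions for illustration only.

```python
# Piecewise-linear interpolation as a shallow ReLU network (numpy sketch).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_interpolant(f, n, a=0.0, b=1.0):
    """Return a one-hidden-layer ReLU network that matches f exactly
    at the n + 1 equidistant nodes of [a, b]."""
    t = np.linspace(a, b, n + 1)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)       # slope on each subinterval
    coeffs = np.diff(slopes, prepend=0.0)  # slope change at each node
    def net(x):
        x = np.asarray(x, dtype=float)
        # f(a) plus a weighted sum of n ReLU units hinged at the nodes
        return y[0] + coeffs @ relu(x[None, :] - t[:-1, None])
    return net

f = lambda x: x ** 2
net = relu_interpolant(f, n=8)
xs = np.linspace(0.0, 1.0, 9)
print(np.max(np.abs(net(xs) - f(xs))))     # ~0: exact at the nodes
```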


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong
