parametric rectified linear unit
Recently Published Documents


TOTAL DOCUMENTS

13
(FIVE YEARS 10)

H-INDEX

2
(FIVE YEARS 1)

Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 147
Author(s):  
Handan Jing ◽  
Shiyong Li ◽  
Ke Miao ◽  
Shuoguang Wang ◽  
Xiaoxi Cui ◽  
...  

To solve the problems of high computational complexity and unstable image quality inherent in the compressive sensing (CS) method, we propose a complex-valued fully convolutional neural network (CVFCNN)-based method for near-field enhanced millimeter-wave (MMW) three-dimensional (3-D) imaging. A generalized form of the complex parametric rectified linear unit (CPReLU) activation function with independent, learnable parameters is presented to improve the performance of the CVFCNN. Because existing machine learning libraries lack support for complex-valued neural networks (CVNNs), the CVFCNN structure is designed and the formulas of the complex-valued back-propagation algorithm are derived in detail. Compared with a real-valued fully convolutional neural network (RVFCNN), the proposed CVFCNN offers better performance while needing fewer parameters. It also outperforms the CVFCNNs previously used in radar imaging with other activation functions. Numerical simulations and experiments verify the efficacy of the proposed network in comparison with state-of-the-art networks and the CS method for enhanced MMW imaging.
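The abstract does not spell out the CPReLU formula; as a hedged illustration, one common split-type formulation applies a PReLU with independent learnable slopes to the real and imaginary channels. A minimal numpy sketch (the slope values, and whether the paper's generalized CPReLU uses exactly this split form, are assumptions):

```python
import numpy as np

def prelu(x, a):
    """Standard PReLU: identity for positive inputs, learnable slope a otherwise."""
    return np.where(x > 0, x, a * x)

def cprelu(z, a_re, a_im):
    """Split-type complex PReLU: PReLU applied to the real and imaginary parts
    with independent learnable slopes (a_re, a_im) - an assumed formulation."""
    return prelu(z.real, a_re) + 1j * prelu(z.imag, a_im)

z = np.array([1.0 + 1.0j, -2.0 + 0.5j, 0.5 - 3.0j])
out = cprelu(z, a_re=0.25, a_im=0.1)
```

In training, `a_re` and `a_im` would be updated by the complex-valued back-propagation the paper derives, one slope pair per channel.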


2021 ◽  
Author(s):  
Alshimaa Hamdy ◽  
Tarek Abed Soliman ◽  
Mohamed Rihan ◽  
Moawad I. Dessouky

Abstract Beamforming design is a crucial stage in millimeter-wave systems with massive antenna arrays. We propose a deep learning network for the design of the precoder and combiner in hybrid architectures. The proposed network employs a parametric rectified linear unit (PReLU) activation function, which improves model accuracy at almost no additional complexity compared to other activation functions. The network accepts practical channel estimates as input and can be trained to enhance spectral efficiency under the hardware limitations of the hybrid design. Simulations show that the proposed network achieves a small performance improvement over the same network with the ReLU activation function.
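The near-zero complexity cost mentioned above follows from the definition: PReLU differs from ReLU only in a learnable slope on the negative half-axis, typically one extra parameter per channel. A minimal numpy sketch:

```python
import numpy as np

def relu(x):
    """ReLU: clips negative inputs to zero."""
    return np.maximum(x, 0.0)

def prelu(x, a):
    """PReLU: one learnable slope a per channel; reduces to ReLU when a == 0
    and to leaky ReLU when a is a fixed small constant."""
    return np.where(x > 0, x, a * x)

x = np.array([-1.5, -0.2, 0.0, 0.7])
r = relu(x)           # negative inputs clipped to zero
p = prelu(x, a=0.25)  # negative inputs scaled by the learnable slope
```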


2021 ◽  
Author(s):  
Abdullahi Mohammad ◽  
Christos Masouros ◽  
Yiannis Andreopoulos

We consider a single-cell downlink scenario in which the BS is equipped with four antennas (M = 4) serving single-antenna users. The dataset is obtained from channel realizations randomly generated from a normal distribution with zero mean and unit variance, then reshaped and converted to the real-number domain. The input dataset is normalized by the transmit data symbols so that the data entries lie within the nominal range, which potentially aids training. We generate 50,000 training samples and 2000 test samples. The transmit data symbols are modulated with a QPSK modulation scheme, and the training SINR is drawn randomly from a uniform distribution, Γ_train ∼ U(Γ_low, Γ_high). Stochastic gradient descent is used with the Lagrangian function as the loss metric. A parametric rectified linear unit (PReLU) activation function is used for the convolutional and fully connected layers in the full-precision model, and a low-bit activation function in the quantized model. After every iteration, the learning rate is scaled by a factor of α = 0.65 to help the learning algorithm converge faster.
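The data pipeline above can be sketched as follows. Only M = 4, the sample counts, the complex-normal channel draw, the uniform SINR draw, and α = 0.65 come from the text; the array shapes, SINR bounds, and base learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Channel realizations: M = 4 BS antennas, entries drawn from a complex
# normal distribution with zero mean and unit variance.
M, n_train, n_test = 4, 50_000, 2_000
H = (rng.standard_normal((n_train, M))
     + 1j * rng.standard_normal((n_train, M))) / np.sqrt(2)
H_test = (rng.standard_normal((n_test, M))
          + 1j * rng.standard_normal((n_test, M))) / np.sqrt(2)

# Convert to the real-number domain: stack real and imaginary parts.
X = np.concatenate([H.real, H.imag], axis=1)

# Training SINR drawn uniformly from [Γ_low, Γ_high] (bounds assumed).
gamma_low, gamma_high = 0.0, 20.0
sinr_train = rng.uniform(gamma_low, gamma_high, size=n_train)

# Learning-rate schedule: scale by α = 0.65 after every iteration
# (base rate is an assumption).
lr0, alpha = 1e-3, 0.65
lrs = [lr0 * alpha**k for k in range(4)]
```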


2020 ◽  
Author(s):  
Kien Mai Ngoc ◽  
Donghun Yang ◽  
Iksoo Shin ◽  
Hoyong Kim ◽  
Myunggwon Hwang

2020 ◽  
Vol 10 (10) ◽  
pp. 3658
Author(s):  
Karshiev Sanjar ◽  
Olimov Bekhzod ◽  
Jaeil Kim ◽  
Jaesoo Kim ◽  
Anand Paul ◽  
...  

The early and accurate diagnosis of skin cancer is crucial for providing patients with advanced treatment by focusing medical personnel on specific parts of the skin. Networks based on encoder–decoder architectures have been effectively implemented for numerous computer-vision applications. U-Net, a CNN architecture based on the encoder–decoder network, has achieved successful performance for skin-lesion segmentation. However, this network has several drawbacks caused by its upsampling method and activation function. In this paper, a fully convolutional network based on a modified U-Net architecture is proposed, in which a bilinear interpolation method is used for upsampling, followed by a block of convolution layers with parametric rectified linear-unit non-linearity. To avoid overfitting, dropout is applied after each convolution block. The results demonstrate that the recommended technique achieves state-of-the-art performance for skin-lesion segmentation, with 94% pixel accuracy and an 88% Dice coefficient.
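The upsampling change described above (bilinear interpolation followed by convolutions with PReLU, rather than transposed convolution) can be illustrated with a minimal numpy sketch of 2x bilinear upsampling on one single-channel map. Align-corners-style sampling is an assumption; the paper's exact interpolation settings are not given:

```python
import numpy as np

def bilinear_upsample2x(x):
    """2x bilinear upsampling of a (H, W) map: linearly interpolate along
    rows, then along columns (align-corners-style grid, an assumption)."""
    h, w = x.shape
    xi = np.linspace(0.0, w - 1.0, 2 * w)
    rows = np.stack([np.interp(xi, np.arange(w), r) for r in x])          # (h, 2w)
    yi = np.linspace(0.0, h - 1.0, 2 * h)
    return np.stack([np.interp(yi, np.arange(h), c) for c in rows.T]).T  # (2h, 2w)

x = np.array([[0.0, 2.0],
              [4.0, 6.0]])
y = bilinear_upsample2x(x)
# In the modified U-Net, this upsampled map would then pass through a block
# of convolution layers with PReLU non-linearity, followed by dropout.
```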


Information ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 356 ◽  
Author(s):  
Yuelei Xiao ◽  
Xing Xiao

Residual networks (ResNets) are prone to over-fitting on low-dimensional, small-scale datasets, and existing intrusion detection systems (IDSs) fail to provide good performance, especially for remote-to-local (R2L) and user-to-root (U2R) attacks. To overcome these problems, a simplified residual network (S-ResNet) is proposed in this paper, which consists of several cascaded, simplified residual blocks. Compared with the original residual block, the simplified residual block deletes a weight layer and two batch normalization (BN) layers, adds a pooling layer, and replaces the rectified linear unit (ReLU) function with the parametric rectified linear unit (PReLU) function. Based on the S-ResNet, a novel IDS is proposed, which includes a data preprocessing module, a random oversampling module, an S-ResNet layer, a fully connected layer and a softmax layer. The experimental results on the NSL-KDD dataset show that the S-ResNet-based IDS achieves higher accuracy, recall and F1-score than an equal-scale ResNet-based IDS, especially for R2L and U2R attacks, and converges faster. This demonstrates that the S-ResNet reduces the complexity of the network and effectively prevents over-fitting, making it more suitable for low-dimensional, small-scale datasets than ResNet. Furthermore, the experiments also show that the S-ResNet-based IDS achieves better accuracy and recall than existing IDSs, especially for R2L and U2R attacks.
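The block modifications above (one weight layer instead of two, no BN, PReLU instead of ReLU, an added pooling layer) can be sketched in 1-D numpy. The dense matrix standing in for the convolutional weight layer, the pooling window of 2, and the ordering of shortcut addition before pooling are all assumptions, not the paper's exact design:

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU replaces ReLU in the simplified block; a is the learnable slope."""
    return np.where(x > 0, x, a * x)

def simplified_residual_block(x, W):
    """S-ResNet block sketch: a single weight layer (vs. two in the original
    residual block), no batch normalization, PReLU activation, identity
    shortcut, then the added pooling layer (average pooling, window 2)."""
    y = prelu(x @ W) + x             # one weight layer + shortcut connection
    return y.reshape(-1, 2).mean(axis=1)

x = np.array([1.0, -1.0, 2.0, 0.5])
W = np.eye(4)                        # illustrative weights
out = simplified_residual_block(x, W)
```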

