Deep Learning-Based Approach to Fast Power Allocation in SISO SWIPT Systems with a Power-Splitting Scheme

2020 ◽  
Vol 10 (10) ◽  
pp. 3634
Author(s):  
Huynh Thanh Thien ◽  
Pham-Viet Tuan ◽  
Insoo Koo

Recently, simultaneous wireless information and power transfer (SWIPT) systems, which can efficiently supply both throughput and energy, have emerged as a potential research area in fifth-generation (5G) systems. In this paper, we study a multi-user, single-input single-output (SISO) SWIPT system. First, we solve the transmit power optimization problem, which provides the optimal strategy for minimizing transmit power while satisfying the signal-to-interference-plus-noise ratio (SINR) and harvested-energy requirements needed to keep receiver circuits working in SWIPT systems where receivers are equipped with a power-splitting structure. Although optimization algorithms are able to achieve relatively high performance, they often entail a significant number of iterations, which raises computation-cost and latency issues for real-time applications. Therefore, we provide a deep learning-based approach, which is a promising solution to this challenging issue. The deep learning architectures used in this paper include a type of Deep Neural Network (DNN), the Feed-Forward Neural Network (FFNN), and three types of Recurrent Neural Network (RNN): the Layer Recurrent Network (LRN), the Nonlinear AutoRegressive network with eXogenous inputs (NARX), and Long Short-Term Memory (LSTM). Through simulations, we show that these deep learning approaches can approximate a complex optimization algorithm that optimizes transmit power in SWIPT systems with much less computation time.
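As a rough illustration of the core idea of replacing an iterative solver with a single forward pass, the following is a minimal Keras sketch; the network sizes, number of users, and training targets are assumptions with placeholder data, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' exact architecture): an FFNN trained to
# imitate the output of an iterative power-minimization solver. Inputs
# (per-user channel gains) and targets (transmit powers and power-splitting
# ratios) would in practice come from a pre-generated solver dataset.
import numpy as np
import tensorflow as tf

num_users = 4                                  # assumed number of SISO SWIPT receivers
X = np.random.rand(10000, num_users)           # placeholder channel gains
y = np.random.rand(10000, 2 * num_users)       # placeholder [powers, PS ratios]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(num_users,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2 * num_users, activation="sigmoid"),  # normalized outputs
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=128, verbose=0)

# At run time, one forward pass replaces many solver iterations.
p_hat = model.predict(X[:1])
```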


Author(s):  
Asma Husna ◽  
Saman Hassanzadeh Amin ◽  
Bharat Shah

Supply chain management (SCM) is a fast-growing and widely studied field of research. Forecasting the required materials and parts is an important task in companies and can have a significant impact on total cost. To obtain a reliable forecast, advanced methods such as deep learning techniques are helpful. The main goal of this chapter is to forecast the unit sales of thousands of items sold at different chain stores located in Ecuador using holistic techniques. Three deep learning approaches, the artificial neural network (ANN), the convolutional neural network (CNN), and long short-term memory (LSTM), are adopted here for predictions on the Corporación Favorita grocery sales forecasting dataset collected from the Kaggle website. Finally, the performances of the applied models are evaluated and compared. The results show that the LSTM network tends to outperform the other two approaches. All experiments are conducted in Python using the Keras and TensorFlow deep learning libraries.
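For concreteness, a minimal sketch of an LSTM forecaster of this kind is shown below, assuming a hypothetical look-back window and placeholder data rather than the Corporación Favorita dataset itself.

```python
# Minimal sketch (assumed shapes, not the chapter's exact model): an LSTM that
# maps a window of past daily unit sales to the next day's sales.
import numpy as np
import tensorflow as tf

window = 28                                    # assumed look-back window (days)
X = np.random.rand(5000, window, 1)            # placeholder sales-history windows
y = np.random.rand(5000, 1)                    # placeholder next-day sales

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```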


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1939
Author(s):  
Jun Wei Chen ◽  
Xanno K. Sigalingging ◽  
Jenq-Shiou Leu ◽  
Jun-Ichi Takada

In recent years, Chinese has become one of the most popular languages globally, and the demand for automatic Chinese sentence correction has gradually increased. Such a system can be applied to Chinese language learning to reduce the cost of learning and the feedback time, and it can help writers check for wrong words. The traditional way to perform Chinese sentence correction is to check whether a word exists in a predefined dictionary; however, this kind of method cannot deal with semantic errors. As deep learning has become popular, an artificial neural network can be applied to understand a sentence's context and correct semantic errors. However, many issues still need to be addressed: for example, the accuracy and the computation time required to correct a sentence are still unsatisfactory, so deep-learning-based Chinese sentence correction may not yet be ready for large-scale commercial applications. Our goal is to obtain a model with better accuracy and computation time. Combining a recurrent neural network with Bidirectional Encoder Representations from Transformers (BERT), a recently popular model known for its high performance but slow inference speed, we introduce a hybrid model for Chinese sentence correction that improves both accuracy and inference speed. Among the results, BERT-GRU obtained the highest BLEU score in all experiments. Compared with the original transformer-based model, inference speed improves by 1131% with beam search decoding in the 128-word experiment and by 452% with greedy decoding. The longer the sequence, the larger the improvement.
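A hedged sketch of a BERT-plus-GRU hybrid of this general shape is given below; the checkpoint name, layer sizes, and training setup are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch (assumed checkpoint and sizes, not the paper's exact hybrid):
# a BERT encoder whose token representations feed a GRU layer, with a softmax
# over the vocabulary to emit a corrected character at each position.
import tensorflow as tf
from transformers import TFBertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")  # assumed checkpoint
bert = TFBertModel.from_pretrained("bert-base-chinese")

max_len = 128
input_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")

hidden = bert(input_ids, attention_mask=attention_mask).last_hidden_state
gru_out = tf.keras.layers.GRU(256, return_sequences=True)(hidden)
logits = tf.keras.layers.Dense(tokenizer.vocab_size, activation="softmax")(gru_out)

model = tf.keras.Model([input_ids, attention_mask], logits)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss="sparse_categorical_crossentropy")
```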


Energies ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 147 ◽  
Author(s):  
Muhammad Aslam ◽  
Jae-Myeong Lee ◽  
Hyung-Seung Kim ◽  
Seung-Jae Lee ◽  
Sugwon Hong

Microgrids are becoming an essential part of the power grid in terms of reliability, economy, and the environment, and renewable sources are the main sources of energy in microgrids. Long-term solar generation forecasting is an important issue in microgrid planning and design from an engineering point of view, and it mainly depends on solar radiation forecasting. Long-term solar radiation forecasting can also be used to estimate the degradation-rate-influenced energy potential of photovoltaic (PV) panels. In this paper, a comparative study of different deep learning approaches is carried out for forecasting hourly and daily solar radiation one year ahead. In the proposed method, state-of-the-art deep learning and machine learning architectures, gated recurrent units (GRU), long short-term memory (LSTM), the recurrent neural network (RNN), the feed-forward neural network (FFNN), and support vector regression (SVR), are compared. The proposed method uses historical solar radiation data and clear-sky global horizontal irradiance (GHI). Even though all the models performed well, GRU performed relatively better than the other models. The proposed models are also compared with a traditional state-of-the-art method for long-term solar radiation forecasting, random forest regression (RFR), and they outperform the traditional method, demonstrating their efficiency.
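As an illustration of the best-performing architecture, a minimal GRU forecaster taking historical radiation and clear-sky GHI as inputs might look like the following sketch; the window length and data are assumed placeholders, not the paper's setup.

```python
# Minimal sketch (assumed shapes, not the paper's exact setup): a GRU that maps
# a window of [historical solar radiation, clear-sky GHI] pairs to the next
# hourly radiation value.
import numpy as np
import tensorflow as tf

window = 24                                    # assumed look-back window (hours)
X = np.random.rand(8760, window, 2)            # placeholder [radiation, clear-sky GHI]
y = np.random.rand(8760, 1)                    # placeholder next-hour radiation

model = tf.keras.Sequential([
    tf.keras.layers.GRU(64, input_shape=(window, 2)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```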


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Timothy Tiggeloven ◽  
Anaïs Couasnon ◽  
Chiem van Straaten ◽  
Sanne Muis ◽  
Philip J. Ward

Abstract: To improve coastal adaptation and management, it is critical to better understand and predict the characteristics of sea levels. Here, we explore the capabilities of artificial intelligence, using four deep learning methods to predict the surge component of sea-level variability based on local atmospheric conditions. We use an artificial neural network (ANN), a convolutional neural network (CNN), a long short-term memory (LSTM) layer, and a combination of the latter two (ConvLSTM) to construct ensembles of neural network (NN) models at 736 tide stations globally. The NN models show similar patterns of performance, with much higher skill in the mid-latitudes. Using our global model settings, the LSTM generally outperforms the other NN models. Furthermore, for 15 stations we assess the influence of adding complexity in the form of more predictor variables. This generally improves model performance but leads to substantial increases in computation time, and the improvement remains insufficient to fully capture observed dynamics in some regions. For example, in the tropics, modelling surges alone is insufficient to capture intra-annual sea-level variability. While we focus on minimising the mean absolute error for the full time series, the NN models presented here could be adapted for use in forecasting extreme sea levels or in emergency response.
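A minimal ConvLSTM sketch in this spirit is shown below; the grid size, time window, and predictor channels are assumptions for illustration, not the study's configuration.

```python
# Minimal sketch (assumed grid and window sizes, not the study's exact setup):
# a ConvLSTM that maps a time series of local atmospheric fields (e.g. wind and
# pressure on a small grid around a tide station) to the surge level.
import numpy as np
import tensorflow as tf

timesteps, grid, channels = 24, 5, 3           # assumed window, grid size, predictors
X = np.random.rand(2000, timesteps, grid, grid, channels)
y = np.random.rand(2000, 1)                    # placeholder surge at the station

model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                               input_shape=(timesteps, grid, grid, channels)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")    # the study minimises mean absolute error
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```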


2019 ◽  
Author(s):  
Leihong Wu ◽  
Xiangwen Liu ◽  
Joshua Xu

Abstract Motivation: Researchers today are generating unprecedented amounts of biological data. One trend in current biological research is integrated analysis with multi-platform data. Effective integration of multi-platform data into the solution of a single- or multi-task classification problem, however, is critical and challenging. In this study, we propose HetEnc, a novel deep learning-based approach, for information domain separation. Results: HetEnc includes both an unsupervised feature representation module and a supervised neural network module to handle multi-platform gene expression datasets. It first constructs three different encoding networks to represent the original gene expression data using high-level abstracted features. A six-layer fully-connected feed-forward neural network is then trained on these abstracted features for each targeted endpoint. We applied HetEnc to the SEQC neuroblastoma dataset to demonstrate that it outperforms other machine learning approaches. Although we used multi-platform data in feature abstraction and model training, HetEnc does not need multi-platform data for prediction, enabling broader application of the trained model by reducing the cost of gene expression profiling for new samples to a single platform. Thus, HetEnc provides a new solution to integrated gene expression analysis, accelerating modern biological research. Availability and Implementation: The source code for HetEnc is available at: https://github.com/seldas/HetEnc_Code.
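A rough sketch of the two-module idea (unsupervised encoding followed by a supervised feed-forward classifier) is given below with assumed layer sizes and placeholder data; the authors' actual implementation is at the GitHub link above.

```python
# Minimal sketch (assumed sizes, not the released HetEnc code): an autoencoder
# abstracts a platform's expression profile, and the learned encoding feeds a
# fully-connected classifier trained per endpoint.
import numpy as np
import tensorflow as tf

n_genes = 2000                                 # assumed number of genes per platform
X = np.random.rand(400, n_genes)               # placeholder expression matrix
y = np.random.randint(0, 2, size=(400,))       # placeholder binary endpoint

# Unsupervised feature-representation module (HetEnc builds one per platform).
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(n_genes,)),
    tf.keras.layers.Dense(64, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_genes),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

# Supervised module: a fully-connected feed-forward classifier on the abstracted
# features (the paper uses six layers; fewer are shown here for brevity).
clf = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(X, y, epochs=10, batch_size=32, verbose=0)
```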


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text that determines the viewpoint of users with respect to topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinion about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification; this manual feature extraction process is complicated because it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representational capability than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory neural network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy for most of the datasets, and its efficacy has been validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating effective hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
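A minimal sketch of a CNN-BiLSTM sentiment classifier of this general shape is shown below; the vocabulary size, sequence length, layer widths, and data are assumed placeholders rather than the paper's configuration.

```python
# Minimal sketch (assumed sizes, not the paper's exact hybrid): a 1-D
# convolution over word embeddings followed by a bidirectional LSTM and a
# sigmoid output for binary sentiment.
import numpy as np
import tensorflow as tf

vocab_size, max_len = 20000, 60                # assumed vocabulary and tweet length
X = np.random.randint(0, vocab_size, size=(10000, max_len))
y = np.random.randint(0, 2, size=(10000,))     # placeholder sentiment labels

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=128, verbose=0)
```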


2021 ◽  
Vol 10 (1) ◽  
pp. 18
Author(s):  
Quentin Cabanes ◽  
Benaoumeur Senouci ◽  
Amar Ramdane-Cherif

Cyber-Physical Systems (CPSs) are a mature research topic that brings together Artificial Intelligence (AI) and Embedded Systems (ES). They interact with the physical world via sensors/actuators to solve problems in several applications (robotics, transportation, health, etc.). These CPSs deal with data analysis, which needs powerful algorithms combined with robust hardware architectures. On one hand, Deep Learning (DL) is proposed as the main solution algorithm. On the other hand, the standard design and prototyping methodologies for ES are not adapted to modern DL-based CPSs. In this paper, we investigate AI design for CPSs around embedded DL. The main contribution of this work is threefold: (1) we define an embedded DL methodology based on a Multi-CPU/FPGA platform; (2) we propose a new hardware design architecture for a Neural Network Processor (NNP) for DL algorithms, whose computation time for a feed-forward sequence is estimated at 23 ns per parameter; (3) we validate the proposed methodology and the DL-based NNP using a smart LIDAR application use-case. The input of our NNP is a voxel grid computed in hardware from a 3D point cloud. Finally, the results show that our NNP is able to process a Dense Neural Network (DNN) architecture without bias terms.
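As a software analogue of this pipeline, the following sketch voxelises a point cloud and runs a bias-free dense forward pass; the grid and layer sizes are assumptions for illustration, not the NNP's actual parameters.

```python
# Minimal sketch (software analogue, assumed sizes): voxelise a 3-D point cloud
# into an occupancy grid and run it through a small dense network with no bias
# terms, mirroring the NNP's bias-free DNN processing.
import numpy as np

def voxelize(points, grid=16):
    """Map points in the unit cube to a binary occupancy grid of shape (grid, grid, grid)."""
    occ = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.clip((points * grid).astype(int), 0, grid - 1)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return occ

def dense_no_bias(x, weights):
    """Feed-forward pass through dense layers without bias; ReLU between layers."""
    for i, W in enumerate(weights):
        x = x @ W
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)             # ReLU on hidden layers only
    return x

cloud = np.random.rand(5000, 3)                # placeholder LIDAR point cloud
x = voxelize(cloud).reshape(-1)                # flatten 16x16x16 grid -> 4096 features
weights = [np.random.randn(4096, 128) * 0.01,  # placeholder bias-free layer weights
           np.random.randn(128, 10) * 0.1]
scores = dense_no_bias(x, weights)             # class scores for the use-case
```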

