Improved D2D Millimeter Wave Communications for 5G Networks Using Deep Learning

2020 ◽  
Author(s):  
Ahmed Abdelreheem ◽  
Ahmed S. A. Mubarak ◽  
Osama A. Omer ◽  
Hamada Esmaiel ◽  
Usama S. Mohamed

Mode selection is normally used in conjunction with Device-to-Device (D2D) millimeter wave (mmWave) communications in 5G networks to overcome the low coverage area, poor reliability, and vulnerability to path blocking of mmWave transmissions. Thus, achieving highly efficient D2D mmWave communication through mode selection that picks the optimal mode with low complexity becomes a major challenge on the way to ubiquitous D2D mmWave communications. In this paper, a low-complexity, highly efficient mode selection scheme for D2D mmWave communications based on deep learning is introduced. Deep learning is used to estimate the optimal mode in the case of blocked mmWave transmission or low mmWave coverage. The proposed deep learning model is trained on most use cases in an offline phase so that it can predict the optimal mode for high-reliability data relaying in the online phase. In the mode selection process, the potential D2D transmitter selects, based on several criteria, whether to transmit the data over a dedicated D2D link or through the cellular uplink using the base station (BS) as a relay. The proposed deep learning model is developed to overcome the challenge of selecting the optimal mode with low complexity and high efficiency. The simulation analysis shows that the proposed mode selection algorithms outperform conventional techniques for D2D mmWave communication in spectral efficiency, energy efficiency, and coverage probability.
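The offline-training / online-prediction split described in the abstract can be sketched with a small classifier. Everything below is an illustrative assumption, not the paper's actual model: the features (link distance, blockage flag, mmWave SNR), the synthetic labelling rule, and the simple logistic "network" are all hypothetical stand-ins for the trained deep learning model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline phase: synthesize labelled "use cases" (hypothetical features).
# Features per link: [D2D distance (m) / 100, blockage flag, mmWave SNR (dB) / 30]
# Label: 1 -> direct D2D mmWave link, 0 -> relay via the cellular uplink (BS).
n = 2000
dist = rng.uniform(5, 200, n)
blocked = rng.integers(0, 2, n)
snr = rng.uniform(-5, 30, n)
X = np.column_stack([dist / 100, blocked, snr / 30])
y = ((blocked == 0) & (dist < 100) & (snr > 5)).astype(float)

# A logistic-regression "network" trained by gradient descent stands in
# for the deep model's offline training phase.
w, b = np.zeros(3), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 2.0 * (X.T @ (p - y) / n)
    b -= 2.0 * np.mean(p - y)

def select_mode(distance_m, is_blocked, snr_db):
    """Online phase: predict the transmission mode for a new link."""
    x = np.array([distance_m / 100, float(is_blocked), snr_db / 30])
    p = 1 / (1 + np.exp(-(x @ w + b)))
    return "D2D" if p > 0.5 else "BS-relay"

print(select_mode(20, False, 20))   # short, unblocked, strong link
print(select_mode(150, True, 0))    # long, blocked link
```

The point of the sketch is the structure the abstract describes: expensive training happens once offline, while the online decision is a single cheap forward pass.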


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6555
Author(s):  
Radwa Ahmed Osman ◽  
Sherine Nagy Saleh ◽  
Yasmine N. M. Saleh

The co-existence of fifth-generation (5G) and Internet-of-Things (IoT) has become inevitable in many applications since 5G networks have created steadier connections and operate more reliably, which is extremely important for IoT communication. During transmission, IoT devices (IoTDs) communicate with an IoT gateway (IoTG), whereas in 5G networks, cellular user equipment (CUE) may communicate with any destination (D), whether it is a base station (BS) or other CUE, which is known as device-to-device (D2D) communication. One of the challenges facing 5G and IoT is interference. Interference may exist at BSs, CUE receivers, and IoTGs due to the sharing of the same spectrum. This paper proposes an interference-avoidance distributed deep learning model for IoT and device-to-any-destination communication that learns from data generated by the Lagrange optimization technique to predict the optimum IoTD-D, CUE-IoTG, BS-IoTD, and IoTG-CUE distances for uplink and downlink data communication, thus achieving higher overall system throughput and energy efficiency. The proposed model was compared to state-of-the-art regression benchmarks and showed a substantial improvement in terms of mean absolute error and root mean squared error. Both the analytical and deep learning models reached the optimal throughput and energy efficiency while suppressing interference to any destination and IoTG.
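The throughput and energy-efficiency objectives the abstract optimizes over link distance can be illustrated with a minimal interference-limited link-budget sketch. The path-loss exponent, power levels, and bandwidth below are illustrative numbers, not values from the paper:

```python
import math

def sinr(p_tx_w, d_m, alpha, interference_w, noise_w):
    """Received SINR under a simple power-law path-loss model d^-alpha."""
    return p_tx_w * d_m ** (-alpha) / (interference_w + noise_w)

def throughput_bps(bw_hz, sinr_lin):
    """Shannon capacity of the link."""
    return bw_hz * math.log2(1 + sinr_lin)

def energy_efficiency(bw_hz, sinr_lin, p_tx_w, p_circuit_w):
    """Bits delivered per joule of transmit + circuit power."""
    return throughput_bps(bw_hz, sinr_lin) / (p_tx_w + p_circuit_w)

# Illustrative numbers: 20 MHz uplink, 200 mW transmit power.
bw, p_tx, p_circ, alpha, noise = 20e6, 0.2, 0.1, 3.5, 1e-13
for d, interf in [(50, 1e-12), (50, 1e-10), (200, 1e-12)]:
    s = sinr(p_tx, d, alpha, interf, noise)
    print(f"d={d:4d} m, I={interf:.0e} W -> "
          f"T={throughput_bps(bw, s)/1e6:7.1f} Mbps, "
          f"EE={energy_efficiency(bw, s, p_tx, p_circ)/1e6:7.1f} Mbit/J")
```

Both growing interference and growing distance shrink the SINR, which is why the paper's model predicts distance thresholds that keep interference at the BSs, CUE receivers, and IoTGs in check.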


2021 ◽  
Author(s):  
Mohamed Saeid Shalaby ◽  
Hussein Mohamed Hussein ◽  
Mona Mohamed Sabry Shokair ◽  
Ahmed Mohamed Benaya

5G networks and beyond can provide high data rates for the served users. Small cells, massive multiple-input multiple-output (mMIMO), and operation in millimeter wave bands are emerging tools for empowering 5G and beyond networks. Cellular mMIMO networks can provide high data rates for users; however, their performance is not satisfactory for cell-edge users and shadowed users. Fortunately, the cell-free mMIMO network can provide satisfactory performance for all users, even those in shadowed areas or at cell edges. The access points (APs) distributed throughout the coverage area allow users to benefit from the best serving AP. Furthermore, users can obtain service anywhere, since at least one AP is always available. Cell-free mMIMO networks can provide high throughput when operated in the millimeter wave bands thanks to the large available bandwidth, and operation in these bands gives 5G networks and beyond their high data rates; therefore, this paper pays particular attention to the millimeter wave bands. In this paper, the performance of the cell-free mMIMO network operating in the millimeter wave bands is mathematically evaluated and simulated in terms of spectral efficiency (SE), bit error rate (BER), and energy efficiency (EE). It is observed that centralized cooperation among the APs (level 4) can provide high SE and EE even when maximal ratio combining (MRC) is applied. Moreover, all four cell-free cooperation levels perform better than cellular mMIMO when millimeter wave non-line-of-sight (NLOS) models are applied.
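The SE and EE gains from combining many distributed APs can be illustrated with a deliberately simplified single-user sketch. Assuming ideal maximal ratio combining, the per-AP SNRs add coherently, so the effective SNR scales with the number of serving APs; the per-AP SNR and bandwidth below are illustrative, not the paper's simulation parameters:

```python
import math

def se_mrc(n_ap, snr_per_ap):
    """Spectral efficiency (bit/s/Hz) of one user whose signal is
    maximal-ratio combined over n_ap access points: under ideal MRC the
    per-AP SNRs add, so the effective SNR is n_ap * snr_per_ap."""
    return math.log2(1 + n_ap * snr_per_ap)

def ee(se, bw_hz, total_power_w):
    """Energy efficiency in bit/J for a given bandwidth and power budget."""
    return se * bw_hz / total_power_w

# Illustrative: per-AP SNR of 0 dB (linear 1.0), 100 MHz of mmWave bandwidth.
for m in (1, 4, 16, 64):
    print(f"{m:3d} APs -> SE = {se_mrc(m, 1.0):5.2f} bit/s/Hz, "
          f"EE = {ee(se_mrc(m, 1.0), 100e6, 10.0)/1e6:6.1f} Mbit/J")
```

The logarithm makes the SE gain sub-linear in the AP count, which is consistent with the paper's observation that even simple MRC delivers high SE and EE once cooperation is centralized.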


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text that determines users' viewpoints on sentimental topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinion on any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. Manual feature extraction is a complicated task since it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representation capability than the traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bi-directional long short-term memory neural network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, the efficacy of the proposed method has been validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating veracious hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
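The CNN + bi-directional recurrent pipeline named in the Method section can be sketched in plain numpy. This is an untrained architectural illustration under stated assumptions: the dimensions are arbitrary, simple tanh recurrent cells stand in for the LSTM cells, and the random weights mean the output is just a well-formed probability, not a meaningful prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: vocabulary 100, embedding 8, four conv filters
# of width 3, recurrent state of size 5.
V, E, F, W, H = 100, 8, 4, 3, 5
emb   = rng.normal(0, 0.1, (V, E))       # embedding table
conv  = rng.normal(0, 0.1, (F, W, E))    # 1D convolution filters
w_rec = rng.normal(0, 0.1, (H, H + F))   # shared recurrent weights
w_out = rng.normal(0, 0.1, 2 * H)        # classifier over [fwd, bwd] states

def classify(token_ids):
    x = emb[token_ids]                   # (T, E) token embeddings
    T = len(token_ids)
    # CNN stage: ReLU-activated local n-gram features over the embeddings.
    feats = np.array([[np.maximum(0.0, np.sum(conv[f] * x[t:t + W]))
                       for f in range(F)] for t in range(T - W + 1)])
    # Bi-directional recurrent stage (tanh cells as an LSTM stand-in):
    # run the feature sequence forwards and backwards, keep both final states.
    def run(seq):
        h = np.zeros(H)
        for v in seq:
            h = np.tanh(w_rec @ np.concatenate([h, v]))
        return h
    state = np.concatenate([run(feats), run(feats[::-1])])
    return 1 / (1 + np.exp(-w_out @ state))   # sentiment probability

p = classify(rng.integers(0, V, 12))
print(p)
```

The design point the hybrid exploits: convolution captures local n-gram sentiment cues cheaply, while the bi-directional recurrence integrates context from both ends of the tweet before classification.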


2019 ◽  
Vol 9 (22) ◽  
pp. 4871 ◽  
Author(s):  
Quan Liu ◽  
Chen Feng ◽  
Zida Song ◽  
Joseph Louis ◽  
Jian Zhou

Earthmoving is an integral civil engineering operation of significance, and tracking its productivity requires statistics on the loads moved by dump trucks. Since current truck-load statistics methods are laborious, costly, and limited in application, this paper presents the framework of a novel, automated, non-contact field earthmoving quantity statistics (FEQS) approach for projects with large earthmoving demands that use uniform, uncovered trucks. The proposed FEQS framework utilizes field surveillance systems and adopts vision-based deep learning for full/empty-load truck classification as its core task. Since convolutional neural networks (CNNs) and their transfer learning (TL) variants are popular vision-based deep learning models and numerous in type, a comparison study was conducted to test the feasibility of the framework's core task and to evaluate the performance of different deep learning models in implementation. The comparison study involved 12 CNN or CNN-TL models in full/empty-load truck classification, and the results revealed that while several provided satisfactory performance, the fine-tuned VGG16 performed best. This proved the feasibility of the proposed FEQS framework's core task. Further discussion suggests that CNN-TL models are more feasible than CNN prototypes, and that models adopting different TL methods have advantages in either accuracy or speed for different tasks.
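The transfer-learning idea behind the CNN-TL models (a pretrained backbone kept frozen while a small task-specific head is trained) can be sketched without any deep learning framework. Everything here is a hypothetical stand-in: a frozen random projection plays the role of the pretrained VGG16 backbone, and synthetic "images" where full loads brighten the cargo region replace real surveillance frames.

```python
import numpy as np

rng = np.random.default_rng(2)

# Frozen "pretrained backbone" stand-in (think VGG16 minus its top layers):
# a fixed random projection from raw pixels to a feature vector.
D_PIX, D_FEAT = 64, 16
backbone = rng.normal(0, 1 / np.sqrt(D_PIX), (D_PIX, D_FEAT))  # never updated

def features(img):
    return np.maximum(0.0, img @ backbone)   # frozen forward pass with ReLU

# Synthetic "truck images": full loads brighten the cargo-bed pixels.
def make_batch(n, full):
    imgs = rng.normal(0.3, 0.1, (n, D_PIX))
    if full:
        imgs[:, :16] += 0.5                  # cargo region
    return imgs

X = np.vstack([features(make_batch(200, True)), features(make_batch(200, False))])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Fine-tuning phase: only the small logistic head is trained.
w, b = np.zeros(D_FEAT), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def is_full(img):
    p = 1 / (1 + np.exp(-(features(img) @ w + b)))
    return p > 0.5

acc = np.mean([is_full(i) for i in make_batch(100, True)] +
              [not is_full(i) for i in make_batch(100, False)])
print(acc)
```

Training only the head is what makes CNN-TL models cheaper than training a CNN prototype from scratch, which matches the paper's model-choice suggestion.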

