Machine-learning-enabled channel modeling

Author(s):  
Chen Huang ◽  
Ruisi He ◽  
Andreas F. Molisch ◽  
Zhangdui Zhong ◽  
Bo Ai

2019 ◽  
Vol 106 (1) ◽  
pp. 41-70 ◽  
Author(s):  
Saud Mobark Aldossari ◽  
Kwang-Cheng Chen



2021 ◽  
Vol 2021 ◽  
pp. 1-23
Author(s):  
Elmustafa Sayed Ali ◽  
Mohammad Kamrul Hasan ◽  
Rosilah Hassan ◽  
Rashid A. Saeed ◽  
Mona Bakri Hassan ◽  
...  

Recently, interest in Internet of Vehicles (IoV) technologies has grown significantly due to substantial developments in the smart automobile industry. IoV technology enables vehicles to communicate with public networks and interact with the surrounding environment. It also allows vehicles to exchange and collect information about other vehicles and roads. IoV is introduced to enhance road users’ experience by reducing road congestion, improving traffic management, and ensuring road safety. The promised applications of smart vehicles and IoV systems face many challenges, such as collecting big data in IoV and distributing it to the relevant vehicles and humans. Another challenge is achieving fast and efficient communication between the many different vehicles and smart devices, known as Vehicle-to-Everything (V2X) communication. One of the vital questions researchers need to address is how to effectively handle the privacy of large groups of data and vehicles in IoV systems. Artificial Intelligence (AI) technology offers many smart solutions that may help IoV networks address these questions and issues. Machine learning (ML) is one of the most effective AI tools and has been used extensively to resolve the problems mentioned above. For example, ML can be used to avoid road accidents by analyzing driving behavior and the environment from sensed data of the surroundings. ML mechanisms can also capture time-varying characteristics, which is critical for channel modeling in vehicular network scenarios. This paper aims to provide the theoretical foundations of machine learning and the leading models and algorithms for resolving the challenges of IoV applications. It conducts a critical review, with analytical modeling, of offloading mobile edge-computing decisions based on machine learning and Deep Reinforcement Learning (DRL) approaches for the Internet of Vehicles (IoV).
The paper assumes a secure IoV edge-computing offloading model with various data processing and traffic flows. The proposed analytical model considers the Markov decision process (MDP) and ML in the offloading decision process for the different task flows of the IoV network control cycle. We focus on buffer- and energy-aware ML-enabled Quality of Experience (QoE) optimization, analyzing, comparing, and discussing many recent related studies and methods. IoV edge-computing and fog-based identity authentication and security mechanisms are presented as well. Finally, future directions and potential solutions for secure ML-based IoV and V2X are highlighted.
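The MDP-based offloading decision described in the abstract can be illustrated with a toy model solved by value iteration. Everything below — the buffer-occupancy states, the two actions, the costs, and the transition probabilities — is a hypothetical sketch for illustration, not values or structure taken from the paper:

```python
# Toy MDP for IoV task offloading: states are buffer occupancy levels,
# actions are "local" (process on the vehicle) or "offload" (send to the edge).
# All numbers below are illustrative assumptions, not values from the paper.

STATES = [0, 1, 2]               # buffer occupancy: low / medium / high
ACTIONS = ["local", "offload"]
GAMMA = 0.9                      # discount factor

# cost[state][action]: local processing is cheap with an empty buffer but
# expensive when full; offloading pays a transmission cost but drains the buffer.
COST = {
    0: {"local": 1.0, "offload": 2.0},
    1: {"local": 2.0, "offload": 2.0},
    2: {"local": 5.0, "offload": 2.5},
}

# P[state][action] -> {next_state: probability} (assumed buffer dynamics)
P = {
    0: {"local": {0: 0.7, 1: 0.3}, "offload": {0: 0.9, 1: 0.1}},
    1: {"local": {1: 0.5, 2: 0.5}, "offload": {0: 0.6, 1: 0.4}},
    2: {"local": {2: 0.9, 1: 0.1}, "offload": {1: 0.7, 2: 0.3}},
}

def value_iteration(tol=1e-6):
    """Return the optimal cost-to-go V and the greedy offloading policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {
            s: min(
                COST[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a].items())
                for a in ACTIONS
            )
            for s in STATES
        }
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            V = V_new
            break
        V = V_new
    policy = {
        s: min(
            ACTIONS,
            key=lambda a: COST[s][a]
            + GAMMA * sum(p * V[s2] for s2, p in P[s][a].items()),
        )
        for s in STATES
    }
    return V, policy

V, policy = value_iteration()
print(policy)  # with these assumed costs, a full buffer favors offloading
```

With these assumed costs the optimal policy processes locally when the buffer is low and offloads to the edge when the buffer is full, which is the qualitative behavior a buffer- and energy-aware offloading controller aims for.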



Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3114
Author(s):  
Abdallah Mobark Aldosary ◽  
Saud Alhajaj Aldossari ◽  
Kwang-Cheng Chen ◽  
Ehab Mahmoud Mohamed ◽  
Ahmed Al-Saman

The exploitation of higher millimeter-wave (mmWave) bands is promising for wireless communication systems. The goal of machine learning (ML) and its deep learning subcategory beyond 5G (B5G) is to learn from data and make predictions or decisions, rather than relying on classical procedures, to enhance the wireless design. The new wireless generation should be proactive and predictive, avoiding the drawbacks of the existing wireless generations, to meet the pillars of the 5G target services. One aspect of Ultra-Reliable Low-Latency Communications (URLLC) is moving data processing tasks to the cellular base stations. With the rapid growth of wireless communication devices, base stations are required to execute and make decisions that ensure communication reliability. In this paper, an efficient new ML methodology is applied to assist base stations in predicting frequency bands and path loss based on a data-driven approach. The ML algorithms used and compared are Multilayer Perceptrons (MLP), a branch of neural networks, and Random Forests. Systems that consume different bands, such as telecommunication base stations with uplink and downlink transmissions and other Internet of Things (IoT) devices, need a rapid response between devices to switch bands and maintain the requirements of the New Radio (NR). Thus, ML techniques are needed to learn and assist a base station in switching between different bands based on a data-driven system. To verify the proposed idea, we compare the analysis with other deep learning methods. Furthermore, to validate the proposed models, we apply these techniques to different case studies to ensure the success of the proposed work. To enhance the accuracy of supervised learning, we modify the Random Forests by combining an unsupervised algorithm into the learning process. Ultimately, the superiority of ML for wireless communication is demonstrated with a high accuracy of 90.24%.
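The data-driven path-loss prediction idea can be sketched with a much simpler stand-in for the paper's MLP and Random Forest models: fitting the classic log-distance model PL(d) = PL0 + 10·n·log10(d/d0) by least squares on synthetic measurements. The reference loss, path-loss exponent, shadowing spread, and distances below are all assumptions for illustration, not the paper's dataset:

```python
import numpy as np

# Data-driven path-loss fitting sketch (the paper uses MLP and Random Forests;
# this is a minimal least-squares baseline on synthetic data).
rng = np.random.default_rng(0)
d0 = 1.0                       # reference distance (m), assumed
true_pl0, true_n = 60.0, 3.0   # assumed reference loss (dB) and exponent

d = rng.uniform(10.0, 200.0, size=500)          # link distances (m)
shadowing = rng.normal(0.0, 2.0, size=d.size)   # log-normal shadowing (dB)
pl = true_pl0 + 10.0 * true_n * np.log10(d / d0) + shadowing

# Linear regression in the feature x = 10*log10(d/d0): pl ≈ pl0 + n * x
x = 10.0 * np.log10(d / d0)
A = np.column_stack([np.ones_like(x), x])
(pl0_hat, n_hat), *_ = np.linalg.lstsq(A, pl, rcond=None)

print(f"estimated PL0 = {pl0_hat:.1f} dB, path-loss exponent n = {n_hat:.2f}")
```

The same measurement matrix of distances (or richer features such as frequency band and environment type) could instead be fed to an MLP or Random Forest regressor, which is what the paper actually evaluates; the least-squares fit just shows the data-driven pipeline in its simplest form.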



Author(s):  
Jing-Ling Wang ◽  
Yun-Ruei Li ◽  
Abebe Belay Adege ◽  
Li-Chun Wang ◽  
Shiann-Shiun Jeng ◽  
...  


Author(s):  
Danshi Wang ◽  
Min Zhang

Techniques from artificial intelligence have been widely applied in optical communication and networks, evolving from early machine learning (ML) to the recent deep learning (DL). This paper focuses on state-of-the-art DL algorithms and aims to highlight the contributions of DL to optical communications. Considering the characteristics of different DL algorithms and data types, we review multiple DL-enabled solutions to optical communication. First, a convolutional neural network (CNN) is used for image recognition and a recurrent neural network (RNN) is applied for sequential data analysis. A variety of functions can be achieved by the corresponding DL algorithms through processing the different image data and sequential data collected from optical communication. A data-driven channel modeling method is also proposed to replace the conventional block-based modeling method and improve the end-to-end learning performance. Additionally, a generative adversarial network (GAN) is introduced for data augmentation to expand the training dataset from rare experimental data. Finally, deep reinforcement learning (DRL) is applied to perform self-configuration and adaptive allocation for optical networks.
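The RNN component mentioned above, applied to sequential data from an optical link, can be sketched as a single vanilla RNN cell rolled forward in time. The dimensions, weights, and input below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Minimal vanilla RNN forward pass over a stand-in sequential signal
# (e.g., sampled waveform data from an optical channel). Shapes and
# values are assumptions for illustration only.
rng = np.random.default_rng(1)
input_dim, hidden_dim, seq_len = 4, 8, 16

Wx = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input weights
Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
b = np.zeros(hidden_dim)

def rnn_forward(x_seq):
    """Run the RNN over x_seq of shape (seq_len, input_dim); return all hidden states."""
    h = np.zeros(hidden_dim)
    states = []
    for x_t in x_seq:                     # one step per received sample
        h = np.tanh(Wx @ x_t + Wh @ h + b)  # state carries memory of past samples
        states.append(h)
    return np.stack(states)               # (seq_len, hidden_dim)

x_seq = rng.normal(size=(seq_len, input_dim))   # stand-in sequential data
H = rnn_forward(x_seq)
print(H.shape)  # (16, 8)
```

Because each hidden state depends on all earlier inputs, a trained network of this shape can model memory effects in a sequence — the property that motivates using RNNs for the sequential optical-communication data discussed above, where CNNs instead handle the image-like data.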



2018 ◽  
Vol 60 (6) ◽  
pp. 2049-2052 ◽  
Author(s):  
Heegon Kim ◽  
Chunchun Sui ◽  
Kevin Cai ◽  
Bidyut Sen ◽  
Jun Fan

