Deterministic Video Streaming with Deep Learning Enabled Base Station Intervention for Stable Remote Driving System

Author(s): Kohei Kato, Katsuya Suto, Koya Sato
2020, Vol 35 (03), pp. 317-328
Author(s): Xunsheng Du, Yuchen Jin, Xuqing Wu, Yu Liu, Xianping (Sean) Wu, ...

Author(s): Shun Otsubo, Yasutake Takahashi, Masaki Haruna
This paper proposes an automatic driving system based on a combination of modular neural networks trained on human driving data. Research on automatic driving vehicles has been active in recent years, and machine learning techniques are often used to build systems that imitate human driving operations. Most existing approaches adopt a single, large monolithic learning module, typified by deep learning. However, a monolithic deep learning module is inefficient at learning human driving operations (accelerating, braking, and steering) from the visual information obtained while a human drives a vehicle. We instead propose combining a series of modular neural networks that independently learn visual feature quantities, routes, and driving maneuvers from human driving data, thereby imitating human driving operations and efficiently learning multiple routes. This paper demonstrates the effectiveness of the proposed method through experiments with a small vehicle.
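The modular pipeline described above can be sketched as independently trained modules chained at inference time. The following is a minimal illustrative sketch, not the paper's implementation: the module names, dimensions, and linear stand-ins for the networks are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearModule:
    """Minimal stand-in for one modular neural network: y = W x + b.
    In the paper each module would be a network trained independently
    on human driving data; here a linear map keeps the sketch runnable."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.1
        self.b = np.zeros(out_dim)
    def __call__(self, x):
        return self.W @ x + self.b

# Three hypothetical modules, one per learned sub-task:
feature_module  = LinearModule(64, 16)  # visual frame -> feature vector
route_module    = LinearModule(16, 8)   # features -> route encoding
maneuver_module = LinearModule(8, 3)    # route -> (accelerate, brake, steer)

def drive(frame):
    """Chain the independently learned modules to produce driving commands."""
    features = feature_module(frame)
    route = route_module(features)
    return maneuver_module(route)

controls = drive(rng.standard_normal(64))
print(controls.shape)  # three control outputs: accelerate, brake, steer
```

The design point the abstract makes is that each module can be retrained in isolation (e.g. adding a new route only touches the route module) rather than relearning one monolithic mapping end to end.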


Sensors, 2021, Vol 21 (19), pp. 6555
Author(s): Radwa Ahmed Osman, Sherine Nagy Saleh, Yasmine N. M. Saleh

The co-existence of fifth-generation (5G) and Internet-of-Things (IoT) networks has become inevitable in many applications, since 5G networks provide steadier, more reliable connections, which is extremely important for IoT communication. During transmission, IoT devices (IoTDs) communicate with an IoT gateway (IoTG), whereas in 5G networks, cellular user equipment (CUE) may communicate with any destination (D), whether a base station (BS) or another CUE; the latter is known as device-to-device (D2D) communication. One of the challenges facing 5G and IoT is interference: because the same spectrum is shared, interference may arise at BSs, CUE receivers, and IoTGs. This paper proposes an interference-avoidance distributed deep learning model for IoT and device-to-any-destination communication. The model learns from data generated by a Lagrange optimization technique to predict the optimum IoTD-D, CUE-IoTG, BS-IoTD, and IoTG-CUE distances for uplink and downlink communication, thus achieving higher overall system throughput and energy efficiency. Compared with state-of-the-art regression benchmarks, the proposed model showed substantial improvements in mean absolute error and root-mean-squared error. Both the analytical and deep learning models reached the optimal throughput and energy efficiency while suppressing interference at any destination and at the IoTG.
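The learn-from-the-optimizer idea in this abstract can be sketched in a few lines: an analytical optimizer labels training samples with optimum distances, and a regression model is fitted to predict those distances directly. This is an illustrative sketch only; the closed-form `optimal_distance` rule and the polynomial regressor are stand-ins, not the paper's Lagrange formulation or its deep network.

```python
import numpy as np

rng = np.random.default_rng(1)

def optimal_distance(tx_power, interference):
    """Hypothetical stand-in for the Lagrange optimization step that
    labels each channel condition with an optimum link distance."""
    return np.sqrt(tx_power / (interference + 1e-9))

# Generate labelled samples the way the analytical optimizer would.
X = rng.uniform(0.1, 1.0, size=(500, 2))   # columns: (tx_power, interference)
y = optimal_distance(X[:, 0], X[:, 1])

# Fit a simple quadratic-feature regressor as a stand-in for the deep model.
features = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
w, *_ = np.linalg.lstsq(features, y, rcond=None)

def predict(tx_power, interference):
    """Predict the optimum distance without re-running the optimizer."""
    f = np.array([1.0, tx_power, interference,
                  tx_power**2, interference**2, tx_power * interference])
    return f @ w

# Evaluate with the same error metrics named in the abstract.
pred = features @ w
mae  = np.mean(np.abs(pred - y))
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(round(mae, 3), round(rmse, 3))
```

Once trained, such a model replaces the per-transmission optimization with a single forward pass, which is what makes a distributed (per-device) deployment practical.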

