A Study on the Adaptability of Deep Learning-Based Polar-Coded NOMA in Ultra-Reliable Low-Latency Communications

Author(s):  
N. Iswarya ◽  
R. Venkateswari ◽  
N. Madhusudanan

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Adeeb Salh ◽  
Lukman Audah ◽  
Nor Shahida Mohd Shah ◽  
Abdulraqeb Alhammadi ◽  
Qazwan Abdullah ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 689
Author(s):  
Tom Springer ◽  
Elia Eiroa-Lledo ◽  
Elizabeth Stevens ◽  
Erik Linstead

As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the "edge." The realization of machine learning, and deep learning in particular, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which alleviate some of these constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified that the most time-critical applications, such as the control tasks, maintained low-latency, deterministic behavior even under off-nominal conditions.
The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
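The abstract does not specify which schedulability test the framework applies to its randomly generated task sets; a minimal sketch of classical response-time analysis for fixed-priority periodic tasks under rate-monotonic priorities (an assumption, not necessarily the paper's exact analysis) might look like this:

```python
import math

def response_time_analysis(tasks):
    """Check schedulability of periodic tasks under rate-monotonic
    fixed-priority scheduling with implicit deadlines (D = T).

    tasks: list of (C, T) pairs, where C is worst-case execution time
    and T is the period. Returns True if every task's worst-case
    response time fits within its period.
    """
    # Rate-monotonic priority: shorter period = higher priority.
    tasks = sorted(tasks, key=lambda t: t[1])
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # Interference from all higher-priority tasks within R.
            interference = sum(math.ceil(R / T_j) * C_j
                               for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next > T_i:
                return False      # deadline miss: unschedulable
            if R_next == R:
                break             # fixed point reached
            R = R_next
    return True
```

Applied to a large batch of randomly generated (C, T) sets, a test like this separates schedulable from unschedulable configurations before any task is admitted on the device.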


Author(s):  
Yang Liu ◽  
Yachao Yuan ◽  
Jing Liu

Abstract Automatic defect classification is vital to ensure product quality, especially in steel production. In the real world, the number of labeled samples that can be collected is limited due to high labor costs, and the gathered dataset is usually imbalanced, making accurate steel defect classification very challenging. In this paper, a novel deep learning model for imbalanced multi-label surface defect classification, named ImDeep, is proposed. It can be deployed easily in steel production lines to identify different defect types on the steel's surface. ImDeep incorporates three key techniques, i.e., Imbalanced Sampler, Fussy-FusionNet, and Transfer Learning. These improve the model's multi-label classification performance and reduce its complexity on small datasets while keeping latency low. The performance of different fusion strategies and of ImDeep's three key techniques is verified. Simulation results show that ImDeep outperforms the state-of-the-art on public datasets of varied sizes. Specifically, ImDeep achieves about 97% accuracy in steel surface defect classification on a small imbalanced dataset with low latency, an improvement of about 10% over the state-of-the-art.
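The paper does not detail how its Imbalanced Sampler works; one common realization, sketched here as an assumption, is to draw training samples with probability inversely proportional to their class frequency so that rare defect classes appear as often as common ones:

```python
from collections import Counter
import random

def make_balanced_sampler_weights(labels):
    """Weight each sample inversely to its class frequency, so every
    class is equally likely to be drawn regardless of dataset imbalance.
    (A hypothetical reading of ImDeep's Imbalanced Sampler.)"""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

def sample_batch(labels, batch_size, seed=0):
    """Draw a batch of labels using the inverse-frequency weights."""
    rng = random.Random(seed)
    weights = make_balanced_sampler_weights(labels)
    idx = rng.choices(range(len(labels)), weights=weights, k=batch_size)
    return [labels[i] for i in idx]
```

On a 90/10 split between a common and a rare class, batches drawn this way contain the two classes in roughly equal proportion, which is what lets a classifier learn the rare defect at all.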


IEEE Network ◽  
2020 ◽  
Vol 34 (5) ◽  
pp. 219-225 ◽  
Author(s):  
Changyang She ◽  
Rui Dong ◽  
Zhouyou Gu ◽  
Zhanwei Hou ◽  
Yonghui Li ◽  
...  

Author(s):  
Wei Zhang ◽  
Huiling Shi ◽  
Xinming Lu ◽  
Longquan Zhou

With the development of information technology, more and more people use multimedia conference systems to communicate or work across regions. In this article, an ultra-reliable and low-latency solution for multimedia conference systems, based on deep learning and assisted by cloud computing, called UCCMCS, is designed and implemented. UCCMCS uses a two-tier data distribution structure that combines the advantages of cloud computing. To meet the requirements of ultra-reliability and low latency, a bandwidth optimization model is proposed to improve the transmission efficiency of multimedia data and thereby reduce system delay. To improve the reliability of data distribution, cloud computing nodes are used to retransmit lost data. The experimental results show that UCCMCS improves the reliability and reduces the latency of multimedia data distribution in multimedia conference systems.
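The reliability gain from cloud-assisted retransmission can be illustrated with a toy simulation. This is a structural sketch, not the paper's model: the loss rates are illustrative, and the two-tier scheme is reduced to "one direct attempt, then one retransmission from a cloud node over a more reliable link":

```python
import random

def deliver(packets, direct_loss=0.1, cloud_loss=0.01, seed=1):
    """Simulate two-tier delivery: each packet is sent once on the
    direct path; if lost, a cloud node that holds a full copy
    retransmits it once over a lower-loss link.

    Returns (delivered_count, retransmission_count).
    Loss probabilities are illustrative assumptions.
    """
    rng = random.Random(seed)
    delivered = retrans = 0
    for _ in range(packets):
        if rng.random() >= direct_loss:
            delivered += 1                # direct path succeeded
        else:
            retrans += 1                  # recovered via cloud node
            if rng.random() >= cloud_loss:
                delivered += 1
    return delivered, retrans
```

With a 10% direct loss rate and a 1% cloud-link loss rate, the expected end-to-end delivery ratio rises from 0.90 to 0.999, at the cost of one extra hop of latency for roughly one packet in ten.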


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jianxiong Pan ◽  
Neng Ye ◽  
Aihua Wang ◽  
Xiangming Li

The rapid booming of future smart city applications and the Internet of things (IoT) has raised higher demands on next-generation radio access technologies with respect to connection density, spectral efficiency (SE), transmission accuracy, and detection latency. Recently, faster-than-Nyquist (FTN) signaling and nonorthogonal multiple access (NOMA) have been regarded as promising technologies to achieve higher SE and massive connectivity, respectively. In this paper, we aim to exploit the joint benefits of FTN and NOMA by superimposing multiple FTN-based transmission signals on the same physical resources. Given the complicated intra- and inter-user interference introduced by the proposed transmission scheme, conventional detection methods suffer from high computational complexity. To this end, we develop a novel sliding-window detection method by incorporating state-of-the-art deep learning (DL) technology. Data-driven offline training is first applied to derive a near-optimal receiver for FTN-based NOMA, which is then deployed online to achieve high detection accuracy as well as low latency. Monte Carlo simulation results validate that the proposed detector achieves higher detection accuracy than minimum mean squared error-frequency domain equalization (MMSE-FDE) and can even approach the performance of the maximum-likelihood receiver with greatly reduced computational complexity, making it suitable for IoT applications in smart cities with low-latency and high-reliability requirements.
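The sliding-window structure of such a detector can be sketched independently of the trained network. In this hypothetical skeleton, `detect_fn` stands in for the offline-trained DL receiver; here it is replaced by a trivial hard-decision rule purely to show the windowing mechanics, and the window/stride values are illustrative:

```python
import numpy as np

def sliding_window_detect(rx, window, stride, detect_fn):
    """Slide a length-`window` view over the received sequence `rx`
    with step `stride`, applying `detect_fn` to each window and
    concatenating the per-window decisions.

    `detect_fn` is a placeholder for a trained detector network;
    it maps a window of received samples to an array of decisions.
    """
    out = []
    for start in range(0, len(rx) - window + 1, stride):
        out.append(detect_fn(rx[start:start + window]))
    return np.concatenate(out) if out else np.array([])

# Illustrative stand-in detector: decide 1 if the window mean is
# positive, else 0 (one decision per window).
hard_decision = lambda w: np.array([int(w.mean() > 0)])
```

Processing fixed-size windows keeps per-decision complexity constant regardless of sequence length, which is where the low-latency benefit over full-sequence maximum-likelihood detection comes from.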


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 26411-26417 ◽  
Author(s):  
Zhuofan Liao ◽  
Ruiming Zhang ◽  
Shiming He ◽  
Daojian Zeng ◽  
Jin Wang ◽  
...  
