Quantum-Driven Energy-Efficiency Optimization for Next-Generation Communications Systems

Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4090
Author(s):  
Su Fong Chien ◽  
Heng Siong Lim ◽  
Michail Alexandros Kourtis ◽  
Qiang Ni ◽  
Alessio Zappone ◽  
...  

The advent of deep-learning technology promises major leaps forward in addressing the long-standing problems of wireless resource control and optimization and in improving key network performance metrics, such as energy efficiency, spectral efficiency, and transmission latency. A common expectation for quantum deep-learning algorithms is that they exploit the advantages of quantum hardware to achieve massive optimization speed-ups that classical computer hardware cannot deliver. In this respect, this paper investigates the possibility of resolving the energy efficiency problem in wireless communications by developing a quantum neural network (QNN) deep-learning algorithm that can be tested in a classical computing environment using any popular numerical tool, such as Python. The computed results show that our QNN algorithm is indeed trainable and that it converges to a solution during the training phase. We also show that the proposed QNN algorithm exhibits slightly faster convergence than its classical ANN counterpart, which was considered in our previous work. Finally, we conclude that our solution can accurately resolve the energy efficiency problem and that it can be extended to other communications optimization problems, such as the global optimal power control problem, with promising trainability and generalization ability.
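To make the idea of classically simulating a trainable quantum model concrete, here is a minimal sketch, not the authors' algorithm: a one-qubit variational "quantum neuron" simulated in plain NumPy and trained with the parameter-shift rule. The target value and learning rate are arbitrary assumptions for illustration.

```python
# Minimal sketch: a one-qubit variational circuit simulated classically.
# Training via the parameter-shift rule shows such a model is trainable
# on ordinary hardware. Hyperparameters below are illustrative assumptions.
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expect_z(theta):
    """<Z> expectation after applying RY(theta) to |0>; equals cos(theta)."""
    state = ry(theta) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2

theta, target, lr = 0.1, -0.5, 0.2  # assumed initial value, target, step size
for step in range(200):
    # Parameter-shift rule: d<Z>/dtheta = (<Z>(t + pi/2) - <Z>(t - pi/2)) / 2
    grad_e = 0.5 * (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2))
    grad = 2.0 * (expect_z(theta) - target) * grad_e  # chain rule on squared loss
    theta -= lr * grad

print(f"trained <Z> = {expect_z(theta):.4f}, target = {target}")
```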

2020 ◽  
Vol 63 (3) ◽  
pp. 629-643
Author(s):  
Chengshun Zhao ◽  
Longzhe Quan ◽  
Hailong Li ◽  
Ruiqi Liu ◽  
Jianyu Wang ◽  
...  

Abstract. With the development of precision agriculture, the selection of maize kernels has gained importance in scientific research and practical significance in agricultural production. In this study, deep-learning-based machine vision was used to select maize kernels, addressing the limitations of previous approaches: designs restricted to specific sorting problems, cumbersome hand-crafted feature modeling, a small number of usable features, and limited data. First, experiments determined the maximum size of a model based on convolutional neural networks (CNNs) that could run under the available hardware constraints. Four network models (Faster R-CNN, Model 1.0, Model 2.0, and Model 3.0) were then designed and trained on a data set of maize kernels. Finally, the accuracy of the models was verified by comparison tests, and their detection results were analyzed in terms of precision, recall, false positive rate (FPR), F1 score, precision-recall curve, average precision (AP), mean average precision (mAP), and detection speed. The results show that, on a validation set not used for training, Model 1.0 had the highest average recall rate of the four models, at 98.42%. When only excellent maize kernels were identified, without considering the identification of removed kernels, the mAP of Model 1.0 reached 97.27%. Moreover, Model 1.0 requires fewer computing resources and has lower hardware requirements. Compared with the Faster R-CNN model, the precision, recall, and F1 value of Model 2.0 increased by 3.73%, 3.55%, and 3.79%, respectively, and its false positive rate decreased by 1.31% on average. Comparing Model 1.0, Model 2.0, and Model 3.0 showed that Model 2.0 had the best overall performance. Model size affects the accurate selection of maize kernels, and a moderate-size model performs best. This study lays a foundation for further application of deep learning to real-time sorting of maize kernels and to other agricultural tasks. Keywords: Convolutional neural networks, Deep learning, Maize kernel, Selection, Visualization.
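The evaluation protocol above rests on standard detection metrics. A hedged sketch of that step only, with placeholder counts rather than the paper's data; the detection models themselves are not reproduced:

```python
# Computing precision, recall, FPR, and F1 from detection counts.
# The counts passed below are illustrative placeholders, not the paper's data.
def detection_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also the true positive rate
    fpr = fp / (fp + tn)               # false positive rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "FPR": fpr, "F1": f1}

print(detection_metrics(tp=970, fp=25, fn=15, tn=490))  # assumed counts
```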


2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes shortcomings of traditional online and offline teaching, but it still falls short in real-time feature extraction for teachers and students. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent-classroom video, extract classroom task features in real time, and send them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved strategy based on multiple particle swarms is proposed. The improved algorithm incorporates useful attributes of other algorithms to increase particle diversity, enhance the global search ability of the particles, and achieve effective feature extraction. The research indicates that the proposed method has practical effect and can provide a theoretical reference for subsequent related research.
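As a concrete illustration of the diversity idea, here is a minimal PSO sketch with one common anti-premature-convergence tweak (re-seeding stagnant particles). It is in the spirit of the improved multi-swarm strategy described above, not the authors' exact algorithm; the objective function and hyperparameters are assumptions.

```python
# Minimal PSO with a diversity-preserving tweak: when the swarm collapses,
# the worst particles are re-seeded to restore global search ability.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective standing in for the
    return np.sum(x ** 2, axis=-1)   # feature-extraction fitness function

n, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5   # assumed PSO hyperparameters
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_f)]

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = sphere(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
    # Diversity tweak: if particles have clustered too tightly, re-seed
    # the five worst ones at random positions in the search space.
    if np.std(pos) < 1e-3:
        worst = np.argsort(pbest_f)[-5:]
        pos[worst] = rng.uniform(-5, 5, (5, dim))

print("best fitness:", pbest_f.min())
```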


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1579
Author(s):  
Dongqi Wang ◽  
Qinghua Meng ◽  
Dongming Chen ◽  
Hupo Zhang ◽  
Lisheng Xu

Automatic detection of arrhythmia is of great significance for early prevention and diagnosis of cardiovascular disease. Traditional feature engineering methods based on expert knowledge lack multidimensional, multi-view information abstraction and data representation ability, so traditional pattern-recognition research on arrhythmia detection has not achieved satisfactory results. Recently, with the advance of deep learning technology, automatic feature extraction from ECG data using deep neural networks has been widely discussed. To exploit the complementary strengths of different schemes, in this paper we propose an arrhythmia detection method based on a multi-resolution representation (MRR) of ECG signals. The method uses four state-of-the-art deep neural networks as channel models to learn ECG vector representations. These deep-learning-based representations, together with hand-crafted ECG features, form the MRR, which is the input of the downstream classification strategy. Experimental results on multi-label classification of a large ECG dataset confirm that the F1 score of the proposed method is 0.9238, which is 1.31%, 0.62%, 1.18%, and 0.6% higher than that of each individual channel model. Architecturally, the proposed method is highly scalable and can serve as a template for arrhythmia recognition.
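A sketch of the fusion step only: embeddings from four (here, mocked) deep channel models are concatenated with hand-crafted ECG features to form the MRR fed to a downstream classifier. All dimensions are assumptions for illustration, and the mocked models stand in for the trained networks.

```python
# Building a multi-resolution representation (MRR) by concatenating
# per-channel deep embeddings with hand-crafted features.
import numpy as np

rng = np.random.default_rng(1)
n_beats = 8

def channel_model(x, out_dim):
    """Stand-in for a trained deep network returning a learned embedding."""
    return rng.normal(size=(x.shape[0], out_dim))

ecg = rng.normal(size=(n_beats, 3600))        # raw ECG segments (assumed size)
handcrafted = rng.normal(size=(n_beats, 20))  # e.g., RR intervals, QRS widths

embeddings = [channel_model(ecg, d) for d in (64, 64, 128, 128)]  # 4 channels
mrr = np.concatenate(embeddings + [handcrafted], axis=1)
print("MRR shape:", mrr.shape)  # (8, 404): input to the downstream classifier
```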


2021 ◽  
Author(s):  
Zhiting Chen ◽  
Hongyan Liu ◽  
Chongyang Xu ◽  
Xiuchen Wu ◽  
Boyi Liang ◽  
...  

Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 68
Author(s):  
Jiwei Fan ◽  
Xiaogang Yang ◽  
Ruitao Lu ◽  
Xueli Xie ◽  
Weipeng Li

Unmanned aerial vehicles (UAVs) and related technologies have played an active role in the prevention and control of the novel coronavirus worldwide, especially in epidemic prevention, surveillance, and elimination. However, existing UAVs offer only a single function, limited processing capacity, and poor interaction. To overcome these shortcomings, we designed an intelligent anti-epidemic patrol detection and warning flight system that integrates UAV autonomous navigation, deep learning, intelligent voice, and other technologies. Built on convolutional neural networks and deep learning technology, the system provides a crowd density detection method and a face mask detection method that can locate dense crowds. Intelligent voice alarm technology delivers alerts for abnormal situations, such as crowd-gathering areas and people without masks, and broadcasts epidemic prevention policies, providing a powerful technical means for preventing epidemics and delaying their spread. To verify the superiority and feasibility of the system, high-precision online analysis was carried out for the crowd in the inspection area, and pedestrians' faces were detected on the ground to identify whether they were wearing masks. The experimental results show that the mean absolute error (MAE) of the crowd density detection was less than 8.4 and the mean average precision (mAP) of face mask detection was 61.42%. The system can provide convenient and accurate evaluation information for decision-makers and meets the requirements of real-time, accurate detection.
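For clarity on the headline number, a minimal sketch of the crowd-density evaluation metric cited above, the mean absolute error between predicted and annotated head counts per frame. The counts below are placeholders, not the paper's data.

```python
# MAE between model-estimated and ground-truth crowd counts (assumed values).
import numpy as np

pred_counts = np.array([52.0, 118.3, 9.7, 301.5])  # model estimates (assumed)
true_counts = np.array([50, 125, 11, 296])         # annotated counts (assumed)
mae = np.mean(np.abs(pred_counts - true_counts))
print(f"MAE = {mae:.2f}")  # the paper reports MAE < 8.4 on its test area
```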


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 255
Author(s):  
Lei Wang ◽  
Yigang He ◽  
Lie Li

High voltage direct current (HVDC) transmission systems play an increasingly important role in long-distance power transmission. Accurate and timely fault location on transmission lines is extremely important for the safe operation of power systems. With the development of modern data acquisition and deep learning technology, deep learning methods have become feasible for engineering application in fault location. The traditional single-terminal traveling wave method is commonly used for fault location in HVDC systems, but it faces many challenges when a high impedance fault occurs, including strong dependence on a high sampling frequency and difficulty in determining wave velocity and identifying wave heads. To resolve these problems, this work proposes a deep hybrid model combining a convolutional neural network (CNN) and a long short-term memory (LSTM) network for single-terminal fault location in an HVDC system containing mixed cable and overhead line segments. In addition, a variational mode decomposition-Teager energy operator is used in feature engineering to improve model training. A 2D-CNN serves as a classifier to identify fault segments, and an LSTM regressor integrates the classifier's fault segment information to achieve precise fault location. The experimental results demonstrate that the proposed method locates faults with high accuracy, taking into account the effects of fault types, noise, sampling frequency, and different HVDC topologies.
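One feature-engineering step named above is concrete enough to sketch: the discrete Teager energy operator, psi[n] = x[n]^2 - x[n-1]*x[n+1], which sharpens the transient energy of a traveling-wave signal. The toy damped wave is an assumption; the VMD decomposition and the CNN/LSTM models are not reproduced here.

```python
# Discrete Teager energy operator (TEO) applied to a toy fault transient.
import numpy as np

def teager_energy(x):
    """TEO of a 1-D signal: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

t = np.linspace(0, 1, 1000)
fault_wave = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)  # toy damped wave
teo = teager_energy(fault_wave)
print("TEO peak index:", np.argmax(teo))  # energy concentrates near the onset
```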


2021 ◽  
Vol 13 (2) ◽  
pp. 195
Author(s):  
He Wang ◽  
Jingsong Yang ◽  
Jianhua Zhu ◽  
Lin Ren ◽  
Yahao Liu ◽  
...  

Sea state estimation from wide-swath, frequent-revisit scatterometers, which routinely provide ocean winds, is an attractive challenge. In this study, state-of-the-art deep learning technology is successfully adopted to develop an algorithm for deriving significant wave height from the Advanced Scatterometer (ASCAT) aboard MetOp-A. By collocating three years (2016–2018) of ASCAT measurements with WaveWatch III sea state hindcasts at global scale, a huge number of data points (>8 million) was used to train a multi-hidden-layer deep learning model that maps thirteen sea-state-related ASCAT observables into wave heights. The ASCAT significant wave height estimates were validated against a hindcast dataset independent of the training data, showing good consistency, with a root mean square error of 0.5 m under moderate sea conditions (1.0–5.0 m). Additionally, reasonable agreement was found between ASCAT-derived wave heights and buoy observations from the National Data Buoy Center. Results are further discussed with respect to sea state maturity and radar incidence angle, along with the limitations of the model. Our work demonstrates the capability of scatterometers for monitoring sea state and thus should advance the use of scatterometers, originally designed for winds, in studies of ocean waves.
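A hedged sketch of the mapping described above: a multi-hidden-layer regressor from thirteen scatterometer observables to significant wave height. Synthetic random data stands in for the collocated ASCAT / WaveWatch III pairs, and the layer sizes are assumptions, not the paper's architecture.

```python
# Multi-hidden-layer regression from 13 observables to wave height,
# trained and validated on synthetic stand-in data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 13))            # 13 sea-state-related inputs (mock)
swh = 2.0 + (X @ rng.normal(size=13)) * 0.3  # synthetic "wave heights" in m

model = MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=300)
model.fit(X[:4000], swh[:4000])            # train on the first 4000 samples
pred = model.predict(X[4000:])             # validate on the held-out rest
rmse = np.sqrt(np.mean((pred - swh[4000:]) ** 2))
print(f"hold-out RMSE = {rmse:.2f} m")
```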

