Low Complexity Cyclic Feature Recovery Based on Compressed Sampling

2015 ◽  
Vol 2015 ◽  
pp. 1-7 ◽  
Author(s):  
Zhuo Sun ◽  
Jia Hou ◽  
Siyuan Liu ◽  
Sese Wang ◽  
Xuantong Chen

To extract statistical features of a communication signal, such as its cyclostationary properties, from compressive samples, full-scale signal reconstruction is not actually necessary and can be expensive. However, direct reconstruction of cyclic features may not be practical either, due to its relatively high processing complexity. In this paper, we propose a new cyclic feature recovery approach based on reconstructing the autocorrelation sequence from sub-Nyquist samples, which reduces computation complexity and memory consumption significantly while the recovery performance remains good at the same compression ratio. Theoretical analysis and simulations are conducted to support and verify our statements and conclusions.
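The cyclostationary property the paper targets can be illustrated with a direct (non-compressive) cyclic-autocorrelation estimate; the rectangular-pulse BPSK model, sample counts, and cycle frequencies below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy BPSK baseband signal with rectangular pulses: sps samples per symbol.
rng = np.random.default_rng(0)
sps, n_sym = 8, 2048
bits = rng.integers(0, 2, n_sym) * 2 - 1
x = np.repeat(bits, sps).astype(float)
n = np.arange(x.size)

def cyclic_autocorr(x, alpha, tau):
    """Estimate R_x^alpha(tau) = <x[n] x[n+tau] e^{-j 2 pi alpha n}>."""
    xs = np.roll(x, -tau)  # wrap-around edge effect is negligible here
    return np.mean(x * xs * np.exp(-2j * np.pi * alpha * n))

# BPSK with symbol rate 1/sps exhibits a cyclic feature at alpha = 1/sps
# (for a nonzero lag), while an off-cycle frequency yields roughly zero.
r_cyclic = abs(cyclic_autocorr(x, alpha=1.0 / sps, tau=sps // 2))
r_off    = abs(cyclic_autocorr(x, alpha=0.37, tau=sps // 2))
```

A compressive-sampling pipeline would estimate this same quantity from the reconstructed autocorrelation sequence rather than from Nyquist-rate samples.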

Author(s):  
R. A. Morozov ◽  
P. V. Trifonov

Introduction: Practical implementation of a communication system which employs a family of polar codes requires either storing a number of large specifications or constructing the codes on request. The first approach assumes extensive memory consumption, which is inappropriate for many applications, such as those for mobile devices. The second approach can be numerically unstable and hard to implement in low-end hardware. One solution is to specify a family of codes by a sequence of subchannels sorted by reliability. However, this solution makes it impossible to optimize each code in the family separately. Purpose: Developing a method for compact specification of polar codes and subcodes. Results: A method is proposed for compact specification of polar codes. It can be considered a trade-off between real-time construction and storing full-size specifications in memory. We propose to store compact specifications of polar codes which contain the frozen set differences between the original pre-optimized polar codes and polar codes constructed for a binary erasure channel with some erasure probability. The full-size specification needed for decoding can be restored from a compact one by a low-complexity, hardware-friendly procedure. The proposed method works with either polar codes or polar subcodes and reduces memory consumption by a factor of 15–50. Practical relevance: The method allows families of individually optimized polar codes to be used in devices with limited storage capacity.
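The frozen-set-difference idea can be sketched as follows. The BEC Bhattacharyya recursion is the standard BEC construction, but the toy code size and the fabricated "optimized" frozen set are assumptions for illustration, not the authors' procedure:

```python
# Store only the symmetric difference between an optimized frozen set and
# a baseline frozen set constructed for a binary erasure channel (BEC).

def bec_bhattacharyya(n_log2, p):
    """Bhattacharyya parameters of polarized BEC subchannels (exact for BEC)."""
    z = [p]
    for _ in range(n_log2):
        # W- child: 2z - z^2, W+ child: z^2
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    return z

def frozen_set(z, k):
    """Freeze the n-k least reliable subchannels (largest Bhattacharyya)."""
    n = len(z)
    order = sorted(range(n), key=lambda i: z[i], reverse=True)
    return set(order[: n - k])

n_log2, k = 6, 32                        # toy (64, 32) code
base = frozen_set(bec_bhattacharyya(n_log2, 0.5), k)

# Stand-in for a code individually optimized for some other channel:
# swap one frozen subchannel (purely illustrative).
optimized = (base - {min(base)}) | {max(set(range(64)) - base)}

# Compact specification: store only the difference from the BEC baseline.
diff = base ^ optimized                  # small symmetric difference
restored = base ^ diff                   # low-complexity restoration
```

Only `diff` needs to be stored per code; the baseline can be regenerated (or kept once) on the device.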


2009 ◽  
Vol 7 ◽  
pp. 17-22 ◽  
Author(s):  
C. H. Schmidt ◽  
T. F. Eibert

Abstract. The radiation of large antennas and of those operating at low frequencies can be determined efficiently by near-field measurement techniques and a subsequent near-field far-field transformation. Various approaches and algorithms have been researched, but for electrically large antennas and irregular measurement contours, advanced algorithms with low computation complexity are required. In this paper, an algorithm employing plane waves as equivalent sources and utilising efficient diagonal translation operators is presented. The efficiency is further enhanced by combining simple far-field translations with the more expensive near-field translations. In this way a low-complexity near-field transformation is achieved, which works for arbitrary sample point distributions and incorporates full probe correction without increasing the complexity.


Open Physics ◽  
2018 ◽  
Vol 16 (1) ◽  
pp. 1009-1023 ◽  
Author(s):  
Chengcheng Shao ◽  
Pengshuai Cui ◽  
Peng Xun ◽  
Yuxing Peng ◽  
Xinwen Jiang

Abstract Centrality is widely used to measure which nodes are important in a network. In recent decades, numerous metrics have been proposed, with varying computation complexity. To test the idea of approximating a high-complexity metric by a low-complexity one, researchers have studied the correlation between them. However, these works are based on the Pearson correlation, which is sensitive to the data distribution. Intuitively, a centrality metric is a ranking of nodes (or edges), so it is more reasonable to use rank correlation for the measurement. In this paper, we use degree, a low-complexity metric, as the base to approximate three other metrics: closeness, betweenness, and eigenvector centrality. We first demonstrate that rank correlation performs better than the Pearson correlation in scale-free networks. Then we study the correlation between centrality metrics in real networks, and find that betweenness has the highest coefficient, closeness is at the middle level, and eigenvector centrality fluctuates dramatically. Finally, we evaluate the performance of using the top-degree nodes to approximate the three other metrics in the real networks. We find that the intersection ratio of betweenness is the highest, with closeness and eigenvector centrality following; most often, the largest-degree nodes can approximate the largest-betweenness and largest-closeness nodes, but not the largest-eigenvector nodes.
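The sensitivity of the Pearson correlation to a heavy-tailed distribution can be sketched with synthetic data; the Pareto "degree" vector and the monotone transform below are illustrative assumptions, not the paper's networks:

```python
import numpy as np

# Two centrality-like scores that agree perfectly in rank but differ by a
# heavy-tailed transform, as in scale-free networks where hubs dominate.
rng = np.random.default_rng(1)
degree = np.sort(rng.pareto(2.0, 500) + 1)   # heavy-tailed "degree" scores
other = np.log(degree)                       # monotone transform: same ranking

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    # Rank correlation = Pearson correlation of the rank vectors.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return pearson(ra, rb)

r_pearson = pearson(degree, other)    # dragged down by the few huge hubs
r_spearman = spearman(degree, other)  # 1.0: the two rankings coincide
```

Because the two scores rank every node identically, any ranking-based comparison should report perfect agreement, which Spearman does and Pearson does not.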


2011 ◽  
Vol 271-273 ◽  
pp. 458-463
Author(s):  
Rui Ping Chen ◽  
Zhong Xun Wang ◽  
Xin Qiao Yu

Decoding algorithms of practical value must not only deliver good decoding performance but also keep computation complexity as low as possible. To this end, this paper presents a modified min-sum decoding algorithm (M-MSA). Without increasing decoding complexity, it improves error-correcting performance by adding an appropriate scaling factor to the min-sum algorithm (MSA), and it is very suitable for hardware implementation. Simulation results show that this algorithm has good BER performance, low complexity, and low hardware resource utilization, and that it can be well applied in the future.
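The scaling-factor idea can be sketched at the level of a single check-node update; the factor 0.75 is a commonly used normalization value assumed here for illustration, not necessarily the paper's choice:

```python
import numpy as np

def minsum_check_update(llrs, scale=1.0):
    """Outgoing LLR for each edge of a check node, min-sum approximation.

    Plain min-sum (scale=1.0) overestimates message magnitudes relative to
    the exact sum-product rule; multiplying by a scale < 1 (normalized
    min-sum) compensates at no extra per-iteration cost.
    """
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(llrs.size):
        others = np.delete(llrs, i)          # all incoming messages except edge i
        sign = np.prod(np.sign(others))
        out[i] = scale * sign * np.min(np.abs(others))
    return out

incoming = [1.8, -0.6, 2.5, 0.9]
plain = minsum_check_update(incoming)         # standard MSA messages
scaled = minsum_check_update(incoming, 0.75)  # normalized (M-MSA-style) messages
```

In hardware, the scaling reduces to one extra multiply (or shift-and-add for factors like 0.75) per outgoing message, which is why it adds essentially no complexity.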


2018 ◽  
Vol 6 (2) ◽  
pp. 49 ◽  
Author(s):  
Samah A. Mustafa

This work seeks a new physical layer for a multi-carrier wireless communication system that can be implemented with low complexity by resorting to a suitable fast transform. The work presents and assesses a scheme based on the Discrete Trigonometric Transform, appending symmetric redundancy either to each transformed block or to multiple consecutive blocks. A receiver front-end filter is proposed to enforce full symmetry in the channel impulse response, and a bank of one-tap filters, one per subcarrier, is applied as an equalizer in the transform domain. The behaviour of the transceiver is studied in the context of practical impairments such as fading channels, carrier frequency offset, and narrowband interference. Moreover, the performance is evaluated against state-of-the-art methods by means of computer simulations; the new scheme is found to improve the robustness and reliability of the communication signal and to record a lower peak-to-average power ratio. The study demonstrates that the front-end matched filter effectively performs frequency synchronization to compensate for the carrier frequency offset in the received signal.


2021 ◽  
Vol 263 (1) ◽  
pp. 5902-5909
Author(s):  
Yiya Hao ◽  
Shuai Cheng ◽  
Gong Chen ◽  
Yaobin Chen ◽  
Liang Ruan

Over the decades, noise-suppression (NS) methods for speech enhancement (SE) have been widely utilized, including both conventional signal processing methods and deep neural network (DNN) methods. Although stationary noise can be suppressed successfully using conventional or DNN methods, suppressing non-stationary noise, especially transient noise, remains significantly challenging. Compared to conventional NS methods, DNN NS methods may work more effectively under non-stationary noise by learning the noise's temporal-frequency characteristics. However, most DNN methods are difficult to implement on mobile devices due to their heavy computation complexity. Even for the few low-complexity DNN methods proposed for real-time purposes, robustness and generalization degrade across different types of noise. This paper proposes a single-channel DNN-based NS method for transient noise with low computation complexity. The proposed method enhances the signal-to-noise ratio (SNR) while minimizing speech distortion, resulting in a superior improvement of speech quality over different noise types, including transient noise.


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Kun Qian ◽  
Wen-Qin Wang ◽  
Huaizong Shao

Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computation complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and to minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming, without exhaustive search. The effectiveness of the proposed methods is verified by extensive simulation results. It is shown that the antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computation complexity of the conventional exhaustive search method increases significantly when large-scale antennas are employed. This is particularly useful in antenna selection for large-scale MIMO communication systems.
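The complexity gap between exhaustive search and a low-complexity alternative can be sketched with a toy greedy capacity-based selection; this is an illustration only, not the authors' interactive multiple-parameter method, and the channel model and sizes are assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_tx, n_rx, k, snr = 8, 4, 3, 10.0
H = rng.standard_normal((n_rx, n_tx))        # toy Rayleigh-like channel matrix

def capacity(cols):
    """Ergodic-capacity-style objective log det(I + (snr/k) H_S H_S^T)."""
    Hs = H[:, list(cols)]
    return float(np.linalg.slogdet(
        np.eye(n_rx) + (snr / len(cols)) * Hs @ Hs.T)[1])

# Exhaustive search: C(8, 3) = 56 subsets here, but combinatorial in general.
best_exh = max(combinations(range(n_tx), k), key=capacity)

# Greedy selection: only O(k * n_tx) capacity evaluations.
chosen = []
for _ in range(k):
    remaining = [a for a in range(n_tx) if a not in chosen]
    chosen.append(max(remaining, key=lambda a: capacity(chosen + [a])))
```

With `n_tx` in the hundreds, the binomial count explodes while the greedy pass stays linear in the array size per selected antenna, which is the complexity contrast the abstract describes.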


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4680
Author(s):  
Wenjia Zhang ◽  
Ling Ge ◽  
Yanci Zhang ◽  
Chenyu Liang ◽  
Zuyuan He

Low-complexity nonlinear equalization is critical for reliable high-speed short-reach optical interconnects. In this paper, we compare the complexity, efficiency, and stability of pruned Volterra series-based equalization (VE) and neural network-based equalization (NNE) for 112 Gbps vertical cavity surface emitting laser (VCSEL) enabled optical interconnects. The design space of the nonlinear equalizers and their pruning algorithms is carefully investigated to reveal the fundamental reasons for their powerful nonlinear compensation capability and the factors restricting efficiency and stability. The experimental results show that NNE has more than one order of magnitude bit error rate (BER) advantage over VE at the same computation complexity, and that pruned NNE has around 50% lower computation complexity than VE at the same BER level. Moreover, VE shows serious performance instability due to its intricate structure when communication channel conditions become tough. On the other hand, pruned VE presents more consistent equalization performance over varying bias values than NNE.
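The pruning referred to here can be sketched as magnitude-based weight pruning; the layer size and the 50% sparsity target below are illustrative assumptions, and the paper tunes pruning separately for VE and NNE:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((32, 64))      # one dense layer of a toy equalizer

def prune_by_magnitude(w, sparsity):
    """Zero the fraction `sparsity` of weights with the smallest |w|.

    Each zeroed weight removes one multiply-accumulate per input sample,
    which is how pruning lowers the equalizer's computation complexity.
    """
    threshold = np.quantile(np.abs(w).ravel(), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask, mask

W_pruned, mask = prune_by_magnitude(W, 0.5)
kept = mask.mean()    # fraction of multiply-accumulates that survive
```

In practice the pruned model is usually fine-tuned afterwards to recover BER performance, and the same masking idea applies to Volterra kernel coefficients.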

